Artificial intelligence, healthcare, and legal liability issues

Many health officials see huge potential for artificial intelligence in healthcare, but the growing use of AI raises a host of legal questions.

Samuel Hodge, professor of law at Temple University, has considered these questions. He recently wrote an article on the legal implications of AI in healthcare in the Richmond Journal of Law and Technology.

In a recent interview, Hodge spoke about liability issues for hospitals and doctors, as well as some of the questions healthcare industry leaders should be asking themselves.

“The law always lags behind medicine,” Hodge said. “This is a classic example of that.”

Hodge says he’s a big proponent of the growing use of AI in medicine, calling it as potentially important as X-rays or CT scans. But he said the use of AI raises legal questions that have yet to be answered.

“It’s exciting, but AI has drawbacks and legal implications because the law lags behind the development of the technology,” Hodge said.

“There are no recorded cases on AI in medicine yet, so the area of responsibility is open, and hospital administrators and physicians are really going to have to monitor the field to keep abreast of the latest developments.”

Liability issues

Recent studies suggest that artificial intelligence can help reshape healthcare, particularly by identifying at-risk patients before adverse events occur. Mayo Clinic researchers have found that AI could help spot patients at risk for stroke or cognitive decline. Another Mayo Clinic study focused on using AI to identify complications in pregnant patients.

Hal Wolf, president and CEO of the Healthcare Information and Management Systems Society (HIMSS), said in a recent interview that he sees healthcare systems turning to AI to identify health risks earlier. “AI applications will help with predictive modeling of what to use, where to anticipate diagnoses, how to maximize resources in communities,” Wolf said.

Currently, fewer than 1 in 5 doctors regularly use augmented intelligence, but 2 in 5 plan to start doing so within the next year, according to a survey by the American Medical Association. The AMA describes augmented intelligence as “a conceptualization of artificial intelligence that focuses on the assistive role of AI, emphasizing that its design enhances human intelligence rather than replacing it.”

As doctors and health systems turn more to AI in treatment, Hodge said they will face new questions about accountability. If AI contributes to an incorrect diagnosis of a patient’s condition that results in harm, Hodge asks, who is responsible?

As a lawyer, he could see attorneys suing the doctor, the healthcare system, the software developer, and the AI maker.

“The question the courts are going to have to resolve is who is responsible, and to what extent? These are issues we’ve never had before,” Hodge said.

“It’s going to come with artificial intelligence, and no one knows the answer at this point,” he said. “All of this is going to have to be played out with litigation.”

“There are several issues that hospital administrators should think about,” Hodge said. “First, most doctors don’t buy the computers they use. Hospitals do. Therefore, they will end up vicariously liable for the doctors’ actions because they provided the computer that is used.”

Change in standard of care

Healthcare systems and physicians could also see new definitions of the standard of care in malpractice cases.

Typically, a physician in a suburban health system would be judged by the standard of care in that area. The suburban doctor at a smaller facility wouldn’t necessarily be compared to a surgeon at a large, urban teaching hospital, Hodge said.

As artificial intelligence is used more in treatment and becomes more widely available, the standard of care may change, he said.

“Previously, in a malpractice case, the standard of care was the average physician in the locality where the physician practices,” Hodge said. “With AI technology, the duty of care can be taken to a higher level, and it can become a national standard, because everyone is going to have access to the same equipment. Thus, the standard of care can be raised.”

Also, as AI is used more often, doctors may be held to higher standards in the future.

“The problem is, what may not be malpractice today may be malpractice a year from now,” Hodge said.

Even if a doctor uses artificial intelligence in a diagnosis, Hodge said, “it doesn’t let the doctor off the hook.”

“Doctors will be able to come to conclusions much faster,” Hodge said. “Doctors need to understand that this is a double-edged sword in that they may be held to higher standards of care in the future, because they have access to all of this data that they didn’t have before.

“Bottom line: the doctor is the one who is responsible for the care of the patient, regardless of the use of AI,” he said. “It’s just a tool. It does not replace the doctor.”

Doctors could face informed consent issues with patients if they use AI to develop a diagnosis.

Some patients may resent the use of AI, even though it could lead to a more accurate diagnosis, Hodge said.

“Whenever medical treatment is provided, the physician should advise patients of the relevant information,” Hodge said. “AI in medicine creates additional problems. For example, do you have to tell the patient that you used AI to inform the diagnosis? If the answer is yes, how much information do you have to give the patient about the use of AI? Do you have to tell them the success rate of AI in making diagnoses?”

“One of the things the research suggests is that AI in medicine, if you disclose its use, can encourage more arguments between doctors and patients,” Hodge said.

Liability of software manufacturers

Aside from liability questions for hospitals, health systems, and physicians, Hodge said it’s unclear what exposure developers and software makers would have in AI-related lawsuits.

Software makers could also argue that the software worked well until it was changed by the healthcare system over time.

“There are defenses that a manufacturer or a software developer will use, and one is that the technology is designed to evolve,” Hodge said. “So I’m giving you the basic software, but it’s designed for the doctor or the healthcare professional to supplement it with patient records and diagnostic imaging, so it’s designed to grow. Therefore, the argument is going to be that when the machine was supplied, it was not defective. It became defective because of the materials that were uploaded by the healthcare provider at a later date.”

Under traditional product liability law, software makers may not be held liable at all, Hodge said. While consumers can sue an automaker over a defective car if the brakes fail, it will likely be more difficult to sue a software company over a botched diagnosis.

“Traditionally, courts have said that software is not a product,” Hodge said. “Therefore, you will not be able to sue under a product liability theory. This is a problem you are going to have.”

Despite concerns about the legal implications of the growing use of AI, Hodge sees artificial intelligence as a tool to improve healthcare.

“I’m very excited about the development of AI in medicine,” Hodge said. “I really believe this is the wave of the future. It will allow physicians, wherever they are in the United States, in a small remote location or in a metropolitan city, to have equal access to a database that will help them diagnose patients and provide appropriate medical care. This will be a real boon for the industry.”