
Who Is Responsible When AI Makes Medical Mistakes?


Written by LJBrooks

I am a Registered Nurse with a background in Health Technology, Education, and Managed Care. I love making complex topics understandable, and getting more people involved in Digital Health.

February 18, 2021

If a patient receives the wrong medication dose, there are options for tracing that mistake back to its root cause. Did the nurse who gave the medication make an error? Did the pharmacist dispense the medication incorrectly? Or was it the physician who ordered the medication who made the mistake?

The reason to find out why it happened is to prevent mistakes like that from happening again.

But who is responsible in a future where artificial intelligence (AI) makes decisions about what medication to order, dispense, and administer? Should we look to the person using it, the person who built it, or the artificial intelligence itself? Even though AI can be more accurate than humans, that does not mean it is fail-proof.

Mistakes will happen.

How can mistakes happen with AI?

If you have followed the news about driverless cars, you are probably not surprised that mistakes can happen with AI.

In March 2018, a self-driving Uber test vehicle struck and killed a pedestrian in Arizona. The car’s software was not designed to recognize pedestrians crossing outside of a crosswalk. That seems like a simple situation, and one that was easy to predict could happen.

Healthcare has so many more situations where human behavior is hard to predict.


Did the patient take only half of her blood pressure medication because it makes her dizzy? Is the patient taking an herbal supplement he forgot to mention? Does the patient secretly smoke and not want their partner to find out?

All of these complexities make it even more likely that mistakes will be made. So, who is liable when a mistake happens that harms someone?

In healthcare, the responsible party is still unclear

Healthcare is a heavily regulated and highly litigious industry, and the answer is not yet clear.

Going back to the driverless car case may offer some clues. In September 2020, the woman behind the wheel of that self-driving Uber in Arizona was charged with negligent homicide. Adrian Flux, a British car insurance company, has started adding AI-specific coverage to its policies, including:

  • Loss or damage because of the car’s driverless system
  • Coverage if hackers get into the system
  • Coverage if the driver forgets to install software updates
  • Outages of the navigation system

While the driverless car and insurance industries appear to be holding the driver responsible, healthcare has not yet answered this question. With the pandemic fast-tracking many healthcare AI applications, there are three parties who could be responsible if something goes wrong:

  • The owner of the AI – the entity that purchased it.
  • The manufacturer of the AI – the entity that created and programmed it.
  • The AI itself – holding the machine responsible may seem like science fiction now, but it is something to think about as AI becomes more autonomous.

Scenario 1: Holding AI-makers responsible

Many of us immediately jump to the conclusion that whoever built the AI should be responsible if it harms someone. We assume the builders simply did not think through all of the possible scenarios the AI could encounter.

But it is not that easy. The opportunity to introduce an error or miss a scenario can come at any point in the process of implementing AI, from design to delivery.

Responsibility could be decided by looking at where and how the AI made a mistake. If it did not perform as it was designed to, the AI-maker would be responsible. If it performed as designed but was misused by the provider, the healthcare entity or provider would be responsible.

The da Vinci Surgical System pairs robots with human surgeons. The robots can do things human hands cannot, and the system has allowed surgeries to become less invasive.

However, the company that makes the system, Intuitive Surgical, Inc., has had to recall parts of the device and has settled thousands of lawsuits, including 3,000 lawsuits in 2014 alone amounting to $67M. Injuries include lacerations, scarring, and burns.

In these cases, the injuries were more clearly due to the robotic system than the human surgeon operating it. But not all scenarios using AI are as clear-cut. For example, if AI is trained on data that has more men than women, and then it makes a mistake with a female patient, is the AI-maker at fault? Or the people who provided the biased data to begin with?
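As a rough illustration of that training-data problem, here is a minimal sketch in Python. It is entirely hypothetical: the patients, the biomarker, and the model are invented, and real clinical models are far more complex. The only point is that a model trained mostly on one group can quietly learn a rule that fails for the group it rarely saw.

```python
# Hypothetical sketch: a model trained mostly on one group can quietly
# fail on the underrepresented group. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_patients(n, sex):
    """Fake patients: the biomarker predicts risk in opposite directions
    by sex, so a model that mostly sees one sex learns the wrong rule
    for the other."""
    marker = rng.normal(1.0, 0.5, size=n)
    risk = (marker > 1.0) if sex == 1 else (marker < 1.0)
    X = np.column_stack([marker, np.full(n, sex)])
    return X, risk.astype(int)

# Skewed training set: 900 men (sex=1), only 100 women (sex=0)
X_men, y_men = make_patients(900, sex=1)
X_women, y_women = make_patients(100, sex=0)
model = LogisticRegression(max_iter=1000).fit(
    np.vstack([X_men, X_women]), np.concatenate([y_men, y_women]))

# Balanced, held-out test groups reveal the gap
X_men_test, y_men_test = make_patients(500, sex=1)
X_women_test, y_women_test = make_patients(500, sex=0)
print("accuracy on men:  ", accuracy_score(y_men_test, model.predict(X_men_test)))
print("accuracy on women:", accuracy_score(y_women_test, model.predict(X_women_test)))
```

On this toy data, the model scores far better for men than for women, even though nothing in the code singles women out. The skewed training set does that on its own, which is exactly why blame is hard to assign.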

When AI works with humans, will it learn the human’s flaws and copy them?

Scenario 2: Should the people working with AI be responsible?

Humans could be liable when working with AI, but this is a legal gray zone. AI-human working relationships are still new, so the ways in which humans and AI will work together are unknown. Where does the responsibility of each begin and end?

The clearer situation is when AI is fully autonomous, as in vehicles. In those cases humans are not making the decisions, so responsibility lies with the maker of the AI system. But in healthcare, uses for AI will include tasks like predictions, assessments, and categorization.

AI will support human decision-making, which is where it gets complicated:

Complication 1: Humans may accept AI decisions without critically thinking about them.

A recent and frightening example of this is the use of AI in the criminal justice system. Facing budget cutbacks, many cities use AI algorithms to decide whether someone is likely to commit a crime again. The problem is that these algorithms were trained on biased data and weigh factors like the arrest rate in your neighborhood.

Many judges assumed the algorithms were somehow more objective, and accepted the scores without critically thinking about them. This means that just by living on a certain block, you could receive harsher punishment because the arrest rate in that location is higher than in other parts of the area.

This kind of blind trust in AI negatively impacted minorities at a much higher rate than whites. Minority first-time offenders received harsher sentences than white repeat offenders.

This is exactly the opposite of what the AI was supposed to predict.


In healthcare it is not hard to imagine tired, over-worked clinicians simply accepting what AI says without question.

It may even be hard to question AI in a healthcare system where administrators pressure clinicians to rely on tools over their own judgement. The administrators may fear lawsuits and regulatory penalties, and assume AI offers protection.

Complication 2: Humans may ignore AI if they get too many notifications, causing them to miss something important.

Overly sensitive algorithms that flag every abnormality may, quite frankly, get on a provider’s nerves. Part of the role of healthcare providers is to help patients understand what health issues they should focus on, versus which ones are less of a concern at the moment.

We have already seen how having medical information available online – both good information and misinformation – causes people to react in erratic and at times unhealthy ways.

AI is able to look at more information than humans can consume. Clinicians could feel overwhelmed if presented with too much information, and they may ignore it and miss something critical.
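To make that tradeoff concrete, here is a small hypothetical sketch. The vital-sign numbers and thresholds are invented; the point is simply that a very sensitive alert threshold buries the handful of real events in false alarms, while a very lax one starts missing them.

```python
# Hypothetical sketch of alert fatigue: a very sensitive threshold buries
# real events in false alarms; a lax one misses them. All numbers are synthetic.
import numpy as np

rng = np.random.default_rng(1)

# Simulate one day of hourly heart-rate readings for 200 patients
heart_rates = rng.normal(80, 12, size=(200, 24))
truly_unstable = rng.random(200) < 0.03   # ~3% of patients actually deteriorate
heart_rates[truly_unstable] += 25         # their readings run higher

for threshold in (90, 110, 130):
    alerted = (heart_rates > threshold).any(axis=1)  # patients with at least one alert
    caught = int((alerted & truly_unstable).sum())
    missed = int((~alerted & truly_unstable).sum())
    false_alarms = int((alerted & ~truly_unstable).sum())
    print(f"threshold {threshold}: {int(alerted.sum()):3d} patients flagged, "
          f"{caught} true events caught, {missed} missed, {false_alarms} false alarms")
```

On this toy data, the lowest threshold flags nearly every patient, while the highest stays quiet but lets real deteriorations slip through. Somewhere in between is the judgment call clinicians are being asked to make.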

Complication 3: Humans may not understand what AI recognized, so decisions occur in a black box.

Because AI can process a lot more information than humans, it may reach a conclusion that humans do not understand. Going back to the issue of healthcare providers not questioning AI’s decisions, there is also the risk that they may simply not understand how AI arrived at a decision.
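There are techniques for prying that box open at least partway. As a hedged sketch of what that can look like, the example below fits a simple model on made-up patient data and then asks which inputs actually drive its predictions, using scikit-learn's permutation importance. The feature names and the underlying risk rule are invented for illustration.

```python
# Hypothetical sketch: asking a trained model which inputs drive its
# predictions. Feature names, data, and the "true" risk rule are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(2)
n = 2000

# Synthetic patient features (purely illustrative)
age = rng.normal(55, 15, n)
systolic_bp = rng.normal(130, 20, n)
a1c = rng.normal(6.0, 1.2, n)
X = np.column_stack([age, systolic_bp, a1c])
feature_names = ["age", "systolic_bp", "a1c"]

# Invented ground-truth rule the model has to learn (a weighted mix of the features)
risk = (0.02 * age + 0.01 * systolic_bp + 0.5 * a1c + rng.normal(0, 0.5, n)) > 5.4

model = LogisticRegression(max_iter=1000).fit(X, risk)

# Permutation importance: how much does shuffling each feature hurt accuracy?
result = permutation_importance(model, X, risk, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name:12s} importance: {score:.3f}")
```

A check like this does not fully explain a complex model, but it gives clinicians, and regulators, something concrete to question.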


This issue has led some people to call for better insight into AI algorithms.

One idea is to have a federal agency that evaluates algorithms – an ‘FDA for algorithms’. This agency would prevent dangerous algorithms from entering the market. The hope is this would create standards AI-makers have to follow and lead to less harm for humans.

There are also some cities, like Amsterdam and Helsinki, that have algorithm registers. They help the public to understand what algorithms the cities use, and invite feedback. The goal is to create transparency and help the algorithms to become more human-centered.

Both are forward-leaning ideas aimed at digging into the AI’s logic and correcting paths that could be harmful to humans. However, AI learns over time after it is already in use. At some point, do we look at the AI itself as being responsible for its behavior?

Scenario 3: Should AI itself be responsible?

With advances in AI technology, systems can learn and improve on their own through an approach called machine learning. Many machine learning systems are inspired by how human brains work; some are even called neural networks, after the webs of neurons through which biological creatures think and learn.

The idea of machine learning is that the machine is able to learn how to do new things without being explicitly programmed for each task. AI can take in information from the real world, including observations of human behavior, and draw its own conclusions.
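For a concrete, if toy-sized, picture of what learning without explicit programming means, the sketch below trains a small neural network to recognize handwritten digits purely from labeled examples. Nothing in the code describes what an '8' looks like; the network works that out from the data. It uses scikit-learn's bundled digits dataset, so it runs without any external files.

```python
# Hypothetical sketch of learning from examples rather than rules: a tiny
# neural network learns to read handwritten digits without anyone writing
# code that describes what each digit looks like.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()   # 8x8 grayscale images of handwritten digits, with labels
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

# One small hidden layer of artificial "neurons"
net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
net.fit(X_train, y_train)   # the learning step: connection weights adjust to the examples

print("accuracy on digits the network has never seen:", net.score(X_test, y_test))
```

The same idea scales up to clinical models, which is part of why responsibility gets murky: the behavior comes from the data the system learned from, not from rules its maker wrote down.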

As AI becomes able to function alone, at what point does it become responsible for its own behavior? There is no real answer to this question at the moment. But now is the best time to ask about AI’s responsibility because our ideas about it are still forming.

Where do we go from here?

If we know mistakes will happen as AI rolls out into healthcare, what happens next? While no one knows for sure what will happen, there are three things we are likely to see over the coming years:

1: There will be lawsuits.

That is no surprise since healthcare in the United States is one of the most litigious industries. Lawsuits will determine who is at fault in each scenario where AI was involved in a medical mistake.

Since healthcare systems often have deep pockets and lots of malpractice insurance, they are the most likely target for lawsuits.

For this reason, they are likely to take more cautious approaches to purchasing and using AI, leading to two additional scenarios: government approval being a big deal, and AI working as an assistant to clinicians.


2: Government approval will be a selling point for vendors.

Healthcare systems will likely look for AI vendors who have gone through government approval processes to remove some of the legal risk for AI’s mistakes. In the United States, AI-makers would seek approval from the FDA. But how much legal protection that provides depends on the type of FDA approval obtained.

The FDA has expedited approval processes and a more rigorous premarket approval process. The expedited processes fast-track drugs and medical devices that offer beneficial treatment, which can shave significant time off getting those products to market.

In technology, getting your product out first can be the difference between success and failure.

However, going with an expedited process offers less legal protection. The Supreme Court has sided with vendors in cases where they went through the most rigorous approval process, and against vendors when they went through an expedited process. With healthcare systems practicing legal defense, the more rigorous approval process could outweigh being first to market.

3: AI will assist clinicians, not replace them.

Healthcare systems will most likely use AI as an assistant to qualified clinicians rather than a replacement. The clinician will still make the final decision. But this opens up new liability for clinicians if they disagree with AI and the patient has a bad outcome.

To avoid blame, clinicians could end up running more tests to confirm AI findings, making healthcare more expensive, not cheaper. Or they could end up simply defaulting to whatever the AI finds. The question will then become whether the clinician used the AI wisely.

Key Takeaways:

Even though there are a lot of questions without clear answers, there are some key takeaways to think about as AI moves into the healthcare industry.

  • AI can and will make mistakes. In healthcare, these mistakes could lead to harming a person.
  • The person who made the AI could be held responsible, but it is not guaranteed healthcare law will blame the creator.
  • The person using the AI could be held responsible. This means clinicians using AI could be responsible for medical mistakes.
  • Some day AI itself may be responsible, but that day is not today.
  • Healthcare systems will play legal defense. This means they are likely to take the safest path in using AI by looking for government-approved vendors and still relying on clinicians to make final decisions.

There are still many bridges to cross, and we will cross them as we come to them. But it is important to think about these questions while AI is still fairly new in healthcare.
