How AI Is Infiltrating Healthcare and What We Can Do About It

Sign up here to receive The Checkup, MIT Technology Review’s weekly biotech newsletter, in your inbox every Thursday.

This week I’ve been wondering whether I could trust AI-generated medical advice, given the steady stream of stories suggesting that AI can diagnose a range of diseases, with the implication that it does so more cheaply, faster, and better than medical professionals.

Many of these technologies have well-known flaws: they are trained on limited or skewed data, and they work less well for women and people of color than for white men. On top of that, some of the information these systems produce is simply wrong.

As AI is incorporated into more healthcare settings, researchers are noting a rise in what they call AI paternalism. That’s worrying, because it threatens to take decisions out of patients’ hands.

Paternalism in medicine has long been a problem. But with the advent of AI, there’s a risk that algorithms could override both patients’ own accounts of their symptoms and physicians’ clinical judgment, presenting a new challenge for medicine.

Sandra Wachter, a professor of technology and regulation at the University of Oxford in the UK, says no one knows the true extent of AI’s adoption in healthcare. Even so, some hospitals are already using AI to triage patients, develop treatment plans, and help with diagnosis.

“Sometimes we don’t actually know what kinds of systems are being used,” Wachter says.

As the technology improves and healthcare systems look for ways to cut costs, adoption of these tools is only expected to grow. But their long-term implications are still unclear.

A study published a few years ago compared oncologists’ skin cancer diagnoses with the conclusions of an AI system. Strikingly, many of the doctors accepted the AI’s results even when those results contradicted their own clinical judgment. We seem to be placing a lot of trust in these technologies.

There is a real risk that our reliance on these technologies grows too strong. And that is where paternalism could creep in.

Paternalism in medicine can be captured by the phrase “the doctor knows best,” write McCradden and Kirsch of the Hospital for Sick Children in Ontario in a recent journal paper.

The idea is that a doctor’s training makes them the best person to decide on a patient’s treatment, regardless of that patient’s feelings, beliefs, culture, and anything else that might shape the choices they would make for themselves.

“Paternalism can be recapitulated when AI is positioned as the highest form of evidence, replacing the all-knowing doctor with the all-knowing AI,” McCradden and Kirsch write, warning of a “rising trend toward algorithmic paternalism.”

Handing decision-making over to AI entirely would be deeply problematic, because AI has flaws of its own. These systems are trained on historical data sets, which can lead them to make inaccurate predictions, an unacceptable outcome when vital decisions are at stake.

“You’re not sending an algorithm to med school and teaching it how to learn about the human body and illnesses,” Wachter adds.

AI systems can only make predictions; they don’t possess understanding, McCradden and Kirsch point out. It’s possible, for example, to train an AI to recognize patterns in skin biopsies that are likely to indicate cancer.

But such a system learns from diagnoses made by doctors in the past, and those doctors were more likely to miss cases of cancer in people of color. That makes the training data itself, and any predictions built on it, questionable.
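To make that mechanism concrete, here’s a minimal, hypothetical Python sketch (not from the article; every name and number in it is invented) showing how a classifier trained on historical labels that under-record cancers in one group goes on to miss more true cases in that same group:

```python
# Hypothetical sketch: how biased historical labels propagate into a
# model's predictions. All groups, rates, and features are made up.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# Simulated patients: a "lesion severity" feature and a subgroup flag.
group = rng.integers(0, 2, size=n)           # 0 = group A, 1 = group B
severity = rng.normal(loc=0.0, scale=1.0, size=n)
true_cancer = (severity + rng.normal(scale=0.5, size=n)) > 1.0

# Historical labels: doctors missed 40% of true cases in group B
# but only 10% in group A (an assumed disparity, for illustration).
miss_rate = np.where(group == 1, 0.4, 0.1)
missed = rng.random(n) < miss_rate
recorded_label = true_cancer & ~missed

# Train on the flawed historical labels, not the ground truth.
X = np.column_stack([severity, group])
model = LogisticRegression().fit(X, recorded_label)
pred = model.predict(X)

# Recall measured against the *true* labels, per subgroup: the model
# inherits the under-diagnosis of group B from its training data.
for g, name in [(0, "group A"), (1, "group B")]:
    mask = (group == g) & true_cancer
    print(f"{name}: recall on true cancers = {pred[mask].mean():.2f}")
```

The model never sees the true labels; it simply reproduces the disparity baked into the records it learned from.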

Patterns from the past should never be treated as an absolute guide to a patient’s treatment. Both doctors and patients need to be part of treatment decisions, especially as AI implementation advances, and patient autonomy must be preserved and respected.

These technologies can be designed to avoid such outcomes by training them on more comprehensive data. An algorithm could, for instance, be trained on information about different communities’ beliefs and wishes as well as on diverse biological data, helping to prevent it from producing biased recommendations.

Collecting that data takes investment, which probably won’t appeal to those drawn to AI chiefly for its cost-saving potential. But there’s no way around it: as Wachter stresses, we cannot implement AI effectively without these data sets.

Designers of these AI systems must consider the needs of the people who will be judged by them, and remember that a technology that works well for one group may not work for another, whether for physiological or ethical reasons. One practical check, sketched below, is to evaluate a tool’s performance separately for each group it will be applied to.
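Here’s a short, hypothetical Python sketch of such a per-group audit (the column names `ethnicity`, `y_true`, and `y_pred` are assumptions for illustration, not anything from the article):

```python
# Hypothetical sketch: audit a diagnostic model's accuracy per subgroup
# before deployment. Column names are assumptions for illustration.
import pandas as pd
from sklearn.metrics import precision_score, recall_score

def audit_by_group(df: pd.DataFrame, group_col: str = "ethnicity") -> pd.DataFrame:
    """Report sensitivity (recall) and precision for each subgroup.

    Expects a `y_true` column (actual diagnosis, 0/1) and a `y_pred`
    column (the model's output, 0/1).
    """
    rows = []
    for group, sub in df.groupby(group_col):
        rows.append({
            group_col: group,
            "n": len(sub),
            "sensitivity": recall_score(sub["y_true"], sub["y_pred"]),
            "precision": precision_score(sub["y_true"], sub["y_pred"]),
        })
    return pd.DataFrame(rows)

# Toy usage:
df = pd.DataFrame({
    "ethnicity": ["A", "A", "A", "B", "B", "B"],
    "y_true":    [1,   1,   0,   1,   1,   0],
    "y_pred":    [1,   1,   0,   0,   1,   0],
})
print(audit_by_group(df))   # group B's sensitivity lags group A's
```

A large gap in sensitivity between groups is a signal that the tool isn’t yet fit to deploy for everyone it would be used on.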

“Humans are not the same everywhere,” Wachter notes.

The best approach may be to use these new technologies the way we use well-established ones. X-rays and MRIs, for example, help inform a diagnosis alongside other health information; they don’t make the decision on their own.

People should be free to choose whether they want a scan and to decide what to do with the result. Rather than surrendering control to AI, we should use it as a tool.

My colleague Will Douglas Heaven delved into the fraught ethics of letting AI make life-and-death decisions in an article from the mortality issue of our magazine, prompted by Philip Nitschke, known as “Dr. Death,” who has set out to build an AI that can help people end their own lives.

Two years ago, Will reported that hundreds of AI tools had been developed to help diagnose covid-19 and predict how severe individual cases would become, and that none of them had worked.

He has also shown how AI that seems to perform impressively in the lab can fail to deliver the same results in the real world.

In a recent edition of The Algorithm, Melissa Heikkilä asked whether AI programs should come with warning labels like those on cigarette packets.

Companies are eager to show they use AI ethically. To that end, Karen Hao put together a list of the top 50 words companies can use to signal that they care without incriminating themselves.

Interesting Stories and News from Around the Web

Scientists used an imaging technique to reveal the contents of ancient Egyptian animal coffins that had remained sealed for thousands of years. Inside were fragments of bone, a lizard skull, and scraps of fabric.

According to a genetic analysis, people with African ancestry are less likely than those with European ancestry to carry the colorectal cancer mutations that targeted treatments are designed to act on. The finding underscores the immense value of using data from a broad range of populations in research. (American Association for Cancer Research)

Officials in Sri Lanka are exploring a deal that would send 100,000 monkeys endemic to the country to a private company in China. A government spokesperson says the primates are destined for zoos, but conservationists worry they could end up in research laboratories. (Reuters)

Some people at known risk of developing dementia may be willing to try brain stimulation, a small study suggests. Implanting electrodes in the brain has been proposed as a potential treatment, and many individuals appear open to considering it.

The FDA is expected to decide whether to approve a gene therapy for a debilitating disease that attacks the muscles of young boys, even though clinical trials of the treatment have not been completed. (STAT)

AI is not infallible, and it should serve as a tool rather than a replacement for human decision-making. Using it in healthcare demands careful attention to privacy, bias, and accountability, and its benefits must be balanced against the need for human involvement. Treated as a complement to healthcare professionals rather than a substitute for them, AI could help us build a more effective and compassionate healthcare system.

Source: MIT Technology Review
