
AI: The Final Frontier

Artificial intelligence is already in patient care and biomedical research — but ethical dilemmas remain

[Illustration: a robot thinking, with images of a brain scan superimposed]

When OpenAI launched ChatGPT in November 2022, the internet exploded with curiosity, amazement and a certain degree of skepticism and anxiety. It seemed as though overnight, artificial intelligence (AI) was poised to infiltrate society, stoking fears that it would eliminate jobs, spread misinformation and eventually lead to the development of a power-seeking sentient being on a mission to wipe out humanity.

What is AI?

It may sometimes feel as though AI is a recent development. In reality, the groundwork for AI was laid in the first half of the 20th century, although the biggest early strides weren’t made until the 1950s, when mathematician Alan Turing first posited that machines could be capable of reasoning the way humans do. AI capabilities have been evolving steadily since then, with a major breakthrough coming in 2012, when deep artificial neural networks, systems that loosely simulate how the human brain processes information, began decisively outperforming older approaches at learning from large amounts of data.

Most experts agree that there are four general types of AI or AI-based systems:

  • Reactive AI: This is the oldest form of AI, and it has extremely limited capability. It emulates the human mind’s ability to respond to different kinds of stimuli but has no memory-based functionality, so it cannot learn from its own past interactions. Netflix’s recommendation algorithm is often cited as a reactive machine: it maps a viewer’s watch history to predictions about what other content they might enjoy.
  • Limited memory AI: Limited memory AI is capable of learning from historical data to make decisions. Almost all present-day AI applications, from chatbots and virtual assistants to self-driving vehicles, are driven by limited memory AI.
  • Theory of mind AI: Theory of mind AI is the next level of AI system, which researchers are currently working to develop. A theory of mind AI would better understand the entities it interacts with by discerning their needs, emotions, beliefs and thought processes.
  • Self-aware AI: This is the final stage of AI development, which currently exists only hypothetically. This type of AI would not only understand and evoke emotions in those it interacts with, but also have emotions, needs, beliefs and potentially desires of its own; this is the type of AI that technology doomsayers are most wary of.

Already, AI- and machine learning-enabled technologies are being used in transportation, robotics, the military, surveillance, finance and regulation, agriculture, entertainment, retail, customer service — and health care.

AI and medicine

While the medical field has been slower to adopt AI than some other industries because of necessary safety precautions, the technology is already part of the U.S. health care system. AI is programmed into medical devices like insulin pumps to help patients better manage their conditions. It can analyze huge volumes of data to create increasingly accurate outcome predictions. And physicians across the country are beginning to use AI assistants in diagnostic imaging because of the technology’s ability to identify patterns.

“These methods have already revolutionized some of the ways we identify abnormalities that might be difficult to see,” said Matthias Salathe, M.D., vice chancellor for research at University of Kansas Medical Center. “But they're not to the point where you don't need a human being to make sure that it’s correct.”

Although clinical applications for AI are plentiful, much of the technology is still a work in progress, leading to an explosion of research into its potential uses.

“These are technologies that are going to be transformational for our society,” said Daniel Parente, M.D., an associate professor of family medicine and community health at the KU School of Medicine, whose research focuses on AI. “We haven't quite had enough time to really understand the full implications of them yet. But our role is to make sure that those changes are going to be helpful, and that they are well-implemented, so that we're helping the widest range of people that we can.”

How AI could help cure rare diseases

One of the ways AI helps Scott Weir, Ph.D., design new cancer treatments is by visualizing the structure of a disease target and its potential treatment. Weir compares it to a cellphone charger: a drug compound must fit just right into the structure of its target in order to be effective, the same way a cellphone works only with a certain charger. AI can help structural biologists create 3D models of those structures to predict and identify treatments.

“We use artificial intelligence to predict what a chemical structure might look like that fits best into the bottom of that cellphone,” said Weir, director of the Institute for Advancing Medical Innovation at KU Medical Center.

[Portrait: Scott Weir, Ph.D.]

This technology helps with the identification and design of new drugs, but it also helps researchers with drug repurposing — or using drugs that are already FDA approved to treat conditions they weren’t originally intended for. One of the most famous examples of this is Viagra, which was first developed to lower blood pressure before it was repurposed to treat erectile dysfunction.

Drug repurposing can be especially helpful for finding treatments for rare diseases, which the FDA defines as conditions that affect fewer than 200,000 individuals. In the United States, more than 30 million patients are affected by one of over 7,000 known rare diseases, most of which have no approved treatment. About half of rare disease patients are children.

Research to treat rare diseases can be difficult to fund because drug companies want to put their money toward treatments that can be marketed to more people, Weir said. That’s where AI comes in.

Weir said AI can predict what already-approved drugs could be a good match for treating rare diseases by screening thousands of FDA-approved treatments against the disease target. Although researchers will still have to run trials to test the drug for a new use, AI can significantly speed up the process, allowing researchers to skip early preclinical testing because the drug is already on the market. AI can even help researchers design the drug trial itself.
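The screening Weir describes is often called virtual screening. The sketch below is a minimal, hypothetical illustration of the idea: it ranks a small library of invented "approved drugs" by Tanimoto similarity to a reference compound known to bind the target. Real pipelines score 3D structural fit with dedicated docking software rather than the toy fingerprints used here.

    # Toy virtual screen: rank approved drugs by similarity to a reference
    # ligand known to bind the disease target. The bit-set "fingerprints"
    # below are hypothetical; real pipelines use 3D docking scores.

    def tanimoto(a: set, b: set) -> float:
        """Tanimoto similarity between two feature-bit sets."""
        return len(a & b) / len(a | b) if (a | b) else 0.0

    # Hypothetical structural fingerprints (sets of feature-bit indices).
    reference_ligand = {1, 4, 7, 9, 12, 15}
    approved_drugs = {
        "drug_A": {1, 4, 7, 9, 11},
        "drug_B": {2, 3, 5, 8},
        "drug_C": {1, 4, 9, 12, 15, 20},
    }

    # Score every library compound against the reference and rank the hits.
    ranked = sorted(
        ((tanimoto(fp, reference_ligand), name) for name, fp in approved_drugs.items()),
        reverse=True,
    )
    for score, name in ranked:
        print(f"{name}: {score:.2f}")  # top-scoring drugs move on to lab testing

The screen only narrows the field; as Weir notes, the top-ranked compounds still go through trials to confirm safety and effectiveness.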

Weir and his team have taken over 20 drugs to clinical trial as cancer treatments using this process.

“We can use AI to predict or identify a promising drug, but we still need to actually test it to make sure it’s safe and effective,” Weir said. “That’s not going to change any time soon.”

Once a drug is developed, AI can also help determine the best way to manufacture it. AI software can help predict how long drugs remain shelf stable and how they might interact with ingredients like lactose or calcium carbonate that bind the drug in a capsule or tablet, helping researchers find the most efficient, effective manufacturing method.
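As one concrete example of the kind of prediction such software makes, shelf life is commonly estimated by Arrhenius extrapolation from accelerated stability testing. The sketch below is a generic illustration of that calculation, not Weir's actual tooling; the activation energy and degradation rate are hypothetical numbers.

    import math

    # Arrhenius extrapolation: estimate a drug's degradation rate at normal
    # storage temperature from a rate measured under accelerated (hot)
    # conditions, using k2/k1 = exp(-Ea/R * (1/T2 - 1/T1)).

    R = 8.314        # gas constant, J/(mol*K)
    Ea = 83_000      # hypothetical activation energy, J/mol
    k_40C = 0.010    # hypothetical fraction of drug degraded per month at 40 C

    T_hot, T_store = 313.15, 298.15   # 40 C and 25 C in kelvin
    k_25C = k_40C * math.exp(-Ea / R * (1 / T_store - 1 / T_hot))

    # Shelf life ~ time until 5% of the active ingredient is lost,
    # assuming simple zero-order degradation for illustration.
    print(f"Rate at 25 C: {k_25C:.4f} per month")
    print(f"Estimated shelf life: {0.05 / k_25C:.1f} months")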

Weir’s ultimate goal with this technology is to connect biology, pharmacology and disease information. When these three components are combined with machine learning, it opens the door for researchers to fight multiple diseases at the same time.

“As people identify disease targets, we are using AI to see if we can more efficiently figure out if a target is important for just one disease, or if it is important for other diseases,” Weir said. “Up to this point, those discoveries have just been serendipitous.”

To take the technology even further, researchers hope to add genomic discoveries into the mix. Kansas City is home to a unique initiative that could help with that goal: Genomic Answers for Kids. Founded by the Genomic Medicine Center at Children’s Mercy Kansas City in 2011, Genomic Answers for Kids collects genomic data and health information from children who have, or may have, a genetic condition and then uses that data to help researchers understand the causes and potential treatments of these diseases. The program also gathers data from family members to better understand each particular disease.

The program has already analyzed over 25,000 genomes, leading to over 1,600 diagnoses for patients. KU Medical Center and the University of Missouri-Kansas City partner with Children’s Mercy on the project.

“There is no other program in the country that is doing this,” Weir said. “If we can plug genomics data into a platform with biology, pharmacology and disease information, it can hopefully point us in the right direction to help patients with rare diseases.”

Managing the risks of AI

AI helps significantly with large datasets like the ones found in genomic research, Salathe said. AI can comb through and analyze volumes of data that would otherwise take researchers years to sort through manually.

“If we are faced, as human beings, with these large datasets, we have a really hard time trying to sift through them,” Salathe said. “Those algorithms and machine learning can help us.”

But one of the issues with large datasets is that the analysis AI can perform is only as good as the data put into the machine. This means researchers need to be extra vigilant about the quality of the data they use.

“We need to be very thoughtful with each step and have very careful validation to make sure that these technologies are doing what we expect them to do,” Parente said.

[Portrait: Daniel Parente, M.D.]

Parente also worries about biases in the large datasets that train AI software. When this data is gathered, it reflects the willingness and ability of individuals to provide data, as well as the priorities of the researchers collecting the data. People of color are less likely to participate in medical research for a variety of reasons, ranging from mistrust and fear of the medical industry to exclusion by design — which means there isn’t as much data to represent them going into an AI program. Also, when research uses routinely collected data such as electronic health records, recording of data on race is often patchy and incomplete.

When biased data is used to train AI software, it can negatively affect the AI and, down the line, patients.

“What we don't want to do is take individuals who are already at structural disadvantage in our society and give them even more structural disadvantages because we're using these AI tools that have potentially poorly understood biases,” Parente said.
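One concrete safeguard against the problem Parente describes is to audit a dataset's representation and missingness before any model is trained. The sketch below assumes a hypothetical patient table with a race column; the records are invented for illustration.

    import pandas as pd

    # Hypothetical patient records destined for model training; in practice
    # this would be an extract from electronic health records.
    df = pd.DataFrame({
        "patient_id": [1, 2, 3, 4, 5, 6],
        "race": ["White", None, "Black", "White", None, "Asian"],
        "outcome": [0, 1, 0, 1, 1, 0],
    })

    # How incomplete is the race field? Patchy recording is common in EHRs.
    missing_rate = df["race"].isna().mean()
    print(f"Race missing for {missing_rate:.0%} of records")

    # How is each recorded group represented? Large gaps between the data
    # and the population a model will serve are a flag for biased behavior.
    print(df["race"].value_counts(normalize=True))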

Despite the risk of negative biases, Parente said AI does offer some tools for increasing health equity. Physicians across the country have been using chatbots and generative text software to ease the burden of documentation and visit summaries. This can be especially helpful in rural communities, where resources may be spread thinner.

Although AI will not be the silver bullet that solves the physician shortage, Parente said, it helps primary care physicians be more efficient, which could help them manage more patients.

“This is going to be just one of the tools that we can use to try to help our primary care providers meet the needs of our community — both in places where already we have lots of resources and in places where there are fewer resources available,” Parente said.

AI and hallucination

Hallucination, the confident generation of false information, has become another major concern with AI software. When AI hallucinates, it gives the user an answer that is not based on factual information, usually because of limitations or biases in the data used to train the system. This is common with programs like ChatGPT, which warns users when they sign on that the information it provides might not be entirely accurate.

Hallucination becomes a major issue when people use ChatGPT in place of a more traditional search engine like Google to look up a medical condition. The risk extends to clinical tools as well: a health care AI model might incorrectly identify a benign skin lesion as malignant, leading to unnecessary medical interventions.

“If you ask it to just summarize a science topic and cite its references, it will happily generate fake references for you,” Parente said.

The primary differences between a search engine and AI are in how information is gathered and how it is presented. Google takes the search terms the user types in and shows them a list of web pages where they can find information for themselves. On ChatGPT, users give the bot a question or prompt, and its answer is presented in plain text, with no sources or links for additional information.

ChatGPT seems more straightforward than traditional searching: it gives users a clear, singular answer instead of making them click through links and read web pages. But it also takes away a user’s ability to easily judge whether they trust the source of that information. ChatGPT also cannot gather information from the internet in real time the way Google does. Its knowledge is only as good as its last training data, which, according to its website, dates to 2022.

Parente said that, as they are constructed today, tools like ChatGPT can’t be fully trusted to give accurate information. But with time and additional tools for verification, the risk of misinformation can be lowered.

The risk of hallucination exists in all types of AI, and if unchecked, it could lead to massive mistakes and misinformation. But that’s why Parente and Salathe emphasize the need for human verification at every step of the process.

“These are definitely tools that we need to not be afraid of using,” Parente said. “But we just need to be really thoughtful about how we're using it.”

AI and privacy

Then there is the matter of privacy. Lisa Hoebelheinrich, J.D., senior associate vice chancellor for research administration at KU Medical Center, said data privacy regulations are scrambling to catch up with new AI technology.

“Much of privacy policy was written maybe 10 or 20 years ago and wasn’t written with AI in mind,” Hoebelheinrich said. “People are doing this research, and it's legal, but the questions being asked now are, ‘Should we be thinking about it in the same way?’”

While trying to determine the parameters around AI privacy is uncharted territory, Hoebelheinrich emphasized the collaboration happening among various institutions to find the best way forward.

“We are working with our peer institutions and colleagues to talk about this together,” Hoebelheinrich said. “We are aligning and hearing those different perspectives, so when we are operating in an environment where legal boundaries may be less clear, we at least know we are operating consistently with current industry standards, learning about those risks and carefully considering those risks.”

Looking beyond patient care

The potential applications for AI in the medical field are vast, but they won’t just be helpful in exam rooms. They can also ease some of the administrative burden for health systems and even help in medical education.

Doctors at KU Medical Center have been using generative AI software to help with documentation for several months, with plans to make the software available to all physicians. This software helps with visit summaries and makes the process of charting significantly more efficient.

The same generative text software that helps doctors summarize their visits can also help professors give better feedback to students.

“The idea is that it can help our experts more clearly and quickly express themselves to students, while keeping them closely in the loop so that all of their feedback remains their feedback,” Parente said. “It’s just a more advanced version of what your phone can sometimes do with next word prediction when you type.”
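Parente's phone-keyboard analogy can be made concrete with a toy example. The sketch below is a minimal bigram next-word predictor, the simplest ancestor of the prediction that generative text models perform with vastly richer context; the tiny corpus is invented for illustration.

    from collections import Counter, defaultdict

    # Count which word follows which in a corpus, then suggest the most
    # frequent follower, just like a phone keyboard's next-word prediction.
    corpus = (
        "the patient was seen in clinic the patient was given feedback "
        "the student was given feedback on the note"
    ).split()

    followers = defaultdict(Counter)
    for word, nxt in zip(corpus, corpus[1:]):
        followers[word][nxt] += 1

    def predict_next(word: str) -> str:
        """Most common word observed after `word` in the corpus."""
        return followers[word].most_common(1)[0][0] if followers[word] else ""

    print(predict_next("patient"))  # -> "was"
    print(predict_next("was"))      # -> "given" (seen twice vs. "seen" once)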

Parente added that a project is already underway to get these tools in the hands of educators at KU Medical Center.

Down the line, this kind of feedback tool could develop into a personalized tutor that helps students with subject matter that confuses them. But that’s still a vision of the future.

“I do think medicine is going to be slower than some areas to adopt this sort of technology. When you use this in, say, computer science, and teaching a student to write programming code — the risk is low. It gives you a wrong answer, the code won't compile, and the student will eventually figure that out,” he said.

“In medicine, the risk is much higher. If we teach students something wrong about medicine, that could potentially put patients in danger at some point in the future,” Parente said. “We are just going to need to be very careful about how we develop those tools — and they are certainly never going to replace humans when it comes to health care.”

Timeline of Artificial Intelligence in Medicine

Artificial intelligence has significantly advanced in the last several years, both technologically and in terms of its social impact. But the history of AI, and its history in medicine, is longer than many realize.

1950

Mathematician Alan Turing is widely known as the father of modern artificial intelligence. He created the Turing Test to determine whether a machine is intelligent. In the test, a human evaluator assesses a conversation between a machine and a person. If the evaluator can't definitively tell which is the computer, the machine passes the test.

1956

The first AI program, Logic Theorist, is created to prove mathematical theorems. That summer, John McCarthy coins the term "artificial intelligence" during the Dartmouth Summer Research Project on Artificial Intelligence.

1964

The first chatbot, ELIZA, is created.

1966

The first "electronic person," Shakey, is created. This is the first mobile robot that could follow complex instructions.

1972

MYCIN, a "backward chaining" AI system, is created. Based on patient information input by a doctor and its knowledge of about 600 rules, it could provide a list of potential bacterial pathogens and then recommend an antibiotic treatment plan, adjusted for the patient's body weight. (A minimal sketch of backward chaining appears after this timeline.) The same framework was later used to develop INTERNIST-1, which had a larger knowledge base to assist primary care doctors in diagnosing.

1973

The Stanford University Medical Experimental-Artificial Intelligence in Medicine (SUMEX-AIM) time-shared computer program is created to help expand the capabilities of biomedical research.

1975

The first NIH-sponsored workshop on artificial intelligence is held at Rutgers University.

1976

The CASNET model is created. It could apply its knowledge about a disease to a specific patient to make treatment suggestions to physicians.

1986

DXplain is released by Massachusetts General Hospital. Physicians can input symptoms into the system to receive a differential diagnosis. It can also be used as an electronic medical reference, with detailed disease descriptions and references. When it debuted, it covered about 500 diseases; it has since expanded to over 2,400.

2000s

Deep learning becomes practical with advances in computing power. A specialized branch of machine learning, deep learning uses complex, many-layered neural networks that can learn from data and make decisions with minimal human guidance.

2007

IBM begins developing DeepQA, an open-domain system that could generate probable answers to questions using natural language processing and various searches. Using information from a patient's electronic medical record, it could return evidence-based medicine responses.

2010

Computer-assisted diagnostics is applied in endoscopy to improve the detection and differentiation of benign and malignant colon polyps.

2011

Apple programs Siri, a virtual AI assistant, into its phones.

2015

Pharmabot, a chatbot, is created to help with medication education for pediatric patients and their families.

2017

Mandy, a chatbot, is created to help with patient intake at primary care offices. Also in 2017, Arterys becomes the first clinical deep learning application cleared by the FDA. The first Arterys product, CardioAI, could analyze cardiac magnetic resonance images in a matter of seconds. The application has since expanded to include liver and lung imaging, chest and musculoskeletal X-rays, and non-contrast head CT images.
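Backward chaining, the reasoning style behind MYCIN in the 1972 entry above, can be illustrated in miniature: start from a hypothesis and work backward through if-then rules until known facts either support or fail to support it. The rules and facts below are hypothetical stand-ins, not MYCIN's actual knowledge base of roughly 600 rules.

    # Miniature backward chainer in the spirit of MYCIN. To prove a goal,
    # find a rule that concludes it and recursively prove that rule's
    # premises, bottoming out in observed facts. Rules are hypothetical.

    RULES = {
        # conclusion: list of alternative premise sets
        "infection_is_bacterial": [{"elevated_white_count", "fever"}],
        "likely_e_coli": [{"infection_is_bacterial", "gram_negative_rod"}],
    }
    FACTS = {"elevated_white_count", "fever", "gram_negative_rod"}

    def prove(goal: str, facts: set) -> bool:
        """A goal holds if it is an observed fact, or if every premise of
        some rule concluding it can itself be proved."""
        if goal in facts:
            return True
        return any(
            all(prove(p, facts) for p in premises)
            for premises in RULES.get(goal, [])
        )

    print(prove("likely_e_coli", FACTS))  # True: both premises chain back to facts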

