From Science Fiction to Science
University of Kansas researchers are using artificial intelligence to bring futuristic ideas into the present day.
In the first episode of the television series Star Trek: Voyager, the starship’s chief medical officer is killed, and the crew is forced to rely on an Emergency Medical Hologram — an artificial intelligence programmed with all known medical knowledge — for medical care. That is how a software program became a major character in the show.
Of course, Voyager flew missions through far-flung reaches of the galaxy in the 24th century. Here on present-day Earth, health care professionals are just beginning to tap into the full potential of artificial intelligence (AI) in medicine. At the University of Kansas Medical Center, researchers from the Department of Otolaryngology-Head and Neck Surgery are leading projects that incorporate AI into clinical care and research.
In one project, a computer program may be able to diagnose cancer just by “seeing” it in a photo. In another, software may be able to predict a debilitating brain disease by analyzing someone’s sense of smell. The success of these projects depends on partnerships with computer programmers, hours of testing and a mountain of data.
CATCHING UP TO COMMERCE
Andrés M. Bur, M.D., assistant professor in the KU Department of Otolaryngology-Head and Neck Surgery, is a devoted disciple of both medicine and technology. He graduated first in his class with a bachelor’s degree in electrical engineering before entering medical school.
“I have always been interested in finding ways to combine my technical interests and my expertise in health care,” Bur said. “With advances in computing in recent years, we have seen an explosion in artificial intelligence in making better predictions and in helping us make better decisions for the patient.”
A subcategory of AI called machine learning has been especially explosive. Machine learning feeds massive amounts of data into a computer algorithm, which identifies patterns within that data and uses them to make predictions when given new data.
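As a rough illustration of that loop, the toy sketch below trains a classifier on labeled examples and then asks it to predict labels for examples it has never seen. The bundled iris-flower dataset stands in for medical data; nothing here is the KU team’s code.

```python
# A toy version of the machine-learning loop: find patterns in labeled
# data, then predict labels for new, unseen data. The iris-flower
# dataset is a stand-in; nothing here is medical or from the KU project.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_seen, X_new, y_seen, y_new = train_test_split(X, y, random_state=0)

model = KNeighborsClassifier().fit(X_seen, y_seen)  # learn patterns from known data
print(f"accuracy on unseen data: {model.score(X_new, y_new):.0%}")
```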
Machine learning is how social media apps like Facebook and Instagram can identify the names of individuals in posted photographs. By using an algorithm that distinguishes facial characteristics and their relation to each other, facial recognition can also serve as a security feature, such as unlocking a cell phone or granting entry to a secure facility.
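Under the hood, the matching step often comes down to comparing numeric “feature vectors” extracted from faces. The sketch below shows only that comparison; the vectors, similarity measure and threshold are illustrative assumptions, not any particular vendor’s system.

```python
# A stripped-down sketch of face matching: each face is reduced to a
# numeric feature vector, and two faces "match" when the vectors are
# close enough. Every number here is made up for illustration.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

enrolled = np.array([0.90, 0.11, 0.42])  # stored when the owner sets up the phone
attempt = np.array([0.88, 0.12, 0.41])   # extracted from the unlock attempt

THRESHOLD = 0.999  # illustrative; real systems tune this carefully
print("unlock" if cosine_similarity(enrolled, attempt) > THRESHOLD else "deny")
```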
Machine learning is also behind the algorithms of online shopping. The more a computer discovers about your previous shopping habits, the more it can suggest items you might buy in the future.
Bur’s machine-learning project involves more than 50,000 images of the larynx, or voice box, gathered from the clinical practices of physicians in his department.
In partnership with Guanghui Wang, Ph.D., former associate professor of electrical engineering and computer science at the University of Kansas in Lawrence, Bur is developing a machine-learning program that can identify the presence of a lesion and, if one is present, classify it into set categories.
“Essentially, we are trying to have the machine recognize if there is something abnormal in the image, locate it within the image, and then classify it. Is it cancerous? Benign? Or is it some other type of noncancerous disorder?” Bur said.
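The three questions Bur lists map onto what an object detector returns for each finding: a bounding box (locate it), a class label (classify it) and a confidence score. The sketch below runs a generic torchvision detector pretrained on everyday objects, purely to show the shape of that output; it is not the KU model and knows nothing about the larynx.

```python
# Illustrative only: a generic pretrained detector, not the KU team's model.
# Each detection pairs a box (where), a label (what) and a confidence score.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

frame = torch.rand(3, 480, 640)  # random stand-in for an endoscopic video frame
with torch.no_grad():
    pred = model([frame])[0]

for box, label, score in zip(pred["boxes"], pred["labels"], pred["scores"]):
    if score > 0.5:  # keep only confident detections
        print(f"label={label.item()}  score={score:.2f}  box={box.tolist()}")
```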
The program builds on the algorithm Wang created to detect and classify polyps from the inner lining of the colon and the rectum. These polyps can develop into colorectal cancer, the third-most diagnosed cancer in the United States.
Working with Ajay Bansal, M.D., KU associate professor of gastroenterology, and Amit Rastogi, M.D., KU professor of gastroenterology, Wang developed a machine-learning program using 157 video sequences from colonoscopies as the dataset.
According to their published paper, the computer learned to classify two types of polyps with accuracy comparable to, or better than, that reported among gastroenterologists. In other words, the machine-learning “doctor” could classify two types of polyps as well as a flesh-and-blood, human specialist could.
Bur is hoping for similarly successful results from the larynx project. He foresees a day when the machine-learning program they develop could serve as a second opinion for rural doctors.
“There is a lot of potential to be able to help support clinicians, especially those living in more rural communities where they may not have access to sub-specialty care,” Bur said.
Bur and Wang recently received a grant of more than $150,000 from the National Cancer Institute, a division of the National Institutes of Health, to fund the project. They will need to figure out how to overcome one big complication: the fact that the larynx moves. By virtue of its function, it opens for breathing and closes to keep food out. It performs anatomic magic that causes the vocal cords to vibrate.
Movement wasn’t a variable in the classification of colorectal polyps. But Wang said larynx images and colonoscopy images still share common features.
“The results we achieved in the colonoscopy project will greatly benefit our research in the larynx-image analysis,” Wang said. “This is a very interesting topic to me since I am working on many machine-learning projects with applications in object detection and classification.”
USING AI TO DIAGNOSE NEUROLOGICAL DISEASE
While Wang and Bur’s larynx project gets its dataset from visual images, Jennifer Villwock, M.D., is creating data based on smell, not sight. Villwock, an associate professor of rhinology and skull-base surgery at KU, is working towards a day when a quick, noninvasive test of one’s sense of smell could signal the presence of a chronic disease such as Alzheimer’s disease, Parkinson’s disease or diabetes.
Villwock’s hypothesis is that as these chronic diseases progress, patterns emerge in olfactory decline, or the degree to which a person loses the sense of smell. The decline may show up in only a specific scent ― say, rose or citrus ― but it would be predictable and could be compared to known patterns of olfactory decline in people with a certain disease.
“If we had a reliable way of tracking smell, that could help us potentially make clinical decisions,” Villwock said. “We know that sense of smell is something that can be objectively measured ― the literature supports that — so we want to develop a way to measure that is cost-effective and accessible.”
To achieve that goal, Villwock worked with a team from KU Medical Center to create a patent-pending “smell kit.” The Affordable Rapid Olfactory Measurement Array (with the aptly named acronym of AROMA) includes small plastic vials that resemble lipstick tubes. These tubes contain a medium that holds different concentrations of essential oils in different scent families.
A patient begins by smelling a middle concentration of a scent and indicating whether they can smell it, and if so, identifying it. Answers are recorded, and in subsequent rounds the patient is presented with stronger or weaker concentrations, depending on whether their previous responses were correct.
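In testing terms, this is an adaptive “staircase”: each answer moves the next trial up or down the concentration ladder. The sketch below captures that idea in miniature; the level names, step rules and number of rounds are illustrative assumptions, not the AROMA kit’s actual protocol.

```python
# A minimal sketch of the adaptive staircase described above. Levels,
# step rules and round count are assumptions, not AROMA's real protocol.
CONCENTRATIONS = ["weakest", "weak", "middle", "strong", "strongest"]

def run_staircase(identifies_scent, rounds=4, start=2):
    """identifies_scent(level) -> True if the patient names the scent
    correctly at that concentration (a stand-in for a real response)."""
    level, responses = start, []  # begin at the middle concentration
    for _ in range(rounds):
        correct = identifies_scent(level)
        responses.append((CONCENTRATIONS[level], correct))
        # Correct answer -> try a weaker scent next; incorrect -> stronger.
        level = max(level - 1, 0) if correct else min(level + 1, len(CONCENTRATIONS) - 1)
    return responses

# Example: a patient who only detects the two strongest concentrations.
print(run_staircase(lambda level: level >= 3))
```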
Villwock is interested in building the database that will help researchers understand which patterns of smell loss signal a corresponding medical problem.
“Let’s say I’m giving you a rose concentration, and you say, ‘Yes, I smell it,’ but then you say, ‘I smell licorice.’ Well, that’s incorrect. But what does that incorrect answer mean?” Villwock said.
What if the wrong answer ― or the inability to smell a certain scent at all ― could be mapped out so that it signals something to your doctor? You smell licorice? That has been proven to be a sign of fill-in-the-blank. Villwock’s first job is to find what goes in that blank, but she will need enormous amounts of patient data to do so. And that is where machine learning helps.
“There are all these different permutations of possible answers. So maybe it’s not licorice, but another wrong answer. We want to be able to track the answers and classify them and then decipher meaning from them,” Villwock said. “That would be difficult using traditional statistical methods, but machine learning is great for that type of complex data.”
So, how would one go about building that smell-problem database? The first step would be to test participants with known medical concerns to see if the machine-learning algorithm could classify them correctly. Villwock conducted a study of 81 clinical trial participants: 24 with mild cognitive impairment, 24 with Alzheimer’s disease and 33 with no cognitive impairment.
All participants gave feedback in response to smelling the AROMA vials, and a portion of those responses ― about 80% ― were fed into a computer algorithm along with each participant’s group label (Alzheimer’s, mild cognitive impairment or control). This process essentially taught the computer program what to look for when assigning a person’s AROMA results to a particular group. The remaining 20% of AROMA responses were then used to test how well the machine-learning program could classify them into the proper group.
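The teach-then-test procedure Villwock describes looks roughly like the sketch below, with random numbers standing in for AROMA responses. The 81-participant split mirrors the study, but the features and the choice of classifier are assumptions, since the algorithm itself isn’t named here.

```python
# A sketch of the 80/20 teach-then-test procedure described above.
# Random numbers stand in for AROMA responses, and the classifier
# choice is an assumption; neither comes from the actual study.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((81, 20))          # 81 participants, 20 made-up smell features
y = rng.integers(0, 3, size=81)   # 0 = Alzheimer's, 1 = impairment, 2 = control

# 80% of the responses teach the model; the held-out 20% test it.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.0%}")
```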
Could the algorithm put the Alzheimer’s patient’s results with the results from the other Alzheimer’s patients in the group? Would the same happen for the other two groups?
In this initial stage, the machine-learning program scored a B+, with an accuracy rate of 87%. The sample was small, so more research will be needed, but Villwock is excited about the potential of the AROMA system and a tweaked algorithm.
Tests are already on the market to detect Alzheimer’s disease, Parkinson’s disease and diabetes, and Villwock doesn’t see the AROMA test and algorithm replacing these tests. Instead, she’s attempting to create a test that could be given easily, even at home, as an early indication of trouble.
Imagine if such a kit had been available for purchase during the COVID-19 pandemic. One well-known symptom of the coronavirus is a loss of sense of smell, and Villwock is currently working to quantify olfactory dysfunction in subjects who have a high likelihood of contracting COVID-19.
If Villwock’s research had been further along, and the data on coronavirus patients already known, sniffing a few lipstick-like vials could have been an early warning that someone had contracted the disease. A definitive test would still be required, but AROMA and the algorithm could raise initial suspicions.
“One of the reasons people lose their sense of smell with a viral illness is because the loss is your body’s own protective mechanism. Because your olfactory nerves come down from your brain, they are a direct conduit from your sinonasal cavity to your brain,” Villwock said. “So, one of the thoughts is that your body shuts down those nerves. It’s saying, ‘Let’s head it off at the pass.’”
So, how do other viruses affect the sense of smell? And at what stage of the illness does olfactory dysfunction happen? Again, there are more questions than answers right now, but Villwock sees machine learning as a big part of the process.
“When you have a methodology that is generating a ton of data, it becomes very cumbersome to analyze via traditional statistical models,” she said. “And that’s where the strength of machine learning comes into play.”