Automating care: How photonics and AI are changing medicine

01 January 2024
By Vineeth Venugopal
Photo credit: Shutterstock

In the graduate class he teaches at Vanderbilt University, Michael King snapped a photo of human platelets with his iPhone and sent it to ChatGPT. Through his GPT-4 premium subscription, the chatbot quickly calculated the volumes of the platelets from the image and reported the skewness of the distribution in real time.

This calculation would normally take hours of coding involving computer vision, plotting software, and statistical packages, not to mention patience, grit, and caffeine. Now, thanks to artificial intelligence (AI), many tasks that require well-trained humans are literally just a click away.
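The statistic King describes is simple once the measurements exist. A minimal sketch, assuming the platelet volumes have already been extracted from the image (the values below are invented for illustration):

```python
def skewness(values):
    """Fisher-Pearson skewness: the third standardized moment.

    Positive skew means a long right tail in the distribution.
    """
    n = len(values)
    mean = sum(values) / n
    m2 = sum((v - mean) ** 2 for v in values) / n  # variance
    m3 = sum((v - mean) ** 3 for v in values) / n  # third central moment
    return m3 / m2 ** 1.5

# Hypothetical platelet volumes in femtoliters (illustrative values only).
volumes = [7.1, 8.3, 9.0, 9.5, 10.2, 11.8, 14.6, 18.9]
print(skewness(volumes))
```

The hard part automated by the chatbot is not this formula but the computer-vision step of measuring each platelet in the image.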

AI is swiftly moving from the outskirts to the core of medical practice. It powers diagnostic procedures across radiology, ultrasound, and MRI with computerized precision, pinpointing the minutest of anomalies. In ophthalmology, devices such as optical coherence tomography (OCT) machines are shrinking in size and growing in autonomy, hinting at widespread accessibility in the near future. The frontiers of medicine are expanding as AI helps to unearth novel drugs, medical devices, and intricate links between disease processes at astonishing rates. Meanwhile, a web of sophisticated sensors is interweaving with our lives, monitoring our health from within and beyond, ushering in an era of pervasive, intelligent oversight of our well-being.

According to Benjamin Gmeiner, an AI researcher at Novartis, the easiest and most routine tasks in medicine are likely to be the first ones outsourced to artificial intelligence.

“If you spend twenty minutes with your doctor today, they will look at you for hardly two or three minutes, because they need to take notes, look at your results, etc. They can miss subtle markers in your voice, mannerisms, or eyes that could be important biomarkers of disease. With new AI tools with speech-to-text modalities, your doctor could devote more of their time to you,” he says.

King recently wrote an article, “The Future of AI in Medicine: A Chatbot’s Perspective,” based on an in-depth conversation with ChatGPT about prospective breakthroughs in medical AI. Featured in the Annals of Biomedical Engineering, the exchange unveils the potential for AI to reshape healthcare, offering an outlook on technology’s evolving role in medicine.

In a fittingly self-referential preview of things to come, the AI behind ChatGPT prophesied that “AI has the potential to revolutionize the way we approach healthcare, by providing more efficient and effective solutions to some of the biggest challenges facing the medical industry. For example, AI can help doctors analyze large amounts of medical data, such as imaging scans or lab results, and identify patterns that might be missed by the human eye. This can help doctors make more accurate diagnoses and provide more targeted treatments for patients.”

Zhihao Ren at the National University of Singapore studies AI-driven biomedical sensors. “In the near future—the next five to ten years—AI will be used for drug screening and monitoring, while in the distant future—beyond ten years—AI will increasingly be used in disease prevention through early detection and diagnosis,” he says.

Indeed, efficient, accurate disease diagnosis seems to be an area where AI could make an impact. Traditionally, computers have excelled at sifting through vast datasets to detect patterns, a task accomplished by meticulously designed algorithms. These algorithms function as detailed recipes for identifying irregularities, and they demand extensive preparation, mathematical rigor, and seamless execution. The advent of deep learning, with its foundation in artificial neural networks, simplifies the process significantly: shown just a handful of examples, these models learn what to watch for, often achieving accuracy that is nothing short of astonishing.
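The contrast can be made concrete with a toy sketch: a single logistic neuron, the smallest possible “neural network,” trained on a handful of labeled examples. The two features and their labels below are invented for illustration; no hand-written recipe for spotting the anomaly appears anywhere in the code, and the decision rule emerges from the examples alone.

```python
import math

def train_neuron(samples, labels, lr=0.5, epochs=2000):
    """Fit a single logistic neuron by gradient descent on labeled examples."""
    n_features = len(samples[0])
    w = [0.0] * n_features
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))  # sigmoid activation
            err = p - y                     # gradient of the cross-entropy loss
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    """Probability that sample x belongs to class 1."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Toy data: two made-up image features per sample (say, brightness and a
# texture score), labeled 0 (normal) or 1 (anomalous).
X = [[0.1, 0.2], [0.2, 0.1], [0.3, 0.2], [0.8, 0.9], [0.9, 0.7], [0.7, 0.8]]
y = [0, 0, 0, 1, 1, 1]
w, b = train_neuron(X, y)
```

Real diagnostic models are deep networks with millions of parameters trained on thousands of annotated scans, but the principle is the same: parameters are adjusted until the examples are classified correctly.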

X-rays, ultrasound, MRI, CT (computed tomography), and nuclear imaging are the mainstay imaging techniques used in medicine. They generate 2D and 3D snapshots of nearly any body part, offering clinicians vital diagnostic tools. Unlike high-resolution visual microscopy, which requires specific sample preparation, these imaging methods are broadly applicable and relatively noninvasive. Additionally, dynamic techniques like MRI can capture temporal changes, creating 4D images that allow doctors to track physiological processes in real-time.

According to a recent survey on medical imaging informatics, medical facilities and research institutes routinely consolidate image data into clinical data repositories (CDRs). Alongside CDRs, various other databases such as electronic health and medical records (EHR/EMR), radiology and pathology archives, tumor registries, and biospecimen repositories, not to mention clinical trial management systems (CTMS), are being populated with vast quantities of annotated images. These annotations include clinical diagnoses and patient demographics. Such rich, well-documented repositories are proving indispensable for research that is reproducible, scalable, and transparent. Moreover, these data-rich repositories are instrumental in training AI models, as they provide a plethora of medical images for pattern recognition. The reliability and accuracy of these AI models generally grow with the size and quality of the datasets they are trained on.

As early as 1995, AI was shown to be effective in detecting lung nodules in chest X-rays. Since then, these efforts have extended to all parts of the human anatomy, from detecting lesions in the skin to amyloid plaques in the brain (an early sign of Alzheimer’s disease). Recently, advanced AI algorithms have even converted one imaging modality into another, for example generating a synthetic MRI-like image from a CT scan.

AI has emerged as a game-changer in cancer diagnosis, where early and accurate detection is crucial. By analyzing medical images from the mainstay modalities, AI can identify tissue irregularities with remarkable precision. Such capabilities can potentially spot early signs of cancer that may go unnoticed by clinicians, thereby significantly improving the chances of successful treatment and patient outcomes.

Indeed, real data from the human body is messy and complex, even after post-processing by a helpful computer. “In 3D images, these [biomarkers of disease] are hard to see,” says Gmeiner, who has been studying the application of AI to OCT.

“OCT is the gold standard in retinal imaging,” says Evan Jelly, a researcher at Lumedica, a North Carolina company that makes 3D-printed portable OCT devices.

According to ChatGPT, OCT is “a noninvasive imaging test that uses light waves to take cross-section pictures of your retina, the light-sensitive tissue lining the back of the eye. With OCT, each of the retina’s distinctive layers can be seen, allowing your ophthalmologist to map and measure their thickness. These measurements help with the detection and treatment of diseases affecting the retina, such as age-related macular degeneration (AMD) and diabetic eye disease. OCT works much like ultrasound, except that it uses light rather than sound waves to create images, resulting in higher-resolution pictures. This technology enables clinicians to detect and treat eye conditions earlier and with more precision.”

Today, OCT is being applied to body parts other than the eye. There is substantial academic literature on the application of OCT to the ear, as well as to the digestive, cardiac, and pulmonary systems. Moreover, “the eye is a door to different parts of the body,” says Gmeiner, particularly the brain, where retinal scans may reveal early signs of Parkinson’s disease. Via the eye, OCT can also flag the risk of cardiovascular events.

Lumedica is prototyping a portable OCT device with 3D-printed parts, which the company claims is cheaper and more adaptable to different testing requirements. For example, the probe could be attached to the end of a tube and used endoscopically, according to Jelly.

Gmeiner has specifically studied the use of OCT in AMD, a condition in which fluid builds up in the retina, causing blurred vision. “The largest risk factor for AMD is age. As we are living longer and longer, the chances of getting AMD increase,” he says.

OCT scans can evaluate fluid volume in the retina, and it is possible to control AMD and restore vision, provided that the disease is identified early.

AMD is diagnosed primarily using OCT scans, which can visualize fluid accumulation within the retina’s layers. By comparing these detailed images over time, physicians can monitor the progression of the disease closely, assess the effectiveness of treatment, and make informed decisions about patient care. This temporal comparison is a crucial aspect of managing AMD, as it can indicate whether the condition is stable, improving, or worsening.

AI can not only spot these fluid-filled regions but also quantify the volume of fluid in the retina. According to Gmeiner, different doctors can offer differing opinions on the same OCT scan, especially if the data is not self-evident. By grounding its assessment in quantitative measurements, AI can deliver more consistent readings in these circumstances.
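The quantification step itself is straightforward once the hard part, segmentation, is done. As a toy illustration, assume an AI model has already produced a binary mask marking fluid voxels in a 3D OCT volume; the fluid volume is then just a voxel count scaled by voxel size. The mask and voxel size below are made up for the example.

```python
def fluid_volume_mm3(mask, voxel_mm3):
    """Sum the voxels flagged as fluid in a binary 3D segmentation mask
    and convert the count to a physical volume.

    The mask would come from an AI segmentation model; here it is a
    nested list of 0/1 values, and the voxel size is a made-up example.
    """
    count = sum(v for plane in mask for row in plane for v in row)
    return count * voxel_mm3

# Hypothetical 2 x 2 x 3 mask containing 4 fluid voxels.
mask = [[[0, 1, 0], [1, 0, 0]],
        [[0, 1, 0], [0, 0, 1]]]
print(fluid_volume_mm3(mask, 0.001))  # 4 voxels of 0.001 mm^3 each
```

Because the number is computed the same way every time, two scans of the same patient taken months apart can be compared directly, which is exactly the longitudinal tracking AMD management relies on.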

AI can delve into the data harvested from imaging devices like OCT to uncover additional functional attributes, including birefringence and polarization properties. These attributes can enhance the analysis of vascular structures through techniques like angiography or by assessing the condition of other tissue types, such as cartilage. This capability not only enriches diagnostic precision but also opens new avenues for medical research and treatment planning.

According to Ren, “AI can enhance various optical sensors, especially spectral optical sensors. Biological signals often involve mixtures of multiple substances and exhibit significant variations with changes in the environment and among different patients. AI can directly extract features from spectral signals instead of making simple comparisons with standard databases. This greatly improves data-processing efficiency and can further enhance the recognition of hyperspectral images.”
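One way to picture “extracting features from spectral signals” rather than comparing raw curves against a database is principal component analysis: learn the dominant pattern of variation across many spectra, then summarize each new spectrum by its score along that pattern. A minimal sketch via power iteration follows; the four-channel “spectra” are invented, and real systems would use far richer learned features.

```python
def first_principal_component(spectra, iters=200):
    """Find the dominant direction of variation across a set of spectra
    by power iteration on their covariance matrix."""
    n, d = len(spectra), len(spectra[0])
    mean = [sum(s[j] for s in spectra) / n for j in range(d)]
    centered = [[s[j] - mean[j] for j in range(d)] for s in spectra]
    v = [1.0] * d
    for _ in range(iters):
        # One covariance-matrix multiply: C v = X^T (X v) / n
        xv = [sum(row[j] * v[j] for j in range(d)) for row in centered]
        v = [sum(xv[i] * centered[i][j] for i in range(n)) / n for j in range(d)]
        norm = sum(x * x for x in v) ** 0.5
        v = [x / norm for x in v]
    return v, mean

def feature(spectrum, component, mean):
    """Score a spectrum along the learned component: one informative
    number instead of a point-by-point database comparison."""
    return sum((s - m) * c for s, m, c in zip(spectrum, mean, component))

# Invented 4-channel spectra: most variation sits in the first two channels.
spectra = [[1.0, 0.9, 0.0, 0.1],
           [2.0, 1.8, 0.1, 0.0],
           [3.0, 2.7, 0.0, 0.1],
           [4.0, 3.6, 0.1, 0.0]]
component, mean = first_principal_component(spectra)
```

The learned component concentrates on the channels that actually vary, which is the sense in which features are “extracted” from the signal rather than looked up.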

Optical sensors detect light, often in the infrared spectrum, to determine temperature, humidity, and chemical composition. These sensors produce outputs by registering changes within optical devices. AI has been used in this area to increase the functionality of complex photonic devices such as waveguides and plasmonic nano-antennas, which, in turn, make optical sensors more effective.

Besides optical sensors, wearable electronic sensors measure quantities such as pressure and roughness using devices like accelerometers and gyroscopes. Ren’s research has demonstrated the feasibility of marrying optical sensors with wearable electronics by using photonic components made from aluminum nitride. The data generated by these components are processed and analyzed using AI, making the sensors faster, more accurate, and more sensitive.

The data generated by these improved sensors, in turn, make the AI models better at what they do, fueling a positive feedback loop. Especially in the case of medical AI, gathering patient information can be challenging, Ren says. “Data collection is not an easy task, and it requires a complicated approval process and raises privacy concerns of patients. Gathering and using diverse, high-quality medical data while protecting patient privacy is a significant hurdle.”

Gmeiner and Jelly emphasize that outcomes from AI models should only be used as informative tools for trained medical practitioners. This underscores the importance of human oversight in clinical decision-making, even as AI becomes more integrated into medical practice.

This is because AI models that use neural networks are notoriously hard to interpret and do not provide insight into how a result was arrived at. These black-box models may contain hidden failure modes, which can be fatal when applied to clinical diagnosis.

But AI models are constantly improving and are beginning to match, and in some tasks outperform, humans. These improvements could usher in a new era of personalized medicine that is more preventive than diagnostic, helping us achieve better health metrics through constant supervision.

This can help people in remote communities in the near term, especially in countries like Australia where people outside of big population centers may have to travel for hours to reach the nearest medical facility. An autonomous OCT station could be set up in a local grocery store, for instance, and customers could use it to check for health-risk factors or to send health information to a primary-care physician.

Still, there are many ethical issues in using AI in medicine to be figured out. The same big questions concerning trust, ethics, and social restructuring that are being asked about AI in everyday life apply to healthcare as well. We need to make sure that we use AI in ways that are fair and safe, while keeping the human touch in medicine.

Vineeth Venugopal is a science writer and materials researcher who loves all things and their stories.
