
Ask a Caltech Expert: AI for Personalized Medicine

Published on Saturday, March 18, 2023 | 5:44 am

As part of Conversations on Artificial Intelligence, a webinar series hosted by the Caltech Science Exchange, Andrew and Peggy Cherng Professor of Electrical Engineering and Medical Engineering Azita Emami discusses how her lab incorporates artificial intelligence (AI) into medical devices to improve health and enhance quality of life.

The questions and answers below have been edited for clarity and length.

The era of personalized medicine always seems to be just around the corner but never quite here for the masses. Can you explain what “personalized medicine” means and what’s needed to make it a reality?

Personalized medicine, also known as precision medicine, refers to medical therapies, predictions, or even preventions that are carefully tailored to a particular individual. It has been around for quite some time, but it has not been applied very broadly. An early example of precision medicine, one that has been around for almost two decades, is breast cancer treatment in which patients receive chemotherapy based on the expression of certain genes. In fact, genotyping cancer cells to determine whether a certain chemotherapy will work is one of the most advanced examples of personalized medicine today.

But the promise of personalized medicine has been much bigger and broader. The hope is to use not only genetic information but also to include information from a variety of sources: different biomarkers, the patient’s history, images, and data from implantable devices that can continuously monitor patients. The hope is to use all this data and apply it to a broader set of conditions beyond cancer and chemotherapy.

What are the tools we need to get there? There are, of course, things on the medical side. But on the engineering and data science side, we need to generate very high-quality data that can be used for many, many individuals, perhaps over years. We need devices and imaging systems that can generate this data, and we need to include many individuals in our data collection. Finally, we need algorithms and data science approaches that can handle the complexity of this data.

How will artificial intelligence or machine learning play a role in this?

As I mentioned, the hope is that we collect data from many different sources and from many different people. People of different ages, genders—you name it. We then have to correlate this complex set of data to different conditions and also, perhaps, predict whether a patient is at higher risk. This is a place where machine learning and AI can play a huge role.

Another area in which AI and machine learning can be very helpful for precision medicine involves embedding machine learning into wearable and implantable devices. We could quickly train algorithms on an individual patient’s baseline as well as on data from other patients. We could then use the devices to predict problems or detect dangerous conditions.

Can you give us a brief overview of your research and when you started incorporating AI methods?

Generally, in my lab, we are interested in biomedical devices—for instance, sensors and drug delivery systems or navigation tools for precision surgeries. We build tiny devices that can be implanted. They’re usually wireless and battery-less, and we try to make them as noninvasive as possible. I started working in this domain very soon after I arrived at Caltech in 2007, but I always wanted to work in the domain of neural interfaces.

One of the reasons was that one of my sisters suffers from epilepsy. From very early on, I observed that this was difficult for her. Drugs had side effects and sometimes were not successful in controlling her condition. That was something very close to my heart, and I always wanted to work in this domain. I was very lucky, about six years ago, that I could start working on early seizure detection, or prediction, for patients who have epilepsy.

About one-third of patients with epilepsy cannot take advantage of drugs. There is a subset of this group for whom, if we predict the seizure early enough, deep brain stimulation can stop it. That was one of my first projects that involved huge amounts of data. We were looking at neural data from electrodes under the skull, intracranial electrodes. The goal was to predict a seizure based on the signature of the neural data. As you can imagine, this can be very patient dependent. Fortunately, there are large publicly available data sets on epilepsy. First, we built systems that involved traditional signal processing, but we quickly realized that we needed an adaptive system that could learn from the data sets and become more personalized for each patient.

That was a turning point. We started using learning algorithms, and we could successfully show that we could train a network to very reliably predict the seizure early enough to stop it.
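For readers who want a concrete picture, here is a minimal sketch, in Python, of the general kind of pipeline described above: slicing an intracranial EEG recording into windows, extracting simple spectral features, and training a classifier to flag pre-seizure windows. The sampling rate, window length, feature choice, and the synthetic data are illustrative assumptions, not details of the lab’s actual system.

```python
# Minimal sketch of a windowed EEG seizure-prediction pipeline (illustrative only).
# Assumes `eeg` is a (channels, samples) intracranial recording and `labels` marks
# each 4-second window as pre-seizure (1) or normal (0).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

FS = 256          # assumed sampling rate in Hz
WIN = 4 * FS      # 4-second analysis windows

def band_power(window, fs, lo, hi):
    """Average spectral power of each channel in the [lo, hi) Hz band."""
    freqs = np.fft.rfftfreq(window.shape[-1], d=1.0 / fs)
    psd = np.abs(np.fft.rfft(window, axis=-1)) ** 2
    mask = (freqs >= lo) & (freqs < hi)
    return psd[:, mask].mean(axis=-1)

def extract_features(eeg):
    """Slice the recording into windows and compute band-power features per window."""
    n_windows = eeg.shape[1] // WIN
    feats = []
    for i in range(n_windows):
        w = eeg[:, i * WIN:(i + 1) * WIN]
        feats.append(np.concatenate([
            band_power(w, FS, 4, 8),     # theta band
            band_power(w, FS, 8, 13),    # alpha band
            band_power(w, FS, 13, 30),   # beta band
        ]))
    return np.array(feats)

# For a self-contained demo, generate synthetic "recordings"; a real study would
# use intracranial EEG from patients (large public epilepsy datasets exist).
rng = np.random.default_rng(0)
eeg = rng.standard_normal((16, 600 * FS))               # 16 channels, 10 minutes
labels = rng.integers(0, 2, size=eeg.shape[1] // WIN)   # placeholder window labels

X = extract_features(eeg)
X_train, X_test, y_train, y_test = train_test_split(X, labels, stratify=labels)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```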

This was the beginning of using AI in my group, but now we have extended that effort to new brain-machine interfaces. We are also adding machine learning algorithms for heart monitoring to wearable devices.

What problems are you hoping to tackle with brain-machine interfaces?

One of the projects we are very excited about is in collaboration with Professor Richard Andersen [James G. Boswell Professor of Neuroscience and Director of the T&C Chen Brain-Machine Interface Center], who has pioneered brain-machine interfaces and has been working in this domain for many years. For patients who have a spinal cord injury, the goal is to implant electrodes in certain regions of the brain and then use decoding algorithms to predict their intention as they think about moving a robotic arm or moving a cursor on a screen.

After working on the seizure prediction systems and when the Chen Institute for Neuroscience started at Caltech, there was an opportunity to start collaborating with Richard. In discussions with him, we realized that there is no small implantable device right now. Patients are connected to large wires and computers, and it’s a system that is not mobile. They have to sit in the clinic to use it.

So, one goal was to see if we could implement the system in a tiny chip. Another was to improve its performance. The reliability and performance of the system still have a lot of room to improve. We’ve been working with Richard and his group for almost three years now, and we have made a lot of progress.

How does the brain-machine interface work?

We have penetrating electrodes that, remarkably, can pick up small electric signals as neurons fire. Neurons are constantly firing. We have two to three arrays that each have approximately 100 electrodes implanted in the brain to collect that data. The analog data then goes through amplifiers to a module where it is digitized, and the data is very noisy. It’s a huge amount of data, so the next step is a lot of pre-processing and filtering. Then we go to a feature-extraction unit, in which we use a threshold-crossing approach to determine whether a neuron has fired. In other words, if the neural activity goes above a given threshold, we count it as one spike (like a spike on a graph).

We then measure how many spikes we have in a given period. That’s essentially the traditional way of defining the feature that maps the measurements to neural activity. This firing rate over a given window is sent to a decoder. The decoder decides, for instance, what direction the patient wants to move a cursor.
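As a rough illustration of that feature-extraction step (not the actual on-chip implementation), the sketch below counts threshold crossings per electrode and bins them into firing rates that a downstream decoder could consume. The sampling rate, bin width, electrode count, and threshold rule are assumptions chosen for the example.

```python
# Rough sketch of threshold-crossing feature extraction (not the on-chip design).
# Counts how many times each electrode's filtered signal crosses its threshold,
# then bins those counts into firing rates for a downstream decoder.
import numpy as np

FS = 30_000        # assumed front-end sampling rate (Hz)
BIN_MS = 50        # assumed firing-rate bin width (milliseconds)

def detect_spikes(signal, threshold):
    """Indices where |signal| first rises above the threshold (one count per crossing)."""
    above = np.abs(signal) > threshold
    return np.flatnonzero(above & ~np.roll(above, 1))

def firing_rates(channels, fs=FS, bin_ms=BIN_MS):
    """channels: (n_electrodes, n_samples) filtered data -> (n_electrodes, n_bins) spikes/s."""
    bin_len = int(fs * bin_ms / 1000)
    n_bins = channels.shape[1] // bin_len
    rates = np.zeros((channels.shape[0], n_bins))
    for ch, signal in enumerate(channels):
        # Common heuristic: set the threshold at a multiple of the estimated noise level.
        threshold = 4.0 * np.median(np.abs(signal)) / 0.6745
        spikes = detect_spikes(signal, threshold)
        bins = np.minimum(spikes // bin_len, n_bins - 1)
        counts = np.bincount(bins, minlength=n_bins)
        rates[ch] = counts / (bin_ms / 1000.0)
    return rates

# Example with synthetic data standing in for ~100-electrode array recordings:
demo = np.random.default_rng(1).standard_normal((96, FS))   # 96 channels, 1 second
print(firing_rates(demo).shape)   # (96, 20) firing-rate bins a decoder could consume
```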

How has your approach to tackling this problem evolved over time?

It has happened very naturally. It has never been because everybody’s doing AI or machine learning. Step by step, we’ve gotten to a point where our group realized, “OK, now the best approach is to use learning algorithms.”

Where do you see the field going in 10 or 20 years? Where will the technology be?

There is still a lot to be done. We need better electrodes that are less invasive. There is also the analog front-end system, which we need to amplify, filter, and digitize the signals. It is still not at a place where we can integrate everything and create a very low-power, energy-efficient system. There’s a lot of room left to miniaturize things, to make the technology less invasive, and to make it more robust against micromovements, encapsulation issues, and electrode degradation.

Another area where more work is needed is training. At the beginning of each session, a lot of brain-machine interface systems need training. The patient needs to train the system so it can adjust to them. Our goal is to try and come up with systems that can basically learn in real time—using online learning or unsupervised learning—to adjust themselves automatically as patients use the device. That’s a big challenge.
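Conceptually, that kind of continuous adjustment could be sketched as an online update of a linear decoder, as below. This is an illustration under assumed names and dimensions, not the group’s method; it simply shows a decoder nudging its weights after every bin of neural features while the device is in use.

```python
# Minimal sketch of online decoder adaptation (illustrative, not the group's method):
# a linear decoder maps binned firing rates to cursor velocity and nudges its weights
# after every bin, so it keeps adjusting while the patient uses the device.
import numpy as np

class OnlineLinearDecoder:
    def __init__(self, n_features, n_outputs, lr=1e-3):
        self.W = np.zeros((n_outputs, n_features))   # decoder weights, updated online
        self.lr = lr                                 # learning rate for each update

    def predict(self, features):
        """features: (n_features,) firing-rate vector -> (n_outputs,) e.g. cursor velocity."""
        return self.W @ features

    def update(self, features, target):
        """One stochastic-gradient step toward the target (e.g. an inferred intended movement)."""
        error = target - self.predict(features)
        self.W += self.lr * np.outer(error, features)
        return error

# Toy usage with synthetic data standing in for a live session stream:
rng = np.random.default_rng(2)
decoder = OnlineLinearDecoder(n_features=96, n_outputs=2)
for _ in range(1000):
    rates = rng.standard_normal(96)       # one bin of firing-rate features
    intended = rng.standard_normal(2)     # stand-in for an inferred intention signal
    decoder.predict(rates)                # drive the cursor
    decoder.update(rates, intended)       # adapt continuously during use
```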

I also think it is quite important to create systems that are low cost and that people can use everywhere for a variety of tasks.

What do you think are the most difficult tasks for AI in dealing with medical applications?

I’m not a physician, but I have talked to many physicians about this. In medicine, the stakes are very high. It’s life and death, and we need very reliable algorithms if we want to rely on AI. Just as medical devices and drugs require FDA approval, algorithms are also starting to face requirements from the FDA.

There are certain applications, such as brain-machine interfaces, in which a mistake is not a huge deal. But if you want to really move toward personalized medicine and get predictions or suggestions from AI, we need to prove that the system is reliable. For that, as I mentioned before, we need huge data sets from many, many individuals. We need studies to prove that the system is robust and giving us useful information.

In what other areas might artificial intelligence–powered devices improve treatment?

There are areas where we believe it’ll make a huge difference. I have a collaboration with Professor Wei Gao [assistant professor of medical engineering, Heritage Medical Research Institute Investigator, and Ronald and JoAnne Willens Scholar], who is using wearable sensors to measure different biomarkers in sweat. He is trying to combine different biomarkers to evaluate things that are related to mental health, such as depression and anxiety. These are conditions that people with severe cases sometimes have difficulty explaining. Or maybe they’re not as open about them, or tracking them is difficult. So, definitely for issues related to mental health, this would be significant.

How eager have students been to get involved in this research?

I find that students are really interested in this domain because it is an area that they can relate to. They see the impact. In fact, I started as a more traditional electrical engineer at Caltech. I worked on high-speed data communication systems, computing systems, and so on. I have continued that research, but gradually I get more and more students who are interested in medical devices. They’re so passionate because they also find multidisciplinary research rewarding. They learn from other groups in chemistry, biology, neuroscience. So far, it has been fantastic.


Here are some of the other questions addressed in the full webinar:

  • How do you ensure privacy when it comes to patients’ data?
  • How long will it take for AI-powered implantable and wearable devices to gain FDA approval?
  • How will they get from the lab or clinic out to the broader world?
  • What other conditions might brain-machine interfaces help with?
  • How will you power and recharge implantable medical devices?
