14 October 2024
NeuTigers, Inc.'s strategic partnership with Princeton University, led by co-founder Niraj Jha, Professor of Electrical and Computer Engineering at Princeton, integrates top-tier academic research with real-world AI applications. This collaboration drives NeuTigers to rapidly transform cutting-edge AI innovations into impactful solutions for healthcare and industrial IoT, leveraging Princeton's research strengths to stay at the forefront of Edge-AI.
The following excellent episode of "Princeton Pulse" (Princeton University's official podcast channel) helps demystify AI in healthcare, answering key questions and providing insights into its practical use and future potential.
Host:
- Heather Howard - Professor of the Practice and Director, State Health and Value Strategies (SHVS) at Princeton University
Guests:
- Niraj Jha - Professor of Electrical and Computer Engineering at Princeton University
- David Schweikert - Congressman for Arizona's First Congressional District
You can listen to the podcast by clicking on the banner below:
You can also read the full transcription below:
AI in Health Care: Promise or Peril?
SPEAKERS: Heather Howard, Congressman David Schweikert, Professor Niraj Jha
Heather Howard 00:02 Hi and welcome to the Princeton Pulse Podcast. I'm Heather Howard, professor at Princeton University and former New Jersey Commissioner of Health and Senior Services. On campus and beyond, I've dedicated my career to advancing public health. That's why I'm excited to host this podcast and shine a light on the valuable connections between health research and policy.
Our show will bring together scholars, policymakers, and other leaders to discuss today's most pressing health policy issues, domestically and globally. We'll highlight novel research at Princeton, along with partnerships aimed at improving public health and reducing health disparities.
I hope you'll listen in as we put our fingers on the pulse and examine the power and possibilities of evidence-informed health policy.
One of the hottest topics in health care and in Washington is the use of artificial intelligence, or AI. In the simplest of terms, AI is technology that enables computers and machines to perform tasks like a human might. Using large amounts of data, AI can simulate human intelligence for the purpose of solving complex problems.
Research suggests that AI could revolutionize the delivery of health care, from pinpointing cancers that are invisible to the human eye to powering wearable devices that can detect abnormalities before a medical emergency occurs. Optimally, it could help clinicians make better, faster, smarter decisions and lead to improvements in medical care.
But the use of AI is not without risk. There are profound ethical and regulatory issues at play. For example, can patients have control over how their data is used? Could AI amplify bias and discrimination? Should insurance cover AI-driven services, and what role should government play in regulating emerging technologies?
Today, we'll explore the promise and perils of incorporating AI into our health care system with two guests. In the studio here with me is Niraj Jha, an engineering professor at Princeton University, who is developing a software package that could enhance the reliability of medical diagnoses. Joining us remotely is a congressional leader on these issues, Representative David Schweikert from Arizona's first congressional district, who has introduced legislation to promote the safe and secure integration of new AI tools.
Together, we will grapple with these thorny questions. Professor Jha, Congressman Schweikert, welcome to the show.
Congressman David Schweikert 02:48 One of the things we focus on is both our demographics and our debt. The fact of the matter is, over the next 30 years, the United States is expected to pile on $116 to $120 trillion of debt. The vast majority of that is driven by health care costs. Instead of the circular firing squad conversation that keeps happening about what you are going to cut, many of us have been trying to build on a model of what we could do better, faster, cheaper. Because so often, the debate in Washington is a financing solution instead of a technology-embracing disruption.
So we've done lots of demographic work. I'm also the senior Republican on the Joint Economic Committee. I have a handful of Ph.D. economists that I'm blessed to have work for me. We've done the math in regard to obesity in America being the single biggest cost driver of health care, all the way down to some of the things that weren't in your introduction... the use of AI in billing, in clean claims, in the actual front office and back office of dealing with personnel costs. Some of that's not even on the diagnostic side.
And then the fact of the matter is, what is AI? It's the compilation of lots and lots of data sets. What are the manuals that are used by doctors? The compilation of lots and lots of data. Where I see the real ethical split is a need for security and protection in regard to using AI and synthetic biology, in the designing of new DNA strands and those things. But that's a very different world than AI reading the data coming off your wrist, or the blood glucose meter, or the thing you lick or urinate on, or those things which can take data from your body.
Heather Howard 05:09 Thank you. Professor, do you agree with that definition of AI? And then I want to get into how you've come to this. But I think the Congressman's given us a good definition to start with.
Professor Niraj Jha 05:19 Yes, essentially, that's what AI wants to do. We want to approach human intelligence and even maybe surpass human intelligence. There are many ways of getting there. The Holy Grail of AI is artificial general intelligence, which is more than just the inductive learning that we do with the data sets. We train what are called neural networks. Beyond that, we are good at curiosity. We are good at counterfactual thinking. We are good at continual lifelong learning. Those areas of AI are also maturing, slowly, not at the same rate as the inductive learning part. But AI is a vast area, and machine learning is a sub-area of AI that is driving the current trends. I think we'll look beyond machine learning. We'll look at decision-making and so on, which probably we'll talk about later.
Heather Howard 06:15 So you're a professor of electrical engineering and you teach classes on machine learning. You're going to be piloting a new class called "Smart Health Care." Doesn't that sound great? And you and your team have developed significant AI technology applications. Can you talk about what you've been working on?
Professor Niraj Jha 06:34 Sure. In the fall, I teach a course on "Introduction to Machine Learning." That's a junior level course. In the spring, for the first time, I'm introducing a course called "Smart Health Care," which essentially is an AI in health care course. And one of the topics is wearables. We look at wearables and the sensors, physiological sensors, that you have in your smartwatches and smartphones. Can I distinguish healthy status from disease status?
I'm a co-founder of a startup called NeuTigers that is commercializing what we did for Type 1 and Type 2 diabetes. We have another model that is for depression, schizophrenia, bipolar disorder. We have another model for Covid-19. That's the first step, where you distinguish healthy status from disease or some condition. The next step is differential diagnosis, where you want to now also distinguish between Covid-19 and flu, not just Covid-19 and healthy status.
And then, after you've diagnosed, you want wellbeing. The physician may know I'm depressed, but then he or she doesn't want me to go into an extreme manifestation of depression. So wellbeing becomes important. And also ideally the therapies, the interventions, should be personalized. So maybe for my depression, cognitive behavioral therapy may be good, or drug one or drug two would be good. And there's a lot of trial and error in making that decision for this patient. What is the optimal intervention? Now there are AI frameworks that can allow you to answer that question.
Heather Howard 08:13 So Congressman, how does that resonate with you?
Congressman David Schweikert 08:17 It's brilliant. In many ways, I have a personal fixation on some of the new sensor technology that has gotten so remarkable. And I think, actually, where the professor just went is that ultimately you have a holistic circle here... the quality of the sensor technology, the quality of the architecture and the data and the AI, and sort of building the ability for them to talk to each other.
Let's do something that could be in our lives very soon. Apple, a couple of months ago, got its arrhythmia detection, some of its heart monitoring in the next-generation Apple Watch, certified as a medical device. What if you had that? And the glucose meter will be over-the-counter -- actually, I think in the next week or two. Would you be willing to experiment with that? Would you be willing to do other things? The ability to have that phone, with an AI, running in the background. You're almost being diagnosed continuously. You have a medical lab walking with you, sleeping with you, those things.
Is the next generation of that hyper-personalized to find out whether I'm having a mental health issue or a physical issue, and how to jump ahead of it instead of allowing it to metastasize?
Professor Niraj Jha 09:52 That's exactly what we are doing. In addition, we want your smartphone to be your best friend. To be your best friend, it has to understand your physical state, your mental state, your emotional state. So now, with the physiological signals and the sensors that the Congressman was referring to, they're all integrated into our smartwatch and smartphone.
Five years ago, they were discrete devices, and that was not very convenient. But now, with about 10 sensors in our smartwatch, 20 sensors in our smartphone, these 30 sensors together become a very strong learner of a condition, of a disease. Alone, they are very weak learners. But now we have this technology, and we can go after any condition, any disease, and reduce costs dramatically.
If we can quickly find out what the diagnosis is, we will need more physicians to be able to go to the next step -- which is, what do you do about the therapy, or keeping me out of extreme manifestations of that disease, and so on?
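To make the professor's point about many weak sensor learners combining into one strong learner concrete, here is a minimal, illustrative sketch in Python. It is not the NeuTigers pipeline; the sensor count, the per-sensor scoring rule, and the simple probability-averaging fusion are all assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
N_SENSORS, WINDOW = 30, 256  # ~30 smartwatch + smartphone sensors, one signal window each (assumed)

def weak_sensor_score(signal):
    """A deliberately weak per-sensor 'classifier': a squashed summary statistic."""
    return 1.0 / (1.0 + np.exp(-(signal.mean() + 0.5 * signal.std() - 1.0)))

# Simulate one window of raw data per sensor for a single user.
windows = rng.normal(loc=0.8, scale=1.0, size=(N_SENSORS, WINDOW))

# Each sensor alone gives a noisy probability that the condition is present...
per_sensor = np.array([weak_sensor_score(w) for w in windows])

# ...while averaging the 30 weak opinions gives a steadier fused estimate.
fused = per_sensor.mean()
print(f"per-sensor spread: {per_sensor.std():.3f}  fused probability: {fused:.3f}")
```

In a real system each sensor would feed a trained model rather than a hand-written score, but the intuition is the same: averaging many noisy, weakly informative signals sharply reduces the variance of the final prediction.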
Heather Howard 10:56 In terms of that diagnosis, Congressman, should insurance be paying for these watches, these wearables? What do you think?
Congressman David Schweikert 11:05 I think in some ways that's sort of an obvious yes, but please understand, 5% of the population ultimately accounts for a little over 50% of all of our health care spending. There are other things that AI is doing... discovering new molecules, as we saw just a few months ago when AI discovered some new antibiotics. Now that's on the other side.
We're actually in a financial crisis, whether we want to know it or not. We're borrowing about $80,000 a second. Then there's the number of physicians per capita, particularly in a decade when 22% to 23% of the population will be 65 and up. There's a series of structural issues around us that lead us to needing the professor to succeed.
You asked the insurance question, and my answer is, give me platforms. I have to work with the brain trust in Washington to help make it legal to use the technology, particularly if the technology has great accuracy. Should it be allowed to prescribe? Should it be allowed to substitute for certain medical professionals when there's just not the ability to get an appointment, or those sorts of things in someone's life?
So how do I make us a healthier society, but when there's a desperate, desperate need to have major cost mitigation?
Heather Howard 12:46 Professor, you talked about using the data for diagnosis. Can you explain how AI and these algorithmic approaches can personalize treatments?
Professor Niraj Jha 12:56 Sure. We can ask "what if" questions. And I'll give you an example. Suppose someone is depressed. Now, there are many options possible for the intervention. Which one is the best one? I may have 30 people who had depression and who were given cognitive behavioral therapy. I may have another 30 people who were given drug one, another 30 who were given drug two. Now a new patient comes in. I know this person is depressed.
I take about one hour of physiological signal data from the smartwatch, smartphone, and the question is, which of the three interventions is the best one? So what I'll do is I'll get a digital twin of this patient, with respect to the three cohorts of data that I have, and I'll predict the output trajectory. What would happen if cognitive behavioral therapy was done to this patient? What would happen if drug one was given to this patient? And whichever is the best output trajectory, that is the intervention I'd select.
This is called counterfactual decision-making, which is not only relevant for personalizing medical decisions, but you could even, and maybe the Congressman may be interested, you can personalize policy decisions. Which bill is ideal to pass?
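To illustrate the counterfactual "digital twin" idea the professor describes, here is a minimal sketch in Python. It is not Professor Jha's actual framework: the cohort sizes, the summarized one-hour feature vector, the simulated data, and the least-squares construction of the twin are all assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def twin_weights(cohort_pre, patient_pre):
    """Weight cohort patients so their pre-treatment signals reconstruct the new patient."""
    # cohort_pre: (n_patients, n_features); patient_pre: (n_features,)
    weights, *_ = np.linalg.lstsq(cohort_pre.T, patient_pre, rcond=None)
    return weights

def predicted_trajectory(cohort_post, weights):
    """Outcome trajectory the digital twin would follow under this cohort's intervention."""
    # cohort_post: (n_patients, n_timesteps) observed post-intervention outcomes
    return weights @ cohort_post

# Three historical cohorts of 30 patients each: CBT, drug 1, drug 2 (all simulated here).
cohorts = {
    name: (rng.normal(size=(30, 12)),   # summarized features from one hour of sensor data
           rng.normal(size=(30, 8)))    # depression-score trajectory over 8 weeks
    for name in ("cbt", "drug_1", "drug_2")
}
new_patient_pre = rng.normal(size=12)

# Predict the trajectory under each intervention and pick the one that ends lowest
# (assuming a lower score means less depressed).
predictions = {
    name: predicted_trajectory(post, twin_weights(pre, new_patient_pre))
    for name, (pre, post) in cohorts.items()
}
best = min(predictions, key=lambda name: predictions[name][-1])
print("Suggested intervention:", best)
```

The design choice that matters is the middle step: instead of asking "what happened to similar patients on average," the twin is a weighted combination of cohort patients tailored to this individual's pre-treatment signals, and the "what if" trajectory is read off for each candidate intervention.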
Congressman David Schweikert 14:13 Professor, when we finish, I'll have my helpers get a contact for you. I have an article we were reading, it's a few months old, that was going down that formula. And they were adding in things like... okay, we can see the movement of someone feeling depressed, and it was in sync with the falling off of physical activity. So they were looking to see if part of the prescription is to go out for a walk, or eat certain foods, or engage in certain types of activities.
And that's the hyper-personalized data. You can keep it safe and encrypted, it's for you. But so much of our society goes to their medical professional for a pharmacological solution, instead of seeing all the options and how their body actually reacts to those options.
Professor Niraj Jha 15:14 Just to add to that... it's very interesting that this topic came up. We are developing something we call a wellbeing AI agent. The aim of this AI agent is to keep us well. That means it has to understand our states. My mental state could be I'm feeling sleepy. My physical state could be I'm actually sleeping. My emotional state could be I'm depressed or I'm happy or I'm in bliss.
So there are ways of figuring out, using these sensors that we talked about earlier, which state or which combination of states I'm in. And now the smartphone has access to all of this information, and it can recommend some therapy or maybe some interventions. For example, if you read only good news, your depression comes down. There is a Good News Network site, and reading it reduces your depression. We thought classical music would also reduce it, but it did so in only 50% of the cases. Micro-meditation, just meditating for one minute, reduces your negative emotions.
So now your smartphone can be your friend and recommend these things in real time. It only takes 60 milliseconds for these neural networks to figure out what state you're in.
Heather Howard 16:32 But do you worry, Congressman, about the sort of scaling up and large adoption of this? How are you going to get people to wear these?
Congressman David Schweikert 16:42 Okay, let's do something. It's slightly on the border of sarcasm. But you remember, a decade ago, a decade and a half ago, you and I would spend our Friday evenings in line at a place called Blockbuster Video to get a little silver disc, and the one we always wanted was never there. And didn't it feel like, within a year or two, we could go home, hit a button, and the world's movie library was in front of us? The fact of the matter is, that is the elegance of technology. It is capable of scaling at a remarkable pace.
One of the great barriers right now, just as we went through with telehealth, is, let's have a moment of brutal honesty... it's about the money. If you're an incumbent health care delivery bureaucracy with an established business model, you don't like to change how you deliver your service. Just as we saw with telehealth: for a while, telehealth was one of the most lobbied-against things in Washington, then the pandemic hit, and it turned out grandma knew how to talk to her health professional on her phone. So you had that sort of punctuated equilibrium there to prove that the lobbying was ultimately about the money.
How do we have an honest conversation about the stress of the medical professional shortage, the demographic changes in the country, the drivers of the debt, the fact that we're heading towards a time of rationing if we don't start to do things creatively? The adoption question is, "Will those who want to protect incumbent business models be more powerful than those who see the benefits to society?" And that's one of the reasons we do a podcast like this. We're trying to make it not scary, and actually sell a little bit of the optimism.
Heather Howard 18:49 And I think you share that optimism, Professor, right?
Professor Niraj Jha 18:51 Totally. I think the next five years are going to be revolutionary for smart health care.
Heather Howard 18:57 And how do you think about the adoption question?
Professor Niraj Jha 19:00 I think there are two things. One is, once some of the clinics and hospitals adopt it, it may create an aura around the technology, because it will make predictions at higher accuracy than humans. So it's difficult to just say no to that kind of technology. But a hindrance right now, a roadblock right now, is when you go to a physician and you say, my neural network is 98% accurate in determining Type 1 diabetes, they'll ask what happens in the other 2%, because maybe their patient will be hurt by the 2% that is wrong.
So there's an aspect of AI called explainability and interpretability, which is not as mature yet, but will become mature in the next few years. We have some work to do ourselves in that area, which will allow physicians to say, "Oh, this is the reason the neural network is predicting that this person has Type 1 diabetes." When that becomes popular, I think the adoption will skyrocket very quickly.
Congressman David Schweikert 20:06 I think you're brilliant on that, and the fact of the matter is, much of that already exists now. We have to make it much more accessible. You sort of asked the insurance question before. I think you need to think a bit more broadly than just your medical plan. It's one of the projects we have: seeing if there's a way to have an ACA-compliant technology health plan. Some of the financial incentives designed into the ACA, which many refer to as Obamacare, were insurance companies getting a percentage of their book. Well, if your book is more expensive, you get a higher dollar amount. We need to change some of those incentives.
But also within that sort of adoption scenario is, how do you also sell the personal benefits, the ability for it to be instant? My wife and I adopted a couple of young children, for example. Try to find time [to go to the doctor] when one has the sniffles. By the time you get to your appointment, the sniffles are gone. So there's also a lifestyle that I believe will be one of the things that brings it forward much faster. That is, we're Americans, we're always busy.
Heather Howard 21:33 We want it, and we want it now, right? Especially in health care. Yeah, well, let's talk about some of the risks and maybe how we can mitigate those risks. You've both mentioned security. It seems to me that this question of adoption depends on people being willing to trust how their data is used. How do you think about that from a policy perspective, protecting security?
Congressman David Schweikert 21:55 I need to get you to even think a little more radically. An algorithm, as it's designed, is insurable. There are algorithms out there that make certain decisions, and there are insurance policies. So often you'll get, “Well, what if the AI damages a patient?” The fact of the matter is a properly designed AI is auditable, and that's actually one of the reasons I'm very optimistic. If it has implicit bias, you can audit that. There's actually an AI to look at AI for biases. But you're also heading towards a world where we as policymakers say this new wearable is going to help provide a diagnostic, so we want to make it insurable as part of your medical insurance plan. But we're also going to create a way where it can have a liability wrap too, and that way it fits into today's health care delivery system's financial model.
Heather Howard 23:01 It seems to me that we've had a strain in health policy that's all been about giving patients control of their data, right? Do you worry about, at the same time, telling patients that they're giving away their data?
Congressman David Schweikert 23:14 They're really not giving away their data. De-identified data is used every single day. As a matter of fact, every state has a huge system, which the federal government contributes to, that I believe is dramatically underused in mining for diseases and potential demographics, and other issues. They're trying to figure out how you move to a curative. But the fact of the matter is, if the next generation of these devices also starts to have the encryption chips, an AI chip, where much of it is done resident in the phone, I would argue that the phone is dramatically safer than the paper file that's sitting at your medical professional's office right now.
Heather Howard 24:04 Professor, you've developed tools for Covid and diabetes and mental health. How do you think about the security and privacy issue?
Professor Niraj Jha 24:15 We also look at cybersecurity issues. A long time ago, 10 or maybe 12 years ago, we showed how you can hack an insulin pump, and we also provided the defense against that hacking. There are also very powerful de-anonymization algorithms that can re-identify anonymized data. So that's the danger, as the congressman said. If my data never leaves my smartphone, it is secure unless the smartphone is hacked, and there are ways of hacking smartphones now as well. So that would be the security danger. Also, if the data are in transit, for example between a smartwatch and a smartphone, and not encrypted, I can eavesdrop on that data and figure out what your health condition is. So we will have to have rules for encryption and so on. That adds to the energy that you drain from the battery. There are some trade-offs.
We've not talked about bias. The bias can come from the kind of data that we have collected. Maybe it's more male-dominated than female-dominated. We are applying what is good for males to females, and that's a bias. We have about 9,900 cognitive biases, and those biases get into our AI models as well. I get a bit upset when people say AI is biased. It's not AI that is biased. It's the humans that are biased, who are feeding AI the biased data and training it on biased data. These neural networks are very good students. If you teach them with good data, they'll learn to make good predictions. If you teach them with bad data, you will find hallucinations. A lot of language models hallucinate because they were trained with bad data.
Biases can be reduced. They should be audited. Are you taking all of these considerations into account before you can do any experiments with humans? Maybe we can have those mechanisms to make sure that these biases are made smaller, if not eliminated.
Congressman David Schweikert 26:33 Two months ago, we met with an AI company whose AI is designed to audit AI. In many ways, it's a group of statisticians looking for biases, but also, in some ways, looking for somewhat of the reverse, when certain populations are underrepresented. Because there are certain diseases that will predominate in certain ethnicities or regions, and so they're also making sure those things are properly normalized.
Heather Howard 27:10 I've always worried that this is going to entrench inequality. But you're saying that this could actually be the way to disrupt that?
Professor Niraj Jha 27:19 Yes, if you look at the 8 billion of us on this earth, we Homo sapiens, maybe 3 billion of us are well taken care of in terms of health care. Health care is not evenly available throughout this globe. I think smart health care will democratize health care, so that all 8 billion of us at least know what condition we are suffering from. And unlike many others who say that we need fewer physicians if we have more AI, I think it's the other way around. If all 8 billion of us are being well served in terms of diagnosis, then we need more physicians for the next few steps after that: therapy, wellbeing, and so on.
Heather Howard 28:03 I think, Congressman, you were nodding to this earlier, right? You're not talking about replacing health care providers. You're talking about supplementing them. Is that fair?
Congressman David Schweikert 28:11 In some ways. There are a number of health specialties that you're not going to be able to change. You need those professionals. Okay, I don't mean to go off on a non sequitur, but it's part of the thought experiment.
There are tiny sensors. You can even have one on your watch. You just had surgery, and it can pick up the tiniest movement in your body's temperature, and instantly know you have sepsis coming. You can catch that very, very early, when it is remarkably easy to deal with. When it becomes a fever that starts to be noticed by traditional monitors, you're already in trouble. You just reduced both the costs and the need for that professional at your bedside, and that was using sensors and technology.
But I wanted to also leap back. I'm from Arizona. Come with me sometime to my Navajo Nation, or parts that are quite rural. The current way we design policy almost institutionalizes different health care availability. We need to put a wire out to a chapter house in the middle of nowhere, instead of just putting a satellite dish. If that chapter house has a satellite dish, and people have the wearables to help manage their diabetes, and the ability to have that 24 hours a day, seven days a week, monitoring, in some ways, you have just democratized access.
You know, all of us on this podcast, and probably most of the people listening, are fairly well educated. We've all done fairly well in life. Let's be honest about it. Today, the biggest inequity in our society is not gender. It's not race. It's poverty. It's income. It's wealth. What happens when this is how you break through on access? Because in many of our rural poor, urban poor areas, health is the biggest driver of income inequality, not education. It's actually health.
Heather Howard 30:37 It's interesting. We're talking about bias and how the models themselves might be encoding bias. We've talked about how you can account for that. I think what you're saying now, too, is there's also this question of who benefits from AI. And you're saying, if done properly, AI can actually benefit communities that have historically been disadvantaged, assuming that we do it right.
Congressman David Schweikert 31:01 We fight over access. What happens if you could take someone who may not have been as privileged as we are in life, and they have access, 24 hours a day, seven days a week, because it's attached to them.
Heather Howard 31:15 Congressman, I want to make sure we have time to talk a little bit about regulatory issues. Who should be regulating, and at what stage? And how do we balance supporting innovation, and the disruption you've been talking about, with ensuring safety and quality? How do you, in Congress, think about your role, perhaps as a regulator? Are you standing in the way? Are you getting out of the way? How do you see your role?
Congressman David Schweikert 31:45 First, I need to explain that I'm a bit of a heretic here. If you come visit us on Capitol Hill, the armies of people you see walking up and down the hallways, they're there not for the love of their fellow man. They're there to protect their business model. They're there to protect their income or to find other types of income. The innovators, because the innovators are innovating, are not rent seekers in Congress. So the number of times we've had something that's disruptive, that could make our brothers and sisters healthier, those who will be displaced by that technology are often trying to lock in their incumbency.
There are a number of things you first have to do. How do you change the statutes that allow the technology to completely fulfill its mission, make it reimbursable, along with similar technologies, build a framework, so there's a liability wrap, just like other types of health care? But my great battle right now is less the regulation of the technology; it’s allowing the technology to exist in the marketplace.
Heather Howard 33:13 There was a lot of attention earlier this year to some Medicare Advantage insurers using algorithms in their prior authorization processes, when they're determining which types of health care services they should approve. Do you recall that? And then the Administration stepped in and said that insurance companies in Medicare can't use algorithms when they're doing prior authorizations. Was that the right call?
Congressman David Schweikert 33:42 It could have been done in reverse, where you could use AI for clean claims. Remember, what was happening with the Medicare Advantage plans is that they were using technology to high-grade their patients and therefore pursue greater reimbursements. In the MedPAC report that came out about four months ago, almost half of it is about the almost $50 billion annually of additional health care costs. Medicare Advantage was supposed to come in at 95% of fee-for-service. Today it's coming in at about 107%, so you just hit one of the things I'm going to be spending months and months on, trying to figure out what's gone wrong.
The Wall Street Journal has written a couple of major articles in just the last month with some brilliant data in them. But that's an occasion where technology should have helped us catch that years sooner, and we don't run that type of technology on ourselves.
Heather Howard 34:45 Professor, how do you think about government regulation here? Is it in the way, or is it helping to spur these developments? How do you think of it?
Professor Niraj Jha 34:54 I think some government regulations may be necessary for some areas of AI where there are a lot of hallucinations. When a language model hallucinates, it can have a lot of negative impact. But there are other areas where regulations may hinder progress. For example, smart health care is one such area where the AI models are not yet in widespread use, so if you nip it in the bud, then the technology takes another five years to develop. Even though it may exist in the labs, it doesn't get to the marketplace because it is very difficult. It takes two or three years to get it to the marketplace, and the startups may not have enough runway left to get to that milestone. So I think regulation can hurt, or it can also help.
Heather Howard 35:48 And that's what you worry about, right?
Congressman David Schweikert 35:50 Well, the professor just touched on something that hasn't even come up in this discussion. The current regulatory model in health care, MedPAC and CMS and those, in some ways stifles the investment class, those who would come to a brilliant university, to a professor who has an idea, and say, "I'd like to commercialize your product. Oh, by the way, we have no idea, when we bring it to market, how we're going to get reimbursed." We have had multiple presentations in our office of just stunning developments... breast biopsies, things that could just be wonderful, making our lives so much healthier and health care so much less expensive. The number one complaint on that side of the ledger is that they can't get venture capital because they don't know if the regulatory state is going to allow the technology to prescribe or allow it to be reimbursed.
So we actually have a design barrier for disruptions coming that would be really good for society.
Heather Howard 36:58 How can you fix that in Congress?
Congressman David Schweikert 37:01 And that's where I get my head kicked in every week. I try constantly to add language and move things, saying, "Hey, if the FDA has approved a technology, allow it to do these things." And let's have a moment of brutal honesty. I get crushed by the incumbent bureaucracies, the incumbent business models, the incumbent investors, and often the fear of my fellow colleagues in Congress who don't want to have to explain to their support base that we're allowing competition. It's back to my Blockbuster Video argument. If Blockbuster had hired enough lobbyists to slow down the Internet, they could have slowed the adoption of Netflix.
Heather Howard 37:47 Professor, maybe we end on a broader political point. You said something tantalizing earlier, which is that you and your colleagues have been working on how to make better decisions, not just in health care, but maybe even in policy. Right? Do you have tools for how Congress can make better decisions?
Professor Niraj Jha 38:04 So let me take a minute to answer and give you some background. The 2021 Nobel Prize in Economics was given to David Card. He's at UC Berkeley. He did the work when he was at Princeton -- the Card-Krueger paper. It was from 1994, and the Nobel Prize came nearly three decades later. The paper studied how New Jersey increased the minimum wage while, next door, eastern Pennsylvania did not. The traditional wisdom is that if you increase the minimum wage, then employment suffers. They found out with this lucky experiment that the impact on employment was minimal. That was unexpected, so that's a Nobel Prize.
Now, 10 years later, there's an economist at MIT called [Alberto] Abadie, and he said, "Well, I want to put this on a mathematical footing. I can collect data from units, for example, for policy, from different countries or different states. I can figure out what they did." It could be GDP data, humidity data, temperature data, any type of data. And then there is an intervention, which is a policy. A new rule was passed; for example, in the early 90s, California increased the price of a pack of cigarettes by 25 cents. So that was an intervention. Did that have an impact on lowering the number of cigarette packs sold in California? How do you know? You can't run that experiment. It's a "what if," a counterfactual experiment. Abadie's frameworks use something like linear regression. So you have a matrix, and you can figure out the optimal intervention.
What we've done is a framework called SCOUT, which looks at it from both a space point of view and time point of view. Because my ECG data from a month ago is correlated with my ECG data today. So there is a correlation among different people, and there is also a time correlation within your own data. It's a spatiotemporal problem. Now for policy, if I can get other people or other units or other states or countries that have passed similar policies, we collect that data. We form a matrix. Maybe there are three different policy options, and the question is, which one is the best for us, or New Jersey? We can form a digital twin with this counterfactual decision-making framework, and we can predict what would happen if you passed policy one versus policy two versus policy three. And whatever is the best outcome, and that depends on the metric you're interested in, we will say that that is the optimal policy.
So that framework already exists. But the condition there is that there are other units in the world from where I can get the data, who have passed similar policy. And that's the bottleneck.
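For readers who want to see the shape of this kind of counterfactual framework, here is a minimal synthetic-control-style sketch in Python, in the spirit of the Abadie approach described above. It is not the SCOUT implementation; the simulated data, the unconstrained least-squares weights, and the outcome metric are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Rows = states (units), columns = years of an outcome such as packs of cigarettes
# sold per capita. State 0 passes the policy at year 10; 20 "donor" states do not.
outcomes = rng.normal(loc=100.0, scale=5.0, size=(21, 20))
policy_year = 10

treated_pre = outcomes[0, :policy_year]     # treated state before the policy
donors_pre = outcomes[1:, :policy_year]     # donor states before the policy
donors_post = outcomes[1:, policy_year:]    # donor states after the policy

# Weight the donor states so that, combined, they track the treated state pre-policy.
weights, *_ = np.linalg.lstsq(donors_pre.T, treated_pre, rcond=None)

# The weighted donors then give the counterfactual "no policy" trajectory afterwards.
counterfactual_post = weights @ donors_post
actual_post = outcomes[0, policy_year:]

effect = actual_post - counterfactual_post  # estimated per-year effect of the policy
print("Estimated average effect of the policy:", round(float(effect.mean()), 2))
```

The bottleneck the professor mentions shows up directly here: the method only works if there are enough comparable "donor" units, other states or countries with relevant data, from which the counterfactual can be assembled.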
Heather Howard 41:09 We're back to the quality of the data.
Congressman David Schweikert 41:15 And there's obviously a bit more in a data decision-tree-type model, because then you have to actually try to figure out a way to put in local eccentricities. This area doesn't have broadband. This area does. This area has workplaces and medical offices. This place does not. There is complexity. As policymakers, one of our great sins, for those of us who get elected, is that we often think in very binary terms. It's a yes or no, instead of understanding. If I wanted to move to a wearable to help a population that has a chronic condition be healthier, you also must think of everything. Do they have access to broadband? Is there a way to capture the data? Is there a way to follow through? Is there a way to audit it? And so you have to sort of build that holistic wrap.
And that's where policymakers often fail, because we think of just what's in our jurisdiction. And that's where our complexity comes in. So often the things we must do to crash the price of health care while living healthier fall across multiple committee jurisdictions. And the fact of the matter is -- and this sort of wraps back to your regulatory question -- when I start to regulate what I think is in my jurisdiction, I may destroy the progress that's happening somewhere else. What's the greatest success of a minimally regulated technology? You have to look at the internet over the last 30 years. It was a very soft touch, and it changed the world.
Professor Niraj Jha 43:17 Just to add to that a little bit. What the congressman is referring to, we call it covariates. The data set is no longer two-dimensional. It becomes three-dimensional because there may be 100 different variables I'm interested in. When we form a digital twin, it's a 3D digital twin. It's not a 2D digital twin. Some places have broadband. Other places do not have broadband. There'll be yes or no in that location in that 3D data set. So, of course, the quality of the data set... if it's much higher, then the decisions would be much better. But it is possible to predict what is the best policy among a set of policies given good enough data.
Heather Howard 44:01 Well, I think that's a great place to end with this hope for better quality decision-making and better policies. Thank you both for joining me. This is a fascinating journey exploring the exciting promise of AI and thoughtful ways to mitigate those risks. Thank you both. That was terrific.
Thank you for listening to the Princeton Pulse Podcast, a production of Princeton University's Center for Health and Wellbeing. The show was hosted by me, Professor Heather Howard, produced by Aimee Bronfeld, and edited by Alex Brownstein. We invite you to subscribe to the Princeton Pulse Podcast on Apple Podcasts, Spotify, or wherever you enjoy your favorite podcasts.
We hope you enjoyed this podcast: AI in Health Care – Promise or Peril?