From the course: AI Applications in Healthcare: A Conversation with Dr. Matthew Lungren
What does AI bias in healthcare look like?
- Can you talk about what bias in training data and algorithms is, and how we can address potential biases in AI algorithms that might affect patient care?

- Absolutely. It's something that's inherent to the way these models are built. These models are initially trained as next-word predictors, and the text they're trained on to learn that task is essentially the whole internet. We've all seen the internet: there are great parts of it and not-so-great parts of it, and all of that information carries inherent biases and linkages between terms and concepts that can surface when you use the model. So our job is to find ways to mitigate anything that might be biased, particularly in a healthcare context.

One of the ways we address that is by looking for solutions and tools that improve transparency and ground the model's output in reputable sources. A good example: if I'm asking a question about a medication, I want the model to cite a source and show me where it got the answer it's providing. Again, this is that part of it, I have to learn how to work with the model so that we're better together. And part of that is being a human in the loop: I apply my judgment, look at the source the model provides, and say, "Okay, I agree with that, and that seems like a reputable source." Otherwise, if we just accept anything that comes back at us, we fall into that trap of automation bias. And if there's bias in the model, now you potentially have a biased human and a biased model together, and that's obviously not ideal.

The good news is that this is not just a challenge in healthcare. It's a challenge across industries, and we're finding many approaches to address it in a transparent way. The best example anyone can check out right now is Bing Copilot: if you set it to Precise mode, every answer it provides includes a link to its source. So you can check for yourself, "Do I believe that source, and do I believe that answer is correct?" And you can get a tremendous amount of work done in a short amount of time, work that would normally have taken you hours of hunting and pecking through the internet to find and collate an answer. I love that tool because it exposes some of the capabilities I think we'll come to expect as we use these systems more and more in our daily lives, and ultimately it helps mitigate that bias to the point where we can be better working together.
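The source-grounded, human-in-the-loop pattern described above (every answer ships with a citation, and a person verifies the source before acting on it) can be sketched in a few lines. This is a minimal illustration, not Dr. Lungren's or Bing Copilot's implementation: the corpus, URLs, and keyword-overlap retrieval are all hypothetical placeholders standing in for a real retrieval system and model API.

```python
"""Sketch of source-grounded Q&A: answers always carry a citation
so a human in the loop can verify the source before trusting them.
All sources, URLs, and the retrieval logic are illustrative placeholders."""

from dataclasses import dataclass

@dataclass
class Source:
    title: str
    url: str
    text: str

# Hypothetical "reputable sources" a grounded assistant might draw on.
CORPUS = [
    Source("Drug label: metformin",
           "https://example.org/labels/metformin",  # placeholder URL
           "metformin common side effects include nausea and diarrhea"),
    Source("Hypertension guideline",
           "https://example.org/guidelines/htn",    # placeholder URL
           "first line hypertension treatment includes thiazide diuretics"),
]

def retrieve(question: str) -> Source:
    """Toy retrieval: pick the source sharing the most words with the question.
    A real system would use a proper search or embedding index."""
    q_words = set(question.lower().split())
    return max(CORPUS, key=lambda s: len(q_words & set(s.text.split())))

def grounded_answer(question: str) -> dict:
    """Return the answer *together with* its citation, never the answer alone,
    so the clinician can inspect where the information came from."""
    src = retrieve(question)
    return {
        "answer": src.text,  # a real system would synthesize, not echo
        "source_title": src.title,
        "source_url": src.url,
    }

if __name__ == "__main__":
    result = grounded_answer("What are the side effects of metformin?")
    print(result["answer"])
    # Human in the loop: check the cited source before accepting the answer,
    # which is the guard against automation bias described above.
    print(f'Cited source: {result["source_title"]} <{result["source_url"]}>')
```

The design point is simply that the answer and its provenance travel together; the verification step stays with the human, which is the mitigation against automation bias that the conversation emphasizes.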