At the latest Fortune Brainstorm Health virtual discussion on Wednesday, experts from various parts of the medical field said that once the obstacles standing in its way are overcome, A.I. could be the key to improving patient outcomes, lowering overall costs, and reducing burnout and stress on overworked caregivers. One of the first steps, they agreed, is breaking down the barriers that prevent the collection and sharing of accurate, unbiased data.
“It’s perhaps the most important question of the day: how do we get systems to talk with each other?” said Dr. David Gruen, the chief medical officer of imaging at Merative. “[A.I.] has a broad concept of interoperability. How do we trust the data? How do we get unbiased data? How do we pull together the data that we have in our arms or in the apps on our phones into our health system’s record so that we really get a comprehensive picture? We believe that that’s going to be a huge hurdle [that’s] overcome when we convince people that this is cost-saving, data-enhancing, and outcome-improving.”
As the director of innovation for Sonoma County’s information systems department, Carolyn Staats oversees the use of technology and collaboration for the area’s hospital system, social programs, and law enforcement. Over the past few years, she’s dealt with COVID, widespread homelessness, and wildfires in her part of California, with the pandemic particularly bringing health inequalities to light.
“We just don’t have the data to capture these things. Social determinants of health are a very good example. We’re finding that the more we incorporate those sorts of social determinants of health, the better and more accurate these algorithms are,” she said. “At the beginning of COVID, we saw many instances where certain populations really were hit hard by the disease. It was really widening these disparities. It’s broken the trust between the population and our health care system.”
Coming from the academic side, Stanford associate professor of medicine Dr. Tina Hernandez Boussard said more thought needs to go into designing and implementing A.I. health tools with the patient in mind.
“When we build these A.I. systems, we have to think about who is the end user. A lot of times, from academic research areas, we build these systems that benefit the hospital system, that benefit the workflow,” Hernandez Boussard said. “What we need to do is have a greater emphasis on team diversity because when you bring the community into the design and development of these algorithms, they really have a broader view of how things can have societal impact.”
The second step concerns standards, she continued, explaining there aren’t regulated standards for A.I. “You could have something that’s 65% accurate or 99% accurate and it doesn’t matter,” she explained. “There’s no regulatory aspect of that and that’s something we really need to think about.”
In addition to thinking of patients, Gruen pointed out ways A.I. could take some of the burden off doctors and give them both the time and the headspace to focus more on treatment. For him, one of the biggest threats facing health care is clinician burnout and the shortage of providers.
“We know that primary care providers, for example, spend an inordinate amount of their time in front of the EHR [the patient’s electronic health record]. They spend more time with a typewriter than they do with their patients,” he said. “If we can use natural language processing and voice recognition, and allow providers to have face-to-face encounters rather than having their back turned, typing to enter their data, they would improve outcomes. We need to find technology to allow people to practice at the top of their game, to reduce menial tasks, to get away from things that technology can do better, cheaper, and faster. It will address a lot of the burnout that we’re facing in the trenches.”
Another big issue is getting hospitals and physicians to adopt and trust the technology. It’s only natural that some doctors, when given subpar information on how these A.I. systems work, will rely more on their own experience than on an algorithm advising them to treat a patient differently.
“It’s very hard when you give a clinician or health care system an outcome or prediction without any information to support it,” said Hernandez Boussard. “Understanding how and where you present this information and how the clinician can use that with their own biases when interpreting that information is a really big challenge. Where we really need to see it is in the community setting, in rural hospitals. How a clinician can use an A.I. tool and interpret that information in these more resource-scarce settings is really not well-known. It’s a big gap in how we move things forward.”
Staats agreed, adding that the A.I. tools need better explainability and ways to show why they’re making their recommendations: “Our clinicians and case managers, they’re saying, ‘I don’t know how you came up with this, but I definitely know more than whatever is in this system.’ And in many cases, of course they do. They have a bigger picture. Having the ability to drill down, I think, is critical so they can go, ‘Oh, this is why it’s recommending this.’”
That gets to the broader issue, as Hernandez Boussard pointed out, of the data being used. “Remember, an A.I. only learns and only predicts what we give it,” she said. “If we’re giving it information that does not represent the population we’re trying to apply it to, it will always be biased. It will never be accurate and it will never be reliable. So we need to think about what we’re feeding these algorithms to give predictions and make these assessments, and, more importantly, what we’re not getting right and which populations are missing from that.”
If that’s done properly and communication improves between the currently siloed sectors of the health care system, Gruen believes A.I. could help close the gaps in care between socioeconomic populations in the United States.
“It does have the opportunity to be the great equalizer,” he said. “If you happen to have money, insurance, resources, and access, you get better care than anywhere in the world. But on a statistical basis, we fall far below the mean. If we can get point-of-care services to those that need it most, who may not have access, we may be able to address some of those disparities and, in turn, lower costs and increase quality. That’s the power of this technology if we use it wisely.”