I have a fair number of friends in the Bio-pharma and Medical Device space. Many are thought leaders who spend much of their time immersed in the pursuit of new products for their companies.
There are several pitfalls they try to avoid in product development. First, they steer clear of acquiring technologies for which a need or market has not been identified. These technologies are problematic because the design has to be retrofitted to solve a problem it does not match.
Another thing they try to avoid is overreach and moonshots. As one of them remarked to me the other day, “I am not under the illusion we are going to cure cancer. It is a noble pursuit, but it is unattainable. Finding ways to stimulate the immune system to improve the quality and length of people’s lives who have cancer is an achievable goal.”
Finally, they want to be certain that they have a comprehensive understanding of the specific need or problem for which they are trying to find a solution so they can maximize the likelihood of the product being successful.
The news has recently been filled with stories on the failures of artificial intelligence and ontological software in reducing healthcare costs. These failures certainly fall under the rubric of the first two pitfalls: creating a moonshot technology and retrofitting it to a market. However, the real reason this approach has failed is that the people behind it lacked a comprehensive understanding of the need or problem they were trying to solve.
Let me expand on this last point by starting with the definition of artificial intelligence (AI) as found in the Merriam-Webster Dictionary: the capability of a machine to imitate intelligent human behavior.
Proponents of AI decision support systems claim that it approximates expert opinion with a high degree of success. However, what they fail to understand is that imitating intelligent human behavior is not a viable solution for driving down healthcare costs. In fact, it is because of "intelligent" human behavior that healthcare costs are out of control. Or, to paraphrase Albert Einstein: we are trying to solve this problem with the same thinking we used to create it.
Want proof? Four years ago, Mayo Clinic Proceedings published a landmark study that examined the percentage of time experts at the top hospitals in the country made decisions consistent with the current state of evidence-based medicine. Shockingly, they found that, under the best of circumstances, their decisions agreed with the evidence 60% of the time. That means an incorrect decision was made 40% of the time! When you consider that the United States spends over $3 trillion on healthcare per year, this is an enormously expensive problem.
So let’s do some simple math.
Proponents of AI claim it approximates expert opinion 90% of the time, and experts make truly evidence-based decisions 60% of the time. This means only 54% (90% × 60%) of AI clinical decisions are truly evidence-based. How are these technologies going to drive down healthcare costs if they are wasting money nearly half of the time on incorrect and ineffective medical decisions?
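The arithmetic here, and the cost figure cited later in the article, can be sketched in a few lines. All inputs are the article's own claims, and the assumption that wasted spend scales linearly with the share of incorrect decisions is mine, not the article's:

```python
# Back-of-the-envelope check of the article's "simple math".
# Figures are the article's stated claims, not independent data.

ai_match_rate = 0.90          # AI matches expert opinion 90% of the time
expert_evidence_rate = 0.60   # experts follow the evidence 60% of the time

# An AI decision is truly evidence-based only if it matches the expert
# AND the expert's decision was itself evidence-based.
ai_evidence_rate = ai_match_rate * expert_evidence_rate
print(f"Evidence-based AI decisions: {ai_evidence_rate:.0%}")  # 54%

# Rough upper bound on annual waste, assuming cost tracks decision share.
annual_spend = 3e12           # > $3 trillion US healthcare spend per year
wrong_decision_rate = 1 - expert_evidence_rate
potential_waste = annual_spend * wrong_decision_rate
print(f"Potential annual waste: ${potential_waste / 1e12:.1f} trillion")
```

This is a simplification; it treats every incorrect decision as equally costly, which is almost certainly not true in practice, but it reproduces the article's 54% figure and its "close to $1 trillion" estimate.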
As I said earlier, the creators of these AI products didn't really understand the need or problem they were trying to address. A deeper understanding of the problem would have led them to recognize the limits of human expertise and the need to develop a technology that assures physicians' decisions are evidence-based at least 99% of the time.
Do we need ontological software or AI to ensure that 99% of healthcare decisions are evidence-based? No. Will solving this problem reduce healthcare costs? Yes. At least in the clinical decision support space it will. After all, doing the simple math, we are losing close to $1 trillion a year on poor clinical decision-making. The need is clearly there. Someone just needs to come up with a better solution.
Peter F. Nichol, MD/PhD
This article was initially published on MedAware Systems