
BY KIM BELLARD
I was at the barbershop the other day and overheard a barber talking with his elderly client about when – not if – AI robots would become barbers. I am not joking.
Now, I don’t usually expect to hear conversations about tech at the barbershop, but it suggests that we’re at the point with AI where we were with the internet in the late 90s/early 2000s: people’s lives were just starting to change because of it, new companies were rushing in with ideas for how to use it, and existing companies knew they were going to have to find ways to incorporate it if they wanted to survive. There were lots of missteps and false starts, but it was clearly a tidal wave that could only be ignored at one’s peril. So it is now with AI.
I’m glad that healthcare is paying attention, probably sooner than it did with the internet. Every day, it seems, there are new findings about how various types of AI show utility, or potential utility, in healthcare, in a variety of ways. There’s a lot of informed discussion about how best to use it and where the limits will be, but as a long-time observer of our healthcare system, I think we don’t talk enough about two crucial issues. Namely:
- Who will get paid?
- Who will get sued?
Now, let me clarify that these are less blurry in some cases than in others. For example, when AI aids in drug discovery, the pharmaceutical industry can produce more drugs and earn more money; when AI assists health insurers with claims processing or prior authorizations, that translates into administrative savings that flow directly to the bottom line. No, the tricky part is using AI in healthcare delivery, such as in a doctor’s office or hospital.
Payment
There has been some cautious optimism that AI can help with diagnoses and suggested treatments. It can analyze more data, read and digest more studies, and apply more consistent logic in making such decisions. It has shown its value, for example, in diagnosing dementia, heart attacks, and cancers such as pancreatic cancer. Earlier and more accurate diagnoses should lead to better patient outcomes.
The problem is that in our healthcare system, no one gets paid – at least not to any great extent – for better outcomes or even earlier diagnoses. Arguably, if those result in less care, some healthcare professionals or facilities will get less money. Like it or not, when it comes to payment, our healthcare system is built around doing more, not doing better.
Well, maybe these faster, more accurate diagnoses will allow doctors to see more patients, increasing their throughput and therefore their revenue. Again, though, no one to my knowledge advocates that doctors see more patients; there is fairly widespread agreement that doctors are already seeing too many patients, which has had a negative impact on the doctor-patient relationship.
So if a doctor or a healthcare organization is evaluating how to apply AI – if they’re doing a cost-benefit analysis – it’s hard to see where the economic benefit is.
Well, wait: how about helping doctors with all the paperwork, the “pajama time” they spend on administrative tasks? Yes, there is some evidence that AI can help with that, but again, as Rod Tidwell told Jerry Maguire, show me the money. Giving doctors back some of their personal time might help reduce burnout and improve their quality of life – both laudable goals – but it doesn’t directly lead to more revenue. A good use of AI, but who gets paid for implementing it?
Payment will really become an issue when – as with barbers, not “if” – AI starts seeing patients directly. A single instance of an AI could see thousands or even millions of patients simultaneously, delivering those earlier and more accurate diagnoses. Maybe it will just do triage, but it will dramatically change the landscape of healthcare. But who will be paid for those visits, and how much?
Would the AI itself get paid (leading down a whole rabbit hole of personhood and licensing questions), the healthcare organization (presumably) that deployed it, or even the AI developer? In any event, if we base payment for AI on what a human doctor might receive, we would be vastly overpaying; at best, the “costs” are the marginal costs of an almost infinitesimal amount of AI time.
For all of these reasons and more, we will need a new payment paradigm.
Responsibility
Let’s admit right up front that our current healthcare accountability system is terrible. It fails to identify most errors or incompetence, fails to compensate most patients harmed by the care they receive, fails to punish most health professionals and institutions providing harmful care, and probably over-rewards some of the few patients it does help. Now throw AI into this mix.
As long as human physicians retain the final say over care, even when assisted by AI, they will likely face any resulting liability. This will quickly become problematic as it gets harder for them to understand why an AI is making a recommendation (the infamous “black box” problem).
They will quickly seek to shift the blame to AI developers, just as they would for other software or medical equipment, but that line will be difficult to draw, because the AI “learns” from its instantiation in a particular healthcare practice or organization. Neither that organization nor the AI developer will want to accept responsibility.
In the world I ultimately expect, where AI acts alone, at least to some extent, one would expect the AI to take responsibility for its actions, but that assumes the AI has assets and is an entity that can be sued, neither of which is likely to be true anytime soon.
So if anything, as things stand, AI is likely to further muddle an already confusing healthcare accountability system. Hell, that should speed up adoption, right?
For all these reasons and more, we will need a new accountability paradigm.
———-
Healthcare is supposed to be about caring for people, improving their lives by improving their health (or, at least, reducing their suffering). Most healthcare professionals and institutions pay that at least lip service, but the harsh truth is that, especially in the United States, healthcare is a business. As such, AI adoption in healthcare is going to be slow until we tackle key business issues like payment and liability.
AI will be ready for healthcare long before healthcare is ready for AI.
Kim is a former e-marketing executive at Big Blues, editor of the late and lamented Tincture.io, and now a regular THCB contributor.