Totally! And, though I'm aware many are substituting human connection with AI friends/partners, will humanity survive if all interactions are AI-human?
My point being, as Nathan highlighted, we know that continuity of care improves quality of care, and the foundation of that continuity is relationship, trust, and rapport.
Please let us know if there's a study where AI performs as well as a senior GP in managing an undifferentiated exacerbation of chronic, complex medical conditions (e.g., CHF, CKD, COPD, hepatic derangement, and intermittent depression) in a patient born into a rural community that has been systematically under-invested in (e.g., education, health) since childhood... over years... not just as a single episode of over-investigation and hospital referral... then I'll start worrying about my job.
Assuming AI can't replace quality human-human relationships, my main worry is how we train the future workforce to navigate the complex dynamics of these relationships alongside our medical training. We have so little capacity for human training as it is.
Maybe... but like any technological medical intervention, I'll await some robust RCTs and systematic reviews to see whether an AI predictive tool (for example) would be helpful running in the background, scouring notes, listening to the consultation, and producing an alert. We've already seen 'alert fatigue', so this would have to be carefully deployed, like a new drug treatment.
Thanks for posting recent discussions: it was nice to read how we are all somewhat aligned in our enthusiasm mixed with skepticism. Table summary from AI in Health Leadership Summit - AI in clinical decision making