What's the biggest timewaster in healthcare right now?

Hi all and welcome to the group. I'm sure some great discussions were had. I'm interested to know what you think is the biggest timewaster in your day - the one that stops you getting on with the getting on?

Hi Alex. Great topic. The biggest time waster I find has two parts: duplication of roles (and therefore of effort), and hunting for the most up-to-date protocol or pathway so your work isn't rejected. We want to do things efficiently and effectively, but if decisions or actions are rolled back due to 'using the wrong information', or don't have right of way (because another team delivered the outcome faster without informing you), then it all amounts to nothing.


The dreaded "block due to poor planning" or "planning on the wrong information" - so frustrating! Do you think AI has a place in solving these problems, Samuel?

Absolutely.

Intra-agency AI can pick up all the latest information and provide details of who is doing what and which decision is best given the context of the pathway or the acceptance criteria for the work. I.e. you don't want to use a different region's protocol if it has different contracts, clinical acceptance criteria or resource planning, so knowing where to look and leveraging existing knowledge works better than trying to recycle efforts.
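To make the "right region, right version" point concrete, here is a minimal sketch in Python. It assumes protocols carry region and version metadata; the `Protocol` shape and `latest_protocol` helper are purely illustrative, not any real system's API:

```python
from dataclasses import dataclass

@dataclass
class Protocol:
    name: str
    region: str    # a protocol is only valid within its own region
    version: int

def latest_protocol(protocols: list[Protocol], name: str, region: str) -> Protocol | None:
    """Return the newest version of a named protocol, restricted to the caller's region."""
    candidates = [p for p in protocols if p.name == name and p.region == region]
    return max(candidates, key=lambda p: p.version, default=None)

protocols = [
    Protocol("hip-replacement-referral", "Northern", 3),
    Protocol("hip-replacement-referral", "Southern", 5),
]
# The Southern protocol is never returned to a Northern clinician, even though it is newer.
print(latest_protocol(protocols, "hip-replacement-referral", "Northern"))  # version 3
```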

From a primary care perspective, the inbox stuff is significant - I thought this abstract was helpful in identifying some areas where AI could help.

https://academic.oup.com/jamia/article-abstract/32/6/1040/8121807?redirectedFrom=fulltext&nbd_source=campaigner&nbd=46039654811


It isn’t just primary care!

Managing multiple email accounts across different organisations (and Microsoft tenants within an organisation) is hazardous - I do my best to not miss important clinically relevant emails.

In-app (e.g. in the EPR) messaging can significantly add to this burden (especially when it is spread across multiple apps), and it makes for a very complex messaging ecosystem that can easily take over one's day. Also, it is hard to find the time to deal positively with the onslaught.

I’m not sure that AI is necessarily the best tool for this job. Certainly, there is a lot of scope to improve the design / architecture of this space before bringing in the robots! Although I do appreciate the AI-powered spam filtering in my Gmail…


I think AI can definitely help with the inbox - agentic AI is now capable of many of the tasks I would have given my EA in previous roles - and a lot faster too! What do we think the biggest risks are with this in a healthcare setting? For a clinician? For a manager? For other workforces?

My biggest current concern is that generative AI presentation is so slick that bias towards AI results seems inevitable, therefore:

  1. Who is responsible when a hallucination is not spotted - the human or the AI?
  2. Related to 1), is the human responsible if they discount the AI summary and the AI was right?

If the answer to both 1) and 2) is that the clinician is always responsible - both for not identifying the AI's error and for failing to defer to the AI when it was right - then this is a huge problem and means being a clinician carries even more medico-legal risk . . . perhaps to the extent that we won't have people willing to accept such personal risk.

And then the next concern is related: how do we train our next workforce? Already, junior doctors are using ChatGPT for differential lists. How will AI errors be picked up if our future clinicians have actually been trained BY AI?!?!

My current work-around is to encourage juniors to first produce a human, manual clinical summary, a differential diagnosis using static resources like HealthPathways, and a suggested plan . . . and only then look at what gen AI has produced.

The same will apply to me when I get around to integrating transcription into my workflow (too many barriers for me at present, from patient consent to the fact that I hate re-reading text, as touch-typing is part of my mental formulation process) . . . I will always write my impression BEFORE looking at how the AI has summarised, to reduce the risk of my immediate bias towards the magic of AI summaries :wink:

Regarding the biggest timewaster: for the population I work in (remote, rural, multiple complex comorbidities requiring care elsewhere, across multiple settings and providers - i.e. "care coordination"), it is chasing up referrals and navigating the complexities of scheduling that actually works for people travelling huge distances. Definitely some tech solutions to be explored, but they are still a long way off reducing that time at present.

Current tech solutions involve patients only being notified of appointments via a hyperlink in an SMS . . . which takes them to an online form that's near impossible to complete on a small screen . . . which is the only screen most people have . . . clicking through multiple fields with questions that bamboozle even us clinicians at times!!


These are all great answers, but we also can't overlook the non-clinical admin drag, especially in the current system. Multiple people at my former district, including clinicians, have recently had to spend hours trying to figure out how I can pick up extra shifts there - in a department I had worked in for 16 years and of which I've been the lead clinician. Our lack of reliable HR and admin processes and policies makes mountains out of molehills and blocks us from spending time on actually improving the system and caring for patients.


Triaging is very time-consuming and varies between clinicians. Whether a patient is referred to an outpatient service or to a speciality within the hospital, a clinician sits and reads the referral to determine whether the patient is suitable for the service and, if so, what level of priority they sit at. This could definitely be automated by AI with careful planning.

AI could improve this by:

  • Having a structured referral template. Referrals currently arrive in varying forms: emails, mass texts, structured referrals.
  • Being able to pick up on key words, e.g. "surgery" = high priority, "chronic" = lower priority (a minimal sketch of this idea follows below).
  • This would provide much better consistency in how referrals are triaged.
  • Clinician time would then go on seeing patients, not trawling through the documents of patients who no longer meet service criteria.
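Here is a minimal sketch of the keyword idea in Python. The keywords, weights and thresholds are illustrative assumptions, not validated clinical criteria (and, as discussed further down the thread, keyword matching alone carries real mistriage risks):

```python
# Minimal sketch of keyword-based referral triage.
# Keywords and weights are illustrative assumptions, not validated clinical criteria.
PRIORITY_KEYWORDS = {
    "surgery": 3,   # e.g. post-operative review
    "acute": 2,
    "chronic": -1,  # longer-standing problems triaged as routine
}

def triage_priority(referral_text: str) -> str:
    """Score a free-text referral and map the score to a priority band."""
    text = referral_text.lower()
    score = sum(weight for keyword, weight in PRIORITY_KEYWORDS.items() if keyword in text)
    if score >= 3:
        return "high"
    if score >= 1:
        return "medium"
    return "low"

print(triage_priority("Post-surgery wound review requested"))   # high
print(triage_priority("Chronic knee pain, routine follow-up"))  # low
```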

Also, AI note-taking in clinics would be fantastic.

Edit to add: one day this week was "stock counting", where non-clinical time was used to count all the theraband, splints, tapes, straps, pregnancy belts… I'm talking six pages of A3 lists. There has to be a better way of doing this too. It's only once a year, but it could save hours.


Hi Emily,

It’s really simple. If you sign it, you said it.

In NZ Radiology-Land we've been using increasingly accurate voice recognition dictation tools for about 15 years. We're "lucky" inasmuch as our lexicon is predictable, so what we say has been transcribed pretty well for a while.

Spoiler alert - we say things, VR transcribes something entirely different, and we sign it off.

All.
The.
Time.

So here's my trolley-car question for those who are embracing generative AI usage.

Are you, in the absence of system-level air cover, happy to accept the medico-legal responsibility for any documentation created by an LLM that leads to direct patient harm at some point in the future?

If so, you might want to touch base with the HDC and your medical indemnity provider…


Conversely, the latest AI studies suggest otherwise. One trial found that when AI worked independently to diagnose patients, it achieved 92 percent accuracy, while physicians using AI assistance were only 76 percent accurate - barely better than the 74 percent they achieved without AI.

This suggests physicians may be biased towards their own ability (the literature on how often physicians misdiagnose appears to support this).

In 2023, Google's Med-PaLM 2 AI system scored 86.5% on the US Medical Licensing Examination, and ChatGPT-4 scored 88% on the Turkish Medical Speciality Entrance exam (still far from perfect, but better than the majority of students could achieve).

Considering AI computing power has been doubling every 3.4 months and the cost per million IOs has dropped from $20 to $3 in the space of a few years, I anticipate 'human change adoption' may be the limiting factor.

This study used quite curated information fed straight into the AI - a far cry from the reality at the coal face!!! It would not perform well with the sketchy data I've got at my fingertips, let me tell you.

And good exam results do not correlate that tightly with good clinical skills. Unfortunately, we don't seem to have worked out a better way to assess our human students - or even to measure clinical performance beyond simple complication rates. Just remember, Harold Shipman appeared to be a high performer until we eventually worked out he was murdering many of his patients. AI might just do likewise…

It is funny though - just ask a good nurse, and they’ll tell you who the good/bad doctors are with frightening accuracy.


This is an interesting one - let's unpack it more. We know there are studies showing inherent bias in clinicians - we all have it, and it's almost lore amongst clinicians that when we learn of a new diagnosis, we joke about "over-seeing" it in everyone. Don't get me wrong - I believe we generally manage it relatively well. But it's there, and it's validated by great research, especially in the equity space. And we know AI has benefits that we as humans don't. How could AI and people work together - or people use AI - to get the best of both worlds? To challenge our implicit bias?

I am skeptical when it comes to the structured referral template and picking up on keywords. We will start writing referrals that contain the right words to be picked up by the AI. We already see this in HR: people adjust their CVs to give themselves a chance of getting through the first AI screening round. I don't remember where I read this, but there are even HR people editing candidates' CVs to get them through to the interview round based on what the AI tool is looking for, because otherwise they would miss out on great candidates.

Also, a focus on keywords will lead to frequent mistriaging - the example of 'surgery' as high priority and 'chronic' as low priority already raises the concern that it would not pick up on what may be quite urgent. AI could definitely support triaging, but it would need a more sophisticated algorithm behind it, and robust evidence of its performance. That evidence is woefully absent from virtually all AI tools currently proposed for healthcare.

Totally! And, though I'm aware many are substituting human connection with AI friends/partners, will humanity survive if all interactions are AI-human?

My point being, as Nathan highlighted, we know that continuity of care improves quality of care, and the foundation of that continuity is relationship, trust and rapport.

Please let us know when there's a study where AI performs as well as a senior GP in managing an undifferentiated exacerbation of chronic complex medical conditions (e.g., CHF, CKD, COPD, hepatic derangement and intermittent depression, all while contending with having been born into a rural community that has been systematically under-invested in - education, health, etc. - since childhood) . . . over years . . . (not just as a single episode of over-investigation and hospital referral) . . . then I'll start worrying about my job.

Assuming AI can't replace quality human-human relationships, my main worry is how we train the future workforce to navigate the complex dynamics of these relationships alongside their medical training. We have so little capacity for human training as it is.

Maybe . . . but like any technological medical intervention, I'll await some robust RCTs and systematic reviews to see whether an AI predictive tool, for example, would be helpful running in the background, scouring notes, listening to the consultation, and producing an alert. We've already had 'alert fatigue', so this would have to be carefully deployed, like a new drug treatment.
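On the alert-fatigue point, here is a minimal sketch of two common mitigations (a confidence threshold plus per-patient rate-limiting). The threshold, daily cap and risk scores are illustrative assumptions, not settings from any validated tool:

```python
from collections import defaultdict

# Illustrative assumptions - not validated settings.
ALERT_THRESHOLD = 0.9               # only surface high-confidence predictions
MAX_ALERTS_PER_PATIENT_PER_DAY = 1  # avoid piling repeat alerts onto one patient

alerts_today: defaultdict[str, int] = defaultdict(int)

def maybe_alert(patient_id: str, risk_score: float, message: str) -> bool:
    """Return True only if the alert is actually surfaced to the clinician."""
    if risk_score < ALERT_THRESHOLD:
        return False  # below threshold: log silently rather than interrupt
    if alerts_today[patient_id] >= MAX_ALERTS_PER_PATIENT_PER_DAY:
        return False  # rate-limited: a repeat alert adds noise, not signal
    alerts_today[patient_id] += 1
    print(f"ALERT [{patient_id}] {message} (risk {risk_score:.2f})")
    return True

maybe_alert("patient-001", 0.95, "Possible sepsis risk")  # surfaced
maybe_alert("patient-001", 0.97, "Possible sepsis risk")  # suppressed (rate limit)
maybe_alert("patient-002", 0.40, "Low-confidence flag")   # suppressed (threshold)
```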

Thanks for posting recent discussions: it was nice to read how we are all somewhat aligned in our enthusiasm mixed with skepticism :wink: Table summary from AI in Health Leadership Summit - AI in clinical decision making
