NEWS - eHealthNews.nz editor Rebecca McBeth
This is a companion discussion topic for the original eHealth News article.
In case anyone missed the article the first time around, this is the NEJM Catalyst piece submitted by Kaiser to describe their experience with the tool:
https://catalyst.nejm.org/doi/full/10.1056/CAT.23.0404
There are definite limitations to the accuracy of clinical summarization by generative AI, but appreciating those limitations also requires recognizing the limits of human clinicians' functional capacity. This is an important field of implementation and evaluation, and one in which we ought to see a whole host of studies published.
Certainly more interesting and relevant than "Oh, ChatGPT can pass X licensing exam".
Definitely lots of questions with this type of use of GenAI - and lots of potential to cause harm. It is clearly a clinical use of GenAI, and I am surprised these types of uses are so far sliding under the radar of regulatory authorities…
Any GP using a tool like this really has to be very careful - any errors or harm caused by the content produced by these tools rests solely on the shoulders of the GP, and there won't be much support from MDOs because this is not considered usual practice.
At a minimum I would insist that the tool clearly labels any note (even if reviewed by the GP) as AI generated, and preferably has a mechanism to allow the patient to review the note for accuracy.
I agree - the lack of regulation is highly problematic, meaning there are no real safeguards in place to ensure privacy/confidentiality, nor any evidence about the quality of the notes and their impact on clinical reasoning. "Kaiser Permanente are also using it" does not solve that issue, imho.
The podcast is more nuanced than the article; however, I think some of the chosen snippets misrepresent reality. For example, "I am back to being a real person-based doctor who looks at patient's eyes while I am talking to them, rather than a screen": a large part of my work on the PMS with the patient in the room is reviewing lab results and letters from specialists, reviewing and prescribing medication, showing patient information on websites, etc., i.e., all screen work that cannot be replaced by AI scribe use (I do find it helpful to share the screen with the patient and look at results together, but that's a different story).
I also wonder how much time it actually saves: it may save time on typing notes, depending on the GP's touch-typing skills, but it also adds work because you need to read through the note to check it and make adjustments where needed. This is where you can easily see automation bias creep in: the less time you spend reviewing the note, the more useful the AI scribe actually becomes. Another unresolved question is the impact of AI use on clinical reasoning/accuracy; see this recent study: https://jamanetwork.com/journals/jama/article-abstract/2812908. In the podcast, either Cole or Medlicott mentions they would not recommend it for GPs who are just starting out: why not?
I am not convinced having patients review notes for accuracy is a good solution: the responsibility for accuracy should lie with the AI tool, not with the GP or patient, for it to be actually useful in the clinical encounter. Not sure how shared responsibility would work in this space, but it's something we need to consider more carefully, rather than putting it all on GPs' shoulders.
I think AI scribes can be very useful in specific circumstances: for mental health consultations, for conversations with patients around interventions and therapies with an emphasis on providing information and gaining consent, and probably also for consults with multiple questions or issues. We should talk a lot more about how to manage the risk of harm and the impact on practice (notes becoming too long; whether writing notes supports clinical reasoning; how we address confabulation; etc.), and we have many other questions/issues in EHR use to solve that may improve our practice more than the scribe (a searchable inbox, people!) but that we are not addressing because they're not "cool".
AI scribes can be a useful tool, but they come with limitations and risks we need to manage - and at this point I am not satisfied they meet the ethical and clinical requirements for widespread use.
An article by Luke Bradford (RNZCGP) on AI in General Practice: