One of the drivers of GP burnout is dealing with what some describe as a never-ending stream of inbox items they have to apply their minds to. Sometimes this is correspondence with patients, sometimes it's new work rolling downhill from the secondary health system, but often GPs talk about it with reference to test results.
Is there space here for rules-based or machine-learning-informed systems that would triage results so that GPs only had to look at those that were of concern?
(apologies if this is treading old ground, I thought I had read about a pilot of this on NZDoc but couldn’t find anything).
I'll put some thoughts at the end about the challenges from an efficacy/implementation point of view, but from a policy perspective I'm interested in how this could be done while addressing very legitimate concerns about liability when something goes wrong.
It seems to me that the Govt could develop a rule set and then regulate so that Doctors who were using it correctly (including keeping patient records up to date, coded correctly, and histories taken) had some protection in the edge cases where something was missed.
Alternatively the Govt could set up a standard by which private sector rule sets could be approved.
The idea behind either of these is that the rule set would replicate the decision-making process a normally competent Doctor would go through when reviewing test results. And if a normally competent Doctor wouldn't have picked up a red flag, because the situation is an outlier, then it seems reasonable that they would be protected.
To be clear - the intent here is not to positively identify health issues; rather it's to positively identify where the tests and history combine in such a way that a normally competent Doctor would feel no need to follow up.
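To make the "positively identify safe-to-file" framing concrete, here's a minimal sketch of what such a rule set might look like. All of the names, fields, and codes below are illustrative assumptions on my part - not any real PMS schema or an actual clinical rule set - and the key design point is the fail-safe default: anything the rules can't positively clear goes to the GP.

```python
# Hypothetical sketch of a "safe to auto-file" triage rule set.
# Fields, analyte names, and condition codes are all illustrative.
from dataclasses import dataclass, field

@dataclass
class TestResult:
    analyte: str        # e.g. "Sodium"
    value: float
    ref_low: float      # lower bound of the reference range
    ref_high: float     # upper bound of the reference range

@dataclass
class PatientContext:
    coded_conditions: set = field(default_factory=set)  # e.g. {"E11"} (illustrative code)
    history_complete: bool = True  # triage should only run on well-coded records

def needs_gp_review(result: TestResult, ctx: PatientContext) -> bool:
    """Return True unless a normally competent GP would clearly file without follow-up."""
    # Fail safe: an incomplete or poorly coded history always goes to the GP.
    if not ctx.history_complete:
        return True
    # Any out-of-range result always goes to the GP.
    if not (result.ref_low <= result.value <= result.ref_high):
        return True
    # Example condition-specific rule: a relevant coded condition forces
    # review even when the result is in range.
    if result.analyte == "HbA1c" and "E11" in ctx.coded_conditions:
        return True
    # In range, well coded, no red-flag conditions: safe to auto-file.
    return False
```

The point of the sketch is that the rules only ever *remove* items from the GP's inbox when every check positively passes; everything else falls through to human review, which is what would make the "normally competent Doctor" liability argument tractable.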
Efficacy and practical issues:
- Can we expect to have a robust enough database of historic test results combined with comprehensive and correctly coded patient histories? (I'm working from the assumption that the current work on data cleansing/normalisation/harmonisation is completed.)
- Are there populations with poorer records, or for whom less is understood, due to historic underserving by the health system?
- What happens to liability when patient history items have been incorrectly coded? What happens when those coding errors were made by a GP three or four GPs ago because the patient has moved around a lot?
- I've specifically not called this AI (partly because I think the term is rampantly misused), but is there also a risk that it would be constantly exposed to a public scare around AI/machine learning that could see it shut down in a knee-jerk reaction?
- A GP reviewing a test result for a patient they know and ordered the test for is one thing; a GP reviewing a result ordered by someone else (eg a locum or hospital specialist) seems different somehow - is the reason for the test a necessary part of the information set?
There are others but I’m keen to hear from people thinking or working in this field.
Note - while I contract to the RNZCGP, this is not a specific project on our runway at the moment; it's more a policy question that's been hanging around in the back of my mind for a while that I've wanted to kick around with people who have more knowledge.
Tom
[edit - found the NZ doc article - https://www.nzdoctor.co.nz/article/news/summer-hiatus/jamie-and-his-band-bots-scaling-paperwork-mountain]