GPT-4 is available through Bing and will be integrated into Office products shortly. Google and Facebook are also integrating similar tools into their products, so banning would only be a short-term solution.
I think employers do have some responsibility around information security and misuse of tools like these by employees (e.g. relying on them for clinical decision-making), and should cover this technology in information security training and general professional development/support training. All employees should be aware that they should not enter any confidential information or rely on the tools for factual information or important decisions. Education could also cover how to write good prompts: there is a big difference in usefulness between someone who just uses it like Google and someone who writes good prompts and iterates, e.g. providing context, specifying the desired output format, and refining the result over several turns rather than typing a single keyword query.
I agree with Inga about the need for public awareness campaigns. There are a few major dangers for healthcare: misinformation campaigns on social media and phishing attempts targeting access to health records are likely to get very sophisticated. We also need to educate parents about the dangers of unsupervised use by children, particularly around mental health issues (although this could affect adults too).
Hi Jon, agree that “banning” is a pointless exercise - you only have to look at the continued use of WhatsApp in hospitals regardless of organisational rules…
I think the risk at present relates to the authoritative tone with which LLMs present information back to the user.
Using GPT at the individual personal level, with education, is something that can be managed using good professional judgement. As a clinician, the expectation is that you use your professional judgement when assessing sources of knowledge, and the medico-legal responsibility for how you use that information lies with you. So yes, just like Google and Wikipedia etc., if a clinician wants to use ChatGPT to augment their practice, good for them. Noting, of course, that the information that goes in may become part of the training set and has the potential to be spat back out, privacy be damned…
Where I see much greater risk is in the use of LLMs at a system level, i.e. trusted source-of-truth systems augmenting their product with generative AI. For example, let’s say a system decides to use an LLM to take information from a clinical record and summarise it. The nature of current LLMs means there is a non-zero chance that the output will contain not just irrelevant information (due to training-data bias etc.) but factually incorrect information, i.e. hallucinations.
Again, at the level of personal professional responsibility there may be some tolerance of this - take, for example, a GP who knows a patient well and, on reading the generated summary, recognises the errors and corrects them.
But if you’re dealing at the system level - i.e. the house surgeon who needs the discharge summary and inherently trusts the system’s LLM-generated output…well, you’d need to be VERY confident in that system’s integrity. And the onus of responsibility lies with the system, i.e. the organisation deploying it, not the junior doctor in this example.
I can imagine that in the near future solutions will require not just Privacy Impact Assessments but also AIIAs (AI Impact Assessments) - and AI would almost certainly fall under the category of SaMD (Software as a Medical Device), requiring regulation under the Digital Therapeutics Act.
Using GPT-4 in a non-clinical setting would, I think, be perfectly acceptable, but it probably has a long way to go before it should be considered even remotely safe for diagnosing medical conditions.