Is anyone using Nabla or similar note-taking helper software? What is your experience with it? If you are a clinician and are choosing not to use it, what are your reasons not to use it?
I have heard of it in use and my own GP is trialling it. Knowing what I do for work, she shared the outputs with me and discussed the advantages and disadvantages. A significant limitation I immediately noticed is that it doesn't pick up on the non-medical issues that relate to health and welfare. Our social determinants of health are critical in the community and need to be reported clearly when the consumer associates them with their health concerns. I am keen to see how this will evolve for electronic notes and AI.
I am going to try it. I was teaching a GP the other day who was using it and they found it useful.
I'm really interested to learn more about it, and alternative solutions.
Potential barriers to successful implementation for us include:
- reliability around background noise (children, babies etc.)
- reliability of recordings in a community setting, where clients are not stationary next to a microphone
- ability to pick up use of Te Reo in conversation
- complexity of multiple issues at every visit
- integration with our system/templates
If we need to spend 10 minutes editing the content to fit our systems, it'll be no use.
But the potential is exciting +++ if we can navigate these issues.
Great point Juliet - I am not familiar with how it works exactly - I think it uses the microphone of your laptop and it should be able to filter out background noise (if Zoom can do it, I am sure Nabla can). I can see they have Spanish as well as English, but Te Reo will take a while… although, maybe not… maybe some of our Te Reo experts could help here… Dan Te Whenua Walker???
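On the speech-to-text side, the open-source Whisper models give a rough feel for what is possible today - Nabla's actual stack isn't public, so this is purely illustrative. Whisper does list Māori ("mi") among its supported languages, but transcription quality for Te Reo in a noisy home setting would need proper evaluation before anyone relies on it (the file name below is hypothetical):

```python
# Illustrative only: Nabla's real speech-to-text stack is not public.
# Uses the open-source openai-whisper package (pip install openai-whisper)
# to show that a language hint can be passed to the transcriber.
import whisper

# Larger models ("medium", "large") are generally more robust to
# background noise than "base", at the cost of speed.
model = whisper.load_model("base")

# "consult.wav" is a hypothetical recording of a home visit.
# "mi" (Maori) is in Whisper's language list, but real-world accuracy
# for Te Reo would need evaluation before clinical use.
result = model.transcribe("consult.wav", language="mi")
print(result["text"])
```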
Hi Andrea, also see the recent article in NZ Doctor on Dragon Medical One (DAX), apparently approved by Te Whatu Ora.
Not sure of the uptake in general practice - Nabla may be more suited??
These tools (Nabla and DAX) can use your smartphone as a microphone/listening device.
Hey, yes, but in the community setting (Plunket) we're having our consults with families moving about the house, in the kitchen getting a toddler a snack, changing a nappy or moving to a better spot to breastfeed a baby - our staff have devices to record, but our clients may not be right by the device… I'm excited to learn more about the future potential though!! It will make such an impact on care delivery if we can focus on conversation, not documentation, while we are with whānau.
A few interesting things to unpack here. From their blog, it appears Nabla runs two LLMs/models: one summarises the content, and the other organises the content into a clinical summary. It gathers all of this from a transcript.
This suggests in your case, Becky, it could have fallen down in four places (a minimal sketch of this kind of pipeline follows the list):
- It wasn't a subject of conversation, so it wasn't transcribed (i.e. it was known but no one said it out loud)
- Someone talked about it but the voice-to-text didn't capture it
- The LLM didn't deem it important
- The clinical notes model didn't deem it important
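To make this concrete, here is a minimal sketch of a two-stage transcript-to-note pipeline of the kind described above. The prompts, model name, and note headings are assumptions for illustration, not Nabla's actual implementation:

```python
# Sketch of a two-stage pipeline: stage 1 summarises the transcript,
# stage 2 reorganises the summary into a structured clinical note.
# Prompts and model choice are illustrative, not Nabla's real system.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarise(transcript: str) -> str:
    """Stage 1: condense the raw consult transcript."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any capable chat model
        messages=[
            {"role": "system",
             "content": "Summarise this consultation transcript. Only "
                        "restate what was explicitly said; do not infer."},
            {"role": "user", "content": transcript},
        ],
    )
    return resp.choices[0].message.content

def to_clinical_note(summary: str) -> str:
    """Stage 2: organise the summary into a structured note."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Rewrite this summary as a clinical note with "
                        "headings: Presenting complaint, History, Plan."},
            {"role": "user", "content": summary},
        ],
    )
    return resp.choices[0].message.content

note = to_clinical_note(summarise("GP: What brings you in today? ..."))
print(note)
```

Each of the four failure points maps onto a stage: anything absent from the transcript can never reach the note, and each model can independently drop content it deems unimportant.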
I have put some slides together which describe some of the limitations and risks, and we are currently examining whether it is feasible to build a more local service with infrastructure under our control.
In general we see some of this becoming built into PMS-type systems over time.
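As a rough illustration of what "infrastructure under our control" could look like: an open-weights model hosted inside the organisation's own network means the transcript never leaves it. The sketch below assumes an Ollama server running locally with a model already pulled; the names are placeholders:

```python
# Sketch: call a locally hosted open-weights model so patient data
# stays on our own infrastructure. Assumes an Ollama server at
# localhost:11434 with a model already pulled; names are placeholders.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",  # any locally available open-weights model
        "prompt": "Summarise this consultation transcript:\n...",
        "stream": False,    # return a single JSON response
    },
    timeout=120,
)
print(resp.json()["response"])
```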
Improving Note taking copy.pdf (636.8 KB)
Jon
The main issue with using a GPT-wrapper like Nabla is that it relies on OpenAI, which has very fragile and complex governance.
Hi, I think this is being used in general practice quite a lot. I have concerns about the privacy factors and AI hallucinations. Really glad this has been posted.
Nabla's data protection agreement and terms seem to put the onus on the GP to check they are allowed to use it and abide by the regulations. It looks to me (non-lawyer!) like this is because the data is stored in the GP's browser and sent straight to OpenAI: https://www.nabla.com/legal-documents/ (If I were a GP about to use Nabla, I would read these documents or have my lawyer read them before use).
Te Whatu Ora on LLMs:
NAIAEAG advises that Te Whatu Ora employees and contractors:
- Must NOT:
- Enter any personal, confidential or sensitive patient or organisational data into LLMs or Generative AI tools
- Use these LLM or Generative AI tools for any clinical decision, or any personalised patient-related documentation, or for personalised advice to patients
UK MHRA on LLMs:
"While it may be difficult for LLM-based medical devices to comply with medical device requirements, they are not exempt from them, as those conditions are necessary to ensure that the device is safe and effective." https://medregs.blog.gov.uk/2023/03/03/large-language-models-and-software-as-a-medical-device/
(note - MHRA and FDA classify software as "medical devices" if intended to be used for medical purposes)
FDA:
"As of October 19, 2023, no device has been authorized that uses generative AI or artificial general intelligence (AGI) or is powered by large language models." https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-aiml-enabled-medical-devices
I think LLMs are going to be very useful in healthcare in the near future (they are amazing technology), but they must be properly evaluated and approved by regulators, as there are dangers around privacy and hallucinations. As Jon says, it might be that Te Whatu Ora is the best organisation to provide a safe and secure LLM for the NZ healthcare sector.
NB, if you haven't tried the new ChatGPT voice app then you should - it is like HAL from 2001 where you can just chat with your computer and ask any questions, etc.: https://twitter.com/OpenAI/status/1727065166188274145
Yes - fortunately there is an upcoming MPS medicolegal webinar looking at this.
Thanks for the ongoing rich discussion @jon_herries (for providing your slide deck) and @chris.paton (for the potential risks of Nabla using OpenAI and the link to Nabla's legal documents). I am unsure if I agree that Nabla Co-Pilot is considered an LLM-based "medical device", as it does not act as a CDSS (Clinical Decision Support System) or perform any clinical diagnostic or measurement function. I therefore do not see that it would require FDA or MedSafe approval - unless, of course, I'm missing something. Nabla Co-Pilot is not only SOC 2 and ISO 27001 certified, but also complies with USA HIPAA rules for the storage and transmission of PHI (Protected Health Information). I don't know what that means in the Aotearoa context, and whether a PIA (Privacy Impact Assessment) is required?
Is it reassuring that, in an interview, Nabla's co-founder Martin Raison says that "when we generate a consultation note, we limit ourselves to structuring and summarizing things that were explicitly said during the consultation. We specifically do not try to make any inference: that remains the clinician's job. In that sense, even a correct inference would be treated as a bug"?
@jon_herries it would not be core business for Te Whatu Ora (TWO) to attempt to develop a solution. IMO that is better left to industry. GPs (and many other healthcare providers) are arm's length from TWO, and tend to find their own solutions. I wonder if a workshop or open debate (including current NZ users) is required, as some of the potential risks mentioned in your slides would have strong counterarguments. Open to discuss.
@WernerP I would suggest it is probably more a Lego build than "building our own LLM from scratch". I think that data, and understanding it, is probably our core business, as is being Kaitiaki of it. The Nabla solution has a number of unknowns and/or risks, so looking at how we ensure we meet those expectations and can deliver is how I justify doing something.
Further - when we do this work, plenty of it is done with help from "industry", which of course takes many forms (not just SaaS or products). We have lots of contract development and technical expertise helping us manage our data, which is a useful alternative purchasing model because we then have some ownership and control of the solution.
@chris.paton - the best bit of the advice is that it says at the bottom to contact us (me really) to talk about use cases - this allows us to bring it to the NAIAEAG, who can give some good advice on use cases (we have had a few come through already). Because we don't operate one at the moment, it is easy to blanket-exclude the use - that might change as we find use cases/start describing acceptable use cases.
I think most countries would consider GPT-4 used for medical purposes a "medical device" - I don't really agree with this - there should be a new category of regulation imho, as they are quite different from "real" medical devices. On the data side, I think Nabla are talking about any data they store, rather than the data stored on the GP's browser (the GP's responsibility) or data sent to OpenAI (which presumably OpenAI is responsible for). Anyway - anyone thinking of using Nabla - talk to @jon_herries!
Respectfully, I don't agree that Te Whatu Ora has Kaitiaki of PHI (patient data) held by GPs, Urgent Care Clinics, private physios/optometrists, dentists, medical specialists, Dept of Corrections, etc. I don't think GPs or Urgent Care doctors will wait for a final Lego build by TWO if the upside of a low-risk solution like Nabla Co-Pilot is already changing the work-life balance for more and more NZ doctors. The risk assessment debate is very urgent since the horse has bolted. I wonder if @SamanthaMurton or other primary care doctors have any comments regarding this topic?
Further update, team. I sent a query to Nabla Co-Pilot after discussing this with @karl too. The response from Nabla:
" > Hello Dr. Pohl,
Thanks for your message and your interest. I understand your concerns about the inherent risks of LMMs. At Nabla, we donāt use OpenAIās model but a GPT-4 model from Microsoft Azure and are also working on finetuning our own LMM with our own dataset as we donāt want to rely on āsomeone elseā's model. We expect that the models from Mistral or Meta Llama will reach the accuracy and performance of GPT-4 in a few months and will switch to these open-source models then.
I hope it helps
BestLaurent
And furthermore:
> we are also working with Karl Cole on drafting a compliance doc for NZ
"we don't use OpenAI's model but a GPT-4 model from Microsoft Azure" - I'm pretty sure (happy to be corrected) that the Azure GPT-4 service uses OpenAI's GPT-4.
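That is my understanding too: the Azure OpenAI Service hosts OpenAI's models behind Microsoft's endpoints, so the underlying model is the same - what differs is the hosting, contracting, and data handling. A sketch with the openai Python SDK (endpoint, API version, key, and deployment name are placeholders):

```python
# Same model family, two routes: direct to OpenAI, or via the Azure
# OpenAI Service, which hosts OpenAI's models behind Microsoft's
# endpoints. All credentials and names below are placeholders.
from openai import OpenAI, AzureOpenAI

openai_client = OpenAI()  # OPENAI_API_KEY in the environment

azure_client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",
    api_version="2024-02-01",
    api_key="YOUR-AZURE-KEY",
)

messages = [{"role": "user", "content": "Hello"}]
r1 = openai_client.chat.completions.create(model="gpt-4", messages=messages)
# Azure addresses the model by your deployment name, not the model id.
r2 = azure_client.chat.completions.create(
    model="YOUR-GPT4-DEPLOYMENT", messages=messages
)
```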
Good to see Karl is on the case!
Any thoughts after attending the MPS webinar @mca?
Thanks @WernerP
I found it helpful to know where to find relevant guidance:
- https://privacy.org.nz/publications/guidance-resources/ai/
- https://privacy.org.nz/assets/New-order/Resources-/Publications/Guidance-resources/AI-Guidance-Resources-/AI-and-the-Information-Privacy-Principles.pdf
- https://privacy.org.nz/publications/guidance-resources/ai/generative-artificial-intelligence/
Also some key considerations:
- consent is needed if data is being used for LLM training or some other purpose
- need to do your homework to make sure external organisations that are processing your data have adequate security measures in place for transfer and/or storage (a toy redaction sketch follows the links below)
- any identifiable information needs to be correctable or deletable
- before using new technologies, perform a privacy impact assessment
https://privacy.org.nz/publications/guidance-resources/privacy-impact-assessment/
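As a toy illustration of the "do your homework before data leaves your systems" point above, obvious identifiers can be stripped before a transcript goes to any external processor. A regex pass like this is nowhere near a validated de-identification tool, and a PIA would still be needed; the patterns below are illustrative only:

```python
# Toy redaction sketch - NOT a substitute for validated
# de-identification tooling or a privacy impact assessment.
import re

def redact(transcript: str) -> str:
    # Older-format NZ NHI numbers (three letters + four digits).
    # Illustrative pattern; does not cover the newer NHI format.
    transcript = re.sub(r"\b[A-Z]{3}\d{4}\b", "[NHI]", transcript)
    # Very rough NZ-style phone number pattern.
    transcript = re.sub(r"\b0\d[\d\s-]{6,10}\b", "[PHONE]", transcript)
    return transcript

print(redact("Patient ABC1234, phone 021 555 1234, reports ..."))
```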