@MValentine has allowed me to post these tasty examples of ChatGPT talking SNOMED and FHIR
Impressive. Does that mean we have just been made redundant?
Tools to help with coding and programming have been around for a while. But they’ve always been niche and expensive. The impressive thing here is not necessarily that ChatGPT can do it, but that what has been released as a (currently) free, general-purpose consumer chatbot can do it. The barrier to entry for AI-enabled products has just dropped precipitously.
We’re going to need humans for some time. The list of SNOMED codes needs some editing, and I think an understanding of programming will still be needed for a while in order to get useful code generated. But the direction of travel is clear. I heard a quote in a podcast recently that “the next programming language will be English”.
Just a couple of things to be aware of:
- ChatGPT isn’t always accurate; it can quite happily make stuff up and present it as the truth. It is not a search engine, it’s a natural language model. For instance, it will fabricate references complete with fake URLs.
- The information it was trained on only goes up to 2021, so any changes to SNOMED codes after that will not be reflected in ChatGPT’s responses
I bet not. I just finished watching a documentary on VisiCalc, one of the original PC-based spreadsheet applications. The guy who designed VisiCalc was a programmer. He saw the writing on the wall: the arrival of Fortran and COBOL would inevitably make programming irrelevant. So he did an MBA to move to a safer profession, like middle management at a multinational corporation. During the training he spotted a niche, and the rest is history.
If only all my NZ affiliates could be as up-to-date with respect to SNOMED releases!
If you pay (US$20 per month) you get a model trained on more recent data, so that isn’t such an issue (unless you are a cheapskate)
Disclaimer - I am no ChatGPT expert. However, I can see this type of tool becoming useful in healthcare once refined. I am wondering how to validate the quality of output from ChatGPT when variables may always be changing and it is apparently smart enough to adjust based on input. And what type of regulations would need to be considered (in NZ, for example) or created for a tool like this to be used in healthcare?
There is definitely a use case for AI in healthcare, but ChatGPT, as it stands now, is not the tool to be using in this context. It’s great as a demonstration of what is possible with AI when the data it provides is accurate, but for healthcare a more specialised AI with stricter controls should be trained and used.
For example, the first screenshot in this thread gave fake SNOMED codes for Hypoxia and for Oxygen Saturation, the SNOMED code it gave for “Mid thigh” is the code for “Disorder of endocrine system”.
That’s super sleuthing! Could be some phone numbers, for example, in there
You’re absolutely right, @Cameron_McNabb. I was just trying to see if I could take this further with some longer lists of diagnoses. I was getting somewhat goofy results, so I went back to this initial output. Turns out I had been lulled into complacency by the fact that it had previously correctly coded a single concept, myocardial infarction, for me. I then double-checked the first concept in this list, shortness of breath, and it was correct. Alas, every subsequent code in the list is wrong.
Furthermore, it’s not wrong in a consistent manner that can be easily corrected. There are a variety of errors that it makes.
What is interesting is that it is not bad at picking out the correct terms that need coding. All of the concepts it identifies have correct SCTID codes available. Where it breaks down is then finding that correct code for the concept. That may be easier to fix.
Overall though, @Cameron_McNabb 's initial warning was correct - ChatGPT will confidently present incorrect information. We will need humans in the loop for a while still.
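One way to keep that human effort manageable: don’t trust the model’s codes at all, and check every one against a terminology server before accepting it. Here’s a minimal sketch, assuming any reachable FHIR R4 terminology endpoint (tx.fhir.org is used purely as an example; point it at whatever server you have access to) and the standard CodeSystem/$lookup operation:

```python
# Hedged sketch: verify ChatGPT-suggested SNOMED CT codes against a FHIR
# terminology server instead of trusting the model. The endpoint is an
# assumption - swap in your own R4 terminology server.
import requests

TX_BASE = "https://tx.fhir.org/r4"   # example public R4 endpoint
SNOMED = "http://snomed.info/sct"

def lookup_display(code: str):
    """Return the server's preferred display for a SNOMED CT code, or None."""
    resp = requests.get(
        f"{TX_BASE}/CodeSystem/$lookup",
        params={"system": SNOMED, "code": code},
        headers={"Accept": "application/fhir+json"},
    )
    if resp.status_code != 200:
        return None  # unknown code: the server returns an OperationOutcome
    params = resp.json().get("parameter", [])
    return next((p.get("valueString") for p in params if p.get("name") == "display"), None)

# Codes as the model suggested them, paired with the term it claimed they
# encode. 267036007 and 22298006 are real; "99999999" is deliberately bogus
# to show the failure path.
suggested = {
    "267036007": "Shortness of breath",
    "22298006": "Myocardial infarction",
    "99999999": "Mid thigh",
}
for code, claimed in suggested.items():
    print(f"{code}: model said {claimed!r}, server says {lookup_display(code)!r}")
```

Anything that comes back with no display, or a display that doesn’t match the term the model claimed, goes straight to a human for review.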
What is interesting is that you can “argue” with ChatGPT and challenge it when it is wrong. Sometimes it admits its mistake and agrees with you (if you have provided the corrected fact).
You can train an AI to use only the context you give it. This article is a good primer and example of what is possible, using a completely different use case: https://www.lennysnewsletter.com/p/i-built-a-lenny-chatbot-using-gpt
I gave it a test and was able to feed the AI some clinical content that it didn’t previously know, and have it answer a question in a similar way to a chatbot (but accurately)
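For anyone curious what that looks like under the hood, here’s a rough sketch of the pattern the article describes: embed your trusted snippets, retrieve the ones closest to the question, and tell the model to answer only from those. The `embed()` and `complete()` helpers below are toy stand-ins for whatever embedding and chat-completion API you actually use - this is a sketch of the retrieval pattern, not a production recipe.

```python
# Minimal retrieval-augmented sketch of "answer only from the context I give it".
import numpy as np

def embed(text: str) -> np.ndarray:
    """Toy stand-in for a real embedding API: hash character trigrams into a
    fixed-size vector. Swap in your provider's embedding call."""
    vec = np.zeros(256)
    t = text.lower()
    for i in range(len(t) - 2):
        vec[hash(t[i:i + 3]) % 256] += 1.0
    return vec

def complete(prompt: str) -> str:
    """Stand-in: call your chat-completion API here. For the sketch we just
    return the prompt so you can see what the model would receive."""
    return prompt

# 1. Embed the trusted clinical snippets once, up front.
snippets = [
    "Dyspnoea is coded in SNOMED CT as 267036007 |Dyspnea (finding)|.",
    "Myocardial infarction is coded as 22298006 |Myocardial infarction (disorder)|.",
]
snippet_vecs = [embed(s) for s in snippets]

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def answer(question: str, k: int = 2) -> str:
    # 2. Retrieve the k snippets most similar to the question.
    q = embed(question)
    ranked = sorted(zip(snippets, snippet_vecs),
                    key=lambda sv: cosine(q, sv[1]), reverse=True)
    context = "\n".join(s for s, _ in ranked[:k])
    # 3. Constrain the model to the retrieved context only.
    prompt = ("Answer using ONLY the context below. If the answer is not in "
              "the context, say you don't know.\n\n"
              f"Context:\n{context}\n\nQuestion: {question}")
    return complete(prompt)

print(answer("What is the SNOMED CT code for shortness of breath?"))
```

The accuracy then depends on the snippets you feed it rather than on whatever the model memorised during training, which is why this approach answered the clinical question correctly.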
Presumably ChatGPT’s view of SNOMED is limited to what’s on public web pages and misses all the release file content that’s only available to licensed users. That could be another source of these anomalies. SNOMED International has agreements with some of the titans, so there’s no reason there couldn’t be a better link in future
Yup, this isn’t surprising, given that SNOMED is inaccessible without a subscription or agreement, which is why many in the space advocate for fully open standards.
If SNOMED CT is to be accessible to future machine learning packages (and thus people), they really need to address this.
SNOMED has certainly become more freely available with the Global Patient Set, SNOMED CT IPS Free Set and now IPS Terminology. And the licence terms are being relaxed to allow a SNOMED coded health record to cross a border into a non-SNOMED territory (EU use case mostly)
But yes, many would agree with you that SNOMED needs to throw off its licensing shackles and become a free global product in its entirety
It’s a pressing question right now in SNOMED and WHO circles as we attempt to bring SNOMED CT and ICD-11 into harmony
Has anyone tried Jasper as an alternative to ChatGPT? They have a higher cost subscription model, and it has different parameter settings. I’m not a SNOMED user, just curious which AI people think will make them redundant first
I think it would be really interesting to have ChatGPT mimic a GP (within a controlled environment) and feed it exactly the same problem information the patients are providing, to see how accurately it can diagnose vs a GP.
Previous studies of GP accuracy are not exactly confidence-inspiring (e.g. the link below estimates 12 million Americans are incorrectly diagnosed by their GPs each year). Add to the mix that ChatGPT potentially does not suffer from (as much) bias or personal preference, and it may make for an interesting result.
Some interesting snippets on the net:
Mehrotra recently conducted an informal study that boosted his faith in these large language models.
He and his colleagues tested ChatGPT on a number of hypothetical vignettes – the type he’s likely to ask first-year medical residents. It provided the correct diagnosis and appropriate triage recommendations about as well as doctors did, and far better than the online symptom checkers which the team tested in previous research (https://jamanetwork.com/journals/jamainternalmedicine/fullarticle/2565684).
Plus the sheer volume of medical knowledge is better suited to technology than the human brain, said Pearl, noting that medical knowledge doubles every 72 days. “Whatever you know now is only half of what is known two to three months from now.”
I’m guessing that once controlled studies prove AI is more accurate at diagnosing conditions, it will be a game changer for health (which can’t come soon enough, given the deteriorating health systems and dwindling numbers of health professionals around the world)