APIs - is there a place for FHIR-like APIs?

Hi everyone, I am working as a Product Manager for Data Sharing in the Northern Region DHBs (CMH, WDHB, ADHB and NDHB). We have stood up our API platform (Mule) and API gateways (AWS and Mule). We are also doing some work on building FHIR APIs over the top of some of our legacy systems.

As part of the development of our API operating model we need to develop guidelines on when to use FHIR and when not to. The current guideline that has been agreed is that we would build FHIR APIs for use cases where we want to expose our data externally (to non-DHB applications). We think we can develop a bit more detail around this, so I am keen to hear thoughts from the community.

We are also looking at architectures that allow us to build ‘FHIR like’ in the beginning, to ensure we don’t have to completely rework things if in the future we want to ‘FHIR up’ an existing API. Some of the feedback we have had from internal stakeholders is that FHIR adds cost and extends timelines, both for building the APIs and for the consuming clients, so we want to land on a usable approach to the use of FHIR. Looking forward to your thoughts.

‘FHIR like’ and ‘FHIR up’ are terms I’ll be using, thank you Anna Marie Scroggins (healthAlliance)

Peter Jordan Thanks for your insights. Let’s chat about this offline, as I think it is important to maintain the position of HL7 and conformance to FHIR when FHIR is the standard being used. We are wanting to develop clearer guidelines in the Northern Region to meet stakeholders’ requirements, which aren’t always for FHIR. Andrew Cave (WDHB) has some specific viewpoints on this and has noted that for some use cases FHIR will add up-front time and cost, and, where existing applications are already calling an API, re-development work in those applications to call a FHIR API instead. I agree with you, though, that you get your ROI over time with FHIR. We operate in a really fiscally strained environment, so we are trying to balance a FHIR-first approach with the practical realities of our ecosystem. Always a hard line to walk!

Anthony Benson (healthAlliance) You might be interested in this thread too.

Hi Anna Marie,

If you want a good test case for where FHIR is going to be very important in the NZ context, consider integration with FHIR APIs for accessing SNOMED CT servers (https://www.hl7.org/fhir/snomedct.html), as this is about to become a must-have in the NZ ecosystem (as Alastair Kenworthy has spelled out in the Interoperability Roadmap).
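For a concrete feel of what that integration looks like, here is a minimal sketch of building a request for the standard FHIR terminology operation `CodeSystem/$lookup` against a SNOMED CT server. The base URL is hypothetical; the `$lookup` operation and the `http://snomed.info/sct` code system URI are the ones defined in the FHIR specification.

```python
from urllib.parse import urlencode

# Hypothetical FHIR terminology server endpoint -- substitute your own.
TS_BASE = "https://terminology.example.org/fhir"

def snomed_lookup_url(sctid: str) -> str:
    """Build a CodeSystem/$lookup request URL for a SNOMED CT concept.

    $lookup is a standard FHIR terminology operation; the code system
    URI for SNOMED CT below is the one defined in the FHIR spec.
    """
    params = urlencode({
        "system": "http://snomed.info/sct",  # SNOMED CT code system URI
        "code": sctid,
    })
    return f"{TS_BASE}/CodeSystem/$lookup?{params}"

# 73211009 is the SNOMED CT concept for diabetes mellitus.
url = snomed_lookup_url("73211009")
```

An HTTP GET on that URL (with an `Accept: application/fhir+json` header) would return a FHIR `Parameters` resource carrying the concept's display name and properties.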

Another suggestion is that you learn how to make FHIR profiles and implementation guides in the PREMs and PROMs space, using the FHIR Questionnaire resource (https://www.hl7.org/fhir/questionnaire.html) and specialising it to capture specific patient responses. This is a good entry point because there are lots of people using Questionnaire right now, and the FHIR Chat community (https://chat.fhir.org/) is full of examples and community advice.
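To show what the Questionnaire/QuestionnaireResponse pairing looks like in practice, here is a minimal sketch built as plain Python dicts. The field names follow the FHIR R4 resource definitions; the `linkId` and question text are invented for illustration.

```python
# A minimal FHIR R4 Questionnaire for a PROMs survey. The linkId and
# question wording are illustrative, not from any real profile.
questionnaire = {
    "resourceType": "Questionnaire",
    "status": "active",
    "item": [
        {
            "linkId": "pain-score",
            "text": "Rate your pain today from 0 (none) to 10 (worst)",
            "type": "integer",
        }
    ],
}

def make_response(q: dict, answers: dict) -> dict:
    """Build a QuestionnaireResponse whose items mirror the
    Questionnaire's linkIds, pairing each with the patient's answer."""
    return {
        "resourceType": "QuestionnaireResponse",
        "status": "completed",
        "item": [
            {
                "linkId": item["linkId"],
                "answer": [{"valueInteger": answers[item["linkId"]]}],
            }
            for item in q["item"]
        ],
    }

response = make_response(questionnaire, {"pain-score": 4})
```

Profiling, in this context, means constraining resources like this one (fixing the items, cardinalities and answer types) so every submitted response has a predictable shape.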

cheers,
…|<

Hi Keith Duddy, thanks for that. We don’t really need use cases for FHIR; it is more about guidelines for our region on when to build APIs to the FHIR standard and when not to. In an ideal world everything would be built to FHIR, but there are current constraints on this approach.

Ah, I see. Sorry for the gratuitous advice then. You’ll need Alastair Kenworthy and co to guide you… but I wouldn’t be surprised if FHIR APIs to SNOMED will still be first out of the blocks :wink:

I’d agree with you about FHIR not being suited to user experience, Andrew Cave (WDHB). However, I can see how it will increasingly be used to interconnect systems at the middleware and workflow-coordination layers, and maybe act as a clinical data modelling approach in a greenfield application that needs to integrate with existing systems, which will increasingly be wrapped with FHIR APIs for interchange.

I don’t see anyone wanting to go back to other HL7 standards for clinical informatics now… the specs are closed, the RIM is complex, and the method of specifying by constraint is illegible to most clinicians, who will need to validate the intended content.

I hope you’re not considering “rolling your own” directly in SQL tables and programming language data structures?

cheers,
…|<

‘FHIR like’ isn’t a term I’d favour - its most likely meaning is a RESTful web service API that does not conform with the HL7 FHIR specification. If the functionality of a ‘home-baked’ API can be achieved by using FHIR, then it’s very hard to see the ROI on using anything other than FHIR. Any savings in the short term will quickly be negated by the costs of maintaining a bespoke API and subsequently converting it to FHIR (‘FHIR up’ - really?), because that’s what all your external stakeholders will require. FHIR has been developed by a huge worldwide community over a significant period of time and comes with extensive free-to-use resources, such as software libraries. Let’s eliminate the creation of #8 Fence Wire approaches to interoperability in NZ, put the NIH (not-invented-here) culture to bed and implement international standards!

@amscroggins presumably you see FHIR-like and FHIR-up as helpful positioning steps in your environment towards APIs that are fully FHIR conformant? On the other hand, Peter Jordan, is there no such thing as less than 100% FHIR conformant?

Peter Jordan Will set up a call with you sometime, as we have embedded a multi-layer architectural approach (the first use case was Patient Demographics), so it would be good to think about this in managing the dual needs.

Andrew Cave, it would also be good to go through the architecture with you, as the system APIs aren’t FHIR. We have used a process API to do most of the transformation. Yes, FHIR did mean that for the system APIs we built tables with more data elements than the specific use case required, but it does allow a dual approach. Anyway, all very good discussion and reflections.
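The layered approach described here (non-FHIR system APIs, with a process API doing the transformation) can be sketched as a mapping function. The internal field names below are invented, and the NHI identifier system URI is an assumption; the output shape follows the FHIR R4 Patient resource.

```python
# Sketch of a process-API transformation layer: a non-FHIR "system
# API" demographics record (field names invented here) is mapped to
# a FHIR R4 Patient resource. A real mapping would also cover
# addresses, contacts, ethnicity extensions, etc.

def to_fhir_patient(rec: dict) -> dict:
    """Map a hypothetical internal demographics record to a FHIR Patient."""
    return {
        "resourceType": "Patient",
        "identifier": [{
            # NHI namespace URI -- assumed, check the NZ Base IG.
            "system": "https://standards.digital.health.nz/ns/nhi-id",
            "value": rec["nhi"],
        }],
        "name": [{
            "family": rec["surname"],
            "given": [rec["first_name"]],
        }],
        # Assumes the system API already returns YYYY-MM-DD dates.
        "birthDate": rec["dob"],
    }

patient = to_fhir_patient({
    "nhi": "ZZZ0016",   # test-range NHI, illustrative only
    "surname": "Example",
    "first_name": "Anna",
    "dob": "1980-01-01",
})
```

Keeping this mapping in the process layer is what enables the dual approach: internal consumers can keep calling the system API, while external consumers get the FHIR shape.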

Wow Anna Marie Scroggins (healthAlliance), you have certainly got a lively discussion going here. I’m a bit late to reply, and I see that Peter Jordan has made many of the points that I might have contributed.

But, no, Andrew Cave (WDHB), you have made a completely valid point about performance and legacy proprietary systems… sometimes you just have to play the cards you have been dealt, and serving up the right result in milliseconds rather than seconds (by whatever means necessary) in interactive use cases is fully justified.

I’m also participating in some FHIR Talk chats at the moment, and I might pose some of the issues raised below to the FHIR-first enthusiasts.

Let’s hope we can get an nHIP architecture that doesn’t rely on real time queries to backends at DHBs to satisfy user performance requirements on data that is not changing moment by moment (it’s unlikely that some remote system will discover a new allergy while I’m in an outpatient clinic, for example). But we also don’t want to go down the MyHealthRecord Big FAT Database rathole that we are in on my home turf.

Thanks all for your considered opinions.

I spent my early career designing, standardising and building CORBA Services and applications… But despite the convenience of accessing the same object API for an in-memory object as for a remote object across the network (perhaps programmed in a different language), naive users often made the mistake of writing chatty application protocols, which worked fine when all the objects were co-located, but then suffered severe performance penalties when some objects were remote (especially if the remote object wasn’t actually running and its state had to be loaded from a database).

Good CORBA implementations often relied on clever caching, pre-loading of often used data from DB into in-memory objects, and a combination of publish/subscribe and query-based APIs. A lot of this could be pushed down a layer so that naive programmers could still get reasonable results - but forgetting that you rely on remote systems and acting like everything’s in memory always has poor results. Although a lot of that is more transparent now that we have gone back to The-Protocol-is-the-Middleware (i.e. REST web services), and it’s a bit more obvious where the time-costly operations are.
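The chatty-protocol trap described above can be made concrete by counting round trips: fetching N fields one remote call at a time versus one batched call. The latency figure below is invented; only the relative round-trip counts matter.

```python
# Illustration of the "chatty protocol" trap: per-field remote calls
# versus one batched call. Harmless when objects are co-located,
# painful once they are actually remote.

LATENCY_MS = 50  # assumed cost of one network round trip

def chatty_fetch(fields):
    """One remote call per field: N round trips."""
    trips = len(fields)
    return trips, trips * LATENCY_MS

def batched_fetch(fields):
    """All fields in a single remote call: 1 round trip."""
    return 1, LATENCY_MS

fields = ["name", "dob", "allergies", "medications", "conditions"]
chatty_trips, chatty_cost = chatty_fetch(fields)
batch_trips, batch_cost = batched_fetch(fields)
```

Five trips at 250 ms versus one trip at 50 ms; that gap is exactly what the caching and pre-loading tricks in good CORBA implementations (and in FHIR façades over slow legacy backends) are there to hide.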

Our current methodology considers several time frames (with orders of magnitude of difference), and several “binding times” at which you have to do the work to make something available to its (human or machine) client. And my perspective is still always informed by RM-ODP (http://www.rm-odp.net/), which has a timeless set of patterns and viewpoints for distributed systems design.

Hi Andrew

I probably would need to discuss this with you, and I’m not sure I’m following the options proposed here. Catch up in the New Year!