Health NZ to simplify ‘confused’ technology landscape

NEWS - eHealthNews.nz editor Rebecca McBeth


This is a companion discussion topic for the original eHealth News article:

https://www.hinz.org.nz/news/609884/

This is promising:

Health NZ will not be a “passive recipient of vendor solutions” and will be rigorous in defining what it needs in terms of technology, the chair of interim Health NZ says.

I also rather liked this comment:

“While staff on the front line are under real pressure coping with current realities, the corridors of management are cluttered with consultants, contractors and vendors hawking their wares to solve problems which they promote to meet whatever they have for sale,” he said.

Good, insightful stuff, IMO. But then in the next breath we have PwC promoting “Omnichannel Care”

… which on the surface looks enticing. Or is it reminiscent of “corridors cluttered with consultants” once more? I’m not sure. Looking around at who is promoting ‘Omnichannel’, I can’t help but feel that it’s not clinicians but Microsoft Industry Blogs, SDLC, QnomyHealth, CBRE, and so on.

What is Omnichannel care, then? HIMSS talks about it as follows:

New players that have been entering healthcare over the last few years – including big tech companies and big box retailers – have learned the same lessons, and are reshaping their care offerings toward what they’re calling an omnichannel approach, smartly combining in-person care, virtual care, care at home and care by mail.

So on the one hand we have a refreshing take on the current state of healthcare—and on the other, emphasis on packaging.

Is it just me, or does this Omnichannel patter betray a lack of insight into the bold architectural strokes that will be needed if we are to fix the foundations of our digital infrastructure?

I don’t think a Walmart approach will fix much. But perhaps that’s just me?

Dr Jo.

1 Like

Blame the consultants/contractors – it’s as easy as shooting fish in a barrel. In reality, this is a contract engagement and management issue. Contractors don’t just waltz into an organisation without invitation, and certainly should be accountable to whoever engages them. If the Chairman of the Board of what has been declared to be “the largest IT shop in NZ” really wishes to dispense with contractors (and contracts with external suppliers), then he will be faced with massive capability and capacity challenges.

1 Like

I see two key issues. One is the shape of “the problem” we are trying to solve. Or maybe that is called “systems thinking” – standing back far enough to see that what we have is not a technical problem but a socio-technical problem, which requires different lenses and solutions.

We can nail down more than that generality. The irreducible complexity of this problem (the Pae Ora Healthy Futures task) is at a minimum multi-level and fractally complex. In simpler terms, the key details at each level and each location are completely invisible to managers and policy makers at the next higher level, who keep asking why we can’t “just do X” and think we’re being lazy when in fact we are being diligent: opening and unpacking X, discovering it is a “can of worms” and, worse, that every “worm” is actually a new can of deeper worms.

That’s fractal in the data-processing sense. What’s worse is that the “fractality” (is that even a word?) is not just in the data, but in the context for the data. The meaning of the data is context-dependent, and everyone inside the silo thinks everyone outside the silo sees the same thing – but they don’t. In computing terms, up higher in the code there is a “with…” context that’s critical but easily overlooked.
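To make that “with…” context concrete, here is a tiny sketch (hypothetical silo names and unit conventions) of how the same raw number means entirely different things depending on where it came from:

```python
def to_kilograms(value: float, silo: str) -> float:
    """Reinterpret a raw 'weight' value using the context it came from."""
    units_by_silo = {"neonatal_ward": "g", "outpatients": "kg"}  # the "with..." context
    return value / 1000 if units_by_silo[silo] == "g" else value

print(to_kilograms(3200, "neonatal_ward"))  # 3.2  -> a plausible newborn
print(to_kilograms(3200, "outpatients"))    # 3200 -> an obvious data error
```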

The result shows up in many cases – for example, Cerner’s attempt to tame the US Veterans Administration’s “mess” of 103 different versions of VistA by imposing a single central version on everyone. And they act surprised that the clinicians are pushing back very strongly and it’s not working out.

We can be bolder – any top-down imposed “solution” at the technical level will fail in fractal environments. “One size” will never “fit all.” The evolution of the system has to be “bottom up”, because the necessary context-awareness is out there, not in the central corridors of power.

The US National Library of Medicine once observed, in terms of “fixing” health care in the USA, that what was needed was not a “billion dollar system” but a billion “one dollar systems”. In another metaphor, the mine workers at the coal face know what they need, and where it hurts, and where it could “obviously” be improved, but do not have the tools to do anything about it.

Putting problems on lists which get prioritized and moved up the management hierarchy, and further prioritized and filtered up several more levels, somehow always removes the original issue from the table. And that’s if you only had one culture – not an entirely separate Māori culture with a completely different concept of what’s going on and what’s needed. For that matter, the nursing profession has a very different take on “the biggest problems” than medical doctors. Or patients. Or hospitals. Each level of aggregation has its own version of variables, and facts connecting them, and narratives.

The “Walmart” solution, or “omnichannel care”, by itself hopes to succeed by taking a centrally planned vision of what should happen and imposing it even further from the hospital or strategy office – into people’s homes. The mental model is “we know what you need and here it is.”

Even a basic application of the reality of Social Determinants of Health reveals that no, you do not know from your ivory tower what person X needs. Maybe the reason they are fatigued is that they need a new washing machine, not a new medication easily ordered and delivered to their door.

It makes the wrong thing easy.

Trying to define a technical “solution” to a socio-technical problem will simply never work. What can be defined is a framework for action. We can back up one level and make local collaboration and decision making and consultation work better. Every higher level can learn to listen better and be surprised.

A different kind or class of technology is needed to facilitate such things. People need to be able to be themselves in their own niche, but probably with an AI-assisted “wrapper” around them which makes them look, at a higher scale, like something manageable.

Or maybe I have it all wrong. Looking for replies!!
Wade

2 Likes

I think that you have some pretty solid insight there, Wade.

In summary, it is all too easy to attempt to reduce the EHR ‘problem’ to enable simplistic solutions – but these will not satisfy the coal-face need.

Instead, what is needed is for the coal face to be empowered to sort out their own solutions in a way that is secure, consistent, maintains data portability, and is easy to share and scale.

In other words, we need a robust platform that takes care of the data in context (e.g. openEHR with data separation from applications) and provides an extensive, flexible, and sophisticated workflow toolset (I have yet to see one of these) that coal-face clinicians without informatics expertise can use or modify effectively for their local situations.

Feels a long way off…

Hi Wade

I think you’re mostly correct—the problems appear ‘wicked’ and have been intractably difficult. I think your reference to Cerner iteratively stuffing up their Veterans solution is apt.*

There is a danger in believing that ‘intractable’ means ‘impossible’—you tend to lose hope. Here are a few thoughts about how we might do this better. First, let’s state the problems, then a few tentative solutions. I’d suggest that we already have the solutions in our hands—but they don’t involve AI.

The problems

  • You are absolutely correct that a top-down approach is doomed to fail—but on the other hand, we need the ability to organise and co-ordinate the system. How best can this be done?
  • Our information systems (or data systems) are currently not just a mess, but a morass. How can this be sorted out?
  • People at the coal-face are increasingly dispirited (especially now that COVID-19 is showing every sign of persisting as a drain on the system). How can we help?
  • We need to give people both what they want and—even more importantly—what they need. This implies that the IT solutions must not just improve efficiency but improve the way we deal with problems, and unburden clinicians. Historically, when new medical information systems are introduced, they almost always impair clinical performance. How can we get new benefits from our IT solutions?
  • Current systems are often document-centric and denormalized, so there will, for example, be half-a-dozen conflicting records of allergies. How can these be reconciled?
  • Managers often want data that are irrelevant to clinicians. How can their disparate desires be resolved?

The solutions

I’d suggest that we already know of existing approaches that can deal with all of the above—and more. Here are a few thoughts.

  1. The principal data problem is still one of normalization—as in 3NF. Nothing has changed here since Codd, but a worrying number of people see the issue as one of ‘optimisation’ or doing things faster. They may not even really ‘get’ 3NF. But we know how to normalize (sketched after this list).
  2. We know how to fix the top-down problem. Deming worked it out in 1950. The problem here is that people don’t understand what he said, so they go to Toyota and faithfully copy Taiichi Ohno’s 14 principles (not realising that these are derived from Deming’s 14 points) or stuff up ‘Six Sigma’ because they don’t understand the need for cultural change in management. This can be challenging, but re-engineering the processes can be made easier with the right tools. Which we have.
  3. A huge clinical problem—arguably the huge clinical problem—is a dearth of problem lists, and all the good that flows from a properly-constructed problem list. A problem list should identify problems (D’Oh!) but it should also link to evidence for and against the existence of each problem, say how severe it is, identify what is being done to address the problem, persist, and be up to date (sketched after this list). This is where information technology should really shine, and currently doesn’t. (Maintaining problem lists on paper is a PITA.)
  4. The conflict between local practice and the need to have common basic terminology and functionality across divergent or even disparate areas of healthcare is tractable, provided one doesn’t try to embrace the entire, crushing banyan vine of the SNOMED CT ontology. And to our credit, we in NZ have tried to pull usable terms from the mess. But this is just half the task. The rest is to create a two-tier structure where local teams are empowered to change the labelling on the basic structure, mapping easily into familiar terminology (sketched after this list).
  5. I think that NT Cheung pretty much defined the solution to my last problem. The data should flow from properly constructed clinical systems. If a manager wants a datum, they should be able to demonstrate the clinical benefits of adding its capture.
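On point 1, here is a toy sketch of what normalization buys us, using the conflicting-allergy-records example from my list of problems. The data and layout are purely illustrative, not any vendor’s schema:

```python
# Denormalized: every document carries its own copy of the allergy list,
# so the copies drift apart (the half-a-dozen conflicting records above).
documents = [
    {"doc_id": 1, "patient_id": 42, "allergies": ["penicillin"]},
    {"doc_id": 2, "patient_id": 42, "allergies": ["penicillin", "latex"]},
    {"doc_id": 3, "patient_id": 42, "allergies": []},  # which copy is true?
]

# Normalized (3NF-style): each allergy fact depends only on the key and
# lives in exactly one place; documents merely reference the patient.
allergies = {
    (42, "penicillin"): {"recorded": "2021-03-01", "status": "active"},
    (42, "latex"):      {"recorded": "2022-07-14", "status": "active"},
}

def allergies_for(patient_id: int) -> list[str]:
    """One authoritative answer, whichever document you happen to open."""
    return [substance for (pid, substance) in allergies if pid == patient_id]

print(allergies_for(42))  # ['penicillin', 'latex']
```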
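On point 3, a minimal sketch of what a properly-constructed problem-list entry might carry; every field name is illustrative rather than drawn from any standard:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Problem:
    label: str                                         # e.g. "Type 2 diabetes"
    severity: str                                      # how severe it is
    evidence_for: list[str] = field(default_factory=list)
    evidence_against: list[str] = field(default_factory=list)
    actions: list[str] = field(default_factory=list)   # what is being done
    last_reviewed: date | None = None                  # is it up to date?

problem = Problem(
    label="Type 2 diabetes",
    severity="moderate",
    evidence_for=["HbA1c 62 mmol/mol (2024-05-01)"],
    actions=["metformin 500 mg bd", "dietitian referral"],
    last_reviewed=date(2024, 6, 1),
)
```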
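And on point 4, a sketch of the two-tier structure: a small canonical core that local teams never touch, plus a per-team labelling layer they are free to change. The two SNOMED CT codes are real concepts; the team names and local labels are invented:

```python
canonical = {
    "38341003": "Hypertensive disorder",   # curated core terminology
    "73211009": "Diabetes mellitus",
}

local_labels = {                           # each team edits only this layer
    "renal_ward":  {"38341003": "High BP", "73211009": "DM"},
    "outpatients": {"38341003": "Hypertension"},
}

def display(code: str, team: str) -> str:
    """Show the team's own label if it has one, else the canonical term."""
    return local_labels.get(team, {}).get(code, canonical[code])

print(display("38341003", "renal_ward"))   # High BP
print(display("38341003", "outpatients"))  # Hypertension
print(display("73211009", "outpatients"))  # Diabetes mellitus (fallback)
```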

A large part of the above is gathering those frustrated clinicians (not just doctors) who have some IT savvy and are already saying things like “This can be done better”, and giving them the ability to work together within a simple, basic framework that they can grow to meet their needs.

As an example of this last point, as part of my duties at my hospital, I visit pretty much every ward. Not only is there unnecessary variation around every corner, but there are clinicians struggling against the odds to organise their data. Every solution represents someone trying to re-invent the wheel, rather than improving basic solutions that feed into a common data pool.

I believe this can be done, provided the basics are done properly. What I see at present is glib solutions that don’t even acknowledge the above.

My 2c, Dr Jo.


* Cerner will never get it right, because their system is hugely denormalized, with thousands of tables. Epic has similar issues, for a very different reason—their B-tree-based data structures are intrinsically denormalized, something that seems to be poorly addressed by post-hoc InterSystems ‘solutions’.
We also know that XML is the antithesis of normalization. :)
A huge problem here is that the benefits only appear when a critical mass has been achieved, so that problems are handed over and maintained, decreasing overall workload.

1 Like

Dr Jo, what a great summary!

  • Thank you for catching my implication that this is “impossible” and hopeless. I agree totally that there is a lot that can be done, and that much could be improved if clinicians were empowered to fix things that are obviously wrong, without, of course, creating three new bugs for every bug fixed.

I like the idea of empowering the translation of global terms to terms that are meaningful locally. I suggested once that all applications should also allow users to add the equivalent of digital yellow sticky Post-it notes for hard-won knowledge, such as “ignore the wording here and use option three!”, with click-to-see who posted it, when, and why. The 200 “work-arounds” (some public, some forbidden but done anyway) required to make the vendor system fit actual local practice remain an unsolved problem. And some system developers seem to believe that clinicians open a document, work on it until it is complete, then close it – as opposed to opening it, getting paged, going to deal with the interruption, coming back cold and trying to figure out where they were. It’s like the developers never got out of their basement and shadowed clinicians for even one hour! The valuable feedback never gets back to them or has a visible impact on the delivered product.
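To make the sticky-note idea concrete, here is a minimal sketch of what such a note might look like as data, with click-to-see provenance built in; all field names are invented for illustration:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class StickyNote:
    screen_path: str     # which field or screen the note is pinned to
    text: str            # the hard-won knowledge itself
    author: str          # who posted it...
    posted: datetime     # ...and when...
    rationale: str       # ...and why, so the knowledge outlives its author

notes = [
    StickyNote(
        screen_path="orders/antibiotics/route",
        text="Ignore the wording here and use option three!",
        author="jsmith (charge nurse)",
        posted=datetime(2023, 11, 2, 14, 5),
        rationale="Options 1 and 2 map to a retired formulary entry.",
    )
]

def notes_for(screen_path: str) -> list[StickyNote]:
    """Everything pinned to this part of the screen."""
    return [n for n in notes if n.screen_path == screen_path]
```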

So yes, Deming. Summarized: everyone should listen to each other’s actual pain and then get together and fix it; in some cases the problems come from higher-level policies that are simply and mysteriously out of touch with reality.

But you said one thing: “A huge clinical problem—arguably the huge clinical problem—is a dearth of problem lists, and all the good that flows from a properly-constructed problem list.” Now, THAT is something that it seems to me Artificial Intelligence and a hybrid human-machine system could address. Natural language processing has become very powerful, and even unstructured text can, in principle, be “read” and “understood” by computers. So it’s technically possible for a computer to read through several hundred documents and put together a problem list from them – and also a problem-list squared: a list of inter-document conflicts, from allergies to whether it was the right or left leg, to whether this text clearly does not belong to this patient.

You’d never let a computer make changes without supervision, but you can imagine the computer locates issues, puts up the issue and the suggested resolution, and some human says yes or no.
Better, the computer is a rule-based expert system that can explain its reasoning, not a neural net which cannot, and it can ask “What additional rule would have let me fix things the way you see them, not the way I, the computer, saw them?” and incrementally get wiser and wiser. When the computer gets better than a human at spotting and fixing issues, maybe you let it do that.
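As a toy sketch of that “problem-list squared” idea: deliberately naive pattern-matching stands in here for real natural language processing, and nothing is applied without a human saying yes (all patterns and data are illustrative):

```python
import re

NEG = re.compile(r"no known allergy to (\w+)", re.IGNORECASE)
POS = re.compile(r"allergic to (\w+)", re.IGNORECASE)

def find_conflicts(notes: list[str]) -> set[str]:
    """Substances asserted both allergic and not-allergic across notes."""
    pos = {m.group(1).lower() for note in notes for m in POS.finditer(note)}
    neg = {m.group(1).lower() for note in notes for m in NEG.finditer(note)}
    return pos & neg

notes = [
    "Patient is allergic to penicillin.",
    "No known allergy to penicillin documented on admission.",
]

for substance in find_conflicts(notes):
    # Surface the issue and a suggested resolution; a human decides.
    print(f"CONFLICT: records disagree on {substance} allergy - please review")
```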

The obstacles to that are not really technical - they are social and legal and cultural.

But, if you stand back, this is also the widespread issue of comparing what system X in Christchurch says about the patient – even their birth date, or whether that’s “Li Bing Ying” or “Bing Ying Li” – with what 27 other systems say, in varying degrees of conflict with that, and with varying confidence that you even have the same patient.

I did a search once and found ten thousand patients in the database who had a middle name of “Von”. Highly unlikely. Very likely their last name had a space in it and was parsed wrongly by a human or computer at data-entry time. Odds are 9,950 of those were wrong. But people reacted in horror when I proposed moving the “Von” plus the space to their last name field. Absolute horror. That would break the 50 names that were right! Yep, it would. Can’t do that! We are back at the railroad switch, deciding whether to let the train kill ten people, as it is doing, or pull the lever and kill one person on the siding. It’s a fascinating problem as to what to do in such situations. The hospital’s decision was to pretend there was no problem and move on. I suspect people here would be of two minds.
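A sketch of how the “Von” cleanup could be proposed rather than imposed: the system detects the suspect pattern and queues each suggested fix for human sign-off, since roughly 50 of the 10,000 really are middle-named Von (data invented):

```python
patients = [
    {"id": 1, "first": "Greta", "middle": "Von", "last": "Trapp"},
    {"id": 2, "first": "Hans",  "middle": "Von", "last": "Braun"},
]

def propose_fixes(rows):
    """Yield suggested corrections; a human approves or rejects each one."""
    for p in rows:
        if p["middle"] == "Von":
            yield {
                "patient_id": p["id"],
                "proposed_last": f"Von {p['last']}",  # move 'Von ' to surname
                "proposed_middle": None,
            }

for fix in propose_fixes(patients):
    print(f"Patient {fix['patient_id']}: last name -> {fix['proposed_last']}? [y/n]")
```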

And you asked for room for ambiguity! How much medical judgment is lost due to EHRs forcing a choice of either A or B, not “well, leaning towards A but it could be B”? The patient’s life and the clinician’s judgment get gated by the programmer’s decision to force a diagnosis – exactly one, put it right here.

Still pondering how to empower workers at the “coal face” to actually fix tiny portions of large systems which they have to use, and to which they have to apply local work-arounds in order to get their actual job done – not the job in the mind of some developer far away in space and time.

If we could solve that problem it would be very helpful. So I’m trying to get the right mental frame around it.

I think this particular socio-technical problem isn’t really about the data architecture and degree of normalization, or fragmentation into silos called documents. I see several tangled problems related to managed evolution and growth of large complex systems.

The first is the inverse of the greater system not comprehending the smaller operation’s needs. The smaller operation almost certainly has no idea of what the impact would be, or might be, on the larger system if “just this one tiny thing here” was changed (aka “fixed”). In many systems where I’ve had a chance to look under the bonnet, it seems even the system developers had lost control of what did what, and were paranoid about anyone touching or changing anything without running a complete regression test suite on the total revised system. I managed some regulated systems, and we had a long and time-intensive process required to put anything from “development” into “test” and from test into “production”. So in your terms, Dr Jo, not only is the whole health care environment a mess and a morass, but under the covers many of the software systems in use are similarly a mess. Rapid staff turnover and poor documentation further complicate the issue.

As more systems grow larger and collide with other systems, this problem goes up exponentially. Now you have cascading system updates flowing around huge loops with long delays. People become less and less interested in making system-level changes or understanding how their own process affects others downstream. No one wants to touch something and the next thing they know 200 people have rushed down to their office to ask “what did you change?!!”

Again, I don’t think solutions are impossible, but I think the legacy approach to system validation is no longer workable and never will be again. “The cheese has moved.” And again, the answer is already known – replace spaghetti code with well-structured object-oriented code, with very tightly controlled interactions between modules so that surprise cascade effects are prevented. If that is done well enough, which is possible, then technically the change can be determined to be entirely within module A, and within that entirely within submodule A-3, and within that entirely within sub-sub-component A-3-bb8, and so long as your revised component is “plug compatible” with the old one, yes, you only need to validate that sub-sub-sub-component and you can safely swap it in without putting the whole enterprise at risk.
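As a sketch of that “plug compatible” idea (all names invented): the module boundary is an explicit contract, and swapping an implementation only requires re-validating against that contract, not regression-testing the whole enterprise:

```python
from typing import Protocol

class DoseCalculator(Protocol):            # the boundary of sub-sub-component A-3-bb8
    def dose_mg(self, weight_kg: float) -> float: ...

class OldCalculator:
    def dose_mg(self, weight_kg: float) -> float:
        return weight_kg * 10.0

class NewCalculator:                       # the proposed replacement
    def dose_mg(self, weight_kg: float) -> float:
        return round(weight_kg * 10.0, 1)  # same contract, tidier rounding

def contract_tests(calc: DoseCalculator) -> None:
    """The only validation a swap needs, if the boundary is truly tight."""
    assert calc.dose_mg(70.0) == 700.0
    assert calc.dose_mg(0.0) == 0.0

for impl in (OldCalculator(), NewCalculator()):
    contract_tests(impl)                   # both pass: the swap is safe here
print("plug compatible: contract holds for both implementations")
```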

But until that happens, product managers, regulatory authorities, barristers, and one’s own peers will remain justifiably wary of breaking the seal on the box, voiding the warranty, and tinkering with what’s inside.

That legacy model has outlived its effectiveness, however, and produced systems that are increasingly hard to modify confidently, which means they are left to diverge from changed reality under various rationalizations, and become legacy maintenance burdens that have to coexist with yet more new systems layered on top of them. The whole thing becomes brittle, arthritic, and finally collapses under its own weight.

What we need, on the other hand, is a clinician who sees that a field on the screen is too small for new values of that variable, and somehow pushes and causes the system to understand what is being suggested, validate the tiny, tiny portion of the world that’s affected, agree, and add that as a new improved feature to every other instance of that system in existence. (Although, of course, the flip side is going crazy over procedures that used to work and now suddenly, on Wednesday, they no longer work because someone updated the field locations on the screen, so the documentation for temp staff is all wrong.)

What is needed is software design and development at a sufficiently high professional standard that the system can be continually upgraded and evolved without breaking anything in the process. It needs “dynamic stability”. What we have is static stability.

Then, when someone realizes that these two tables over here need to be normalized (3NF), it can be an afternoon’s work for a junior software engineer to make the change, not an Act of Parliament.
Times a billion instances, as each change and improvement, as in the Toyota Way, reveals more things that are obsolete and need to be improved. Tiny things. Many, many tiny things.

The problem then is “who makes money?” Suddenly there are no huge projects at great expense and profitability. Suddenly we do for software systems what public health does for hospitals – make large expensive acute repairs obsolete because problems were fixed at the local level so they never metastasized.

And it has the same dilemma as preventative maintenance everywhere. If the department does a good job, they all get fired because “nothing ever breaks.”

But until we can start dissolving the blood clots in the way of system improvement, the outcome is clear, and it is not what we want. And the dissolving metaphor matters: a million small molecules being fixed one at a time, versus whole-body surgery.

As always open for counter-evidence or better mental models!
Wade

2 Likes