This extensive list of competencies can be summarised as:
“Mostly hits the target, largely misses the mark”.
These criteria, exhaustive (and exhausting) though they are, are both too large and too small.
As a CQI-loving clinician who has also programmed for four decades (and can create an SQL database in 3NF, then query it and apply SPC methods to the results; program in Lisp, Perl, JavaScript, Python, C, etc.; design and implement a computer language of his own specification; and is rather familiar with ontological constructs like SNOMED CT), I can see several “good bits”, a number of omissions, and a few egregious errors that, despite perhaps fitting current best practice, will cause long-term pain. I will try to elaborate.
The fairly good bits
If you list the contents of this framework, there are many desirable attributes. I’d applaud the emphasis on human factors, and the criterion “Applies quality improvement and process engineering to facilitate business and clinical transformation, measuring and analysing appropriate outcomes”. It is surely wise to be able to understand clinical concepts (although I’d doubt that a clinician of less than ten years’ experience really “gets” most conditions, even if they can parrot definitions), have a good feel for audit and the statistical nous required to make audit not just meaningful but publishable, and understand the clinical environment, and how it constrains good people at every turn. To grasp good clinical decision-making, they arguably need even more experience. It is wise to understand how targets and league tables force us to take our eyes off the ball. It is hugely desirable to be able to pop up PubMed (for example) and filter out the cruft when searching for relevant information.
It is indeed valuable to be able to characterise the software life-cycle, and many theoretical aspects of computer science. Clearly, deep insights into UI and UX are desirable (and, looking around, mostly lacking). It is also necessary to understand health care systems architecture, and why most systems have accreted, even if the intentions were to design them. Security is fundamental to this whole exercise, and also fundamentally and universally deficient. Data skills are mandatory, as is an understanding of the limitations of ML (gradient descent + backprop) masquerading as “AI”. Meticulous and wise application of sound ethics is vital, as is application of appropriate principles of that much-abused term “change-management”. Of course patients must be the core focus of our efforts. I’ll say a bit about evidence-based medicine below. I won’t say much about leadership, despite its importance, as I know others are far more capable than I am at addressing this, although most of the time I hope that I can distinguish between a good leader and an arse-covering bureaucrat!
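On that point about ML being, at bottom, gradient descent plus backpropagation: the mechanism fits in a dozen lines. The toy sketch below fits a straight line by gradient descent on a squared-error loss; the data points and learning rate are invented, and a real model simply does this at vastly greater scale.

```python
# "Learning" as iterative gradient descent on a squared-error loss.
# Fits y = w*x + b to four invented points; every value here is illustrative.

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 7.8)]   # (x, y) pairs
w, b = 0.0, 0.0        # parameters, arbitrary starting point
lr = 0.01              # learning rate

for _ in range(5000):
    # Gradients of mean squared error with respect to w and b
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= lr * grad_w   # step downhill
    b -= lr * grad_b

print(f"fitted slope {w:.2f}, intercept {b:.2f}")
```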
A box-ticking problem
But I get the impression that someone can check most of the boxes in the framework, and still be a complete failure at clinical informatics. Rather more worryingly, I am concerned that there are those who will tick very few of the above boxes, but still have a huge amount to contribute. I am especially concerned that some well-meaning person will take all of the above “criteria”, make a check-list, and then start applying them to colleagues. Because it is obviously near-impossible for anyone to demonstrate “complete competence” in all domains, some sort of threshold will be set, either within or across competencies. This will completely miss the mark, precisely because any clinical informatics enterprise needs multiple people with multiple strengths. Overall mediocrity is likely to be more harmful than the combination of brilliance at some aspects, abysmal ignorance in other areas, and willingness to co-operate and learn.
Big defects
Above, I said the criteria are also too small. There are sentinel defects. To me, the fundamental deficiencies that stick out like a sore thumb include:
- A complete lack of reference to Bayesian methods (a small worked example follows this list). This is so very 20th century;
- Failure to mention the implications of Pearl’s ladder of causality (heck, it’s only 2 decades old);
- A naïve take on “levels of evidence” (more on this below);
- A failure to emphasise the need to understand common-cause variation, surely the lynchpin of statistical quality control (a control-chart sketch also follows this list);
- Failure to mention the all-important concept of database normalization (a small example of which follows too). It’s my belief that if you don’t understand (“grok”) this, you shouldn’t be allowed to touch databases, let alone design them;
- A naïve take on data security, including failure to emphasise the centrality of getting every participant on board in the cause of security, the importance of social engineering in breaches, and how structural security must be designed in from the bottom up (and never is). Kerckhoffs’ law doesn’t even get a mention. And so on.
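To make the first omission concrete, here is a minimal sketch of the kind of Bayesian reasoning the framework never asks for: updating the probability of disease after a positive test. The prevalence, sensitivity and specificity figures are invented purely for illustration.

```python
# Bayes' theorem applied to a diagnostic test; all numbers are invented.
prevalence = 0.01    # prior probability of disease, P(D)
sensitivity = 0.90   # P(test positive | disease)
specificity = 0.95   # P(test negative | no disease)

# Total probability of a positive test
p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)

# Posterior probability of disease given a positive test
p_disease_given_positive = sensitivity * prevalence / p_positive
print(f"P(disease | positive test) = {p_disease_given_positive:.1%}")  # roughly 15%
```

Even a “good” test leaves the posterior at around 15% when the prior is 1%, which is exactly the sort of joined-up reasoning a box-ticking view of statistics obscures.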
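Similarly, common-cause variation is easiest to grasp on a control chart. The sketch below uses a Shewhart c-chart for count data, with limits at the centre line ± 3√centre; the monthly infection counts are made up, and a real chart would of course need more care.

```python
# Shewhart c-chart for count data: points inside the 3-sigma limits reflect
# common-cause variation and should be left alone; points outside suggest a
# special cause worth investigating. The monthly counts are invented.
from math import sqrt
from statistics import mean

monthly_infections = [12, 9, 11, 14, 10, 13, 8, 11, 12, 25, 10, 11]

centre = mean(monthly_infections)           # centre line
ucl = centre + 3 * sqrt(centre)             # upper control limit
lcl = max(centre - 3 * sqrt(centre), 0.0)   # lower control limit, floored at zero

for month, count in enumerate(monthly_infections, start=1):
    verdict = "special cause?" if not (lcl <= count <= ucl) else "common cause"
    print(f"month {month:2d}: {count:3d}  {verdict}")
```

The point is that only the month-10 spike earns an investigation; reacting to every wobble within the limits is tampering.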
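And since normalization goes unmentioned, here is a very small example of why it matters, using SQLite from Python; the table and column names are mine, not the framework’s.

```python
# Normalised design in miniature: demographics live in one table, encounters in
# another, so a change of address is one UPDATE rather than a trawl through
# every encounter row. Table and column names are purely illustrative.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE patient (
        patient_id INTEGER PRIMARY KEY,
        name       TEXT NOT NULL,
        postcode   TEXT NOT NULL              -- stored once, here only
    );
    CREATE TABLE encounter (
        encounter_id INTEGER PRIMARY KEY,
        patient_id   INTEGER NOT NULL REFERENCES patient(patient_id),
        seen_on      TEXT NOT NULL            -- no repeated demographics
    );
""")
con.execute("INSERT INTO patient VALUES (1, 'A Patient', 'AB1 2CD')")
con.executemany("INSERT INTO encounter VALUES (?, 1, ?)",
                [(1, '2024-01-05'), (2, '2024-03-12')])

# The patient moves house: one row changes, and every encounter stays consistent.
con.execute("UPDATE patient SET postcode = 'EF3 4GH' WHERE patient_id = 1")
for row in con.execute("""SELECT e.encounter_id, e.seen_on, p.postcode
                          FROM encounter AS e JOIN patient AS p USING (patient_id)"""):
    print(row)
```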
Not vaguely future proof
Principles and frameworks will never be designed to cut the shackles of current wisdom, but should always be forward-looking and somewhat edgy. These aren’t. They effectively espouse mediocrity, as evidenced by their emphasis on “best practice”. Let me use a clinical example to highlight this point. In the management of children with cystic fibrosis in the 1950s, “best practice” produced a median survival from birth of just 8 months. One centre claimed 10-year survival, and an entire measurement network was created to disprove their claims, which, however, turned out to be true. By the time everyone else had cranked their survival up to 10 years, the outlying centre was achieving 20 years. And so on. Principles should not try to be exhaustive, but should be aspirational, and encourage:
- Widespread sharing of new things that work, in contrast to cherished but staid “best practice”;
- Good measurement principles;
- A healthy community of practitioners.
I am very concerned that the competency framework will achieve precisely the opposite. It strikes me as having a very concrete focus on individual competencies, rather than on the power of people co-operating as a group.
Backward-looking
There are also things that I see as not so much backward-looking, as frankly wrong. These include:
- “Hierarchies of evidence”. The current “best practice” (see illustration above) EBM approach to levels of evidence is an anachronism, a band-aid for the defects in frequentist statistics. Bayes allows us to join up information, and this is what we should be doing, and espousing!
- Section 2.1 is not joined-up. It presents fragmented ideas like “Discuss the range of health information systems” without a clear feel for larger structures. I also don’t know what BLMN means. Possibly BPMN?
- Section 2.4 fails to convey the utter chaos and representational inadequacy present in all current “interoperability”, including FHIR.
- There is complete absence of the core concept of keeping things simple. The entire framework is in fact a slap in the face of minimalism. Yet one of the core issues with almost all current software is unbounded growth, often related to poor initial decisions, and the consequent bad architecture that breeds more badness. This is the central, largely unacknowledged problem in modern IT.
- Where is the central importance of a common data dictionary mentioned? Surely this should be right at the start?
- Software error is hardly given a nod. The word “error” doesn’t even appear in the entire document, yet it should be a major topic (e.g. a “2.9 Software Error” section).
- Where is test-driven development (TDD)? A small sketch of the idea follows this list.
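On that last point, here is the idea in miniature: the tests are written first and fail until the code earns the pass. The function, its thresholds and the test values are all invented for illustration, not taken from any guidance.

```python
# Test-driven development in miniature: in practice the tests below are written
# first and fail, and the function is then written to make them pass. The
# clinical thresholds are invented purely for illustration.

def egfr_flag(egfr: float) -> str:
    """Classify an eGFR value into a coarse band (illustrative thresholds)."""
    if egfr < 0:
        raise ValueError("eGFR cannot be negative")
    if egfr < 30:
        return "severely reduced"
    if egfr < 60:
        return "reduced"
    return "normal or mildly reduced"

def test_boundaries():
    assert egfr_flag(29.9) == "severely reduced"
    assert egfr_flag(45) == "reduced"
    assert egfr_flag(90) == "normal or mildly reduced"

def test_rejects_nonsense():
    try:
        egfr_flag(-1)
    except ValueError:
        pass
    else:
        raise AssertionError("a negative eGFR should be rejected")

if __name__ == "__main__":     # runnable directly, or via pytest
    test_boundaries()
    test_rejects_nonsense()
    print("all tests pass")
```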
Anti-science
All of the above pale into insignificance when confronted by the bald statement at the start of Section 3:
“Healthcare is a data-driven activity to inform clinical practice”
No, it is not! Nor is healthcare informatics. Although common in the ML/AI community, this sort of statement is profoundly anti-science. Historical formulations of science (pre-1930s) concentrated on “known facts” (epitomised by logical positivism), or perhaps on asymptotic approximation to some Platonic “truth”. On that view, the known data could lead us in the right direction.
We’ve now moved on; at least, good scientists have. Good science starts with problems, and is characterised by early, bold generation of hypotheses. Strong attempts are then made to refute these theories. If they survive, then they are provisionally accepted as “true”. Acquisition of traceably calibrated data informs the decision-making, but the data drive nothing. Shorn of context and theory, the data are mute. This is well shown by Judea Pearl. One of the most severe and pernicious failings of modern attempts at data science is that so many of its adherents don’t get this basic point about what science can reasonably be. (I am happy to discuss the philosophical ramifications of this model.)
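To make “the data are mute” concrete, consider a Simpson’s-paradox table of the kind Pearl uses: the same counts support opposite conclusions depending on the causal story you bring to them. The counts below are illustrative, in the spirit of the oft-quoted kidney-stone example.

```python
# Simpson's paradox: pooled, treatment B looks better; within every severity
# stratum, treatment A does better. Which comparison answers the clinical
# question depends on the causal model, not on the data alone. Counts invented.

counts = {
    # (treatment, severity): (recovered, treated)
    ("A", "mild"):   (81, 87),
    ("A", "severe"): (192, 263),
    ("B", "mild"):   (234, 270),
    ("B", "severe"): (55, 80),
}

for treatment in ("A", "B"):
    recovered = sum(counts[(treatment, s)][0] for s in ("mild", "severe"))
    treated = sum(counts[(treatment, s)][1] for s in ("mild", "severe"))
    print(f"{treatment} pooled : {recovered / treated:.0%}")
    for severity in ("mild", "severe"):
        r, n = counts[(treatment, severity)]
        print(f"{treatment} {severity:6s}: {r / n:.0%}")
```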
It is also unwise to specify particular technologies or languages (R, Python, Jupyter) in a document of this nature, as it will likely become dated, and may well skew perceptions. The term “AI” is used very loosely.
My 2c, Dr Jo.