Core Competencies for Clinical Informatics - international resources

The Faculty of Clinical Informatics UK has just published its new Core Competency Framework.
Follow the link to review the content - https://facultyofclinicalinformatics.org.uk/blog/faculty-of-clinical-informatics-news-1/post/launch-of-competency-framework-for-clinical-informaticians-23

Do these competencies fit your vision and perspective for Core knowledge and skills?

  • Yes
  • No
0 voters

Would you propose different competencies?

  • Yes
  • No
0 voters
1 Like

Hi Rebecca,

This list is similar to the SFIA list of competencies but with a better focus on Clinical Health Informatics. It defines all the domains that are encompassed within the discipline, but no practitioner can gain and retain competence of practice across all of them in a meaningful way. With SFIA, the framework is applied by describing to a peer the evidence of which of its 106 domains you have currency of practice in, and at what level.

Regards

Greig

1 Like

Hi @Greig Russell (greig.russell@midcentraldhb.govt.nz) and @Rebecca George. We need to resurrect the conversation we started back pre-Covid with the HINZ workshop in Wellington. Maybe through this forum or a Zoom meeting? Inviting the participants from that last meeting and anyone else interested.

Cheers Inga

1 Like

I agree about the breadth of skills and competencies, but it’s typical of health/clinical informatics all over the world. If you look at it from a learning programme point of view the same problems arise – how do you learn the right skills for the kind of job you want to do? It would be interesting to map these competencies onto some of the job descriptions we have in Discourse and see how the emphasis falls in terms of different types of jobs.


1 Like

This extensive list of competencies can be summarised by:

“Mostly hits the target, largely misses the mark”.

These criteria, exhaustive (and exhausting) though they are, are both too large and too small.

As a CQI-loving clinician who has also programmed for four decades (and can create an SQL database in 3NF, then query it and apply SPC methods to the results; program in Lisp, Perl, JavaScript, Python, C, etc; and design and implement a computer language of his own specification; and is rather familiar with ontological constructs like SNOMED CT) I can see several ‘good bits’, a number of omissions, and a few egregious errors that, despite perhaps fitting current best-practices will cause long-term pain. I will try to elaborate.
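To make that first claim concrete, here is a minimal sketch (the table and figures are entirely invented for illustration) of what I mean by querying a normalised table and then applying SPC methods to the results – an individuals (XmR) chart over monthly infection counts:

```python
import sqlite3
import statistics

# Hypothetical data: a small normalised audit table of monthly
# infection counts, held in an in-memory SQLite database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE audit (month TEXT PRIMARY KEY, infections INTEGER)")
conn.executemany(
    "INSERT INTO audit VALUES (?, ?)",
    [("2023-01", 5), ("2023-02", 6), ("2023-03", 5),
     ("2023-04", 6), ("2023-05", 5), ("2023-06", 15)],
)

counts = [row[0] for row in
          conn.execute("SELECT infections FROM audit ORDER BY month")]

# XmR chart: centre line is the mean; control limits are
# mean +/- 2.66 * average moving range (2.66 is the standard constant).
mean = statistics.mean(counts)
mr_bar = statistics.mean(abs(b - a) for a, b in zip(counts, counts[1:]))
ucl = mean + 2.66 * mr_bar
lcl = max(0.0, mean - 2.66 * mr_bar)

# Points outside the limits suggest special-cause variation worth
# investigating; everything inside is common-cause noise.
special = [c for c in counts if c > ucl or c < lcl]
print(f"mean={mean:.2f} UCL={ucl:.2f} special causes: {special}")
```

The point is not the ten lines of code, but that audit numbers only become meaningful once you can separate common-cause noise from special-cause signal.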

The fairly good bits

If you list the contents of this framework, there are many desirable attributes. I’d applaud the emphasis on human factors, and the criterion “Applies quality improvement and process engineering to facilitate business and clinical transformation, measuring and analysing appropriate outcomes”. It is surely wise to be able to understand clinical concepts (although I’d doubt that a clinician of less than ten years’ experience really “gets” most conditions, even if they can parrot definitions), have a good feel for audit and the statistical nous required to make audit not just meaningful but publishable, and understand the clinical environment—and how it constrains good people at every turn. To grasp good clinical decision-making, they arguably need even more experience. It is wise to understand how targets and league tables force us to take our eyes off the ball. It is hugely desirable to be able to pop up PubMed (for example) and filter out the cruft when searching for relevant information.

It is indeed valuable to be able to characterise the software life-cycle, and many theoretical aspects of computer science. Clearly, deep insights into UI and UX are desirable (and, looking around, mostly lacking). It is also necessary to understand health care systems architecture—and why most systems have accreted, even if the intentions were to design them. Security is fundamental to this whole exercise—and also fundamentally and universally deficient. Data skills are mandatory, as is an understanding of the limitations of ML (gradient descent+backprop), masquerading as “AI”. Meticulous and wise application of sound ethics is vital, as is application of appropriate principles of that much-abused term “change-management”. Of course patients must be the core focus of our efforts. I’ll say a bit about evidence-based medicine below. I won’t say much about leadership, despite its importance, as I know others are far more capable than I am at addressing this—although most of the time I hope that I can distinguish between a good leader and an arse-covering bureaucrat!

A box-ticking problem

But I get the impression that someone can check most of the boxes in the framework, and still be a complete failure at clinical informatics. Rather more worryingly, I am concerned that there are those who will tick very few of the above boxes, but still have a huge amount to contribute. I am especially concerned that some well-meaning person will take all of the above ‘criteria’, make a check-list, and then start applying them to colleagues. Because it is obviously near-impossible for anyone to demonstrate “complete competence” in all domains, some sort of threshold will be set, either within or across competencies. This will completely miss the mark, precisely because any clinical informatics enterprise needs multiple people with multiple strengths. Overall mediocrity is likely to be more harmful than the combination of brilliance at some aspects, abysmal ignorance in other areas, and willingness to co-operate and learn.

Big defects

Above, I said the criteria are also too small. There are sentinel defects. To me, the fundamental deficiencies that stick out like a sore thumb include:

  • A complete lack of reference to Bayesian methods. This is sooo 20th century;
  • Failure to mention the implications of Pearl’s ladder of causality (Heck, it’s only 2 decades old);
  • A naïve take on “levels of evidence” (more on this below);
  • A failure to emphasise the need to understand common-cause variation, surely the lynchpin of statistical quality control;
  • Failure to mention the all-important concept of database normalization. It’s my belief that if you don’t understand (‘grok’) this, you shouldn’t be allowed to touch databases, let alone design them.
  • A naïve take on data security, including failure to emphasise the centrality of getting every participant on board in the cause of security, the importance of social engineering in breaches, and how structural security must be designed in from the bottom up—and never is. Kerckhoffs’ law doesn’t even get a mention. And so on.
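On the first of those points, a minimal sketch of what Bayesian “joining up” of information looks like in a diagnostic setting (the prior, sensitivity and specificity below are invented for illustration only):

```python
# Hypothetical numbers throughout: sequentially updating a disease
# probability with two positive test results via Bayes' theorem.

def bayes_update(prior: float, sensitivity: float, specificity: float) -> float:
    """Posterior probability of disease after one positive test."""
    p_positive = sensitivity * prior + (1 - specificity) * (1 - prior)
    return sensitivity * prior / p_positive

prior = 0.01                                # assumed 1% pre-test probability
post1 = bayes_update(prior, 0.90, 0.95)     # after first positive test
post2 = bayes_update(post1, 0.90, 0.95)     # after second, independent positive
print(f"after one test: {post1:.3f}, after two: {post2:.3f}")
```

Two positive tests take an unlikely diagnosis from around 15% to around 77% – the posterior from one piece of evidence becomes the prior for the next, which is exactly the joined-up reasoning frequentist “levels of evidence” cannot express.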

Not vaguely future proof

Principles and frameworks will never be designed to cut the shackles of current wisdom, but they should always be forward-looking and somewhat edgy. These aren’t. They effectively espouse mediocrity, as evidenced by their emphasis on ‘best practice’. Let me use a clinical example to highlight this point. In the management of children with cystic fibrosis in the 1950s, ‘best practice’ produced a median survival from birth of just 8 months. One centre claimed 10-year survival, and an entire measurement network was created to disprove their claims, which however turned out to be true. By the time everyone else had cranked their survival up to 10 years, the outlying centre was achieving 20 years. And so on. Principles should not try to be exhaustive, but should be aspirational, and encourage:

  • Widespread sharing of new things that work—in contrast to cherished but staid “best practice”;
  • Good measurement principles;
  • A healthy community of practitioners.

I am very concerned that the competency framework will achieve precisely the opposite. It strikes me as having a very concrete focus on individual competencies, rather than the power of people co-operating as a group.

Backward-looking

There are also things that I see as not so much backward-looking, as frankly wrong. These include:

  • “Hierarchies of evidence”. The current ‘best practice’ (see illustration above) EBM approach to levels of evidence is an anachronism, a band aid for the defects in frequentist statistics. Bayes allows us to join up information, and this is what we should be doing—and espousing!
  • Section 2.1 is not joined-up. It presents fragmented ideas like “Discuss the range of health information systems” without a clear feel for larger structures. I also don’t know what BLMN means. Possibly BPMN?
  • Section 2.4 fails to convey the utter chaos and representational inadequacy present in all current “interoperability”, including FHIR.
  • There is complete absence of the core concept of keeping things simple. The entire framework is in fact a slap in the face of minimalism. Yet one of the core issues with almost all current software is unbounded growth—often related to poor initial decisions, and the consequent bad architecture that breeds more badness. This is the central, largely unacknowledged problem in modern IT.
  • Where is the central importance of a common data dictionary mentioned? Surely this should be right at the start?
  • Software error is hardly given a nod. The word ‘error’ doesn’t even appear in the entire document, but it should be a major topic, e.g. “2.9 Software Error”;
  • Where is the TDD?
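On that last point, a minimal sketch of the TDD idea: the test is written first, against a hypothetical clinical helper function, and the implementation follows until the test passes.

```python
# Test written first, against a hypothetical BMI helper (invented for
# illustration): it pins down the expected value and the error behaviour
# before any implementation exists.

def test_bmi():
    assert abs(bmi(weight_kg=70, height_m=1.75) - 22.86) < 0.01
    try:
        bmi(weight_kg=70, height_m=0)
    except ValueError:
        pass  # expected: non-positive height must be rejected
    else:
        raise AssertionError("expected ValueError for zero height")

# Implementation written second, to make the test pass.
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index; raises on non-positive height."""
    if height_m <= 0:
        raise ValueError("height must be positive")
    return weight_kg / (height_m ** 2)

test_bmi()
print("all tests pass")
```

In a domain where software error can harm patients, writing the failure cases down before the code is not a nicety; it is the discipline the framework never mentions.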

Anti-science

All of the above pale into insignificance when confronted by the bald statement at the start of Section 3:

“Healthcare is a data-driven activity to inform clinical practice”

No it is not! Nor is healthcare informatics. Although common in the ML/AI community, this sort of statement is profoundly anti-science. Historical formulations of science (pre-1930s) concentrated on “known facts” (epitomised by logical positivism) or perhaps asymptotic approximation to some Platonic ‘truth’. The known data could lead us in the right direction.

We’ve now moved on—at least, good scientists have. Good science starts with problems, and is characterised by early, bold generation of hypotheses. Strong attempts are then made to refute these theories. If they survive, then they are provisionally accepted as ‘true’. Acquisition of traceably calibrated data informs the decision making, but the data drive nothing. Shorn of context and theory, the data are mute. This is well shown by Judea Pearl. One of the most severe and pernicious failings of modern attempts at data science is that so many of its adherents don’t get this basic point about what science can reasonably be. (Happy to discuss the philosophical ramifications of this model.)

It is also unwise to specify specific technologies or languages (R, Python, Jupyter) in a document of this nature, as it will likely become dated, and may well skew perceptions. The term “AI” is used very loosely.

My 2c, Dr Jo.

2 Likes

I agree with @derek.b: whilst these core competencies look very comprehensive, I struggle to see how a new or existing informatician would be able to cover all of these off without it becoming a tick-box exercise. I feel that a more stepped approach would take this list from its current unwieldy state into something achievable by anyone, rather than the rather exclusive club it currently is. I also agree with @DrJo that it is both too large and too small: in attempting to cover everything, they have made it hard to actually achieve anything.

Saying all that, it is a definite step in the right direction, just maybe a first draft rather than a final one!

1 Like

The literature on health informatics competences does the same, i.e. presents a very broad list of competencies in an attempt to cover the breadth of the topic without going into a lot of depth. Conversations I’m having with people about competences conclude that, since there are so many different roles in the field, it might be helpful to (1) identify core competences (i.e. the job isn’t clinical informatics if it doesn’t have a, b and c) and (2) cluster competences that make sense to be clustered, etc.

On the other hand, we could take a few job descriptions from our CiLN collection and map the UK competences into them and see what happens. That might be the most practical thing to do.


1 Like

This (just published) article outlines and critiques the FCI Core Competency Framework in the global context, and its future direction. It concurs with many of the astute points raised earlier in this Topic / thread. It might be of interest to @education-competency-ciln-wg

https://www.sciencedirect.com/science/article/pii/S1386505622002192

Where are we at here in NZ on this front?

Obvious gaps are data- and digital-related competencies on:

  • Data sovereignty
  • Cultural Safety: so one is conscious of one’s own biases when managing/working in data and digital
  • Te Tiriti: competencies which ensure that Te Tiriti is effectively given effect to
  • Whānau and person-centred engagement and co-development competencies, e.g. relating to domain 6 “Leading informatics teams and projects”: knowing how to do so effectively and not just tokenistically by having a “patient” rep, for example

We also need to consider what are the principles of standards development e.g.

  1. Minimum standards vs. aspirational ones? (e.g. regulation vs. professional practice levels) - related but different. A key principle, as it influences what goes into the content.

  2. Principles-based standards vs. detailed standards (e.g. the UK ones: most of their competence standards tend to be very detailed and input-focused (vs. outcome-focused), which sometimes has the unintended consequence of a compliance focus rather than a future-facing, enabling one). In contrast, modern regulatory practices and standard-setting tend to be more principles- and outcome-focused. The pro of the latter is that it does not stifle innovation in terms of what is within those standards (and thus teaching); the con can be that it is too amorphous for teaching. The latter approach is more contemporary, as per current policy-setting practices.

  3. Purpose of the standards - e.g. will they be used primarily for regulation (e.g. inclusion within the standards of current responsible authorities such as the Nursing Council, Medical Council, Pharmacy Council etc.), or for professionalisation of clinical informatics (e.g. set by a professional association and body)?

Overall, I agree that a national set of clinical informatics competency standards should be developed. Above are some key things to consider. Importantly, right at the outset, I would suggest that this development process is done in partnership with Māori.

Noho ora mai
Jerome

5 Likes

There’s also a major issue missing around the following:

  • Data ethics as applied to general clinical applications, not just analytical methodologies using ML algorithms/AI
  • Relational versus non-relational databases: pros/cons, performance vs. analytical applications
  • Clinical governance of informatics initiatives, especially in respect of social licence
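To sketch the second of those points (a hypothetical toy example, using SQLite and a plain dict as stand-ins for the relational and document paradigms):

```python
import sqlite3

# Toy illustration of the trade-off: the same patient-allergy data as a
# normalised relational model and as a denormalised document. Joins make
# ad-hoc analytical questions easy; the document form favours fast,
# self-contained whole-record reads.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE patient (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE allergy (patient_id INTEGER REFERENCES patient(id),
                          substance TEXT);
    INSERT INTO patient VALUES (1, 'Alex');
    INSERT INTO allergy VALUES (1, 'penicillin'), (1, 'latex');
""")

# Relational: an analytical question is one join away.
rows = conn.execute("""
    SELECT p.name, COUNT(*) FROM patient p
    JOIN allergy a ON a.patient_id = p.id
    GROUP BY p.id
""").fetchall()

# Document-style: one self-contained record, read in a single operation,
# but cross-patient analysis now needs application code.
document = {"id": 1, "name": "Alex", "allergies": ["penicillin", "latex"]}

print(rows, document["allergies"])
```

Neither shape is “right”; the competency that matters is knowing which questions each shape makes cheap and which it makes expensive.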

My third thought was: why this list? CHIA has a similar list, and developing equivalency with our friends across the ditch makes more sense. OK, as a member of both I accept the bias.

My second thought was that we need to do more with health informatics than just not doing harm; we should be causing some positive good. Understanding how harm can be caused despite good intentions is important. Equally, basing decisions on entitlement, hand-waving and magical thinking causes systemic harm that health informatics can solve.

My first thought like everyone else was about the absence of an equality lens and the need to respect the unique role of Te Tiriti o Waitangi in Aotearoa.

I do take @jerome’s point about the need to differentiate between the skill base need and maintaining legislatively mandated but diverse professional accountability standards.

Something else to bear in mind is that this isn’t the only (or the first) set of clinical competencies. There are the IMIA competencies for medical bio-informatics, TIGER competencies for nursing informatics and our own NZ nursing informatics competencies that are part of the nursing scope of practice.

A good approach to competencies would be to find ways to apply them. The international community has been designing and debating competencies for decades, esp IMIA. Let’s find out how they work in practice. How do we apply them? What changes when we apply them? Discourse is good. Action and application are better. I would like to see which ones become classics and which ones pass through to become something different as the digital health landscape changes.


3 Likes