This is of concern. It is a good case study of how an algorithm without ongoing oversight will make decisions based on the data it has - which is not necessarily the right decision.
To be honest, I am not surprised. I was interested to understand how they discovered this issue - it looks like the researchers stumbled onto it while investigating something else.
From a governance perspective, there appears to have been a failure to monitor and audit the behaviour of these algorithms. In the light of this example, and many others like it, I wonder why the AI Forum NZ’s Report into Healthcare in NZ suggests that we should make a fundamental change to our mindset whereby, with true AI, clinicians would not always have to validate the outputs of intelligent systems.
Researchers Find Racial Bias in Hospital Algorithm - WSJ
https://www.wsj.com/articles/researchers-find-racial-bias-in-hospital-algorithm-11571941096
"New study finds bias in a common algorithm hospitals use to deploy extra medical help: It favored healthier white patients over sicker black patients."
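For what it is worth, the kind of monitoring being called for here does not need to be elaborate. Below is a minimal sketch, assuming a hypothetical patient table with risk_score, group and chronic_conditions columns: it checks whether, within the same risk-score band, one group is systematically sicker than another, which is essentially the comparison the researchers used to surface this bias.

```python
# Minimal audit sketch (hypothetical column names, not the study's actual code):
# within each risk-score decile, a direct measure of health need should not
# differ systematically between demographic groups. A persistent gap suggests
# the score is proxying something else (in this case, healthcare cost).
import pandas as pd

def audit_risk_scores(patients: pd.DataFrame, n_bins: int = 10) -> pd.DataFrame:
    df = patients.copy()
    # Bin patients into risk-score deciles.
    df["score_bin"] = pd.qcut(df["risk_score"], q=n_bins, duplicates="drop")
    # Mean number of chronic conditions per group within each decile.
    summary = (
        df.groupby(["score_bin", "group"], observed=True)["chronic_conditions"]
        .mean()
        .unstack("group")
    )
    # Flag the largest between-group gap in each decile.
    summary["gap"] = summary.max(axis=1) - summary.min(axis=1)
    return summary

# Usage (with a suitably prepared patients_df):
# print(audit_risk_scores(patients_df))
```

Run periodically, a check along these lines could flag this kind of disparity without waiting for outside researchers to stumble onto it.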
I think identifying good and bad cases, and using them as baselines for discussion about the evolving framework of best practices in this space, would be valuable.