Ten principles of NHS AI code of conduct published

Good to see this, may be of interest to others. https://www.digitalhealth.net/2019/02/nhs-ai-code-of-conduct-ten-principles/

Thanks Nathan - yes, they are good but the legislative framework for the NHS is a little different to the NZ one. I found the further commentary around some of the principles really useful.
opengraphobject:[351561256015846 : https://www.digitalhealth.net/2019/02/nhs-ai-code-of-conduct-ten-principles/ : title=“Ten principles of NHS AI code of conduct published” : description=“The principles, which tech companies are expected to follow, include understanding user need as well as being fair and transparent.”]

The other thing I would note (see our other post about Data) is that in the Caldicott context, the NHS assumes anonymisation is the answer. As we know from Eric Topol, algorithms are good at finding patterns that even very well-trained humans can’t. He gave the example of retinal scans, where machine vision was able to determine the gender of the person. I am guessing it could probably build a model of age, lifetime earnings and ethnicity too. In that context, doesn’t a model based on anonymisation need to be paired with transparency and explainability?

https://en.wikipedia.org/wiki/Caldicott_Report#Caldicott_principles

Hi John
As a researcher (and CEO of my start-up), I am actively working on creating AI algorithms for disease classification using ophthalmic images. We have a lot of experience in ‘big data’ handling, security and ethics. I disagree with Eric Topol that an algorithm can identify gender, age, etc. An algorithm can only produce a probability for its classifications (e.g. 60% male, 60% smoker). These probabilities are the product of many things, including dataset size, data imbalance, network architecture and so on. The point being that similar (but not identical) data used to train a different AI will produce a different set of probabilities (e.g. 57% male, 45% smoker). Even if these probabilities were repeatable and highly accurate, in a ‘big’ dataset (>10,000 data points) identification of an individual would be improbable.
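A minimal sketch of that point, using toy numbers (not from any real model): the probability a classifier reports is shaped by the training data it saw, so two similar-but-not-identical training sets give different outputs for the same attribute.

```python
# Toy illustration: an attribute classifier's output probability depends on
# its training data, so similar datasets yield different numbers.
# (Hypothetical counts; a naive frequency estimate, not a real retinal model.)

def class_probability(positive_count, total_count, alpha=1):
    """Laplace-smoothed estimate of P(class) from a training set."""
    return (positive_count + alpha) / (total_count + 2 * alpha)

# Two similar training sets for a "male" label, differing slightly:
p1 = class_probability(600, 1000)   # ~0.60
p2 = class_probability(570, 1000)   # ~0.57
print(f"Model A: {p1:.0%} male, Model B: {p2:.0%} male")
```

The same architecture trained on a slightly different sample reports a different probability, which is the variability described above.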

Thanks for engaging, Ehsan. I understand that these models predict likelihoods, not absolutes (the variability you identify leads to the interesting question of business models for AI - but that is a different conversation).

The point I was making is not so much about retinas in particular, but more that in a country like New Zealand there is already evidence that we can de-anonymise data using proxy variables. If we join up data and build algorithms using “anonymised data”, the assumption that this won’t reveal personal characteristics, and therefore won’t break the presumed privacy of anonymisation, is false. The reason is that the whole point of machine learning is to find patterns in data that humans can’t (i.e. the comment about gender).
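To make the proxy-variable point concrete, here is a toy sketch with entirely hypothetical records: even with names removed, a handful of quasi-identifiers joined against an outside dataset can single individuals out.

```python
# Toy linkage re-identification: names are removed from the health extract,
# but quasi-identifiers (age, postcode, sex) remain. Joining against a
# public dataset (e.g. an electoral roll) re-attaches names to diagnoses.
# All records below are invented for illustration.

health = [
    {"age": 34, "postcode": "6011", "sex": "F", "diagnosis": "asthma"},
    {"age": 34, "postcode": "6021", "sex": "F", "diagnosis": "diabetes"},
    {"age": 57, "postcode": "6011", "sex": "M", "diagnosis": "CHD"},
]

public = [
    {"name": "A. Smith", "age": 34, "postcode": "6011", "sex": "F"},
    {"name": "B. Jones", "age": 57, "postcode": "6011", "sex": "M"},
]

QUASI = ("age", "postcode", "sex")

def reidentify(health_rows, public_rows, keys=QUASI):
    """Link any health record that matches exactly one public record."""
    hits = []
    for h in health_rows:
        matches = [p for p in public_rows
                   if all(p[k] == h[k] for k in keys)]
        if len(matches) == 1:  # a unique match re-identifies the person
            hits.append((matches[0]["name"], h["diagnosis"]))
    return hits

print(reidentify(health, public))
```

No machine learning is needed for this version; a model that infers extra attributes (gender, age, ethnicity) from the data itself just adds more quasi-identifiers to join on.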

I am also not saying we shouldn’t do this, but social licence is presumed at the moment in health, due to anonymisation. My concern is that if this goes away we run the risks the NHS has encountered (e.g. care.data, the Caldicott review, Google DeepMind).

Useful primer below on this problem from Jayden MacRae; also worth a read is Rhema Vaithianathan’s work at Auckland Uni (building predictive models for social services in the US). Happy to chat further.

https://datacraft.nz/what-we-do/redicentification-risks-of-health-data/

I understand your concern, John. However, the same could be said for other forms of data, for example credit scoring or insurance risk assessment based on AI models of ZIP code + purchase history + education level.
I am familiar with the DeepMind case at Moorfields. I believe the solution is to provide accessible, lay information for the public. I personally believe that the “good” of AI far outweighs its potential “evil”.

I agree we should be doing this work - it is how we do it that I see as important. I am interested in avoiding the unintended consequences of charging ahead.

You will be familiar with where hubris in healthcare can get us and why ethics is important - but for the wider audience who may not know, linked below is the turning point for ethics in health.

We are thinking about this in the data strategy - how we do this better.

https://www.cdc.gov/tuskegee/timeline.htm

That’s fair. I will be more than happy to meet and help, if I can be of assistance.

Great! Hope you are coming to the ETIH conference in Christchurch.