I’ve been thinking a bit about the wearables ecosystem, and how data from the millions of Apple Watches and Fitbits (Google) could become part of a person’s medical record.
But there may be other angles to this. Apple and Google already use location data from billions of devices to derive real-time information on traffic congestion, and they leverage Bluetooth and GPS in those same devices to support location-tracking tags.
So what might they do with the biometrics of billions of people?
As my contribution, I suggest that emergency services might be interested when there’s a sudden rise in heart rate among (say) the people inside a bank.
Any thoughts?
Thanks for the discussion. I have been thinking around this, and it was a focus at a GP College Digital day some years ago, from a consumer representative presentation perspective. The challenge I see is filtering the huge amount of normal data and being able to flag only the exceptions to the clinician and PMS; otherwise this is “alert fatigue” on steroids.
AI tools now available should be able to triage the constant flow of normal data and pick up changes in trend, thereby flagging concerns to the system. I thought Whanau Tahi were already doing some work on this, based on a webinar I attended last year?
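As a toy illustration of that kind of exception filtering (the data format, thresholds, and function name below are all my own assumptions, not any vendor’s or Whanau Tahi’s actual approach), a rolling-baseline check that suppresses normal readings and surfaces only sharp deviations might look like:

```python
from collections import deque
from statistics import mean, stdev

def flag_exceptions(readings, window=60, z_threshold=3.0):
    """Flag heart-rate samples that deviate sharply from the user's
    own recent baseline, suppressing the flood of 'normal' data.

    readings    -- iterable of heart-rate values (beats per minute)
    window      -- number of recent samples forming the rolling baseline
    z_threshold -- standard deviations from baseline that count as an exception
    """
    baseline = deque(maxlen=window)
    alerts = []
    for i, bpm in enumerate(readings):
        if len(baseline) >= 10:  # need some history before judging
            mu, sigma = mean(baseline), stdev(baseline)
            if sigma > 0 and abs(bpm - mu) / sigma > z_threshold:
                alerts.append((i, bpm))  # exception worth surfacing
        baseline.append(bpm)
    return alerts

# 120 normal samples around ~65 bpm, then a sudden spike
stream = [65, 66, 64, 65, 67, 66] * 20 + [110, 115, 118]
print(flag_exceptions(stream))
```

Only the three spike samples are reported; the 120 normal ones are silently absorbed into the baseline, which is exactly the filtering behaviour needed to avoid alert fatigue.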
Hi All,
It’s been looked into by a number of different parties, including patient portals. Unfortunately, the high variability, and the lack of consistent, useful standards for capturing data, devices and digital identities in a way that results in clinically useful information, has been a point of contention.
Until wearable results are recorded using standardised approaches, and outputs of clinical relevance are agreed upon, data from Fitbit/Apple HealthKit and the like will remain useful for triggering behavioural responses, rather than carrying the clinical provenance needed for a health record.
I don’t think it’s a coincidence that the major wearable suppliers have hyper-scale investments in machine learning. (There are a lot of players in this area, but Google and Apple are definitely the 800 lb gorillas.)
And there’s been a lot of progress in the last decade in using very powerful software to compensate for mediocre hardware. Cloud computing is an obvious example: we used to purchase very expensive servers and storage to build fast and reliable data centres; now it’s cheap consumer hardware, with speed and availability handled in software.
Maybe a better example: some phones use quite old camera sensors and do amazing tricks in software to improve image quality (e.g. take 20 photos over half a second, compensate for the parts of the images where movement has occurred, then combine it all into a single image).
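That burst-and-merge trick can be sketched in miniature (a pure toy: 1-D “frames” as lists of pixel values, with simple averaging standing in for the real alignment and motion compensation):

```python
import random

def merge_burst(frames):
    """Average a burst of already-aligned frames pixel-by-pixel.
    Random sensor noise cancels out across the burst, while the
    true scene values remain. (Real pipelines also align frames
    to compensate for motion, which this toy version skips.)
    """
    n = len(frames)
    return [round(sum(px) / n, 1) for px in zip(*frames)]

random.seed(0)
true_scene = [10, 50, 200, 90]
# 20 noisy captures of the same scene (Gaussian sensor noise, std 8)
burst = [[p + random.gauss(0, 8) for p in true_scene] for _ in range(20)]
print(merge_burst(burst))
```

Each single frame is off by roughly ±8 per pixel, but the 20-frame average lands much closer to the true scene values, which is the same “software rescues hardware” idea in a few lines.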
So I am expecting that Apple and Google will be generating valuable insights into the health of individuals, and at some point we will probably want to include the processed conclusions in the clinical record.
But that’s not what my question was about.
GPS in phones is useful because it tells the user where they are, and ties nicely into mapping applications. But Google and Apple combine that data across millions of people and can show in real time how congested the roads are. I was speculating about what these companies might do with the combined biometric data of a billion people.
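To make the traffic analogy concrete, here is a minimal sketch (invented data, cell names and thresholds throughout, nothing vendor-specific) of the same aggregate-then-map idea applied to heart rate instead of vehicle speed: bucket anonymised readings by map cell and surface the cells whose average is unusually elevated, much like colouring congested road segments red:

```python
from collections import defaultdict
from statistics import mean

def elevated_cells(readings, resting=70, threshold=1.3):
    """Group anonymised (cell, bpm) readings by map cell and return
    the cells whose average heart rate exceeds a resting baseline
    by the given factor -- a crude population-level alarm signal.
    """
    by_cell = defaultdict(list)
    for cell, bpm in readings:
        by_cell[cell].append(bpm)
    return {cell: round(mean(bpms), 1)
            for cell, bpms in by_cell.items()
            if mean(bpms) > resting * threshold}

# Hypothetical readings: three map cells, one with a collective spike
data = [("cell_a", 68), ("cell_a", 72), ("cell_a", 70),
        ("cell_b", 66), ("cell_b", 71),
        ("cell_c", 112), ("cell_c", 125), ("cell_c", 118)]
print(elevated_cells(data))
```

A sudden collective spike in one cell (the bank scenario from earlier in the thread) stands out immediately, while individually elevated readings scattered across cells do not.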
Overall use of wearables comes down to the practicalities of the reporting and the meaningfulness of the measures.
I recently upgraded to the Google Pixel Watch, which came with Fitbit Premium (letting me continue tracking my last six years of Fitbit data from the earlier Versa and Versa 2 models). While the latest model is FDA approved, and I do track my frequent sleep and stress levels, it’s not really that useful unless I’m looking for something specific.
Monitoring versus intentional titration or target setting is generally what needs to be considered with wearable information, but it’s certainly not clear how that distinction is made available to the lay user.