Just came across this
Has there been a ruling made in the last couple of days that OpenAI must keep…
Makes perfect sense to a degree. Some questions and outputs, potentially used in real-world situations with real risks and consequences, effectively disappear: ask the same question again a few hours later and you get something different.
On several occasions while testing AI models, I've had the same question repeated 6 hours apart produce substantially different outcomes, so without screenshots it's hard to assess hallucination or the history of an output (potentially wrong advice).
Hopefully it keeps AI developers accountable for some of the outputs they produce. However, there will probably be an adverse environmental impact from the extra storage/capacity needed to keep these records.
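For anyone wanting a record without relying on screenshots, an append-only log of each prompt and response is enough to show what a model actually said at the time. Here is a minimal sketch in Python; the model name and prompts are placeholders for whatever tool you actually use:

```python
import json
import hashlib
from datetime import datetime, timezone

LOG_FILE = "ai_output_log.jsonl"  # append-only record, one JSON object per line

def log_interaction(model_name: str, prompt: str, response: str) -> None:
    """Append a timestamped, hashed record of a prompt/response pair."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model_name,
        "prompt": prompt,
        "response": response,
        # The hash makes later tampering with the stored response detectable.
        "response_sha256": hashlib.sha256(response.encode("utf-8")).hexdigest(),
    }
    with open(LOG_FILE, "a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")

# Example: record two answers to the same question asked hours apart,
# so they can be compared later even if the model's behaviour changes.
log_interaction("example-model-v1", "Same question, 9am", "Answer given in the morning...")
log_interaction("example-model-v1", "Same question, 3pm", "Different answer in the afternoon...")
```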
This is a fascinating development and astute observation; thank you @charlt and @SamuelWong for sharing.
That’s a bit scary for health. I am working on a Te Tiriti Action Plan for ANZCA, and one of the actions I was proposing was to consider/investigate the use of AI to help identify personal or institutional bias. Not much point if it’s not reliable. (I actually don’t know if my suggestion is even feasible, but I thought it might be interesting to see if you could detect a selectivity bias for operations or medications etc. No one means to be discriminatory, but we all are to a certain extent.)
Well, we tolerate this sort of thing in humans all the time (and call it an ‘opinion’ that is revised in the face of new evidence / mood). It is just super unnerving when a computer does it too!
Personally, I believe that screening for bias is a pretty good use-case for AI. Would need careful parameters set beforehand and ongoing oversight - as per all things generative AI.
This strongly suggests to me that we should be self-hosting our own Gen AI models, to be safely insulated from rogue states and to ensure our data sovereignty. This is not actually all that difficult to do these days.
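To give a sense of how low the barrier is: once a model is running on your own hardware behind a local runner such as Ollama, querying it is a few lines of code and nothing leaves your network. A rough sketch (the model name and prompt are placeholders, and it assumes Ollama is serving on its default local port):

```python
import requests  # pip install requests

# Ollama's local HTTP API listens on port 11434 by default; requests stay on your own machine.
OLLAMA_URL = "http://localhost:11434/api/generate"

def ask_local_model(prompt: str, model: str = "llama3") -> str:
    """Send a prompt to a locally hosted model and return its full response."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    resp = requests.post(OLLAMA_URL, json=payload, timeout=300)
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    # Placeholder prompt purely for illustration.
    print(ask_local_model("Summarise the key points of data sovereignty in one paragraph."))
```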
In fact, there is a strong case for doing this with all the software we use: self-hosting open-source solutions for all that stuff (with a focus on the Microsoft ecosystem we are so deeply intertwined with):
I hear that Apple reported that the newer LLMs have more hallucinations… so my question then would be: how would we know which ones are the newer ones?
good to know that suggestion is not completely off the wall!
Great point Nathan - we know unconscious bias exists - but so does AI bias. Look at those most likely to use AI, those most likely to input data into AI, and the fact that data is often second-hand in health - as one young disabled person told me, “the second you write down my story that becomes your version of the truth”… @Mhead there’s a good summary of equity considerations on the AI equity chat if that’s helpful for the work you are doing.
Thanks Alex. I’ll check the AI equity chat. I like the idea of having a tool to check unconscious bias, but on a personal rather than institutional level perhaps.