What if I told you that your doctors will soon consult a "robot colleague", an artificial intelligence trained on your medical history, to diagnose your condition and prescribe medications?
For decades, the idea of entrusting medical diagnosis to a machine was pure science fiction. Now, with the advent of sophisticated language models like GPT-3, that possibility is becoming quite real.
Say thirty-three
In December, out of the growing sea of scientific articles on artificial intelligence, one stood out: Foresight, a medical machine learning model developed by researchers at King's College London (KCL).
It makes use of GPT-3, the model that powers the popular "smart" chatbot ChatGPT, together with a dataset built from 10 years of real electronic health records. And what does it do? It predicts future medical events, estimates risks, suggests alternative diagnoses, and forecasts complications for real or simulated patients whose information is entered into it.
It's not the only "AI doctor" taking its first steps. At the end of 2022, Google announced the latest advances in Med-PaLM, a medical version of its huge AI model called PaLM. Med-PaLM, as the name suggests, is trained on texts taken from the web and medical books, then optimized using medical documents.
Here the discussion becomes even more interesting. Med-PaLM answers common medical questions that require lengthy written answers, and the (real) doctors who tested it are seeing remarkable results. When we first started talking about it, as of March 2021, the model's accuracy was 75%. Today, 92.6% of Med-PaLM's responses "are in line with the scientific consensus": only 0.3% less than responses given by human doctors.
There are still gaps in some responses, and possible safety issues mean the model is not yet ready for clinical use, but it's clear this AI is making rapid progress.
How close are we to seeing these AI medical tools in clinics and hospitals?
Take note of this general prediction; I'll explain it in a moment. Medical AI models are likely to reach clinical proficiency before regulatory bodies set all the necessary rules and boundaries.
Why do I say this? Because the single biggest obstacle to the clinical use of medical artificial intelligence will probably be privacy.
The creators of Foresight at King's College say they removed any potentially identifying information from the electronic medical records used to train the AI, even records of rare diseases with fewer than 100 samples. This eliminates (or at least reduces) the risk of patient identification, but it also limits the system's capabilities.
In any case, to answer the question of timing: the developers say another year is needed to "lock down" the safety of these medical systems. So, by 2024 these tools will be ready. At that point (back to the general prediction above), their adoption will be a purely political, regulatory matter.
I imagine that the accuracy of the answers and the transparency of the decision-making process are not the only things to evaluate before using a "robot doctor". For example, it will be necessary to verify that a medical artificial intelligence is "fair" and not prejudiced against certain groups of people, perhaps as a result of biased training data.
Likewise, it will be necessary to limit the "hallucinations" that sometimes lead these AI medical systems to give apparently precise answers tainted by serious errors.
As always, the last word is up to us. But given the results of these early models, it's a question of "when", not "if": doctors will soon be assisted in their diagnoses by artificial intelligence.