What if I told you that your doctor will soon consult a "robot colleague," an artificial intelligence that reads your medical history, helps diagnose your condition, and suggests medications?
For decades, the idea of entrusting medical diagnosis to a machine was simply considered science fiction. Now, with the advent of sophisticated language models like GPT-3, this possibility could become quite real.
Say "ninety-nine"
In December, out of the ever-growing sea of scientific articles on artificial intelligence, one stood out: Foresight, a medical machine learning model developed by researchers at King's College London (KCL).
It builds on GPT-3, the model family behind the popular "intelligent" chatbot ChatGPT, and a dataset drawn from 10 years of real electronic health records. And what does it do? It predicts future medical events, estimates risks, suggests alternative diagnoses, and anticipates complications for real or simulated patients whose information is entered into it.
It is not the only "AI doctor" taking its first steps. At the end of 2022, Google announced the latest advances of Med-PaLM, a medical version of its huge AI model called PaLM. Med-PaLM, as the name suggests, is trained on texts taken from the web and medical books and fine-tuned on medical documents.
Here the discussion becomes even more interesting. Med-PaLM answers common medical questions that require long written answers, and the (real) doctors who have tested it are reporting remarkable results. When we first started talking about it, as of March 2021, the model's accuracy was 75%. Today, 92.6% of Med-PaLM's responses "are in line with the scientific consensus" – just 0.3% less than responses given by human doctors.
There are still gaps in some answers, and unresolved safety issues mean the model is not yet ready for clinical use, but it is clear this AI is making very rapid progress.
How close are we to seeing these AI medical tools in clinics and hospitals?
Take note of this general prediction, and then I'll explain it. Medical AI models will likely reach a level of clinical competence before regulatory bodies establish all the necessary rules and limits.
Why do I say this? Because the single greatest obstacle to the clinical use of medical artificial intelligence will probably be privacy.
The creators of Foresight at King's College say they removed any potentially identifying information from the electronic medical records used to train the AI – even rare diseases with fewer than 100 recorded cases. This eliminates (or at least reduces) the risk of patient identification, but it also limits the system's capabilities.
In any case, to answer the question about timing: the engineers say another year is needed to harden the safety of these medical systems. So by 2024 the technology should be ready; at that point (back to the general prediction above), adoption becomes a purely political and regulatory matter.
Personal doubts
I imagine the accuracy of the answers and the transparency of the decision-making process are not the only things to evaluate before using a "robot doctor." It will also be necessary to check whether a medical AI is fair, and not biased against certain groups of people, perhaps because of skewed training data.
Furthermore, it will be necessary to limit the "hallucinations" that sometimes lead these medical AI systems to give seemingly precise answers that are tainted by tragic errors.
As always, the final say is up to us. But given the results of these preliminary models, it is a question of "when," not "if": doctors will soon be assisted in their diagnoses by artificial intelligence.