How many times, feeling a pain or noticing a strange symptom, have you looked for answers online before consulting a doctor? What if I told you that more and more people are even bypassing the classic Google search to ask medical advice directly from an artificial intelligence?
New Australian research has revealed that one in ten Australians asked ChatGPT for medical advice in the first six months of 2024. Even more surprising is who these users are: people with low health literacy and immigrants. In short: is AI filling a gap in the healthcare system, is it creating new risks for the most vulnerable patients, or both?
When the algorithm becomes first aid
It's 2025, and the trend of seeking medical advice through generative artificial intelligence tools is growing at an impressive rate. The study, conducted by the University of Sydney (I link it here), examined a representative sample of over 2,000 Australians, finding that 9.9% had asked ChatGPT health-related questions in the first half of 2024. The figure may seem modest, but it has profound implications.
What strikes me most is the level of trust: on average, participants said they trusted ChatGPT “somewhat” (3.1 out of 5). Not blind trust, of course, but not healthy skepticism either toward a tool that has neither a medical degree nor clinical experience. And here lies the first fault line of this phenomenon: trust in a system designed to sound like an expert in everything, when in reality it has no genuine specialist expertise.
The research revealed an even more troubling pattern: ChatGPT use for healthcare is significantly higher among people with low health literacy, born in non-English-speaking countries, or who speak other languages at home. In other words, AI is attracting the very people who are already struggling to navigate the traditional healthcare system.
Generative AI is here to stay, with all its attendant opportunities and risks for those who use it to seek health information.
Medical Advice from an AI: Harmless Questions and Dangerous Queries
The study reveals that the most frequently asked questions concern information about a health condition (48%), understanding what symptoms mean (37%), advice on what action to take (36%), and explanations of medical terms (35%). So far, nothing particularly alarming: using AI as a medical dictionary or for initial, generic information can even be useful.
However, another finding gives us pause for thought: more than half of users (61%) asked at least one question that would normally call for professional clinical advice. The study's authors classified these questions as “higher risk.” Asking ChatGPT what your symptoms mean can give you a rough idea, but it is no substitute for personalized clinical advice.
The future is already here, and it's growing fast
The trend of turning to AI for healthcare is set to increase. In the study, 39% of people who had not yet used ChatGPT for health said they would consider doing so in the next six months. And these numbers are just the tip of the iceberg, when you consider other tools like Google Gemini, Microsoft Copilot, and Meta AI.
On the one hand, this technology appeals to people who already face significant barriers to accessing healthcare and health information. One of the key benefits is the ability to instantly provide easy-to-understand health information. Another recent study has shown that generative AI tools are increasingly capable of answering general health questions using plain language, although they are less accurate for complex health topics.
On the other hand, people are turning to general-purpose AI tools for medical advice, which is riskier for questions that require clinical judgment and a broader understanding of the patient. Case studies already exist demonstrating the dangers of relying on generic AI tools to decide whether or not to go to the hospital.
AI Medical Advice, Towards a New Digital Health Literacy
Healthcare organizations are developing policies around AI, but most focus on how healthcare services and staff interact with the technology. What is urgently missing is equipping the community with the digital health literacy skills appropriate for the AI era.
We need to help people think carefully about the kinds of questions they ask AI tools, and connect them with appropriate services that can answer their riskier questions. As AI continues to evolve, it's critical to remember that technology should complement, not replace, professional medical advice.
The real challenge will be balancing the benefits of accessibility with the risks of relying solely on automated recommendations. So, who would you turn to for advice on your next lingering headache?