Artificial intelligence in healthcare: technological miracle or ethical threat?
The French health insurance system’s budget deficit reflects the strain on an overloaded healthcare system characterised by a complex care pathway. Against this backdrop, artificial intelligence (AI) offers concrete ways to transform the medical sector by optimising time management for health professionals, making their day-to-day working lives easier and promising to improve clinical outcomes by as much as 40%. According to the Harvard School of Public Health, AI can also help reduce avoidable costs – in particular those associated with treatment and diagnostic assistance – by up to 50%.
Although AI is increasingly integrated into healthcare systems, its use raises major questions around not only data quality and inaccurate diagnosis but also ethics and accountability. Thanks to the current strong momentum in this field, France’s ecosystem now includes more mature start-ups backed by robust clinical studies.
AI accounts for the lion’s share of investment in digital healthcare
Global investment in digital healthcare in 2024 totalled €24.5 billion, with €4.8 billion of this total in Europe. The “Santé Numérique” (“Digital Healthcare”) acceleration strategy1, launched as part of the “France 2030” plan, aims to make France a global leader in e-Health innovation by supporting the emergence of innovative digital solutions. Its priority focus is on AI, with €250 million of investment in the sector. This explains why more than 250 start-ups – over half of all young companies in the e-Health sector – now integrate AI technologies into their solutions. In the first half of 2024 alone, applied AI companies raised €116 million, three times more than the total amount raised the previous year. In 2025, the start-up Nabla2 alone raised $70 million. Technologies developed in the sector are designed to meet the major challenges facing healthcare systems: improving patient care, optimising care pathways and boosting medical research and innovation. Designed largely for healthcare institutions, these innovations are entirely consistent with both market expectations and operational needs on the ground.
AI and diagnosis: between anticipation and personalisation
Artificial intelligence continues to prove its worth in our healthcare system, notably by helping to predict pathologies, anticipate risks and deliver rigorous medical diagnoses tailored to each patient. In radiology more specifically, algorithms are able to analyse medical images with close to 95% accuracy. Not only does this second reading round out practitioners’ own analyses, it also anticipates their diagnoses by detecting anomalies right from the initial phase of a disease.
A case in point is start-up Therapixel3, which specialises in breast imaging. Its technology improves the sensitivity of breast cancer screening, enabling early detection in 50% of cases and providing valuable support to radiologists. In oncology, AI can cross-reference information from a wide variety of sources, ranging from medical histories to genetic data, to quickly propose specific, patient-targeted treatments. Among the most striking breakthroughs is CHIEF4, an AI developed by Harvard Medical School. CHIEF is designed to analyse histopathological images5 of tumour tissue and can read digital slides, identify cell characteristics specific to 19 types of cancer with 94% accuracy and predict molecular profiles without the use of costly DNA sequencing. French unicorn Owkin6 is also contributing to the fight against cancer by offering researchers a platform for analysing medical data. Its system includes an AI that draws on anonymised patient data to better predict a treatment’s probability of success.
While AI offers impressive analytical and predictive capabilities, it is also proving invaluable in optimising medical time and improving care pathways.
Freeing up medical time: AI’s administrative potential
Although AI now plays a part in supporting practitioners, its role as an administrative assistant remains its most widely used application to date. The assistance it provides in prescribing treatment and automating tasks offers significant support, particularly in hospitals. The growing importance of AI was highlighted when the French health insurance system mandated its use in prescription and medical decision support tools. It is also helping drive widespread adoption of digital prescriptions to increase the security, traceability and efficiency of the care pathway.
In this context, Synapse Medicine7 appears to be a natural candidate to help the sector navigate this transformation. The company has developed a generative AI that prevents prescribing errors, automatically generates prescriptions and can propose therapeutic alternatives. Similarly, Nabla and Hopia8 stand out in the medical assistance field by freeing up medical time that is too often swallowed up by administrative tasks. Nabla, a voice assistant that generates medical reports, can save two hours a day on documentation. Meanwhile, Hopia has halved the time spent managing schedules at Brest University Hospital.
However, when an AI-powered scheduling tool allocates a priority slot to a stable patient to the detriment of an urgent case, medical liability in the event of complications hinges on a decision-making process in which the boundaries between the tool, healthcare personnel and the institution are unclear.
Unreliability in medical AI: failures with serious clinical consequences
Potential data breaches and cyberattacks are significant risks to the security of healthcare platforms. But beyond technical security lies the fundamental issue of AI training databases, which can underrepresent patient diversity. The risks that arise, which range from misdiagnosis to a total failure of care, can shorten patients’ lives. The WHO highlights this bias in its guidance on AI ethics and governance, pointing out that many algorithms exclude women, minorities and rural communities. One notable example is a healthcare allocation algorithm that underestimated care needs for black patients by 29%9.
In light of these risks, the use of AI in healthcare is subject to a strict regulatory framework consisting of the EU Medical Devices Regulation, the Artificial Intelligence Act, France’s bioethics law and the GDPR. There are also certification requirements such as CE marking in Europe and FDA approval in the United States. These requirements ensure that devices comply with safety, effectiveness and data protection standards. However, there is persistent uncertainty as to liability in the event of a medical error involving algorithmic decision-making. This issue was illustrated by a 2019 US case in which a diagnostic algorithm misinterpreted an X-ray, leading to a fatal delay in treatment. The company that made the software and the hospital were jointly implicated, highlighting grey areas in the liability regime applicable to medical AI.
Paradoxically, while these new algorithms promise results that are free from the cognitive biases to which practitioners are subject, blindly trusting these technologies could have grave legal consequences.
The future of AI in healthcare: anticipating risks and ensuring efficiency
AI is establishing itself as a pivotal driver of change in medical practices. From freeing up medical time to helping identify targeted treatments, it appears to be a robust tool for reducing medical costs that should be integrated into our healthcare system. However, in a field where each and every decision can have life-and-death implications, technological performance is not the only thing that matters; skewed data, an inaccurate diagnosis or a cyberattack could compromise care and endanger thousands of patients.
When it comes to accelerating the rollout of AI, it is clear that those developing new solutions must demonstrate absolute transparency, not only to anticipate liability risk but also to ensure user confidence. Meanwhile, practitioners must become even more proactive as the use of artificial intelligence spreads, in particular by recognising the need for digital training. Furthermore, cooperation and dialogue between the various bodies and stakeholders are key to providing simple products tailored to the needs of practitioners and of the healthcare facilities where they work. The regulatory framework must be strengthened to encourage the development of these technologies while protecting their users10.
1. Launched in January 2021 and overseen by the General Secretariat for Investment, the digital healthcare strategy aims to make France a global leader in this field by supporting innovation and transformation in the healthcare system.
2. French firm Nabla, established in 2018, is designing an ambient AI assistant (Nabla Copilot) to help clinicians with their day-to-day activities and improve care quality.
3. French firm Therapixel, established in 2013, develops AI solutions for analysing medical images, notably in radiology.
4. Clinical Histopathology Imaging Evaluation Foundation.
5. Histopathology is the microscopic study of tissue taken from an organ (whether living or dead) with the aim of diagnosing a disease – for example a cancer – by accurately identifying anomalies in cells and tissue organisation.
6. French firm Owkin, established in 2016, uses AI and federated learning to speed up drug discovery and improve diagnosis in oncology.
7. French firm Synapse Medicine, established in 2017, develops AI solutions to ensure safe medical prescribing and automate administrative tasks thanks to its “Copilot” suite of tools.
8. French firm Hopia, established in 2020, is developing a solution that uses AI to optimise hospital scheduling, improving workplace quality of life for healthcare personnel.
9. S. Jemielity (28 October 2019), “Health care prediction algorithm biased against black patients, study finds”, University of Chicago News.
10. K. Sadad Lachheb (10 January 2025), “L’intelligence artificielle en médecine : quelle responsabilité en cas d’erreur?”, Village de la Justice.