AI is gradually becoming an essential tool for doctors, according to Professor Jean-Emmanuel Bibault, an oncologist and researcher specializing in these technologies.
The arrival of artificial intelligence (AI) in the world of medicine fascinates as much as it frightens: the field of possibilities seems infinite and remains difficult to define. Professor Jean-Emmanuel Bibault concedes: “We don’t yet know a tenth of the possibilities AI offers in healthcare.” That potential will be scrutinized closely on Thursday July 6 and Friday July 7, during a summit organized by the International Telecommunication Union (ITU) in Geneva, Switzerland.
In his small office at the Georges-Pompidou European Hospital in Paris hangs a poster of this oncologist’s favorite film, 2001: A Space Odyssey, which inspired the title of his book, 2041: The Medical Odyssey (Equateur ed., January 2023). In Stanley Kubrick’s film, HAL 9000, the artificial intelligence guiding the spacecraft, kills the humans who control it. In his book, intended for the general public, Bibault, who is also a researcher specializing in AI, explores the possibilities this new technology could offer in the near future. He also raises the ethical questions that must accompany all this progress. These questions are more urgent than you might think, because AI is already used in many hospital departments.
Franceinfo: How do doctors use artificial intelligence today? What does it bring to medicine?
Jean-Emmanuel Bibault: Currently, artificial intelligence can perform many tasks, especially in radiotherapy. For example, before delivering radiation treatments [to treat a tumor or cancer], you have to program the machine so that it aims at the right place. To do this, we perform a CT scan of the patient and, from it, delineate in three dimensions the tumor to be destroyed, as well as the surrounding organs, in order to spare them as much as possible.
This step, known as contouring, could take two to three hours, or even half a day in complex cases. We now have software, based on deep learning with deep neural networks, that can do this contouring in two or three minutes. For now, the results are still verified by a human, because some are not yet perfect. But in the years to come, these imperfections will diminish and we will have fewer and fewer corrections to make.
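The workflow Bibault describes — an AI-generated contour checked by a human before treatment — can be sketched in a few lines. Everything below is an invented illustration (a toy 3D volume, a simulated model output, an arbitrary review threshold), not a real clinical tool: the AI mask is compared against a reference using a Dice score, and low-scoring cases are flagged for human review.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 32x32x32 "CT" volume: a spherical tumor serves as the reference contour.
z, y, x = np.mgrid[:32, :32, :32]
reference = ((z - 16) ** 2 + (y - 16) ** 2 + (x - 16) ** 2 < 64).astype(np.uint8)

# Simulated AI contour: the reference with a handful of voxels flipped,
# standing in for the small imperfections a human reviewer would catch.
ai_contour = reference.copy()
flips = rng.integers(0, 32, size=(30, 3))
ai_contour[flips[:, 0], flips[:, 1], flips[:, 2]] ^= 1

def dice(a, b):
    """Dice similarity coefficient between two binary masks (1.0 = identical)."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

score = dice(ai_contour, reference)
needs_review = score < 0.95  # threshold chosen arbitrarily for the illustration
print(f"Dice = {score:.3f}, human review needed: {needs_review}")
```

In practice the comparison is the other way around — the human expert is the reference and the software is the one being checked — but the same overlap metrics are commonly used to quantify how close an automatic contour is to an expert one.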
How much do these technologies cost?
It varies enormously from one solution to another. For now, unlike conventional machines, the price of AI depends on the number of patients it treats. Generally, it is on the order of several tens of thousands of euros. That may seem like a lot, but on the scale of the operating costs of a radiotherapy department, it really isn’t. A CT scanner, for example, costs several million euros, not counting the annual maintenance contract, since all these machines are checked very regularly. This is also one of the reasons why centers are turning to artificial intelligence.
The benefit of AI for doctors and hospitals is clear. What about the patient?
For the patient, AI brings consistent quality wherever it is deployed. To take the example of contouring again: thanks to AI, no matter where the patient is treated, the result will always be the same. In addition, AI will ensure better dosimetry [the measurement of the dose of ionizing radiation that an object or person can receive], because it is more accurate than humans. This will potentially lead to fewer side effects and better efficacy against the cancer.
This is in line with what you emphasize in your book, namely that these increasingly precise results will force doctors to sharpen their skills in order to check that the AI is not mistaken…
That is what we should hope for and encourage. However, keep in mind that by relying heavily on AI to perform these contours, young doctors will lose the habit over time. From generation to generation, this skill could be lost if we are not careful. Exclusive use of AI must not leave us with no one able to verify that it is doing the job well. It’s like learning arithmetic in elementary school and then forgetting how to add or subtract as soon as you have a calculator.
Listening to you, one gets the impression that AI is faster than we are and already knows how to accomplish a very large number of tasks…
Yes, but we must remember that AI remains a technology developed by human beings. For now, an AI cannot, on its own, ask questions or initiate a diagnosis, even if that is beginning to change.
“Some algorithms know how to do things that humans cannot do, and never will be able to do, like predicting your risk of developing a disease in ten or fifteen years. Or, in the case of illness, predicting the chances of recovery at five, ten or fifteen years. Even the best experts do not know how to do these things.”
Jean-Emmanuel Bibault, to franceinfo
Are these predictions already in use or are they still in clinical trials?
For now, these algorithms exist in translational research [which brings together doctors and researchers to develop medical applications, or, in the other direction, can guide scientists based on a clinical observation]. It will be difficult to get these models out of the computer and evaluate them. If, for example, I develop an algorithm that predicts the risk of diabetes within ten years and I want to see whether it works, I will have to run a whole protocol and then wait until that time has elapsed. So, in the best-case scenario, we would not have an answer for ten years.
Beyond these validation difficulties, imagine that such an AI works and can make this kind of prediction with almost 100% certainty. Is that prediction actionable from a medical point of view? If I know that I have a very high risk of developing colon cancer in ten years, can I use this information to reduce that risk and adapt my behavior? Or will the risk persist no matter what I do? It is not obvious that this is good information to have, because it can have a heavy psychological impact on quality of life.
There is also a second, more dizzying question. To pose it, I draw a parallel with the film Minority Report, in which a police department arrests people before they even commit a crime. In medicine, the logic is the same. If, one day, we manage to predict a disease ten years in advance and it never develops, you will have lived for ten years with that sword of Damocles hanging over your head. Was the algorithm wrong? Or did you adapt your behavior and reduce the risk of contracting the disease? No one will ever be able to answer these questions.
In your book, you also question how AIs are trained. You cite the case of an application developed in dermatology to detect anomalies from photos taken by the patients themselves. The study shows that it is generally more effective than dermatologists, but that it is not as effective on black skin…
This example illustrates the fact that we must be very vigilant about the biases we ourselves introduce into AIs, through the data or the algorithmic methods we use. Sometimes we notice these biases, as was the case in that experiment. But there is also a risk that, in other cases, we do not even realize it and end up using tools that give bad results.
There is also the question of cybersecurity. Let’s imagine that a high-profile figure is operated on tomorrow by a fully automated AI. How can we ensure that it is not hacked for malicious purposes?
When this question comes up, many people think it is science fiction. But it is not. A study published in the journal Science describes “adversarial attacks”, which consist of generating entirely artificial images that, to the naked eye, have no particular characteristics, but that cause an artificial intelligence analyzing them to produce a false result. I often give the example of an image of a panda, which the AI recognizes as such. I create an image that, to the human eye, appears to have no particular character. If I add it to the panda image, the AI no longer recognizes a panda but a gibbon.
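The panda-to-gibbon trick rests on adding a small perturbation aligned with the model’s sensitivities (the best-known recipe is the “fast gradient sign method”). Here is a minimal sketch of that idea using a toy linear classifier — the weights, input, and step size are all invented for the demonstration, standing in for a real image network:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy linear "model": classify an input vector by the sign of its score w @ v.
w = rng.normal(size=100)                # invented model weights
predict = lambda v: 1 if w @ v > 0 else 0

# Toy "image": small noise plus a component that makes it clearly class 1.
x = 0.05 * rng.normal(size=100) + 0.2 * np.sign(w)
assert predict(x) == 1                  # the model sees a "panda"

# FGSM-style perturbation: step each component against the score's gradient.
# Each "pixel" changes by at most eps, yet the prediction flips.
eps = 0.3
x_adv = x - eps * np.sign(w)

print(predict(x), predict(x_adv))       # the tiny overlay flips the class
```

The key point, as in the panda example, is that the perturbation is bounded per pixel and looks like noise to a human, yet it is precisely aligned with the model’s decision boundary, so the classifier’s output changes completely.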
This technique can be reproduced in radiology, on CT scans or MRIs. Imagine that one day we rely only on systems that automatically interpret images at very high speed, with 50 patients analyzed every hour, and that no one monitors what the AI is doing. In that case, in theory, you could skew all the results with the same type of overlay as in the panda example, orchestrating a cyberattack. In the United States, for example, where the financial stakes in healthcare are even higher than in France and Europe, a group of hackers could obtain reimbursement for hundreds of treatments costing millions of dollars using falsified results, or for patients who do not exist. This may sound completely crazy, but it is not at all.
Will artificial intelligence replace doctors?
The risk, indeed, is that we will be told fewer doctors are needed because more can be done in less time. But it is a big mistake to think so, because I am sure we will have to keep recruiting and training doctors despite the use of AIs. These technologies should enable practitioners to upgrade their skills, not replace them. AI should free up medical time: instead of allotting fifteen minutes to a consultation, we could give it three quarters of an hour, or even more, and see patients more often.
Do you think AI is just a fad?
I do not think so. In my opinion, the only way AI could fizzle out is if a global economic crisis, a very severe recession, or a war halted technical and technological progress, because we would decide to redirect resources to other, more essential needs.