Outcry over discrimination by AI

Artificial intelligence (AI) can easily perpetuate social and health inequalities. But there are ways to correct the algorithms. Researchers proposed possible solutions at the last annual meeting of the American Association for the Advancement of Science (AAAS), in early March in Washington.


The cost of care…

In 2019, American researchers published in Science what has become the classic example of the perils of using AI in health, along with remedies to the problem. “This is a common situation with AI,” says Emma Pierson, a computer scientist at Cornell Tech, a graduate institute of Cornell University in New York. “In the case of the 2019 study, the idea was to use future healthcare costs to identify priority patients in a cohort with multiple diseases.”


PHOTO FROM CORNELL TECH WEBSITE

Emma Pierson, computer scientist from Cornell Tech

The problem in the United States is that white people are more likely to have good insurance, and therefore access to care. When the cost distortion was corrected, the proportion of black patients identified as priorities rose from 18% to 47%.

Emma Pierson, computer scientist from Cornell Tech
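The label-choice problem described above can be illustrated with a minimal sketch. The numbers and patient records below are hypothetical, not data from the 2019 study; the point is only that ranking patients by predicted cost, rather than by health need, crowds out equally sick patients from a group with less access to care.

```python
# Each patient: (group, number_of_chronic_conditions, annual_cost).
# Hypothetical assumption: both groups are equally sick, but group B
# generates lower costs because of poorer insurance coverage.
patients = [
    ("A", 5, 9000), ("A", 3, 6000), ("A", 1, 2000),
    ("B", 5, 5000), ("B", 3, 3000), ("B", 1, 1000),
]

def top_k(patients, key, k=2):
    """Return the k patients ranked highest by the chosen label."""
    return sorted(patients, key=key, reverse=True)[:k]

# Label = future cost: the sickest group-B patient is crowded out.
by_cost = top_k(patients, key=lambda p: p[2])
# Label = illness burden: the two sickest patients are selected.
by_need = top_k(patients, key=lambda p: p[1])

print([p[0] for p in by_cost])  # only group A selected
print([p[0] for p in by_need])  # one patient from each group
```

Changing the training label, not the model, is what shifts the selected cohort, which is why the study's remedy was a different proxy for need rather than a different algorithm.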

… and osteoarthritic pain

Ms. Pierson has herself published on AI distortions in health. In 2021, in the journal Nature Medicine, she showed that AI analysis of knee X-ray images reflected the pain reported by osteoarthritis patients well in white patients, but not in black patients. “Less educated and poorer patients also had the severity of their osteoarthritis under-rated,” says Ms. Pierson. That reduced their access to knee surgery. In this case, however, the fix to the AI algorithm was not obvious.

Databases

One of the big problems is that the databases used to train AI programs are distorted relative to society. “Often, white people are overrepresented in clinical trials,” says Ms. Pierson. Lance Waller of Emory University in Atlanta, who organized the AAAS session, adds that clinical trials often involve healthy people. “We want to avoid interference with drug-related data, the background noise that prevents us from having statistical power,” says Mr. Waller, who specializes in correcting distortions in health databases. “We also see the same problem in the use of AI in human resources. If AI is used for the initial skimming of applicants for a position at a company with a large majority of men, women may be automatically disadvantaged. We have even seen situations where keywords used more in male CVs, such as ‘task completion’, are favored by an algorithm trained on too many male CVs.”
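The CV-screening effect Mr. Waller describes can be sketched with a deliberately naive keyword scorer. The corpus, phrasing, and weights below are invented for illustration: a model trained on a male-dominated corpus learns to reward wording that happens to be more common in male CVs, so two candidates of equivalent substance receive different scores.

```python
from collections import Counter

# Hypothetical training corpus: 4 "male-style" CVs, 1 "female-style" CV.
training_cvs = [
    "task completion project leadership",      # male
    "task completion software delivery",       # male
    "task completion systems design",          # male
    "project leadership task completion",      # male
    "collaborative delivery systems design",   # female
]

# Naive "training": a keyword's weight is its frequency in the corpus.
weights = Counter(word for cv in training_cvs for word in cv.split())

def score(cv):
    """Sum the learned keyword weights over a candidate's CV."""
    return sum(weights[word] for word in cv.split())

# Two candidates with equivalent substance, different phrasing.
cv_male_style = "task completion systems design"
cv_female_style = "collaborative delivery systems design"

print(score(cv_male_style), score(cv_female_style))
```

The scorer has never seen gender, yet it ranks the male-style phrasing higher, because the imbalance in the training data is baked into the keyword weights.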

Police


PHOTO GETTY IMAGES

Forensic algorithms must take into account the perceptions of populations most likely to be targeted by the police.

Police databases are particularly problematic, said Megan Price, a statistician who heads the Human Rights Data Analysis Group (HRDAG) in San Francisco. “Certain types of crimes are favored by the police, and certain neighborhoods too,” explained Ms. Price in her presentation. “This can introduce distortions into the AI’s assessment of the risk of recidivism, for example. Attempts should be made to include in forensic algorithms the perceptions and experiences of populations overlooked or more likely to be targeted by the police.” Ms. Price is not opposed to AI on principle. “We used it to detect the mass graves of people executed as part of the war against drug traffickers in Mexico.”
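The distortion Ms. Price points to can be sketched in a few lines. The rates below are hypothetical: two neighborhoods with identical true crime rates, one patrolled three times as heavily. Because recorded arrests track patrols rather than crime, a naive risk score trained on that record makes the over-policed neighborhood look far riskier.

```python
# Hypothetical true crime rates: identical by assumption.
true_crime_rate = {"north": 0.10, "south": 0.10}
# South is patrolled three times as heavily.
patrol_intensity = {"north": 1.0, "south": 3.0}

# Recorded arrests are proportional to patrol presence, not to crime.
recorded = {
    hood: true_crime_rate[hood] * patrol_intensity[hood]
    for hood in true_crime_rate
}

# A naive risk score trained on recorded data inherits the distortion.
total = sum(recorded.values())
risk = {hood: arrests / total for hood, arrests in recorded.items()}

print(risk)  # south appears three times riskier despite equal true rates
```

Worse, if the scores are then used to allocate patrols, the loop feeds itself: more patrols produce more recorded arrests, which raise the score further.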

Social networks

A potential solution to these database distortions is to use AI to find databases overlooked by researchers. “One of my colleagues at Cornell Tech, Tanzeem Choudhury, uses AI to find markers of certain diseases in social media posts,” says Ms. Pierson. “This is a good example of a way to use AI to counter AI’s weaknesses.” Publications from Ms. Choudhury’s People-Aware Computing Lab include the detection, using “digital biomarkers” such as social media keywords, of mental fatigue, schizophrenia, persecution complex, chronic pain, anxiety and psychosis.

AI in the developing world


PHOTO FROM HARVARD UNIVERSITY WEBSITE

Milind Tambe, computer scientist at Harvard University and director of Google’s AI for Social Good project

Another session at the AAAS convention was much more enthusiastic about the use of AI in healthcare. “I understand the concerns about systemic inequalities in rich countries, but in many developing countries, and even among marginalized populations in industrialized countries, AI is already showing great promise,” said speaker Milind Tambe, computer scientist at Harvard University and director of Google’s AI for Social Good project. He cites the example of a program that supports poor mothers in India by giving them health information. “The AI helped predict which mothers were at risk of leaving the program. We go to see them at home to give them personalized follow-up.”

Do you have a scientific question? Ask Mathieu Perreault.

Learn more

  • 50%
    Proportion of pharmaceutical companies planning to invest in AI in 2023

    Source: Pharmaceutical Technology

  • 60%
    Proportion of Americans who are uncomfortable with the idea of their doctor using AI for their care

    Source: Pew Research Center

  • 3
    Number of AI algorithms approved by the FDA in radiology in 2017

    Source: Goldman Sachs

  • 66
    Number of AI algorithms approved by the FDA in radiology in 2022

    Source: Goldman Sachs

