A doctor’s unconscious bias could affect patient care. A hiring manager’s preconceptions might influence recruitment. But what happens when you add AI to these scenarios? According to new research, AI systems don’t just mirror our biases — they amplify them, creating a snowball effect that makes humans progressively more biased over time.
This troubling finding comes from a study published in Nature Human Behaviour that reveals how AI can shape human judgment in ways that compound existing prejudices and errors. In a series of experiments involving 1,401 participants, researchers from University College London and MIT found that even small initial biases can grow into much larger ones through repeated human-AI interaction. This amplification effect was significantly stronger than what occurs when humans interact with other humans, suggesting there’s something unique about how we process and internalize AI-generated information.
“People are inherently biased, so when we train AI systems on sets of data that have been produced by people, the AI algorithms learn the human biases that are embedded in the data,” explains Professor Tali Sharot, co-lead author of the study, in a statement. “AI then tends to exploit and amplify these biases to improve its prediction accuracy.”
Consider a hypothetical scenario: A healthcare provider uses an AI system to help screen medical images for potential diseases. If that system has even a slight bias, like being marginally more likely to miss warning signs in certain demographic groups, the human doctor may begin unconsciously incorporating that bias in their own screening decisions over time. As the AI continues learning from these human decisions, both human and machine judgments could become increasingly skewed.
The researchers investigated this phenomenon through several carefully designed experiments. In one key test, participants looked at groups of 12 faces displayed for half a second and judged whether the faces, on average, appeared more happy or sad. The initial human participants showed a small bias, categorizing faces as sad about 53% of the time. When a convolutional neural network (an AI system for processing images, loosely inspired by the brain’s visual system) was trained on these human judgments, it amplified this bias significantly, classifying faces as sad 65% of the time.
When new participants interacted with this biased AI system, they began adopting its skewed perspective. The numbers tell a striking story. When participants disagreed with the AI’s judgment, they changed their minds nearly one-third of the time (32.72%). In contrast, when interacting with other humans, participants only changed their disagreeing opinions about one-tenth of the time (11.27%). This suggests that people are roughly three times more likely to be swayed by AI judgment than human judgment.
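The feedback loop described above can be sketched as a toy simulation. To be clear, this is not the authors’ model: only the 53% starting rate and the roughly one-third change-of-mind rate come from the study; the amplification gain and the use of the sway rate as a per-round drift are illustrative assumptions.

```python
# Toy sketch of the human -> AI -> human bias feedback loop.
# Assumptions (not from the study): a fixed amplification gain of 2.0
# and treating the ~33% change-of-mind rate as a per-round drift rate.

def amplify(p, gain=2.0):
    """AI trained on human labels exaggerates any deviation from 50/50."""
    return min(1.0, max(0.0, 0.5 + gain * (p - 0.5)))

def adopt(p_human, p_ai, sway=0.33):
    """New participants drift toward the AI's rate; the study reports
    people changed disagreeing judgments 32.72% of the time."""
    return p_human + sway * (p_ai - p_human)

p = 0.53  # initial human 'sad' rate reported in the study
for round_num in range(1, 6):
    p_ai = amplify(p)   # AI amplifies the bias it learned from humans
    p = adopt(p, p_ai)  # the next cohort shifts toward the AI
    print(f"round {round_num}: human sad-rate ≈ {p:.3f}")
```

Under these assumptions the sad-rate ratchets upward each round, because any deviation from 50/50 is first magnified by the AI and then partially absorbed by the next group of humans, which is the snowball dynamic the study describes.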
The bias amplification effect appeared consistently across various types of tasks. Beyond facial expressions, participants completed motion-perception tests in which they judged the direction of dots moving across a screen. In another task, participants assessed other people’s performance; after interacting with an AI system deliberately programmed with a gender bias, mirroring biases found in many existing AI systems, they became particularly likely to overestimate men’s performance.
“Not only do biased people contribute to biased AIs, but biased AI systems can alter people’s own beliefs so that people using AI tools can end up becoming more biased in domains ranging from social judgements to basic perception,” says Dr. Moshe Glickman, co-lead author of the study.
To demonstrate real-world implications, the researchers tested a popular AI image generation system called Stable Diffusion. When asked to create images of “financial managers,” the system showed a strong bias, generating images of white men 85% of the time – far out of proportion with real-world demographics. After viewing these AI-generated images, participants became significantly more likely to associate the role of financial manager with white men, demonstrating how AI biases can shape human perceptions of social roles.
When participants were told they were interacting with another person while actually interacting with AI, they internalized the biases to a lesser degree. The researchers suggest this may be because people expect AI to be more accurate than humans on certain tasks, making them more susceptible to AI influence when they know they’re working with a machine.
This finding is particularly concerning given how frequently people encounter AI-generated content in their daily lives. From social media feeds to hiring algorithms to medical diagnostic tools, AI systems are increasingly shaping human perceptions and decisions. The researchers note that children may be especially vulnerable to these effects, as their beliefs and perceptions are still forming.
However, the research wasn’t all bad news. When humans interacted with accurate, unbiased AI systems, their own judgment improved over time. “Importantly, we found that interacting with accurate AIs can improve people’s judgments, so it’s vital that AI systems are refined to be as unbiased and as accurate as possible,” says Dr. Glickman.
AI bias is not a one-way street but rather a circular path where human and machine biases reinforce each other. Understanding this dynamic is crucial as we continue to integrate AI systems into increasingly important aspects of society, from healthcare to criminal justice.
Source: https://studyfinds.org/ai-systems-amplify-human-bias/