Researchers at the University of Zurich have discovered that the human brain can naturally detect deepfake voices.

This study, published in Communications Biology, sheds light on how the human brain processes natural voices differently from deepfakes, revealing an inherent ability to recognize these artificial sounds.

The study involved 25 participants who were tasked with distinguishing between natural and deepfake voices. Researchers used advanced voice-synthesizing algorithms to create high-quality deepfake voices from recordings of four male speakers.

The participants listened to pairs of voices and were asked to determine whether the two identities were the same, comparing either two natural voices or a natural voice and a deepfake.

(Photo: ALEXANDRA ROBINSON/AFP via Getty Images)
An AFP journalist views a video manipulated with artificial intelligence to potentially deceive viewers, known as a "deepfake," at his newsdesk in Washington, DC, on January 25, 2019.

How the Brain Detects Deepfake Voices

The results showed that deepfake voices were correctly identified two-thirds of the time, indicating that while deepfakes have the potential to deceive, they are not foolproof.

Claudia Roswandowitz, a postdoctoral researcher at the Department of Computational Linguistics and the first author of the study, commented, "This illustrates that current deepfake voices might not perfectly mimic an identity, but do have the potential to deceive people."

Researchers used imaging techniques to explore how the brain responds differently to natural and deepfake voices, identifying two key regions involved in this process.

Nucleus Accumbens: A core part of the brain's reward system, the nucleus accumbens was less active when participants compared deepfake and natural voices, suggesting that the reward system may not respond as strongly to synthetic voices.

However, it showed heightened activity when comparing two natural voices, potentially indicating a preference for authentic vocal communication.

Auditory Cortex: Responsible for processing auditory information, the auditory cortex was more active when distinguishing between deepfake and natural voices.

This heightened activity may reflect the brain's effort to identify the subtle acoustic differences between real and artificial voices, compensating for the missing or distorted acoustic information in deepfakes.

These findings suggest that while humans can be partially deceived by deepfake voices, our brains have specific mechanisms to detect the artificial nature of these sounds.

"Humans can thus only be partially deceived by deepfakes. The neural mechanisms identified during deepfake processing particularly highlight our resilience to fake information, which we encounter more frequently in everyday life," Roswandowitz explained.

Growing Threat of Deepfake Audio

This study comes at a critical time when the use of deepfake technology is on the rise. Back in January 2024, the New Hampshire attorney general's office reported a robocall that used artificial intelligence to mimic President Joe Biden's voice in an attempt to discourage voters from participating in the state's primary election.

Additionally, Sumsub, a global full-cycle verification provider, reported a 245% year-over-year increase in deepfakes worldwide. Countries with recent and upcoming elections, such as India, the US, and South Africa, have seen significant spikes in deepfake incidents.

Stay posted here at Tech Times.

Tech Times Writer John Lopez
