A few years ago, deepfakes seemed like a novel technology whose makers relied on serious computing power. Today, deepfakes are ubiquitous and can be misused for misinformation, hacking, and other nefarious purposes.
Intel Labs has developed real-time deepfake detection technology to counter this growing problem. Intel Principal Researcher Ilke Demir explains the technology behind deepfakes, Intel's detection methods, and the ethical considerations involved in developing and deploying such tools.
Deepfakes are videos, speech, or images in which the actor or action is not real but created by artificial intelligence (AI). Deepfakes use complex deep learning architectures, such as generative adversarial networks, variational autoencoders, and other AI models, to create highly realistic and believable content. These models can generate synthetic personas, lip-synced videos, and text-to-image conversions, making it difficult to distinguish real content from fake.
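To make that architecture concrete, here is a minimal sketch of the generative adversarial idea in PyTorch: a generator learns to turn random noise into convincing samples while a discriminator learns to catch them. This is an illustrative toy, not Intel's or any production deepfake model; the layer sizes, learning rates, and random stand-in data are assumptions chosen for brevity.

```python
# Minimal GAN sketch (illustrative only, not a production deepfake model).
# A generator maps random noise to fake samples; a discriminator learns to
# tell real samples from generated ones. Both improve adversarially.
import torch
import torch.nn as nn

NOISE_DIM, DATA_DIM = 16, 64  # toy sizes, chosen arbitrarily for the sketch

generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 128), nn.ReLU(),
    nn.Linear(128, DATA_DIM), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(DATA_DIM, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1),  # logit: real vs. fake
)

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for _ in range(200):
    real = torch.randn(32, DATA_DIM)   # stand-in for real training data
    fake = generator(torch.randn(32, NOISE_DIM))

    # Discriminator step: label real samples 1, generated samples 0.
    d_loss = bce(discriminator(real), torch.ones(32, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(32, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator step: try to make the discriminator call fakes "real".
    g_loss = bce(discriminator(fake), torch.ones(32, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

In a real deepfake pipeline, the same adversarial loop runs over face images with far larger convolutional networks, which is why the results can be so convincing.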
The term deepfake is sometimes applied to genuine content that has been altered, like the 2019 video of former Speaker of the House Nancy Pelosi, which was doctored to make her appear drunk.
Demir's team examines computational deepfakes, which are synthetic forms of machine-generated content. "The reason it's called deepfake is because there's this complicated deep learning architecture in generative AI that's creating all this content," he says.
Cybercriminals and other bad actors often abuse deepfake technology. Some use cases include political misinformation, adult content featuring celebrities or non-consenting individuals, market manipulation, and impersonation for monetary gain. These negative impacts highlight the need for effective deepfake detection methods.
Intel Labs has developed one of the world's first real-time deepfake detection platforms. Instead of looking for artifacts of fakery, the technology focuses on detecting what is real, such as heart rate. Using a technique called photoplethysmography – the detection system analyzes color changes in the veins due to oxygen content, which is computationally visible – the technology can detect whether a person on screen is a real human or synthetic.
"We try to look at what's real and authentic. Heart rate is one of [the signals]," Demir said. "So when your heart pumps blood, it goes through your veins, and the veins change color because of the oxygen content. That color change is not visible to our eyes; I can't just look at this video and see your heart rate. But that color change is computationally visible."
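To illustrate the principle Demir describes, the following sketch recovers a pulse rate from video frames by averaging the green channel over a skin region and picking the dominant frequency of the resulting signal. It is a minimal NumPy toy run on synthetic frames, not Intel's FakeCatcher pipeline; the frame rate, frequency band, and lack of face tracking are simplifying assumptions.

```python
# Sketch of remote photoplethysmography (rPPG): recover a pulse rate from
# tiny periodic color changes in skin pixels. Illustrative only; real systems
# add face tracking, skin segmentation, and more robust signal processing.
import numpy as np

def estimate_bpm(frames: np.ndarray, fps: float) -> float:
    """frames: (T, H, W, 3) uint8 video clip of a skin region."""
    green = frames[:, :, :, 1].astype(np.float64)
    signal = green.mean(axis=(1, 2))        # one sample per frame
    signal -= signal.mean()                 # remove the DC component
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)  # plausible pulse: 42-240 bpm
    peak = freqs[band][np.argmax(spectrum[band])]
    return peak * 60.0                      # Hz -> beats per minute

# Demo on synthetic frames whose brightness pulses at 1.2 Hz (72 bpm).
fps, seconds = 30.0, 10
t = np.arange(int(fps * seconds)) / fps
pulse = 2.0 * np.sin(2 * np.pi * 1.2 * t)   # faint periodic color change
frames = (128 + pulse[:, None, None, None]
          + np.random.randn(len(t), 8, 8, 3)).clip(0, 255).astype(np.uint8)
print(f"estimated heart rate: {estimate_bpm(frames, fps):.0f} bpm")
```

The premise of detection, as described above, is that real faces carry this subtle, consistent pulse signal, while synthetically generated faces typically do not.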
Intel's deepfake detection technology is being deployed across a variety of sectors and platforms, including social media tools, news agencies, broadcasters, content creation tools, startups, and nonprofit organizations. By integrating the technology into their workflows, these organizations can better identify and mitigate the spread of deepfakes and misinformation.
Despite the potential for misuse, deepfake technology has legitimate applications. One of the first uses was the creation of avatars to better represent individuals in digital environments. Demir points to a specific use case called "MyFace, MyChoice," which leverages deepfakes to improve privacy on online platforms.
Simply put, this approach allows individuals to control their appearance in online photos, replacing their face with a "quantifiably different deepfake" if they want to avoid being recognized. These controls offer greater privacy and control over one's identity, helping to thwart automatic facial recognition algorithms.
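One way to make "quantifiably different" concrete is as a minimum distance in a face-embedding space: the substitute face must sit far enough from the original that a recognizer no longer matches the two. The sketch below illustrates that check; the `embed` function is a hypothetical random-projection stand-in for a real face-recognition model, and the 0.6 threshold is an assumption, since the actual MyFace, MyChoice metrics are not described here.

```python
# Hedged sketch: treat "quantifiably different" as a minimum distance between
# face embeddings. `embed` is a hypothetical stand-in; a real check would use
# an actual face-recognition model's embedding network and threshold.
import numpy as np

rng = np.random.default_rng(0)
PROJECTION = rng.standard_normal((128, 64 * 64))  # toy random-projection "model"

def embed(face: np.ndarray) -> np.ndarray:
    """Map a 64x64 grayscale face to a unit-norm embedding (toy stand-in)."""
    x = face.ravel().astype(np.float64)
    x -= x.mean()                   # center so cosine distance is meaningful
    v = PROJECTION @ x
    return v / np.linalg.norm(v)

def quantifiably_different(original, replacement, threshold=0.6) -> bool:
    """True if cosine distance exceeds the (assumed) recognition threshold."""
    distance = 1.0 - float(embed(original) @ embed(replacement))
    return distance > threshold

original = rng.random((64, 64))
substitute = rng.random((64, 64))   # stand-in for a generated replacement face
print(quantifiably_different(original, original))    # False: same face
print(quantifiably_different(original, substitute))  # True: far apart in embedding space
```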
It is crucial to ensure the ethical development and implementation of AI technologies. Intel's Trusted Media team works with anthropologists, social scientists, and user researchers to evaluate and refine the technology. The company also has a Responsible AI Council, which reviews AI systems against responsible and ethical principles, checking for potential biases, limitations, and harmful use cases. This multidisciplinary approach helps ensure that AI technologies, like deepfake detection, benefit humans rather than cause harm.
"We have lawyers, we have social scientists, we have psychologists, and all of them come together to identify the boundaries and determine whether there is bias – algorithmic bias, systematic bias, data bias, any kind of bias," says Demir. The team scans the code to find "every possible use case for a technology that can harm people."
As deepfakes become more widespread and sophisticated, it is increasingly important to develop and implement detection technologies to combat misinformation and other harmful consequences. Intel Labs' real-time deepfake detection technology offers a scalable and effective solution to this growing problem.
By integrating ethical considerations and collaborating with experts from various disciplines, Intel is working towards a future where AI technologies are used responsibly and for the good of society.