Turns out, the email wasn’t from YouTube, the video was AI-generated, and the link was a scam designed to harvest confidential information.
Welcome to the world of deepfake technology. Scammers are using AI to generate realistic images, audio and video to trick unsuspecting victims into divulging sensitive information or transferring large sums of money. In February 2024, a finance worker at a multinational firm was duped into transferring $25 million to a fraudster based on a Zoom meeting in which every other participant was a deepfake.
So far, deepfakes remain a relatively small part of the overall cyber threat landscape. However, as AI technology improves and becomes more accessible, deepfakes are likely to become more prevalent. Experts say that social engineering and targeted harassment are the greatest immediate threats.
What Are Deepfakes?
Deepfakes are generated using deep neural networks. The classic approach is the generative adversarial network (GAN), an architecture designed specifically to produce realistic synthetic images, audio and video.

In a typical setup, one network (the generator) is trained to create fake images while a second network (the discriminator) tries to determine whether each image is real or fake. The two are trained against each other iteratively, with each network improving until the fakes are difficult even for an attentive human to detect.
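To make that adversarial loop concrete, here is a minimal sketch in PyTorch. It trains on a toy two-dimensional distribution rather than real images, and every name in it is illustrative, but the generator-versus-discriminator dynamic is the same one that underlies deepfake imagery.

```python
# Minimal GAN sketch (PyTorch). Toy example: the generator learns to mimic
# a simple 2-D "real" distribution; deepfake systems apply the same loop to
# images, audio and video at far larger scale.
import torch
import torch.nn as nn

torch.manual_seed(0)

# "Real" samples: points drawn from a Gaussian centered at (4, 4).
def real_batch(n):
    return torch.randn(n, 2) + 4.0

# Generator: maps random noise to fake samples.
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))

# Discriminator: outputs the probability that a sample is real.
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # Train the discriminator: real samples labeled 1, fakes labeled 0.
    real = real_batch(64)
    fake = G(torch.randn(64, 8)).detach()  # detach: don't update G here
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Train the generator: try to make D label its fakes as real.
    fake = G(torch.randn(64, 8))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()

print("fake sample mean:", G(torch.randn(1000, 8)).mean(dim=0))  # approaches (4, 4)
```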
This takes a lot of time and processing power, making it difficult to generate deepfakes at scale. According to an Onfido study, 80 percent of scams use relatively crude “cheapfakes” that are easy for security tools and vigilant humans to detect. Deepfakes are typically used for high-dollar scams, such as the one that duped the Hong Kong finance worker. However, the Onfido study found that deepfake attacks increased by 3,000 percent in 2023.
The Insider Threat Risk
Today’s AI-powered security tools can detect many deepfakes using techniques such as advanced pattern recognition. They can also examine a deepfake’s digital fingerprint, the pixel-level traces left behind by the generation process. Photoplethysmography has been used to detect deepfake videos by analyzing the subtle, periodic color changes that blood flow produces in the face and neck, a signal that generated video typically fails to reproduce.
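As a rough illustration of the photoplethysmography idea, the sketch below averages the green channel of a face region in each frame, then checks how much of the signal’s power falls in the human heart-rate band. Production detectors built on this idea (Intel’s FakeCatcher is a well-known example) are far more sophisticated; this is a simplified sketch that assumes the face-region extraction happens elsewhere, and the function names are hypothetical.

```python
# Sketch of a photoplethysmography-style check (NumPy only).
# Input: mean green-channel intensity of the face region in each video frame.
# Idea: a real face shows a periodic pulse signal at roughly 0.7-3 Hz
# (about 42-180 beats per minute); many deepfakes do not.
import numpy as np

def pulse_band_ratio(green_means: np.ndarray, fps: float) -> float:
    """Fraction of signal power that falls in the human heart-rate band."""
    signal = green_means - green_means.mean()          # remove DC offset
    spectrum = np.abs(np.fft.rfft(signal)) ** 2        # power spectrum
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)  # bin frequencies in Hz
    band = (freqs >= 0.7) & (freqs <= 3.0)             # heart-rate band
    return spectrum[band].sum() / spectrum[1:].sum()   # skip the 0 Hz bin

# Synthetic demo: a "real" face with a 1.2 Hz (72 bpm) pulse vs. pure noise.
fps, seconds = 30.0, 10
t = np.arange(int(fps * seconds)) / fps
rng = np.random.default_rng(0)
real_face = 0.5 * np.sin(2 * np.pi * 1.2 * t) + rng.normal(0, 0.3, t.size)
fake_face = rng.normal(0, 0.3, t.size)

print("real:", pulse_band_ratio(real_face, fps))  # most power in the pulse band
print("fake:", pulse_band_ratio(fake_face, fps))  # power spread across all bands
```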
There’s no doubt that detection techniques will continue to improve along with deepfake technology. However, insider threats remain the most significant risk. Humans can be fooled by deepfake technology. They can also be blackmailed with nonconsensual deepfake pornography, which places the victim’s face in pornographic images or video. One study found that 95 percent of all deepfake videos online are nonconsensual porn, which could be used to coerce employees into committing financial fraud or facilitating a security breach.
Combating these threats starts with increased awareness. Organizations should update security training programs to include a discussion of the deepfake threat and tips for detecting attacks. They should also evaluate advanced technologies for detecting social engineering attacks and identifying deepfakes based on known patterns.
Prevention and Zero Trust
To reduce the risk of deepfakes, organizations should strictly limit access to sensitive data that can be used to generate them. They should also implement policies and procedures for reviewing suspicious content and develop an incident response plan.
Because it’s impossible to prevent every deepfake threat, organizations should adopt a zero trust security approach. Zero trust is a holistic defense strategy that assumes threats are already present and uses least-privilege access policies to prevent them from spreading throughout the environment. It can be enhanced with voice analysis, facial recognition and other techniques to verify user identities and flag deepfakes.
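As a simplified illustration of the least-privilege idea, the sketch below grants a request only when identity verification, device posture and role permissions all check out. The roles, actions and checks are hypothetical; real zero trust platforms evaluate far richer signals. Note how a high-risk action is denied by role, regardless of how convincing the requester seems, which is exactly the control that would have blunted the $25 million deepfake scam.

```python
# Hypothetical least-privilege access check in the zero trust spirit:
# every request is evaluated on its own; nothing is trusted by default.
from dataclasses import dataclass

# Role -> actions that role may perform (least privilege: grant only
# what the job requires, nothing more). Roles here are illustrative.
PERMISSIONS = {
    "finance_clerk": {"view_invoice"},
    "finance_manager": {"view_invoice", "approve_payment"},
}

@dataclass
class Request:
    user_role: str
    action: str
    mfa_verified: bool       # identity re-verified for this session
    device_compliant: bool   # device passed posture checks

def authorize(req: Request) -> bool:
    # Deny by default; every condition must hold for every request.
    if not (req.mfa_verified and req.device_compliant):
        return False
    return req.action in PERMISSIONS.get(req.user_role, set())

# A payment approval from a clerk is denied even with valid MFA.
print(authorize(Request("finance_clerk", "approve_payment", True, True)))    # False
print(authorize(Request("finance_manager", "approve_payment", True, True)))  # True
```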
Technologent’s security team is monitoring the deepfake threat landscape and staying abreast of developments in AI-powered security tools. We’re here to provide guidance and help you update your security strategy to address this growing threat. Contact us to schedule a confidential consultation.