Deepfake technology is outpacing our ability to spot it. That could be bad news for cybersecurity.
When you hear the term “deepfake,” you probably think of synthetic reproductions of politicians or celebrities. But, starting now, you should also think about your boss.
Remote work is putting companies at greater risk of deepfake phishing attacks, executives at Technologent warned during a cybersecurity webinar last week. In a deepfake attack, criminals use synthetic audio to mimic the tone, inflection and idiosyncrasies of an executive or other employee. Then, they ask for a money transfer or access to sensitive data.
Concern about deepfakes, or technology that uses machine learning to realistically recreate the face, body or voice of a real person, has been on the rise as open-source tools like DeepFaceLab and Avatarify gain traction. Meanwhile, Facebook released its Deepfake Detection Challenge data set in June to help researchers develop ways to identify deepfakes, and the U.S. House of Representatives passed legislation in 2019 funding further research.
So far, actual deepfake attacks have been few and far between. There was a high-profile case in 2019, in which criminals used the tech to impersonate the CEO of a German conglomerate and steal almost $250,000 from a U.K.-based energy company, and Technologent reported three cases among its clients last year.
Accessible deepfake detection methods are still under development, leaving companies and employees exposed to the new form of exploitation. But our bosses and coworkers are people we know — usually, we talk to them every day. Could an attacker really imitate them convincingly enough to fool us?
According to Technologent’s panel, the answer is yes. That’s because bad actors, as it happens, are actually pretty great actors — and they research their roles thoroughly.
The widespread switch to remote work brought on by the pandemic has come with sweeping IT challenges for companies — cybersecurity among them. Some of those challenges are technical, but others simply come from the increased likelihood of human error when employees are separated from each other.
Take this example Technologent Chief Information Security Officer Jon Mendoza offered:
The CFO of one of Technologent’s client companies received an email from his CEO, who he knew was boarding an airplane, requesting an urgent payment to a third party to avoid late penalties. Then, he received a text from the CEO, checking to make sure he got the email. Eager to avoid the fees, he forwarded the email to accounts payable, and that was that. Days later, the CFO mentioned the last-minute payment over drinks, and his boss had no idea what he was talking about.
In this story, it’s clear what went wrong: The CFO didn’t want to disturb his boss as he boarded the airplane, so the whole interaction was conducted via email and text. With workforces operating remotely, criminals have even more ways to manipulate unsuspecting employees — whether that involves deepfake audio, a really good impression or a perfectly timed email.
Often, the attack begins long before the final strike, with attackers identifying a target and watching that target’s behavior in the workplace.
“The timetable is so much more extended than what we’ve seen in the past,” Technologent security practice director David Martinez said. “They may survey that victim for three months, four months, five months, a year.”
Whether it’s Facebook, LinkedIn or a compromised professional email account, attackers find ways to get to know their victims and understand the company’s internal workflows. That way, they can identify weak links in a company’s processes or busy moments when employees are particularly vulnerable.
“We’re creatures of habits,” Mendoza said. “So if I can get you in a state where you’re busy, you’re distracted, and I know all of your habits and your organization’s habits, I might not even need to do synthetic audio. It could just simply be impersonating an executive, which is most often what we see, or perhaps impersonating a trusted third party.”
When attackers do rely on synthetic audio, that audio comes from deepfake software trained on existing recordings of the target speaking on phone calls, at conferences or in press conferences. The output keeps getting closer to natural human speech, and that plays into our cognitive biases: My boss speaks in a particular cadence, so if the person leaving me a voicemail speaks in that cadence, it must be him, right?
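The flip side is that the same machine-learning building blocks can help verify a voice. As a purely illustrative sketch (not a method Technologent described), the open-source Resemblyzer library can compare a suspicious voicemail against known-genuine recordings of the same speaker; the file names below are hypothetical, and a low similarity score is a reason to pick up the phone, not proof of a fake.

```python
# Illustrative only: compare a suspicious voicemail against known-genuine
# recordings of the same speaker using pretrained speaker embeddings.
# Assumes `pip install resemblyzer`; all audio file paths are hypothetical.
import numpy as np
from resemblyzer import VoiceEncoder, preprocess_wav

encoder = VoiceEncoder()  # loads a pretrained speaker-embedding model

# Embed recordings we trust, e.g. past meetings with the real executive
genuine = [encoder.embed_utterance(preprocess_wav(path))
           for path in ["ceo_meeting_1.wav", "ceo_meeting_2.wav"]]
reference = np.mean(genuine, axis=0)
reference /= np.linalg.norm(reference)  # re-normalize the averaged embedding

# Embed the voicemail we're unsure about
suspect = encoder.embed_utterance(preprocess_wav("urgent_voicemail.wav"))

# Embeddings are unit-length, so a dot product gives cosine similarity
similarity = float(np.dot(reference, suspect))
print(f"Speaker similarity: {similarity:.2f}")
# A noticeably low score is a weak signal, not a verdict; verify out of band.
```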
“That’s what’s really made the COVID-19 impact here. Those people that you would have gotten up and walked down the hall to say, ‘Are you sure you want me to transfer this money?’ those people are not in the office together anymore,” Martinez said.
Given the amount of planning criminals put into these attacks, it’s understandable when employees are duped. But the fallout can be rough, Martinez told Built In. First, there are the water cooler conversations. (“Did you hear? Marcy fell for a phone scam and gave all our money away.”)
Then, there’s the reaction from executives, who too often equate investments in security toolkits with all-around protection from fraud. Unfortunately, Martinez said, there’s no one-and-done prevention strategy for social engineering.
“When an event like this happens, initially, there’s some disbelief, and there’s some anger on the part of executive management because they feel like they spent money on all of these, quote, security tools, and they were useless,” he added. “[Executives] feel like they’re going to be viewed as lax or ill prepared when, really, this is a very nefarious and insidious type of attack.”
View the full article on Built In: https://builtin.com/cybersecurity/deepfake-phishing-attacks