- March 5, 2022
- Posted by: administrator
- Category: Sectigo
Business email compromise (BEC) and other spear phishing attacks have long been a favorite for bad actors looking to steal cash from unsuspecting victims.
The idea is simple: get employees to send money or information by impersonating a person in a position of power via email. These days, employees may consider themselves experts at sniffing out untrustworthy communications. Unfortunately, bad actors know this, and they’ve added a new component to their schemes: AI. We’ve entered the era of deepfake phishing, and it could have far-reaching security implications.
What’s a “Deepfake?”
Deepfake technology allows users to impersonate others with startling accuracy. Bad actors have access to technology that teaches neural networks to create fakes by learning from existing images and videos of the target. While the most prominent examples of deepfakes focus on celebrities or politicians, just about anyone can use technology to create fake media about anyone else. All the creator needs are images, videos, and audio recordings of the target.
Most people don’t realize how far deepfake technology has come in recent years—or how easy it is to use. It’s not a technology reserved for computer whizzes and criminal masterminds. Anyone can seek out deepfake software and services on the internet and have a relatively convincing representation of another person within minutes.
The widespread availability of AI deepfake technology invites two questions:
- Can we trust anything we see or hear?
- What do deepfakes mean for identity verification?
Can We Trust Our Senses?
The short answer is no. Not anymore. Deepfake videos can be hard to identify, especially if the impersonated individual appears to be acting in a reasonable manner. The challenge grows when the attack shifts to a medium we are more comfortable with. Many people view text-based internet communication with skepticism, but what about a phone call from a manager, client, or CEO? A bank manager in the United Arab Emirates fell victim to a phishing attack executed this way in 2020. The scam—which resulted in the manager transferring $35 million—was at least the second time a deepfake enabled a successful phishing scheme. In an earlier instance, bad actors impersonated a company’s CEO by voice to get an employee to transfer €220,000.
What It Means for Security
The growing sophistication of deepfakes and the availability of the technology needed to make them may have serious implications for security procedures. As password use declines, biometrics have risen as a trusted form of identity validation. It makes sense. Until very recently, most people would never have imagined it possible to create such realistic representations of another person. However, deepfake technology allows physical attributes—like irises, voices, and faces—to be replicated with relative ease.
With that in mind, security professionals and individuals alike must remember that deepfakes may be used against them. Don’t let it be an afterthought. Anyone who might be considered a high-value target should choose biometric authentication methods with care—and with the understanding that, as deepfakes grow more sophisticated, some biometric authentication methods may be rendered useless.
NOTE: This article is copyrighted by Sectigo and is used here for educational and informational purposes only.