Deepfake Scams: Staying Safe with AI and Cybersecurity

IT Leadership

Written by

David McBride

Published on

October 16, 2024

AI and its benefits are well known across nearly all businesses now, but how well known are the threats it poses?

While it’s common for new threats to arise as new technology pushes the boundaries of business operations, AI is restructuring the playing field for both cybersecurity and cyber threats. Generative AI has proven to be a powerful tool for businesses seeking a competitive edge. However, it also presents a new threat to companies and employees. 

For example, scammers used deepfake AI to steal $243,000 from a UK-based energy firm, tricking its CEO into believing his boss at the parent company urgently needed a transfer of funds. The caller used deepfake audio of the boss’s voice, and the money was ultimately wired to a Hungarian bank account. 

Knowing the threats of AI, best practices to limit exposure, and leveraging AI to your benefit is necessary for businesses looking to stay safe and ahead of the competition. 

What Is a Deepfake?

AI has immense power and potential to create anything a user needs, including fake images, videos, and sounds that imitate humans. While it’s entertaining to make your cat sing like Celine Dion or your dog give an acceptance speech for winning the Loyal Friend Award, deepfakes pose a severe threat. 

In cybersecurity, a deepfake is defined as:

  • Media, typically audio, video, or images, that have been altered by artificial intelligence through deep learning. 

Knowing how deepfakes are made equips you and your employees to better detect and defend against the rising threat they pose. 

How deepfakes work

Deep learning is a type of machine learning that uses multilayered neural networks to simulate the human brain. It is found in many AI features and services, including self-driving cars, credit card fraud detection, digital assistants, and generative AI. 

A common use of deepfake is to spread misinformation about politicians, celebrities, or other high-profile individuals. These videos or images often surface on social media and can go undetected until the individual being impersonated points it out. 

By altering videos through face swapping and manipulation, deepfakes can misrepresent a person or entity. The model does this by taking a video or photo of the target and mapping their face onto another person’s, often that of a stand-in actor. 

For deepfakes requiring audio, the impersonator records the target’s voice and breaks it down into smaller samples. AI then analyzes those samples to learn the voice’s characteristics, and the model produces a new recording built from that information and the original samples. 
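
To make that pipeline a little more concrete, here is a minimal, purely illustrative sketch of the “analyze the voice” stage in Python. It uses nothing but NumPy to turn audio clips into crude spectral fingerprints and compare them; real cloning systems learn far richer speaker embeddings with neural networks, and the synthesis stage is omitted entirely. The sine waves below are stand-ins for actual recordings.

```python
import numpy as np

def spectral_fingerprint(signal: np.ndarray) -> np.ndarray:
    """Crude stand-in for a speaker embedding: the normalized magnitude spectrum."""
    spectrum = np.abs(np.fft.rfft(signal))
    return spectrum / (np.linalg.norm(spectrum) + 1e-9)

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two fingerprints (1.0 means identical)."""
    return float(np.dot(a, b))

# Synthetic sine waves stand in for recorded speech clips.
sample_rate = 16_000
t = np.linspace(0, 1, sample_rate, endpoint=False)
target_clip = np.sin(2 * np.pi * 220 * t)      # the voice being cloned
generated_clip = np.sin(2 * np.pi * 220 * t)   # a synthesized sample to compare
other_speaker = np.sin(2 * np.pi * 440 * t)    # a different "voice"

target = spectral_fingerprint(target_clip)
print("clone match score:", similarity(target, spectral_fingerprint(generated_clip)))
print("other speaker score:", similarity(target, spectral_fingerprint(other_speaker)))
```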

How do impersonators obtain my face and voice?

For high-net-worth individuals, business owners, and CEOs who speak publicly, their voices are easy targets for fraudsters to replicate with AI. It’s believed that only 30 seconds of audio is needed to create an accurate deepfake recording, and a mere 3 seconds can produce an 87% match to the target voice. 

McAfee reported that 53% of people share their voice online or via recorded voice notes at least weekly. Combined with social media, audio clips of an individual’s voice are becoming increasingly easy to find. 

Additionally, any videos or photos that contain images of a face can be input into an AI model and used to create a deepfake image or video. 

With AI’s ability to scrape data across millions of websites, including social media, obtaining the material needed to create deepfake media is quick and easy. 

Businesses at Risk of Deepfake Attacks

It is not only celebrities and political figures who should be wary of deepfake attacks. Businesses, and the High-Net-Worth Individuals (HNWIs) associated with them, can also fall victim. 

Here are a few ways that businesses are at risk of deepfake attacks. 

CEO fraud

CEO fraud is a phishing attack that impersonates a CEO (or another high-level executive) to steal personal or business data. Security.org reported that at least 400 companies are targeted by CEO fraud every day. Businesses that aren’t up to date with the latest cybersecurity measures and aren’t knowledgeable about deepfake attacks are the most likely to fall for them. 

CEO fraud is most commonly associated with deepfake phishing, where scammers use deepfake technology in digital communications (email, voice messages, video calls) to get individuals to reveal sensitive information. 

  • Deepfake email phishing:
    • Business email compromise (BEC) is one of the most damaging types of phishing for businesses, and deepfake technology makes it even easier for cybercriminals to compromise enterprises. 

Deepfake technology allows criminals to build convincing profiles and emails that lure employees into divulging sensitive business data (a simple automated header check is sketched after this list). 

  • Voice messages:
    • Only 3 seconds of audio is needed to create a highly accurate voice clone. Deepfake audio recordings can be used to leave messages or even engage in live conversation.  

Accurate voice clones can make it difficult for employees to know who they are really talking to, which can lead to damaging consequences for businesses. 

  • Video messages:
    • One of the most convincing forms of deepfake phishing is the deepfake video message. Scammers use face-swapping technology to join video calls with business personnel and attempt to gain access to sensitive information or request money transfers. 

In February 2024, an employee of a multinational company based in Hong Kong was tricked into believing they were on a call with the company’s CFO. Before the employee realized it was a scam, they transferred over $25 million into the scammers’ bank accounts. 
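
Technology alone won’t catch a well-executed deepfake call, but simple automated checks can flag many BEC attempts before an employee acts on them. The sketch below, using only Python’s standard email library, compares the From and Reply-To domains of a payment-request email and flags a mismatch; the sample message and domain names are hypothetical, and a real mail pipeline would layer this on top of SPF, DKIM, and DMARC checks.

```python
from email import message_from_string
from email.utils import parseaddr

def domain_of(address_header: str) -> str:
    """Extract the domain from a header like 'CEO <ceo@example.com>'."""
    _, address = parseaddr(address_header or "")
    return address.rsplit("@", 1)[-1].lower() if "@" in address else ""

def flag_reply_to_mismatch(raw_email: str) -> bool:
    """Return True if Reply-To points to a different domain than From."""
    msg = message_from_string(raw_email)
    from_domain = domain_of(msg.get("From", ""))
    reply_domain = domain_of(msg.get("Reply-To", ""))
    return bool(reply_domain) and reply_domain != from_domain

# Hypothetical urgent payment request "from the CEO" with a look-alike reply address.
sample = (
    "From: CEO <ceo@example.com>\n"
    "Reply-To: ceo@examp1e-payments.com\n"
    "Subject: URGENT wire transfer\n\n"
    "Please send the funds today."
)
print("Suspicious:", flag_reply_to_mismatch(sample))  # True
```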

HNWI Identity Theft

Protecting your business from deepfake attacks also means protecting the identity of HNWIs associated with the company. 

Cybercriminals often need sensitive information about CEOs and other executives to successfully carry out CEO fraud and other types of attacks on your business. And if an attacker obtains personal information about an HNWI and combines it with deepfake technology, they may be able to access business bank accounts without resorting to phishing at all. 

The rise of deepfake attacks

With deepfakes becoming more and more realistic, it’s no surprise that cybercriminals are turning to deepfake attacks. In fact, 2022 saw a nearly 1,800% rise in deepfake fraud in North America and an increase of more than 1,500% in the Asia-Pacific region. 

But the rise doesn’t stop there. 

Forbes reported that deepfake fraud attempts were up 3000% year-over-year from 2022 to 2023. 

The spread of generative AI technology into everyday life is making deepfakes easier and easier to create, opening a window of opportunity for cybercriminals to target companies and HNWIs. Security.org states that creating deepfake audio recordings requires little skill and notes that, per Google Trends, searches for “free voice cloning software” increased by 120% from July 2023 to July 2024. 

Combined with how widely available generative AI is, it’s no longer just celebrities and politicians under attack; all businesses and individuals are at risk of being targeted. Even with easy-to-use AI tools freely available, criminals are still shopping daily for high-quality deepfake images and audio that can cost anywhere between $300 and $20,000 per minute, further evidence that deepfakes are on the rise and that businesses and individuals need to take precautions. 

Deepfake awareness rose from 13% to 29% between 2019 and 2022; however, a McAfee survey found that nearly 70% of people aren’t confident they can tell a genuine audio message from a deepfake. With the majority of people unable to confidently identify a deepfake, your business and its data could be at risk. 

While most early deepfake attacks focused on fintech and crypto businesses, the rising demand for deepfakes and low awareness among businesses and individuals are pushing criminals toward a wider pool of unsuspecting targets. 

Be Proactive Against Deepfake Attacks

Cybersecurity awareness has been steadily rising among companies. Adding deepfake preparedness is an essential step for all businesses and HNWIs looking to keep their data and profits safe. 

Here are a few strategies to protect you and your business from deepfake fraud.

Deepfake education

The number one way to defend against most cyberattacks, including deepfake fraud, is the ability to properly identify when an attack is happening. 

Increase employee awareness of the threat of deepfake attacks and how to identify them. Hold training sessions that emphasize never trusting a video or audio source without confirming that the person you are seeing or speaking to is real. 

Focus these sessions on identifying key signs of deepfakes, such as:

  • lip desynchronization
  • jerky eye and body movements 
  • visual inconsistencies
  • unusual or untimely requests

Additionally, phishing simulation programs can help employees better identify when they are being attacked by drawing attention to common social engineering tactics. 

Watch for deepfake fraud during video calls

  1. Observe head movement: Deepfake videos rely heavily on tracking specific facial features, including eyes, nose, and mouth. Asking someone to turn their head to remove or distort one of those focal points may cause the deepfake to glitch or, in some cases, reveal the face behind the scam. 
  2. Request a natural background: Deepfakes are more successful against a blurred or altered background. If you suspect you might be talking to a fraudster, ask the caller to switch to a natural background with no filter. Doing so could reveal inconsistencies, and a refusal to do so may itself be a warning sign. 
  3. Encourage interaction with the surrounding environment: Asking someone to interact with objects in their background can help spot a deepfake. This tactic has a dual purpose: it gives the person a reason to turn around or move, potentially occluding facial features, and it also tests their ability to respond naturally. For example, you could ask about an item you see in the background and have them describe or point out specific parts of it. This may reveal any unnatural delays or glitches that would be unlikely with a genuine person.

Use AI technology for defense

In most cases, the same AI technology that is being used to create deepfakes can be used to identify them. Using advanced technology to combat advanced attacks is essential for businesses to remain safe.

AI models can be trained to spot the same signs employees are taught to look for when identifying deepfakes: irregular lip, eye, and body movement; speech irregularities; and visual inconsistencies. 
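
As a rough illustration of how such a detector is built, the sketch below trains a tiny binary classifier on made-up feature vectors (for example, per-clip measurements of lip-sync offset, blink rate, and compression artifacts). It uses PyTorch and random data purely to show the shape of the approach; production deepfake detectors are far larger models trained on curated datasets of real and synthetic media.

```python
import torch
from torch import nn

# Toy stand-in for extracted per-clip features (lip-sync offset, blink rate, artifacts, ...).
NUM_FEATURES = 16
features = torch.randn(512, NUM_FEATURES)          # 512 made-up clips
labels = torch.randint(0, 2, (512, 1)).float()     # 1 = deepfake, 0 = genuine (random here)

model = nn.Sequential(
    nn.Linear(NUM_FEATURES, 32),
    nn.ReLU(),
    nn.Linear(32, 1),                              # raw logit; sigmoid is applied by the loss
)
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for _ in range(20):                                # short training loop on the toy data
    optimizer.zero_grad()
    loss = loss_fn(model(features), labels)
    loss.backward()
    optimizer.step()

# Score a new clip's features: probability that it is a deepfake.
new_clip = torch.randn(1, NUM_FEATURES)
print("deepfake probability:", torch.sigmoid(model(new_clip)).item())
```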

Increase security measures

Strengthening baseline cybersecurity measures, such as multi-factor authentication, strong passwords, and up-to-date software, is essential for protecting against deepfake attacks. Tightening employee processes, such as transferring money or changing account settings, is also a must. Companies can implement a zero-trust policy or require an additional layer of verification before employees can complete specific tasks. 
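
One way to picture that extra verification layer: treat any high-risk request, such as a wire transfer, as blocked until it has been confirmed through independent channels. The sketch below is a hypothetical, simplified workflow in Python (the function names, fields, and threshold are made up for illustration), not a drop-in control.

```python
from dataclasses import dataclass, field

@dataclass
class TransferRequest:
    requester: str
    amount: float
    # Channels through which the request has been independently confirmed,
    # e.g. a call back to a known phone number or sign-off from a second approver.
    confirmations: set = field(default_factory=set)

REQUIRED_CHANNELS = {"callback_to_known_number", "second_approver"}
HIGH_RISK_THRESHOLD = 10_000  # example policy: anything above this needs full verification

def approve_transfer(request: TransferRequest) -> bool:
    """Approve only if every required out-of-band confirmation is present."""
    if request.amount < HIGH_RISK_THRESHOLD:
        return True
    missing = REQUIRED_CHANNELS - request.confirmations
    if missing:
        print(f"Blocked: missing confirmations {sorted(missing)}")
        return False
    return True

# A convincing deepfake call alone is not enough to move money.
urgent = TransferRequest(requester="voice call claiming to be the CFO", amount=250_000)
print(approve_transfer(urgent))   # False until confirmations are added
urgent.confirmations |= {"callback_to_known_number", "second_approver"}
print(approve_transfer(urgent))   # True
```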

While most deepfake cyberattacks target individuals, some deepfakes are used to try to bypass specific security measures such as voice recognition and biometrics. Enabling MFA and keeping your software up to date will deter cybercriminals from accessing valuable personal or business data. 

Limit media sharing

Most advice around media sharing boils down to limiting the number of photos, videos, and audio messages you send. But in today’s world, with the importance of social media and the need to communicate worldwide, limiting what you share isn’t always possible. 

Instead, CEOs, employees, and business accounts should pay attention to where they share their information and whether those websites have privacy policies and data protections in place. Being mindful of where and with whom you share media limits unnecessary exposure to criminals. 

Use watermarks

For HNWIs and others who often have to share videos or photos of themselves, it’s essential to use a watermark. Watermarks make the media easier to trace and harder for criminals to produce fakes that don’t look altered. 
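
A visible watermark can be added with a few lines of Python using the Pillow library. This is a minimal sketch that stamps text onto a generated placeholder image; in practice you would load your own photo, and you may prefer a tiled, semi-transparent, or invisible (steganographic) mark applied by a dedicated tool.

```python
from PIL import Image, ImageDraw, ImageFont

# Placeholder image standing in for a real photo; swap in Image.open("your_photo.jpg").
image = Image.new("RGB", (800, 600), color=(180, 200, 220))

draw = ImageDraw.Draw(image)
font = ImageFont.load_default()
watermark_text = "(c) Example Corp - not for reuse"

# Stamp the mark in the lower-right corner.
text_box = draw.textbbox((0, 0), watermark_text, font=font)
text_width, text_height = text_box[2] - text_box[0], text_box[3] - text_box[1]
position = (image.width - text_width - 20, image.height - text_height - 20)
draw.text(position, watermark_text, font=font, fill=(255, 255, 255))

image.save("watermarked.png")
```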

Use advanced biometrics

Using advanced biometrics is an option for businesses and HNWIs looking to protect precious data. While some deepfake technology can make clones accurate enough to pass simple biometric tests, advanced biometric tools are often too complex for deepfakes to pass. 

Fingerprint and palm scanners are another form of biometrics that can prevent deepfakes from accessing sensitive data. Deepfakes currently cannot mimic finger or palm prints, making these scanners one of the safest ways to add an additional layer of security to your business. 

AI and blockchain

Combining AI with blockchain creates a verifiable history of digital content, limiting criminals’ ability to circulate deepfakes that go undetected. 
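
The core idea can be sketched with nothing more than a cryptographic hash: fingerprint a piece of media when it is published, record that fingerprint somewhere tamper-evident, and re-check it later. The example below uses Python’s standard hashlib and a plain in-memory list as a stand-in for the ledger; a real deployment would anchor the hashes to an actual distributed ledger or provenance service.

```python
import hashlib

ledger: list[dict] = []  # stand-in for an append-only blockchain ledger

def fingerprint(media_bytes: bytes) -> str:
    """SHA-256 fingerprint of a piece of media."""
    return hashlib.sha256(media_bytes).hexdigest()

def register(name: str, media_bytes: bytes) -> None:
    """Record the fingerprint at publication time."""
    ledger.append({"name": name, "sha256": fingerprint(media_bytes)})

def verify(name: str, media_bytes: bytes) -> bool:
    """Check a piece of media against its registered fingerprint."""
    return any(
        entry["name"] == name and entry["sha256"] == fingerprint(media_bytes)
        for entry in ledger
    )

original = b"official CEO statement video bytes"        # stand-in for real file contents
register("ceo_statement.mp4", original)

print(verify("ceo_statement.mp4", original))            # True: untouched
print(verify("ceo_statement.mp4", original + b"edit"))  # False: altered or deepfaked copy
```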

Report deepfakes

If you encounter deepfake media or experience a deepfake attack, report it to the authorities quickly. You can also report the media on the site where it was found and request that it be taken down. 

Implement zero trust

Zero trust is a cybersecurity model that requires every request for access to information to be verified; no source is trusted without verification. 

By implementing zero trust, you limit the ability of users and criminals to access vital business data. Even if a cybercriminal fools an employee, the request still has to clear an additional layer of verification before funds are transferred or data is exposed. 
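
In application code, zero trust often shows up as verification enforced on every sensitive call rather than trusting a session once. Below is a hypothetical, stripped-down sketch in Python: each request carries claims that must be freshly verified before the guarded action runs. Real implementations lean on identity providers, device posture checks, and short-lived tokens.

```python
from functools import wraps

def freshly_verified(request: dict) -> bool:
    """Stand-in for verifying identity, device, and MFA on every single request."""
    return bool(request.get("mfa_passed")) and bool(request.get("device_trusted"))

def zero_trust(action):
    """Refuse to run a sensitive action unless the request verifies right now."""
    @wraps(action)
    def guarded(request: dict, *args, **kwargs):
        if not freshly_verified(request):
            raise PermissionError("Request denied: verification required for every access.")
        return action(request, *args, **kwargs)
    return guarded

@zero_trust
def transfer_funds(request: dict, amount: float) -> str:
    return f"Transferred ${amount:,.2f}"

verified = {"user": "ceo", "mfa_passed": True, "device_trusted": True}
spoofed = {"user": "ceo", "mfa_passed": False, "device_trusted": True}

print(transfer_funds(verified, 5_000))   # succeeds
try:
    transfer_funds(spoofed, 5_000)
except PermissionError as err:
    print(err)                           # blocked even though the username looks right
```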

Cybercriminals are gaining momentum and attention with deepfake attacks, and the problem is only expected to worsen over the coming years. Current methods of keeping business data safe are becoming obsolete, and new measures must be taken. 

The Power of an IT Service Provider

Advancements in AI are empowering criminals to do more than ever before, but the same is true for protecting your business. AI and machine learning are powerful tools that can and should be used to prevent even the most sophisticated deepfake fraud attempts. 

About 1 in 4 company leaders are unfamiliar with deepfake technology, and 31% of executives believe that deepfakes pose no threat to their company. Yet more than 10% of companies have already dealt with a deepfake fraud attempt, with damages from successful attacks reaching 10% of the business’s annual profit. 

One critical advantage of AI in cybersecurity is its ability to analyze massive amounts of data in real time. Combined with behavioral analytics, predictive threat detection, and automated incident response, AI can position your business to guard against attacks of all kinds, including deepfakes. These new and threatening forms of cyberattack require a more proactive and comprehensive approach to security, and this is where a dedicated cybersecurity partner can make all the difference.

Working with a dedicated cybersecurity partner moves your business from a reactive position to a proactive one, putting it ahead of the 62% of companies that aren’t proactively defending against cyber threats. 

99Ten focuses on protecting your business at every step, keeping it secure in today’s threat-filled landscape:

  • Access to cutting-edge technology: Cybersecurity firms invest heavily in the latest AI and ML technologies, providing clients with defenses that may be out of reach for individual businesses. 
  • 24/7 monitoring: Cyber attacks don’t adhere to business hours. A dedicated cybersecurity partner provides round-the-clock monitoring and can respond to threats in real-time, regardless of when they occur. 
  • Expertise and experience: Cybersecurity professionals stay current on the latest threats and defense strategies. Their expertise across multiple clients and industries provides invaluable insights and best practices. 
  • Customized security strategies: Every business has unique vulnerabilities and requirements. A cybersecurity partner can develop tailored strategies that address specific needs and risk profiles. 
  • Continuous adaptation: As threats evolve, so must defenses. A good cybersecurity partner constantly updates and refines its approach, ensuring defenses remain effective against emerging threats. 

With rising deepfake fraud attempts and new ways to steal data and money from businesses, companies can’t afford to wait around without a proactive defense in place. For businesses to become proactive, they must commit to continuously updating and realigning their cybersecurity with new threats. 

By partnering with 99Ten, companies are ensuring that their data is staying safe and their business remains ahead of the cybersecurity curve. Our advanced AI-powered solutions, continuous monitoring, and expert guidance provide the peace of mind you need to focus on what you do best—growing your organization. Don’t wait for an attack to happen. Your business’s future may depend on the actions you take today. If you’re ready to learn more about what 99Ten can do for you, book your comprehensive IT discovery here.