
Technology is a double-edged sword, offering both convenience and risk. One of the fastest-growing threats is the AI-driven deception scam built on deepfakes, according to Perry Carpenter, Chief Human Risk Management Strategist at KnowBe4.
Deepfakes clone the voices and faces of people we trust, making fraud attempts far more convincing than traditional phishing emails. These scams target individuals and businesses alike, and they exploit trust. The good news: there are steps you can take to protect yourself, starting with code words.
An Example of a Deepfake Scam
You get a late-night call from your grandchild, crying. They explain they were arrested while traveling with friends and need money for bail. The voice is unmistakable, with the same tone and inflection you know so well. In a panic, you wire the money. In the morning, you learn your grandchild is safe at home and never made the call. An AI voice clone tricked you.
In 2024, a company employee was tricked into wiring $25 million after joining a video call with what looked and sounded like the company's CEO and other senior leaders. In reality, the entire video call was a deepfake.
Why These Scams Are So Dangerous
What makes deepfakes dangerous is that scammers use AI tools to generate realistic voice, video, and text. They can match a person's mannerisms and surroundings, making the impersonation highly believable.
How to Protect Yourself from Deepfake Scams
The best defense is preparation. Carpenter shares some practical steps you can take to defend against deepfakes.
With friends and family (and your bank):
- Set up code words - Agree on a phrase or question that only you and your family know, and use it to verify identities if you receive a call or message that seems suspicious. Do the same with your bank: if someone calls claiming to be from your bank, ask them for the code word, and if you call your bank, they may ask you for it. It's an added layer of protection against impersonation.
- Pause and verify - Before acting on high-pressure requests, even if they seem to come from someone you trust, tell the caller you'll call them back.
- Call back using a number you have verified, not the phone number given to you.
- If they claim to be in jail, ask which jail so you can call it and confirm whether they have actually been arrested.
- If your boss asks you to make a wire transfer, confirm the request in person or through another channel.
- Limit what you share publicly - The more family photos, videos, and personal details you share online, the easier it is for scammers to clone your face and voice and to gather the information they need to impersonate you or other members of your family.
- Learn to recognize red flags - Deepfake red flags are similar to those of other scams:
- The caller makes an unusual request for money or payment, such as gift cards, cryptocurrency, or a wire transfer.
- They create a sense of urgency, fear, or secrecy, e.g., "don't tell anyone else."
- Listen and watch for audio and video glitches, such as delayed responses, unnatural expressions, and lip-sync errors.
- Practice fact-checking, and question too-perfect videos or voices.
For Businesses
- Employee training - Teach staff to recognize AI-driven scams, just as you do with phishing awareness.
- Verification protocols - Require multi-factor checks for fund transfers and sensitive instructions. A codeword system with employees and clients can serve as an extra layer of identity assurance.
- AI-powered defenses - Adopt fraud detection tools that monitor anomalies in voice, video, or transaction behavior.
- Incident response plans - Create playbooks that include rapid fraud assessment and clear escalation paths for suspected AI-enabled deception.
Stay Informed
Source: AI, Deepfakes, and the Future of Financial Deception: https://www.sec.gov/files/carpenter-sec-statements-march2025.pdf