
Deepfake Phishing: The AI-Powered Social Engineering Threat Putting CISOs on High Alert in 2025

Published on June 3, 2025

In 2024, British engineering firm Arup lost approximately $25 million after scammers used AI-generated deepfakes to impersonate the company’s CFO and trick an employee into transferring funds. This high-stakes heist - conducted via a convincing video call with a fake CFO - is a stark example of deepfake phishing, where attackers blend cutting-edge AI with classic social engineering to defraud organizations. Deepfakes are AI-manipulated images, videos or audio recordings that look and sound real, enabling criminals to mimic trusted voices and faces with alarming realism.

And Arup is not alone. Just over half of businesses in the U.S. and U.K. have already been targeted by a deepfake-powered scam, and 43% have fallen victim to such attacks. No wonder 85% of finance professionals now view these AI-powered social engineering scams as an “existential” threat to their organization’s financial security. 

For CISOs and cybersecurity leaders, deepfake phishing represents a fast-emerging crisis - one that exploits the human element in new ways and threatens to bypass traditional defenses.

The Rise of Deepfake Phishing

The convergence of generative AI tools and age-old con artistry has created a perfect storm. According to CrowdStrike, AI-based voice cloning attacks skyrocketed by 442% between the first and second half of 2024 - a staggering indicator of how quickly threat actors are adopting this technique. While deepfake scams are still new to many companies, some cases date back to at least 2018, showing that criminal experimentation with deepfake phishing has been quietly brewing for years.

One of the first widely reported incidents occurred in 2019, when fraudsters cloned a CEO’s voice and convinced a subordinate to wire $243,000 to a bogus account. That early voice phishing (vishing) scam foreshadowed a wave of more ambitious attacks. Today, advances in AI have dramatically lowered the barrier to entry - as Deloitte notes, there’s now an entire dark web cottage industry selling AI-driven scamming tools for as little as $20, a “democratization” of fraud tech that is making many anti-fraud defenses less effective. In fact, even free tools are proliferating: searches for “free voice cloning software” jumped 120% in a single year, according to security.org, and modern algorithms need only about three seconds of audio to produce a voice clone that is roughly an 85% match for the target.

Deepfake technology can create highly convincing impersonations by mapping one person’s face or voice onto another. Most people struggle to tell apart bogus AI-generated media from real recordings.

The result is that cybercriminals can now weaponize trust at scale. “More and more criminals are seeing deepfake scams as an effective way to get money from businesses,” observes Ahmed Fessi, CIO at finance firm Medius. These attacks combine phishing techniques and social engineering “plus the power of AI” – making them perilously effective at tricking even savvy users. It’s little surprise that generative AI fraud is forecast to inflict $40 billion in losses by 2027 if left unchecked.

Real-World Incidents of Deepfake Phishing

Deepfake phishing is no longer theoretical - it’s happening in the wild, targeting companies large and small. Recent examples include:

  • $25 Million CFO Heist: Criminals used a deepfake video and voice to pose as a company’s CFO (and even other employees) in a live video meeting, siphoning $25 million via fraudulent transfers before the scam was discovered. This incident at Arup in 2024 stands as one of the largest deepfake-enabled corporate heists to date.

  • Fake CEO Phone Call: Back in 2019, scammers mimicked a CEO’s voice on the phone to trick an employee into wiring about $243,000 - one of the first known deepfake voice frauds on record. This early case showed how a well-timed fake call could bypass normal controls by exploiting trust and authority.

  • Ad Giant WPP Targeted: In 2024, advertising group WPP was the target of a deepfake scam that attempted to impersonate an executive. Fortunately, that attempt was identified and foiled before any money was lost, but it proved that even global firms are in the crosshairs of deepfake fraudsters.

  • Impersonating Government Officials: In 2025, the FBI warned of hackers using AI voice cloning to impersonate senior U.S. officials in phishing campaigns. In these vishing attacks, targets received voice messages that sounded like trusted public figures, aiming to lend authority and urgency to fraudulent requests.

  • Red-Team Ruses: Even cybersecurity professionals have demonstrated the potency of deepfakes. Mandiant reported that in 2023 their red-team testers successfully used an AI-generated voice to impersonate a company’s employee and convince a colleague to grant network access - allowing the team to slip past defenses and plant a payload inside the network. This “friendly” attack showed how easily a deepfaked voice could undermine standard security protocols.

These incidents reveal a common pattern: imitation + social manipulation = breach.

As one expert put it, criminals can now “use it to social engineer a situation, usually for financial gain” – for example, by cloning a boss’s voice to urgently request an unexpected fund transfer. In many cases, the attackers also create a false sense of urgency to pressure the victim (e.g. “we need this payment immediately”). By coupling legitimacy and urgency, deepfake phishing lures have an alarming success rate.

Why Deepfake Phishing Is a CISO’s Nightmare

For CISOs, deepfake phishing combines the worst of two worlds: highly sophisticated deception aimed squarely at the organization’s most fundamental vulnerability - its people. Despite all the cybersecurity investments, the human factor remains the weakest link (an estimated 74% of breaches involve human error or social engineering). Now, deepfakes pour fuel on that fire by making social engineering far more convincing. An unwitting employee might hear their CEO’s voice or see a familiar face on screen and, naturally, let their guard down.

From a defensive standpoint, these attacks are fiendishly hard to detect. Traditional technical controls (email filters, malware scanners, etc.) won’t catch a fake audio call or a manipulated video in a Zoom meeting. The usual phishing “red flags” - strange email domains, misspellings, odd grammar - don’t apply when the request is delivered via a seemingly genuine voice or video of a trusted person. In deepfake scenarios, the attack looks and sounds legitimate. As a result, the burden falls heavily on the human recipient to sniff out something “off” in the interaction, which is a tall order under pressure. Attackers often exploit that pressure: they invoke authority and urgency, urging targets to skip verification steps. (In the Arup case, the fraudsters even adopted a confidential tone and imposed a tight deadline to push the employee into quick action.)

The Psychological Aspect of Deepfake Phishing

The psychological stakes are high. Deepfake phishing plays on emotions and trust: employees want to be helpful to their boss or responsive to an important client. When a deepfake triggers those impulses, even well-trained staff can be caught off-guard. It’s telling that 91% of security managers doubt the effectiveness of their traditional security training against advanced phishing attacks. Phishing awareness programs have improved, yet many leaders worry they aren’t enough - and the advent of AI-driven attacks validates those fears.

Making matters worse, most organizations today are ill-prepared specifically for deepfake threats. According to a 2024 study, over 80% of companies have no formal protocol for handling deepfake attacks, and more than half admit their employees lack training in spotting or dealing with deepfakes. Only a tiny minority (around 5%) say they have comprehensive measures in place (spanning staff training, communication safeguards, and process controls) to counter deepfake scams. There’s also an executive awareness gap: roughly 1 in 4 company leaders has little to no familiarity with deepfake technology, and 31% of executives do not believe deepfakes pose a fraud risk to their business. This underestimation can translate into lack of budget or support for preventive measures - a major frustration for CISOs waving the warning flag. In short, deepfake phishing presents a perfect storm of high impact, low organizational readiness, and it squarely targets the one defense that technology alone can’t shore up: the human mind.

Defending Against Synthetic Identity Cyber Threats

To combat deepfake phishing and other synthetic identity cyber threats, organizations need to take a multi-pronged, human-centric approach. It’s not enough to rely on technical safeguards; CISOs must elevate their focus on awareness, process, and validation. Consider the following pillars of defense:

  • Education & Awareness: Ensure everyone in the organization knows what deepfakes are, how to recognize warning signs, and what to do if they suspect a fake. Every employee should have a basic understanding of deepfake threats, with specialized training for high-risk roles such as senior executives and finance teams.

    Regular security awareness training should now include deepfake examples – e.g. playing a fake voice message vs. a real one – to build skepticism of “unexpected” requests. Many forward-leaning teams are using advanced phishing simulations (including voice phishing and video scenarios) to give employees safe practice against AI-powered social engineering. (In fact, effective phishing training can reduce successful attacks by up to 90%, so an investment here pays dividends.)

    It’s critical that security training evolves from a compliance exercise into a true human risk management program - one that continually tests, measures and improves employees’ ability to resist sophisticated scams.

 

  • Verification Processes: Because deepfake attacks thrive on impersonation, companies should strengthen verification and approval processes for sensitive transactions and requests. For example, require multi-person sign-off for large fund transfers or changes in payment instructions - a control that has helped firms catch fraud in time. Implement call-back or face-to-face verification for any request that involves money or data and arrives via voice or video message; a quick secondary confirmation using a known legitimate contact (e.g. calling the supposed requestor on an official number) can expose a fake. It’s wise to document these protocols and educate staff on them ahead of time.

    Drilling the mantra “trust, but verify” into the culture is key - employees should feel empowered to pause and double-check unusual directives, even if they appear to come from the CEO. In the moment, that extra verification step is what can break the attack chain. (A minimal code sketch of such a verification workflow appears after this list.)

 

  • Technology & Detection: Finally, leverage technology to complement the human and process defenses. Advanced fraud detection tools, for instance, can flag anomalous payment requests or changes in behavior that slip past people. Using AI/ML-based analytics on transaction patterns (with multi-level approval workflows) can help spot when “something just isn’t right” in a request. Biometrics and authentication measures can also be tuned to detect deepfake artifacts – for example, some voice authentication systems are being updated to detect audio spoofing attempts. 

    It’s early days for deepfake detection tech, but keeping an eye on developments here is prudent. At the very least, ensure your incident response plans cover deepfake scenarios (e.g. steps to validate communications, takedown procedures for fake content, public relations strategies if an executive’s likeness is abused). A prepared team can react faster and limit the damage if a deepfake phishing attack slips through.
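To make the verification and detection pillars more concrete, here is a minimal sketch, in Python, of how the layered controls described above could fit together: a call-back confirmation against a number kept on file, multi-person sign-off above a threshold, and a crude out-of-pattern check on the amount. Every name, threshold, and directory entry here is a hypothetical placeholder for illustration, not a prescribed implementation.

```python
from dataclasses import dataclass, field

# Hypothetical values for illustration only -- tune to your own policies.
LARGE_TRANSFER_THRESHOLD = 50_000          # amounts above this need dual sign-off
VERIFIED_DIRECTORY = {                     # contact numbers maintained out of band
    "cfo@example.com": "+1-555-0100",
}

@dataclass
class PaymentRequest:
    requester: str                  # claimed identity, e.g. "cfo@example.com"
    channel: str                    # "email", "voice", "video", ...
    amount: float
    approvals: set = field(default_factory=set)
    callback_confirmed: bool = False

def confirm_by_callback(request: PaymentRequest) -> None:
    """Record a call-back check: dial the number on file, never one supplied in the request."""
    if request.requester in VERIFIED_DIRECTORY:
        # In practice a human places this call and records the outcome.
        request.callback_confirmed = True

def can_release_funds(request: PaymentRequest, typical_amount: float) -> bool:
    """Apply the layered controls: call-back, multi-person sign-off, out-of-pattern flag."""
    if request.channel in {"voice", "video"} and not request.callback_confirmed:
        return False    # deepfake-prone channels require a call-back first
    if request.amount >= LARGE_TRANSFER_THRESHOLD and len(request.approvals) < 2:
        return False    # multi-person sign-off for large transfers
    if typical_amount > 0 and request.amount > 5 * typical_amount:
        return False    # crude anomaly flag: far above the requester's normal pattern
    return True

# Example: an "urgent" video-call request for $25M, supposedly from the CFO
req = PaymentRequest(requester="cfo@example.com", channel="video", amount=25_000_000)
confirm_by_callback(req)
req.approvals.update({"controller", "treasury_lead"})
print(can_release_funds(req, typical_amount=40_000))   # False: amount is wildly out of pattern
```

Even this toy version illustrates the design principle: no single person, channel, or check is trusted on its own, so a convincing fake voice or face cannot by itself release funds.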

 

Crucially, people remain at the center of the defense. 

As cybersecurity leaders, we must foster a culture of curiosity and caution – encouraging employees to question things that “feel off,” no matter who appears to be asking. Many traditional security awareness programs were built for compliance, not resilience, and that mindset needs to shift. Building true resilience means training beyond rote exercises, instead immersing users in realistic simulations and teaching them to think critically under pressure.

Some leading organizations are already embracing this approach. For example, Jericho Security uses generative AI to create hyper-realistic phishing simulations that “feel like interacting with real people,” allowing employees to practice spotting deepfake tricks in a safe environment (jerichosecurity.com). By treating employees as an active defense layer – a strategy known as human risk management – companies can continuously measure and improve their human firewall. The goal is to transform that “existential threat” of deepfakes into just another risk that is understood and mitigated through savvy policy and training.

Conclusion: Staying Ahead of the Deepfake Curve

Deepfake phishing is here to stay, and its impacts are already being felt in boardrooms and bank accounts. The silver lining is that awareness is growing, and tools and strategies to fight back are emerging. Enterprise security leaders who act now – by educating their workforce, tightening verification controls, and leveraging modern training solutions – can dramatically lower the odds of being the next victim. In the face of AI-powered fraud, a proactive, informed team is the best defense.

Ready to turn the tables on deepfake phishing? 

Equip your organization with cutting-edge training that builds true resilience. Try Jericho Security’s platform free for 7 days and see how AI-powered phishing simulations and a human-centric defense strategy can fortify your company against even the most convincing scams. Protect your team by empowering your people – the first and last line of defense in an age of deepfakes.