Summary

In January 2024, a finance employee in Arup’s Hong Kong office joined what appeared to be a routine video conference with the company’s UK-based CFO and several other senior colleagues. During the call, the employee was instructed to authorize a series of wire transfers. By the end of the meeting, 15 transactions totaling approximately US$25 million had been executed. Every participant on the video call, including the CFO, was later confirmed to be an AI-generated deepfake.

As reported by CNN in May 2024, the incident represents a turning point in corporate fraud. This was not a phishing email or a spoofed phone call. The attackers successfully staged an entire multi-person video conference, complete with realistic visuals, voices, and interactions, demonstrating how deepfake technology can now bypass traditional verification methods and exploit trust inside finance teams.

The Attack Timeline: How It Unfolded

Initial Contact: The Phishing Email

The attack began with an email that appeared to come from Arup's Chief Financial Officer, who was based in the United Kingdom. The message referenced a "confidential transaction" and requested multiple urgent payments.

The finance employee's initial reaction was appropriate: suspicion. The request for secret, immediate transactions raised red flags: exactly the kind of phishing indicator that security training emphasizes.

The Video Call: Overcoming Skepticism

Here's where the attack evolved beyond traditional fraud. To overcome the employee's doubts, the attackers arranged a video conference call. When the employee joined, they saw and heard what appeared to be their CFO and several recognizable colleagues discussing the urgent transactions.

According to Hong Kong police Senior Superintendent Baron Chan Shun-ching, who briefed the media on the incident: "In the multi-person video conference, it turns out that everyone [he saw] was fake."

The deepfakes were sophisticated enough to fool someone who knew these executives personally. The voices matched. The faces were recognizable. The meeting followed a plausible business scenario. Every visual and auditory cue signaled legitimacy.

The Transfers: $25 Million Across 15 Transactions

Convinced by the video evidence, the employee proceeded to execute the requested transfers. Over the course of the interaction, the employee made 15 separate transactions to five different Hong Kong bank accounts, totaling HK$200 million (approximately US$25.6 million).

The fraud wasn't discovered until the employee followed up with Arup's actual headquarters to confirm the transactions, by which point the money had already been moved by the criminals.

How the Attackers Created the Deepfakes

The Hong Kong police investigation revealed that the perpetrators developed their AI-generated deepfakes by leveraging publicly available material. Specifically, they used existing video and audio files from:

  • Online conferences

  • Virtual company meetings

  • Publicly accessible corporate events

  • Investor presentations

This is a critical point for CFOs and finance leaders: Your routine business activities provide the raw material for these attacks.

As Rob Greig, Arup's Global Chief Information Officer, emphasized in a statement to CNN:
“Like many other businesses around the globe, our operations are subject to regular attacks, including invoice fraud, phishing scams, WhatsApp voice spoofing, and deepfakes. What we have seen is that the number and sophistication of these attacks has been rising sharply in recent months.”

What Makes This Case Different?

Not a System Breach

This wasn't a traditional cyberattack. Arup confirmed that none of its internal systems were compromised, no data was stolen, and no passwords were cracked. The attackers never needed to hack anything.

Instead, they exploited human psychology and trust. As Greig noted, this was "technology-enhanced social engineering" rather than a technical breach.

Multi-Person Video Deepfakes

Before this incident, most deepfake fraud involved one-on-one interactions, a single fake video call or audio message. The Arup case demonstrated that attackers could orchestrate an entire meeting with multiple AI-generated participants interacting naturally.

This represents a significant escalation in deepfake capabilities and fundamentally changes what finance teams need to defend against.

No System Can Detect This

Traditional cybersecurity measures such as firewalls, endpoint protection, and multi-factor authentication all functioned correctly during this attack. The security breach occurred at the human decision-making level, where an employee made what appeared to be a rational business decision based on seemingly legitimate visual confirmation.

Three Critical Lessons for CFOs

1. Public Visibility Creates Vulnerability

Every time you appear on an earnings call, speak at a conference, or post a video on LinkedIn, you're creating source material that can be used for deepfake attacks. This isn't a reason to stop these activities (they're essential to your role), but it is a reason to implement stronger verification protocols.

The more publicly visible your voice and appearance, the easier it is for attackers to create convincing deepfakes of you.

2. Video Calls Are No Longer Definitive Proof

For years, security experts recommended video calls as a way to verify identity when email or voice requests seemed suspicious. The Arup case invalidates that advice.

Video verification can still be useful, but only if:

  • The call is initiated through verified corporate channels (not via a link sent by the requester)

  • Additional out-of-band verification is required for high-value transactions

  • The communication platform is controlled and logged by your organization
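
The three conditions above reduce to a simple policy gate: a high-value request is only actionable once it has been verified on a channel independent of the one it arrived on. Here is a minimal Python sketch of that gate; the threshold, channel names, and `TransferRequest` shape are illustrative assumptions, not a description of any real control framework:

```python
from dataclasses import dataclass, field

# Illustrative threshold: requests at or above this amount require
# out-of-band verification before they can be executed.
HIGH_VALUE_THRESHOLD_USD = 100_000

@dataclass
class TransferRequest:
    amount_usd: float
    requested_via: str                       # channel the request arrived on, e.g. "video_call"
    verified_channels: set = field(default_factory=set)

def is_authorized(request: TransferRequest) -> bool:
    """Authorize a high-value request only if it was verified on at
    least one channel independent of the one it arrived on."""
    if request.amount_usd < HIGH_VALUE_THRESHOLD_USD:
        return True  # low-value requests follow normal controls
    independent = set(request.verified_channels) - {request.requested_via}
    return len(independent) >= 1
```

Under this rule, a US$25 million request made and "confirmed" entirely within one video call would be rejected, no matter how convincing the participants appeared; only a confirmation over a separate channel (a call to a known number, an in-person check) unlocks it.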

3. Urgency + Secrecy = Red Flag

The request for "confidential transactions" that needed to be executed quickly should always trigger additional scrutiny. Legitimate urgent business needs exist, but so do verification protocols that should apply regardless of urgency.

As Michael Kwok, Arup's East Asia Regional Chairman, wrote in an internal memo seen by CNN:
“The frequency and sophistication of these attacks are rapidly increasing globally, and we all have a duty to stay informed and alert about how to spot different techniques used by scammers.”

What Arup Did Right (and What You Should Do)

Despite the significant financial loss, Arup's response demonstrates several best practices:

1. They Reported Immediately
Arup notified Hong Kong police in January 2024 as soon as the fraud was discovered. Quick reporting increases the chances of fund recovery and helps authorities track criminal networks.

2. They Disclosed Publicly
After the Financial Times identified them in May 2024, Arup confirmed the details and shared lessons learned. This transparency helps other organizations defend themselves.

3. They Emphasized Systemic Change
Rob Greig stated: "I hope our experience can help raise awareness of the increasing sophistication and evolving techniques of bad actors." The company recognized this as an industry-wide threat requiring collective defense.

4. They Confirmed Business Continuity
Arup immediately reassured stakeholders that their "financial stability and business operations were not affected and none of our internal systems were compromised."

Immediate Action Steps for Finance Teams

Based on the Arup incident, here are five steps CFOs should implement immediately:

  1. Establish Dual Authorization - Require two distinct approvers for all wire transfers above a specified threshold, with no exceptions for urgency.

  2. Verify Through Separate Channels - If someone requests a transaction via video call, verify through a phone call to a known number or an in-person conversation.

  3. Use Corporate Communication Platforms - Require that all business-critical video meetings occur on company-controlled platforms with proper logging.

  4. Create Challenge Questions - Establish personal verification questions that only the real executive would know (recent specific events, shared experiences).

  5. Train on Deepfake Indicators - While deepfakes are increasingly sophisticated, small anomalies in lighting, lip-sync, or unnatural pauses can still be detected.
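
Step 1 in particular (dual authorization with no urgency exception) is worth encoding so that it cannot be talked around on a call. A hedged sketch; the function name and the US$50,000 threshold are hypothetical placeholders:

```python
def approve_transfer(amount_usd: float, approvers: list[str],
                     threshold_usd: float = 50_000) -> bool:
    """Dual authorization: at or above the threshold, at least two
    distinct named approvers are required. There is deliberately no
    'urgent' parameter, so urgency cannot bypass the control."""
    if amount_usd >= threshold_usd and len(set(approvers)) < 2:
        raise PermissionError(
            "Dual authorization required for transfers above threshold"
        )
    return True
```

Because the rule lives in the payment workflow rather than in a person's judgment, a deepfaked CFO pressuring one employee on a video call still cannot move funds alone.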

The Broader Threat Landscape

The Arup case is not isolated. According to Federal Reserve Vice Chair for Supervision Michael S. Barr, there has been a "twentyfold increase over the last three years" in deepfake-related attacks targeting financial institutions.

Financial regulators are responding. In November 2024, the U.S. Treasury's Financial Crimes Enforcement Network (FinCEN) issued Alert FIN-2024-ALERT004, providing specific guidance for financial institutions on identifying and reporting deepfake fraud schemes.

The alert includes red flags such as:

  • Suspicious technological glitches during remote identity verification

  • Use of third-party webcam plugins during live verification checks

  • Inconsistencies in submitted identity documents

  • Unusual transaction patterns following new account openings
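
Checklists like this are often operationalized as a simple rule table that routes any match to manual review. An illustrative Python sketch; the dictionary keys and the escalation mechanism are assumptions, and the descriptions paraphrase the alert's red flags listed above:

```python
# Paraphrased red-flag categories from FinCEN Alert FIN-2024-ALERT004.
# The short keys are hypothetical identifiers for internal tooling.
FINCEN_RED_FLAGS = {
    "verification_glitches": "Suspicious technological glitches during remote identity verification",
    "webcam_plugin": "Use of third-party webcam plugins during live verification checks",
    "document_inconsistencies": "Inconsistencies in submitted identity documents",
    "unusual_new_account_activity": "Unusual transaction patterns following new account openings",
}

def flag_review(observed: set[str]) -> list[str]:
    """Return descriptions of observed red flags so the case can be
    escalated for manual review (and possible regulatory reporting)."""
    return [desc for key, desc in FINCEN_RED_FLAGS.items() if key in observed]
```

A non-empty result from `flag_review` would pause automated onboarding or payment processing until a human analyst signs off.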

Conclusion: Trust Must Be Verified

The Arup incident demonstrates that we've entered a new era of corporate fraud. Seeing is no longer believing. Hearing familiar voices is no longer sufficient. Even multi-person video conferences can be entirely fabricated.

For CFOs, this means fundamentally rethinking authentication and authorization protocols. Every high-value transaction must be verified through multiple independent channels. Urgency cannot override security protocols. And public-facing activities, while necessary, must be balanced with awareness that they create attack vectors.

As Arup's experience shows, even sophisticated organizations with strong cybersecurity can fall victim to technology-enhanced social engineering. The question isn't whether your organization might be targeted; it's whether your current protocols can withstand the next evolution of deepfake fraud.
