Summary

For a decade, "selfie + government ID" has been the default answer to the remote identity problem. Regulators accepted it. Customers tolerated it. Security teams trusted it.

That consensus is collapsing.

In January 2026, the World Economic Forum's Cybercrime Atlas published Unmasking Cybercrime: Strengthening Digital Identity Verification against Deepfakes, a study produced with Mastercard, Santander, Recorded Future, Trend Micro and Group-IB. It tested 17 face-swapping tools and eight camera injection tools against live KYC flows. The finding: most were able to bypass standard biometric onboarding checks. Even moderate-quality face-swap models, combined with camera injection, defeated commercial biometric systems.

That is not a prediction. It is a documented result from mainstream institutions.

For a mid-sized lender, a fintech with a few thousand monthly signups, a B2B marketplace, or a crypto on-ramp, this changes the threat model in a specific way. You are no longer defending against human impostors holding up printed photos. You are defending against cheap, automated identity generators sold as software deployable by someone with no technical background.

This issue breaks down what is happening, why each KYC layer is failing, and what a realistic SMB-scale defense looks like in 2026.

1. KYC verifies a person; deepfakes attack the signal.

The foundational assumption inside almost every eKYC system is simple: if we can get a live image of a face and match it to an ID document, we know who we are dealing with.

Deepfake-enabled fraud does not argue with that logic. It breaks the assumption underneath it that the face arriving at the server is coming from a real camera pointed at a real human.

Two distinct attack classes now matter, and most mid-market security teams still conflate them:

Presentation attacks: the fraudster shows something to the camera. A printed photo, a phone screen, a 3D mask, or a live face-swap running on their own face during capture. The camera is real. The content in front of it is fake.

Injection attacks: the fraudster bypasses the camera entirely. Using a virtual camera driver, an emulator, or an intercepted API call, they feed a synthetic video stream directly into the KYC app. The camera hardware is never engaged.

The distinction matters because almost all "liveness detection" sold over the last five years was built to catch presentation attacks. Injection attacks operate at a layer most liveness engines cannot see: the server checks the image only after it arrives, by which point a virtual camera has already replaced the real feed.

Your KYC vendor may still be doing everything they promised in 2021. It is the threat that has moved.

2. Why each KYC layer is failing now

A typical eKYC flow has three gates: document verification, liveness detection, and face matching. Attackers do not need to break all three; they just need to defeat enough of the chain to get an account opened.

2.1 Document verification:

Two things have shifted.

First, genuine ID documents are leaking at unprecedented volume. Regula's 2025 incident review cites a June 2025 breach at Connex Credit Union exposing 172,000 customer files including government IDs, and a parallel attack on Italian hotel systems that exfiltrated tens of thousands of high-resolution passport scans.

Second, AI-generated documents now pass automated OCR and template checks. Dark-web services like ProKYC package a synthetic persona with a matching fake document and pre-rendered liveness video for a few dollars. Group-IB's Weaponized AI research, summarised in early 2026, found synthetic identity packs available for as little as $5, and recorded 8,065 biometric injection attempts against a single financial institution's KYC flow between January and August 2025.

2.2 Liveness detection:

Liveness was the industry's answer to presentation attacks, and for a period it worked. Not anymore.

Passive liveness (analysing a single frame) is defeated by modern GAN (generative adversarial network) output. Active liveness (asking the user to blink or turn their head) is defeated by real-time face-swap tools like DeepFaceLive, which render the synthetic face onto the attacker's actual head movements.

In December 2025, MITRE ATLAS, the knowledge base for attacks against AI systems, published a red-team case study from iProov titled "Deepfake Injection Evades Mobile KYC Liveness Verification." The team achieved a full bypass using only publicly available tools: Faceswap for the synthetic face, OBS Studio to stream it, and an Android virtual-camera app to replace the phone's default feed. The system accepted the session as genuine.

2.3 Face matching:

Face matching is the one layer working closest to design, and that is precisely the problem. If the document is synthetic and the selfie is a deepfake of the same synthetic face, the match score will be high. The system correctly confirms that two fabricated artifacts depict the same fabricated person. Tuning the match threshold will not fix this.
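To see why threshold tuning cannot help, consider what the matcher actually computes. A toy sketch with made-up embedding vectors (real systems use learned face embeddings; the numbers here are assumptions for illustration only):

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy embeddings: the fake document photo and the deepfake selfie are
# rendered from the SAME synthetic face, so their embeddings nearly coincide.
synthetic_doc_photo = [0.61, 0.33, 0.72, 0.05]
deepfake_selfie     = [0.60, 0.34, 0.71, 0.06]
real_other_person   = [0.10, 0.90, 0.20, 0.40]

print(cosine(synthetic_doc_photo, deepfake_selfie))    # high: the "match" succeeds
print(cosine(synthetic_doc_photo, real_other_person))  # low: a genuine mismatch
# No threshold separates "same fabricated person" from "same real person":
# both produce high scores, because similarity is all the matcher measures.
```

The matcher is answering the question it was built to answer. The question is simply no longer sufficient.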

3. Why this is now an SMB problem

The instinct in most SMBs is to assume this threat targets tier-one banks. The economics say otherwise.

The cost to attack has collapsed. Group-IB documents deepfake image services at $10–$50 per image, ready-to-use synthetic identity packs at around $15, and voice-cloning subscriptions under $10 a month. Chinese vendors rent face-swap software for $1,000–$10,000 for industrial-scale operations. Between 2022 and September 2025, Group-IB logged more than 300 dark-web posts explicitly pairing "deepfake" and "KYC" as a service.

Attackers route around hardened targets. iProov's 2026 Threat Intelligence Report recorded a 741% year-on-year increase in injection attacks against iOS devices, with Southeast Asia showing a 720% spike in Q3 2025; those techniques have since spread to Latin America.

SMB stacks are often the softest target. A regional lender or mid-market fintech typically runs a single KYC vendor with a default configuration, no injection-attack detection, and no device attestation. A synthetic identity that fails against a tier-one bank's multi-layer stack often passes at the smaller institution on the first attempt.

For any business that opens accounts, issues credit, or onboards counterparties, the correct assumption in 2026 is that deepfake attempts are already in your funnel. You may just not be logging them.

4. Five mistakes SMB security teams are still making

"Our KYC vendor handles this." Most vendor contracts cover presentation attack detection aligned with ISO/IEC 30107-3. The newer relevant standard is CEN/TS 18099, which defines Injection-Attack Detection (IAD), a separate control set. Unless your contract explicitly names IAD, a large class of attacks is outside your vendor's scope by design.

"We use active liveness, so we're protected." The MITRE ATLAS case study and the WEF Cybercrime Atlas both demonstrate active-liveness bypasses using publicly available tools. Active prompts raise attacker cost marginally; they do not close the vector.

"99% detection is good enough." At 5,000 applications per month, a 1% miss rate means 50 synthetic accounts opened monthly, every month. Synthetic identities typically "age" for 6–18 months before the bust-out, so the loss surfaces in a future quarter under a different line item.
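The arithmetic behind that claim is worth making explicit. A back-of-envelope sketch using the figures from the text (the loss-per-account value is an assumption for illustration, not a benchmark):

```python
# Back-of-envelope exposure model for a "99% detection" stack.
# loss_per_account is an illustrative assumption; real losses vary by product.

monthly_applications = 5_000
miss_rate = 0.01                  # "99% detection" leaves 1% through
aging_months = 12                 # synthetic identities age 6-18 months pre-bust-out
loss_per_account = 10_000         # assumed average bust-out loss (illustrative)

missed_per_month = int(monthly_applications * miss_rate)   # 50 accounts/month
dormant_backlog = missed_per_month * aging_months          # 600 accounts seasoning
annualised_exposure = missed_per_month * 12 * loss_per_account

print(f"{missed_per_month} synthetic accounts opened per month")
print(f"{dormant_backlog} accounts quietly aging before the first bust-out")
print(f"${annualised_exposure:,} annualised exposure at the assumed loss rate")
```

The point of the model is the lag: by the time the first cohort busts out, a year of subsequent cohorts is already inside the perimeter.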

"This is a fraud-team problem, not a compliance problem." FinCEN's FIN-2024-Alert004, issued in November 2024, instructs US institutions to file SARs referencing the key term "FIN-2024-DEEPFAKEFRAUD" whenever deepfake media is suspected at onboarding. Non-filing is a regulatory exposure independent of the fraud loss itself.

"We'll wait for the tools to mature." They have matured. The EU AI Act classifies remote biometric verification as high-risk and mandates documented safety testing. eIDAS 2.0 requires high-level liveness for European KYC. NIST SP 800-63-4 incorporates updated liveness and injection-resistance guidance. Waiting accumulates regulatory debt.

5. A practical playbook for SMBs

The goal isn’t to match a top bank’s level of security. It’s to make attacking you harder and more expensive than going after the next business.

Step 1 - Red-team your current flow: Most reputable KYC vendors offer adversarial testing engagements. Specifically test: a deepfake played through a virtual camera driver, a real-time face-swap over the attacker's face, an AI-generated synthetic ID paired with a matching synthetic selfie, and injection attempts on both Android and iOS.
Document what passed. That is your baseline.

Step 2 - Close the injection blind spot: This is the single highest-leverage upgrade. Require: device attestation confirming the feed came from genuine hardware, a client-side SDK that binds the camera on-device, and session-integrity signals (virtual-camera driver detection, emulator detection). Ask for CEN/TS 18099 alignment and, when it finalises, ISO 25456.
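As a sketch of how those signals might feed the decision layer (the signal names, the policy, and the ordering here are assumptions; real attestation verdicts come from platform APIs such as Play Integrity or App Attest, not from client-side booleans):

```python
from dataclasses import dataclass

@dataclass
class SessionSignals:
    """Session-integrity evidence gathered during capture (illustrative fields)."""
    hardware_attested: bool        # platform attestation confirmed genuine hardware
    camera_bound_in_sdk: bool      # capture SDK bound the physical camera on-device
    virtual_camera_detected: bool  # a virtual-camera driver was observed
    emulator_detected: bool        # the session is running inside an emulator

def injection_risk(s: SessionSignals) -> str:
    """Classify a session as 'block', 'review', or 'pass'.
    Illustrative policy only: direct evidence of injection blocks outright,
    while MISSING positive evidence escalates rather than auto-passing."""
    if s.virtual_camera_detected or s.emulator_detected:
        return "block"   # direct evidence the feed is not a real camera
    if not s.hardware_attested or not s.camera_bound_in_sdk:
        return "review"  # absence of proof is not proof of absence
    return "pass"
```

The design point is the middle branch: a session that merely lacks attestation should land in review, not in the approved pile, because injection tooling's first move is to suppress the very signals that would expose it.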

Step 3 - Layer, don't replace: Every control can be bypassed in isolation. Combined, they compound:

  • Document authenticity (OCR, security features, AI-artifact scan)

  • Presentation attack detection (ISO/IEC 30107-3)

  • Injection attack detection (CEN/TS 18099)

  • Frame-level deepfake forensic analysis

  • Device fingerprinting, IP/geolocation, behavioural signals

  • Post-onboarding monitoring (account activity and risk signals during the first 90 days)

No single line item closes the problem. The stack is the control.
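One way to see why the stack is the control: if each layer independently catches some share of attacks, the residual pass-through rate multiplies. A toy model (the per-layer catch rates are assumptions, and real layers are not fully independent, so treat this as an upper bound on the benefit):

```python
# Toy model: probability a single attack survives a stack of imperfect,
# (assumed) independent controls. Catch rates are illustrative only.
layers = {
    "document_authenticity": 0.80,
    "presentation_attack_detection": 0.85,
    "injection_attack_detection": 0.90,
    "deepfake_forensics": 0.70,
    "device_and_behavioural_signals": 0.60,
}

survival = 1.0
for name, catch_rate in layers.items():
    survival *= (1 - catch_rate)  # attack must slip past every layer

print(f"stack pass-through rate: {survival:.4%}")
# Each layer alone lets 10-40% of attacks through; under the independence
# assumption, the full stack lets through roughly 0.04%.
```

Correlated failures (one face-swap tool that beats both liveness and forensics) erode the multiplication, which is why the layers should rely on different signal types, not five variations of the same check.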

Step 4 - Require out-of-band verification for high-risk events: For wire transfers above an internal threshold, credential resets, beneficial-owner changes, or privilege escalation, require a channel that cannot be satisfied by the same deepfake used at onboarding: a callback to a pre-registered number, a signed request from a registered device, or an in-branch confirmation. The Arup case is the reference: $25.5 million lost in 2024 after a finance employee approved 15 wire transfers on a video call where every other participant was a deepfake. Design the control before the same pattern lands in your business.
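A minimal sketch of the step-up rule (the event names, the threshold value, and the channel list are assumptions; the essential property is that the required channel is independent of the media presented at onboarding):

```python
# Hypothetical step-up policy: high-risk events require an out-of-band
# channel that a deepfake of the live session cannot satisfy.
HIGH_RISK_EVENTS = {"wire_transfer", "credential_reset",
                    "beneficial_owner_change", "privilege_escalation"}
OUT_OF_BAND_CHANNELS = {"callback_preregistered_number",
                        "signed_request_registered_device",
                        "in_branch_confirmation"}
WIRE_THRESHOLD = 10_000  # internal threshold; illustrative value

def requires_step_up(event: str, amount: float = 0.0) -> bool:
    """Wire transfers step up above the threshold; other risky events always do."""
    if event == "wire_transfer":
        return amount >= WIRE_THRESHOLD
    return event in HIGH_RISK_EVENTS

def approve(event: str, channel: str, amount: float = 0.0) -> bool:
    """Approve only if no step-up is needed, or the channel is out-of-band."""
    if not requires_step_up(event, amount):
        return True
    return channel in OUT_OF_BAND_CHANNELS
```

Under this policy a video call, however convincing, is simply not in the out-of-band set, so the Arup pattern fails at the gate regardless of how good the deepfake is.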

Step 5 - Update SAR and incident procedures: For US-regulated entities, add the FinCEN key term "FIN-2024-DEEPFAKEFRAUD" to SAR templates and map its red flags to your monitoring rules. For EU entities, document how your flow meets EU AI Act high-risk and eIDAS 2.0 requirements. Everyone else: expect equivalent regulation within 18 months.

Step 6 - Train the humans: The May 2025 Coinbase incident, in which attackers bribed offshore support agents for customer KYC data rather than defeating biometrics at all, is a reminder that the weakest link is often not the model. Treat employee and contractor onboarding as carefully as customer KYC. Run quarterly drills for deepfake scenarios: a fake CEO voice asking for a wire transfer, an impersonated applicant, or a fake counterparty during contract signing.

6. Conclusion

For twenty years, digital identity verification rested on an implicit assumption: capturing an image was roughly equivalent to witnessing a person. Generative AI has challenged that equivalence. An image, a video, a voice - none of them are reliable proxies for presence or a real person anymore.

The businesses that adapt early are not the ones buying the most sophisticated detection engine. They are the ones who have internalised a simpler reframe:

“KYC is no longer about verifying a face. It is about verifying a signal chain - the camera, the session, the device, the network, the timing, and the content, all consistent with a real human being in a real place at a real moment.”

That reframe is what separates a defensible 2026 identity program from one that is going to generate a regulatory finding or a material loss in the next twelve months.

The threat has moved. The cost of attack has collapsed. The regulators have put it on paper. Mainstream research institutions have confirmed the bypasses work.

The only question left is how quickly your verification systems can catch up to attackers who are already ahead.
