Generative AI can assist in anything from creating synthetic identities to automating social engineering attacks. This evolution in both quality and scale has raised the stakes for financial institutions.
Understanding how fraudsters deploy generative AI is essential to preventing deepfakes from passing through your security net and to protecting your financial institution and its customers. The technology is complex, but the fundamental challenge hasn’t changed - maintaining trust while stopping sophisticated fraud attempts.
In a recent incident, a British engineering company lost $25 million to a deepfake attack. Scammers used an AI-generated deepfake to pose as the company’s CFO and request an employee’s assistance with a “confidential” transaction. The elaborate scam even involved a conference call - populated with other AI-generated employees! This is just one example of the level of sophistication currently possible with generative AI-powered deepfake fraud.
As these tools become both more sophisticated and more accessible, fraudsters are able to weaponize the technology to create even more layered, convincing scams. This blog explores some of the latest in generative AI fraud tactics, how they impact identity verification and authentication, as well as how organizations can strengthen their defenses.
What are deepfakes?
Deepfakes are AI-generated synthetic media that replicate human appearance, voice and behavior with ever-increasing accuracy. Using deep learning models known as Generative Adversarial Networks (GANs), fraudsters are able to create highly convincing impersonations capable of challenging advanced verification systems.
Types of deepfakes
These attacks can come in several forms, exploiting vulnerabilities at every step of the customer journey.
- Video deepfakes - These pose a particular threat to remote identity verification, mimicking facial movements, expressions, and even responses to common security questions. Targeting remote account opening or video verification processes, they can be used to bypass traditional liveness detection methods that aren’t capable of detecting AI-generated content. These attacks can cause significant brand and reputational damage.
- Image-based deepfakes - Using GANs, fraudsters can generate synthetic ID photos or alter existing documents, creating false identity documents that can pass automated checks in document verification.
- Voice synthesis - Fraudsters are now capable of cloning voices from just a few minutes of sample audio. They can then use this for vishing (voice phishing) attacks, impersonating customers or company executives to authorize fraudulent transactions or gain access to internal systems.
These techniques can be used to attack multiple points in the customer journey: account opening fraud, where fraudsters use synthetic identities with deepfake video or images to bypass verification; account takeover fraud using injection attacks and sophisticated phishing schemes; and fraudulent transaction authorization via customer or employee impersonation.
Deepfakes pose a critical threat across multiple touchpoints - not just account origination and access. This can include the use of convincing deepfakes of company executives or spokespeople (including deepfake celebrities) to spread misinformation, or the impersonation of employees or business partners to gain unauthorized system access.
Check out the video below to see just how advanced deepfake technology has become - and how easy it is to create realistic deepfakes.
Companies have always been on guard against social engineering, but deepfake audio and video can now create even more convincing phishing emails or vishing calls, impersonating an executive or a business partner to trick employees into revealing sensitive information. And in a customer service context, fraudsters can use synthetic media to pose as a bank representative or support agent to manipulate customers or internal staff into providing account details. Overall, modern deepfake capabilities, when combined with existing attack vectors, have created even more complex threats.
The democratization of the AI tools needed to create high-caliber deepfakes has made the threat even more pressing. Many of these tools are now available through open-source projects or affordable commercial applications. Traditional fraud detection methods are struggling to keep pace with the available technology, and businesses will need to take a multi-layered approach and combine advanced AI detection with robust protocols for identity verification.
Why deepfakes are a threat to security
The scale of losses to deepfake fraud is already staggering. A recent Federal Reserve Bank of Boston report cited a study estimating that synthetic identity fraud alone caused $20 billion in losses in the United States. And the average cost of a successful deepfake attack now exceeds $450,000 for a typical business and $600,000 for those in the financial services sector, when direct financial losses and remediation costs are included (source: Regula survey, via Business Wire). These attacks can also have a long-term impact on a business’s reputation and customer trust.
Several recent incidents highlight the sophisticated nature of these attacks. In addition to the $25 million case referenced earlier, global examples abound: in Shanxi, China, a financial employee transferred 1.86 million yuan after a video call with a deepfake of their boss, and the CEO of a British energy company was duped into wiring €220,000 to a “supplier” after a phone call. Even the cryptocurrency sector hasn’t been safe, with Binance’s chief communications officer relaying that scammers had used a deepfake of him, built from his previous TV appearances, to trick people into meetings.
Identity verification bypass attempts are also becoming possible at a much larger scale. The ability to create synthetic content - including static photos matching fraudulent IDs and video responses during remote verification - and use it to open accounts under fictitious identities is prompting financial institutions to revisit their entire digital onboarding process.
Financial institutions are also dealing with legal and regulatory concerns, such as the need to demonstrate the ability to detect and prevent synthetic media attacks as part of their compliance obligations to avoid regulatory penalties and mandatory audits.
The threat on multiple levels is immediate and growing, and traditional security measures are increasingly inadequate, as described in our Digital Fraud Playbook.
How to prevent deepfakes: tools and technology
Modern deepfake fraud is a rapidly evolving threat, requiring detection technologies that not only keep pace with but surpass the increasingly sophisticated methods employed by scammers. By leveraging cutting-edge AI analysis, multi-layered verification, and proactive monitoring, businesses can build robust defenses tailored to these complex threats - but the technologies must work together to counter fraudsters’ highly adaptable strategies.
It’s imperative that organizations find a trusted provider with a proven track record against emerging fraud tactics and the right talent, tools and technology to build scalable solutions. Modern deepfake detection requires a multi-pronged approach that combines biometric analysis, AI-driven monitoring, and robust verification protocols into a cohesive defense against emerging threats.
AI-powered detection solutions
Advanced AI models can detect subtle inconsistencies of the kind human observers might miss. For example, fraudsters can produce a near-perfect replica of someone’s voice or a video with highly realistic visuals, fooling even trained individuals. To flag these deepfakes, detection systems employ multiple layers of analysis, allowing them to spot even the smallest anomalies and send out an alert for suspected fraud.
- Visual analysis and pattern recognition - Biometric checks and sophisticated algorithms examine images pixel by pixel, looking for indicators of manipulation. Neural networks trained on extensive datasets can identify artifacts common in GAN-generated content, along with visible manipulations or watermarks found on many deepfakes. Deep learning models analyze facial features for unnatural movements, inconsistent blinking patterns and incorrect micro-expressions, while computer vision algorithms detect inconsistencies in lighting, shadows and reflections throughout video content.
- Audio authentication - Voice liveness detection systems analyze multiple aspects of speech, like tone and frequency, looking for subtle clues that indicate whether content is real or manipulated. Advanced audio forensics can also analyze the non-speech portion of audio, including background noise consistency, acoustic environment matching and the temporal alignment between audio and video, as well as frequency spectrum analysis to identify synthetic artifacts (a minimal spectral-energy sketch appears after this list).
- Template attack detection - Fraudsters are increasingly leveraging AI to generate highly convincing ID templates that closely replicate legitimate documents, complete with accurate holograms, fonts, and other security features that make them difficult to distinguish from authentic documents. When paired with accurate personal information, these fakes can bypass traditional identity verification systems, posing a significant risk to financial institutions. Detection mechanisms address this by analyzing patterns of fraudulent behavior: systems monitor the repeated use of identical personally identifiable information (PII), duplicate portraits, and consistent image backgrounds across multiple attempts (see the image-hash sketch after this list). Cross-referencing faces against known fraud databases further enhances the ability to flag recurring scams and identify emerging AI-generated template attacks.
- Injection attack detection - Deepfake images or videos can be injected into identity verification systems via virtual cameras or emulators. These sophisticated attacks mimic the appearance of a live person during the verification process, tricking systems into approving fraudulent identities, and have become a key tool for fraudsters targeting remote identity verification workflows. To counter these threats, detection systems analyze both the digital content and the data streams used during verification, looking for signs of manipulation or tampering. Techniques such as virtual camera presence detection, suspicious resolution detection, and duplicate frame detection (see the duplicate-frame sketch after this list) are deployed to identify and block fraudulent injection attempts, ensuring system integrity and security.
- Digital forensics - Digital Fraud Defender technology employs sophisticated forensic techniques for deepfake detection: metadata analysis to identify manipulation signatures, Error Level Analysis (ELA) to detect image compositing (an ELA sketch follows this list), noise pattern analysis to identify inconsistencies in image quality, detection of compression artifacts and similar anomalies, and analysis of color patterns and pixel-level inconsistencies.
- Pipeline security - To detect and prevent attacks on the verification pipeline itself, the system guards against manipulation attempts with techniques like real-time monitoring of data integrity, validation of input sources, and tamper-evident processing of verification requests (a request-signing sketch follows this list).
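To make a few of the techniques above concrete, here are some minimal sketches. First, one naive form of frequency spectrum analysis: some band-limited voice synthesizers leave unusually little energy in the upper spectrum, so an almost-empty high band on a nominally full-band recording is one weak signal worth combining with others. This assumes a decoded mono waveform as a NumPy array; the function name and the 8 kHz cutoff are illustrative choices, not a production detector.

```python
import numpy as np

def high_band_energy_ratio(waveform: np.ndarray, sample_rate: int,
                           cutoff_hz: float = 8000.0) -> float:
    """Return the fraction of spectral energy above cutoff_hz."""
    spectrum = np.abs(np.fft.rfft(waveform)) ** 2               # power spectrum
    freqs = np.fft.rfftfreq(len(waveform), d=1.0 / sample_rate)
    total = spectrum.sum()
    if total == 0:
        return 0.0
    # Genuine full-band speech retains some high-frequency energy;
    # a near-zero ratio on a high-sample-rate recording merits review.
    return float(spectrum[freqs >= cutoff_hz].sum() / total)
```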
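Duplicate-portrait monitoring for template attacks can be approximated with a tiny perceptual hash. This simplified sketch uses only Pillow; the distance threshold and the hypothetical `seen_hashes` store of prior attempts are assumptions for illustration.

```python
from PIL import Image

def average_hash(image_path: str, hash_size: int = 8) -> str:
    """Tiny perceptual hash: visually similar portraits map to similar
    bit strings even after re-encoding, resizing, or mild edits."""
    img = Image.open(image_path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    return "".join("1" if p > mean else "0" for p in pixels)

def hamming_distance(h1: str, h2: str) -> int:
    return sum(a != b for a, b in zip(h1, h2))

def matches_known_portrait(new_hash: str, seen_hashes: list[str],
                           threshold: int = 5) -> bool:
    # seen_hashes is a hypothetical store of hashes from earlier attempts;
    # a small Hamming distance suggests the same portrait is being reused.
    return any(hamming_distance(new_hash, h) <= threshold for h in seen_hashes)
```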
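For injection attacks, one of the signals named above - duplicate frame detection - can be sketched as follows. Real systems also inspect the capture stack and data streams; this illustration only measures byte-identical frames, and assumes OpenCV is available.

```python
import hashlib

import cv2  # OpenCV (opencv-python); assumed available

def duplicate_frame_ratio(video_path: str) -> float:
    """Share of frames that are byte-identical to an earlier frame.

    Genuine camera capture carries sensor noise, so exact duplicates
    are rare; replayed or injected streams often repeat frames verbatim.
    """
    cap = cv2.VideoCapture(video_path)
    seen, total = set(), 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        total += 1
        seen.add(hashlib.sha256(frame.tobytes()).hexdigest())
    cap.release()
    return 0.0 if total == 0 else 1.0 - len(seen) / total
```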
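Error Level Analysis is also straightforward to illustrate. The sketch below re-saves an image as JPEG at a known quality and amplifies the difference from the original; regions composited in from another source often recompress at a different error level and stand out as bright patches. The quality setting and brightness scaling are illustrative defaults.

```python
import io

from PIL import Image, ImageChops, ImageEnhance

def error_level_analysis(image_path: str, quality: int = 90) -> Image.Image:
    """Highlight regions whose JPEG error level differs from the rest."""
    original = Image.open(image_path).convert("RGB")
    buffer = io.BytesIO()
    original.save(buffer, "JPEG", quality=quality)   # controlled re-save
    buffer.seek(0)
    resaved = Image.open(buffer)
    diff = ImageChops.difference(original, resaved)  # per-pixel error level
    max_diff = max(hi for _, hi in diff.getextrema()) or 1
    # Stretch the (usually faint) differences so they become visible.
    return ImageEnhance.Brightness(diff).enhance(255.0 / max_diff)
```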
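Finally, tamper-evident processing of verification requests is commonly built on message authentication codes. A minimal sketch, assuming a shared signing key managed outside the code:

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"replace-with-a-managed-secret"  # hypothetical key

def sign_payload(payload: dict) -> str:
    """Sign a verification request so later pipeline stages can prove
    the data was not altered in transit."""
    message = json.dumps(payload, sort_keys=True).encode("utf-8")
    return hmac.new(SIGNING_KEY, message, hashlib.sha256).hexdigest()

def verify_payload(payload: dict, signature: str) -> bool:
    # compare_digest avoids leaking timing information to an attacker
    return hmac.compare_digest(sign_payload(payload), signature)
```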
Expanded, layered verification
Fraudsters often exploit gaps between different verification systems, such as using synthetic identities to pass document verification while bypassing biometric checks. Addressing these challenges requires a layered strategy that integrates multiple verification methods into a seamless process, ensuring that vulnerabilities in one method are counterbalanced by the strengths of others.
Leaders in detection recognize the sophistication of these threats and combine multiple verification methods and signals into a comprehensive strategy. This layered approach includes:
- Document and identity verification - Techniques used to verify documents include advanced OCR and document analysis with hologram and security feature detection, cross-referencing against government databases, machine learning models trained on millions of legitimate documents, and real-time document tampering detection.
Another essential element of document verification is biometric matching with liveness, to ensure the person presenting the documents is the true holder and physically present.
- Biometric authentication - This includes multi-modal biometric authentication using passive liveness detection, behavioral biometrics, and continuous biometric monitoring during sessions.
- Contextual analysis - Sophisticated behavioral analytics compare user interactions against normal transaction patterns, geographic and temporal consistency, historical behavior, and device and network characteristics, flagging anomalies and feeding risk scoring based on multiple data points (a simple scoring sketch follows this list).
- Real-time monitoring and response - Layered systems operate continuously to provide real-time threat detection and response, including pattern analysis across the user population, integration with fraud-intelligence networks, adaptive security measures based on threat level, automated risk scoring and escalation, and continuous AI model training.
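As one illustration of how these layers can feed a single decision, the sketch below combines a few hypothetical 0-to-1 risk signals into a weighted score. The signal names, weights, and escalation threshold are assumptions for illustration, not any vendor’s actual model.

```python
# Illustrative only: signal names, weights, and threshold are assumptions.
SIGNAL_WEIGHTS = {
    "document_authenticity": 0.30,   # OCR and security-feature checks
    "liveness": 0.30,                # passive liveness result
    "behavioral_consistency": 0.20,  # interactions vs. historical baseline
    "device_and_geo": 0.20,          # device reputation, geo/temporal checks
}
ESCALATION_THRESHOLD = 0.65

def risk_score(signals: dict[str, float]) -> float:
    """Combine 0-to-1 risk signals (1 = most suspicious); missing
    signals default to a cautious 0.5 rather than an optimistic 0."""
    return sum(weight * signals.get(name, 0.5)
               for name, weight in SIGNAL_WEIGHTS.items())

def should_escalate(signals: dict[str, float]) -> bool:
    return risk_score(signals) >= ESCALATION_THRESHOLD
```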
A comprehensive security stack like this, capable of adapting to evolving threats, is essential to effectively combat deepfake fraud. Discover how Mitek’s AI-powered solutions can protect your business against deepfake fraud. Schedule a demo or connect with our experts to see these tools in action and stay ahead of emerging threats.
The future of deepfake prevention
As the race between deepfake creation and detection technologies continues to accelerate, rapid innovation has taken place in the identity verification world. Next-generation detection with quantum-resistant algorithms and sophisticated AI architecture analyzes thousands of data points in milliseconds, measuring everything from blood flow patterns to neural response consistency.
Collaborations between financial institutions and solution providers, with shared threat intelligence and standardized testing protocols, are emerging to meet threats faster. For businesses, success will hinge on their strategic investment in AI security, comprehensive staff training and continuous security assessment. We also forecast the future of identity verification will see increased integration of behavioral biometrics, executed seamlessly behind the scenes to maintain a consistent customer experience.
Organizations assessing how to prevent deepfake attacks cannot afford to take a reactive approach, and must meet these sophisticated and growing threats head-on. Don’t wait for a deepfake attack to expose vulnerabilities in your security infrastructure. Get started by downloading our Digital Fraud Playbook to develop your prevention strategy, and explore Mitek’s advanced AI solutions. The future of fraud prevention and protecting your business and customers from sophisticated deepfake attacks begins today.
About Konstantin Simonchik - CSO at Mitek
Konstantin Simonchik is the Chief Science Officer and co-founder at ID R&D, a Mitek company. He brings a wealth of experience not only as the former science head of a large biometric firm in Europe but also as a professor of Speech Information Systems at a leading research university. He has authored more than 30 scientific papers devoted to speaker recognition and anti-spoofing, holds multiple patents, and has received numerous recognitions and awards from organizations including IEEE, ASVspoof, and NIST.