In its simplest form, generative AI is any artificial intelligence technology that’s used to create new content after being trained to recognize patterns in existing data. Generative AI tools that create text, images, music and video have achieved mainstream familiarity. Three types of generative AI are the most common: large language models (LLMs) that process and generate text; generative adversarial networks (GANs) that create and manipulate visual content; and diffusion models, which generate highly detailed images by gradually refining random noise.
These technologies have rightfully earned a place in legitimate business operations. Financial institutions use LLMs for customer service enhancements like conversational AI chatbots and for streamlining back-office processes. GANs can analyze diagnostic images in the medical field. Diffusion models have been transformative for product design and digital content creation. Often, the content these models generate is so compelling that, without labeling, it can’t be distinguished from human-created content.
Unfortunately, not all uses of the technology are beneficial. The same capabilities that make AI so valuable for businesses also make it a powerful tool for fraudsters. These scammers have embraced the capability of generative AI to clone voices, create deepfake videos that pass basic biometric checks, and generate synthetic identities. As these models become more sophisticated and easily accessible, businesses are facing an unprecedented challenge – maintaining the integrity of their systems while leveraging the benefits of AI for workflows and automation.
Emerging fraud tactics that GenAI is fueling
Traditional scams have evolved into sophisticated, automated operations with the assistance of generative AI. By leveraging this technology, scammers can create increasingly convincing videos, audio and documentation that challenge conventional security solutions. The following are a few examples.
Social engineering attacks have evolved from scripted, generic attempts into highly personalized campaigns. It’s now possible to use AI to analyze social media profiles and other public-facing content, and then generate targeted messages that mirror a victim’s writing style, personal interests and professional background. This level of detail removes many of the red flags that previously made these attacks much easier to spot - not just bad grammar and spelling, but also overly formal or informal writing styles the victim would never use.
Synthetic identity fraud - the generation of fictional but convincing personas - has reached unprecedented scale. Sometimes these identities are completely fictional; sometimes they mix real data from multiple or related people. What once took weeks of manual effort can now be accomplished in minutes. These synthetic identities, complete with plausible credit histories, employment records and digital footprints, can often pass initial verification checks.
With the power of AI, it’s now easier than ever to customize and personalize phishing campaigns, generating thousands of unique and contextually relevant email variations tailored to specific organizations or interests. These campaigns can even adapt in real-time based on their success rates, continuously improving their effectiveness.
Voice cloning technology has also taken voice phishing (vishing) attacks to a new level. Just a few seconds of audio are enough to clone a person’s voice with remarkable accuracy, whether the goal is to access an individual’s account or impersonate an executive seeking a company wire transfer.
Supporting all of these attacks, false documentation is also being generated at scale with AI. It is now possible to create high-quality forgeries of financial statements, utility bills, and even government-issued IDs that replicate the security features visible to standard verification systems.
Key security threats from GenAI
There are three primary categories of threats posed by generative AI, each of which presents a unique challenge for financial institutions. For a broader overview of GenAI and its financial impacts, read our blog on navigating identity verification challenges in the age of generative AI.
Model probing attacks: reverse engineering security systems
Model probing attacks are one of the most sophisticated threats to security in generative AI. In these attacks, fraudsters probe security models to understand, and then replicate, their decision-making process. By submitting carefully crafted inputs and analyzing the responses, they can effectively reverse engineer the AI models that financial institutions use.
For example, fraudsters might map a bank’s fraud detection parameters by submitting hundreds or thousands of slightly varied transactions. From this, they can determine which triggered security flags and which passed undetected. With this knowledge, they can then structure their future activities in a way that deliberately avoids the detection pattern.
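To make this concrete from the defender’s side, here is a minimal, illustrative sketch of how a monitoring layer might flag an account that submits bursts of near-duplicate transactions, the footprint that boundary probing tends to leave. The thresholds, field names and `ProbingDetector` class are hypothetical assumptions for illustration, not a description of any production system.

```python
from collections import defaultdict, deque
from dataclasses import dataclass
import time

# Hypothetical thresholds for illustration only.
WINDOW_SECONDS = 600        # look-back window per account
MAX_SIMILAR_ATTEMPTS = 25   # near-duplicate submissions tolerated per window

@dataclass
class Attempt:
    timestamp: float
    amount: float
    merchant: str

def _is_similar(a: Attempt, b: Attempt, amount_tolerance: float = 5.0) -> bool:
    """Two attempts are 'similar' if they hit the same merchant with nearly the same amount."""
    return a.merchant == b.merchant and abs(a.amount - b.amount) <= amount_tolerance

class ProbingDetector:
    """Flags accounts that submit many slightly varied transactions in a short window,
    a pattern consistent with mapping a fraud model's decision boundary."""

    def __init__(self) -> None:
        self._history: dict[str, deque] = defaultdict(deque)

    def record(self, account_id: str, amount: float, merchant: str) -> bool:
        now = time.time()
        attempts = self._history[account_id]
        attempts.append(Attempt(now, amount, merchant))

        # Drop attempts that fall outside the look-back window.
        while attempts and now - attempts[0].timestamp > WINDOW_SECONDS:
            attempts.popleft()

        # Count how many recent attempts resemble the newest one.
        similar = sum(1 for other in attempts if _is_similar(attempts[-1], other))
        return similar > MAX_SIMILAR_ATTEMPTS  # True means "escalate for review"
```

A real system would combine signals like this with device, network and velocity data rather than relying on any single heuristic.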
Injection attacks: manipulating AI systems
Security systems face two distinct types of injection threats. You may already be familiar with biometric injection attacks, where fraudsters bypass a biometric or liveness check by injecting video or images into the downstream verification process. Another threat to security in generative AI is the prompt injection attack, in which fraudsters target the ways AI systems process information. They exploit vulnerabilities by inserting prompts or data that manipulate the AI’s response. In recent incidents, attackers have successfully compromised chatbots and automated customer service systems to authorize fraudulent transactions or expose sensitive information.
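As an illustration of one common mitigation, the sketch below keeps untrusted user text strictly in its own role and screens it before it reaches a model. The deny-list patterns, system prompt and helper names are hypothetical; it does not call any real LLM API, and production systems typically layer classifiers and allow-lists on top of this kind of check.

```python
import re

# Simple deny-list of phrases commonly seen in prompt injection attempts.
# Illustrative only; real systems use trained classifiers, not regexes alone.
INJECTION_PATTERNS = [
    r"ignore (all|any|previous) instructions",
    r"you are now",
    r"reveal .* (system prompt|credentials|api key)",
    r"transfer .* to account",
]

SYSTEM_PROMPT = (
    "You are a customer-service assistant. Never authorize transactions or "
    "disclose account data; direct such requests to a human agent."
)

def screen_user_input(text: str) -> bool:
    """Return True if the text matches a known injection pattern."""
    lowered = text.lower()
    return any(re.search(pattern, lowered) for pattern in INJECTION_PATTERNS)

def build_messages(user_text: str) -> list:
    """Keep untrusted input in the 'user' role so it can never overwrite
    the system instructions, and reject obvious injection attempts."""
    if screen_user_input(user_text):
        raise ValueError("Potential prompt injection detected; route to manual review.")
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_text},
    ]
```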
Data poisoning, another type of injection attack, occurs when attackers gradually introduce malicious data into AI training sets. For example, one financial institution discovered its fraud detection system had been compromised by attackers who spent months feeding it misleading transaction data, effectively training the system to ignore one type of fraudulent behavior.
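One defensive pattern is to screen candidate training batches against a trusted baseline before they are ever used to retrain a model. The sketch below is a simplified illustration of that idea; the thresholds and the `screen_training_batch` helper are hypothetical assumptions, not part of any vendor’s pipeline.

```python
import numpy as np

def screen_training_batch(
    baseline: np.ndarray,          # features from a trusted, vetted dataset
    candidate: np.ndarray,         # new labeled examples awaiting ingestion
    z_threshold: float = 4.0,      # hypothetical per-feature outlier cutoff
    max_outlier_fraction: float = 0.02,
) -> bool:
    """Reject a candidate batch whose feature distribution drifts sharply
    from the trusted baseline, a crude guard against gradual poisoning."""
    mu = baseline.mean(axis=0)
    sigma = baseline.std(axis=0) + 1e-9  # avoid division by zero
    z_scores = np.abs((candidate - mu) / sigma)

    # Fraction of rows with at least one extreme feature value.
    outlier_rows = (z_scores > z_threshold).any(axis=1).mean()
    return outlier_rows <= max_outlier_fraction  # True means "safe to ingest"

# Usage sketch: quarantine a batch instead of silently retraining on it.
# if not screen_training_batch(baseline_features, new_batch_features):
#     print("Batch quarantined for manual review")
```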
Deepfake attacks: the evolution of identity fraud
With realistic audio and video that mimics a legitimate customer or employee, today’s deepfakes can bypass many traditional biometric security measures. In a recent case, fraudsters used AI-generated video to successfully impersonate a company’s chief financial officer and request large wire transfers.
Document forgery has also reached new levels of sophistication using StyleGAN technology. These systems are now capable of generating identification documents that include holograms, microprints and other security details, making them increasingly difficult for conventional verification methods to detect.
Real-time deepfake detection remains a challenge in financial security. Traditional verification systems struggle to keep pace with the quality and speed at which deepfakes can now be generated, creating a critical vulnerability. These challenges are what Mitek’s Digital Fraud Defender was specifically designed to address, with advanced, multi-layered safeguards that work in real time.
Strategies for enhancing generative AI detection
Financial institutions will need to adopt a sophisticated and layered approach to combat these threats throughout the entire customer lifecycle. From account creation through ongoing access, transactions, and account changes, each touchpoint requires unique security measures.
Advanced behavioral analytics capabilities have also evolved to detect the unique signatures that AI-generated content can leave behind, and machine learning models continuously adapt to new threat patterns. These technologies work together to identify anomalies across all customer interactions.
Together, these layered defenses create a comprehensive framework to detect and block AI-generated fraud attempts while maintaining the user experience across the customer journey - from onboarding to day-to-day transactions and account maintenance.
Improving accuracy of AI models
Continuous evolution is necessary to stay ahead of generative AI fraud. Modern security systems implement dynamic training methods to catch new fraud patterns as they emerge, allowing them to adapt to threats in near real time.
Adversarial training, or deliberately exposing security models to simulated attacks, is one way to strengthen their detection capabilities and improve the level of security against generative AI attacks. The more conditions a system is exposed to, the better prepared it is for any novel conditions fraudsters might attempt to use. This approach can be combined with cross-validation testing across multiple scenarios and user types to ensure consistent performance against fraud attempts.
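For readers who want a concrete picture, here is a minimal sketch of adversarial training using the well-known fast gradient sign method (FGSM) in PyTorch. The epsilon value and the structure of the training step are illustrative assumptions, not a prescription for how any particular security model is trained.

```python
import torch
import torch.nn as nn

def fgsm_perturb(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                 loss_fn: nn.Module, epsilon: float = 0.03) -> torch.Tensor:
    """Craft an adversarial variant of x using the fast gradient sign method."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

def adversarial_training_step(model, optimizer, loss_fn, x, y, epsilon=0.03):
    """Mix clean and adversarial examples in a single update so the detector
    learns to classify both correctly."""
    model.train()
    x_adv = fgsm_perturb(model, x, y, loss_fn, epsilon)
    optimizer.zero_grad()
    loss = loss_fn(model(x), y) + loss_fn(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

In practice, teams vary the perturbation strength and attack type across training runs so the model does not simply memorize one style of simulated attack.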
Model refinements through enhanced feature engineering focus on identifying subtle markers that remain in synthetic content - like microscopic image artifacts or anomalies in behavioral patterns. These refinements help maintain high accuracy rates while minimizing false positives.
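As one example of such a feature, synthetic-image detectors often examine frequency-domain statistics, since GAN and diffusion outputs can show atypical high-frequency behavior. The sketch below computes a high-frequency energy ratio with NumPy; the cutoff value is an arbitrary illustration, and a real detector would combine many features like this rather than rely on one.

```python
import numpy as np

def high_frequency_energy_ratio(image: np.ndarray, cutoff: float = 0.25) -> float:
    """Share of spectral energy above a radial frequency cutoff.
    A candidate feature for spotting synthetic-image artifacts."""
    gray = image.mean(axis=2) if image.ndim == 3 else image
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray))) ** 2

    # Distance of each frequency bin from the spectrum center.
    h, w = gray.shape
    yy, xx = np.ogrid[:h, :w]
    radius = np.hypot(yy - h / 2, xx - w / 2)
    max_radius = np.hypot(h / 2, w / 2)

    high = spectrum[radius > cutoff * max_radius].sum()
    return float(high / spectrum.sum())
```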
Increasing speed, staying ahead
Speed is just as crucial as accuracy when it comes to preventing AI-powered fraud. Modern security systems must be capable of analyzing thousands of data points within milliseconds to detect synthetic content. Backing these real-time detection systems, automated response mechanisms can instantly block suspicious activity and alert security teams.
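To illustrate the idea, the sketch below maps a model’s risk score to an automated action while logging decisions that exceed a latency budget. The thresholds, budget and function names are hypothetical and stand in for whatever policy an institution actually enforces.

```python
import logging
import time

logger = logging.getLogger("fraud-response")

BLOCK_THRESHOLD = 0.90   # hypothetical score above which activity is blocked
REVIEW_THRESHOLD = 0.70  # hypothetical score that triggers analyst review
LATENCY_BUDGET_MS = 50   # hypothetical per-decision latency target

def respond(session_id: str, risk_score: float, start_time: float) -> str:
    """Map a risk score to an automated action and log latency against a budget."""
    elapsed_ms = (time.perf_counter() - start_time) * 1000
    if elapsed_ms > LATENCY_BUDGET_MS:
        logger.warning("Decision for %s exceeded latency budget (%.1f ms)",
                       session_id, elapsed_ms)

    if risk_score >= BLOCK_THRESHOLD:
        logger.error("Blocking session %s (score=%.2f)", session_id, risk_score)
        return "block"
    if risk_score >= REVIEW_THRESHOLD:
        logger.info("Flagging session %s for review (score=%.2f)", session_id, risk_score)
        return "review"
    return "allow"
```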
Predictive analytics can also quickly identify emerging fraud patterns before they become widespread. Through the analysis of transaction and interaction data, AI security systems are capable of anticipating and preparing for new attack vectors. When this proactive approach is combined with rapid deployment for security updates, organizations are able to adapt their defenses just as quickly as fraudsters can evolve.
Robust data governance
Training AI models that can accurately distinguish between legitimate and fraudulent activity requires high-quality, well-managed data. That means strict data quality standards and rigorous practices for data collection, storage and usage.
Modern compliance frameworks must specifically address AI-related risks, giving organizations the impetus to maintain comprehensive audit trails and conduct regular security audits that evaluate their AI models’ behavior, data integrity and ability to detect synthetic content. Anti-bias controls must also be in place to ensure systems don’t respond unfairly to minority segments of the market.
A governance-first approach helps to ensure AI security systems are performing effectively, remaining compliant and protecting sensitive customer information.
The role of regulation and ethical standards
While regulatory frameworks are still evolving, numerous regulators such as the CFPB, DOJ, FTC and FINRA have issued statements and orders specific to the use of AI. Privacy and data protection laws at the federal and state level must also be taken into consideration. The financial industry has proactively developed self-regulatory best practices for AI security, with leading institutions collaborating through industry working groups to share knowledge and ethical best practices for fraud prevention and identity verification.
These efforts are creating a comprehensive framework that allows institutions to innovate with AI-powered security solutions to combat emergent threats while maintaining ethical standards and the customer experience.
The future of generative AI security
As we look toward the future, a high level of industry collaboration, including cross-industry partnerships, will be needed to create new defense technologies. Emerging quantum computing applications have the potential to revolutionize fraud detection, and advanced neural networks are already in development specifically to counter the generation of synthetic content. Blockchain technology is also being integrated into verification systems, ensuring tamper-proof audit trails and helping verify the authenticity of credentials.
Next-gen identity solutions will combine all of these advances into robust systems that pair physical and behavioral biometrics. Moving forward, success in this field will require staying agile and embracing the power of emergent AI technology while understanding its risks and maintaining strong customer trust, both in how the technology is used and in how it is defended against.
Want to stay ahead of emergent AI fraud threats? Check out our Digital Fraud Playbook for comprehensive insights and strategies or browse Mitek’s advanced AI solutions suite to learn how you can protect your business from sophisticated synthetic attacks.
Download the Digital Fraud Playbook now
![Adam Bacia - Senior Director of Product Marketing at Mitek](/files/styles/icon/public/img/misc/adam-bacia-senior-director-of-product-marketing-at-mitek.jpeg?itok=eXlnwWsz)
About Adam Bacia
Adam is Senior Director of Product Marketing at Mitek.