The digital fraud dilemma: navigating identity verification challenges in the age of generative AI

August 26, 2024 by Adam Bacia

In today’s hyper-connected world, digital fraud has become a pervasive and rapidly evolving threat. Cybercriminals increasingly employ advanced tools like AI and deepfakes to target vulnerabilities, leaving businesses in a constant game of 'whack-a-mole' against a barrage of new and sophisticated attacks.

Deloitte's Center for Financial Services predicts that by 2027, genAI could enable fraud losses to reach $40 billion in the United States alone. To stay ahead, companies must adopt a comprehensive approach to digital fraud, proactively enhancing their identity verification processes and fortifying their defenses with AI-driven detection.


Understanding digital fraud

At the broadest level, digital fraud refers to any fraudulent activity conducted online or through digital means. A growing concern for businesses is the threat posed by digitally created content and media. Advances in generative AI have enabled the creation of hyper-realistic content that can be almost indistinguishable from authentic material.

This technology, cheap and easy to use, poses significant risks for businesses reliant on online trust. Fraudsters leverage these tools to exploit gaps in identity verification processes or target companies slow to address new attack vectors.


How generative AI supercharges digital fraud

As AI technology evolves, so do the methods fraudsters use to exploit weaknesses in digital identity verification systems. Deepfakes often capture media attention, but they are only one of several AI-driven tactics used in identity fraud. Key tactics include:

Deepfake Attacks

Deepfakes are synthetic media that have been digitally manipulated to realistically replace one person's likeness with another's. The media can be audio, image, or video content and is typically used to make it appear that a real person said or did something they never actually did. By leveraging neural networks and vast amounts of data, deepfakes can mimic voices, facial expressions, and even gestures in real time during an identity verification process.
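
One line of research on detecting AI-generated imagery looks at frequency-domain artifacts: some generative pipelines leave a distribution of high-frequency energy that differs from real camera output. The sketch below is a minimal illustration of that idea, not Mitek's detector; it assumes Pillow and NumPy, and the file name is a placeholder. It computes a radially averaged power spectrum and reports how much energy sits in the highest frequencies.

```python
# Illustrative heuristic only: production deepfake detection relies on trained
# models and many combined signals, not a single hand-tuned statistic.
import numpy as np
from PIL import Image

def radial_power_spectrum(path: str, size: int = 256) -> np.ndarray:
    """Radially averaged power spectrum of a grayscale image."""
    img = Image.open(path).convert("L").resize((size, size))
    f = np.fft.fftshift(np.fft.fft2(np.asarray(img, dtype=float)))
    power = np.abs(f) ** 2
    y, x = np.indices(power.shape)                           # pixel coordinates
    r = np.hypot(x - size // 2, y - size // 2).astype(int)   # radius from center
    sums = np.bincount(r.ravel(), weights=power.ravel())
    counts = np.bincount(r.ravel())
    return sums / np.maximum(counts, 1)                      # mean power per radius

def high_freq_ratio(spectrum: np.ndarray) -> float:
    """Share of spectral energy in the top third of frequencies; generated
    images often show an unusual amount (or periodicity) of energy here."""
    cutoff = int(len(spectrum) * 2 / 3)
    return float(spectrum[cutoff:].sum() / spectrum.sum())

if __name__ == "__main__":
    ratio = high_freq_ratio(radial_power_spectrum("selfie.jpg"))  # placeholder path
    print(f"high-frequency energy ratio: {ratio:.4f}")
    # Any decision threshold would have to be calibrated on known-good captures.
```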

Injection Attacks

An injection attack occurs when an attacker inserts, or "injects", malicious information into a vulnerable application or system. In software, hackers often inject code to force an application to execute unintended commands or processes. In identity verification, the attack typically takes the form of digitally manipulated images or video injected into a device through a virtual camera or emulator and associated with a digitally created fake ID.
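
One concrete injection signal is the presence of a virtual camera driver on the capture device. The sketch below is a hedged, Linux-only illustration that reads /sys/class/video4linux; the signature blocklist is an assumption, not an exhaustive or official list. Real products combine checks like this with stream-level telemetry rather than trusting device names alone.

```python
# Minimal sketch of one injection-attack signal: known virtual-camera drivers.
# Linux-only; the signature list below is illustrative, not exhaustive.
from pathlib import Path

VIRTUAL_CAMERA_SIGNATURES = ("obs", "virtual", "manycam", "v4l2loopback", "droidcam")

def suspicious_video_devices() -> list[str]:
    flagged = []
    for dev in Path("/sys/class/video4linux").glob("video*"):
        name_file = dev / "name"
        if not name_file.exists():
            continue
        name = name_file.read_text().strip()
        if any(sig in name.lower() for sig in VIRTUAL_CAMERA_SIGNATURES):
            flagged.append(f"{dev.name}: {name}")
    return flagged

if __name__ == "__main__":
    hits = suspicious_video_devices()
    print("virtual cameras flagged:" if hits else "no virtual cameras flagged")
    for h in hits:
        print(" ", h)
```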

Identity Template Attacks

For identity verification, this type of fraud occurs when an organized fraud ring turns an AI-generated fake document into a template and then combines it with stolen PII, creating multiple versions of synthetic identities. Because real personal information is used, the ID is more likely to pass database checks, and in sophisticated cases, fraud rings will use a real person (often a patsy with slight variations to their look) to pass the liveness checks that many systems rely on.
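
A common counter-signal to template attacks is PII reuse: the same stolen SSN, phone number, or address surfacing across many otherwise-unrelated applications. The sketch below is a minimal illustration of that idea; the field names, threshold, and data shapes are assumptions, not a real fraud-ring detector.

```python
# Hedged sketch: flag PII values shared across suspiciously many applications.
from collections import defaultdict

def flag_reused_pii(applications: list[dict],
                    fields=("ssn", "phone", "address"),
                    max_accounts: int = 3) -> dict[str, set[str]]:
    """Map each over-shared PII value to the applicant IDs that used it."""
    seen: dict[str, set[str]] = defaultdict(set)
    for app in applications:
        for field in fields:
            value = app.get(field)
            if value:
                seen[f"{field}={value}"].add(app["applicant_id"])
    return {key: ids for key, ids in seen.items() if len(ids) > max_accounts}

if __name__ == "__main__":
    # Five synthetic identities built on one stolen SSN, each with a unique phone.
    apps = [{"applicant_id": f"A{i}", "ssn": "123-45-6789", "phone": f"555-010{i}"}
            for i in range(5)]
    for key, ids in flag_reused_pii(apps).items():
        print(f"{key} shared by {len(ids)} applicants: {sorted(ids)}")
```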


Staying ahead of digital fraud

For years, the primary means of identity proofing online has been identity document verification (IDV). Confirming that an individual holds a valid, government-issued document is a strong indicator that they are a real, legitimate person, and image comparison (comparing the document's portrait to a selfie) was added to make sure the person using the document is who they claim to be. In the past, then, circumventing these processes required fraudsters to be specialists with graphic design tools like Photoshop, able to create quality reproductions or alter genuine documents.
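
Structurally, that classic flow boils down to two checks. The sketch below shows the shape of it; verify_document and match_faces are hypothetical stand-ins for what, in practice, are vendor services or trained models, and the threshold is an assumption.

```python
# Structural sketch of the classic two-step IDV flow described above.
from dataclasses import dataclass

def verify_document(document_image: bytes) -> bool:
    """Hypothetical stub: a real check inspects security features, fonts, MRZ, etc."""
    return True

def match_faces(document_image: bytes, selfie_image: bytes) -> float:
    """Hypothetical stub: a real matcher compares face embeddings, returning 0.0-1.0."""
    return 0.9

@dataclass
class IDVResult:
    document_valid: bool
    face_match_score: float
    passed: bool

def verify_identity(document_image: bytes, selfie_image: bytes,
                    match_threshold: float = 0.85) -> IDVResult:
    doc_ok = verify_document(document_image)           # step 1: is the document genuine?
    score = match_faces(document_image, selfie_image)  # step 2: is the presenter its holder?
    return IDVResult(doc_ok, score, doc_ok and score >= match_threshold)
```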

However, the sudden surge in AI capabilities has produced a deluge of digitally created and manipulated content, significantly lowering the bar for attempting digital fraud. This sea change puts the identity verification processes of the past squarely in the crosshairs and is forcing fraud prevention providers to react quickly with strong countermeasures.

Fortunately, artificial intelligence also offers advanced tools for detecting digitally created and AI-manipulated content. AI capabilities such as pattern recognition enable vendors to swiftly identify and address emerging threats. For instance, AI can analyze large datasets to detect anomalies, such as the repeated use of the same image across different accounts, and implement effective measures for both detection and prevention.
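
The "same image across different accounts" signal can be illustrated with a perceptual hash, which clusters near-identical images even after resizing or re-encoding. The sketch below assumes Pillow, and the account threshold is an invented example value.

```python
# Hedged sketch: detect the same image reused across accounts via aHash.
from collections import Counter
from PIL import Image

def average_hash(path: str) -> int:
    """64-bit aHash: 8x8 grayscale thumbnail, each bit = pixel above the mean."""
    pixels = list(Image.open(path).convert("L").resize((8, 8)).getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | int(p > mean)
    return bits

def flag_repeated_images(account_to_image: dict[str, str],
                         max_accounts: int = 2) -> dict[int, int]:
    """Return hashes that appear on more than max_accounts accounts."""
    counts = Counter(average_hash(path) for path in account_to_image.values())
    return {h: n for h, n in counts.items() if n > max_accounts}
```

In production, this exact-match lookup would typically become a Hamming-distance search over an index, so slightly cropped or recompressed copies still cluster together, and per-account counts would be tracked over time to measure repetition velocity.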


How Mitek is innovating digital security

At Mitek, we believe in the proven cybersecurity principle of ‘defense in depth’ and are actively creating bundles of functionality that work in concert to catch multiple types of digital fraud, rather than forcing organizations to add defenses ad hoc or reactively. Whether it’s deploying neural net model defenses, providing virtual camera detection, reviewing image repetition velocity, or alerting on tampered video streams and anomalies in image telemetry, we’re working to ensure individuals and businesses can have security across all their digital interactions.
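
As a rough illustration of layering (not Mitek's actual scoring model; the signal names, weights, and threshold below are invented for the example), defense in depth means a decision rests on a combination of independent detectors rather than any single check.

```python
# Illustrative defense-in-depth scoring: several independent fraud signals,
# each normalized to 0.0-1.0, combine into one risk score. Weights are made up.
WEIGHTS = {
    "deepfake_model_score":   0.35,  # neural net content detector
    "virtual_camera_flag":    0.25,  # injection-attack signal
    "image_repetition_score": 0.20,  # velocity of reused images
    "telemetry_anomaly":      0.20,  # tampered stream / odd capture metadata
}

def combined_risk(signals: dict[str, float]) -> float:
    """Weighted sum over all configured detectors; missing signals count as 0."""
    return sum(weight * signals.get(name, 0.0) for name, weight in WEIGHTS.items())

if __name__ == "__main__":
    # A single strong injection signal can trigger review even when others are low.
    session = {"deepfake_model_score": 0.1, "virtual_camera_flag": 1.0,
               "image_repetition_score": 0.4, "telemetry_anomaly": 0.2}
    risk = combined_risk(session)
    print(f"risk={risk:.2f} ->", "step-up review" if risk >= 0.4 else "pass")
```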


Learn more about Mitek's digital fraud detection solutions