The Cost of Miscommunication: Deepfake Myths in AML and KYC

Killian H. Yates | November 25, 2024 | yatesk4253@gmail.com | www.LinkedIn.com/in/KillianYates | www.fraudfrog.blogspot.com

1. Introduction

In the rapidly evolving landscape of financial crime and cybersecurity, accurate terminology is crucial for effective solutions. Unfortunately, the term “deepfake” is being misapplied in ways that hinder clear communication and problem-solving.

The misuse of this term has led to confusion, particularly in Know Your Customer (KYC) and Anti-Money Laundering (AML) efforts. Fraudsters are increasingly using advanced technologies, such as real-time likeness filters and AI-generated faces, to exploit vulnerabilities in verification systems. These practices, while serious, are fundamentally different from true deepfake threats, which involve targeted impersonation of specific individuals.

This paper seeks to clarify the distinction between these technologies and their respective threats. By examining the misuse of likeness filters, random face fillers, and true deepfakes, we can ensure that financial institutions and cybersecurity teams address these challenges with the precision and focus they require.

2. Defining the Problem

A. What Is a True Deepfake?

A true deepfake involves the use of machine learning models trained on specific individuals to convincingly replicate their likeness, voice, or actions. These technologies aim to deceive audiences by creating hyper-realistic content that mimics real people.

Examples of true deepfakes include:

  • Fraudulent corporate communications, such as fake video or voice authorizations by executives to approve large transactions.
  • Extortion schemes leveraging fabricated videos of victims’ family members to coerce ransom payments.
  • Politically motivated deepfakes, like false speeches attributed to politicians, designed to manipulate public opinion or destabilize trust.

B. What Is Being Mislabeled as a Deepfake?

The term “deepfake” is increasingly being misused to describe fraud that involves:

  • Real-time likeness filters, similar to those popularized by social media platforms like Snapchat, but repurposed for criminal activities.
  • AI-generated random face fillers used in combination with stolen or fabricated personal information to create fraudulent accounts.

These methods do not involve impersonating specific individuals but rather aim to bypass verification systems and exploit vulnerabilities.

C. Why This Mislabeling Matters

The overuse of the term “deepfake” creates several critical issues:

  • It suggests a widespread epidemic of individual likeness theft that does not align with actual trends.
  • It shifts attention and resources away from the unique and distinct threats posed by true deepfakes.
  • It fosters public misunderstanding, making it harder to educate stakeholders about the specific nature of these threats and the solutions needed to address them effectively.

3. Likeness Filters, Random Face Fillers, and Deepfakes: Key Differences

A. Likeness Filters

Likeness filters are technologies that overlay or modify live appearances without attempting to replicate a specific individual’s likeness. They are widely known for their playful use on social media platforms like Snapchat but have been weaponized by fraudsters to bypass biometric verification systems.

In criminal contexts, likeness filters are used to:

  • Alter the appearance of fraudsters during live KYC verification processes.
  • Create fraudulent identities to access platforms such as ride-sharing services or financial institutions.

These filters are easy to implement and require minimal technical expertise, making them a preferred tool for lower-level fraud schemes.

B. Random Face Fillers

Random face fillers involve the use of AI-generated or stolen generic images to represent fabricated or compromised personal identities. Unlike likeness filters, these images are static and paired with fraudulent data to pass verification.

Common use cases include:

  • Setting up shell bank accounts for laundering money.
  • Creating drop accounts to facilitate illegal transactions without directly implicating a real person.

This method allows fraudsters to avoid the complexities of identity theft while leveraging fake personas to exploit systems at scale.

C. Deepfakes

True deepfakes involve advanced AI technology to create highly realistic representations of specific individuals. These are not generic images or filters but tailored impersonations designed to deceive audiences.

Examples include:

  • Fake video calls from executives authorizing financial transfers.
  • Fabricated speeches or public announcements by high-profile individuals.
  • Extortion schemes that use convincingly faked images or videos of a victim’s family members.

Deepfakes require significant computational resources and expertise, distinguishing them from the simpler tactics of likeness filters or random face fillers.

D. Differentiation in Threat Actors

The actors behind these tactics differ significantly:

  • Likeness filters and random face fillers are typically employed by organized fraud rings or opportunistic criminals targeting large-scale account creation for low-stakes fraud.
  • Deepfakes are the domain of highly skilled adversaries, often targeting high-value individuals or organizations for high-stakes crimes like corporate espionage or political manipulation.

4. Real-World Examples and Implications

A. Misuse of Likeness Filters

Likeness filters are being used by fraudsters to exploit weaknesses in biometric verification systems. For example:

  • Gig Economy Fraud: Fraudsters use filters to pass driver verification processes on ride-sharing platforms like Uber, allowing them to create multiple accounts or impersonate legitimate drivers.
  • Financial Onboarding: Likeness filters are used to pass live identity checks during account creation, enabling access to banking services under false identities.

These schemes undermine the integrity of verification processes and contribute to the proliferation of shell accounts used for fraudulent activities.

B. Random Face Fillers in Financial Fraud

Random face fillers are increasingly used to create synthetic identities for financial crimes. Examples include:

  • Money Laundering Operations: Fraudsters pair random AI-generated faces with compromised or fabricated personal information to open bank accounts, enabling the laundering of illicit funds.
  • Synthetic Account Networks: Fraud rings use fake personas with random faces to establish a network of accounts for large-scale fraud, such as orchestrating loan schemes or e-commerce scams.

The use of random face fillers shifts the focus from stealing personal likenesses to exploiting systemic vulnerabilities in verification systems.

C. Deepfake-Driven Crimes

The implications of true deepfakes are far-reaching and often catastrophic. Real-world examples include:

  • Corporate Fraud: Criminals have used deepfake audio to impersonate CEOs, authorizing fraudulent transfers of millions of dollars.
  • Political Manipulation: Deepfakes of world leaders delivering inflammatory speeches have been created to incite discord and disrupt diplomatic relations.
  • Extortion Scams: Deepfake videos of kidnapped family members have been used to coerce ransom payments from unsuspecting victims.

These examples highlight the high stakes and advanced nature of deepfake crimes, which pose threats distinct from those associated with likeness filters or random face fillers.

D. Implications Across the Spectrum

The consequences of these technologies include:

  • For Likeness Filters and Random Face Fillers: Increased difficulty in maintaining robust KYC and AML processes, leading to systemic vulnerabilities.
  • For Deepfakes: Threats to trust in media, corporate security, and political stability.

Understanding these differences is critical to developing targeted responses that address each threat appropriately.

5. The Risks of Mislabeling

A. Misaligned Priorities

When likeness filters, random face fillers, and deepfakes are conflated under the term “deepfake,” resources are often allocated inefficiently. For example:

  • Financial institutions may invest in advanced deepfake detection tools that are irrelevant to addressing the immediate threat of likeness filters used in KYC fraud.
  • AML efforts may overlook the need to identify synthetic identities created with random face fillers, focusing instead on traditional identity theft schemes.

By failing to differentiate between these threats, organizations risk leaving critical vulnerabilities unaddressed, while expending resources on solutions misaligned with the actual risks.

B. Overlooked Specific Threats

Mislabeling these technologies blurs their unique characteristics, resulting in:

  • Underestimating the Threat of Random Face Fillers: These synthetic identities can evade many traditional detection systems, enabling large-scale fraud operations that are difficult to trace.
  • Overgeneralizing Deepfake Risks: While true deepfakes pose significant dangers, the misapplication of this term suggests an exaggerated prevalence of individual likeness theft in everyday financial fraud.

This confusion not only hampers the ability to address each threat effectively but also fosters a false sense of security where real dangers are ignored.

C. Erosion of Public Understanding

The misuse of terminology creates unnecessary panic among the public and weakens trust in the systems meant to protect them. For example:

  • Consumers may incorrectly believe their likeness is at risk of being deepfaked for fraudulent accounts, when in reality, it is compromised information—not likeness—that is most commonly exploited.
  • Stakeholders may struggle to comprehend the nuanced distinctions, leading to ineffective policymaking and enforcement.

A clear and accurate understanding of these terms is essential to building informed responses and fostering public confidence.

6. Solutions and Best Practices

A. Terminological Precision

To address the confusion and ensure focused efforts, it is vital to adopt precise terminology:

  • Likeness Spoofing: Refers to the use of real-time filters to manipulate biometric verification.
  • Synthetic Identity Fraud: Encompasses the use of random face fillers paired with fabricated or stolen personal information.
  • Deepfake Technology: Reserved for advanced AI impersonations of specific individuals.

Clear definitions not only improve communication but also enable targeted solutions that address each issue on its own terms.

B. Tailored Responses to Each Threat

Developing specific strategies for each fraud technique is crucial:

  1. Likeness Filters:
    • Enhance live biometric checks by integrating movement-based or liveness detection technologies.
    • Implement secondary verification steps, such as manual review for flagged cases.
  2. Random Face Fillers (see the sketches after this list):
    • Utilize identity verification methods that cross-check the submitted face against trusted reference images, such as the photo on the applicant’s identity document.
    • Employ machine learning to detect patterns commonly associated with synthetic account creation.
  3. Deepfakes:
    • Deploy deepfake detection tools, particularly in environments prone to high-stakes impersonation, such as corporate communication networks.
    • Conduct regular training for employees on recognizing deepfake content, especially in high-security roles.
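
To make the random-face-filler countermeasures concrete, below are two brief Python sketches. They are illustrations under stated assumptions, not production implementations.

First, a minimal cross-check of the face submitted during onboarding against the photo on the applicant’s identity document, using the open-source face_recognition library. A random AI-generated face will generally fail this match. The file names are placeholders, and 0.6 is simply the library’s default distance tolerance.

```python
# Sketch: compare an onboarding selfie against the applicant's ID photo.
# File names are placeholders for illustration.
import face_recognition

doc_image = face_recognition.load_image_file("id_document_photo.jpg")
selfie_image = face_recognition.load_image_file("onboarding_selfie.jpg")

doc_encodings = face_recognition.face_encodings(doc_image)
selfie_encodings = face_recognition.face_encodings(selfie_image)

if not doc_encodings or not selfie_encodings:
    # No detectable face in one of the images: do not auto-approve.
    print("Face not detected: route to manual review")
else:
    # compare_faces returns True when the embedding distance is within
    # the tolerance (0.6 is the library default).
    match = face_recognition.compare_faces(
        [doc_encodings[0]], selfie_encodings[0], tolerance=0.6
    )[0]
    print("Faces match" if match else "Mismatch: possible synthetic identity")
```

Second, a sketch of the pattern-detection idea: score each new signup against historical onboarding behavior with an anomaly detector. The feature names and values here are hypothetical; a real deployment would use far richer features and labeled fraud data where available.

```python
# Sketch: flag signups whose behavior deviates from historical onboarding
# patterns. Feature names and values are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

# Columns: signups_from_same_device, minutes_to_complete_kyc,
#          selfie_reuse_count, mailing_address_reuse_count
historical_signups = np.array([
    [1, 12.0, 0, 1],
    [1,  9.5, 0, 1],
    [1, 15.2, 0, 2],
    [2, 11.1, 0, 1],
    [1,  8.7, 0, 1],
    # ...in practice, thousands of legitimate onboarding records
])

detector = IsolationForest(contamination=0.01, random_state=42)
detector.fit(historical_signups)

# A burst of fast signups reusing one selfie and one address, a pattern
# typical of fraud rings building synthetic account networks.
new_signup = np.array([[14, 1.3, 6, 9]])
if detector.predict(new_signup)[0] == -1:
    print("Flag for manual review: matches synthetic-account pattern")
```

In both cases the output is a triage signal, not a verdict: flagged signups should feed the manual-review step described above for likeness filters.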

C. Public and Industry Education

Educating both the public and industry stakeholders is critical for building resilience against these threats:

  • For Financial Institutions and Cybersecurity Teams:
    • Offer training on the distinctions between likeness spoofing, synthetic fraud, and deepfakes.
    • Share case studies to illustrate real-world applications and risks of each technology.
  • For the Public:
    • Provide clear, accessible information on the actual risks of identity theft and fraud.
    • Promote awareness campaigns to dispel misconceptions about deepfake prevalence in financial fraud.

D. Collaboration Across Sectors

Addressing these issues effectively requires collaboration between industries and technology providers:

  • Share threat intelligence and best practices to stay ahead of emerging fraud techniques.
  • Foster partnerships with AI developers to create tools that can differentiate between synthetic and authentic content.

By implementing these best practices, organizations can safeguard their systems and clients while fostering a more informed and resilient public.

7. Case Studies

A. Misuse of Likeness Filters

Case Study: Fraudulent Account Creation Using Likeness Filters

In 2024, a fraud ring exploited real-time likeness filters to bypass the biometric verification systems of several financial institutions. By altering their appearances during live KYC checks, its members created multiple fraudulent accounts, gaining unauthorized access to financial services and enabling money laundering activities.

B. Random Face Fillers in Financial Fraud

Case Study: Synthetic Identities for Money Laundering

A criminal organization utilized AI-generated faces combined with fabricated personal information to create synthetic identities. These identities were used to open bank accounts across multiple institutions, through which they laundered millions of dollars. The scheme went undetected for months due to the convincing nature of the synthetic profiles.

C. Deepfake-Driven Crimes

Case Study: CEO Fraud via Deepfake Technology

In 2024, the CEO of WPP, the world’s largest advertising group, was targeted in an elaborate deepfake scam. Fraudsters cloned the CEO’s voice and used AI-generated video to impersonate him during a Microsoft Teams meeting, attempting to deceive employees into transferring funds. Vigilant WPP executives recognized the impersonation, and the scam failed.

These case studies illustrate the diverse methods fraudsters employ, highlighting the necessity for precise terminology and tailored countermeasures in combating financial fraud.

8. Continue the Conversation

As we navigate the complex landscape of fraud prevention and cybersecurity, your insights and experiences are invaluable. Here are some questions to consider and discuss:

On Terminology and Threat Differentiation:

  1. How has the misuse of the term “deepfake” impacted your organization’s approach to fraud detection and prevention?
  2. What alternative terms or classifications would you suggest to improve clarity when addressing threats like likeness filters and synthetic identities?

On Technology and Tools:

  1. Are current biometric verification tools equipped to handle the evolving sophistication of fraud techniques such as likeness filters and random face fillers? If not, what changes are needed?
  2. What role should AI developers play in creating tools to detect and prevent misuse of their technologies in financial fraud?

On Policy and Collaboration:

  1. How can financial institutions and cybersecurity teams collaborate more effectively to address distinct threats like synthetic identity fraud and true deepfake impersonations?
  2. What policy changes or regulations could help mitigate the risks posed by these technologies without stifling innovation?

On Public Awareness and Education:

  1. What steps can be taken to better educate the public about the real risks of deepfake technology versus the myths surrounding it?
  2. How can we involve non-technical stakeholders, such as customers and employees, in the fight against evolving fraud methods?

On Emerging Threats:

  1. As deepfake technology becomes more accessible, what new threats do you foresee in the realms of finance and cybersecurity?
  2. How can organizations stay ahead of these threats while balancing usability and customer experience?

Your perspective is crucial in shaping the conversation around these challenges. Share your thoughts, experiences, or case studies in the comments to continue this dialogue and help drive actionable change.
