Threat actors are leveraging ProKYC, a newly discovered deepfake tool, to bypass two-factor authentication on cryptocurrency exchanges. Designed specifically for New Account Fraud (NAF) attacks, the tool can create verified but synthetic accounts by mimicking facial recognition authentication.
By overcoming these security measures, threat actors can engage in money laundering, create mule accounts, and perpetrate other fraudulent activities.
The prevalence of such attacks is increasing, with losses exceeding $5.3 billion in 2023 alone, and the sophistication of ProKYC highlights the growing threat that deepfake technology poses to financial institutions.
AI-powered tools are enhancing cybercriminals' ability to bypass multi-factor authentication (MFA) by generating highly realistic forged documents. Traditionally, fraudsters relied on low-quality scanned documents purchased from the dark web.
AI-driven tools, however, can now create highly detailed forgeries that are difficult to distinguish from authentic documents, making it easier for cybercriminals to deceive security systems and gain unauthorized access to sensitive information. This poses a significant challenge to organizations seeking to protect their data and systems from malicious attacks.
ProKYC is malicious software sold on the dark web that exploits deep learning technology to circumvent authentication processes. It can generate counterfeit documents and realistic videos of fabricated identities, thereby deceiving facial recognition systems.
The tool's effectiveness has been demonstrated by its ability to bypass ByBit's security measures, which poses a significant threat to online platforms: it undermines their authentication mechanisms and facilitates fraudulent activity at scale.
The attacker leverages AI-generated deepfakes to create a synthetic identity, complete with a forged government document (e.g., an Australian passport) and a facial-recognition bypass video.
The video follows the facial recognition system's prompts (e.g., head movements) and is injected into the system in place of a live camera feed, deceiving the system and enabling a successful account fraud attack.
Detecting account fraud attacks is challenging because of the trade-off between overly restrictive biometric authentication systems, which produce false positives that lock out legitimate users, and lax controls, which increase the risk of fraud.
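This trade-off can be made concrete with a minimal sketch of how a matching threshold shifts the false reject rate (locking out real users) against the false accept rate (letting fraud through). The similarity scores below are hypothetical, purely for illustration; real systems tune thresholds on large labelled datasets.

```python
def error_rates(genuine_scores, impostor_scores, threshold):
    """Return (false_reject_rate, false_accept_rate) at a given threshold."""
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    return frr, far

# Hypothetical similarity scores (0..1) from a face matcher.
genuine = [0.91, 0.88, 0.95, 0.72, 0.85, 0.90]   # legitimate users
impostor = [0.40, 0.65, 0.78, 0.55, 0.30, 0.82]  # fraudulent attempts

for t in (0.60, 0.75, 0.90):
    frr, far = error_rates(genuine, impostor, t)
    print(f"threshold={t:.2f}  FRR={frr:.2f}  FAR={far:.2f}")
```

Raising the threshold drives the false accept rate toward zero but rejects more legitimate users, which is exactly the tension the article describes.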
Unusually high-quality images and videos are a red flag, since they often indicate digital forgeries. Inconsistencies between facial features and unnatural eye or lip movements during biometric authentication can also signal potential fraud and warrant manual verification.
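The red flags above could be triaged with a simple rule set that routes suspicious sessions to a human reviewer. The `FrameStats` fields and threshold values below are hypothetical assumptions for illustration; production liveness checks rely on trained presentation-attack-detection models, not fixed cut-offs.

```python
from dataclasses import dataclass

@dataclass
class FrameStats:
    resolution: tuple        # (width, height) of the submitted video
    blink_rate_hz: float     # blinks per second measured across frames
    lip_sync_error: float    # 0 (perfect audio/lip sync) .. 1 (no correlation)

def review_flags(stats: FrameStats) -> list:
    """Collect red flags that should route the session to manual review."""
    flags = []
    w, h = stats.resolution
    if w * h >= 3840 * 2160:                    # suspiciously pristine 4K feed
        flags.append("unusually high quality")
    if not 0.1 <= stats.blink_rate_hz <= 0.8:   # unnatural eye movement
        flags.append("abnormal blink rate")
    if stats.lip_sync_error > 0.3:              # lips do not match audio
        flags.append("lip movement mismatch")
    return flags

# A deepfake-style session trips all three checks:
print(review_flags(FrameStats((3840, 2160), 0.02, 0.5)))
```

Any non-empty flag list would mark the session for the manual verification step the article recommends, rather than rejecting it outright.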
According to Cato Networks, organizations must proactively defend against AI-driven threats by collecting threat intelligence from a variety of sources, including human and open-source intelligence.
Because threat actors are constantly evolving their use of deepfake technologies and software, it is essential to stay informed about the most recent trends in cybercrime.