
AI: A Double-Edged Sword in Identity Verification

Identity theft has become more widespread as the internet has grown larger and more intelligent. The recent wave of artificial intelligence has handed identity thieves new capabilities, most notably deepfakes. With AI, creating realistic photo, video, and audio manipulations to deceive identity verification and prevention systems is now easier and cheaper than ever.

Perpetrators commit identity theft by stealing personal information such as identity documents, financial details, and email or social media credentials. The stolen identity is then used for illicit purposes: making purchases, opening financial accounts, and withdrawing money.

The advancements in AI technology aren’t just enabling perpetrators to commit fraud. Anti-fraud solution providers like Arritech are leveraging the same technology to mitigate it. Thanks to collective knowledge and expertise, our organisation uses AI to recognise subtle differences between authentic and synthetic (AI-generated) images or videos. AI-powered identity verification solutions, human verification, and manual investigations minimise fraud attempts in the ongoing AI-versus-AI showdown. In our opinion, effective collaboration between the expansive capabilities of AI and the proficiency of anti-fraud experts represents a powerful strategy for mitigating identity theft-related fraud.

The crypto and fintech industries, followed by iGaming, stand out as the most targeted by deepfake perpetrators due to these sectors’ high level of digitalisation and monetary nature. Cybercriminals stand to profit significantly from exploiting vulnerabilities in onboarding and transactional flows in digital applications.

The crypto industry is the most targeted of all: crypto exchanges and digital over-the-counter (OTC) desks facilitating fiat and crypto transactions are especially susceptible to fraud attempts. The digitised nature of crypto platforms and the web of regulatory challenges create vulnerabilities for perpetrators to exploit. Instacoins is a good example of a modern OTC desk that manages these vulnerabilities by implementing strict KYC processes, including automatic and manual vetting of personal and financial details, along with Anti-Money Laundering (AML) checks.

Andrew Martinelli, Director of Compliance at Instacoins, shared an overview of their workflows for robust identity verification: “We have automated KYC workflows during onboarding processes to establish identities and risk profiles. We then continuously screen the activity and transactions of our users to minimise risks. We have the capacity to supplement our automated and machine-learning solutions with manual evaluations of all personal and financial details to further mitigate fraud attempts.”
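The layered workflow Martinelli describes, with automated onboarding checks feeding into either approval or manual escalation, can be sketched in broad strokes. The Python snippet below is a simplified illustration only, not Instacoins’ actual system: the risk factors, weights, and thresholds are hypothetical assumptions chosen to show the routing logic.

```python
from dataclasses import dataclass, field

@dataclass
class Applicant:
    """Minimal stand-in for data collected during onboarding (illustrative)."""
    document_verified: bool   # e.g. ID document passed an automated check
    on_sanctions_list: bool   # result of an AML screening lookup
    country_risk: float       # 0.0 (low) to 1.0 (high), from a hypothetical risk table
    notes: list = field(default_factory=list)

def risk_score(a: Applicant) -> float:
    """Combine simple signals into a single onboarding risk score in [0, 1]."""
    score = 0.0
    if not a.document_verified:
        score += 0.5              # unverified documents weigh heavily
    if a.on_sanctions_list:
        score += 1.0              # sanctions hit maxes out the score
    score += 0.3 * a.country_risk
    return min(score, 1.0)

def onboarding_decision(a: Applicant) -> str:
    """Route the applicant: auto-approve, escalate to a human, or reject."""
    if a.on_sanctions_list:
        return "reject"           # hard AML stop, no automated override
    if risk_score(a) < 0.3:
        return "approve"          # low risk: fully automated path
    return "manual_review"        # otherwise: escalate to an analyst

print(onboarding_decision(Applicant(document_verified=True,
                                    on_sanctions_list=False,
                                    country_risk=0.2)))   # approve
```

The key design point mirrored from the quote is that automation handles the clear-cut cases while ambiguous ones are handed to manual evaluation rather than decided by the machine alone.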

At Arritech, we remain committed to providing services to crypto, fintech, and iGaming companies. As technology advances, so does the risk of more sophisticated fraud attempts involving identity theft. Headquartered in Malta, the global hub for iGaming, Arritech has benefited from the growing influx of crypto and fintech companies drawn to the country by a favourable regulatory and financial climate. This proximity has enabled us to foster relationships that played a pivotal role in crafting robust identity verification solutions tailored to the unique needs of these vulnerable industries.

AI will become increasingly important in verifying legitimate users and detecting fraud attempts in the coming years. Fraud detection must now extend beyond onboarding new users to online platforms: continuous monitoring and analysis are required to detect signs of fraud committed by users who initially appeared legitimate. This proactive approach can mitigate more complex risks that persist even when advanced fraud prevention tools are in place at onboarding. As technology and AI continue to develop, more advanced synthetic fraud will emerge, requiring anti-fraud solutions that cover the whole lifecycle of online platforms.
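As a rough illustration of continuous monitoring, as opposed to a one-time onboarding check, the sketch below flags transactions that deviate sharply from a user’s own historical baseline. This is a deliberately simplified heuristic, not a production fraud model; the z-score approach, the threshold, and the minimum-history cutoff are all illustrative assumptions, and real systems would combine many more signals (device, velocity, geography).

```python
from statistics import mean, stdev

def is_anomalous(history: list[float], amount: float,
                 z_threshold: float = 3.0) -> bool:
    """Flag a transaction whose amount sits far outside the user's baseline.

    Computes a z-score against the user's past transaction amounts and
    flags anything beyond z_threshold standard deviations from the mean.
    """
    if len(history) < 5:
        # Too little history to establish a baseline: defer to other checks.
        return False
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return amount != mu       # flat history: any deviation is unusual
    return abs(amount - mu) / sigma > z_threshold

history = [20.0, 25.0, 22.0, 30.0, 24.0, 26.0]
print(is_anomalous(history, 27.0))    # False: within the normal range
print(is_anomalous(history, 500.0))   # True: flag for investigation
```

The point of the example is the lifecycle shift the paragraph describes: the check runs on every transaction after onboarding, so a user who passed verification on day one can still be flagged on day one hundred.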

We also expect governments to step up and introduce regulations to minimise the negative impact of AI technology. However, regulating AI is complex because it is a dual-use technology: as explained above, artificial intelligence has become a powerful tool for preventing fraud while also making advanced attacks more accessible to perpetrators. Future AI regulations should empower companies, balancing legitimate use against limits on illicit activity.