AI-enabled fraud and the creation of identification deepfakes are a growing threat to operators, but there are ways operators can better protect themselves and players.
As technology evolves, so do the risk factors for operators and consumers alike. The use of AI for the creation of identification deepfakes and generation of synthetic identities is a growing risk for operators, one that may necessitate greater security standardisation or expensive software to mitigate.
AI is already being used to create promotional content for illegal operators. Over the Easter weekend, Sky News reported it had discovered an AI-generated video of its own presenters touting gambling apps.
Older footage of news presenter Matt Barbet was used to create a video that appeared to show him telling another Sky News correspondent about an iPhone game on which they had won £500,000. The fake adverts were spread through social media and used to market illegal gambling sites embedded in gaming applications on the Apple App Store.
AML risk of AI
In April, the UK’s Gambling Commission issued an update warning of the prevalence of AI deepfakes connected to emerging money laundering and terrorist financing risks.
Last year, the UK’s Joint Money Laundering Intelligence Taskforce published an amber alert on the use of AI to bypass customer due diligence checks. The UK’s National Crime Agency (NCA) also took down a website last year that was offering AI-generated identity documents, such as passports and driving licences, for just $15.
The Gambling Commission has advised all operators to train staff to assess customer documentation for signs of AI generation.
Threat actors and fraudsters are well versed in emerging technologies. With public and private services increasingly delivered through digital channels, synthetic identity theft has become a growing challenge for law enforcement.
“Synthetic identity theft is a type of fraud in which genuine and fabricated personal information are blended to generate a completely new, fake identity,” Dr Michaela MacDonald, senior lecturer in law and technology at Queen Mary University of London, tells iGB.
“Alongside voice cloning, behavioural mimicry and deepfake technologies, AI-generated synthetic identities can easily bypass traditional Know Your Customer (KYC) systems by defeating facial recognition, exploiting support chats, or spoofing voice-activated authentication.”
Research on deepfake technology from the Alan Turing Institute, published in March, said AI-enabled crime is being driven by the technology’s ability to automate, augment and vastly scale up criminal activity volumes.
That report stated: “UK law enforcement is not adequately equipped to prevent, disrupt or investigate AI-enabled crime.”
While legislation may help to deter the threat of AI-enabled crime, the institute called for a “more robust and direct approach” – one that is centred around the “proactive deployment” of AI systems in law enforcement.
How are regulators likely to respond to deepfake incidents?
Regulators across the world tend to be strict on AML infringements, an area with historic connections to the gambling industry given the vast sums of money that move through it.
In the UK, the Gambling Commission hit two operators with penalties for AML and customer care failures last month. The Football Pools was ordered to pay £375,000 (€449,732/$484,417) for AML breaches. The regulator found that when AML thresholds were reached, Football Pools’ processes did not trigger hard stops; these only kicked in once a manual review had been carried out.
Corbett Bookmakers was hit with a fine of £686,070 for numerous AML failures, including failing to assess the appropriate customer, product, geographic and payment risks. The commission stated the operator had failed to take a sufficiently risk-based approach to AML.
When it comes to the developing risks AI poses to the gambling sector, the Gambling Commission has reiterated that all operators must train staff to assess documents for signs of AI generation.
Regulators can approach the issue by enabling information sharing across secure channels, promoting innovation in the sector and international cooperation, and reviewing their own frameworks.
Fast-moving technology
Annabelle Richard, legal partner at Pinsent Masons, tells iGB that given the emerging and fast-moving nature of this technology, regulators may be lenient in some early cases of AML breaches perpetrated using deepfake technology.
If operators find their systems have been bypassed in some way, but it was genuinely unclear at the time of the incident what the remedy should have been, the regulator may opt not to hit them with an AML warning or fine.
However, if systems fail, or an operator is too slow to spot something that existing tools could have caught, the regulator is unlikely to be as lenient.
“If you haven’t even engaged with the authority to say ‘I’m not sure what I can and can’t do,’ it will be considered that you didn’t do what you were supposed to, to abide by your regulatory obligations. And that’s going to be a whole different situation,” Richard states.
How gambling can mitigate the AML AI deepfake risk
The NCA says fraud is the most prevalent crime in the UK and that AI has the potential to increase the speed, scale and sophistication of online scams.
With AI technology, threat actors can target more victims or companies across international and language barriers. The use of deepfake images and videos is increasing the difficulty of fraud detection.
“The use of AI to facilitate fraud underscores the need for private industries, law enforcement and the public to all take steps to reduce the threat. The UK’s Online Safety Act puts more onus on the online platforms to take action and we are continuing to work with government and regulators to maximise its impact,” the NCA tells iGB in a statement.
The UK’s 2023 Online Safety Act set out rules to curb online fraud. Service providers are required to introduce measures to tackle fraud and terrorism, including explaining how they undertake account verification and deploying automatic detection software that finds and removes advertisements or posts linked to the sale of stolen or faked credentials.
Operators need to refresh AML processes
With the growing sophistication of the AI threat, operators need to keep up to date with best practices and technological innovations. Operators can enhance AI-based document checks with biometrics such as facial verification and liveness detection, while device fingerprinting and geolocation services would also increase detection rates.
Additionally, machine learning, applied correctly, can identify inconsistencies in player activity, providing an additional layer of security.
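As a loose illustration of that last point, the sketch below scores a new player session against historical activity using an off-the-shelf isolation forest. The features, values and thresholds are illustrative assumptions rather than an industry standard; a real deployment would feed far richer signals into a manual review queue.

```python
# A minimal sketch (not a production system) of anomaly detection over
# player sessions. Feature choices and numbers are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row is one session:
# [deposit_gbp, session_minutes, distinct_devices, geo_mismatch (0/1)]
historical_sessions = np.array([
    [50, 30, 1, 0],
    [20, 45, 1, 0],
    [80, 60, 2, 0],
    [35, 25, 1, 0],
    [60, 50, 1, 0],
] * 40)  # repeated to stand in for a real history of "normal" play

model = IsolationForest(contamination=0.02, random_state=0)
model.fit(historical_sessions)

# A new session: large deposit, short session, new devices, geo mismatch.
new_session = np.array([[5000, 5, 4, 1]])
score = model.decision_function(new_session)[0]  # lower = more anomalous

if model.predict(new_session)[0] == -1:
    print(f"Flag for manual review (anomaly score {score:.3f})")
else:
    print(f"No anomaly detected (anomaly score {score:.3f})")
```

The specific model matters less than the pattern: any system that learns a baseline of normal behaviour and routes outliers to human review adds the extra layer of security described above.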
Queen Mary University lecturer MacDonald tells iGB that several emerging technologies will help detect synthetic identities and manipulated materials, including end-to-end orchestration, data intelligence and artificial intelligence.
“These tools work together to centralise verification processes, analyse large datasets for subtle inconsistencies and leverage machine learning to detect evolving fraud patterns with greater accuracy and speed,” says MacDonald.
“However, implementation varies widely. High-quality defences require significant investment and many operators are using the same class of AI tools for verification that fraudsters are using to attack them.”
Fraud and gambling are old enemies
The gambling sector has always been a ripe target for manipulated documents and fraudulent activity, with a constant arms race between operators and fraudsters trying to slip past their security systems.
Gambling industry expert and Circle Squared consultant Mick d’Ancona tells iGB that operators have dealt with dodgy documentation from players for years.
“All that’s happening now is it’s easier to [fake documents required by operators]. But actually, if you’re a good operator and you’re [processing documents] properly, you’ve got what you need in place” already, d’Ancona says.
However, he believes mitigating the risk of fake documentation won’t be cheap, as fraud grows more sophisticated and operators must keep their processes up to date.
Smaller operators, or those in grey markets, may not be putting proper protections in place, he warns. Budget constraints or a lack of engagement with protective measures raise the risk of due diligence failures.
“If you only ask for a copy of a passport, but you don’t do a likeness check, or if you just ask for proof of funds when you don’t actually have the staff, experience and the tooling to check that it’s a legitimate bank and that everything looks right with it, you are, for sure, exposed,” d’Ancona states.
Emergence of electronic ID wallets
One method that could help curb ID fraud is the introduction of official or national digital identifications.
Digital identity wallets or applications use numerous technologies to secure and confirm identification, including cryptographic keys and biometric data such as fingerprints and facial recognition.
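The cryptographic building block behind these wallets can be sketched in a few lines: an issuing authority signs a credential once, and any verifier, such as an operator running age checks, can confirm it with the issuer’s public key. The minimal Python sketch below, using the widely available cryptography library, is a simplification with a made-up payload; real schemes such as the EU’s eIDAS wallets add richer credential formats, selective disclosure and revocation checks.

```python
# A minimal sketch of signed-credential verification. The payload format
# is a made-up example, not a real wallet or eIDAS credential schema.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Issuer side: a government authority signs the credential once.
issuer_key = Ed25519PrivateKey.generate()
credential = b'{"name": "Jane Doe", "dob": "1990-01-01", "over_18": true}'
signature = issuer_key.sign(credential)

# Verifier side: an operator holds only the issuer's public key.
issuer_public_key = issuer_key.public_key()
try:
    issuer_public_key.verify(signature, credential)
    print("Credential authentic: accept for age verification")
except InvalidSignature:
    print("Credential rejected: signature does not match")
```

Because a forged or AI-altered credential cannot reproduce a valid signature without the issuer’s private key, this class of check is far harder to defeat than visual inspection of a document image.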
Digital identity wallets are not a nascent technology. Singapore brought out its SingPass in 2002, which acts as a national ID, allowing users to file taxes or access medical documents.
Estonia also introduced its digital ID card in 2002. Other nations with digital ID options include Germany, Sweden, Japan and Canada.
The UK’s Post Office EasyID launched in 2021. It provides a government-certified digital ID that can be used for right-to-work checks, criminal record checks and age verification.
The EU has already taken steps on ID regulation that could help secure operations: the EU Digital Identity Framework Regulation came into force in 2024. Under the regulation, EU member states must offer citizens at least one digital identity wallet, allowing people to identify themselves to public and private online services.
Jarek Sygitowicz, co-founder of identity verification software developer Authologic, thinks the implementation of the electronic ID wallets could be a game changer.
“These have seen adoption growing over the years, but with the EU implementing the eIDAS 2.0 regulation, what has been a slow wave will become a big jump in the next 12-24 months. While the EU has led the adoption of e-IDs, even sceptical countries such as the UK are planning to launch their own digitised driving licence in the summer,” Sygitowicz tells iGB.
Call for standardisation and consistency
The threat of AI deepfakes is already here, but so are most of the tools needed to mitigate the risk.
While AI software in the wrong hands can produce fake IDs and mimic biometric data with increasing precision, advanced countermeasures are available. However, adoption is patchy and smaller platforms may not be aware of their options.
“What’s missing right now is consistency. There’s no shared framework for tackling AI-driven fraud, and that needs to change,” explains Peter Wood, CTO at Web3 recruiter Spectrum Search.
“Regulators should be pushing for industry-wide standards around ID verification that are designed to hold up against AI. We also need to see better collaboration between platforms, some kind of anonymised, real-time data sharing system that helps flag suspicious activity across the board.”
One of the key challenges in detecting synthetic identity fraud is that personally identifiable information can be fragmented across multiple platforms. Without “unified oversight”, it can be difficult to spot inconsistencies.
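As a hedged sketch of what the anonymised, real-time data sharing Wood describes could look like, the snippet below has each platform apply the same keyed hash to an identifier and share only the digest, so records can be matched across platforms without exposing raw PII. The shared key, field choice and normalisation are illustrative assumptions, not an established industry scheme.

```python
# A minimal sketch of privacy-preserving fraud-flag sharing via keyed
# hashing. Key management and field choices are illustrative assumptions.
import hashlib
import hmac

SHARED_KEY = b"rotated-key-distributed-out-of-band"  # hypothetical key

def pseudonymise(identifier: str) -> str:
    """Return a keyed hash so platforms can match records without raw PII."""
    normalised = identifier.strip().lower().encode()
    return hmac.new(SHARED_KEY, normalised, hashlib.sha256).hexdigest()

# Platform A flags a suspicious identity-document number.
flagged = {pseudonymise("AB1234567")}

# Platform B checks an incoming sign-up against the shared flag list.
incoming = pseudonymise("ab1234567 ")  # normalisation makes these match
print("suspicious" if incoming in flagged else "clear")
```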
MacDonald says one way to mitigate this would be for regulators and law enforcement to encourage “international coordination on synthetic ID detection, information sharing and regulatory standards, which will be essential to staying ahead of increasingly sophisticated AI-driven fraud”.
While the risk profile for potential fraud and AML breaches enabled by the use of AI has risen, operators’ obligations to be informed and up to date have not changed.
There are tools that can help keep the gambling sector ahead of the threat, but the industry and regulatory stakeholders may need to come to some form of consensus about best practice.