Tackling Fraud in AI Deepfakes With Layered Controls
Anthony Hope of NAB on the Latest Approaches to Handling AML and Financial Crimes

Banks need to make changes to their fraud programs to tackle mule accounts in the age of AI deepfakes, and the primary change is to move away from having one control to handle all suspicious accounts, said Anthony Hope, group head of AML, counter-terrorist financing and fraud risk at Australian bank NAB.
"It is about trying to see your controls as layers of intervention rather than one control to rule them all," Hope said. "With the deepfake, you know that there is a potential compromise of facial ID. In this case, don't have this as the only control that authenticates someone before they are able to access their banking account. Make sure you've got some kind of step-up mechanism," he said.
Additional controls could be linked to a password or biometric authentication, Hope said. "It is important, even more so now, in the age of AI, to have multiple layers of controls to address those key areas of risk. And it's a staggered process - you're not going to want that for every client."
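The staggered, risk-based layering Hope describes can be sketched in code. The following is a minimal illustration, not NAB's implementation; the class, scores, thresholds, and control names are all hypothetical:

```python
from dataclasses import dataclass

@dataclass
class LoginAttempt:
    face_match_score: float   # 0.0-1.0 confidence from the facial ID check
    device_trusted: bool      # known device fingerprint
    high_risk_action: bool    # e.g., adding a new payee or raising limits

def required_controls(attempt: LoginAttempt) -> list[str]:
    """Return the layered controls to apply, rather than treating
    facial ID as the single control that authenticates everyone."""
    controls = ["facial_id"]
    # Step up when the face match is weak (a possible deepfake)...
    if attempt.face_match_score < 0.95:
        controls.append("password")
    # ...or when the device or the requested action raises the risk profile.
    if not attempt.device_trusted or attempt.high_risk_action:
        controls.append("one_time_passcode")
    return controls
```

A trusted device with a strong face match passes with facial ID alone, while a weak match or risky action triggers additional layers - the "stagger" that avoids imposing every control on every client.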
In this video interview with Information Security Media Group, Hope discussed:
- The role of AI in fighting cybercrime;
- How to future-proof fraud prevention programs;
- Regulatory changes needed to promote AI products.
Hope guides innovation around financial crime risk technology and processes at NAB. His career began in the U.K. civil service, where he held several roles, including head of the European Finance and EU Budget Negotiations Branches for HM Treasury.