ISMG Editors: Why Synthetic ID Fraud Is on the Rise
Also: More Support for Ransomware Victims; Key Takeaways From RSA 2024
Anna Delaney (annamadeline) • May 17, 2024
In the latest weekly update, Information Security Media Group editors discussed key takeaways from RSA Conference, the surge in synthetic ID fraud in the auto lending industry, and a new initiative by the U.K.'s National Cyber Security Centre and major insurance associations to combat ransomware threats.
The panelists - Anna Delaney, director, productions; Mathew Schwartz, executive editor, DataBreachToday and Europe; Tom Field, senior vice president, editorial; and Suparna Goswami, associate editor, ISMG Asia - discussed:
- Highlights from ISMG’s coverage of RSA Conference 2024;
- The reason behind a 98% surge in synthetic ID fraud in the auto lending industry, which led to $7.9 billion in losses last year;
- How the U.K. government's cybersecurity agency and three major insurance associations have pledged to offer better support and guidance to ransomware victims.
The ISMG Editors' Panel runs weekly. Don't miss our previous installments, including the May 3 edition previewing RSA Conference 2024 and the May 10 edition from the ISMG studio at RSA Conference 2024.
Transcript
This transcript has been edited and refined for clarity.
Anna Delaney: Hello and welcome to the ISMG Editors' Panel. I'm Anna Delaney. In this episode, we'll explore key takeaways from the RSA Conference, delve into the surge of synthetic ID fraud in the auto lending industry and examine the new initiative by the U.K.'s National Cyber Security Centre and major insurance associations to combat ransomware threats. The team today includes Tom Field, senior vice president of editorial; Suparna Goswami, associate editor at ISMG Asia; and Mathew Schwartz, executive editor of DataBreachToday and Europe. Great to see you all. Tom, we are back from the 32nd Annual RSA Conference. We did it, and we're all set up with topics and content for the next few months at the very least. How was it for you? Tell us about it.
Tom Field: It's interesting, because yesterday I went to New York City and participated in a half-day event with Box for financial services. The topic was, drumroll please, artificial intelligence. And what did people want to know from me? What was being talked about at RSA? So already, a week later, these topics are permeating the conversation. I would say that in 90% of our interviews we discussed AI at some point, and it certainly dominated what we talked about. What I understood from the event is that the topic has matured. A year ago, people were just discussing policies and thinking about use cases. Do we ban this? Do we not ban this? And we couldn't even agree on a single way to pronounce ChatGPT. Things have changed. You've now got some excellent use cases. I was being asked specifically about what's being done in cybersecurity. We heard about people using generative AI to enhance their SOC, specifically to relieve some of the burden on level one analysts by analyzing all the telemetry coming in and giving them something they can work with. We heard about the use of gen AI for malware analysis. We heard about its use for writing code, as well as examining code. And then I was asked, what about adversarial use? As we've talked about in some of our discussions, we even had the CISO of Google Cloud saying that the use of AI by adversaries is overblown, overhyped - that it's not the reality when you talk to practitioners. They certainly talked about the use of AI for audio and video deepfakes, for identity fraud and a lot more in terms of using gen AI to socially engineer financial institutions and manipulate things to commit greater amounts of fraud, and I heard a lot about that yesterday as well. So I'm pleased that we've gotten somewhat more robust in refining use cases, that we're seeing how it's employed in organizations, and that they're talking more about how they protect their data, how they refine their governance and how they prepare for what looks like a world of regulation coming to artificial intelligence. That was my big takeaway. I tell people we should be referring to it as the RSAI conference, because that's pretty much what it was.
Delaney: Absolutely! I 100% agree with that, and I think this AI revolution has brought focus to software and data security. Also, picking up on your point on data security, I think policymakers and technologists are discussing enhancing the security of systems that interact with the physical world and managing data security across all environments. So that's definitely what I picked up on.
Field: It was nice to talk with people in financial services in New York yesterday because they tend to be pretty cutting edge when it comes to these technologies and their deployment. They were speaking collectively about AI in the way we heard the internet being discussed when it was introduced in the '90s, the way we heard the shift to mobile being discussed in the 2000s, and the way we heard about cloud over the past decade - but at a greater velocity and a greater scale than anything we've seen. So my advice to the audience, which seemed to resonate because they came back and talked to me about it, is that when it comes to gen AI, run to it, not from it. I think that's a good way to sum up the takeaway from RSA.
Delaney: Yeah! Any unexpected insights?
Field: No.
Delaney: Mat, was it all AI for you as well?
Mathew Schwartz: It was either we're going to talk about AI, or we're not going to talk about AI and here's why. So it came up one way or the other. But like Tom said, I did hear a lot more very welcome nuance about AI at RSA. One thing that I appreciated was the discussion of AI for security - how it can improve security - and also the increasing need for security for AI. And this isn't necessarily something that we don't yet know how to do. I heard some great discussions from people who naturally watch the data flowing over a network. And they said, we can very easily tell if someone's feeding something into an LLM or ChatGPT. We can very easily say, we don't think that Social Security numbers should be going in there - are you sure you want to do that? So again with the nuance, but also, securing this is something we already know how to do in many respects.
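To make that data-flow monitoring point concrete, here is a minimal, hypothetical sketch of the kind of check Schwartz describes: flagging something that looks like a Social Security number before a prompt leaves the network for an LLM. This is not any vendor's product or an implementation discussed on the panel; the pattern, function name and warning message are illustrative assumptions only.

```python
import re

# Hypothetical pattern for a data-loss-prevention-style check on text
# bound for an external LLM; real deployments use far richer detection
# (named-entity models, keyword dictionaries, document fingerprints).
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def review_outbound_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, findings) for text about to leave the network.

    In the spirit of the "are you sure you want to do that?" nudge
    described above, a finding doesn't block the request outright;
    the caller can warn the user and ask for confirmation.
    """
    findings = SSN_PATTERN.findall(prompt)
    return (len(findings) == 0, findings)

if __name__ == "__main__":
    allowed, findings = review_outbound_prompt(
        "Summarize this record: John Doe, SSN 123-45-6789, loan #4411"
    )
    if not allowed:
        print(f"Possible sensitive data in prompt: {findings}. "
              "Are you sure you want to send this?")
```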
Delaney: Oh, excellent! A brilliant show all round. Bring on RSA 2025.
Suparna Goswami: I think, after a long time, I'm seeing practitioners across all industries so excited about one particular technology. You usually see the financial industry being very gung-ho about adopting a technology, but here everyone, right from manufacturing to OT to healthcare, is very gung-ho. They are a little skeptical, but they are all engaging with this technology, and all of them are talking about it. So it is a reality. It is happening right now. As Tom rightly said, we can't run away from it; we have to adapt to it.
Field: You make a good point. I'm not sure I've ever seen every sector run toward a technology revolution such as this. The internet was much slower. I think we've never seen such a mass movement - and again, at this velocity and this scale.
Delaney: We're all in it together. Thanks, Tom. So Suparna, you've written about a worrying 98% surge in synthetic ID fraud in the auto lending industry, which led to $7.9 billion in losses in 2023. Tell us what's happening here and what makes the auto lending industry particularly attractive to fraudsters.
Goswami: The auto industry has only started tracking this now; they were not tracking it so much before. But last year it was $1.8 billion, so you can imagine the kind of increase it has seen. But as you rightly point out, first let us think of the life cycle of a synthetic ID. A fraudster makes a synthetic ID by taking someone's date of birth, phone number and so on, will probably attach it to a parent company, and will grow credit over time while staying under the radar for a few months or a few years. At some point, they have to bust out, which means they max out the amount of credit they have and disappear. Now, what makes a great place for them to bust out? The auto industry. And why? Because it's one of those industries where you get the highest loan without much scrutiny, especially with online auto purchasing - you can get financing, take out a loan and purchase a car without ever visiting a dealership. They've all made it very, very easy, and so it has become very attractive to fraudsters. Having said that, though we have been speaking a lot about synthetic identity fraud in this space, not all fraudulent activity in the auto lending sector is that sophisticated. You have borrowers inflating their income or misrepresenting their financial status to enhance their chances of securing a loan, and there are no checks on pay slips - if I submit a pay slip, they're not going to verify the document; they're just going to accept it the way it is. And interestingly, fraudsters also use shell companies to create false employment verifications. This is all related - I did an article maybe last month on how shell company fraud keeps increasing. They use the shell company to create a false employment verification, and the report says that more than 11,000 fake companies were circulating within the auto industry alone in 2023. That's massive. I don't have the exact earlier figure, but if I remember correctly, it was only around 3,000-4,000 before, so it has seen a massive increase; 11,000 in a year is a lot. And of course there is the dark web, and there is Telegram, which makes fraud as a service very easily available for as little as $150. The report also points out a new service called premade CPNs - CPN referring to credit privacy numbers - which are available for a higher amount. Premade CPNs are essentially fake credit profiles that have been aged over three or four years, which makes it very easy for them to get credit. CPNs are typically created with generic names and allowed to age, making them ideal for bypassing synthetic identity detection systems. There is also credit washing, where people claim identity theft on previous credit they have taken, erase those records and apply for new loans. So all these methods are out there. It's not only synthetic ID, but what caught my attention was the massive 98% increase in this industry.
Delaney: What about the banks, Suparna? And in what ways are they falling short in combating this fraud and what steps should they take to address these vulnerabilities?
Goswami: Yes! Once a fraudster has established a credit identity, it is difficult to distinguish them from a genuine customer. Generally, a credit union or a bank checks whether the name is correct or whether the address is correct, but they look at each attribute independently; they don't correlate the data, so there is no harmony in the data. And of course, there is a lack of identification at the door itself: when you're onboarding a customer, you need the right tools in place to ensure that a fraudulent customer is stopped right at that point. So we need to move away from transaction-based detection to an identity-centric approach. That's not to say you don't check the customer once they are onboarded, but banks are more focused on transactions - something happens, they see an alarm about unusual activity, and only then do they act. But by that time the losses have happened. You need to catch fraudsters right when they're trying to enter your system. Suppose, for example, you're a medium-sized bank and XYZ comes to you and applies for an auto loan. You check against the credit bureaus to see whether the person exists or not. But the reason these people are in those records is that they've already applied for credit, so you are checking against a file that is already compromised. All these things need to change. You need to run your synthetic ID and fraud models properly. Like I said, identity checks, velocity checks and information sharing are all common practices that have been spoken about a lot but have not been implemented.
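As a purely illustrative aside - not something described by Goswami or used by any particular lender - the sketch below shows, in Python, what a basic onboarding-time velocity check might look like: it correlates recent applications and flags a Social Security number that has been submitted under more than one name within a short window. The record fields, window length and threshold are hypothetical assumptions.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical window for illustration only; production synthetic-ID
# models combine many more signals (device, bureau, consortium data).
VELOCITY_WINDOW = timedelta(days=30)

def flag_reused_ssns(applications):
    """Flag SSNs recently submitted under more than one applicant name.

    applications: iterable of dicts with 'ssn', 'name' and 'timestamp'
    keys - the kind of cross-application correlation a purely
    transaction-based review never sees at onboarding time.
    """
    now = datetime.now()
    names_by_ssn = defaultdict(set)
    for app in applications:
        if now - app["timestamp"] <= VELOCITY_WINDOW:
            names_by_ssn[app["ssn"]].add(app["name"].lower())
    return {ssn for ssn, names in names_by_ssn.items() if len(names) > 1}

if __name__ == "__main__":
    apps = [
        {"ssn": "123-45-6789", "name": "A. Smith",
         "timestamp": datetime.now() - timedelta(days=3)},
        {"ssn": "123-45-6789", "name": "B. Jones",
         "timestamp": datetime.now() - timedelta(days=10)},
    ]
    print(flag_reused_ssns(apps))  # prints {'123-45-6789'}
```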
Delaney: Right. Suparna, thank you so much. Mat, you've reported this week that the U.K.'s National Cyber Security Centre and major insurance associations have launched an initiative to enhance organizational resilience against ransomware. Tell us about it and whether you think this could fundamentally change how U.K. organizations handle ransomware threats.
Schwartz: It would be lovely if it led to more organizations handling ransomware, and proactively defending themselves against it, in a more effective manner. This announcement is tied to the annual CYBERUK conference that is run by the National Cyber Security Centre here in Britain. As you may know, it's the public-facing arm of the intelligence agency GCHQ, and it's the lead agency for incident response in Britain. So a lot of organizations that get hit will reach out to the NCSC for help, and it often provides that kind of help. So it has seen a lot. And one of the things it has seen is that British organizations - not to single out Britain, but just to focus on Britain for the moment - are often not properly prepared to defend themselves against ransomware. Now, it feels like we've been banging on about this for a long time. And we have, but apparently this message is not percolating into every corner of every boardroom across the four nations of the United Kingdom. So here we are at the latest attempt to level up the defenses at a lot of organizations, and in this case it is a tie-up between the three major insurance associations and the NCSC. The insurance associations have publicly backed a set of guidelines from the NCSC, which are written for a nontechnical audience - think board level, CEO. And the very first thing it says, if you fall victim to a ransomware incident, is do not panic. Good little British advice there. After that, it also recommends working with experts, such as incident response firms that have experience with ransomware, law firms with the same sort of experience, the NCSC, law enforcement agencies, that sort of thing. All motherhood-and-apple-pie stuff as far as those of us in the cybersecurity field are concerned. But apparently this needs to be heard. And we know this needs to be heard because there was a joint parliamentary committee on the national security strategy that last December issued these precise recommendations. It reviewed the country's anti-ransomware posture for a whole year, and what it found was room for improvement. So it recommended that the NCSC produce more detailed guidance, accessible to a nontechnical audience, about how best to avoid payment of ransoms after an attack, including accessing good negotiating techniques and sources of support, especially for smaller organizations. If we look at ransomware incident response firms, they report that maybe about 28% of victims are still paying a ransom. We have seen a decline - and the insurance industry says this as well - in the number of victims who pay just in case, and that is a very welcome decline. Paying just in case means you don't know if you'll be able to recover, but you just go ahead and pay anyway. And with over a billion dollars being given to ransomware groups in 2023, there's a huge pipeline of profit that needs to be disrupted. So hopefully this insurance and NCSC initiative will help disrupt that. We've had a lot of other research as well. There is a white paper from RUSI, a think tank, that came out at the end of last year, sponsored by the NCSC, which found that paying the ransom doesn't guarantee better outcomes. We know that oftentimes, even if you pay for a decrypter, it can be faster not to use it, and even if you do use it, it could still be months or more of recovery time. It's not a get-out-of-jail-free card.
So it's good to give that nuance to organizations that, unfortunately, may not access the guidance until it's too late - i.e., when they're already thinking about paying a ransom. But at least we have that guidance, and hopefully it's going to get across to more organizations so they choose not to pay. Ideally, they would just prepare better and have much better business resilience capabilities in place; that's been the advice for years now - so that if you do get hit, you never need to consider paying a ransom. We keep hearing that advice, and we keep seeing organizations that have failed to heed it. So maybe things will change one day.
Delaney: It sounds like you're not entirely convinced. Mat, what do you think are the main challenges or even limitations that you foresee when it comes to the implementation of this guidance? Are there any gaps that still need to be filled?
Schwartz: It is just guidance; whether or not a business pays is a business decision. That's true in the U.K., that's true in the U.S. and in a lot of other countries. What we have here are insurers saying, look, ransomware victim, here's government guidance - we suggest you follow it. Insurers get a bad rap; I think a lot of people say, well, they're the ones allowing ransoms to be paid. But insurance companies don't recommend paying ransoms. If you have cyber insurance, you can choose to pay, and in many cases insurers say, look, if you are going to pay, we are only going to reimburse you if you let our negotiators come in first. And those negotiators have a reputation for driving ransom payments way down if a company does choose to pay. Hopefully, though, this will again get boards and senior executives to think twice, and if they do move forward, to at least not pay so much.
Delaney: Well said! Hopefully this will bring some positive change. And finally, just for fun: in a world dominated by AI, what job do you think would be the last to be taken over by robots, and why? Tom, go for it.
Field: I'm not sure what it's called in the U.K. - maybe the DVLA. In the U.S., and I believe in India, it's called the Department of Motor Vehicles. I don't think a robot can make the experience as painful as a human can, so I think the person who meets you at the DMV and guides you through the painful process will be there for a while.
Delaney: Moving on - Suparna?
Goswami: Mine is more cybersecurity-related - I got the idea after Mat mentioned "don't panic." When there is a ransomware attack and the stress that comes with it, you need that human element for effective communication during a crisis - the empathy, leadership and interpersonal skills that are uniquely human. So I don't think you can replace that with bots.
Delaney: Keep calm and carry on. Very good. Mat?
Schwartz: I know it's already a trope of dystopian science fiction, but childminding - or daycare, as they say in the States - is something I think AI would be uniquely unequal to. Or, to put that the other way around, I think AI would not be a very good babysitter.
Delaney: And you could extend that to caring for parents. I'm going for a scent critic, or anybody who uses their nose or sense of smell for a career - those who assess perfumes or wines or gourmet foods. I think they'll have a job for a while because, even though AI can probably analyze the chemical makeup of these things, it can't match a human's ability to detect the subtle differences and connect scents to memories, emotions and stories, which is a very human skill. So those with good noses, you will be fine.
Schwartz: The scent of AI.
Delaney: The scent of AI! Tom, Mat, Suparna, this has been a pleasure as always. Thank you so much for all your insight.
Goswami: Thank you so much Anna.
Field: We will do it again.
Schwartz: Thanks Anna!
Delaney: Yeah and thank you so much for watching. Until next time.