Access Management , Artificial Intelligence & Machine Learning , Next-Generation Technologies & Secure Development
ISMG Editors: Are Frequently Used Usernames a Security Risk?
Also: The 'Quantum Divide'; Global AI Regulatory Trends
Anna Delaney (annamadeline) • September 15, 2023
In the latest weekly update, four editors at Information Security Media Group discuss important cybersecurity and privacy issues, including how to keep assets secure in the quantum era, when common usernames pose a cybersecurity threat, and how to strike the right balance between regulation and innovation in AI.
See Also: The SIEM Selection Roadmap: Five Features That Define Next-Gen Cybersecurity
The panelists - Anna Delaney, director, productions; Tony Morbin, executive news editor, EU; Mathew Schwartz, executive editor of DataBreachToday & Europe; and Tom Field, senior vice president, editorial - discuss:
- Highlights from a conversation with retired U.S. Air Force Col. Jen Sovada of SandboxAQ at Black Hat 2023 on the "quantum divide" and how to prepare for threats to encryption;
- Whether organizations should attempt to enhance security by using nonstandard usernames or focus on strong credentials, such as cryptographic keys;
- The challenges and opportunities in the realm of AI regulation over the next few years.
The ISMG Editors' Panel runs weekly. Don't miss our previous installments, including the Sept. 1 identity security special edition and the Sept. 8 edition on reasons to cheer about the cybersecurity market.
This transcript has been edited and refined for clarity.
Anna Delaney: Hello, I'm Anna Delaney, and welcome to the ISMG Editors' Panel - a weekly spot where ISMG journalists examine and discuss the latest cybersecurity news, trends and events. I'm joined today by my brilliant colleagues, Tony Morbin, executive news editor for the EU; Mathew Schwartz, executive editor of DataBreachToday and Europe; and master and commander, Tom Field, senior vice president of editorial. Good to see you all.
Tony Morbin: Good to see you, Anna.
Tom Field: Thanks for having me.
Mathew Schwartz: Glad to be here.
Delaney: Tony, are you in the void? What's happening there?
Morbin: This is HAL from 2001: A Space Odyssey, one of the unintended consequences of AI. So we'll be looking at that.
Delaney: Looking forward to that. Tom, what a sky again?
Field: Yeah, I'm about to join HAL up in space. This is a beautiful sunset at my regional airport where I fly in and out from. And I would have to say that my aeroplane is not quite the same as the shuttle that reached the spaceship in 2001. But it aspires.
Delaney: And Mat, do we have the urban version of HAL?
Schwartz: Old school transport here. The steam train in Leith, which is in the north part of Edinburgh, Scotland's capital city, where I've been spending some time recently, and I just love this street art.
Delaney: Love it - from train to tank. There we go. I'm in Stockholm today to host a roundtable this evening, and I took this pic of the Army Museum, which is not too far from my hotel. And as early apparently as the 17th century, this building was used to store artillery pieces and weaponry. And I just love these railings, just check them out. Only the Vikings.
Field: Are those bass?
Delaney: Almost, yeah. Well, Tom, you conducted a very interesting interview recently on keeping assets secure in the quantum era and were introduced to a new term, were you not?
Field: Indeed, the quantum divide. And I got this from sitting down having a conversation at Black Hat recently with Jen Sovada. She's a retired U.S. Air Force colonel and currently the president, global public sector at the company SandboxAQ. And so we talked about the quantum divide. I asked her in this era of quantum computing, what does a quantum divide mean, and why should we be concerned about it? So I'll talk a bit about that. But first, let me just show you a clip of my discussion with her, where she defines this for us and gives us a little bit of context.
Jen Sovada: So interestingly enough, we've had this thing called the digital divide for a very long time. And the quantum divide is a similar topic. It identifies the 18 countries that have quantum programs, quantum technology, quantum ecosystems and strategies. And the rest of the world right now does not have either an ecosystem or a plan to get their money invested. And so it's creating this haves and have nots within the quantum world. And the way that we see it manifesting is in things like ensuring that we're protected against quantum attack and making sure that those that need new emerging technology, like quantum technology, have access to it, whether it's in the cloud, or whether it's a quantum computer when quantum computers are available, or even technology, like quantum navigation for safety of flight, or for medical devices, so that we can continue to have better welfare around the world.
Field: I'll give you that as a teaser then - the quantum divide. Now she talks about 18 countries, but what we need to talk about are Russia and China. They're among the countries spending an estimated $36 billion on quantum information science this year alone. And their focus is on breaking traditional encryption and safeguarding their own data. So you see many thefts of encrypted files today being done with a "store now, decrypt later" strategy: once we have this information, when we can get into it, we will. You've got to be concerned about that. And that's one of the reasons you need to prepare now and to bridge this gap.
Delaney: Well, exactly, but the one question that's asked time and time again, why should we be focused on this? Now there's so many other things to be focused on, this is potentially a future threat. So how do we as journalists go about answering that? What's the immediate risk?
Field: Quantum has been a future threat forever, but I think we are getting closer. As people start saying quantum is 10 years out, then five years out, that time period is getting shorter. And not to be Mr. Fear, Uncertainty and Doubt here, but the thing I would point out is that Russia is very much partnered with China on quantum right now. We recognize that China is well ahead of what's happening in the U.S., U.K. and EU, and we just aren't developing at the same speed. And Russia's Vladimir Putin was quoted recently as saying we need a quantum Manhattan Project, very similar to what happened in the 1940s with the atomic bomb. To me, that's recent enough to be concerned.
Schwartz: Huge geopolitical potential here for things to go wrong. On the flip side, I've also heard some experts advising that, on an intellectual property front for example, you should ask: if the ability to break encryption using quantum computers does happen within a certain timeframe, could it affect you? For some organizations, within two or three years whatever secrets they're storing may no longer be secret or may no longer be valuable, so they don't need to worry about it so much. When it comes to government matters, like spycraft, that's where things start to get worrying.
Field: It does. It's a time to be quantum-proof as one would say. So to me, I don't want to use the term wake-up call because that should have happened a long time ago. But it's something to be aware of. I think it's something that organizations need to be spending some more time thinking about because what's protecting us today is not going to protect us tomorrow, no matter when that tomorrow happens to arrive.
Morbin: Yeah, because some secrets you just can't change. Where your secret bases are is going to be the same in the future as it is now. Or the recipe for Coca-Cola.
Delaney: Yeah. Well, my roundtable this evening is about "Post-Quantum Cryptography: Are You Ready?" So thank you for that background. It's proven useful, and I'll report back with nuggets next week. Well, Mat, it's not often we discuss usernames because we're too busy talking about weak passwords. But we're bucking the trend this week, as you asked when do common usernames pose a threat? So have you got the answer to that question?
Schwartz: I got a great answer to that question. And I don't want to seem boring here by talking about usernames - something about passwords seems a lot sexier. And we know from repeat dumps of people's passwords that, as humans, we're predisposed to do the easy thing. So a lot of people's passwords are horrible. If it's not their pet's name, it's 12345. Or it's literally password or mypassword, or my favorite in IT environments, admin admin - as in username admin, password admin - or oftentimes just a blank field for the password. So there's a lot to be concerned about there. But there was some interesting research published recently by Jesse La Grew, CISO of Madison College in Wisconsin. He works with the SANS Institute's Internet Storm Center, which runs a lot of honeypots to see what's going on. And Jesse published a blog post rounding up 16 months of honeypot results. He was seeing a lot of attempts to brute-force attack SSH, and the username being attempted most often was root, which is basically what you target if you're trying to get remote access to a Linux system via SSH. He published a table of all the different information: about half the time, the honeypot was seeing attack attempts using root; after that was admin, which is the default for Windows remote access - that was in about 4% of cases. After that you were seeing things like user, test, Ubuntu, Oracle, and another favorite of mine, FTP user, which should just strike horror into everybody's hearts, because there could still be live FTP systems online. A few years back, there were, I think, 21 million FTP systems still connected to the internet. And it might also be someone else's username for a different system - you just don't know. So this isn't rocket science. But I thought this was a great reminder that hackers are the ones attempting to exploit things that give them easy access.
So if we're seeing this in the wild, it most likely means it's working for at least some percentage of attacks. It might be a small percentage, but of course you don't want to be that percentage at your organization. So this is a great reminder if you have a remotely accessible Linux system for which root is the username. I spoke with experts and asked: does this in and of itself pose a security risk? And the consensus is no. It's also very difficult to get rid of these root accounts - there are lots of ways an account can be root, not just with the name "root" - and people are going to know what they are. So you need to have strong passwords. Even better, Johannes Ullrich, who's the CTO of the SANS Institute, told me you want to be using a cryptographic approach - cryptographic keys for these things - to make it difficult for somebody to attempt to crack it. When you're using strong passwords, you've got to have MFA. We know that multi-factor authentication can be bypassed, but it makes things much more difficult for attackers. And if for some reason there is a simple or easily guessable password, or the password gets dumped, or the password gets stolen via a social engineering attack - which continues to happen with unfortunate regularity - then at least with multi-factor authentication you will be blocking the attack. The Cybersecurity and Infrastructure Security Agency in the U.S. has been making a lot of noise about MFA for precisely this reason. This sounds basic, but we still continue to see too many breaches happening because we don't have these sorts of defenses in place. So for usernames: if you've got root, admin, user or test, think about disabling remote access for these. Think about assigning usernames to actual users, especially where administrative-level tasks are concerned. Another thing I heard from the experts I spoke to is that this is important.
From an auditing standpoint, you want to have granularity about who's doing what and when. It might be a malicious insider; it might be a hacker who managed to gain access to the account. Too often when we see a breach in an organization, it isn't clear what happened or when - whether data got exposed, looked at, copied or altered. When you've got better granularity with user accounts, and usernames in particular, it'll help incident responders figure out what happened. So again, none of this is earth-shattering. But I did think it was fascinating that so many attackers are attempting to remotely hack in via root. And it's a great reminder, again, just to pay attention to those basics.
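The kind of honeypot tally La Grew published can be approximated from standard SSH server logs. Here's a minimal Python sketch; the sample log lines follow OpenSSH's auth.log format, and both the regex and the data are illustrative assumptions, not La Grew's actual methodology:

```python
import re
from collections import Counter

# Hypothetical sample lines in OpenSSH auth.log style (illustrative only).
SAMPLE_LOG = """\
Jan 10 03:12:01 host sshd[101]: Failed password for root from 203.0.113.5 port 52100 ssh2
Jan 10 03:12:07 host sshd[102]: Failed password for root from 203.0.113.5 port 52101 ssh2
Jan 10 03:13:44 host sshd[103]: Failed password for invalid user admin from 198.51.100.7 port 40110 ssh2
Jan 10 03:15:02 host sshd[104]: Failed password for invalid user test from 198.51.100.7 port 40111 ssh2
Jan 10 03:16:19 host sshd[105]: Failed password for root from 192.0.2.9 port 60000 ssh2
"""

# Matches both "Failed password for root" and
# "Failed password for invalid user admin".
FAILED_LOGIN = re.compile(r"Failed password for (?:invalid user )?(\S+) from")

def tally_usernames(log_text: str) -> Counter:
    """Count which usernames attackers tried, honeypot-roundup style."""
    return Counter(FAILED_LOGIN.findall(log_text))

if __name__ == "__main__":
    for user, n in tally_usernames(SAMPLE_LOG).most_common():
        print(f"{user}: {n}")
```

Even on this toy data, root dominates the tally, echoing the roughly-half share the honeypots observed in the wild.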
Delaney: And as you said, alarmingly, many FTP servers are still internet-connected. So what are the risks of these servers, with the usernames and passwords still accessible online?
Schwartz: Well, the big risk with FTP is: if you're still using FTP, what other skeletons do you have in your closet? Because FTP should not be internet-connected. Lots of other protocols, like Telnet, should not be used at all - they should be blocked outright because they cannot be used for encrypted communications. SSH can be. So if you see FTP, you have to wonder what old, easily hacked protocols, old, easily guessed usernames and simple passwords are under the hood there in that IT infrastructure. That's a real warning sign.
Delaney: Mat, just finally, briefly, is there any merit in attempting to enhance security by using the non-standard or unusual usernames? Or is it just better to focus on strong credentials?
Schwartz: That's a great question. I wondered whether it's obvious how an organization puts this stuff together - like, if John Smith is an employee at Acme, it's going to be J. Smith or J. Smith01, 02 or whatever. Does it behoove us to try to do something a little more tricksy and throw in a bunch of random characters? And the security experts I spoke to said no. They completely shut me down. They didn't even say it's nice in theory; basically, they said it wasn't worth the effort. You need some consistency with how you administer these things. And if you're looking for security via obscurity with your usernames, then I think the battle is already lost. You need to be focusing on things that do something: strong passwords, MFA and cryptographic keys for account access. So great question - I asked it.
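On an OpenSSH server, the basics the experts recommend - no remote root login, cryptographic keys instead of passwords - come down to a few standard sshd_config directives. A minimal hardening sketch (the directive names are real OpenSSH options, but the right values depend on your environment):

```
# /etc/ssh/sshd_config (excerpt) - a minimal hardening sketch
PermitRootLogin no          # block direct root logins over SSH
PasswordAuthentication no   # require keys rather than guessable passwords
PubkeyAuthentication yes    # allow key-based authentication
MaxAuthTries 3              # limit brute-force attempts per connection
```

Administrators would then log in as named users and escalate privileges locally, which also preserves the per-user audit trail discussed above.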
Field: Yeah, but we won't be seeing matpuffyschwartz anytime soon?
Schwartz: I know. Yeah or dollar signs in there, just to bling it up a little bit. Apparently not.
Morbin: I've just got to throw in the one about the user who, asked for a strong eight-character password, went with Snow White and the Seven Dwarfs.
Delaney: Very good. Well, on to you, Tony. You're covering an obligatory AI spot this week. So what's unfolding that's caught your interest?
Morbin: You can't avoid AI at the moment. If you've even just dabbled with ChatGPT or other generative AI, or read any articles on artificial intelligence, you'll have an inkling of the huge potential benefits. But if you've got no concerns about the potential risks, then you've just got no imagination. It's difficult to discern which threats we should really be worried about and what's just hype or doom-mongering. But the consensus is that we do need to be careful about unintended consequences. So we find ourselves being urged to deploy generative AI immediately to avoid missing the boat, while simultaneously being advised to apply caution, guardrails and regulation. Now, Forrester CEO George Colony, speaking during the company's 2023 North America Technology and Innovation event in Austin, Texas, this week, was unequivocal. He said: you must begin to embrace and engage with this technology now - not next month, not next quarter, definitely not next year. There was a more cautious tone in the comments from Amy Matsuo, a KPMG principal, in a report published Tuesday, where she urges firms to focus on developing policies that manage how AI is used in the organization and by whom; educate stakeholders on the emerging risks and appropriate-use policies; and monitor regulatory developments and ensure they're complied with. So regulation is lagging behind the pace of AI use case deployment. Everyone says security would be built in if we launched the internet today. But the truth is, generative AI has been unleashed on the world in a free-for-all, with governments and organizations scrambling to catch up. In the U.S., White House Chief of Staff Jeff Zients says the president has been clear: harness the benefits of AI, manage the risks, and move fast - very fast. In practice, it appears that U.S. industry is heeding the call to move fast more than the warning to manage the risks.
And it seems the EU is leading the race to set AI standards and is well down the route to implementing its cautious approach, which would see outright bans for some applications. Its legislation, which began in 2021, before the unveiling of ChatGPT, has been passed in the EU Parliament and is in the process of being enacted. It classifies AI systems based on their risks, with a list of banned applications including biometric identification systems in publicly accessible spaces, bulk scraping of images to create facial recognition databases, and systems that use physical traits such as gender or race, or inferred attributes such as religious affiliation, to categorize individuals. High-risk AI systems, such as those used in critical infrastructure, law enforcement or the workplace, would come under elevated requirements for registration, risk assessment and mitigation, as well as human oversight and process documentation. Now, the U.S. is a little further behind in its legislative program, but it also envisages restrictions being introduced this year on high-risk applications. The proposed legislation includes a framework covering a licensing regime and legal liability for AI firms when their models breach privacy, violate civil rights or otherwise cause harms. But in comparison to the EU, the U.S. is putting the emphasis on enabling creative exploitation of AI, ensuring that regulation doesn't hinder innovation. The U.K. is seeking a middle route for its legislation, but the reality is commentators expect it to have to follow the rules of its dominant trade partner, the EU. In the U.S., ahead of its legislation, several tech companies have signed up to a voluntary pledge drafted by the Biden administration. It commits signatories to investing in AI model cybersecurity, red-teaming against misuse or national security concerns, and accepting vulnerability reports from third parties, plus introducing watermarking of AI-developed audio and visual material.
Signatories include Amazon, Google, Meta, OpenAI and Microsoft, and now Adobe, IBM, NVIDIA and Salesforce. Microsoft President Brad Smith, whose company has recently partnered with ChatGPT maker OpenAI to embed AI in many of its products, commented that AI needs a safety brake before it can be deployed without concerns. And William Dally, chief scientist and senior vice president at chip designer NVIDIA, adds that the way we make sure we have control over all sorts of AI is keeping a human in the loop. Now, everybody recognizes that beyond talk of the end of humankind, there are real risks to AI deployment right now. Legislation is lagging, making it incumbent on industry players to each implement their own culture of risk management during the design, development, deployment and evaluation of AI systems. They should do so to avoid creating unnecessary risk for their customers, but also to ensure compliance with future regulation. Going forward, regulation is expected to address various unintended consequences of AI systems: transparency, limits on access to consumer information, and data safeguards. This is going to make compliance increasingly complex to achieve, particularly as different jurisdictions implement their own regulations. Ideally, we'd like to see agreed standards, as we have with the laws of the sea or aviation. But in reality, it's an area where perceptions of privacy and security vary widely. So if something's possible, it's likely that somebody, somewhere will implement it.
Delaney: As you said, Tony, many countries are looking to form their own AI regulation at the moment. Do you foresee global harmonization when it comes to AI regulation? And if so, how challenging might that be?
Morbin: I think there is probably a baseline that people can agree on. But it's a bit like any kind of international legislating: if governments decide they don't want to agree with it, they won't, whether they sign up or not. And if you look at Chinese facial recognition use compared to Italy banning ChatGPT because it was collecting too much data, there are very different perceptions of what's acceptable and what's not.
Delaney: Well, so much happening in the AI world. So great to have some updates there, Tony. Thank you. And finally, and just for fun, if you could design a cybersecurity-themed amusement park ride, what would it be called? And what thrills would you offer?
Field: I got one for you.
Delaney: Go for it.
Field: It is one of my favorite rides at Disney. It's called Pirates of the Cryptocurrency. It's a thrill a minute, you don't know where it's going. Sometimes you don't even know which way is up.
Delaney: And there are no seats!
Schwartz: But it came down unexpectedly quickly.
Delaney: Excellent. Love it! Mathew?
Schwartz: So I would do something on the cyberpunk front called Crash Override. And I don't want to be too literal about how this gets enacted. But basically, you'd think you're always about to crash, and then somehow you don't. And there would be a lot of green lights. That's about as far as I've gotten so far.
Field: I think I rode that at Universal actually.
Morbin: Well, my issue - the problem with most amusement park rides is that the passenger is passive, so the analogy with cybersecurity is not so good there. Otherwise, I would have gone for a ghost train with appropriate frights along the way. Instead, I'd go for bumper cars, or Dodgems, but with an adversary team out there. You have to identify and avoid them, perhaps teaming up with others, even though you can't be sure they're not on the other side. And I'd leave the name as it is - Dodgems sounds fine to me.
Delaney: Yeah, I'm going with a roller coaster as well - The Firewall Theory. Headsets, all the virtual reality, interactive 4D, a roller coaster in the dark, with lots of unexpected twists and high-speed chases through data tunnels.
Field: We might be seeing Tony's background in that.
Schwartz: You throw in a little Pew Pew map and you've got kind of cyber nirvana.
Delaney: Cyber nirvana. That's it. Well …
Field: That will be the name of the park, Mat.
Delaney: Yeah. We got it sorted.
Morbin: I suddenly realized some kind of paintball game might have worked there - I was trying to think of something more active. But yeah, Mat, that's a good idea.
Delaney: As we talk, we are inspired. Thank you so much, Tom, Tony and Mat. It's been a pleasure as always, and a lot of fun.
Field: Thanks so much. Good luck with your roundtable. I can't wait to hear what you come back with about the quantum divide.
Delaney: I'll be back.
Morbin: Quantum divide won't be very big there, will it?
Delaney: Who's to judge?
Morbin: Because of the quantum
Schwartz: One qubit at a time.
Delaney: One qubit at a time. Thanks so much. And thank you so much for watching. Until next time.