How to Deal With Endemic Software Vulnerabilities
Amit Shah of Dynatrace on the Implications of Log4j, Need for Constant Monitoring
October 5, 2022
Exploitation of the Log4Shell software vulnerability in the popular Java library Apache Log4j gives attackers complete control over any internet-connected service that uses the library anywhere in the software stack.
Amit Shah is one of the early discoverers of Log4Shell, which the Cybersecurity and Infrastructure Security Agency declared in July to be an "endemic vulnerability" that will keep popping up for years to come. Shah predicts that other flaws with similar consequences will emerge as well.
In addition to patching these vulnerabilities, he recommends using runtime application protection, which he compares to "changing the locks to your house now that the burglar has keys." In fact, Shah says, nothing short of "constant monitoring of every single layer of your technology stack" - both IT and OT - is required to deal with vulnerabilities that arise because of the complexity of technology today.
In this episode of "Cybersecurity Unplugged," Shah discusses:
- The benefits of using open-source libraries and software as well as Kubernetes and the need for "observability into how things are running in production" to identify flaws arising from their use;
- How digital transformation's imperative of "get things out the door faster" affects security;
- The need for a "new approach that's going to bring everybody together, looking at the same set of data with a common understanding of what's important and what the threats are within your environment."
Shah, director of product marketing at Dynatrace, has worked with product marketing teams at Splunk and PayPal and has experience in technology roles ranging from software development to IT strategy.
Steve King: Good day, everyone. This is Steve King. I'm the managing director here at CyberTheory. Today's podcast welcomes Amit Shah, who's the director of product marketing at Dynatrace and one of the guys who discovered the Log4j vulnerability early on. Amit has worked with product marketing teams at places like Splunk, AND Digital and PayPal after graduating from UC Berkeley with an undergraduate degree in electrical engineering and computer science and earning an MBA from Cornell. We're going to talk about vulnerabilities today. Welcome, Amit. And thanks for joining us.
Amit Shah: Thanks, Steve. Excited to be here.
King: Terrific! Log4Shell is a software vulnerability in Apache Log4j 2, a popular and well-known Java library for logging error messages in applications. It's a known vulnerability with a severity rating of 10, and several patches have been released since it was discovered, one of which didn't work so well. What makes it so dangerous is that it's virtually everywhere - it runs on everything from Amazon Web Services to VMware - with a whole host of dependencies among affected platforms and services that makes patching a nightmare. It gives attackers complete control over any internet-connected service that uses the Log4j library anywhere in the software stack. How did you guys discover it? And what did you do to deal with it?
Shah: That's a great question, Steve. We were certainly not the first folks to discover it. We partner with another organization called Snyk and get a vulnerability feed from them, and they were one of the first to publish this within their vulnerability database and give us the ability to automatically find it, both within our environment and within our customers' environments. So we found out about it, just like everyone else, on December 9, 2021. What makes our situation a little bit different is that we use Dynatrace Application Security on Dynatrace as well. Within minutes of the vulnerability being published in the vulnerability database, we were able to find everywhere it existed within our own environment - the software-as-a-service (SaaS) environment we provide to our customers. We used that information to patch it within a couple of hours and to prioritize which instances absolutely needed to be fixed immediately versus which ones sat in parts of the application that are not easily accessible from the internet or don't have access to sensitive data and could wait a day or two to be patched. It was similar for the rest of our customers: they discovered it within their environments at the same time we did, and they were able to get into action in a similarly quick timeframe.
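As a rough illustration of the prioritization Shah describes - fixing internet-facing instances that touch sensitive data first and letting isolated ones wait a day or two - here is a minimal sketch. The class, field and instance names are hypothetical and are not part of any Dynatrace API.

```java
import java.util.List;

// Hypothetical sketch: rank instances of a vulnerable library so the most
// exposed ones get patched first, per the prioritization described above.
public class VulnerabilityPrioritizer {

    // One deployment of the vulnerable library and the two risk factors discussed.
    record Instance(String name, boolean internetFacing, boolean touchesSensitiveData) {}

    static String priority(Instance i) {
        if (i.internetFacing() && i.touchesSensitiveData()) {
            return "P1 - patch immediately";
        }
        if (i.internetFacing() || i.touchesSensitiveData()) {
            return "P2 - patch within a day";
        }
        return "P3 - patch in the next scheduled window";
    }

    public static void main(String[] args) {
        List<Instance> instances = List.of(
            new Instance("public-api-gateway", true, true),
            new Instance("internal-batch-job", false, true),
            new Instance("dev-sandbox", false, false));
        instances.forEach(i -> System.out.println(i.name() + ": " + priority(i)));
    }
}
```

In practice, a platform derives the exposure and data-access flags from runtime observation rather than hand-maintained lists, which is the point Shah makes about finding and ranking every instance within minutes.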
King: Yeah, that's terrific. But as a pure zero-day, Log4j raises a lot of difficult questions. Among them: how many more of these will we discover, and will we ever be free from that particular brand of threat?
Shah: The U.S. Cybersecurity and Infrastructure Security Agency recently declared Log4Shell to be an endemic vulnerability, meaning it expects we're going to keep seeing it pop up for the next year or a couple of years to come. Part of that has to do with what you mentioned earlier: it's fairly widespread. It's not just in components of software that you've developed internally; it could be in any component you bought from a third party, in commercial off-the-shelf software or in any other third-party or open-source library you might be using. This one is probably going to be around for a while. In fact, if you look at the most exploited vulnerabilities in any given year - and this year that's Log4Shell - most of them have been around for a while, for reasons similar to Log4Shell's. As the use of open-source software explodes and the pace of software development keeps increasing, I think it's safe to say that we're going to see a lot more of these in the future.
King: That's not very encouraging. And patching doesn't seem to be the solution here either, as this one is difficult to patch, and you guys have characterized it as enabling malicious actors to execute any code on the system, access any sensitive configuration data and gain complete control. That means all data and all applications - like a burglar's got the keys to the front door and the combination to the safe inside. Sounds like the end of days to me.
Shah: That's a great point, Steve. It's not just that the burglars have the keys to your front door; it's that a lot of times you've left the front door unlocked and didn't even know it, or you've left the side door to your house unlocked. So you've given open access to any burglar in addition to giving them the keys. Patching is one of our best defenses against it; there's no getting around that. But now that we know about it, there are other things we can do to protect ourselves in addition to patching. One such capability is what we call runtime application protection: recognizing attacks on the vulnerability - in this case, a JNDI attack, in technical terms - recognizing the signature of that attack and blocking it before it's able to do any real damage. That's the equivalent of changing the locks to your house now that the burglar has the keys.
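Here is a minimal sketch of the runtime-protection idea Shah describes: inspecting incoming request headers and parameters for the JNDI lookup pattern used in Log4Shell exploits and rejecting the request before it reaches a vulnerable logger. It assumes a Jakarta Servlet container and is an illustration only, not Dynatrace's implementation; real runtime protection matches far more variants and obfuscations than this simple substring check.

```java
import jakarta.servlet.Filter;
import jakarta.servlet.FilterChain;
import jakarta.servlet.ServletException;
import jakarta.servlet.ServletRequest;
import jakarta.servlet.ServletResponse;
import jakarta.servlet.http.HttpServletRequest;
import jakarta.servlet.http.HttpServletResponse;
import java.io.IOException;
import java.util.Arrays;
import java.util.Collections;
import java.util.Locale;

// Illustrative-only filter: block requests whose headers or parameters
// contain the "${jndi:" lookup pattern before they reach application code.
public class JndiLookupBlockingFilter implements Filter {

    private static boolean looksLikeJndiLookup(String value) {
        // Naive check; real detections also handle nested lookups such as ${${lower:j}ndi:...}.
        return value != null && value.toLowerCase(Locale.ROOT).contains("${jndi:");
    }

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;

        boolean suspicious =
            Collections.list(request.getHeaderNames()).stream()
                .map(request::getHeader)
                .anyMatch(JndiLookupBlockingFilter::looksLikeJndiLookup)
            || request.getParameterMap().values().stream()
                .flatMap(Arrays::stream)
                .anyMatch(JndiLookupBlockingFilter::looksLikeJndiLookup);

        if (suspicious) {
            ((HttpServletResponse) res).sendError(HttpServletResponse.SC_BAD_REQUEST,
                "Request blocked: suspected JNDI lookup payload");
            return;
        }
        chain.doFilter(req, res);
    }
}
```

This kind of blocking is usually delivered by a RASP agent or WAF rule rather than a hand-written filter, and it complements patching rather than replacing it.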
King: No, that's a good idea. My thesis is that in the last couple of years we have created an environment for ourselves that has grown increasingly complex, well past our own ability to manage it effectively. In that context, I would throw in hybrid cloud, Kubernetes and other container configurations, and various other things, including a lot of open-source code. I've always been an open-source advocate, but you get to the point, I think, where you have to ask yourself: when is it enough? Are we doing the things on the other side of the equation that need to be done? It doesn't appear to me that we are, and my question is why we keep relying upon open-source code and public repositories when we haven't a clue about their programming dependencies. Doesn't it seem strange to you that we build production IT and even some OT systems that ride on dependencies about which we can't answer even the most fundamental questions?
Shah: That's a great point, Steve. You've got two choices here: you could go back to the time when it took 18 to 36 months to develop any new bit of functionality because you had to develop everything from scratch, or you can live in a time like now, when you can roll out new things every other day if you choose to. The use of open-source libraries, open-source software and other third-party libraries you can purchase is one of the key things that enables organizations to develop code so fast. In fact, the partner I mentioned earlier estimates that about 80% of the code base of most cloud applications is composed of these third-party, open-source libraries. We, as an organization, are no stranger to open source either. We have our Keptn framework, a cloud automation framework that we've open sourced, because we recognize the value of open source in developing the library itself faster and with higher quality, in addition to helping the organizations that rely on those libraries develop faster as well. And it's no surprise that in a survey of about 1,300 CISOs worldwide, which we published back in June, 67% said developers don't have the time to code and scan for all the vulnerabilities in their applications. That applies across the board, whether it's open-source software the community is developing or code you're developing in-house. So what you need is observability into how things are running in production in order to identify these things in a timely manner and prioritize which vulnerabilities you need to fix.
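As a minimal sketch of the visibility Shah is pointing at - knowing which third-party components are actually loadable in a given runtime - the hypothetical check below reports whether the Log4j 2 API is present on the current classpath and which implementation version its jar declares. It is an illustration only; observability platforms and software composition analysis tools do this continuously and across every process, not one JVM at a time.

```java
// Hypothetical spot check: is Log4j 2 present on this classpath, and
// which implementation version does its jar manifest report?
public class Log4jPresenceCheck {
    public static void main(String[] args) {
        try {
            Class<?> logManager = Class.forName("org.apache.logging.log4j.LogManager");
            Package pkg = logManager.getPackage();
            // The version may be null if the jar's manifest omits Implementation-Version.
            String version = (pkg != null) ? pkg.getImplementationVersion() : null;
            System.out.println("Log4j 2 API found; implementation version: "
                + (version != null ? version : "unknown - inspect the jar manually"));
        } catch (ClassNotFoundException e) {
            System.out.println("Log4j 2 API not found on this classpath.");
        }
    }
}
```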
King: I think the number could be as high as 90-95% of code having its origins in open source, and if there were ever an opportunity for an AI- or ML-based product to automate some of that verification process, it would seem to me to be in validating the dependencies, and some of the transitive dependencies, that exist in open-source software before it's embedded in our production systems. And I understand the reasons why we use it. But the question is: do you want to take 18 months to roll out a product that works? Or do you want to roll out a lot of product in 18 minutes, all of which is probably going to be full of known and unknown vulnerabilities? Kubernetes also comes to mind as another technology that we've fully embraced with, I think, only a sliver of understanding about how it works. I read a report that researchers recently found 380,000 publicly exposed Kubernetes API servers. Do you think people simply spin up these new technologies with security as an afterthought and then abandon them when they're no longer useful?
Shah: I think that's definitely a fair interpretation. 380,000 publicly exposed Kubernetes API servers does sound like a lot. Another interpretation would be that security has traditionally been a bottleneck for development. It's not that security was an afterthought; it's that you had to choose between being fully secure and getting stuff out the door faster, and the imperative with digital transformation has been to get things out the door faster.
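The exposure King cites can be checked in a read-only way: if an API server answers unauthenticated requests to its /version endpoint from the public internet, it is at minimum telling anyone who asks which cluster version it runs. The sketch below uses only the JDK's HttpClient; the host name is a placeholder, and any real assessment should be authorized and done with proper scanning tooling.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

// Illustration only: probe whether a Kubernetes API server answers
// unauthenticated requests to its /version endpoint.
public class ApiServerExposureProbe {
    public static void main(String[] args) {
        // Placeholder address - substitute a host you are authorized to test.
        String apiServer = "https://k8s.example.internal:6443";

        HttpClient client = HttpClient.newBuilder()
            .connectTimeout(Duration.ofSeconds(5))
            .build();
        HttpRequest request = HttpRequest.newBuilder(URI.create(apiServer + "/version"))
            .GET()
            .build();

        try {
            HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
            // 200 with a version payload means anyone can read the cluster version anonymously;
            // 401/403 means anonymous access to this endpoint is at least being rejected.
            System.out.println("HTTP " + response.statusCode() + ": " + response.body());
        } catch (Exception e) {
            // A TLS or connection failure says nothing definitive about exposure.
            System.out.println("Could not complete the probe: " + e.getMessage());
        }
    }
}
```

Even a cluster that rejects anonymous requests can still be running a version with known flaws, which connects to Shah's next point about treating the Kubernetes layer itself as part of the supply chain.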
King: We'll see what folks think about that in a year or two. We're getting things out the door faster, but we're continually under attack by very successful hackers who don't have to work very hard. Kubernetes is, from my point of view, incredibly complex. Do you think that, all by itself, leads to challenges around properly configuring and securing Kubernetes instances? And I'm interested in hearing any thoughts you've got on software supply chain security as it relates to containers and Kubernetes as well.
Shah: Kubernetes introduces a lot of benefits in terms of being able to more efficiently spin your applications, or parts of your applications, up or down to adjust to demand, but it adds yet another layer to your already complex application stack. If you think of your application stack from top to bottom, you could be vulnerable at any point within it, and the addition of Kubernetes into the mix adds yet another potential weak point in your armor - whether that has to do with the version of the Kubernetes cluster you're running having an inherent security flaw, or with the code running within the containers of your Kubernetes application. Your software supply chain is not just the components you wrote that run within the Kubernetes infrastructure; the Kubernetes infrastructure itself is also part of that supply chain. Having visibility into every component of that supply chain, across all layers of your application stack, becomes important in order to identify any chink in your armor.
King: It becomes extremely important and almost impossible - the more complex these systems get, the harder it is. We already know that. Do you know any network engineer you could go to right now and say, "Show me our network topology right now," and have them be able to do it?
Shah: It's a nearly impossible task if you don't have the right tools in place, because these network topologies and application topologies are constantly evolving, constantly changing. It's one of the reasons why configuration management databases, or CMDBs, were considered a great idea in theory; in reality, as soon as you've updated your configuration management database, it's out of date. It's a nearly impossible task without a fair degree of automation. So as you're looking for tools to evaluate or understand your overall topology, it's extremely important to look for tools like Dynatrace that are able to do these mappings automatically, in real time, so that you don't have to.
King: No kidding. And it's unfortunate, but so many tools depend upon those configuration databases for their effectiveness, and the fact that they're immediately out of date is, at the least, depressing. You and I have been around this space for a while. Maybe I'm just getting old, but it seems to me that in the last four or five years the trajectory has shifted and we're now building systems based on technologies that we don't understand very well. Is it just me? Or do you think that's true? And do you think that's what the future holds as well?
Shah: It comes back to the old analogy of software eating the world, number one, and number two, that everything is becoming more complex. There was a time when, if you owned a car, you could just pop the hood and figure out how to fix it based on the manual the manufacturer provided you. Now, if you pop open the hood of your car, there are so many computer chips involved that it's nearly impossible to fix anything on your own unless you have the right diagnostic equipment. And that's what you need as you adopt technologies so complex that it's nearly impossible to fully understand them: you need monitoring and observability in order to make sense of them, make sure you're using them in the right way and make sure you're not doing things that were not originally intended. So I would say an observability-led approach to adopting complex software development technologies has to be the way to go for organizations going forward.
King: I agree completely with that. And progress is progress; you deal with it as you deal with it. But you yourself have talked about the implications of Log4Shell for critical infrastructure, and we have incredible exposure on the OT side. Energy companies could see power supply disruptions affecting millions of customers. What's the exposure, in your mind, on the physical side?
Shah: It's huge. Just take the example of the recent discovery, in energy infrastructure in the U.S. and other countries, that the Lazarus Group was able to find one chink in the armor of many power generators and suppliers. It was in the most innocuous place, where you wouldn't necessarily have thought to look for Log4Shell, but they were able to find it. It goes to show that you need constant monitoring of every single layer of your technology stack, whether it's IT or OT, and keep in mind that operational technology breaches often occur through your IT infrastructure - this is another case of that happening. You do need full visibility across your entire stack in order to prevent something like that from happening.
King: Yeah, in about 24 hours, I can get you a thousand people who will agree with everything you just said. Why don't we do that?
Shah: That's a great question. I think some of it has to do with the established processes within organizations, or different parts of the organization working in silos. IT and security have typically worked in silos in the past. One of my greatest fears is that we won't be able to bring IT and security together in order to resolve issues like that. We need a new approach that's going to bring everybody together, looking at the same set of data with a common understanding of what's important and what the threats are within your environment.
King: I don't know how to do that either. But we're about to launch an online education platform here that we've been working on for over a year, and we expect and hope that it'll be groundbreaking in its approach. How important do you think education is to our future in cybersecurity? If you were to design a program, where would you place the heaviest emphasis?
Shah: It's a great question, Steve. As someone who has been accused of being overeducated, I think education is an extremely important part of solving a lot of this problem. If I were to place emphasis anywhere, it would be on the explosion of data across IT and security - being able to make sense of it and then using it to make rational decisions, whether that's bringing everybody onto the same page with a common understanding of the prioritization, or even discovering that you've been breached and understanding what your vulnerability exposure looks like. The use of data and analytics would be one of the areas I would suggest emphasizing.
King: I hear you. And I agree. I just hope we can get there in time. Final question - and I'm aware of our clock here; we're almost at time. What's your greatest fear about cybersecurity?
Shah: Steve, my greatest fear is that we're going to stay stuck in our old ways of doing things - that security and IT are going to continue to operate in silos, looking at different tool sets, not reaching the same conclusions, not agreeing on what needs to happen going forward - and that we're not going to shift our mindset and the way we're organized in time to handle the onslaught of vulnerabilities and attacks we're going to see coming our way. The only way I can see to prevent this is to have a common understanding across IT, security and operations - your DevSecOps - of what's going on in your environments and how you can prioritize the right things going forward.
King: I hear you. That's definitely one of them, for sure. Well, we're at the end of our allocated half hour here. I appreciate you taking the time out of your busy schedule to join us and share your thoughts; I thought they were illuminating. So thank you again, and I also thank our audience for taking the time to listen today. I hope they were able to take away something of value. This is Amit Shah, the director of application security product marketing at Dynatrace. Have a nice day.
Shah: To you as well. Steve, thank you. I enjoyed our conversation, and I look forward to chatting with you in the future.
King: We'll do it again. Thank you.