Anna Delaney: Hello, and thanks for joining us at the ISMG Editors' Panel. I'm Anna Delaney, and this is a weekly show where ISMG editors dissect the crucial stories and trends in the fields of AI, cyber and infosecurity. On the show today: Tom Field, senior vice president of editorial; Suparna Goswami, associate editor of ISMG Asia; and Tony Morbin, executive news editor for the EU. Great to see you all!

Tom Field: Thanks for having me!

Suparna Goswami: Glad to be back.

Anna Delaney: It's great to see you, Suparna, and Tom, and Tony. So Suparna, you're in the office, I think.

Suparna Goswami: Oh, yes, I thought I'd not have a virtual background today. This is the actual background of our office. Today's the last day in this office space in Bangalore; we are moving to a bigger and hopefully much better space. So, lots of memories are associated with this office, and that's why I thought of having it as my background. We have been here for two years - two and a half years. Lots and lots of memories. So hopefully, yes, in another month, we'll shift to a new office.

Anna Delaney: One door closes, another one opens. Wishing you all the best for that move. Tom! The clouds are out!

Tom Field: Well, technically, this is my office. It's outside my home, out in the middle of the woods. I was just coming home one evening and saw this mass of clouds and thought it was a pretty picture. And, it lends itself to the conversation I want to have about cloud security today. So, there you go.

Anna Delaney: Very good! Tony, is that your office as well?

Tony Morbin: No, no, this is just a frozen pond in the woods nearby. We just had a recent cold snap. Nothing like the feet of snow that Tom's experienced. But, you know, it's unusual for us just to see the pond frozen over, for sure.

Tom Field: Where I live, Tony, there's one restaurant nearby. And, I see people driving their snowmobiles across the lake to park at the restaurant and get a meal.

Tony Morbin: Well, it was a big deal for my granddaughter to be able to actually stand on the ice.

Anna Delaney: Sweet! Well, behind me is one of Toronto's hidden treasures, the Toronto Public Labyrinth.
So, true to its name, it is a winding path designed to inspire contemplation and evoke serenity. It's rather lovely - I wish it was my home!

Tom Field: Why were you in Toronto? You were there to host a roundtable discussion, weren't you?

Anna Delaney: I was. It's all work, and a little bit of play. Well, Tom, talking of clouds, let's talk about the use of AI and cloud security. I know that you recently moderated a roundtable on that very topic. So, tell us what you learned. What were the key takeaways?

Tom Field: Well, that's why I brought up your roundtable discussion, because all of us host a series of these roundtables throughout the year. They're a critical part of the work that we do, because they're our opportunity to get out of our clouds and our ponds and our office buildings and sit with a small group of security and technology leaders and find out what is really happening in their world. As you know - I think you've participated in one or two of these - we've conducted a series of roundtables with Wiz, and the topic has been primarily multi-cloud security. But, as the series has gone on, it's morphed into the role of artificial intelligence in multi-cloud security. And it's a natural evolution, because we can't go anywhere these days and have a conversation about cloud or identity or ransomware or incident response without it becoming a discussion about AI. So, it's been terrific. I've done the majority of these with a subject matter expert, Swaroop Sham of Wiz, and it's been quite an education. We've done these all over North America, in different regions, some live, some virtual, and across a diverse group - healthcare, financial services, government, manufacturing, retail, you name it - there have been some common themes. So, I want to share some of the common takeaways from the series, which just wrapped up this past week with a live session in Chicago and a virtual one in the U.S. Midwest.

Among the takeaways: as you talk to organizations, there is no single cloud journey; maturity is all over the map. Now, I can remember pre-pandemic, talking about cloud security.
I've said this before: we had so many security leaders saying, "We're just starting to dip our toes into the cloud!" Then the pandemic came, hybrid work came, and all of a sudden they were in over their heads in the cloud. That hasn't changed. You've got organizations that are still trying to put together their cloud strategies. You've got organizations that are just putting their first workloads into the cloud. You have organizations that are more than 90% into the cloud now. So there's not any one journey, but I would say that the cloud-first mentality is really dominating. Security leaders are coming around to "whatever we do next, cloud first is going to be our strategy," and you're seeing that really take root across sectors.

Along with that, multi-cloud isn't the exception; it really is the rule. Few organizations are putting all their eggs in one basket - AWS, Azure, Google, whatever it might be - whether by design or just by the way the enterprise has adopted cloud, they're in every one of these environments in one way or another. And part of this is because cloud has become so accessible. It doesn't necessarily come from a strategy set by the security or technology offices; it comes from business needs. Anybody with a corporate card can spin up a cloud instance. And so it's taken what we traditionally think of as shadow IT - which, not so many years ago, was just a rogue printer on the network - and turned it into a cloud instance somewhere that someone's got to account for. I'm on a personal campaign to get people to stop using the term shadow IT and call it overshadow IT, because it really does overshadow so much of what was being done. That is becoming the challenge for organizations: just to get their hands around, and visibility into, all the usage of cloud within their enterprises.

And that leads to two pitfalls that come up consistently. Visibility is one. Organizations are having a hard time seeing where their presence is in the cloud, because there's so much out there that's not governed, and they have a hard time getting visibility into it. Even with what is governed, it's hard to have consistent visibility across all the different cloud workspaces. Azure is different from Google, which is different from AWS, and it's hard to have one way to see across all your presences.
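To make Tom's visibility point concrete, here is a minimal sketch in Python of the kind of cross-cloud normalization a security team ends up building or buying. The per-provider collectors are hypothetical stand-ins (real ones would wrap boto3, the Azure SDK, and the Google Cloud SDK, each of which returns differently shaped data); the point is the single normalized record type that gives one view across all three.

```python
from dataclasses import dataclass

# One normalized record type, regardless of which cloud the asset lives in.
@dataclass
class CloudAsset:
    provider: str   # "aws" | "azure" | "gcp"
    asset_id: str
    owner: str      # team (or corporate-card holder) who spun it up
    public: bool    # exposed to the internet?
    governed: bool  # known to and managed by the security team?

# Hypothetical collectors with canned data; in practice each would call
# that provider's inventory APIs and map the response into CloudAsset.
def collect_aws():
    return [CloudAsset("aws", "s3://team-x-exports", "team-x", public=True, governed=False)]

def collect_azure():
    return [CloudAsset("azure", "stteamyblobs", "team-y", public=False, governed=True)]

def collect_gcp():
    return [CloudAsset("gcp", "gs://ml-scratch", "data-sci", public=True, governed=True)]

def inventory():
    # The "single pane" that is hard to get from three separate consoles.
    assets = collect_aws() + collect_azure() + collect_gcp()
    ungoverned = [a for a in assets if not a.governed]  # Tom's "overshadow IT"
    exposed = [a for a in assets if a.public]
    return assets, ungoverned, exposed

if __name__ == "__main__":
    assets, ungoverned, exposed = inventory()
    print(f"{len(assets)} assets, {len(ungoverned)} ungoverned, {len(exposed)} public")
```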
The other one, no surprise: misconfigurations! It comes up in every conversation. The number one bane of security and technology leaders is that a misconfiguration is going to lead to an accidental but critical data loss. Misconfiguration has become the new computer hygiene: it's something that everybody is aware of, they know they need to take care of it, but they still fall victim to it, because there are so many uncovered instances of cloud out there that they can't get their arms around.

So, where does AI play into this? Every conversation becomes "what about AI?" Now, this is still very much in the experimental phase. I'm not going to say there are a lot of organizations that have a mature AI approach. They've got strategies, they've got proofs of concept, they have these as their priorities, but not a lot of organizations have moved forward. Those that have are using generative AI, in particular, for event correlation. One of the things that AI does very well is take disparate information, bring it together, analyze it and give you a consistent, logical view - using it as a copilot, a way to augment the human resources they have, to automate the manual where possible, and to create analysis of large datasets that is hard to do manually on one's own. And, to some extent, some coding, which comes with its own issues, because when you start using AI for code, you produce code at a scale, and then your problems come at a different scale as well - something organizations are struggling with.

So, still not a lot of concrete usage, but it's something that you're seeing organizations approach more consistently. Now, I think these topics are going to go together, and I hope that we continue this discussion of cloud through multi-cloud security and AI. Because what I see consistently from the attendees at our sessions - and these bring out many attendees - is that everybody thinks someone else has got better answers about multi-cloud security and about the use of AI. In some cases they do.
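The event-correlation use that Tom singles out has a simple core that is easy to sketch. A toy version in plain Python, not any vendor's implementation: alerts from different tools often share an entity (an account, a host, a bucket), and grouping on that shared entity collapses disparate signals into one incident view - the same joining work a generative AI copilot is being asked to do over far messier, larger input.

```python
from collections import defaultdict

# Disparate alerts from different tools; the shared "entity" field is
# what makes correlation possible. The data here is invented.
alerts = [
    {"source": "idp",  "entity": "svc-test-01", "event": "password spray suspected"},
    {"source": "cspm", "entity": "bucket-42",   "event": "public access enabled"},
    {"source": "siem", "entity": "svc-test-01", "event": "new OAuth consent granted"},
    {"source": "edr",  "entity": "host-7",      "event": "unsigned binary executed"},
]

def correlate(alerts):
    # Group alerts that touch the same entity into one incident view.
    incidents = defaultdict(list)
    for a in alerts:
        incidents[a["entity"]].append(f'{a["source"]}: {a["event"]}')
    return incidents

for entity, events in correlate(alerts).items():
    if len(events) > 1:  # several tools firing on one entity is the interesting case
        print(f"incident on {entity}:")
        for e in events:
            print(f"  - {e}")
```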
So, I hope that we continue this dialogue through the mid part of 2024, because there's a hunger for new ideas and strategies about both of these, and it's a great dialogue that brings people together. I've learned a ton from hosting these discussions. I hope that our future attendees get just as much out of it. So, there you go.

Anna Delaney: For sure, it is a great dialogue. And it was really good to have Troy Leach of the Cloud Security Alliance join us last week. I thought it was very interesting what he said about informing the development of AI security with the evolution of cloud security over the past 10 to 15 years. He really emphasized the importance of learning from cloud security experiences - the lessons involving new shared responsibility models, threat detection and incident response - and how they will hopefully shape and improve practices in the field of AI. It was great to have him with us, wasn't it?

Tom Field: It was great! And those two discussions have to go together. The one thing I would add: as quickly as the cloud evolution happened over the past decade, it's going to happen even faster with AI. That speed is something we've got to be prepared to accept.

Anna Delaney: Yeah, sure. Any more of these conversations left?

Tom Field: That was the last one of that series. But, I'm hoping we renew this series. I feel, in many ways, we're just getting the conversation started, and attendees are so eager - I've had people attend multiple sessions of these because they're getting so much out of it. So yeah, I'm looking forward to more.

Anna Delaney: So much to learn! Thank you, Tom. Well, Tony, let's turn to the recent cyberattack on Microsoft by Russian state hackers, which has raised a few concerns about Microsoft's ability to not only secure its customers, but also itself. Tell us about it and what it all means for the industry.

Tony Morbin: Well, yet again, we're talking about cloud security, but from the perspective of the actual cloud providers.
In January, the news broke that an attack targeting Microsoft 365 had enabled Russian intelligence service hackers to exfiltrate emails and documents from senior leadership and employees across Microsoft's cybersecurity and legal departments since late November. Now, apart from the potential consequences of how the information accessed might help the Russians in future attacks, it raises a lot of questions about Microsoft's security, as the company had said that it had recently strengthened its defenses. That followed a disclosure in July last year that a group of Chinese hackers had gained unauthorized access to its customers' email systems as part of an intelligence-gathering campaign, particularly targeting U.S. federal agencies and other major organizations. At that time, U.S. senators said that heavy dependence on Microsoft alone to provide email security had helped to enable the breach.

Microsoft is attributing the latest hack to a group called Midnight Blizzard, which we also know as Cozy Bear - a group associated with Russia's Foreign Intelligence Service. It was a successful compromise of Microsoft's legacy nonproduction test tenant account. Microsoft's cloud-based email was breached by using a test account to authorize a custom-built malicious application. The attackers then built their own OAuth applications for Office 365 and granted those applications complete access to Microsoft's own Outlook estate.

The way they did this - again, yes, it was a sophisticated group, but some of the attack methods weren't so sophisticated. Password spraying is a brute-force attack that typically tries the same password across a number of accounts. They did use some fancy obfuscation techniques to avoid detection. But this compromised the legacy nonproduction test tenant account, which didn't have multifactor authentication enabled.
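Password spraying has a recognizable shape in authentication logs: one source trying a common password against many accounts, rather than many passwords against one account. A minimal detection sketch in Python - the log records here are invented for illustration, though real sign-in logs carry the same essentials (source, account, outcome):

```python
from collections import defaultdict

# Failed sign-in events; real logs would also carry timestamps, user agents, etc.
failed_logins = [
    {"src": "198.51.100.7", "account": "alice"},
    {"src": "198.51.100.7", "account": "bob"},
    {"src": "198.51.100.7", "account": "carol"},
    {"src": "198.51.100.7", "account": "dave"},
    {"src": "203.0.113.9",  "account": "alice"},
    {"src": "203.0.113.9",  "account": "alice"},  # classic brute force: one account, many tries
]

def spray_suspects(events, min_accounts=4):
    # Spraying = one source fanning out across many distinct accounts,
    # usually with only one or two attempts per account to dodge lockouts.
    accounts_per_src = defaultdict(set)
    for e in events:
        accounts_per_src[e["src"]].add(e["account"])
    return {src: sorted(accts) for src, accts in accounts_per_src.items()
            if len(accts) >= min_accounts}

print(spray_suspects(failed_logins))
# {'198.51.100.7': ['alice', 'bob', 'carol', 'dave']}
```

Even a crude per-source count like this would light up on sustained spraying; part of the criticism Tony relays below is that no such alert appears to have fired.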
Using that account, they found and compromised a legacy OAuth application that had elevated access within the Microsoft corporate environment. They used that to create additional malicious OAuth applications, and created a new account to grant consent in the Microsoft corporate environment to their own malicious OAuth applications. And that included full access to multiple Office 365 Exchange Online mailboxes.

Now, unsurprisingly, Microsoft - which provides cybersecurity services to others - has come in for a lot of criticism from cybersecurity practitioners. Critics of the incident have pointed to the continued systemic risk created by Microsoft's lack of support for legacy technologies. And, they said that it shows a disregard of basic security best practices and highlights issues with Microsoft's ability to secure its cloud infrastructure. In particular, the critics are pointing out: there was no MFA; a standard password; no log analytics, SIEM or XDR that alerted on the password spraying; no separation of production versus nonproduction; no microsegmentation; creation of a new OAuth account didn't trigger an alert in SIEM, XDR or identity management; hardening policies were only being applied to new systems, not existing ones; no thorough conditional access; no user and entity behavior analytics; and no least privilege.

Now, Microsoft says it's going to act immediately to apply its current security standards to its own legacy systems and internal business processes, and that better defenses are already in place to guard against any repeat of this type of attack. However, given the vital role that they play, some commentators are now suggesting that these major service providers should be recognized as critical national infrastructure, as their failure would be analogous to that of banks. Consequently, there are now also calls for critical cloud and security providers to be regulated, at least in the U.S. and the EU. And whether it is regulation or some other form of mandatory standards that becomes the key driver to ensure compliance with best practice, certainly the industry is expecting better of its leading players.
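Tony's list of gaps is, in effect, an audit checklist, and checklists can be checked mechanically. A toy sketch of that idea in Python - the tenant-posture record is made up, standing in for whatever a real posture-management tool would pull from the environment:

```python
# Hypothetical snapshot of a tenant's posture; a real CSPM/SSPM tool
# would populate this from the environment rather than a literal.
tenant = {
    "mfa_enabled": False,
    "prod_nonprod_separated": False,
    "microsegmentation": False,
    "oauth_consent_alerting": False,
    "hardening_applies_to_legacy": False,
    "conditional_access": False,
    "ueba": False,
    "least_privilege": False,
}

# Each control Tony lists as missing, phrased as a required setting.
required = {
    "mfa_enabled": "multifactor authentication on every account, test tenants included",
    "prod_nonprod_separated": "separation of production vs. nonproduction",
    "microsegmentation": "microsegmentation",
    "oauth_consent_alerting": "alerting on new OAuth app consent grants",
    "hardening_applies_to_legacy": "hardening policies applied to legacy systems, not just new ones",
    "conditional_access": "thorough conditional access",
    "ueba": "user and entity behavior analytics",
    "least_privilege": "least-privilege access for accounts and OAuth apps",
}

gaps = [desc for key, desc in required.items() if not tenant.get(key)]
for g in gaps:
    print("MISSING:", g)
```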
Tom Field: I think you're spot on, and it reminds me of the tagline of a very famous British graphic novel: "Who watches the watchmen?" When you've got an organization like Microsoft that has made itself so indispensable, the errors that you have outlined are just inexcusable. The world operates on these systems, and these systems deserve better than default passwords.

Tony Morbin: Absolutely, yeah. I mean, I've seen other comments online saying, you know, just do a Shodan search, and it's horrifying what you find.

Suparna Goswami: And, I think one of the analyst reports also said that they are very positive on Microsoft, because it offers such an open platform. But, at the same time - this report came in December - they said businesses are worried about its security posture, because they're not really confident in it. The analysts said that security is something the practitioners do worry about, and that is something that might, you know, give an advantage to other players in the area.

Tony Morbin: I mean, it's ironic that we spend a lot of our time here convincing people that, you know, the cloud is secure - probably more secure than your on-premises environment. But, it's not without its problems as well.

Anna Delaney: And yet, Microsoft invests a huge amount of resources in cybersecurity - I don't think anybody in the room can deny that - and it has still faced, as you say, high-profile incidents. So from your perspective, Tony, what areas can they improve on? What aspects of the cybersecurity strategy might need reevaluation?

Tony Morbin: Well, it's like Tom said: who's watching the watchers? Maybe they do need some kind of oversight - not to tell them what to do; they know what to do. It's to make sure they actually do the things that they know how to do - the basics. The fact is that they hadn't applied their own security controls to legacy systems. They're creating some of the tools that will resolve these problems; they just need to apply them.

Anna Delaney: Well, thank you, Tony. There's lots to say and ask on that, but I'm sure the debate will continue. Suparna! You've been looking at securing APIs in the age of zero trust. What have you learned?
Suparna Goswami: Yes, I had a lovely conversation with one of the CISOs here - Rohit Rane, the CISO of HDFC Pension - about the concept of zero trust, especially in the context of API integration. I thought that was an interesting topic to talk about. And, he said the first thing organizations must do, as they try to integrate APIs and bring in the concept of zero trust, is a feasibility test, which is crucial before one begins implementing a zero trust framework. We have heard this before; it's something even John Kindervag talks about in our CyberEd.io masterclass on zero trust. As we know, it involves understanding an organization's devices, identity mechanisms, data locations and user landscape. In the context of APIs, this process helps decide which applications should or should not be brought into the zero trust architecture. So, that was the gist of the conversation.

Anna Delaney: So, what are the hurdles or challenges that arise when incorporating zero trust principles into API integration, Suparna?

Suparna Goswami: Yes, so it was the challenges he spoke about that intrigued me the most. Zero trust emphasizes that every connection should be continuously monitored and given access with least privilege. Now, the typical approach taken with APIs is a token-based approach. When two applications sitting in two different environments call each other for any data transfer, the traditional approach is that you have a token and - on top of that - a static API key. According to the principles of zero trust, every API request must undergo thorough checks and verification, which actually introduces a lot of complexity into the process. For example, just to explain: when you are authenticated, you have been verified as the user you claim to be, and then one needs to provide you direct access to whatever you are authorized to access. Now, from an API integration point of view, if an organization has a largely infrastructure-based setup and fewer applications in scope, integration with zero trust principles is easy. It's not a very difficult thing to do.
You can just microsegment those applications, and whenever a user comes in, they can land on all those 50-60 applications. The problem or the challenge arises when you talk about an application-based organization that is driven by APIs, where there are hundreds and thousands of applications; then we need to take multiple scenarios into consideration. We can't simply say that 100% of my applications are under zero trust principles, and verifying and authenticating requests for each application is complex. On top of that, there are data flows: while we are giving this access after validating the user's identity, we also need to give them access to the data, and in order to give access to data, the data flow needs to be understood. This is an important part of the feasibility test. So, this overall adds a lot of complication, because there are a lot of organizations that are application heavy.

Anna Delaney: So, what advice do you have for those organizations that do have huge numbers of applications? What approaches can they take to really simplify the integration of zero trust principles into their processes?

Suparna Goswami: I asked him the same. And he said that most organizations nowadays achieve this by using a token-based mechanism, especially JSON Web Tokens. And this makes things a lot easier. He explained it to me beautifully. For every request, the server generates a unique key and token and, in the process, identifies a particular system. So, to take the same example: when application A requests data from application B, the request goes to the central server for authentication. Once authenticated, the token and key are passed on to communicate with application B. This process ensures secure data flow, with each call having a unique key. This method is gaining a lot of popularity for API integration in organizations implementing zero trust, and the use of such an architecture is becoming common, especially in scenarios with multiple applications and API integrations. So, that's what most of the organizations are doing nowadays.
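A minimal sketch of the per-request token flow Suparna describes, using the PyJWT library (pip install pyjwt). The "central server" here is just a function, and the fresh per-request secret stands in for the dynamic key management a real deployment would need; how the key reaches application B securely is left out of the sketch.

```python
import secrets
import time

import jwt  # PyJWT

def issue_token(caller: str, target: str):
    # "Central server": after authenticating the caller, mint a short-lived
    # token under a fresh key for this one call - no static API key that
    # gets reused across requests.
    per_request_key = secrets.token_hex(32)
    token = jwt.encode(
        {"sub": caller, "aud": target, "exp": int(time.time()) + 30},
        per_request_key,
        algorithm="HS256",
    )
    # In a real system the server would convey the key to the target
    # securely; it is returned here only to keep the sketch self-contained.
    return token, per_request_key

def verify(token: str, key: str, expected_caller: str, me: str) -> bool:
    # Application B re-verifies every single call: zero trust means no
    # request is trusted just because the previous one was.
    try:
        claims = jwt.decode(token, key, algorithms=["HS256"], audience=me)
        return claims["sub"] == expected_caller
    except jwt.InvalidTokenError:
        return False

# Application A calls application B.
token, key = issue_token("app-a", "app-b")
print(verify(token, key, expected_caller="app-a", me="app-b"))   # True
print(verify(token, "wrong-key", expected_caller="app-a", me="app-b"))  # False
```

The expiry and audience claims are what make each token single-purpose - one caller, one target, a window of seconds - which is the property that replaces the long-lived static key.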
So, if you ask me the key takeaways from this conversation: one, implementing zero trust requires a well-defined scope and a feasibility check; and two, API integration into zero trust will probably demand dynamic key management for security and performance. Those would be the two key takeaways.

Tom Field: Remember what I've learned from Kindervag University - two challenges organizations face: one, they'd better know what it is they're trying to protect, and that's extremely challenging in the API universe. And, two, beware of vendors coming and saying, "I've got your zero trust solution right here."

Suparna Goswami: Oh, yes, definitely. And he said it's not that easy. He goes to organizations and they say, "Yes, we are 100% zero-trust compliant," but that's not how it works. And, for APIs, when he was explaining the process - of course, I have simplified it and made it much shorter; if you listen to the interview, you'll get to know that it's so complex! With every application, when such requests come in, it has to be done in microseconds, because you need to have a good user experience as well. So yes, it adds a lot of complexity. But, it was an interesting conversation around zero trust and API integration.

Anna Delaney: Well, we'll recommend viewers watch that interview in full. Thank you so much, Suparna. And finally, just for fun, I'd like you to share a memorable quote from a movie and illustrate how you would relate it to a real-life cybersecurity scenario.

Suparna Goswami: So, I have one from one of the episodes of Sherlock that I've watched: "I may be on the side of the angels, but don't think for one second that I am one of them." I related it to this because Sherlock may work with the good guys, but he wasn't afraid to play dirty and destroy Moriarty the way Moriarty was destroying him. So, white hat hackers are on this side, but I'm sure if they're required to act the way the hackers work, they will obviously use those tools and do that. So I thought it was very relevant to the cybersecurity world.
Anna Delaney: Fantastic, that was excellent. Tom, go ahead.

Tom Field: Mine's actually quite complementary. So, I'm going to give you the quote, and I'm going to ask who remembers it. The quote is: "Keep your friends close, but your enemies closer."

Anna Delaney: I've heard that come up in politics a lot.

Tom Field: Well, it originated with The Godfather Part II, which was released 50 years ago this year, in 1974. And, to me, it resonates in terms of how you need to practice your own threat intelligence gathering these days. It's great to know who your friends are, but boy, you'd better have a better handle on who your enemies are and what they're up to.

Anna Delaney: Totally. You quoted The Godfather on the ISMG Editors' Panel - that is a first. I love it! Tony?

Tony Morbin: Anna, you said you'd be asking for a quote, and I totally missed the movie bit. So, I just grabbed one from this morning that I heard. It came from former U.S. Republican Congressman Will Hurd of Texas, talking about a briefing he had while serving on the board of ChatGPT's maker, OpenAI. And he was saying, "If unchecked, artificial general intelligence could lead to consequences as impactful and irreversible as those of a nuclear war." So yeah, it's very much a call for action to ensure guardrails. It's not from a movie, but it sounds like it should be.

Anna Delaney: Yeah, we'd better get you a movie for next year. Well, my quote is, "Frankly, my dear, I don't give a damn." Which movie was that?

Tony Morbin: Gone With the Wind.

Anna Delaney: Very good. One of the best lines in the history of cinema. But, I thought it could be said by the regulators - it's a good quote for the regulatory bodies out there, for dismissing an organization's justifications for poor cybersecurity measures. So maybe it could be adapted to, "Frankly, this regulatory body doesn't accept poor security excuses."

Tom Field: The Rhett Butler approach to cybersecurity regulation! I like it!

Anna Delaney: And then he walks out the door. Well, thank you all very much - always enjoyable and informative. Fantastic.

Suparna Goswami: Thank you, Anna. Thank you so much.

Anna Delaney: Thank you so much for watching. Until next time.