WEBVTT 1 00:00:07.170 --> 00:00:09.780 Anna Delaney: Hello and welcome back to the ISMG Editors' Panel. 2 00:00:09.780 --> 00:00:13.320 I'm Anna Delaney, and this is a weekly spot where we examine the 3 00:00:13.320 --> 00:00:17.100 most important news and events in cyber and InfoSec right now. 4 00:00:17.460 --> 00:00:20.670 And, I'm thrilled to be joined by my colleagues Akshaya Asokan, 5 00:00:20.700 --> 00:00:24.690 senior correspondent; Rashmi Ramesh, assistant editor, Global 6 00:00:24.690 --> 00:00:27.960 News Desk; and Tony Morbin, executive news editor for the 7 00:00:27.960 --> 00:00:29.610 E.U. Great to see you all! 8 00:00:30.360 --> 00:00:30.930 Tony Morbin: Good to be here. 9 00:00:30.930 --> 00:00:31.710 Rashmi Ramesh: Glad to be here. 10 00:00:32.759 --> 00:00:34.649 Anna Delaney: Rashmi, a stunning background behind you. 11 00:00:35.250 --> 00:00:38.520 Rashmi Ramesh: It's a railway station in Mumbai, where we hosted our 12 00:00:38.520 --> 00:00:41.550 India flagship Cybersecurity Summit just a few days ago, in 13 00:00:41.550 --> 00:00:42.060 fact. 14 00:00:42.420 --> 00:00:46.050 Anna Delaney: We look forward to hearing more about it. Tony! 15 00:00:46.050 --> 00:00:47.730 Baptism of fire behind you. 16 00:00:48.420 --> 00:00:50.730 Tony Morbin: Well, we've just had Bonfire Night here, you know, 17 00:00:50.730 --> 00:00:56.250 Guy Fawkes Night. So, I'll be leading on from there into AI. 18 00:00:56.910 --> 00:01:00.120 Anna Delaney: Very good. We look forward to that. And, Akshaya, 19 00:01:00.180 --> 00:01:02.760 beautiful painting, I think. 20 00:01:03.540 --> 00:01:06.210 Akshaya Asokan: Yeah, so this is a sketch of Bletchley Park where 21 00:01:06.210 --> 00:01:11.400 the U.K. Government hosted the U.K. AI Safety Summit, about 22 00:01:11.400 --> 00:01:14.250 which I'll be talking later on. 23 00:01:15.060 --> 00:01:18.120 Anna Delaney: Yes, so you've been busy reporting on it all of 24 00:01:18.120 --> 00:01:22.560 last week. So, we look forward to it. And, I'm in the interior 25 00:01:22.560 --> 00:01:25.950 of the National Gallery in Trafalgar Square. And, I was 26 00:01:25.950 --> 00:01:28.770 there recently to see a Frans Hals exhibition, but I was just 27 00:01:28.770 --> 00:01:31.770 reminded of how beautiful the architecture is, so I'm just 28 00:01:31.770 --> 00:01:34.830 sharing a glimpse of it with you now. And, Rashmi, it was probably 29 00:01:34.830 --> 00:01:40.020 built around the same time - it was in 1838, I believe, that it moved to 30 00:01:40.380 --> 00:01:44.160 Trafalgar Square. So, it's all about history today, I 31 00:01:44.160 --> 00:01:49.230 think. Well, Tony, you have been working very hard on a 32 00:01:49.230 --> 00:01:52.710 generative AI survey. Tell us about it. And, what have we 33 00:01:52.710 --> 00:01:53.130 learned? 34 00:01:53.970 --> 00:01:56.820 Tony Morbin: Okay, I will get to that. I'm just starting off 35 00:01:56.820 --> 00:01:59.130 saying, you know, we're celebrating, or just recently 36 00:01:59.130 --> 00:02:02.550 celebrated, Bonfire Night with bonfires and fireworks here in 37 00:02:02.550 --> 00:02:06.450 the U.K. I say celebrating - commemorating, rather - how Guy Fawkes 38 00:02:06.450 --> 00:02:09.060 tried but failed to blow up the Houses of Parliament in the 39 00:02:09.060 --> 00:02:13.380 Gunpowder Plot back in 1605. Now, firstly, as a kid, I really 40 00:02:13.380 --> 00:02:16.740 enjoyed Bonfire Night.
And, thanks to an old children's 41 00:02:16.740 --> 00:02:20.400 encyclopedia, which used the recipe for making gunpowder to 42 00:02:20.400 --> 00:02:23.610 explain percentages, plus a chemistry set and a local 43 00:02:23.610 --> 00:02:27.000 chemist for extra supplies of potassium nitrate, I made my own 44 00:02:27.060 --> 00:02:30.270 gunpowder and fireworks. And, I know some people might say, "No, 45 00:02:30.270 --> 00:02:35.610 surely not." But, I've actually- there's the book here. And, I 46 00:02:35.610 --> 00:02:38.460 don't know if you're going to be able to see the page. But, there 47 00:02:38.460 --> 00:02:42.420 we have the recipe for making gunpowder. I'll do it very 48 00:02:42.420 --> 00:02:46.170 briefly. Now, many will consider that that kind of knowledge is 49 00:02:46.200 --> 00:02:50.490 dangerous, and as Jen Easterly of CISA noted in her Vanderbilt 50 00:02:50.760 --> 00:02:54.630 Summit keynote, the internet has been used to spread all sorts of 51 00:02:54.630 --> 00:02:58.050 dangerous information. And, she cited a treatise on how to make 52 00:02:58.050 --> 00:03:00.720 a bomb in the kitchen of your mom, which was used by the 53 00:03:00.720 --> 00:03:04.740 Boston Marathon bombers to actually kill people. The fact 54 00:03:04.740 --> 00:03:09.060 is, we launched the internet with no security controls, we 55 00:03:09.060 --> 00:03:11.520 added social media without considering any negative 56 00:03:11.520 --> 00:03:15.090 impacts. And, we now see that AI is exceeding Moore's Law and 57 00:03:15.090 --> 00:03:18.840 doubling in capability every six months with no agreed controls. 58 00:03:19.650 --> 00:03:22.950 Now, AI is going to make knowledge more widely available, 59 00:03:23.670 --> 00:03:26.730 and without controls that will include how to make cyber 60 00:03:26.730 --> 00:03:30.000 weapons, bioweapons and worse, up to and including the 61 00:03:30.000 --> 00:03:33.960 potential end of humanity. So, the call is- the time for action 62 00:03:33.960 --> 00:03:39.210 is now. At the recent U.K. AI Safety Summit, which Akshaya is 63 00:03:39.210 --> 00:03:42.390 going to be discussing, U.S. Vice President Kamala Harris 64 00:03:42.390 --> 00:03:46.080 noted how AI could enable cyberattacks at a scale beyond 65 00:03:46.080 --> 00:03:49.080 anything we've seen before. And, in the resulting Bletchley 66 00:03:49.080 --> 00:03:52.800 Declaration, governments agreed to tackle AI safety risks. And, 67 00:03:52.800 --> 00:03:56.730 then in the U.S., there's the Biden administration's AI executive order, which includes a 68 00:03:56.730 --> 00:04:00.120 requirement for tech firms to submit test results for powerful 69 00:04:00.120 --> 00:04:03.180 AI systems to the government before they're released to the 70 00:04:03.180 --> 00:04:07.260 public. So, clearly, AI-powered cyberattacks are now on the 71 00:04:07.260 --> 00:04:10.980 radar of governments. But, without compulsion, what are 72 00:04:10.980 --> 00:04:14.160 organizations themselves doing when it comes to mitigating the 73 00:04:14.160 --> 00:04:20.250 risks of AI? Now, in a recent survey conducted by ISMG of both 74 00:04:20.250 --> 00:04:23.100 business leaders and cybersecurity professionals, it 75 00:04:23.100 --> 00:04:26.610 wasn't surprising that business leaders were less cautious about 76 00:04:26.610 --> 00:04:29.310 the introduction of AI, keen to harness the potential 77 00:04:29.310 --> 00:04:32.670 productivity gains.
And, these gains are real, with more than 78 00:04:32.670 --> 00:04:36.180 half of the respondents in our survey deploying AI saying that 79 00:04:36.180 --> 00:04:39.600 they've achieved gains of more than 10%, and often considerably 80 00:04:39.600 --> 00:04:42.960 more. And, the use cases were really wide ranging, from 81 00:04:43.140 --> 00:04:46.020 automation, back-office functions, marketing and content 82 00:04:46.020 --> 00:04:49.050 creation, to patching and vulnerability management, risk 83 00:04:49.050 --> 00:04:51.900 and incident management, and research and diagnosis of 84 00:04:51.900 --> 00:04:56.220 medical conditions. Unfortunately, AI regulations, 85 00:04:56.250 --> 00:04:59.850 such as they are, are clearly changing too fast for users to 86 00:04:59.850 --> 00:05:02.820 keep up, and they lack uniformity on a global scale. 87 00:05:03.150 --> 00:05:06.030 So, it wasn't really a surprise to find that only 38% of 88 00:05:06.030 --> 00:05:09.900 business leaders and 52% of cybersecurity leaders say that 89 00:05:09.900 --> 00:05:13.080 they actually understand what the AI regulations are that 90 00:05:13.080 --> 00:05:16.380 apply to their vertical sector and geography. It was also 91 00:05:16.380 --> 00:05:20.130 concerning to see that only 30% of respondents report having 92 00:05:20.130 --> 00:05:23.520 playbooks for AI deployment, even though 15% of respondents 93 00:05:23.520 --> 00:05:27.300 have already deployed AI and a further 60% have plans to do so. 94 00:05:28.320 --> 00:05:31.050 The concerns - which we've discussed, obviously, quite a 95 00:05:31.050 --> 00:05:35.940 lot on our channel here - about AI risks among respondents 96 00:05:35.940 --> 00:05:39.030 were quite wide ranging, but the top was leakage of sensitive 97 00:05:39.030 --> 00:05:43.110 data by staff using AI, cited by 80%; followed by ingress of 98 00:05:43.110 --> 00:05:46.590 inaccurate data, you know, the hallucinations, cited by 70%; 99 00:05:47.370 --> 00:05:52.200 with the third place given to AI bias and ethical concerns, cited 100 00:05:52.200 --> 00:05:55.980 by about 60% of respondents. There were a multitude of 101 00:05:55.980 --> 00:05:58.710 mitigation strategies mentioned, from encryption and blocking 102 00:05:58.710 --> 00:06:02.820 software to blacklists banning access for certain AI formats, 103 00:06:02.820 --> 00:06:05.850 user groups or individuals, or whitelisting those that were 104 00:06:05.850 --> 00:06:10.320 going to be allowed. Somewhat contradicting the plans of 70% 105 00:06:10.680 --> 00:06:14.940 of respondents to use AI, it was significant that 38% of business 106 00:06:14.940 --> 00:06:18.990 leaders and 48% of cybersecurity leaders also said that they 107 00:06:18.990 --> 00:06:22.200 intend to continue banning the use of generative AI in the 108 00:06:22.200 --> 00:06:25.980 workplace, and more than 70% intend to take a walled-garden 109 00:06:25.980 --> 00:06:30.150 approach to AI going forward. Now, both of these latter 110 00:06:30.150 --> 00:06:34.140 suggestions imply that there's a desire to return to the wall and 111 00:06:34.140 --> 00:06:37.350 moat of the past, as businesses strive to regain 112 00:06:37.350 --> 00:06:40.260 control of the AI genie that's been let loose from the bottle. 113 00:06:40.830 --> 00:06:44.550 But, the reality is that generative AI is really useful, 114 00:06:44.730 --> 00:06:48.570 and it's becoming as necessary as search engines.
So, bans on 115 00:06:48.570 --> 00:06:52.260 employees using generative AI are likely to result in users 116 00:06:52.260 --> 00:06:55.920 circumventing the rules with the introduction of shadow 117 00:06:55.920 --> 00:07:00.360 AI - particularly lesser-known brands whose security 118 00:07:00.360 --> 00:07:03.480 levels and origins are unclear, and which are, again, potentially 119 00:07:03.480 --> 00:07:06.390 more vulnerable to poisoning, if not actually outright supplied 120 00:07:06.390 --> 00:07:10.920 by adversaries. So, rather than banning AI, we need to rapidly 121 00:07:10.920 --> 00:07:13.350 mature the guardrails and regulations now under 122 00:07:13.350 --> 00:07:17.190 discussion, implement them and enforce them, and do it now. 123 00:07:17.610 --> 00:07:20.190 There's no going back, and we need to embrace this new 124 00:07:20.190 --> 00:07:23.580 opportunity with zeal, but temper it with clear-sighted 125 00:07:23.580 --> 00:07:26.010 realism and not blind enthusiasm. 126 00:07:27.660 --> 00:07:29.460 Anna Delaney: That's a great overview! And, in looking ahead, 127 00:07:29.460 --> 00:07:32.700 Tony, what will you have your journalistic eye on when it 128 00:07:32.700 --> 00:07:36.210 comes to the key topics and developments in AI governance 129 00:07:36.270 --> 00:07:37.920 and regulation in the near future? 130 00:07:39.450 --> 00:07:42.000 Tony Morbin: There are two totally different things. One is, 131 00:07:42.030 --> 00:07:46.770 if you like, the big long-term kind of thing, you know, the unknowns - 132 00:07:46.800 --> 00:07:49.920 the things that we don't know we don't know, as Rumsfeld 133 00:07:49.950 --> 00:07:54.810 said. And, so there's going to be, you know, as AI doubles, you 134 00:07:54.810 --> 00:07:59.490 know, the old thing of the grain of rice on the chessboard - if 135 00:07:59.490 --> 00:08:03.960 you doubled it each time, you'd bankrupt the kingdom. AI is 136 00:08:03.960 --> 00:08:09.060 doubling in capability every six months now, and goodness knows 137 00:08:09.090 --> 00:08:12.750 where we're going to get to with that. And, then there's the more 138 00:08:12.750 --> 00:08:16.320 practical thing of people implementing AI right now. So, 139 00:08:16.320 --> 00:08:19.920 we need to have security measures that we can introduce 140 00:08:19.920 --> 00:08:25.680 now, you know, blocking the ingress of harmful data, 141 00:08:26.130 --> 00:08:30.960 blocking the egress of private and confidential data, you know, 142 00:08:31.470 --> 00:08:36.120 bigger use of encryption where it's not being used, and 143 00:08:36.120 --> 00:08:41.160 generally setting up exactly what is and isn't allowed. Right 144 00:08:41.160 --> 00:08:44.790 now, we're just working out what our rules are, and we need to 145 00:08:44.790 --> 00:08:48.090 very quickly sort out the practical rules, because people 146 00:08:48.120 --> 00:08:52.050 are using it, myself included! We're jumping in and giving it a 147 00:08:52.050 --> 00:08:57.960 go with no restrictions. So, there's that practical something 148 00:08:57.960 --> 00:09:02.340 now, and then also the long-term thinking of where this is going. 149 00:09:03.840 --> 00:09:06.330 Anna Delaney: Well, that leads very smoothly on to Akshaya's 150 00:09:06.360 --> 00:09:09.900 segment. Akshaya, you were very busy reporting, as we said, on 151 00:09:09.900 --> 00:09:13.410 the U.K. AI Safety Summit, which happened at Bletchley Park, 152 00:09:13.410 --> 00:09:16.230 behind you, last week. So, what were the key takeaways?
153 00:09:17.160 --> 00:09:19.530 Akshaya Asokan: Yes, so last week the U.K. government 154 00:09:19.530 --> 00:09:22.440 concluded the world's first ever summit on artificial 155 00:09:22.440 --> 00:09:25.950 intelligence risk and safety - the U.K. AI Safety Summit. We 156 00:09:25.950 --> 00:09:30.030 saw heads of state from 28 countries, as well as founders 157 00:09:30.270 --> 00:09:34.890 and CEOs of leading AI companies come under one roof to discuss 158 00:09:34.920 --> 00:09:39.240 risks tied to frontier artificial intelligence systems. Among the 159 00:09:39.240 --> 00:09:42.900 attendees were U.S. Vice President Kamala Harris, Chinese 160 00:09:42.900 --> 00:09:48.210 Vice Minister of Science and Technology Wu Zhaohui, 161 00:09:48.330 --> 00:09:53.610 OpenAI founder Sam Altman, and DeepMind co-founders Demis 162 00:09:53.610 --> 00:09:58.470 Hassabis and Mustafa Suleyman, among other notable names. The event 163 00:09:58.500 --> 00:10:01.380 was historic, firstly, because the venue, Bletchley Park, was 164 00:10:01.380 --> 00:10:04.740 the principal center for allied codebreaking during the Second 165 00:10:04.740 --> 00:10:08.610 World War. Secondly, and most importantly, this was the first 166 00:10:08.610 --> 00:10:14.190 time that nations, big names from the technology sector 167 00:10:14.190 --> 00:10:17.820 and civil society members came together to acknowledge the 168 00:10:17.820 --> 00:10:22.440 risks posed by AI. One key thing is, the timing of the event is 169 00:10:22.440 --> 00:10:25.500 hugely important for the British government, because it 170 00:10:25.500 --> 00:10:28.950 comes as the European Union is set to finalize what is going to 171 00:10:28.950 --> 00:10:33.210 be the first ever global regulation on AI, which is the 172 00:10:33.210 --> 00:10:37.020 EU AI Act, and as the Biden administration just recently 173 00:10:37.020 --> 00:10:40.470 announced the new artificial intelligence Executive Order in 174 00:10:40.470 --> 00:10:44.100 the U.S. So, the event, in a way, helped the U.K. government 175 00:10:44.100 --> 00:10:48.000 to strategically position itself in the conversations about AI 176 00:10:48.030 --> 00:10:52.350 regulations and governance at the time. Now, the two notable 177 00:10:52.350 --> 00:10:56.400 developments from the event were how the heads of state used 178 00:10:56.400 --> 00:10:59.910 the platform to shed light on their national AI strategies, and 179 00:10:59.910 --> 00:11:04.320 secondly, how the event gave rise to dueling AI factions, with 180 00:11:04.320 --> 00:11:07.800 backers of open-source AI calling for the open sourcing of AI models, and criticizing the 181 00:11:07.800 --> 00:11:12.330 closed-source AI developers - the proprietary applications trained on private data - 182 00:11:12.360 --> 00:11:17.520 such as OpenAI 183 00:11:17.520 --> 00:11:21.810 and DeepMind, for fear mongering. I think these two were the big 184 00:11:21.810 --> 00:11:25.920 takeaways from the event, and how nations, including China, came 185 00:11:25.920 --> 00:11:31.410 together to stress the need for a common global understanding of 186 00:11:31.440 --> 00:11:35.250 AI risks, and to lead unified efforts to control these risks. 187 00:11:35.760 --> 00:11:39.000 As much as these attendees acknowledged that AI did pose 188 00:11:39.000 --> 00:11:44.130 risks, such as job loss and bias, the common understanding 189 00:11:44.130 --> 00:11:49.980 was that the existential risks posed by the technology are not 190 00:11:50.010 --> 00:11:53.670 imminent.
Another interesting thing was that none of the big 191 00:11:53.670 --> 00:11:57.120 companies used the platform for product pitches or big 192 00:11:57.150 --> 00:12:02.850 announcements. Rather, they stressed regulating the 193 00:12:02.880 --> 00:12:06.330 technology without stymieing innovation in the field. 194 00:12:06.990 --> 00:12:09.810 Overall, the response to the event was positive, with 195 00:12:09.840 --> 00:12:15.540 attendees lauding the U.K.'s success in bringing important 196 00:12:15.570 --> 00:12:19.290 AI stakeholders under one roof. The event was so successful that 197 00:12:19.290 --> 00:12:22.830 soon a similar event will be held in France and later in 198 00:12:22.830 --> 00:12:23.580 South Korea. 199 00:12:24.090 --> 00:12:26.550 Anna Delaney: Yeah, that was a perfect overview. And a couple 200 00:12:26.550 --> 00:12:28.590 of questions for you, and maybe Tony would want to chime in as 201 00:12:28.590 --> 00:12:32.250 well. Rishi Sunak referred to AI as a co-pilot. 202 00:12:32.730 --> 00:12:33.090 Akshaya Asokan: Yes. 203 00:12:33.090 --> 00:12:36.090 Anna Delaney: Just from your journalistic perspective, what 204 00:12:36.090 --> 00:12:38.700 do you think this means? And, how does this metaphor really 205 00:12:38.700 --> 00:12:42.390 reflect the government's approach to AI? Any thoughts? 206 00:12:42.390 --> 00:12:46.650 Akshaya Asokan: Yes. So, it's a very apt way for him to put it, 207 00:12:46.650 --> 00:12:50.760 because there's so much concern and fear around the technology. 208 00:12:50.760 --> 00:12:56.250 So, the first fear is - as journalists or as anybody in the 209 00:12:56.250 --> 00:13:00.180 field who's writing and creating content, be it video, art, 210 00:13:00.180 --> 00:13:05.670 anywhere - the fear is whether this AI will take over your job. 211 00:13:06.390 --> 00:13:11.670 So, as the head of state, his responsibility is to, you 212 00:13:11.670 --> 00:13:15.000 know, assuage that fear, assure the people that, you know, "we're 213 00:13:15.000 --> 00:13:18.540 here; we are going to help you." So, that is what he meant by 214 00:13:18.540 --> 00:13:22.530 being a co-pilot. Instead of fearing this technology, adopt 215 00:13:22.530 --> 00:13:26.520 this technology into your daily life, so that it can better your 216 00:13:26.520 --> 00:13:31.200 job. And, I think, he put it rightly and it reflects the 217 00:13:31.230 --> 00:13:36.990 sentiments of the event, which is, you know: yes, we 218 00:13:36.990 --> 00:13:40.650 know there is risk, but there are also ways to overcome these 219 00:13:40.680 --> 00:13:41.910 risks. So, yeah. 220 00:13:42.960 --> 00:13:46.110 Tony Morbin: Yeah, Akshaya, I am quite interested in the 221 00:13:46.110 --> 00:13:48.300 situation beforehand. I know you've done a lot of reporting on 222 00:13:48.570 --> 00:13:53.430 regulation in AI, and the EU was taking a more cautious approach, 223 00:13:53.460 --> 00:13:56.910 with more things being banned, such as real-time facial 224 00:13:56.910 --> 00:13:59.940 recognition, compared to the U.S., which was a little bit 225 00:13:59.940 --> 00:14:03.900 more, you know, gung ho, in terms of, you know, let's get 226 00:14:03.900 --> 00:14:07.560 this AI out there. And, the U.K. was, kind of, taking a middle 227 00:14:07.560 --> 00:14:11.580 position.
But, the critics that I'd heard - of the U.K. - were 228 00:14:11.580 --> 00:14:14.700 saying, well, actually, because our biggest trade partner is the 229 00:14:14.700 --> 00:14:18.330 EU, we're just going to have to do whatever they do. Is that 230 00:14:18.570 --> 00:14:22.020 what came across from that meeting? Was it that the U.S. is 231 00:14:22.020 --> 00:14:26.280 taking the lead? That the EU is taking the lead? That the U.K. 232 00:14:26.280 --> 00:14:27.840 genuinely has an alternative? 233 00:14:30.600 --> 00:14:33.180 Akshaya Asokan: As you said, the U.K., right now, is 234 00:14:33.180 --> 00:14:36.930 positioning itself in the middle in terms of governance, which 235 00:14:36.930 --> 00:14:42.690 is, yes, we are looking into risk, yet, at the same time, 236 00:14:42.690 --> 00:14:46.050 we're also promoting innovation and, you know, development, 237 00:14:46.230 --> 00:14:51.030 because all the governments do realize how this technology can 238 00:14:51.060 --> 00:14:55.530 bring money into their economy. So, yes, if you look at 239 00:14:55.530 --> 00:14:59.850 the media reports and coverage, there are some reports that say 240 00:14:59.880 --> 00:15:03.090 that the EU, sorry, the U.S. government's announcement 241 00:15:03.120 --> 00:15:07.110 overshadowed the U.K. government's initiatives in AI. 242 00:15:07.560 --> 00:15:12.720 And, rightly so, because Kamala Harris came in and she, along 243 00:15:12.720 --> 00:15:16.770 with the U.S. Secretary of Commerce, Gina Raimondo, 244 00:15:16.770 --> 00:15:21.000 announced a number of initiatives. And, that is in 245 00:15:21.000 --> 00:15:24.330 addition to the EU, sorry, the U.S. Executive Order on 246 00:15:24.330 --> 00:15:28.530 artificial intelligence. So, they came in, they delivered, 247 00:15:28.860 --> 00:15:33.990 and I think they did manage to get some limelight from the 248 00:15:33.990 --> 00:15:39.630 media. But, again, by the end of the second day, the overwhelming 249 00:15:39.630 --> 00:15:43.380 response from the attendees was that the U.K. managed to pull 250 00:15:43.380 --> 00:15:48.060 off a successful event, which saw participation from the tech 251 00:15:48.120 --> 00:15:51.330 sector and government, as well as civil society. Even though there 252 00:15:51.330 --> 00:15:54.150 was some criticism that, you know, the number of attendees 253 00:15:54.180 --> 00:15:59.700 was limited, and the venue was very small, I think, for a 254 00:15:59.700 --> 00:16:03.660 first-time event, this was successful, and this model may 255 00:16:03.690 --> 00:16:10.050 be replicated on a larger scale, maybe in France, and we'll learn 256 00:16:10.050 --> 00:16:11.490 as we go, so yeah. 257 00:16:12.690 --> 00:16:14.760 Anna Delaney: And, what was the response to China's 258 00:16:14.820 --> 00:16:17.910 participation in the summit? You know, how did their minister's 259 00:16:17.910 --> 00:16:21.390 call for global cooperation in AI governance really influence 260 00:16:21.420 --> 00:16:22.680 the discussions, do you think? 261 00:16:22.680 --> 00:16:27.360 Akshaya Asokan: Yes. So, ahead of the event, the U.K. AI Safety 262 00:16:27.360 --> 00:16:29.760 Summit, there were concerns, especially from Western 263 00:16:29.760 --> 00:16:34.860 quarters - from the U.K.
leadership and in the U.S. - about Chinese participation. 264 00:16:34.890 --> 00:16:38.940 You know, there was the fear, like, 265 00:16:38.970 --> 00:16:43.560 why are we having the Chinese at these events, because we know 266 00:16:43.560 --> 00:16:48.060 how China is using artificial intelligence to spy on its 267 00:16:48.180 --> 00:16:53.100 citizens or surveil even the Uyghur communities, which is a 268 00:16:53.100 --> 00:17:00.660 very controversial application of AI. But, the Chinese Vice 269 00:17:00.660 --> 00:17:03.120 Minister for Science and Technology, he came in, he 270 00:17:03.120 --> 00:17:09.030 delivered a speech stressing the nation's right to deploy the 271 00:17:09.030 --> 00:17:14.610 technology, and not having anybody - sort of, any nation - say 272 00:17:14.640 --> 00:17:19.140 that you don't or cannot get to deploy this technology. So, 273 00:17:19.560 --> 00:17:26.280 it was a very mild message to the West that we can also deploy 274 00:17:26.280 --> 00:17:31.350 this technology without any intervention by other nations. 275 00:17:31.350 --> 00:17:34.590 So, they stressed the right of nations to deploy the 276 00:17:34.590 --> 00:17:39.030 technology, you know, while focusing on the risks and 277 00:17:40.380 --> 00:17:42.660 the existential risk posed by AI. Yeah. 278 00:17:43.620 --> 00:17:46.140 Anna Delaney: Okay. Well, that's not the end of the conversation, 279 00:17:46.140 --> 00:17:50.910 I'm sure. Because Rashmi, we've got more AI to talk about at the 280 00:17:50.940 --> 00:17:53.940 Mumbai Summit. But, Akshaya, that was great. Thank you so 281 00:17:53.940 --> 00:17:57.690 much for that insight. So, as I said, you're in Mumbai this 282 00:17:57.690 --> 00:18:01.050 week, and from what I hear the event was a great success. So, 283 00:18:01.650 --> 00:18:03.690 tell us about it. What were the key takeaways for you? 284 00:18:04.290 --> 00:18:08.250 Rashmi Ramesh: Yeah, I think, inextricably, AI is woven into 285 00:18:08.250 --> 00:18:13.950 every conversation that we have right now. But, our summit was 286 00:18:13.950 --> 00:18:18.450 not as focused on AI as the AI Safety Summit was, of course. 287 00:18:18.750 --> 00:18:22.980 So, we had about, I think, 700 executives who attended the 288 00:18:22.980 --> 00:18:27.930 event in person. So, the first session of the day, sort of, set 289 00:18:27.930 --> 00:18:32.340 the tone for the rest of the day. So, the speaker was Sameer 290 00:18:32.340 --> 00:18:36.810 Ratolikar, who is the CISO of HDFC Bank, which is India's 291 00:18:36.810 --> 00:18:40.830 largest private-sector bank. So, he spoke about how CISOs must go 292 00:18:40.830 --> 00:18:45.270 beyond what is expected of them, from a technical standpoint, if 293 00:18:45.270 --> 00:18:48.990 they want to be taken seriously, and how they can shape a better 294 00:18:48.990 --> 00:18:51.930 future for themselves and the industry, and how they need to 295 00:18:51.930 --> 00:18:54.660 focus on, you know, communication and inclusive 296 00:18:54.660 --> 00:18:58.620 leadership skills, become primary role models, while doing 297 00:18:58.620 --> 00:19:02.550 their primary job of minimizing attack surfaces while responding 298 00:19:02.610 --> 00:19:07.170 to market opportunities. Then, we had the keynote from Dr.
299 00:19:07.320 --> 00:19:12.030 Yoginder Talwar, who is the CISO of National Informatics Centre 300 00:19:12.030 --> 00:19:15.660 Services, which offers IT services to the government, and 301 00:19:15.930 --> 00:19:19.800 he spoke about how the Indian government has proposed the Digital 302 00:19:19.830 --> 00:19:24.180 India Act, which aims to secure the internet and develop a 303 00:19:24.180 --> 00:19:27.180 future-ready cybersecurity framework. So, he gave the 304 00:19:27.180 --> 00:19:30.690 security experts in the audience insights on cybersecurity 305 00:19:30.690 --> 00:19:34.710 challenges, on opportunities, on defense strategies in the 306 00:19:34.710 --> 00:19:40.050 country, and also on how they're leveraging AI and ML tools to 307 00:19:40.050 --> 00:19:45.390 strengthen security. So, we had a total of about 26 sessions 308 00:19:45.420 --> 00:19:48.450 running in two parallel tracks. But, I just want to highlight a 309 00:19:48.450 --> 00:19:51.450 couple of them that I got great feedback on from the audience. 310 00:19:52.200 --> 00:19:57.690 One was from Dr. Bhimaraya Metri, who is the director of 311 00:19:57.690 --> 00:20:00.690 the Indian Institute of Management in Nagpur. He spoke 312 00:20:00.690 --> 00:20:05.550 about issues that have existed for years about communicating 313 00:20:05.550 --> 00:20:11.460 with the board, but with a twist in the age of AI. So, how do 314 00:20:11.460 --> 00:20:16.020 your tactics change? What can you do differently? And, why might 315 00:20:16.020 --> 00:20:19.290 the strategies that you've deployed so far not work as 316 00:20:19.290 --> 00:20:23.910 well right now? So, there was also a Fireside Chat on incident 317 00:20:23.910 --> 00:20:27.630 reporting requirements and cyber threat information sharing, 318 00:20:27.930 --> 00:20:34.740 which was a debate between a CISO and a CFO on how they view 319 00:20:34.740 --> 00:20:38.610 the same situation so differently, where the friction 320 00:20:38.640 --> 00:20:41.040 arises, and how they work around it. 321 00:20:42.720 --> 00:20:46.530 Anna Delaney: That's excellent. I love those debates. There's 322 00:20:46.530 --> 00:20:49.920 always a lot to learn from them. So, on the topic of AI, what 323 00:20:49.920 --> 00:20:52.710 exactly were the security practitioners asking about? 324 00:20:52.710 --> 00:20:55.020 Where do they need more knowledge or more insight? 325 00:20:56.190 --> 00:20:59.880 Rashmi Ramesh: Right. So, of course, as I mentioned, AI is 326 00:20:59.880 --> 00:21:04.260 sort of interwoven into every conversation that we had. So, I 327 00:21:04.290 --> 00:21:07.350 overheard a lot of CISOs speaking amongst each other. 328 00:21:07.380 --> 00:21:13.470 And, also, when I spoke to attendees, AI was- every session 329 00:21:13.470 --> 00:21:17.700 that they liked, or thought that they took something away from, had an AI 330 00:21:17.700 --> 00:21:22.800 element in it. So, one was how, you know, traditional rule-based 331 00:21:23.010 --> 00:21:26.520 SIEMs struggle to keep up with the evolving threat landscape, 332 00:21:26.970 --> 00:21:30.900 because they primarily rely on predefined rules and signatures 333 00:21:30.900 --> 00:21:34.800 that may not be able to capture emerging attacks. And, the focus 334 00:21:34.830 --> 00:21:38.820 of that session was on how AI is likely to make this problem 335 00:21:38.820 --> 00:21:44.040 worse, but also how AI can support behavior-based threat 336 00:21:44.040 --> 00:21:48.210 detection, which can help address this issue.
And, there 337 00:21:48.210 --> 00:21:52.170 was another session about where traditional threat monitoring 338 00:21:52.200 --> 00:21:56.370 tools are falling short, and how AI can help automate some of the 339 00:21:56.370 --> 00:22:00.930 SecOps and how weaving in what they call AI Ops with IT 340 00:22:00.930 --> 00:22:05.340 operations can practically help mitigate some of the age-old 341 00:22:05.340 --> 00:22:10.170 cybersecurity issues that we have. So, that's not to say that 342 00:22:10.170 --> 00:22:18.030 everything was about AI. Although it was a very big conversation 343 00:22:18.030 --> 00:22:21.990 topic, there was quite a bit of interest in Zero Trust, in supply 344 00:22:21.990 --> 00:22:26.730 chain, as always, cyberinsurance, ransomware, of 345 00:22:26.730 --> 00:22:30.720 course, and I'll close the loop with my personal favorite 346 00:22:30.720 --> 00:22:35.040 session on AI in the age of banking, which was a session 347 00:22:35.040 --> 00:22:39.300 that was led by Professor Janakiram, who is the director 348 00:22:39.300 --> 00:22:41.850 of the Institute for Development and Research in 349 00:22:41.850 --> 00:22:45.060 Banking Technology. And, Anna, let me tell you, there are very 350 00:22:45.060 --> 00:22:48.210 few people who have the decades of experience in financial 351 00:22:48.210 --> 00:22:51.690 services that he does, are able to weave in emerging tech 352 00:22:51.690 --> 00:22:54.870 solutions for traditional problems the way he does, and are 353 00:22:54.870 --> 00:22:57.870 able to communicate all of this as clearly as he does. 354 00:22:58.710 --> 00:23:00.270 Anna Delaney: Fantastic! Well, it sounds like you had a great 355 00:23:00.270 --> 00:23:03.030 session that was excellently conveyed by you. Thank you so 356 00:23:03.030 --> 00:23:07.710 much, Rashmi. And, finally, and just for fun, in the age of IoT 357 00:23:07.710 --> 00:23:11.160 and smart devices, what's the weirdest or most unexpected item 358 00:23:11.160 --> 00:23:12.960 you've come across that can be hacked? 359 00:23:17.070 --> 00:23:19.560 Rashmi Ramesh: For me, I think that would be baby monitors. I 360 00:23:19.560 --> 00:23:23.580 mean, I get the spying part. But, honestly, I think I'd just 361 00:23:23.610 --> 00:23:27.150 give up if that information gathering includes voluntarily 362 00:23:27.150 --> 00:23:30.900 listening to babies cry for hours on end, and babies that 363 00:23:30.900 --> 00:23:35.370 are not even yours. So, yeah, I think that that would be mine. 364 00:23:35.610 --> 00:23:39.390 Anna Delaney: That's pretty creepy, isn't it? Akshaya? 365 00:23:39.900 --> 00:23:42.360 Akshaya Asokan: Yeah, so, I think I was surprised that you 366 00:23:42.360 --> 00:23:46.770 could hack 3D medical scans. And, I was thinking, why would a 367 00:23:46.770 --> 00:23:50.190 hacker target a medical scan? What do they gain out of it? 368 00:23:50.190 --> 00:23:53.820 But, obviously, the report was based on research. So, the 369 00:23:53.820 --> 00:23:57.210 researchers were working on it, and they hacked into it, and 370 00:23:57.210 --> 00:24:01.530 they used artificial intelligence - a GAN, which 371 00:24:01.530 --> 00:24:05.580 is a generative adversarial network - to, sort 372 00:24:05.580 --> 00:24:12.870 of, create tumors - fake tumors - and then they confused an AI 373 00:24:12.870 --> 00:24:18.330 system with an AI-generated tumor. So, yeah, that was 374 00:24:18.330 --> 00:24:19.320 interesting to me.
375 00:24:19.680 --> 00:24:22.860 Anna Delaney: Yeah, very scary stuff. And, also collecting, I 376 00:24:22.860 --> 00:24:27.270 guess, data on the patients as well. Tony? 377 00:24:27.990 --> 00:24:30.450 Tony Morbin: Well, yeah, of course. Yeah. Any connected 378 00:24:30.450 --> 00:24:33.030 device can potentially be hacked, however 379 00:24:33.030 --> 00:24:38.040 private. But, for me, I'd hark back to 2017, when Darktrace 380 00:24:38.040 --> 00:24:40.980 reported how an internet-connected fish tank in an 381 00:24:40.980 --> 00:24:44.910 unnamed casino was hacked. And, then the attackers moved 382 00:24:44.910 --> 00:24:47.160 laterally in the system, and were able to actually steal 383 00:24:47.160 --> 00:24:50.940 data and send it off to Finland. So, attacking a casino via the 384 00:24:50.940 --> 00:24:53.610 fish tank I thought was incredible. 385 00:24:53.610 --> 00:24:56.490 Anna Delaney: Fascinating! Impressive, isn't it? Well, this 386 00:24:56.490 --> 00:24:59.850 is a story from 2013, but it also resurfaced recently, in 387 00:24:59.850 --> 00:25:03.540 2023. It's not a recent story, obviously. But, you may remember 388 00:25:03.660 --> 00:25:07.620 reading about how smart toilets were found to have security 389 00:25:07.620 --> 00:25:10.890 vulnerabilities and the settings could be tampered with, or 390 00:25:12.090 --> 00:25:16.050 hackers could collect data on user habits. Let's just say, 391 00:25:16.050 --> 00:25:20.010 via the built-in Bluetooth radio, hackers were able to remotely 392 00:25:20.280 --> 00:25:24.450 flush the toilet, open and close the lid and, more concerningly, 393 00:25:25.050 --> 00:25:29.190 activate the built-in bidet function. So, it gives backdoor 394 00:25:29.190 --> 00:25:30.960 vulnerability a whole new meaning! 395 00:25:31.200 --> 00:25:34.770 Tony Morbin: Oh dear! Yeah, I think apart from Japan, I've not 396 00:25:34.770 --> 00:25:35.460 seen those. 397 00:25:36.000 --> 00:25:37.830 Anna Delaney: No, I've not used one either. It's not a personal 398 00:25:38.610 --> 00:25:43.290 use case for me. But, anyway, as you say, all devices, all 399 00:25:43.290 --> 00:25:46.740 connected devices can be hacked in some way. Well, Rashmi, Tony, 400 00:25:46.740 --> 00:25:49.320 Akshaya, this was absolutely brilliant! Thank you so much! 401 00:25:49.710 --> 00:25:50.310 Tony Morbin: Thank you. 402 00:25:50.670 --> 00:25:51.570 Rashmi Ramesh: Thank you, Anna. 403 00:25:52.170 --> 00:25:54.120 Anna Delaney: And, thanks so much for watching! Until next 404 00:25:54.120 --> 00:25:54.240 time.