Artificial Intelligence & Machine Learning, Fraud Management & Cybercrime, Next-Generation Technologies & Secure Development
ISMG Editors: What Did the Sam Altman-OpenAI Saga Teach Us?
Also: ChatGPT Turns 1 Year Old; Police Nab Ransomware Gang Chief in Ukraine

Anna Delaney (annamadeline) • December 1, 2023

In the latest weekly update, four editors at Information Security Media Group discuss Sam Altman and OpenAI's brief leadership nightmare, the state of generative AI one year after the general release of ChatGPT, and how police nabbed a suspected ransomware group ringleader in Ukraine.
The panelists - Anna Delaney, director, productions; Mathew Schwartz, executive editor of DataBreachToday and Europe; Tom Field, senior vice president, editorial; and Michael Novinson, managing editor, ISMG business - discussed:
- What the OpenAI/Sam Altman saga taught us about leadership dynamics, organizational governance and the delicate balance between rapid innovation and stability;
- Highlights from an interview with Avivah Litan of Gartner Research on generative AI, as ChatGPT marked its one-year anniversary;
- How police in Ukraine arrested a group of criminal suspects accused of launching ransomware attacks against large organizations in more than 70 countries.
The ISMG Editors' Panel runs weekly. Don't miss our previous installments, including the Nov. 17 edition on patients suffering after relentless cyberattacks on hospitals and the Nov. 24 edition on whether federal budget cuts will bite U.S. security.
Transcript
This transcript has been edited and refined for clarity.
Anna Delaney: Hello, this is the ISMG Editors' Panel where ISMG editors gather to evaluate the week's top cybersecurity and technology stories and figure out what they will mean. I'm Anna Delaney, director of productions at ISMG. And I'm joined by colleagues Tom Field, senior vice president of editorial; Mathew Schwartz, executive editor of DataBreachToday and Europe; and Michael Novinson, managing editor of ISMG business. Very good to see you all.
Tom Field: Thanks for having us over.
Mathew Schwartz: Nice to be back.
Michael Novinson: Nice to see you.
Delaney: Well, Tom, you have light shining, radiating, tell us more.
Field: I wanted to set the tone for this discussion, and I thought this might be a good look. Just a couple of weeks ago, I was flying out of my local airport in Augusta, Maine, and it happened to be right around the time the sun was coming up. It's one of those moments where you look out the window and you just see it and you know. That's the moment, and then you think, "There, I've got a virtual background for the next Editors' Panel." You know that feeling.
Delaney: I do. Indeed. Always on the lookout. Mathew, so from the sky to the upside-down world. Tell us more.
Schwartz: Yeah, I'm literally in the gutter here, Anna, on a street in Dundee where the rain seems to never stop. It's been arresting, visually. I've launched a project on reflections, just because we've had so much water these last couple of months that there have been some lovely reflections, especially around dusk, where we are here.
Field: I was going to suggest that this was the Stranger Things version of Dundee, but apparently not.
Schwartz: It's always the upside-down world here.
Delaney: That's a reflection in the water.
Schwartz: Yeah, this is a reflection. So if you were to pan upward, there would be the building up there.
Delaney: Excellent. So clear. Brilliant. And, Michael, you've just got to explain.
Novinson: Of course. Coming to you from Orange, Connecticut, outside the Pez Museum. It had been a Pez factory for quite a while, and they opened it as a Pez Museum back in 2011. My family was passing through on the way home from Thanksgiving and figured, why not take a little stop there to learn about the history of Pez and all the different dispensers? You can make your own customized Pez, and I somehow came home with three separate Pez dispensers. So not a bad use of $10, and not a bad way to break up a drive through the Connecticut highways.
Delaney: A museum for everything, it seems.
Novinson: Yes.
Delaney: Well, I'm in the beautiful town of Corleone in Sicily. If you've watched The Godfather series, you may remember the character Vito Corleone, played by Marlon Brando, whose name was inspired by this town, because in previous years it was known for its association with the mafia - an image the residents are very keen to dispel. Regardless, it's very much worth a visit if you are in Italy.
Field: Have you made any new connections, Anna?
Delaney: Working on them. Well, Michael, let's start with you this week, because it's fair to say that the week leading up to the Thanksgiving holiday in the U.S. was dominated by one story: the Sam Altman-OpenAI saga. You were working, it has to be said, around the clock on the continually changing story. Now the drama has settled somewhat, at least for now. I'd love to know where we are right now, but I'm sure that will include a recap of what happened and how the story evolved.
Novinson: Absolutely, Anna, and thank you for the opportunity. Let's briefly address how we got here, and then we can get into what things are going to look like going forward. We're talking about OpenAI - the nonprofit organization with a for-profit wing that created ChatGPT and publicly launched it approximately a year ago - co-founded by Sam Altman. Late on the Friday afternoon before Thanksgiving, with a lot of Americans heading out for a long Thanksgiving break, a news release was published at 3:30 that afternoon saying there had been a CEO change: Sam Altman had been removed, specifically because he wasn't completely candid with the board and the board had lost confidence in his ability to oversee the company. The news completely blindsided everyone, which is hard to do nowadays in the tech world, and it led to a ton of drama and multiple efforts to reinstate him. The company went through multiple interim CEOs. The first, their CTO, essentially moved aside because she wanted Altman back. So they brought in a second interim CEO, the former CEO of Twitch, who eventually put pressure on the board, saying that unless it could produce documentation of what Sam Altman had done wrong, he was also going to resign. After 106 hours, multiple false starts, and threats of Altman and most of the OpenAI staff decamping to Microsoft, they did finally come to a resolution. Sam Altman is back as CEO. Greg Brockman - one of the co-founders, who was the president and had been removed as chair of the board - is back as president. Ilya Sutskever - the third co-founder and chief scientist, and the one who had informed Altman of the firing - for now appears to still be at the company, not on the board, but as chief scientist. And there's just a lot of uncertainty about where things go from here, because you had two camps within OpenAI.
Recent reporting has fleshed out that you had folks focused on commercializing, who figured that if OpenAI takes the lead here, it can develop AI in a safe and responsible manner, and who wanted to get its products into the hands of as many people as possible. That's the camp that includes Sam Altman and Greg Brockman, and perhaps some members of the board who had left in 2023. And then you had a more altruistic wing of the board - maybe Sutskever was aligned with them - who were concerned that the company was moving too fast and doing too much from a development standpoint, that it needed to focus on research and on making sure that AI was working for the benefit of humanity, and who wanted the company to slow down a little bit. Those are the folks who essentially forced Altman out. It's unclear how much of it was ideological and how much of it was personal. But here's where we stand now: essentially, in order to break this impasse, there's a three-member board, and it's not clear whether any of them are in the more AI-skeptic camp or the AI-acceleration camp. You don't have any OpenAI employees on the board. You have Bret Taylor, the former co-CEO of Salesforce, well known and well liked in Silicon Valley, chairing the board for now. You have Larry Summers, who's from the public policy world - a Treasury secretary under Clinton and president of Harvard University in the 2000s - with more of a public policy than a technology background. And then you have the CEO of Quora, the one holdover from the previous board; he was part of the board that fired Altman, but he remains on the board. The question internally for OpenAI is: how does the board get from three members today to nine members, which is where indications are it wants to end up?
Recent reporting has suggested that the board is still going to be a nonprofit board focused on developing AI for the good of humanity, not on the interests of shareholders or investors. Signals today are that Microsoft will not be getting a voting board seat, and other investors - Sequoia and Thrive Capital - will not be getting voting representation either. It will be different from a traditional for-profit board. Whether they'll have a nonvoting observer is to be determined. In the Sam Altman firing, Microsoft was notified one minute beforehand, despite owning 49% of the for-profit entity and having invested $13 billion. I don't think anybody wants a repeat of that going forward. So there are just a lot of questions about who populates the board. It seems like anybody firmly in one of the camps around AI is not going to be a good fit, but you want people with subject matter expertise. There are also signs that they are conducting an internal investigation into some of the claims made by the board. Once that investigation concludes, does Altman get a board seat back, given that he's the CEO of the company? Normally CEOs do sit on their own boards; even if they're not the chair, they're usually a director. In terms of the external piece of it, OpenAI is still the clubhouse leader. They're aligned with Microsoft, ChatGPT was first to market with generative AI, and by most reporting it has a significant edge over Bard or Anthropic or some of the others taking this on. But over the past several days, some larger enterprises have kicked the tires on Anthropic or Bard - alternatives they wouldn't have considered otherwise. Will some of those customers migrate over - a better price, something that took them by surprise? Possibly.
And then I think the other piece here is around those third parties who are building products, either on OpenAI or on other platforms. Do they want to be so focused on building on just OpenAI? Or is that seen as too risky nowadays, given the corporate structure - do they want to diversify a bit and have some capabilities on Anthropic or Bard, so that if OpenAI goes in a direction they don't like, their company doesn't go kaput? All important things to watch in the weeks and months ahead.
Field: May I ask you a question? Michael, you and I have been back and forth on this a lot from the weekend that this all was coming down. Where were the adults in the room?
Novinson: It's a good question. It is interesting, because we're trying to develop AI that can think the way humans do, and yet the board didn't seem to think many steps ahead. They made an accusation in public that Altman wasn't completely candid, which is extremely unusual. I've seen a lot of CEO departure press releases - they want to spend more time with their family, they're dealing with health issues - but almost never anything of an accusatory nature. So of course they say this, and then lots of people - company employees, Microsoft, Sequoia, other investors - go to them and say, "We want evidence. When was he not completely candid?" They don't have to put it out in public, but at least in private they could share some examples of when this happened. And by all accounts, they didn't produce any. There was some talk that he was so deceptive, so tricky, that they weren't able to document his deception. But after a while, that's not going to hold up. So in terms of your question, Tom, about adults in the room: it was an unusual board. At the time, you had the CEO of Quora, somebody who had run a security research team at Georgetown, and a philanthropist, the spouse of a well-known American actor. These were not people who generally sit on major corporate boards, which usually have C-suite figures from major multinational corporations. I just don't think they anticipated the level of scrutiny and pushback. If they had, they might have engaged legal counsel and crisis communications firms prior to making that announcement.
Delaney: There's this recurring phrase: lack of transparency. Do we know whether OpenAI plans to take steps to rebuild trust among investors, its employees and the public?
Novinson: Yeah, I certainly think that's important. It would have been more important if there were a different leadership team and you had to explain to customers why these new folks are in charge. Given that Altman, Brockman, Sutskever and the founding team are there, and there was not a mass resignation of employees - of course, some may have chosen a different path - I think less reassurance is needed. But all eyes are on who fills out the rest of this board. Do they seem to align more with Altman? Do they seem to align more with some of the AI skeptics? Are they folks from outside the AI sector entirely, because anybody who's been in AI is too hot to touch, having a point of view on this? I think that's going to be fascinating to watch. The one other piece to watch here is Microsoft. They've had a very partner-centric AI strategy, leaning heavily on OpenAI to power their ambitions in this space. They've already talked about an AI research team - do they start to develop more capabilities internally? Do they feel it's too risky to rely solely on a partnership? Do they need native capabilities that they have direct control over? That, to me, would also be an interesting piece to watch.
Delaney: So many questions. Well, thank you so much, Michael. Tom, you have spoken with Gartner Research analyst Avivah Litan more generally about generative AI trends. What can you share?
Field: Well, maybe it came out of all this that was going on. Avivah is one of the foremost analysts at Gartner following the AI space, and risk in particular. The conversation started a week ago with her observations of this drama; then we sat down after things resolved and talked about its potential impacts. And by the way, it coincides with the first anniversary - the public birthday - of ChatGPT, which is this week, so it was a timely opportunity to sit and talk about the current state and future of generative AI. I asked her specifically, after all that went down in the 106 hours that Michael so thoroughly documented, what impact she expects to see on this nascent industry. I want to share with you an excerpt of our conversation.
Avivah Litan: I think companies and organizations of any stripe that use AI will be more reluctant to put all their eggs in with one vendor, and will be looking for solutions that buffer them from the centralized powers, if you will. There's a layer of middleware services emerging with AI that basically ensures companies keep their own intellectual property - the data stays with them - and remain independent of the back end. So if one player fails, they can just move their logic to another player. From a tech industry point of view, those solutions are going to become much more imperative now. There's a great selling point: do you want to be locked into OpenAI? The answer is no. Then use our middleware software, which will keep you safe and secure, and you will own your intellectual property and your independence. So I think that's how the industry has changed from a technology perspective. And from a psyche perspective, people realize how fragile this whole scene is - where the fate of AI is in the hands of boards, CEOs and individuals. Not that they're bad individuals, but they've got too much power. And to me, that's scarier than anything.
Field: She's spot on. There's too much power. And that's scarier than anything.
Delaney: And so following the saga at OpenAI, Tom, has your perspective shifted in any way when it comes to how this tech evolves, but also how it's managed, how it's governed?
Field: We've said consistently for the past year that we've never seen a technology enter the market the way generative AI has. And in all my years of journalism, I've never seen a drama evolve the way the Sam Altman story did over the course of five days, a week or so ago. It raises some significant questions. This isn't how invested parties behave. This isn't how boards behave. Everything here is new to the Silicon Valley scene. I made the joke about "where were the adults in the room?" - we need some more adults in the room. This is a technology that needs guardrails, not just within the customer organizations but within the provider organizations. I don't want to use the term "wake-up call," but I hope this makes somebody wake up and realize that we need more guidance going forward for organizations such as these, because the power is great. If Spider-Man taught us anything, it's that with great power comes great responsibility - and good boards.
Schwartz: Yeah, and also "move fast and break things" has been a Silicon Valley mantra, and that got out of control here, as Michael and you have so eloquently stated.
Delaney: Well, Mat, moving on to your story. There's been a gain in the fight to stop ransomware gangs or at least make their lives more difficult as police have busted high-profile ransomware gang suspects in Ukraine. Tell us about it.
Schwartz: Yes, some good news on the ransomware front. Last week, Ukrainian cybercrime police, backed by law enforcement investigators from multiple other countries - including, I believe, the FBI and the U.S. Secret Service, as well as Norway and Germany - busted some ransomware suspects, including the alleged ringleader of a group that launched in 2020. It's not clear to me if this is a stand-alone group that developed its own ransomware or if they procured the ransomware; authorities said they definitely improved it. We're talking about strains like LockerGoga, Dharma, MegaCortex and Hive. Now, Hive I know for sure was developed by another group, and a lot of people participated in it. But this particular group that's been rolled up has been charged with hitting some big victims - 1,800 victims or more across over 70 countries - operating from Ukraine since 2018, and apparently continuing to operate even after Russia launched its all-out invasion of Ukraine in February 2022. One of the big victims ascribed to the group is Norsk Hydro, the Norwegian aluminum giant, which led to Norwegian authorities getting closely involved in this investigation. In 2019, the same year as Norsk Hydro, the group also allegedly hit a chemical company in the Netherlands owned by a U.S. firm and demanded a ransom then worth $1.3 million. So start to multiply this by the 1,800 victims or more. The ransom demands wouldn't all have been the same, and who knows what the ransom payments were, but you're looking at potentially a lot of illicit revenue for this group operating from Ukraine. This isn't the first time we've heard about this group: about a year ago, there was a first round of arrests of 12 of what were described as high-value targets, in Ukraine and Switzerland.
Shortly thereafter, authorities released free decryptors for at least some of the files that had been encrypted by this group, working with Romanian cybersecurity firm Bitdefender. The Swiss authorities said they developed decryptors for the LockerGoga and MegaCortex ransomware. And now, a year later, we're seeing what looks to be the application of digital forensic evidence seized in that initial raid, based on an investigation originally launched in 2019. This additional evidence has allowed them to identify who they suspect the ringleader is, along with a handful of accomplices. So it appears to be very slow - I won't say rolling up, because the group appears to have been defunct since the end of last year - but very slow justice catching up with these alleged pain-causing, ransomware-wielding criminals operating from Ukraine.
Delaney: Excellent, Mat. As you said, the operation against this cybercriminal group started in 2019. What challenges did law enforcement face throughout that time when it comes to tracking and dismantling these highly sophisticated operations across all sorts of countries? And do you think the war between Russia and Ukraine had an impact at all in hindering, or even helping, law enforcement?
Schwartz: The short answer is: I have no idea. I've fired off questions about this to the FBI and others - even with the decryptors that came out last year - just asking, look, is this group responsible for developing LockerGoga, or were they procuring it from somewhere else? And what I heard back was: no comment. I suppose because this investigation is continuing - authorities have said it's still continuing - we're not going to get that kind of detail. That stuff may come out if these people go to court in a trial by jury, as opposed to reaching some kind of plea agreement. Definitely, though, I would think that having this full-fledged war happening must complicate things. So kudos to the authorities in Ukraine for continuing to chase down the suspects and press forward with this investigation, working very closely with partners, backed by Europol, which has been coordinating and offering intelligence, and Eurojust, which has also been coordinating and helping with all this. It's a big investigation, with lots of going back and forth - people from Ukraine at Europol and vice versa. So yeah, big kudos. In the middle of a massive war, this can't be easy, but it's great that they are continuing to press on these criminals. I hope we see more of this.
Field: Follow-up question. When you hear yourself talking about these ransomware criminal groups, do you ever stop to think that it sounds like you could be talking about the Transformers?
Schwartz: Give me a little bit more to work with that, Tom.
Field: Rattle off these names again.
Schwartz: Oh, LockerGoga and MegaCortex.
Field: Why can't they be Transformers?
Schwartz: Well, there's a sociology treatise to be written about the fact that a lot of these groups are men in their early 20s with a lot of time on their hands, talking a lot of smack. There's this soap opera of "my crypto-locking code is worse than yours." There's lots of adolescent-level drama, so I'm not surprised that you're taking away a certain - I won't say youthful flavor, but a less mature, perhaps, language being used around a lot of this stuff.
Field: That would be my beat. Yes.
Delaney: That sets up my final fun question very nicely. If we were to transport a historical figure into the digital age with today's technology, which one do you think would excel as the most formidable hacker and why?
Field: Benjamin Franklin. He invented the lightning rod, the Franklin stove and bifocals, created the first library, was the first postmaster general of the U.S., and is notable for so many innovations and inventions. I think he would do a terrific job in today's era. I would look forward to reading his Poor Richard's AI Almanack.
Delaney: Excellent choice. Mathew?
Schwartz: Yeah, Benjamin Franklin's a good one. I would transport, just for fun, Charles Ponzi, who hasn't been with us since 1949, although his name lives on. He certainly didn't invent the concept of robbing Peter to pay Paul, but he perhaps industrialized it on a scale we'd never seen before - where you get people to invest, and you pay off the early investors with the later investors' money, promising them returns that you just can't deliver.
Delaney: Very good, Ponzi choice, great. And Michael?
Novinson: I was thinking of Napoleon. In the pre-cyber world, he escaped Elba - escaped an island - with hundreds of people, commandeered a ship and took over. He told the military they were welcome to shoot him, and they just let him take over as leader, at least for another couple of months until all the other countries kicked him out again. But my goodness, if he had our cyber tools and cyber weaponry in his hands, imagine what he could do, given how effective he was at social engineering in a non-cyber, non-digital world.
Delaney: That's a good one, and very topical. Well, I was going for the 16th-century astrologer and physician Nostradamus. I think he'd serve very well as a modern-day digital oracle, who would have the ability to perceive vulnerabilities, conduct predictive cyberthreat analysis, be a master at staying hidden in the digital realm, and hopefully be able to use AI to predict future developments. And I'd also want to ask him what he thinks about it all - and how soon we would see AGI. So plenty of questions for him.
Schwartz: Didn't he predict AI, Anna?
Delaney: Did he?
Schwartz: And cybercrime, everything.
Field: Put them all together, and it would be a hell of a dinner party.
Delaney: Yes, it would be a great one. Thank you so much, Tom, Michael, Mathew. Always a pleasure. Fantastic.
Novinson: Thank you, Anna.
Schwartz: Arrivederci, Anna.
Delaney: Arrivederci, ciao. Thanks so much for watching. Until next time.