At the RSA security conference in San Francisco this week, there’s been a feeling of inevitability in the air. At talks and panels across the sprawling Moscone convention center, at every vendor booth on the show floor, and in casual conversations in the halls, you just know that someone is going to bring up generative AI and its potential impact on digital security and malicious hacking. NSA cybersecurity director Rob Joyce has been feeling it too.
“You can’t walk around RSA without talking about AI and malware,” he said on Wednesday afternoon during his now annual “State of the Hack” presentation. “I think we’ve all seen the explosion. I won’t say it’s delivered yet, but this truly is some game-changing technology.”
In recent months, chatbots powered by large language models, like OpenAI’s, have made years of machine-learning development and research feel more concrete and accessible to people all over the world. But there are practical questions about how these novel tools will be used by bad actors to develop and spread malware, fuel the creation of misinformation, and expand attackers’ abilities to automate their hacks. At the same time, the security community is eager to harness generative AI to defend systems and gain a protective edge. In these early days, though, it’s difficult to break down exactly what will happen next.
Joyce said the National Security Agency expects generative AI to fuel already effective scams like phishing. Such attacks rely on convincing and compelling content to trick victims into unwittingly helping attackers, so generative AI has obvious uses for quickly creating tailored communications and materials.
“That Russian-native hacker who doesn’t speak English well is no longer going to craft a crappy email to your employees,” Joyce said. “It’s going to be native-language English, it’s going to make sense, it’s going to pass the sniff test … So that right there is here today, and we are seeing adversaries, both nation-state and criminals, starting to experiment with the ChatGPT-type generation to give them English language opportunities.”
Meanwhile, although AI chatbots may not be able to develop perfectly weaponized novel malware from scratch, Joyce noted that attackers can use the coding skills the platforms do have to make smaller changes that could have a big effect. The idea would be to modify existing malware with generative AI to change its characteristics and behavior enough that scanning tools like antivirus software may not recognize and flag the new iteration.
“It is going to help rewrite code and make it in ways that will change the signature and the attributes of it,” Joyce said. “That [is] going to be challenging for us in the near term.”
In terms of defense, Joyce seemed hopeful about the potential for generative AI to aid in big data analysis and automation. He cited three areas where the technology is “showing real promise” as an “accelerant for defense”: scanning digital logs, finding patterns in vulnerability exploitation, and helping organizations prioritize security issues. He cautioned, though, that before defenders and communities more broadly come to depend on these tools in daily life, they must first study how generative AI systems can be manipulated and exploited.
Mostly, Joyce emphasized the murky and unpredictable nature of the current moment for AI and security, cautioning the security community to “buckle up” for what’s likely yet to come.
“I don’t expect some magical technical capability that is AI-generated that will exploit all the things,” he said. But “next year, if we’re here talking a similar year in review, I think we’ll have a bunch of examples of where it’s been weaponized, where it’s been used, and where it’s succeeded.”