New AI “agents” could hold people for ransom in 2025

A paradigm shift in technology is hurtling towards us, and it could change everything we know about cybersecurity.

Uhh, again, that is.

When ChatGPT was unveiled to the public in late 2022, security experts looked on with cautious optimism, excited about the new technology but concerned about its use in cyberattacks. But two years on, much of what ChatGPT and other generative AI chat tools offer attackers is a way to improve what already works, not new ways to deliver attacks themselves.

And yet, if artificial intelligence achieves what is called an “agentic” model in 2025, novel and boundless attacks could be within reach, as AI tools take on the roles of “agents” that independently discover vulnerabilities, steal logins, and pry into accounts.

These agents could even hold people for ransom by matching stolen data online with publicly known email addresses or social media accounts, composing messages and holding entire conversations with victims who believe a human hacker out there has access to their Social Security number, physical address, credit card info, and more. And if the model works for individuals, there’s little reason it wouldn’t work for individual business owners.

This warning comes from our 2025 State of Malware report, which compiled a year’s worth of intelligence to identify the most pressing cyberattacks on the horizon. Though the report’s guidance serves IT teams, the threats it describes will impact individuals and small businesses everywhere. Remember that just last year a widespread IT outage grounded flights globally, cementing the relationship between companies, cybersecurity, and everyday people.

In 2025, agentic AI may further reveal just how closely tied everyone is in the battle for cybersecurity. Here’s what we might expect.

You can find the full 2025 State of Malware report here.

The generative AI non-revolution

The November 2022 launch of ChatGPT ushered in a new relationship with our computers. No longer would we need to use our laptops, smartphones, and tablets to record or assist our creative work. Now, we could make those same machines complete the creative work for us.

AI image tools like Midjourney and DALL-E can create images when given simple text prompts. They can even mimic the styles of famous artists, like Van Gogh, Rembrandt, and Picasso. AI chat tools like ChatGPT, Google Gemini, and Claude—from OpenAI competitor Anthropic—can brainstorm ideas for marketing materials, write book reports, compose poems, and even review human-written text for legibility. These tools can also answer an endless array of factual questions, much like the separate AI tool Perplexity, which advertises itself not as a “search engine,” but as the world’s first “answer engine.”

This is the potential of “generative AI,” a term used to describe AI tools that can generate text, images, movies, summaries, and more, limited only by our imagination.

But where has that imagination brought us?

For unimaginative users, generative AI has made it easier to cheat in college classes and to abuse social media engagement algorithms to gain brief virality—hardly inspiring. And for malicious users, hackers, and scammers, generative AI has delivered oil-slick efficiency to proven attack methods.

Generative AI tools can more convincingly write phishing emails so that the tell-tale signs of a scam—like misspellings and clumsy grammar—are all but gone. The same is true for all text-based social engineering tricks, as AI chat tools can write alluring direct messages for romance scams and craft urgent-sounding texts that can fool people into clicking on links that carry malware.

Importantly, the attack methods here are not new. Instead, they’ve simply become easier to scale with the use of AI. But sometimes the AI pushes back.

For all their advertised potential, even tools like ChatGPT have boundaries, often precluding users from producing materials that could cause harm. In 2023, Malwarebytes Labs subverted these boundaries and successfully got ChatGPT to write ransomware, twice.

Because of these prohibitive rules, a set of malicious copycat AI tools can now be found online that will produce text and images that often break the law. One example is in the creation of “deepfake nudes,” which utilize AI technology to digitally stitch the face of one person onto another person’s nude body, creating fake nude “photographs.” Deepfake nudes have caused multiple crises across high schools in America, serving as a new type of ammunition for old weaponry: Blackmail.

The ability to create false text, images, and even audio has also allowed cybercriminals to create more believable threats when fraudulently posing as CEOs or executives to convince employees to, say, sign a bogus contract or hand over a set of important account credentials.

These are real threats, but they are not novel. As we wrote in the 2025 State of Malware report:

“The limited impact of AI on malware stems from its current capabilities. Although there are notable exceptions, generative AIs tend to provide efficiency rather than brand new capabilities. Cybercrime is a very mature field that relies on a set of well-established tools, such as phishing, information stealers, and ransomware that are already feature complete.”

That could change in 2025.

“Agentic” AI and a new landscape of attacks

Agentic AI is the next big thing in artificial intelligence, even if you’ve never heard about it before.

Google, Amazon, Meta, Microsoft, and more have all begun experimenting with the technology, which promises to take AI out of its current chatbot silo and into a new landscape where individualized AI “agents” can help with specific tasks. These agents could, for example, more effectively respond to simple customer support questions, help patients find in-network providers with their health insurance, and even suggest strategy based on a company’s most recent performance. Microsoft, for its part, has already teased its AI agent that answers employee questions around HR policies, holiday schedules, and more. Salesforce, too, is investing heavily in agentic AI, positioning the technology as a personal assistant for everyone.

As we wrote in the 2025 State of Malware report:

“If agentic AIs arrive in 2025, they won’t just answer questions, they will be able to think and act, transforming AI from an assistant that responds to prompts, into a peer, or even an expert that can plan out tasks, interact with the world, and solve the problems it encounters.”

The implications for cyberattacks are enormous. If put into the wrong hands, malicious attackers could ask AI agents to:

  • Search vast troves of stolen data to match leaked Social Security numbers with leaked email addresses, composing and sending phishing emails that threaten more data exposure unless a ransom is paid.
  • Scrape public social media feeds for baby photos and deliver them to other AI agents that create fake profiles, weaponizing those photos as empty threats against a child’s safety.
  • Scour LinkedIn to build a database of potentially viable email addresses at countless companies by deducing each company’s email address format (first name, last name; first initial, last name; etc.) from publicly listed addresses, and then mirroring that format to write and send bogus requests from executives to their direct reports (a sketch of this deduction step follows the list).
  • Comb through public divorce records across multiple states and countries to identify targets for romance scams, who then receive messages and carry on entire conversations composed and controlled by another AI agent.
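
To see how little reasoning that LinkedIn scenario actually requires, consider a minimal sketch of the email-format deduction step. The Python below is purely illustrative; the pattern labels, names, and addresses are hypothetical assumptions of ours, not tooling observed in the wild.

```python
# Minimal, hypothetical sketch: deducing a company's email format from a
# few publicly listed name/address pairs, then applying it to names that
# were never published. All names, domains, and pattern labels are made up.

KNOWN_PATTERNS = {
    "first.last": lambda first, last: f"{first}.{last}",
    "firstlast":  lambda first, last: f"{first}{last}",
    "flast":      lambda first, last: f"{first[0]}{last}",
    "firstl":     lambda first, last: f"{first}{last[0]}",
}

def deduce_format(samples):
    """Return the label of the pattern that reproduces every known pair."""
    for label, build in KNOWN_PATTERNS.items():
        if all(
            email.split("@")[0] == build(first.lower(), last.lower())
            for (first, last), email in samples
        ):
            return label
    return None  # format not among the guesses

def guess_email(first, last, label, domain):
    """Apply a deduced pattern to a name scraped from a public profile."""
    return f"{KNOWN_PATTERNS[label](first.lower(), last.lower())}@{domain}"

# Two publicly listed addresses are enough to pin down the format...
samples = [
    (("Jane", "Doe"), "jane.doe@example.com"),
    (("Ravi", "Patel"), "ravi.patel@example.com"),
]
label = deduce_format(samples)  # -> "first.last"

# ...after which every other employee's address becomes guessable.
print(guess_email("Sam", "Okafor", label, "example.com"))
# prints: sam.okafor@example.com
```

The code itself is trivial, and that is the point: an agent could run this kind of deduction against every company with a public staff page, then hand the guessed addresses to another agent that writes and sends the bogus requests.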

These attacks threaten not only individuals but small businesses, too, as a vulnerability in a person’s device can become a malware attack on a network. The same is true in reverse—if attacks on companies become more accessible, then the data that people give these companies becomes more vulnerable to exposure.

Thankfully, where agentic AI poses a risk, it also offers a remedy: individual AI agents could be tasked with finding a company’s vulnerabilities, responding to suspicious activity on its network, and even guiding everyday people through safely posting online, searching the web, and buying from unknown retailers.

The truth is that AI is here to stay. There is already too much investment from the largest developers and companies for that to reverse course any time soon. So, if the threat is that attackers might harness this AI, then the foreseeable future will involve a lot of defenders and everyday people harnessing it, too.


We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.
