The Promptware Kill Chain: A New Kind of Cyber Threat

promptware kill chain

Written by Georg Lindsey

I am the co-founder and CEO of CGNET. I love my job and spend a lot of time in the office -- I enjoy interacting with folks around the world. Outside the office, I enjoy the coastline, listening to audiobooks, photography, and cooking. You can read more about me here.

April 16, 2026

A growing concern in AI security is “prompt injection” — the idea that someone can sneak malicious instructions posed as a query, into an AI system and get it to misbehave. That’s a real concern, but it’s only part of the picture.

What’s really happening is more than a single exploit. It is a full attack lifecycle that can turn AI systems into unwitting participants in a breach. Security researchers are calling this the “promptware kill chain.” That framing is useful because it shifts the focus away from one isolated vulnerability and toward a structured, multi-step sequence that resembles a traditional malware campaign — with one important twist: the AI itself ends up doing much of the work.

Why This Problem Is Different

This isn’t a bug that gets fixed in the next software update. It’s a bit more fundamental than that.

Traditional software keeps code and data in separate lanes. AI systems don’t — everything gets processed together: emails, documents, system instructions, all of it. That means a cleverly worded instruction tucked inside a document or calendar invite can look just like a legitimate command to the AI. It’s not that the system is broken; it’s just that this is how current AI systems work.

Which is why the defenses need to look a little different, too.

The Promptware Kill Chain (In Plain English)

What makes this framework handy is that it breaks the attack into recognizable stages. Here’s how it typically plays out:

  1. Initial access: The attack gets in — often through something harmless-looking like an email, document, or calendar invite.
  2. Privilege escalation: The attacker convinces the AI to ignore its guardrails.
  3. Reconnaissance: The AI is used to figure out what systems and data it has access to.
  4. Persistence: The malicious instructions are embedded so they keep running over time.
  5. Command and control: The attacker maintains ongoing influence over the AI’s behavior.
  6. Lateral movement: The attack spreads across systems — often using email or collaboration tools.

The thread running through all of this: the more the AI can do, the more important it is to shape what it’s allowed to do.

This Isn’t Theoretical

These aren’t just thought experiments. Researchers have already demonstrated real-world versions of these attacks in controlled settings. In one case, a prompt was hidden inside a calendar invite — the AI processed it as part of its normal workflow and took unintended actions, all without the user noticing. In another, an email-based attack caused an AI assistant to spread to other users while quietly scooping up sensitive data along the way.

Once a system like this is pointed in the wrong direction, it can move fast.

What This Means for Your Organization

If your organization uses AI tools — whether that’s Microsoft Copilot, Google Workspace AI, or anything else connected to your email and documents — it’s worth thinking about this. The same integrations that make these tools so useful also expand the attack surface. When an AI can read, write, send, and act, it becomes worth protecting deliberately.

For nonprofits and foundations, that calculus matters a little more. Donor information, internal communications, and program data are exactly what bad actors tend to be interested in.

Where to Focus Defensively

The goal isn’t to avoid AI — it’s to use it thoughtfully.

That means:

  • Limiting what AI tools are allowed to do — especially irreversible actions
  • Controlling what systems and data they can access
  • Monitoring what gets stored in long-term memory
  • Being cautious about granting communication privileges like sending email
  • Applying least-privilege principles across the board

A useful mental model: design as if the system could be tricked — because in some scenarios, it can be.

The Bigger Picture

None of this is entirely new territory. Cybersecurity has long taught us that keeping bad actors out is only half the job — the other half is limiting what they can do if they get in. That same lesson applies here. The question with AI isn’t really “can someone mess with it?” It’s “what happens next?” And with AI systems, “next” can happen surprisingly quickly.

 

 

If you’re exploring AI tools and still working out how to think about security, you’re in good company. At CGNET, we help mission-driven organizations bring AI on board in a way that’s practical and responsible — with the right guardrails built in from the start. Happy to talk through your approach whenever the time is right. Visit cgnet.com to learn more, or drop me a line at g.*******@***et.com.

 

You May Also Like…

You May Also Like…

0 Comments

Submit a Comment

Your email address will not be published. Required fields are marked *

Translate »
Share This
Subscribe
CGNET
Privacy Overview

This website uses cookies so that we can provide you with the best user experience possible. Cookie information is stored in your browser and performs functions such as recognising you when you return to our website and helping our team to understand which sections of the website you find most interesting and useful.