Agentic Era Cyber Threats: AI Attack Guide

Cyber security Ngelaw todayApril 18, 2026

Background
share close

Let that sink in.

AI didn’t just make sophisticated threat actors more dangerous. It elevated
the less skilled ones too. It compressed the gap between intent and execution. It
automated reconnaissance, supercharged phishing campaigns, and shortened the
window from initial access to full impact.

But here’s the part that keeps me up at night: adversaries are now
targeting the very AI systems we’ve embedded into our
enterprises
. Development pipelines. SaaS platforms. Operational
workflows. The tools we built to accelerate innovation have become part of the
attack surface.

Attackers injected malicious prompts into legitimate AI tools to generate
unauthorised commands. Read that again. We’re not just defending against
AI-powered attacks — we’re defending AI itself.

In this post I’ll unpack what the CrowdStrike 2026 Global Threat Report
tells us about the new threat landscape, what it means for your security
programme, and how boards and CISOs in South Africa and across Africa need to
respond before AI-enabled cyberattacks become the business-ending event of 2026.

The Numbers That Should Reshape Your 2026 Security Strategy

CrowdStrike called 2025 “the year of the evasive adversary”. The
headline figures from the 2026 Global Threat Report are not incremental
— they describe a structural shift in how attacks are conducted.

None of these numbers exists in isolation. Read together, they describe an
adversary that is faster, stealthier, better resourced, and increasingly
machine-assisted at every stage of the kill chain — and a defender
population that is still, in most cases, operating at 2022 speed.

How AI Changed the Attacker — in Concrete Terms

The most useful way to think about the 89% surge is not as an abstract statistic,
but as evidence of AI being weaponised at specific points in the attack lifecycle.
CrowdStrike documents it across four distinct areas.

1. AI-accelerated reconnaissance and targeting

Russia-nexus FANCY BEAR deployed LLM-enabled malware — reported as
LAMEHUG — to automate reconnaissance and document collection at a
scale that would have been infeasible even 18 months ago. Tasks that once
required a skilled human operator are now being delegated to models that work
tirelessly and in parallel.

2. AI-generated tooling for less-skilled actors

eCrime actor PUNK SPIDER used AI-generated scripts to accelerate credential
dumping and erase forensic evidence. This is the democratisation problem in
plain English: attackers who previously could not write their own custom tooling
can now generate it on demand, in minutes. The barrier to entry for serious
intrusion work has fallen through the floor.

3. AI-fabricated identities and insider operations

DPRK-nexus FAMOUS CHOLLIMA leveraged AI-generated personas to scale insider
operations — creating fake identities convincing enough to be hired as
contractors or employees at real organisations, and then operating from inside
the trust boundary. A 109% increase in their activity, combined with
AI-generated fake CVs, photos, and interview answers, should force every HR and
security team to rethink their hiring verification.

4. Prompt injection against enterprise AI systems

This is the one every CISO should read twice. Attackers are injecting malicious
prompts into legitimate generative AI tools at more than 90 organisations —
using the organisation’s own AI assistants, copilots, and agentic workflows
to generate unauthorised commands, exfiltrate data, and pivot further into the
environment. Your AI isn’t just a productivity tool any more. It is a
new attack surface with its own TTPs.

Speed Is the New Attack Surface

Perhaps the most sobering statistic in the report is the eCrime breakout time
— the window between initial access and lateral movement onto another
system. In 2025 it dropped to 29 minutes. The fastest observed breakout was 27
seconds. In one intrusion, data exfiltration began within four minutes of
initial access.

Think about what that means operationally:

  • Any manual-only incident response process is already too slow.
  • Alert triage that relies on a human reading a ticket and deciding what to do has no chance of containing a modern intrusion.
  • The only viable response model is continuous detection plus automated containment — with humans making high-level decisions, not low-level clicks.

If your Mean Time To Detect (MTTD) or Mean Time To Respond (MTTR) is still
measured in hours or days, you are already outside the adversary’s
operating envelope.

Seven Questions Every Board Should Ask the CISO in 2026

In the agentic era, cybersecurity isn’t just a business function —
it’s the foundational infrastructure required to protect an AI-driven
enterprise. Here are the questions that should be on every board pack this year.

  1. Do we have an inventory of every AI system in use in our organisation — including shadow AI, SaaS features with embedded AI, and developer-adopted copilots?
  2. Have we assessed each of those AI systems against prompt injection, data leakage, and model abuse risks? Is there a documented owner for each?
  3. What is our current MTTD and MTTR, and how does it compare to a 29-minute breakout time?
  4. What automated containment capabilities do we have? Can we isolate an endpoint, disable an identity, or revoke a token without waiting for a human?
  5. Do our operator and third-party contracts reflect AI-specific risks — prompt injection, training data exposure, model poisoning?
  6. How are we hardening our hiring process against AI-generated personas and deepfake interviews?
  7. What is our incident response plan for an AI-specific incident — a compromised copilot, a poisoned model, a leaked prompt history?

If the answer to three or more of these is “we’re working on it”,
you have work to do — and the timeline is not on your side.

A Practical Response Plan for the Agentic Era

You cannot match an 89% attacker acceleration with a 10% increase in security
spend. What you can do is re-architect your programme around the assumption that
adversaries are now machine-assisted, and that your own AI is part of the attack
surface.

1. Govern your AI before someone else does

Establish an AI governance framework — ISO/IEC 42001
is the obvious starting point — covering acceptable use, data classification,
model selection, and third-party AI risk. If you have no AI acceptable use policy
in 2026, that is itself a finding.

2. Treat your AI systems like crown jewels

Apply the same discipline to your enterprise AI that you apply to your finance
systems: identity-bound access, least privilege, comprehensive logging, input
and output monitoring, red-team testing for prompt injection, and incident
response playbooks specific to AI misuse.

3. Close the identity gap

Adversaries are “logging in, not breaking in”. Phishing-resistant MFA,
continuous identity risk scoring, and aggressive deprovisioning are no longer
nice-to-haves. Identity is the front line.

4. Automate your containment

Build the muscle to respond in seconds, not hours. Endpoint isolation, token
revocation, and account suspension should be executable by your platform —
with human review, not human execution.

5. Harden your supply chain

Supply chain compromise was a defining tactic of 2025. Trojanised software,
malicious npm packages, and compromised upstream providers gave adversaries
broad access. Your third-party risk programme needs to extend into your build
pipeline, your package registries, and your AI model sources.

6. Invest in the humans

AI accelerates attackers, but humans still make the strategic calls on defence.
Your team needs modern skills — ISO 27001:2022, ISO 42001, privacy
regulations, cloud security, and AI risk — not a 2019 playbook.

What This Means for South African and African Organisations

There is a persistent myth that the African threat landscape lags behind. It
doesn’t. South African organisations have been experiencing a sustained
wave of breaches, and the average cost of a data breach in South Africa reached
around R44.1 million in 2025 — rising to roughly R70.2 million in the
financial sector.

Three realities matter specifically for African enterprises in 2026:

  • POPIA enforcement has teeth. The Information Regulator is issuing multi-million rand fines, and an AI-related breach will be assessed through the same Section 19 lens as any other security compromise.
  • Skills concentration risk is real. The gap between the number of organisations that need mature AI-era security and the number of practitioners who can deliver it is wider in this region than in most. That is both a risk and an opportunity.
  • AI adoption is already outpacing AI governance. Most organisations we advise have already deployed generative AI tools at scale, but have not yet implemented the governance, training, or technical controls to match.

How Naveg Technologies Helps You Respond

At Naveg Technologies, we help organisations translate reports like
CrowdStrike’s into concrete, implementable programmes. For the agentic era,
that means:

The Question Isn't Whether You Use AI

In the agentic era, the question isn’t whether your organisation uses AI.
It’s whether your security posture has kept pace with your AI adoption.
For most organisations, the honest answer is “not yet”. The good news
is that the same AI that is accelerating attackers can, with the right
investment, accelerate defenders too — and the organisations that move
first in 2026 will set the competitive baseline for the next decade.

Outpacing AI-enabled cyberattacks isn’t a one-time project. It is a
continuous capability that blends governance, technology, process, and people.
We would like to help you build it.

Frequently Asked Questions