By Vitalis Nkwenti, Risk & Cybersecurity Partner, NavegTech |
Published April 2026 | ~8 min read
We’ve entered the agentic era — and so have the attackers. The
CrowdStrike 2026 Global Threat Report paints a picture every security leader
needs to sit with: AI-enabled adversaries increased attacks by
89% year-over-year in 2025.
Let that sink in.
AI didn’t just make sophisticated threat actors more dangerous. It elevated
the less skilled ones too. It compressed the gap between intent and execution. It
automated reconnaissance, supercharged phishing campaigns, and shortened the
window from initial access to full impact.
But here’s the part that keeps me up at night: adversaries are now
targeting the very AI systems we’ve embedded into our
enterprises. Development pipelines. SaaS platforms. Operational
workflows. The tools we built to accelerate innovation have become part of the
attack surface.
Attackers injected malicious prompts into legitimate AI tools to generate
unauthorised commands. Read that again. We’re not just defending against
AI-powered attacks — we’re defending AI itself.
In this post I’ll unpack what the CrowdStrike 2026 Global Threat Report
tells us about the new threat landscape, what it means for your security
programme, and how boards and CISOs in South Africa and across Africa need to
respond before AI-enabled cyberattacks become the business-ending event of 2026.
The Numbers That Should Reshape Your 2026 Security Strategy
CrowdStrike called 2025 “the year of the evasive adversary”. The
headline figures from the 2026 Global Threat Report are not incremental
— they describe a structural shift in how attacks are conducted.
- 89% increase in attacks by AI-enabled adversaries
- 29 min average eCrime breakout time (65% faster than 2024)
- 27 sec fastest recorded breakout time
- 82% of 2025 detections were malware-free
- 90+ organisations had AI tools exploited via prompt injection
- 550% increase in ChatGPT mentions in criminal forums
- 42% rise in zero-days exploited before public disclosure
- $1.46B single-theft cryptocurrency heist (the largest on record)
None of these numbers exists in isolation. Read together, they describe an
adversary that is faster, stealthier, better resourced, and increasingly
machine-assisted at every stage of the kill chain — and a defender
population that is still, in most cases, operating at 2022 speed.
How AI Changed the Attacker — in Concrete Terms
The most useful way to think about the 89% surge is not as an abstract statistic,
but as evidence of AI being weaponised at specific points in the attack lifecycle.
CrowdStrike documents it across four distinct areas.
1. AI-accelerated reconnaissance and targeting
Russia-nexus FANCY BEAR deployed LLM-enabled malware — reported as
LAMEHUG — to automate reconnaissance and document collection at a
scale that would have been infeasible even 18 months ago. Tasks that once
required a skilled human operator are now being delegated to models that work
tirelessly and in parallel.
2. AI-generated tooling for less-skilled actors
eCrime actor PUNK SPIDER used AI-generated scripts to accelerate credential
dumping and erase forensic evidence. This is the democratisation problem in
plain English: attackers who previously could not write their own custom tooling
can now generate it on demand, in minutes. The barrier to entry for serious
intrusion work has fallen through the floor.
3. AI-fabricated identities and insider operations
DPRK-nexus FAMOUS CHOLLIMA leveraged AI-generated personas to scale insider
operations — creating fake identities convincing enough to be hired as
contractors or employees at real organisations, and then operating from inside
the trust boundary. A 109% increase in their activity, combined with
AI-generated fake CVs, photos, and interview answers, should force every HR and
security team to rethink their hiring verification.
4. Prompt injection against enterprise AI systems
This is the one every CISO should read twice. Attackers are injecting malicious
prompts into legitimate generative AI tools at more than 90 organisations —
using the organisation’s own AI assistants, copilots, and agentic workflows
to generate unauthorised commands, exfiltrate data, and pivot further into the
environment. Your AI isn’t just a productivity tool any more. It is a
new attack surface with its own TTPs.
Why this is different: traditional security telemetry was not
designed to detect a user asking a chatbot to do something it shouldn’t.
Prompt injection often leaves no malware signature, no unusual network flow, and
no conventional log anomaly. The attack is semantic, not technical.
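The defensive implication is that checks have to operate on meaning and policy, not signatures. A minimal sketch of an output-side guardrail for a hypothetical copilot that proposes tool calls; `ALLOWED_TOOLS`, the regex tripwires, and `action_allowed` are all illustrative assumptions, not a real guardrail product:

```python
import re

# Validate a model-proposed action against explicit policy *after* generation,
# because the malicious instruction arrives as ordinary text and produces no
# malware signature for conventional telemetry to catch.
ALLOWED_TOOLS = {"search_docs", "summarise"}   # tools this copilot may call
DENY_PATTERNS = [                              # crude semantic tripwires
    re.compile(r"ignore (all|previous) instructions", re.I),
    re.compile(r"(exfiltrate|curl|wget)\s+https?://", re.I),
]

def action_allowed(tool: str, arguments: str) -> bool:
    """Return True only if the proposed tool call passes the policy."""
    if tool not in ALLOWED_TOOLS:
        return False
    return not any(p.search(arguments) for p in DENY_PATTERNS)
```

Real deployments layer many such controls (allowlists, input sanitisation, output review, LLM-based classifiers); the point of the sketch is that the check inspects intent, not binaries.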
Speed Is the New Attack Surface
Perhaps the most sobering statistic in the report is the eCrime breakout time
— the window between initial access and lateral movement onto another
system. In 2025 it dropped to 29 minutes. The fastest observed breakout was 27
seconds. In one intrusion, data exfiltration began within four minutes of
initial access.
Think about what that means operationally:
- Any manual-only incident response process is already too slow.
- Alert triage that relies on a human reading a ticket and deciding what to do has no chance of containing a modern intrusion.
- The only viable response model is continuous detection plus automated containment — with humans making high-level decisions, not low-level clicks.
If your Mean Time To Detect (MTTD) or Mean Time To Respond (MTTR) is still
measured in hours or days, you are already outside the adversary’s
operating envelope.
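The arithmetic is worth making explicit. A back-of-envelope check, with all stage timings as illustrative assumptions:

```python
# Does the response pipeline beat the adversary's breakout window?
BREAKOUT_MINUTES = 29   # average eCrime breakout time reported for 2025

def beats_breakout(detect_min: float, triage_min: float, contain_min: float) -> bool:
    """True if detection + triage + containment completes inside the window."""
    return (detect_min + triage_min + contain_min) < BREAKOUT_MINUTES

# A manual pipeline (say 15 min to detect, 45 min for a human to triage a
# ticket, 20 min to contain) totals 80 minutes: far outside the envelope.
# An automated pipeline (5 min detect, 2 min automated triage, 1 min
# machine-executed containment) is comfortably inside it.
```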
Seven Questions Every Board Should Ask the CISO in 2026
In the agentic era, cybersecurity isn’t just a business function —
it’s the foundational infrastructure required to protect an AI-driven
enterprise. Here are the questions that should be on every board pack this year.
- Do we have an inventory of every AI system in use in our organisation — including shadow AI, SaaS features with embedded AI, and developer-adopted copilots?
- Have we assessed each of those AI systems against prompt injection, data leakage, and model abuse risks? Is there a documented owner for each?
- What is our current MTTD and MTTR, and how does it compare to a 29-minute breakout time?
- What automated containment capabilities do we have? Can we isolate an endpoint, disable an identity, or revoke a token without waiting for a human?
- Do our vendor and third-party contracts reflect AI-specific risks — prompt injection, training data exposure, model poisoning?
- How are we hardening our hiring process against AI-generated personas and deepfake interviews?
- What is our incident response plan for an AI-specific incident — a compromised copilot, a poisoned model, a leaked prompt history?
If the answer to three or more of these is “we’re working on it”,
you have work to do — and the timeline is not on your side.
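The inventory the first two questions ask for does not need a tool to get started. A minimal sketch of what one record might hold, with field names as illustrative assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One row of a minimal AI inventory; field names are illustrative."""
    name: str
    owner: str                # the documented, accountable owner
    vendor: str
    shadow_it: bool = False   # discovered in use, never formally procured
    risks_assessed: list = field(default_factory=list)  # e.g. "prompt_injection"

def unassessed(inventory, risk: str):
    """Names of systems not yet assessed for a given risk."""
    return [s.name for s in inventory if risk not in s.risks_assessed]
```

Even a spreadsheet with these columns answers questions one and two; the gap list from `unassessed` is what goes in the board pack.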
A Practical Response Plan for the Agentic Era
You cannot match an 89% attacker acceleration with a 10% increase in security
spend. What you can do is re-architect your programme around the assumption that
adversaries are now machine-assisted, and that your own AI is part of the attack
surface.
1. Govern your AI before someone else does
Establish an AI governance framework — ISO/IEC 42001
is the obvious starting point — covering acceptable use, data classification,
model selection, and third-party AI risk. If you have no AI acceptable use policy
in 2026, that is itself a finding.
2. Treat your AI systems like crown jewels
Apply the same discipline to your enterprise AI that you apply to your finance
systems: identity-bound access, least privilege, comprehensive logging, input
and output monitoring, red-team testing for prompt injection, and incident
response playbooks specific to AI misuse.
3. Close the identity gap
Adversaries are “logging in, not breaking in”. Phishing-resistant MFA,
continuous identity risk scoring, and aggressive deprovisioning are no longer
nice-to-haves. Identity is the front line.
4. Automate your containment
Build the muscle to respond in seconds, not hours. Endpoint isolation, token
revocation, and account suspension should be executable by your platform —
with human review, not human execution.
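The "human review, not human execution" model can be sketched as follows; `ContainmentAction`, `contain`, and the stub EDR client are hypothetical names for illustration, not a real EDR SDK:

```python
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("containment")

@dataclass
class ContainmentAction:
    """One pre-approved automated response; kinds are illustrative."""
    kind: str        # "isolate_endpoint" | "revoke_token" | "suspend_account"
    target: str
    executed: bool = False

class StubEDR:
    """Stand-in for a real EDR/identity-platform API; records calls."""
    def __init__(self):
        self.calls = []
    def execute(self, kind: str, target: str):
        self.calls.append((kind, target))

def contain(action: ContainmentAction, client: StubEDR) -> ContainmentAction:
    """Execute immediately via the platform; a human reviews afterwards."""
    client.execute(action.kind, action.target)
    action.executed = True
    log.info("executed %s on %s (queued for analyst review)",
             action.kind, action.target)
    return action
```

The design choice is that the machine acts first against a pre-approved action catalogue, and the analyst's job shifts to reviewing and, where needed, reversing.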
5. Harden your supply chain
Supply chain compromise was a defining tactic of 2025. Trojanised software,
malicious npm packages, and compromised upstream providers gave adversaries
broad access. Your third-party risk programme needs to extend into your build
pipeline, your package registries, and your AI model sources.
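One concrete, low-cost control at the model and package end of that pipeline is to pin and verify a digest for every downloaded artefact before loading it. A minimal sketch using Python's standard hashlib (paths and digests are illustrative):

```python
import hashlib

def sha256_of(path: str) -> str:
    """Hash a downloaded artefact (package, model weights) chunk by chunk."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_artifact(path: str, expected_sha256: str) -> bool:
    """Refuse to load anything whose digest does not match the pinned value."""
    return sha256_of(path) == expected_sha256
```

Lockfiles and registry signature checks do the same job for packages; the principle is identical: no unpinned artefact enters the build.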
6. Invest in the humans
AI accelerates attackers, but humans still make the strategic calls on defence.
Your team needs modern skills — ISO 27001:2022, ISO 42001, privacy
regulations, cloud security, and AI risk — not a 2019 playbook.
What This Means for South African and African Organisations
There is a persistent myth that the African threat landscape lags behind. It
doesn’t. South African organisations have been experiencing a sustained
wave of breaches, and the average cost of a data breach in South Africa reached
around R44.1 million in 2025 — rising to roughly R70.2 million in the
financial sector.
Three realities matter specifically for African enterprises in 2026:
- POPIA enforcement has teeth. The Information Regulator is issuing multi-million rand fines, and an AI-related breach will be assessed through the same Section 19 lens as any other security compromise.
- Skills concentration risk is real. The gap between the number of organisations that need mature AI-era security and the number of practitioners who can deliver it is wider in this region than in most. That is both a risk and an opportunity.
- AI adoption is already outpacing AI governance. Most organisations we advise have already deployed generative AI tools at scale, but have not yet implemented the governance, training, or technical controls to match.
Building the Talent to Defend the Agentic Enterprise
The hardest part of the response plan above is not technology — it’s
people. The practitioners who can secure AI systems, run modern detection
engineering, and advise boards on AI risk do not appear by magic. They have to
be developed deliberately.
That is why Naveg Academy built the
Cyber Career Elite Pathway — a three-tier programme
designed for the realities of 2026 cybersecurity, not the textbooks of 2018.
Tier 1 — Foundation: Cybersecurity fundamentals, ISC2
Certified in Cybersecurity (CC), and CompTIA Security+ aligned content for
career-changers and early-career professionals.
Tier 2 — Practitioner: Hands-on GRC, ISO 27001:2022
Lead Implementer, privacy, and security operations — with AI governance
and ISO 42001 context woven throughout.
Tier 3 — Elite / Strategist: CISSP-, CISM-, and
CCSP-aligned senior content, plus board-level advisory skills for the
professionals who will brief executives on AI risk in the years ahead.
Mentoring, labs, career coaching, and placement support are built in —
because “more training” isn’t the answer on its own.
A deliberate pathway from foundations to strategic leadership is.
Explore the Cyber Career Elite Pathway →
How Naveg Technologies Helps You Respond
At Naveg Technologies, we help organisations translate reports like
CrowdStrike’s into concrete, implementable programmes. For the agentic era,
that means putting the response plan above into practice: governing your AI,
hardening identity, automating containment, securing the supply chain, and
developing the people to run it all.
The Question Isn't Whether You Use AI
In the agentic era, the question isn’t whether your organisation uses AI.
It’s whether your security posture has kept pace with your AI adoption.
For most organisations, the honest answer is “not yet”. The good news
is that the same AI that is accelerating attackers can, with the right
investment, accelerate defenders too — and the organisations that move
first in 2026 will set the competitive baseline for the next decade.
Outpacing AI-enabled cyberattacks isn’t a one-time project. It is a
continuous capability that blends governance, technology, process, and people.
We would like to help you build it.
Frequently Asked Questions
What is the CrowdStrike 2026 Global Threat Report?
It is CrowdStrike’s annual review of the global cyber threat
landscape, published on 24 February 2026 and summarising adversary behaviour
observed through 2025. The 2026 edition, based on intelligence from elite
threat hunters tracking more than 280 named adversaries, focuses on the
impact of AI on both attackers and the enterprise attack surface.
By how much did AI-enabled cyberattacks increase in 2025?
CrowdStrike observed an 89% year-over-year increase in attacks from
AI-enabled adversaries, with AI being weaponised across reconnaissance,
credential theft, evasion, and social engineering.
What is “breakout time” and why does it matter?
Breakout time is the window between initial access and lateral movement
onto another system. In 2025 the average eCrime breakout time dropped to 29
minutes — 65% faster than in 2024 — with the fastest
observed breakout at just 27 seconds. In one intrusion, data exfiltration
began within four minutes of initial access. It is the clearest single
indicator that manual-only response models are no longer viable.
What is prompt injection and why should I care?
Prompt injection is an attack where adversaries manipulate the input to a
generative AI tool so it generates unauthorised commands, leaks data, or
performs actions on their behalf. CrowdStrike observed malicious prompts
being injected into legitimate GenAI tools at more than 90 organisations in
2025, turning enterprise AI into an active attack vector.
How should my organisation respond?
Start with an AI inventory, assess each AI system for AI-specific risks,
reduce breakout exposure by automating containment, harden identity, and
build an AI governance framework aligned with ISO/IEC 42001. Then invest in
the people who can run that programme.
How does the Cyber Career Elite Pathway fit in?
The Cyber
Career Elite Pathway is Naveg Academy’s structured training
journey from cybersecurity foundations to strategic leadership, explicitly
designed for the realities of AI-era security — including ISO
27001:2022, AI governance, and senior GRC topics.