Agentic AI has graduated from a research curiosity to a working member of the red team. Unlike the prompt-and-response chatbots most people are familiar with, agentic systems can plan, act, observe results, and adapt, all with minimal human prodding. In offensive security, that means an AI agent can sit on an endpoint, probe it continuously, and shift tactics in response to defenses without waiting for an operator to type the next command, compressing a workflow that used to take a skilled human days into something measured in minutes.
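What that plan-act-observe-adapt loop looks like in code can be made concrete with a minimal sketch. Everything here is hypothetical: the `Agent` class, the action names, and the stubbed execution are illustrations of the loop’s shape, not the API of any real framework or tool.

```python
# Minimal sketch of an agentic plan-act-observe-adapt loop.
# All names (Agent, plan, act, the action strings) are hypothetical
# illustrations, not any real framework's API.

from dataclasses import dataclass, field

@dataclass
class Agent:
    goal: str
    history: list = field(default_factory=list)

    def plan(self) -> str:
        """Pick the next action from the goal and what has been observed so far."""
        if not self.history:
            return "recon"                          # start broad
        last_action, last_result = self.history[-1]
        if "blocked" in last_result:
            return f"evade_after_{last_action}"     # adapt when a defense triggers
        return "exploit"                            # otherwise press forward

    def act(self, action: str) -> str:
        """Execute the action against the target; stubbed out in this sketch."""
        return f"result_of_{action}"

    def run(self, max_steps: int = 10) -> None:
        for _ in range(max_steps):                  # iterate without human prodding
            action = self.plan()
            result = self.act(action)
            self.history.append((action, result))  # observe and remember
            if "goal_reached" in result:            # stop condition, stubbed here
                break

Agent(goal="demo").run()
```

The point of the sketch is the control flow: the human sets the goal once, and the loop of planning, acting, and re-planning runs on its own.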

The numbers behind this shift are sobering. Microsoft’s 2025 Digital Defense Report found that AI-generated phishing emails achieved roughly a 54 percent click-through rate versus 12 percent for traditional phishing, about four and a half times the success rate. Pair that with deepfake voice and video being used to impersonate executives and helpdesk staff, and the social-engineering surface has effectively been redrawn. Identity is the new perimeter, and that perimeter is being battered by attackers who never sleep, never get bored, and never forget a previously successful pretext.

Defenders are responding by leaning on the same technology, but the operational picture is messier. Security teams are already drowning in alerts, and bolting AI on top of a noisy stack mostly produces faster noise. The organizations getting traction are the ones pairing agentic detection with disciplined fundamentals — accurate asset inventories, ruthless removal of unused systems, identity-centric controls like passkeys and adaptive MFA, and continuous validation rather than annual point-in-time audits. The advanced tooling only pays off after the basics are in place.
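To make “adaptive MFA” concrete, here is a minimal sketch of a risk-scored authentication decision. The signal names, weights, and thresholds are illustrative assumptions, not any vendor’s policy language; the idea is simply that the authentication requirement scales with the riskiness of the login.

```python
# Hypothetical sketch of an adaptive MFA decision: step up authentication
# when a login's risk signals exceed a threshold. Signal names, weights,
# and thresholds are illustrative assumptions, not any vendor's policy.

def mfa_requirement(signals: dict) -> str:
    """Return the authentication requirement for a login attempt."""
    score = 0
    if signals.get("new_device"):
        score += 2
    if signals.get("impossible_travel"):
        score += 3
    if signals.get("privileged_account"):
        score += 2
    if signals.get("off_hours"):
        score += 1

    if score >= 4:
        return "deny_and_alert"     # too risky to allow even with MFA
    if score >= 2:
        return "require_passkey"    # phishing-resistant step-up
    return "allow"                  # low risk: no added friction

# Example: a privileged account on a new device, logging in off hours.
print(mfa_requirement({"new_device": True,
                       "privileged_account": True,
                       "off_hours": True}))
# -> "deny_and_alert"
```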

Governance is becoming the other half of the conversation. Regulators, insurers, and internal auditors are no longer satisfied with “the model said so” as a justification for a security decision. Algorithmic transparency — the ability for an AI tool to explain why it flagged a behavior or approved an access request — is moving from a nice-to-have toward a procurement requirement. Expect contracts and frameworks in the coming year to demand evidence that AI security tools are testable, auditable, and free of the kind of black-box reasoning that becomes a liability when something goes wrong in court or in a board review.
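What “testable and auditable” might look like at the code level is a decision that ships with its own evidence. The sketch below is a hypothetical structure, not any real product’s API: the verdict is returned alongside the triggering reasons, the inputs, and the model version, so an auditor can replay the call later.

```python
# Hypothetical sketch of an auditable AI security decision: the verdict is
# returned together with the evidence and model version that produced it.
# Structure and field names are illustrative assumptions.

import json
from datetime import datetime, timezone

def flag_behavior(event: dict, model_version: str = "detector-v3") -> dict:
    reasons = []
    if event.get("process") == "powershell.exe" and event.get("encoded_command"):
        reasons.append("encoded PowerShell command line")
    if event.get("parent") == "winword.exe":
        reasons.append("spawned by an Office process")

    return {
        "verdict": "flag" if reasons else "allow",
        "reasons": reasons,                             # human-readable rationale
        "model_version": model_version,                 # pin the exact model used
        "event": event,                                 # inputs, retained for replay
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

# The full record can go straight into an audit log:
decision = flag_behavior({"process": "powershell.exe",
                          "encoded_command": True,
                          "parent": "winword.exe"})
print(json.dumps(decision, indent=2))
```

A record like this is what turns “the model said so” into something a regulator, insurer, or board can actually examine.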

None of this means human operators are obsolete. Scoping an engagement, weighing business context, recognizing when something looks technically legal but is ethically off — these still require judgment that no agent has demonstrated reliably. The practitioners who thrive in 2026 will be the ones who treat AI as a force multiplier on their existing tradecraft: automating the repetitive parts of recon, exploitation, and reporting, while reserving their own attention for the work that actually requires a brain. The arms race is real, but it’s still being run by people. The agents just make them faster.