Security Research

Beyond the "AI Pentest": Why Risk Assessment Is the Real End Game

In the consulting world, clients love the word "Pentest." It's familiar. It's a known line item. They know what the report will look like, how long it takes, and that it satisfies a "Security Testing" checkbox for auditors.

But when a client asks for an "AI Pentest" or "LLM Red Teaming," they are often asking for the wrong thing.

Unlike traditional software—where a SQL Injection vulnerability is a deterministic flaw that must be fixed—AI vulnerabilities are probabilistic, inherent, and often not patchable. You can always trick a model if you try hard enough. Spending 40 hours proving that a model occasionally makes mistakes is a costly dead end.

As security professionals, our job is to pivot the conversation from "Can you break this?" to "Does it matter if this breaks?"

This guide outlines an AI Risk Assessment methodology that delivers far more value than a traditional pentest because it focuses on Threat Modeling, Governance, Compliance, and Context.

Cloud Red Team TTPs: Operationalizing AWS Console Credential Extraction

For years, one of my go-to TTPs during red team engagements has been bridging the gap between AWS Console access and the CLI. We've all been there: you land on a compromised workstation, or you're stuck in a restrictive VDI environment. You have access to the AWS Console via the browser, but you're handcuffed. You can't run scripts, you can't use tools like Pacu, and you can't mass-enumerate resources efficiently.

I knew the credentials had to be somewhere. AWS doesn't use magic; the browser has to authenticate API calls somehow.
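To make the end state concrete: once a temporary STS credential triple (access key ID, secret access key, session token) has been recovered from the console session, it drops straight into standard tooling. The sketch below is a minimal, hypothetical illustration using boto3; the credential values are placeholders, not real extraction output, and the extraction step itself is covered in the full post.

```python
import boto3

# Placeholder values: substitute the temporary credentials recovered
# from the browser session (an STS triple: key ID, secret, session token).
session = boto3.Session(
    aws_access_key_id="ASIA...EXAMPLE",
    aws_secret_access_key="<recovered-secret-key>",
    aws_session_token="<recovered-session-token>",
)

# Confirm the credentials are live and see which principal they map to.
print(session.client("sts").get_caller_identity())

# From here, normal CLI/SDK enumeration works despite the locked-down
# workstation, e.g. listing S3 buckets:
for bucket in session.client("s3").list_buckets().get("Buckets", []):
    print(bucket["Name"])
```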