Security Research Blog

Welcome to the Security Research Blog - a collection of in-depth technical posts covering cybersecurity research, offensive security techniques, defensive strategies, and emerging threats in cloud and AI/ML security.

What You'll Find Here

  • Vulnerability Research: Deep dives into security vulnerabilities, exploit development, and responsible disclosure
  • AI/ML Security: Research on adversarial machine learning, prompt injection, and AI system security
  • Cloud Security: Cloud platform security analysis, misconfigurations, and exploitation techniques
  • Red Team TTPs: Tactics, techniques, and procedures for offensive security operations
  • Tool Development: Custom security tools, automation scripts, and methodology guides
  • Threat Intelligence: Analysis of threat actors, campaigns, and evolving attack patterns

Recent Posts

Beyond the "AI Pentest": Why Risk Assessment Is the Real End Game

In the consulting world, clients love the word "Pentest." It's familiar. It's a known line item. They know what the report will look like, how long it takes, and that it satisfies a "Security Testing" checkbox for auditors.

But when a client asks for an "AI Pentest" or "LLM Red Teaming," they are often asking for the wrong thing.

Unlike traditional software—where a SQL Injection vulnerability is a deterministic flaw that must be fixed—AI vulnerabilities are probabilistic, inherent, and often not patchable. You can always trick a model if you try hard enough. Spending 40 hours proving that a model occasionally makes mistakes is a costly dead end.

As security professionals, our job is to pivot the conversation from: "Can you break this?" → "Does it matter if this breaks?"

This guide outlines an AI Risk Assessment methodology that delivers far more value than a traditional pentest because it focuses on Threat Modeling, Governance, Compliance, and Context.

The Golden Ticket: Why SageMaker Presigned URLs are Your New Favorite Pivot Point

Let’s be real: usually, when we talk about cloud security, we’re talking about S3 buckets left open to the world or over-permissive IAM roles attached to EC2 instances. But while everyone is watching the front door, the Data Science team is building a massive side entrance with Amazon SageMaker.

I’ve been deep-diving into SageMaker security assessments lately, specifically looking at how we access these environments. The verdict? SageMaker Presigned URLs are the "Golden Tickets" of the AWS ecosystem.

If you are a pentester or a Cloud Sec engineer, you need to understand how these URLs work because they are effectively bearer tokens that bypass your IdP, your MFA, and potentially your sanity.
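
For readers who haven't touched this API yet, here is a minimal sketch of how one of these URLs gets minted with boto3; the domain ID and user profile name are placeholder values, and the point is simply that whoever holds the resulting link gets the Studio session, full stop:

    import boto3

    sagemaker = boto3.client("sagemaker")

    # Mint a presigned URL for a SageMaker Studio user profile.
    # DomainId and UserProfileId below are placeholders.
    response = sagemaker.create_presigned_domain_url(
        DomainId="d-example123",
        UserProfileId="data-scientist-1",
        SessionExpirationDurationInSeconds=43200,  # how long the Studio session lives
        ExpiresInSeconds=300,                      # how long the URL itself stays valid
    )

    # Anyone who opens this link inherits the user's Studio session,
    # with no IdP redirect and no MFA prompt along the way.
    print(response["AuthorizedUrl"])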

Cloud Red Team TTPs: Operationalizing AWS Console Credential Extraction

For years, one of my go-to TTPs during red team engagements has been bridging the gap between AWS Console access and the CLI. We've all been there: you land on a compromised workstation, or you're stuck in a restrictive VDI environment. You have access to the AWS Console via the browser, but you're handcuffed. You can't run scripts, you can't use tools like Pacu, and you can't mass-enumerate resources efficiently.

I knew the credentials had to be somewhere. AWS doesn't use magic; the browser has to authenticate API calls somehow.
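
As a rough sketch of where that leads: once you recover the temporary access key, secret key, and session token backing a console session, you can hand them straight to the SDK or CLI and confirm whose identity you now hold. The values below are placeholders, not output from any real engagement:

    import boto3

    # Placeholder values standing in for temporary credentials recovered
    # from an authenticated AWS Console session in the browser.
    session = boto3.Session(
        aws_access_key_id="ASIAEXAMPLE",
        aws_secret_access_key="example-secret-key",
        aws_session_token="example-session-token",
    )

    # Confirm the identity behind the recovered credentials.
    print(session.client("sts").get_caller_identity()["Arn"])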