Red Teaming AI: 50 Years of Failure, But This Time, For Sure!

No ratings

Presented at RSAC 2025 by

After 50 years of pen testing, it’s still hard to build secure systems. "Penetrate + patch" never worked. Shifting left, including threat modeling, is finally getting making headway. Securing LLMs is both challenging and painful because code and data are intermingled. This session will discuss how to deliver AI that’s secure by design, via threat modeling and achievable strategies.