Expert SECURITY Analysts Reveal Shocking AI Risks
Security Unfiltered Podcast · October 27, 2025 · 01:05:05

We trade hype for hard edges as we dig into how AI actually augments red teams, why task design matters more than model brand, and how zero trust applies to model outputs. We also walk through Garak’s probes for prompted XSS and template injection, plus practical ways to map findings to real controls.

• MCP framed as conventional containerized API with familiar risks
• Offensive AI Con format, focus, and key trends in automation
• LLMs for code triage, bug patterning, and limits on complex targets
• Reinforcement learning environments as scalable, shareable training
• Hallucination risks, verification loops, and provenance checks
• Garak architecture, probes for XSS and Jinja template injection
• CWE mapping for actionable remediation and controls
• Jailbreak limits, alignment realities, and output safeguards
• Career resilience: depth in niches, broad literacy, measured skepticism

Find me by searching “Eric Galinkin.” Garak: github.com/NVIDIA/garak (Discord link in the repo). It’s also on PyPI: pip install garak. Our docs are okay and getting better.
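
If you want to kick the tires after installing, garak runs from the command line. A minimal sketch (the model below is only an illustration, and probe names can shift between releases, so list them first):

garak --list_probes
garak --model_type huggingface --model_name gpt2 --probes xss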

Chapters

00:00 Introduction and Context Setting
02:43 Exploring Offensive AI Conference
05:16 AI in Red Teaming and Security
08:10 The Role of AI Models in Security
10:41 Challenges in AI and Security Integration
13:29 Garak: A New Vulnerability Scanner
16:00 Jailbreaking and Security Implications
18:44 Future of AI in Security and Job Market
21:20 Skills for the Future in Cybersecurity

Affiliates
➡️ OffGrid Faraday Bags: https://offgrid.co/?ref=gabzvajh
➡️ OffGrid Coupon Code: JOE

➡️ Unplugged Phone: https://unplugged.com/
Unplugged's UP Phone - The performance you expect, with the privacy you deserve. Meet the alternative. Use code UNFILTERED at checkout.

*See terms and conditions at the affiliate webpages. Offers are subject to change. These are affiliate/paid promotions.

Tesla Referral Code: https://ts.la/joseph675128
ethical hacking, hacking tools, AI, LLM, security