• LLMs present unique security challenges beyond prompt injection or generating harmful content
• Traditional security models focusing on component-based permissions don't work with AI systems
• "Source-sink chains" are key vulnerability points where attackers can manipulate AI behavior
• Real-world examples include data exfiltration through markdown image rendering in AI interfaces
• Security "guardrails" are insufficient first-order controls for protecting AI systems
• The education gap between security professionals and actual AI threats is substantial
• Organizations must shift from component-based security to data flow security when implementing AI
• Development teams need to ensure high-trust AI systems only operate with trusted data
Watch for NCC Group's upcoming release of David's Black Hat presentation on new security fundamentals for AI and ML systems. Connect with David on LinkedIn (David Brauchler III) or visit the NCC Group research blog at research.nccgroup.com.
Chapters
00:00 The Rapid Evolution of AI and Machine Learning
08:47 David's Journey into Cybersecurity
19:19 The Challenges of Cybersecurity and Academia
23:57 The Challenge of Technology Choices
24:40 Inside Government Agencies: Capabilities and Limitations
28:23 Exploring AI Security: Understanding LLMs
33:27 Vulnerabilities in AI: The Importance of Data Flow
47:25 Future of AI Security: Evolving Paradigms
Afilliates
➡️ OffGrid Faraday Bags: https://offgrid.co/?ref=gabzvajh
➡️ OffGrid Coupon Code: JOE
➡️ Unplugged Phone: https://unplugged.com/
Unplugged's UP Phone - The performance you expect, with the privacy you deserve. Meet the alternative. Use Code UNFILTERED at checkout
*See terms and conditions at affiliated webpages. Offers are subject to change. These are affiliated/paid promotions.
