August 4-29, 2025 • London, UK • 20 Participants

The inaugural AI Security Bootcamp brought together 20 researchers and engineers for four intensive weeks of training in security fundamentals, AI infrastructure security, and AI-specific threats. The program culminated in a week of capstone projects where participants explored cutting-edge security research.
A security layer for the Model Context Protocol (MCP) that provides tool inspection, permission controls, sanitization against prompt injection, and rate limiting.
Data poisoning attacks on Support Vector Machines used in legal document discovery for Technology Assisted Review.
Catching misbehaving coding agents red-handed through local deployment of EDR to detect and mitigate suspicious command execution attempts.
Demonstrates weight extraction from Taylor series-obfuscated neural networks, undermining model exfiltration protections.
Implementation of capture-the-flag evaluations using the inspect evals framework for AI security testing.
Policy framework proposing autonomy caps for AI agents in government systems with practical usage guidelines.
Proof-of-concept pickle file payload injection and MD5 hash collision attacks on ML model distribution.
Adversarial attacks on AI monitoring systems via spurious correlations and backdoor trigger coordination.
Replication of watermark stealing attacks on machine learning models with interactive Streamlit demo.
Paper replication exploring attack scenarios in outsourced training and transfer learning.
Malware detection using static binary analysis features to identify malicious software.
Using linear probes to detect and analyze adversarial attacks on neural networks.
Adversarial attacks on OpenAI's Whisper automatic speech recognition model.
AI security research exploring worm-like behavior in AI systems.