A 7-day intensive program for security professionals shaping how we secure emerging AI systems.
This iteration of the AI Security Bootcamp explores the rapidly evolving threat landscape of frontier AI systems, equipping security professionals with the knowledge and hands-on skills to defend against current and emerging risks.
As AI systems become more capable and integrated into critical infrastructure, new attack surfaces and failure modes are emerging that traditional security training doesn't cover. This program is designed to fill that gap, providing an intensive, practitioner-focused curriculum that prepares participants to engage with the most pressing AI security challenges we see today, and grapple with how the risks will evolve in the future.
Participants will complete pre-work before the program to establish baseline ML fundamentals, followed by an immersive week delivered through demos, lectures, guest speakers, and hands-on red/blue exercises that build skills across the modern AI system stack.
Security professionals ready to secure frontier AI systems at every layer - from user applications to model APIs for developers, and from the infrastructure hosting the models to governance frameworks for emerging threats.
We want our cohort to span a wide range of expertise - whether your background is offensive security, incident response, threat intelligence, infrastructure, or application security, the AI-specific threat models and techniques we cover will push what you already know into new territory.
5+ years of hands-on security experience. No prior AI or ML background needed - the pre-work will cover what's necessary.
Selection prioritizes candidates interested in frontier AI risk, high-consequence failure modes, or work involving sophisticated threat actors.
Experience with deep learning frameworks (e.g., PyTorch) is a plus but not required. We want to make this accessible to security professionals from a variety of backgrounds, so we provide comprehensive pre-work to get everyone up to speed on the AI fundamentals needed to engage with the curriculum.
AISB Singapore runs April 20–26, overlapping with Black Hat Asia (April 21–24) and ending just before DEF CON (April 28–30). If you're already planning to attend DEF CON, the bootcamp fits naturally into the same trip.
Accommodation is included. Limited travel support is available - note this in your application.
Selection is competitive - we accept 10 to 12 participants. The cost is your time: full attendance for all seven days and pre-reading completed before arrival.
Program Lead
Research engineer at Conjecture. Created AISB to bridge AI safety and security, leads curriculum design and program direction.
Security Lead
Head of Cyber Security at Heron AI Security Initiative. 6+ years of security research specializing in IoT, robotics, malware, and AI security.
Bootcamp Partner

Local execution and institutional linkage supported by the Singapore AI Safety Hub.
Applications close 15 March 2026. We review on a rolling basis - early applications are encouraged. Let us know in your application if you'd like a decision sooner.
Reach out to pranav@aisb.dev to express interest or ask questions about the program.
Apply Now