Cohort completed · April 2026
A 7-day intensive program for security professionals shaping how we secure emerging AI systems. The Singapore 2026 cohort ran April 20–26, 2026.
The Singapore iteration of AI Security Bootcamp explored the rapidly evolving threat landscape of frontier AI systems, equipping security professionals with the knowledge and hands-on skills to defend against current and emerging risks.
As AI systems become more capable and integrated into critical infrastructure, new attack surfaces and failure modes are emerging that traditional security training doesn't cover. This program filled that gap with an intensive, practitioner-focused curriculum.
Participants completed pre-work before the program to establish baseline ML fundamentals, followed by an immersive week delivered through demos, lectures, guest speakers, and hands-on red/blue exercises that built skills across the modern AI system stack.
Security professionals ready to secure frontier AI systems at every layer: from user applications and model APIs for developers to the infrastructure hosting the models and governance frameworks for emerging threats.
The Singapore cohort spanned a wide range of expertise, from offensive security, incident response, and threat intelligence to infrastructure and application security. AI-specific threat models and techniques pushed what participants already knew into new territory.
5+ years of hands-on security experience. No prior AI or ML background was needed; the pre-work covered what's necessary.
Selection prioritized candidates interested in frontier AI risk, high-consequence failure modes, or work involving sophisticated threat actors.
Experience with deep learning frameworks (e.g., PyTorch) was a plus but not required. We wanted the program to be accessible to security professionals from a variety of backgrounds, so we provided comprehensive pre-work to bring everyone up to speed on the AI fundamentals needed to engage with the curriculum.
Past participants have been affiliated with: OpenAI, Google, Meta, Apple, Microsoft, AWS, Jane Street, Stanford, Oxford, CERN and other leading institutions across research, government, and industry.
AISB Singapore ran April 20–26, 2026, overlapping with Black Hat Asia (April 21–24) and ending just before DEF CON opened (April 28–30), fitting naturally into the same trip for many participants.
Accommodation was included. Limited travel support was available for those who needed it.
Selection was competitive — the Singapore cohort accommodated 16 participants.

Program Lead
Research engineer at Conjecture. Created AISB to bridge AI safety and security; leads curriculum design and program direction across all editions.

Security Lead
Security lead at Conjecture. Designs AISB’s hands-on labs and capstone projects, drawing on 10+ years securing complex systems and ML infrastructure.

Curriculum
Research Manager at ERA, upskilling fellows in technical AI safety research. PhD from Columbia in systems and security; previously CTO and cofounder of cybersecurity insurance startup Elpha Secure.

Security Lead
Head of Cyber Security at Heron AI Security Initiative. 6+ years of security research specialising in IoT, robotics, malware, and AI security.

Operations
Program Manager at Singapore AI Safety Hub. Helped run operations for AISB Singapore 2026.

Project Manager
Finds, vets, and orchestrates world-class teams to tackle major AI risks. Affiliated with Singapore AI Safety Hub. Project manager for AISB Singapore 2026.
Bootcamp Partner

Local execution and institutional partnerships were supported by the Singapore AI Safety Hub.
The Singapore 2026 cohort has concluded. Submit an expression of interest and we'll keep you in the loop on future editions.
Reach out to pranav@aisb.dev to ask questions about the program.