Cohort completed · April 2026

AI Security Bootcamp
Singapore

A 7-day intensive program for security professionals shaping how we secure emerging AI systems. The Singapore 2026 cohort ran April 20–26, 2026.

April 20–26, 2026 · Singapore · In-Person · Fully Funded

Overview

The Singapore iteration of AI Security Bootcamp explored the rapidly evolving threat landscape of frontier AI systems, equipping security professionals with the knowledge and hands-on skills to defend against current and emerging risks.

As AI systems become more capable and integrated into critical infrastructure, new attack surfaces and failure modes are emerging that traditional security training doesn't cover. This program filled that gap with an intensive, practitioner-focused curriculum.

Participants completed pre-work before the program to establish baseline ML fundamentals, followed by an immersive week delivered through demos, lectures, guest speakers, and hands-on red/blue exercises that built skills across the modern AI system stack.

The Program

What You'll Learn

  • Develop a threat model for frontier AI systems: from current deployments to the security challenges posed by increasingly capable systems
  • Build hands-on capability across the full attack surface: adversarial techniques, infrastructure exploitation, supply chain attacks, and model-level vulnerabilities
  • Understand how attacks and defenses scale with AI capability increases
  • Engage with security challenges that frontier AI organizations are actively working on—problems not yet covered in standard training curricula
  • Position yourself for high-impact roles at the frontier: AI labs, government programs, and research institutions shaping how the field develops

Who Should Attend

Security professionals ready to secure frontier AI systems at every layer: from user-facing applications and developer-facing model APIs to the infrastructure hosting the models and the governance frameworks addressing emerging threats.

The Singapore cohort spanned a wide range of expertise, from offensive security, incident response, and threat intelligence to infrastructure and application security, with AI-specific threat models and techniques pushing participants beyond familiar territory.

Prerequisites

5+ years of hands-on security experience. No prior AI or ML background is needed; the pre-work covers what's necessary.

Selection prioritizes candidates interested in frontier AI risk, high-consequence failure modes, or work involving sophisticated threat actors.

Experience with deep learning frameworks (e.g., PyTorch) is a plus but not required. We want to make this accessible to security professionals from a variety of backgrounds, so we provide comprehensive pre-work to get everyone up to speed on the AI fundamentals needed to engage with the curriculum.

Past Participants

Past participants have been affiliated with: OpenAI, Google, Meta, Apple, Microsoft, AWS, Jane Street, Stanford, Oxford, CERN, and other leading institutions across research, government, and industry.

Timing

AISB Singapore ran April 20–26, 2026, overlapping with Black Hat Asia (April 21–24) and ending just before DEF CON (April 28–30), so the bootcamp fit naturally into the same trip for many participants.

Cost & Selection

Accommodation was included. Limited travel support was available for those who needed it.

Selection was competitive — the Singapore cohort accommodated 16 participants.

Team

Pranav Gade

Program Lead

Research engineer at Conjecture. Created AISB to bridge AI safety and security; leads curriculum design and program direction across all editions.

Jan Michelfeit

Security Lead

Security lead at Conjecture. Designs AISB’s hands-on labs and capstone projects, drawing on 10+ years securing complex systems and ML infrastructure.

David Williams-King

Curriculum

Research Manager at ERA, upskilling fellows in technical AI safety research. PhD from Columbia in systems and security; previously CTO and cofounder of cybersecurity insurance startup Elpha Secure.

Nitzan Shulman

Security Lead

Head of Cyber Security at Heron AI Security Initiative. 6+ years of security research specialising in IoT, robotics, malware, and AI security.

Valerie Pang

Operations

Program Manager at Singapore AI Safety Hub. Helped run operations for AISB Singapore 2026.

Raymund Ed Dominic Bermejo

Project Manager

Finds, vets, and orchestrates world-class teams to tackle major AI risks. Affiliated with Singapore AI Safety Hub. Project manager for AISB Singapore 2026.

Bootcamp Partner

Singapore AI Safety Hub (SASH)

Local operations and institutional partnerships were supported by the Singapore AI Safety Hub.

FAQs

Interested in future cohorts?

The Singapore 2026 cohort has concluded. Submit an expression of interest and we'll keep you in the loop on future editions.

Reach out to pranav@aisb.dev to ask questions about the program.