I'm a Principal Security Engineer in the Machine Learning practice at Trail of Bits working on major AI and ML risk engagements.
Previously, I was a lead consultant for Verizon Japan, working on global GRC and penetration testing engagements. Before that, I spent several years at a Japanese financial firm, where I briefly developed trading algorithms before leading the creation of an in-house penetration testing team. I have also worked as a security auditor for a Big 4 consulting firm, and I hold various certifications.
Until September 2021, I was a second-year PhD student at the University of Toronto and the Vector Institute, co-supervised by Prof. Nicolas Papernot and Prof. David Lie. I obtained my MSc in Computer Science from the University of Oxford under the supervision of Prof. Samson Abramsky, and my Diplôme d'Ingénieur from Télécom ParisTech.
I'm interested in topics at the intersection of ML, AI, and security, especially offensive, audit, and forensic methods as drivers of interpretable, safety-critical AI. You can find my academic publications here and my contributions to AI security policy and industry standards here.
Our response to the US Army’s RFI on developing AIBOM tools. Michael Brown, Adelin Travers. Trail of Bits blog (2024).
Interpretability in Safety-Critical Financial Trading Systems. Gabriel Deza, Adelin Travers, Colin Rowat, Nicolas Papernot. arXiv preprint.
SoK: Machine Learning Governance. Varun Chandrasekaran*, Hengrui Jia*, Anvith Thudi*, Adelin Travers*, Mohammad Yaghini*, Nicolas Papernot. arXiv preprint.
On the Exploitability of Audio Machine Learning Pipelines to Surreptitious Adversarial Examples. Adelin Travers*, Lorna Licollari, Guanghan Wang, Varun Chandrasekaran, Adam Dziedzic, David Lie, Nicolas Papernot. arXiv preprint.
Machine Unlearning. Lucas Bourtoule*, Varun Chandrasekaran*, Christopher Choquette-Choo*, Hengrui Jia*, Adelin Travers*, Baiwu Zhang*, David Lie, Nicolas Papernot. Proceedings of the 42nd IEEE Symposium on Security and Privacy, San Francisco, CA (2021).
AI Village (AIV) Comments on the NIST concept paper on Artificial Intelligence Risk Management Framework (AI RMF). Adelin Travers, on behalf of the AI Village.
AI Village (AIV) Response to the NIST RFI on Artificial Intelligence Risk Management Framework (AI RMF). Adelin Travers, Anita Nikolich, Abhishek Gupta, Stella Biderman, Brian Pendleton, Erick Galinkin, Brian Martin, John Irwin, Anusha Ghosh.
LLM Risks and Security: The Path to Ensuring Safety and Reliability (LLMリスクとセキュリティ:安全性と信頼性の確保への道), Tokyo AI (2024). Adelin Travers.
Holistic ML Threat Models, AI Village at BSidesSF & Graph the Planet 2024. Adelin Travers.
Panel: Securing genAI deployments, Graph the Planet 2024. Sven Cattell, Claude Mandy, Adelin Travers, Bo Li.
AI vulnerability research panel, CERT Vendor meeting 2024. Adelin Travers, Tyler Sorensen, Kasimir Schulz.