Growing BlueDot's Impact w/ Li-Lian Ang

I'm joined by my good friend, Li-Lian Ang, first hire and product manager at BlueDot Impact. We discuss how BlueDot has evolved from its original course offerings to a new "defense-in-depth" approach built around three core threat models: reduced oversight in high-risk scenarios (e.g. accelerated warfare), catastrophic terrorism (e.g. rogue actors with bioweapons), and the concentration of wealth and power (e.g. supercharged surveillance states). On top of that, we cover how BlueDot's strategies account for and reduce the negative impacts of common issues in AI safety, including exclusionary tendencies, elitism, and echo chambers.

2025.09.15: Learn how to design effective interventions to make AI go well, and potentially even get funded for them, on BlueDot Impact's AGI Strategy course! BlueDot is also hiring, so if you think you’d be a good fit, I definitely recommend applying; I had a great experience when I contracted as a course facilitator. If you do end up applying, let them know you found out about the opportunity from the podcast!

Follow Li-Lian on LinkedIn, and look at more of her work on her blog!

As part of my effort to make this whole podcasting thing more sustainable, I have created a Kairos.fm Patreon, which includes an extended version of this episode. Supporters get access to these extended cuts, as well as other perks in development.

  • (03:23) - Meeting Through the Course
  • (05:46) - Eating Your Own Dog Food
  • (13:13) - Impact Acceleration
  • (22:13) - Breaking Out of the AI Safety Mold
  • (26:06) - BlueDot’s Risk Framework
  • (41:38) - Dangers of "Frontier" Models
  • (54:06) - The Need for AI Safety Advocates
  • (01:00:11) - Hot Takes and Pet Peeves

Links
Defense-in-Depth
  • BlueDot Impact blogpost - Our vision for comprehensive AI safety training
  • Engineering for Humans blogpost - The Swiss cheese model: Designing to reduce catastrophic losses
  • Open Journal of Safety Science and Technology article - The Evolution of Defense in Depth Approach: A Cross Sectorial Analysis
X-clusion and X-risk
  • Nature article - AI Safety for Everyone
  • Ben Kuhn blogpost - On being welcoming
  • Reflective Altruism blogpost - Belonging (Part 1: That Bostrom email)
AIxBio
  • RAND report - The Operational Risks of AI in Large-Scale Biological Attacks
  • OpenAI "publication" (press release) - Building an early warning system for LLM-aided biological threat creation
  • Anthropic Frontier AI Red Team blogpost - Why do we take LLMs seriously as a potential source of biorisk?
  • Kevin Esvelt preprint - Foundation models may exhibit staged progression in novel CBRN threat disclosure
  • Anthropic press release - Activating AI Safety Level 3 protections
Persuasive AI
  • Preprint - Lies, Damned Lies, and Distributional Language Statistics: Persuasion and Deception with Large Language Models
  • Nature Human Behaviour article - On the conversational persuasiveness of GPT-4
  • Preprint - Large Language Models Are More Persuasive Than Incentivized Human Persuaders
AI, Anthropomorphization, and Mental Health
  • Western News article - Expert insight: Humanlike chatbots detract from developing AI for the human good
  • AI & Society article - Anthropomorphization and beyond: conceptualizing humanwashing of AI-enabled machines
  • Artificial Ignorance article - The Chatbot Trap
  • Making Noise and Hearing Things blogpost - Large language models cannot replace mental health professionals
  • Idealogo blogpost - 4 reasons not to turn ChatGPT into your therapist
  • Journal of Medical Society editorial - Importance of informed consent in medical practice
  • Indian Journal of Medical Research article - Consent in psychiatry - concept, application & implications
  • MediaNama article - The Risk of Humanising AI Chatbots: Why ChatGPT Mimicking Feelings Can Backfire
  • Becker's Behavioral Health blogpost - OpenAI’s mental health roadmap: 5 things to know
Miscellaneous References
  • Carnegie Council blogpost - What Do We Mean When We Talk About "AI Democratization"?
  • Collective Intelligence Project policy brief - Four Approaches to Democratizing AI
  • BlueDot Impact blogpost - How Does AI Learn? A Beginner's Guide with Examples
  • BlueDot Impact blogpost - AI safety needs more public-facing advocacy
More Li-Lian Links
  • Humans of Minerva podcast website
  • Li-Lian's book - Purple is the Noblest Shroud