FLI celebrates a landmark moment for the AI safety movement and highlights its growing momentum

Michael Kleinman, Head of US Policy at the Future of Life Institute, released the following statement regarding the signing of SB 53:

> We applaud Governor Newsom for signing this vital legislation. Across America, the demand for stronger AI legislation continues to grow, with large majorities of both Republicans and Democrats calling for common-sense AI safeguards, including 82% of Republicans who agree there should be limits on what AI is allowed to do, and more than 70% of voters who support the government taking action to set safety standards.
>
> While more work remains, this is a landmark moment: lawmakers have finally begun establishing basic protections around advanced AI systems — the same safeguards that exist for every other industry, whether pharmaceuticals, aircraft manufacturers, or your local sandwich shop.
>
> This summer, the Senate resoundingly rejected, by a 99-to-1 vote, an attempt to prevent states from taking action. Now, states are stepping up to enact the AI safeguards the American people are demanding. Unless and until there are strong federal AI safety standards to protect our children, our communities and our jobs, both blue and red states will have no choice but to continue filling the void.

This content was first published at futureoflife.org on October 3, 2025.

About the Future of Life Institute

The Future of Life Institute (FLI) is the world’s oldest and largest AI think tank, with a team of 35+ full-time staff operating across the US and Europe. FLI has been working to steer the development of transformative technologies towards benefitting life and away from extreme large-scale risks since its founding in 2014. Find out more about our mission or explore our work.


Some of our Policy & Research projects

See some of the projects we are working on in this area:

Control Inversion

Why the superintelligent AI agents we are racing to create would absorb power, not grant it. The latest study from Anthony Aguirre.

Statement on Superintelligence

A stunningly broad coalition has come out against unsafe superintelligence: AI researchers, faith leaders, business pioneers, policymakers, national security staff, and actors stand together.
