
DeepTrendLab — Top 50 AI Sources, Research & News

AI Alignment Forum (10 articles)
Preventing extinction from ASI on a $50M yearly budget
🛡️ Safety AI Alignment Forum 1 min read

ControlAI's mission is to avert the extinction risks posed by superintelligent AI. We believe that in order to do this, we must secure an international prohibition on its development. We're…

You can only build safe ASI if ASI is globally banned
🛡️ Safety AI Alignment Forum 1 min read

Sometimes people suggest that we should simply build "safe" artificial superintelligence (ASI), rather than the presumably "unsafe" kind. [1] There are various flavors of "safe" that people suggest. Sometimes…

Current AIs seem pretty misaligned to me
🛡️ Safety AI Alignment Forum 1 min read

Many people—especially AI company employees [1] —believe current AI systems are well-aligned in the sense of genuinely trying to do what they're supposed to do (e.g., following their spec or…

🛡️ Safety AI Alignment Forum 1 min read

Note: you are ineligible to complete this challenge if you’ve studied Ancient or Modern Greek, or if you natively speak Modern Greek, or if for other reasons you know what…

🛡️ Safety AI Alignment Forum 1 min read

In this post, I'll go through some of my best guesses for the current situation in AI as of the start of April 2026. You can think of this as…

🛡️ Safety AI Alignment Forum 1 min read

TLDR: The first in a planned series of three or more papers, which together constitute the first major inroad into the compositional learning programme and a substantial step towards bridging agent…