AI ‘Safety’ Tests Are So Ineffective We Might as Well Just Tell AI, “We Trust You, Please Don’t Kill Us”

Posted on November 24, 2025 by Administrator

A chilling new report reveals what many already feared: the so-called “AI safety tests” used by developers and governments to evaluate artificial intelligence are so weak and unreliable that they may be giving a false sense of security to the public — and to policymakers who don’t understand what they’re dealing with.