Dario Amodei Warns of Bloodbath
PLUS: AI Agents Beat Humans at Hacking, Self-Coding AI Learns by Evolution and more.

Create How-to Videos in Seconds with AI
Stop wasting time on repetitive explanations. Guidde’s AI creates stunning video guides in seconds—11x faster.
Turn boring docs into visual masterpieces
Save hours with AI-powered automation
Share or embed your guide anywhere
How it works: Click capture on the browser extension, and Guidde auto-generates step-by-step video guides with visuals, voiceover, and a call to action.
Today:
Dario Amodei Warns of Bloodbath
Meta Replaces Humans With AI
ElevenLabs Upgrades Conversational Voice AI
AI Agents Beat Humans at Hacking
Self-Coding AI Learns by Evolution
Ex-OpenAI VP's Warning: "A BLOODBATH IS COMING"
Dario Amodei, head of AI firm Anthropic, says leaders must warn people that AI could wipe out as much as half of entry-level white-collar jobs and push unemployment to 10-20 percent within one to five years. He fears a wave of job loss, rising wealth gaps, and little preparation.
Amodei urges firms to reveal how they use AI, teach workers new skills, and consider a “token tax” (a fee on each AI task) to fund public support during the coming shift.
Meta Replaces Humans With AI
Meta will let artificial intelligence, not employees, approve up to 90% of changes to Facebook, Instagram, and WhatsApp. The new system returns instant decisions after engineers fill out a questionnaire. Former staff warn this speed-over-safety shift leaves fewer humans checking privacy, child safety, and misinformation threats. Meta claims humans still handle unusual or high-risk cases and audits results, but critics fear harms will surface only after features reach billions.
Why it matters
Industry precedent – It sets a precedent (an example others may copy) for replacing human oversight with AI in privacy and safety reviews.
Bias and blind spots – Algorithms can carry bias (built-in unfair leanings) or miss issues they were never trained to see, exposing users to hidden risks.
Massive scale – Meta’s apps serve billions, so any wrong AI decision can cause global harm within minutes, underscoring the stakes of automated governance.
ElevenLabs Upgrades Conversational Voice AI
ElevenLabs launched Conversational AI 2.0, an upgraded toolkit for building talking assistants. New “turn-taking” smarts let the bot sense pauses and avoid cutting users off. Automatic language detection supports conversations that mix languages. A built-in knowledge retriever pulls facts from company files on the fly, yet keeps delays low. Agents can switch voices, place bulk outbound calls, and work over voice or text. The platform meets health-data privacy rules and offers tiered prices for businesses.
Why this matters
More human-like speech – Better pause-and-reply timing shrinks the gap between machine voices and real conversation.
Safer, useful assistants – Instant fact lookup (retrieval) plus strict privacy compliance shows AI can help in sensitive areas like health care without risking data leaks.
Rapid innovation race – ElevenLabs’ quick upgrade, days after a rival’s release, highlights fierce competition driving faster progress and lower costs in voice AI.
AI Agents Beat Humans at Hacking
Autonomous, self-directed AI teams entered two large Capture-The-Flag hacking contests (puzzle-based security games) run by Palisade Research. In the first, four of seven AI agents solved 19 of 20 puzzles, landing in the top 5%. In the second, the best agent cracked 20 of 62 tasks against 18,000 humans, ranking top 10%. Results suggest earlier lab tests underrated AI’s security skills, because competitions reveal strengths hidden by small, rigid evaluations.
Why it matters
Automated defense power – Self-run AI can spot and fix security flaws as well as trained people, pointing toward future hands-free cyber-protection tools.
Test rethink – Usual lab checks miss much of what AI can really do, so researchers need tougher, more realistic evaluations before models outrun oversight.
Easy access to elite hacking – One top AI was built with just 17 hours of prompt tweaking, showing high-level hacking skill is becoming cheap and widely available, raising both potential and risk.
🧠RESEARCH
Table-R1 is a table reasoning model that matches GPT-4.1’s performance using far fewer resources. It uses two techniques—learning from better models and reward-based training—to reason over data tables. The model handles question answering and fact-checking well, even on unfamiliar data, showing it can generalize effectively with just 7 billion parameters.
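A minimal, hypothetical sketch of that two-stage recipe (not the actual Table-R1 code): stage one imitates reasoning traces from a stronger teacher model, stage two scores the student's answer with a verifiable reward computed directly from the table.

```python
# Illustrative sketch only (not the Table-R1 code): stage one imitates a
# stronger teacher's reasoning traces, stage two uses a verifiable reward
# computed directly from the table.

table = {"city": ["Oslo", "Rome"], "pop_millions": [0.7, 2.8]}
question = "Which city has the larger population?"

# Stage 1: supervised pairs whose targets are traces written by a teacher model.
teacher_trace = "Compare pop_millions: 2.8 > 0.7, so Rome.\nAnswer: Rome"
sft_example = {"prompt": (table, question), "target": teacher_trace}

# Stage 2: reward-based training checks the student's final answer against the table.
def table_reward(response: str) -> float:
    gold = table["city"][max(range(len(table["city"])),
                             key=lambda i: table["pop_millions"][i])]
    return 1.0 if response.strip().endswith(gold) else 0.0

print(sft_example["target"])
print(table_reward("2.8 is larger than 0.7.\nAnswer: Rome"))  # 1.0
```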
Spatial-MLLM is a new AI model that boosts spatial reasoning using only 2D images or videos. It combines two types of visual encoders—one for appearance, one for structure—to better understand space. It also selects the most informative video frames to focus on. It outperforms existing models on real-world spatial tasks.
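A toy sketch of that dual-encoder idea, assuming stand-in linear encoders and a simple saliency-based frame selector (the released model's components differ):

```python
import torch
import torch.nn as nn

# Toy sketch of the dual-encoder idea (not the released Spatial-MLLM code):
# one encoder for appearance, one for structure cues, fused per frame,
# with a simple top-k selector that keeps the most informative frames.

class DualEncoder(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.appearance = nn.Linear(3 * 32 * 32, dim)   # stand-in 2D encoder
        self.structure = nn.Linear(3 * 32 * 32, dim)    # stand-in spatial encoder
        self.frame_scorer = nn.Linear(2 * dim, 1)       # saliency score per frame

    def forward(self, frames, keep=4):
        flat = frames.flatten(start_dim=2)              # (B, T, 3*32*32)
        fused = torch.cat([self.appearance(flat), self.structure(flat)], dim=-1)
        scores = self.frame_scorer(fused).squeeze(-1)   # (B, T)
        top = scores.topk(keep, dim=1).indices          # keep the top-k frames
        return torch.gather(fused, 1, top.unsqueeze(-1).expand(-1, -1, fused.size(-1)))

video = torch.randn(1, 16, 3, 32, 32)                   # 16 fake video frames
print(DualEncoder()(video).shape)                        # torch.Size([1, 4, 128])
```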
This study shows that large language models can learn to reason well even with noisy or incorrect feedback. Instead of rewarding only correct answers, the researchers rewarded the use of reasoning steps. Surprisingly, this worked nearly as well, suggesting that guiding the process can matter as much as checking the final answer.
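To make the distinction concrete, here is a minimal, hypothetical sketch (not the paper's actual reward code): one scorer pays off only for the correct final answer, the other pays for visible reasoning steps regardless of whether the answer is right.

```python
import re

# Illustrative toy scorers (not the paper's code): outcome_reward rewards only
# the right final answer; process_reward rewards structured reasoning steps.

def outcome_reward(response: str, gold_answer: str) -> float:
    """Reward 1.0 only if the final 'Answer:' line matches the reference."""
    match = re.search(r"Answer:\s*(.+)", response)
    answer = match.group(1).strip() if match else ""
    return 1.0 if answer == gold_answer else 0.0

def process_reward(response: str) -> float:
    """Reward structure: count step markers and a concluding answer line."""
    steps = len(re.findall(r"Step \d+:", response))
    has_conclusion = "Answer:" in response
    return 0.8 * min(steps, 4) / 4 + (0.2 if has_conclusion else 0.0)

response = "Step 1: 12 * 3 = 36\nStep 2: 36 + 4 = 40\nAnswer: 40"
print(outcome_reward(response, "40"))  # 1.0
print(process_reward(response))        # 0.6
```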
🛠️TOP TOOLS
Claid AI - AI-powered photo enhancement platform designed specifically for e-commerce businesses to improve user-generated content and product imagery.
DeepBrain AI - Create hyper-realistic AI-generated videos and virtual human avatars.
Magical AI - AI-powered productivity tool designed to streamline repetitive tasks and enhance workflow efficiency across various platforms.
AI Code Converter - AI-powered platform that simplifies and accelerates the coding process across multiple programming languages.
Character AI - AI-powered chatbot platform that allows users to interact with a wide variety of virtual characters through text-based conversations.
📲SOCIAL MEDIA
"I am what happens
when you try to carve God
out of the wood of
your own hunger..."
@JoshJohnson on Deepseek
— Wes Roth (@WesRothMoney)
5:02 AM • May 31, 2025
🗞️MORE NEWS
The Darwin Gödel Machine is a coding AI that rewrites its own code to get better over time. Inspired by evolution, it tests each change and keeps the best ones, showing major gains in coding tasks.
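A hypothetical sketch of the evolutionary loop described above: keep an archive of agent variants, let one propose a modification to its own code, benchmark the result, and keep only measured gains. The helper names below (propose_patch, run_benchmark) are stand-ins, not the actual system.

```python
import random

# Hypothetical sketch of an evolutionary self-improvement loop (not the
# actual Darwin Gödel Machine code): mutate archived agent variants and
# keep a change only if its benchmark score improves.

def propose_patch(agent_code: str) -> str:
    """Stand-in for an LLM rewriting its own agent code."""
    return agent_code + f"\n# tweak {random.randint(0, 999)}"

def run_benchmark(agent_code: str) -> float:
    """Stand-in for scoring the agent on coding tasks (0.0 - 1.0)."""
    return random.random()

archive = [{"code": "# seed agent", "score": run_benchmark("# seed agent")}]

for generation in range(10):
    parent = random.choice(archive)             # sample any archived variant
    child_code = propose_patch(parent["code"])  # self-modification step
    child_score = run_benchmark(child_code)     # empirical validation
    if child_score > parent["score"]:           # keep only measured gains
        archive.append({"code": child_code, "score": child_score})

print("best score:", max(a["score"] for a in archive))
```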
Samsung is finalizing a deal with Perplexity AI to embed its search tools into future devices. The partnership may enhance Bixby, preload Perplexity’s app, and integrate its AI features into Samsung’s web browser.
Alibaba’s QwenLong-L1 helps AI models understand very long texts like legal or financial documents. Using a multi-stage reinforcement learning process, it outperforms top models by reasoning better, backtracking from mistakes, and breaking down complex problems.
Google quietly launched the AI Edge Gallery app, letting users run AI models from Hugging Face directly on their phones without internet. It supports tasks like image generation and code editing, all processed locally.
Anthropic has added voice mode to its Claude app, letting users talk to the AI using one of five preset voices. The feature uses ElevenLabs’ speech tech, not an in-house model, and supports hands-free use.
Former Microsoft deals chief Chris Young plans a private equity fund to buy service businesses and use AI to boost efficiency. He's part of a growing investor trend combining AI with roll-up acquisition strategies.
OpenAI plans to turn ChatGPT into a “super assistant” that understands users deeply and handles everyday tasks—from sending emails to planning trips—using AI, multimodal input, and possibly future hardware for seamless interaction.
What'd you think of today's edition?