
Sakana’s "Red Queen" Beats 40 Years of Human Strategy in 250 Moves

PLUS: Anthropic’s New "Constitutional" Guardrails, China Admits the AI Gap Is Growing and more.

In partnership with

Introducing the first AI-native CRM

Connect your email, and you’ll instantly get a CRM with enriched customer insights and a platform that grows with your business.

With AI at the core, Attio lets you:

  • Prospect and route leads with research agents

  • Get real-time insights during customer calls

  • Build powerful automations for your complex workflows

Join industry leaders like Granola, Taskrabbit, Flatfile and more.

Hey everyone,

Hope your week is off to a great start. We have a massive update today, from new models to nuclear power plays, so let’s get right into it.

Today:

  • Sakana’s "Red Queen" Beats 40 Years of Human Strategy in 250 Moves

  • Ilya Sutskever’s $100 Billion Equity Reveal

  • Meta’s Nuclear Power Play for AI Dominance

  • DeepSeek V4 Is Coming for the Coding Crown

  • Anthropic’s New "Constitutional" Guardrails

  • China Admits the AI Gap Is Growing

"RED QUEEN" AI means "GAME OVER" for us....

Researchers at Sakana AI taught large language models to play an old programming game called Core War. In this game, tiny programs fight by overwriting each other’s code. The models trained only against themselves, a process called self-play, and rewrote their own code each round, slowly getting better. After 250 cycles they beat the best human-written strategies gathered over 40 years. They also rediscovered winning tactics humans took decades to find. 

The work suggests that letting AI improve itself can outpace human teaching and may extend to fields like cybersecurity. It also hints at an approaching ‘intelligence explosion’, a scenario where self-improvement compounds faster than humans can keep up.
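
For intuition, here is roughly what a self-play loop of this shape could look like. This is a hand-wavy sketch, not Sakana’s code: `ask_model` stands in for an LLM call that writes a Redcode warrior, and `run_match` stands in for a real Core War simulator.

```python
import random

# Rough sketch of an LLM self-play loop in the spirit of the Sakana work,
# not their actual code. Both helpers below are placeholders.

def ask_model(opponent_code: str, feedback: str) -> str:
    """Placeholder for an LLM call that writes or rewrites a warrior."""
    return f"; written after feedback: {feedback}\nMOV 0, 1"  # dummy Redcode

def run_match(warrior_a: str, warrior_b: str) -> int:
    """Placeholder match: +1 if A wins, -1 if B wins, 0 on a tie."""
    return random.choice([1, -1, 0])

def self_play(cycles: int = 250) -> str:
    champion = ask_model(opponent_code="", feedback="first round")
    for _ in range(cycles):
        challenger = ask_model(champion, feedback="beat the current champion")
        if run_match(challenger, champion) > 0:
            champion = challenger  # the winner becomes the new opponent to beat
    return champion

print(self_play())
```

The notable part is that the only "teacher" in the loop is the model’s own previous best program.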

Meta announced agreements tied to up to 6.6 gigawatts of nuclear power by 2035, which is an absolutely bonkers number in “data center math” (the rough tally after the list below shows how the pieces get there).

A few details that jumped out:

  • Oklo: Meta-backed work on an Ohio campus that could add up to 1.2 GW and might come online as early as 2030.

  • Vistra: Meta plans to buy 2.1 GW from operating plants and support “uprates” (more output) totaling 433 MW, expected in the early 2030s.

  • TerraPower: AP reports Meta will fund two Natrium units (690 MW) and has rights tied to additional units that could total 2.1 GW by 2035.
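
A rough tally of those figures, for the curious. This is my own arithmetic, and the reporting is a bit ambiguous about whether the TerraPower rights stack on top of the two announced Natrium units, so treat it as ballpark:

```python
# All values in gigawatts; these are "up to" figures, not firm commitments.
deals_gw = {
    "Oklo (Ohio campus)": 1.2,
    "Vistra (existing plants)": 2.1,
    "Vistra uprates": 0.433,
    "TerraPower (two Natrium units)": 0.690,
    "TerraPower (additional-unit rights)": 2.1,   # assumed on top of the two units
}
print(f"{sum(deals_gw.values()):.2f} GW")  # ~6.52 GW, in the ballpark of the ~6.6 GW headline
```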

Meta explicitly ties this to powering its AI buildout, including its Prometheus supercluster in Ohio.

My read: this is the cleanest signal yet that “AI leadership” is becoming an energy procurement game. The model wars are now also power-plant wars.

DeepSeek is expected to launch its next-gen model V4 in mid-February, aimed squarely at coding, and internal testing reportedly suggests it could beat Claude and GPT-series models on coding tasks.

The spicy technical nugget: The model reportedly made progress on handling extremely long coding prompts, which is exactly what you want if you’re building or refactoring real software (not just solving cute LeetCode puzzles).

My read: if V4 is real and strong, the “open-ish China lab with real teeth” narrative gets louder and the competitive pressure on coding assistants ratchets up again.

Newly disclosed messages (surfacing via the Musk v. OpenAI legal fight) indicate Ilya Sutskever had roughly $4B of vested equity in Nov 2023, when OpenAI was valued at $29B.

Using a reported $850B valuation, that stake could imply roughly $117B on paper, with heavy caveats around dilution, liquidity, and any sales. The bottom-line estimate calls ~$100B “plausible” under certain assumptions.
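
The paper-value figure is nothing more exotic than proportional scaling of the reported numbers; a quick sanity check (my arithmetic, using only the figures above):

```python
vested_2023 = 4e9        # reported vested equity in Nov 2023 (USD)
valuation_2023 = 29e9    # OpenAI valuation at the time (USD)
valuation_now = 850e9    # reported current valuation (USD)

paper_value = vested_2023 * valuation_now / valuation_2023
print(f"${paper_value / 1e9:.0f}B")  # ~$117B on paper, before dilution, liquidity, or sales
```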

My read: take the headline number with a shaker of salt but it’s still a reminder that the AI boom is minting fortunes at a scale we usually associate with oil barons, not research scientists.

🧠RESEARCH

Training AI to chase multiple goals (like being accurate and concise) often fails because the conflicting feedback confuses the system. This paper introduces a method that keeps these targets separate, ensuring the AI doesn't sacrifice one for the other. It significantly boosts performance in complex tasks like coding and math.
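
A one-paragraph summary isn’t enough to reconstruct the paper’s actual algorithm, but the core idea of keeping the targets separate looks roughly like this sketch (my own illustration; the objective names and per-objective normalization are assumptions):

```python
import numpy as np

def combined_advantage(rewards_per_objective: dict[str, np.ndarray]) -> np.ndarray:
    """Normalize each objective on its own scale, then combine, so one noisy
    or differently scaled reward can't drown out the others."""
    parts = [(r - r.mean()) / (r.std() + 1e-8) for r in rewards_per_objective.values()]
    return np.mean(parts, axis=0)

batch = {
    "accuracy":    np.array([1.0, 0.0, 1.0, 1.0]),  # e.g. did the answer pass checks
    "conciseness": np.array([0.2, 0.9, 0.4, 0.1]),  # e.g. shorter responses score higher
}
print(combined_advantage(batch))
```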

Standard AI training uses rigid rules to keep the model's internal connections stable. This research suggests letting the AI control its own "volume knobs" instead. By allowing the system to self-adjust the strength of its internal signals, it learns more effectively and outperforms models that rely on the old, static limits.
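
Here’s a toy picture of the “volume knob” idea (again my own illustration, not the paper’s implementation): the layer’s gain is a trainable parameter instead of a hard-coded normalization constant.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(256, 16))                       # incoming activations
unit = x / np.linalg.norm(x, axis=1, keepdims=True)  # static rule: pin rows to unit norm

target_scale = 3.0   # pretend the downstream task "wants" activations this large
g, lr = 1.0, 0.1     # learnable gain, initialized to match the static rule

for _ in range(100):
    out_norm = np.linalg.norm(g * unit, axis=1)       # with unit rows this is just g
    grad = np.mean(2.0 * (out_norm - target_scale))   # d/dg of the mean squared error
    g -= lr * grad

print(f"learned gain = {g:.2f}")  # settles near 3.0, the scale the task needed
```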

Most smart video models waste time over-analyzing simple clips. This new system acts more like a human: it answers instantly if it's confident and only pauses to "think step-by-step" for difficult questions. This hybrid approach matches the accuracy of top competitors while running three times faster.
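
The routing logic is easy to sketch (a hypothetical illustration, not the actual system): answer directly when the fast pass is confident, and only pay for step-by-step reasoning otherwise.

```python
CONFIDENCE_THRESHOLD = 0.8  # hypothetical cutoff

def fast_answer(question: str) -> tuple[str, float]:
    """Placeholder single-pass model call returning (answer, confidence)."""
    return "the cat knocks the cup off the table", 0.92

def slow_chain_of_thought(question: str) -> str:
    """Placeholder multi-step reasoning pass, used only when needed."""
    return "a step-by-step answer for the hard case"

def answer(question: str) -> str:
    ans, conf = fast_answer(question)
    if conf >= CONFIDENCE_THRESHOLD:
        return ans                            # confident: respond immediately
    return slow_chain_of_thought(question)    # uncertain: spend the extra compute

print(answer("What happens after the cup starts to tip?"))
```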

🛠️TOP TOOLS

Each listing includes a hands-on tutorial so you can get started right away, whether you’re a beginner or a pro.

Botika AI – AI Generated Models For Fashion - AI tool for clothing brands and online retailers that turns your existing product shots into photorealistic on‑model images

Botly – AI-Powered Chat Automation for OnlyFans Creators - An AI messaging assistant with a lightweight CRM built specifically for OnlyFans creators and agencies.

Brancher AI – No Code AI App Builder - a no‑code platform for creating shareable AI web apps by connecting popular models and configuring inputs/logic.


🗞️MORE NEWS

Anthropic’s New "Constitution" for AI Safety
Anthropic has released a new security tool that stops its AI from breaking the rules while using almost no extra computer power. This update allows the AI to better distinguish between harmful requests and innocent questions, making it much harder to trick. The system is trained on a "constitution," which is simply a set of written values that teaches the AI right from wrong.

China’s AI Leaders Doubt They Can Catch the US
Top Chinese technology leaders admit their country is unlikely to catch up to American AI giants like OpenAI anytime soon. Despite raising over a billion dollars recently, these companies are too busy meeting customer demands to focus on the experimental research needed for big breakthroughs. Experts estimate there is less than a twenty percent chance China will lead the industry in the next few years.

Terence Tao on the Future of Math
Famous mathematician Terence Tao believes AI is becoming a vital tool for checking complex math proofs and finding rare research papers. He suggests that instead of replacing mathematicians, these programs will act as smart assistants that handle the tedious parts of their work. This partnership between human logic and computer speed could completely change how new math discoveries are verified.

OpenAI Wants Contractors’ Old Homework
OpenAI is reportedly asking its contract workers to share actual projects and documents from their past jobs to help train its systems. The company wants this real-world data to see if its AI can perform professional tasks as well as humans can. However, asking workers to upload confidential files from other employers has raised serious legal and ethical questions.

Google Pulls Back on Medical AI
Google has removed its AI-generated summaries for certain health searches after they provided dangerous and misleading medical advice. Experts discovered the AI was giving incorrect explanations for complex test results, which could lead patients to make bad decisions. To protect user safety, the company effectively turned off the feature for sensitive medical topics.

Shopping Comes to Gemini
Google has partnered with Walmart and other big retailers to let people buy products directly inside its Gemini AI chatbot. This new feature allows you to ask for shopping advice and complete a purchase instantly without ever leaving the conversation. Tech and retail leaders believe this ability to shop through a chat is the next major shift in online commerce.

What'd you think of today's edition?
