NATURAL 20
OpenAI Hires OpenClaw Creator Peter Steinberger to Build Next-Gen Agents
PLUS: Apple Reportedly Developing Smart Glasses and AI Wearables, Figma Integrates Anthropic AI to Turn Designs into Code and more.

When Is the Right Time to Retire?
Determining when to retire is one of life’s biggest decisions, and the right time depends on your personal vision for the future. Have you considered what your retirement will look like, how long your money needs to last and what your expenses will be? Answering these questions is the first step toward building a successful retirement plan.
Our guide, When to Retire: A Quick and Easy Planning Guide, walks you through these critical steps. Learn ways to define your goals and align your investment strategy to meet them. If you have $1,000,000 or more saved, download your free guide to start planning for the retirement you’ve worked for.
Today:
OpenAI Hires OpenClaw Creator Peter Steinberger to Build Next-Gen Agents
Anthropic Releases Claude Sonnet 4.6
Meta and NVIDIA Forge Massive AI Infrastructure Partnership
Apple Reportedly Developing Smart Glasses and AI Wearables
Figma Integrates Anthropic AI to Turn Designs into Code
VisionClaw: A New Open-Source Tool for High-Fidelity Computer Vision
The End of OpenClaw
OpenAI has hired Peter Steinberger after a bidding war. OpenClaw, previously known as Moltbot and ClaudeBot, is an open-source project that lets people run autonomous AI "agents" (self-directed programs) on their own computers. It exploded in February: 200K GitHub stars and 1.5 million agents built. Legal pressure from Anthropic over the ClaudeBot name forced quick rebrands but didn't slow growth.
Instead of buying the code, OpenAI hired Steinberger to build safer next-generation personal agents while funding an independent, community-run OpenClaw foundation. The move signals OpenAI's push to deliver safer, user-controlled agents for everyday use worldwide.

Anthropic announced Claude Sonnet 4.6, calling it their most capable Sonnet yet — and the headline for me isn’t “bigger model.” It’s “bigger range of work it can reliably do.”
What stands out:
1M token context window (beta) — enough to load an entire codebase, long contracts, or stacks of documents and actually reason across them (not just store them).
A real “full upgrade” across practical tasks: coding, long-context reasoning, agent planning, knowledge work, and design.
Better “computer use” (the model operating software like a person would, clicking and typing) — Anthropic frames this as a way to automate work that doesn’t have clean APIs or easy integrations.
Same price as the prior Sonnet: pricing “remains the same as Sonnet 4.5,” starting at $3 / $15 per million tokens (input/output).
Default model on Free and Pro in claude.ai (and their other surfaces), which matters because it’s a “you get it now” upgrade, not a niche power-user thing.
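Two of those bullets reduce to arithmetic you can sanity-check yourself: whether a large codebase fits in a 1M-token window, and what a heavy request costs at the listed $3 / $15 per million tokens. A rough Python sketch — the 4-characters-per-token ratio is a common heuristic, not Anthropic's tokenizer, and the example sizes are illustrative:

```python
# Rough planning math for a 1M-token context window at $3 / $15
# per million input/output tokens. The chars-per-token ratio is a
# common approximation, not an exact tokenizer count.

CHARS_PER_TOKEN = 4          # rough average for English text and code
CONTEXT_TOKENS = 1_000_000   # the beta window size
INPUT_PRICE = 3.00           # USD per million input tokens
OUTPUT_PRICE = 15.00         # USD per million output tokens

def estimated_tokens(text_chars: int) -> int:
    """Approximate token count from raw text size."""
    return text_chars // CHARS_PER_TOKEN

def fits_in_context(text_chars: int) -> bool:
    """Would a document of this size fit in the window?"""
    return estimated_tokens(text_chars) <= CONTEXT_TOKENS

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for one request at the listed pricing."""
    return (input_tokens / 1e6) * INPUT_PRICE + (output_tokens / 1e6) * OUTPUT_PRICE

# A ~3 MB codebase is roughly 750K tokens: it fits, with room for the reply.
print(fits_in_context(3_000_000))             # True
# Loading that codebase once and getting a 2K-token answer:
print(round(request_cost(750_000, 2_000), 2))  # 2.28
```

The point of the math: loading an entire multi-megabyte codebase once costs a couple of dollars, which is what makes "use it all day" plausible rather than aspirational.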
The vibe I’m getting: Sonnet 4.6 is Anthropic trying to push “frontier-ish” performance into a model you can actually justify using all day without sweating cost. And the 1M context + better computer-use combo is a pretty direct bet on agents that operate inside messy real workflows.

NVIDIA announced a multiyear, multigenerational strategic partnership with Meta to build hyperscale AI data centers for training and inference.
Key details (the stuff that hints at where “AI at Meta scale” is headed):
Large-scale deployment of NVIDIA CPUs + “millions” of GPUs — specifically Blackwell and future Rubin GPUs.
Networking becomes the story: Meta plans to scale AI workloads with Spectrum-X Ethernet for efficiency and throughput.
Confidential Computing is in the mix — NVIDIA says WhatsApp will use it for “private processing,” with more privacy-enhanced AI use cases planned across Meta’s portfolio.
There’s also explicit talk of Grace CPUs now and Vera CPUs potentially at large scale in 2027.
If you zoom out: this isn’t just “buy more GPUs.” It’s a blueprint for an AI factory that’s tuned for efficiency (performance per watt), networking, and secure processing — because the “agentic” future is going to be bottlenecked by infrastructure reality, not just model IQ.
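To make the performance-per-watt point concrete, here's a toy comparison. Every number below is a hypothetical placeholder, not a published spec for Blackwell, Rubin, or any real part:

```python
# Toy performance-per-watt comparison. All figures are hypothetical
# placeholders to illustrate why the ratio, not peak FLOPS, drives
# data-center planning.

def perf_per_watt(tflops: float, watts: float) -> float:
    """Throughput delivered per watt of power drawn."""
    return tflops / watts

gen_a = perf_per_watt(tflops=1000.0, watts=1000.0)  # hypothetical older part
gen_b = perf_per_watt(tflops=2500.0, watts=1200.0)  # hypothetical newer part

# At hyperscale, the same fixed power budget now serves ~2x the
# inference traffic — that's the efficiency story in one number.
print(round(gen_b / gen_a, 2))  # 2.08
```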
And then there’s the “this is going to change how people use AI” rumor.
A report (sourced to Bloomberg) says Apple is working on:
AI-powered smart glasses (production reportedly aimed for December 2026, launch in 2027)
An AI pendant (AirTag-sized, worn as a pin/necklace, with iPhone doing most processing)
AirPods with cameras that use AI to interpret surroundings
What I find most interesting here is the pattern: the hardware isn’t necessarily “a new phone.” It’s more like new sensors that turn your surroundings into something an assistant can understand — quietly, continuously, and hands-free. If this rumor is real, it’s Apple trying to make “AI help” feel like ambient computing rather than an app you open.
🧠RESEARCH
This paper introduces a new way to create training data for AI. Instead of just looking at words, it analyzes the AI's internal "brain" patterns to find missing information. By generating data that fills these specific gaps, the AI learns better and faster while using far less training data than conventional approaches.
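As a loose illustration of the gap-filling idea (a toy stand-in, not the paper's activation-based method): bucket existing training examples into regions of a feature space, then flag the sparse regions as the gaps to target with newly generated data.

```python
# Loose illustration of gap-driven data generation: place existing
# training examples into coarse regions of a feature space and flag
# sparse regions as "gaps" to fill with new data. This is a toy
# stand-in, not the paper's actual activation analysis.

N_REGIONS = 4

def region(x: float) -> int:
    """Assign a feature value in [0, 1) to one of N_REGIONS buckets."""
    return min(int(x * N_REGIONS), N_REGIONS - 1)

# Stand-in features for an existing training set, skewed toward low values.
features = [0.05, 0.1, 0.12, 0.2, 0.22, 0.3, 0.33, 0.4, 0.6, 0.9]

counts = [0] * N_REGIONS
for x in features:
    counts[region(x)] += 1

# Regions holding under 20% of the data are flagged as gaps to fill.
gaps = [i for i, c in enumerate(counts) if c < 0.2 * len(features)]
print(counts)  # [5, 3, 1, 1]
print(gaps)    # [2, 3]
```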
Researchers created a new test called SQuTR to check how well search engines understand spoken questions in noisy places. They found that background sounds make it hard for computers to find the right answers. This tool helps developers build systems that listen better in the real world.
Google DeepMind proposes a system for "Intelligent AI Delegation." This allows AI helpers to break down big jobs and assign parts to other AIs or humans. It focuses on ensuring these digital workers trust each other, clearly define roles, and handle mistakes without crashing the whole project.
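The delegation pattern described is easy to sketch in ordinary code: an orchestrator splits a job into subtasks, hands each to a worker, and isolates failures so one broken subtask doesn't sink the whole job. The names and structure below are illustrative, not DeepMind's actual design:

```python
# Toy sketch of delegated task execution: plain functions stand in
# for AI or human workers, and per-subtask error handling keeps one
# failure from crashing the whole project. Illustrative only.

def summarize(text: str) -> str:
    return text[:10] + "..."

def flaky_translate(text: str) -> str:
    raise RuntimeError("worker unavailable")

def run_job(subtasks):
    """Run each (name, worker, payload) subtask, collecting results
    and errors separately instead of aborting on the first failure."""
    results, errors = {}, {}
    for name, worker, payload in subtasks:
        try:
            results[name] = worker(payload)
        except Exception as exc:
            errors[name] = str(exc)  # record it; retry or reassign later
    return results, errors

results, errors = run_job([
    ("summary", summarize, "A long report about AI delegation."),
    ("translation", flaky_translate, "Bonjour"),
])
print(sorted(results))  # ['summary']
print(sorted(errors))   # ['translation']
```

The design choice worth noting is the separate `errors` channel: the orchestrator keeps a clear record of which roles failed, which is exactly the "handle mistakes without crashing the whole project" property the proposal emphasizes.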
🗞️MORE NEWS
Figma and Anthropic Partnership: Figma is using Anthropic's AI to help people turn their design ideas into actual computer code more easily. The update allows the software to understand what a designer wants and write the technical instructions to build it automatically. The goal is to bridge the gap between the people who draw the apps and the engineers who make them work.
VisionClaw: VisionClaw is a new tool that helps computers "see" and understand images or videos with much higher accuracy. It acts like a digital eye that can pick out specific details and explain what is happening in a scene. This technology makes it easier for machines to interact with the physical world by interpreting visual information in real time.
Render's Big Investment: Render, a company that helps businesses put their apps and websites on the internet, just raised $100 million from investors. The new funding values the company at $1.5 billion, a sign that investors see strong demand for its cloud services. The company plans to use the money to grow faster and compete with much larger tech giants.
Google I/O 2026: Google has officially announced the dates for its big yearly event where it shows off its newest inventions. Most of the focus this year will be on artificial intelligence: computer programs that can think, learn, and solve problems like humans. Developers and fans are waiting to see how these new tools will be added to phones and search engines.
WordPress AI Assistant: The popular website-building platform WordPress has added a new digital helper to make creating content easier. The tool can change the look of your site, fix your writing, and even create brand-new images based on your descriptions. It is designed to save users time by handling the difficult design and editing work automatically.
Anthropic and Infosys: Anthropic is teaming up with the massive global consulting company Infosys to bring its AI tools to big businesses. The partnership will help large organizations automate tedious tasks and analyze complicated data more quickly, helping older companies modernize their systems with the latest digital breakthroughs.
What'd you think of today's edition?