
Your agent needs more than 2 projects
You prompt. The agent builds. Then it asks for a database.
Ghost is Postgres made for this. Spin one up in seconds. Fork it like a branch. Delete it when you're done. Pay nothing when it's idle.
Your agent gets full SQL, MCP support, and as many databases as it needs. No dashboards. No provisioning. No forgotten dev databases draining your card at month end.
Build a weekend app. Fork the schema three different ways. Throw two of them out. Ghost doesn't care. The next prompt can spin up a fresh one.
You're already vibe-coding the app. Stop wiring up the backend.
Unlimited databases. Unlimited forks. 100 compute hours a month. 1 TB of storage. Free.
Today:
The Pentagon Brings Frontier AI Into Classified Networks
Meta’s Humanoid Robot Push Gets a New Brain
Anthropic Hunts for Cheaper Chips to Power Claude
Grok’s New Trick: Turning Your Voice Into an AI Voice
Google’s Secret COSMO Assistant Slips Onto Android

The U.S. War Department announced agreements with eight major AI and cloud companies — SpaceX, OpenAI, Google, NVIDIA, Reflection, Microsoft, Amazon Web Services, and Oracle — to deploy advanced AI tools on classified military networks.
The goal is to bring frontier AI into secure IL6 and IL7 environments, which are used for highly sensitive national-security work. In plain terms: the U.S. military wants AI systems that can help sort information, improve situational awareness, and support faster decision-making in complex operations.
The department says this is part of its push to become an “AI-first fighting force,” with AI supporting warfighting, intelligence, and enterprise operations. Its internal AI platform, GenAI.mil, has already reached major scale: over 1.3 million personnel have used it, producing tens of millions of prompts and deploying hundreds of thousands of agents in five months. The biggest takeaway is that military AI is no longer just experimental. It is becoming core infrastructure.
Meta has acquired Assured Robot Intelligence, a startup building AI models for robots, as part of its larger push into humanoid technology. The deal amount was not disclosed, but the acquired team is expected to join Meta Superintelligence Labs, Meta’s high-end AI research division.
Assured Robot Intelligence, also known as ARI, focuses on helping robots understand, predict, and adapt to human behavior in messy real-world environments. That matters because humanoid robots are not just a hardware problem. They need “brains” that can handle movement, object manipulation, physical tasks, and unpredictable human spaces.
The startup’s co-founders, Lerrel Pinto and Xiaolong Wang, bring serious robotics experience. Business Insider reports ARI is a small San Diego-based startup with about 20 employees, and its work is focused on high-precision dexterity and manipulation — basically, helping robots interact with real objects in useful ways.
Anthropic is reportedly in early talks with London-based chip startup Fractile to buy AI inference chips. Inference means the part of AI computing where a trained model actually answers users, writes code, analyzes documents, or powers agents. That is where costs can explode when millions of people use a model every day.
Fractile is developing chips built around SRAM, a type of memory placed closer to the compute itself. The idea is to reduce the costly back-and-forth movement of data between processors and separate memory chips. That movement is one of the biggest bottlenecks in running large AI models quickly and cheaply.
The chips are reportedly not expected to be commercially ready until around 2027, so this is not a short-term fix. But it shows Anthropic wants more leverage over its compute supply chain. The company already uses Nvidia GPUs, Amazon Trainium, and Google TPUs, and Fractile would add another possible source.
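Why data movement dominates inference cost can be shown with a rough back-of-envelope estimate. The numbers below are illustrative assumptions, not figures from Fractile or any specific chip: when decoding is memory-bandwidth-bound, every generated token must stream the model's weights from memory, so bandwidth divided by model size caps generation speed.

```python
# Rough, illustrative estimate of memory-bandwidth-bound inference speed.
# All numbers are hypothetical examples, not vendor specifications.

def max_tokens_per_sec(params_billions: float, bytes_per_param: int,
                       bandwidth_gb_s: float) -> float:
    """Upper bound on single-stream decode speed when each token
    requires streaming all model weights from memory once."""
    model_bytes = params_billions * 1e9 * bytes_per_param
    return bandwidth_gb_s * 1e9 / model_bytes

# Example: a 70B-parameter model in 16-bit weights on a 3 TB/s accelerator.
print(round(max_tokens_per_sec(70, 2, 3000), 1))  # ~21.4 tokens/sec
```

Keeping weights in on-chip SRAM next to the compute, as Fractile reportedly aims to do, attacks exactly this ceiling by shrinking the distance and cost of each weight access rather than raising off-chip bandwidth.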
🧠RESEARCH
Eywa lets language AIs work with specialized scientific AIs built for tables, time-series data, formulas, and other non-text inputs. Instead of forcing everything into words, it gives expert models a language bridge. Tests across science fields show better answers, less text processing, and faster runs than language-only agents.
Edit-R1 trains image editors using a reasoning checker, not just a single score. The checker breaks an instruction into parts, judges whether each edit follows them, and rewards better results. This helps avoid unwanted changes, like editing a shirt but also changing a hat, and improves models such as FLUX.
Microsoft proposes making fake but realistic computers full of folders, files, spreadsheets, and presentations. AI agents then practice month-long office projects inside them, including searching files, coordinating with coworkers, and producing deliverables. In 1,000 simulations, the practice improved agents and could scale without using private user data.
🗞️MORE NEWS
Grok Can Now Speak in Your Voice: xAI launched Custom Voices, letting users clone a voice from a short recording and use it in Grok’s text-to-speech and voice-agent tools. It also added a Voice Library for teams to manage voices, with safety checks meant to stop people from cloning someone else’s voice.
A Test AI Assistant Accidentally Appears: Google briefly published COSMO, an experimental Android AI assistant, on the Play Store before removing it. The app looked like a test version of a future assistant, with tools for writing documents, suggesting calendar events, finding photos, summarizing conversations, and running some AI directly on the phone.
Companies Get a Control Center for AI Workers: Microsoft made Agent 365 generally available for business customers. It helps companies see, manage, and secure AI agents — meaning software helpers that can do tasks for people — including agents from Microsoft, AWS, Google Cloud, Zendesk, Genspark, n8n, and others.
OpenAI Makes It Easier to Move Coding Work Over: OpenAI’s Switch to Codex page shows a simple migration path for developers moving into the Codex app. It can scan an existing setup, create an import checklist, copy settings like project rules and plugins, then let users continue coding work inside Codex with fewer interruptions.