OpenAI Launches GPT-5.2 Codex
PLUS: Mistral Releases OCR 3, ChatGPT Gets an App Store, and more.

Get the investor view on AI in customer experience
Customer experience is undergoing a seismic shift, and Gladly is leading the charge with The Gladly Brief.
It’s a monthly breakdown of market insights, brand data, and investor-level analysis on how AI and CX are converging.
Learn why short-term cost plays are eroding lifetime value, and how Gladly’s approach is creating compounding returns for brands and investors alike.
Join the readership of founders, analysts, and operators tracking the next phase of CX innovation.
Hey — quick note before your day gets busy.
Two of this week's stories feel connected: one you can use today, and one that explains why this era is so capital-hungry.
Today:
OpenAI Launches GPT-5.2 Codex
OpenAI Targets a $750 Billion Valuation
Tech Giants Join US 'Genesis' Science Mission
Mistral Releases OCR 3
ChatGPT Gets an App Store

OpenAI released GPT-5.2-Codex on December 18, 2025, positioning it as their most advanced “agentic” coding model — meaning it’s designed to act (use tools, run steps, keep going), not just answer questions.
What jumped out to me isn’t “it writes code.” It’s that the model is tuned for the stuff that usually breaks assistants:
Long-horizon work (it “compacts” context as it goes so it can keep its bearings over long sessions)
Big repo changes like refactors and migrations
More reliable tool use (think: terminal steps that don’t derail on the second command)
Better performance on Windows (small detail, big deal for real teams)
OpenAI also claims state-of-the-art results on SWE-Bench Pro and Terminal-Bench 2.0, both meant to test agent performance in real terminal environments.
Access / rollout: OpenAI says it’s live “in all Codex surfaces” for paid ChatGPT users, with API access planned in the coming weeks. They’re also running an invite-only trusted access track for vetted defensive security work.
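If you're planning to hit it over the API once that opens up, here's a minimal sketch of what the call could look like, assuming it ships through the existing Responses API under a "gpt-5.2-codex" model ID (both of those are my assumptions until the docs land):

```python
# Hedged sketch, not the official quickstart: assumes GPT-5.2-Codex lands in the
# existing Responses API under the model ID "gpt-5.2-codex".
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.responses.create(
    model="gpt-5.2-codex",  # assumed identifier; check the model list once access opens
    input=(
        "Upgrade the requests dependency to the latest major version and "
        "list every call site in this diff that needs to change."
    ),
)

print(response.output_text)
```

The interesting question won't be whether a single call like this works; it's whether the agentic loop wrapped around calls like this stays coherent on long tasks.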
Safety angle (important): The Codex system card addendum says it’s very capable in cybersecurity but not “High” under their Preparedness Framework, and notes additional safeguards (like sandboxing and configurable network access).
If you want a fast “is this real?” test: give it a repo task you’ve been avoiding—something like “upgrade dependency X and fix all failing tests,” or “perform this refactor across these 40 files but keep behavior identical,” then watch if it stays coherent through the boring parts.
Now the “why are they doing this” story.
A Reuters write-up of a report from The Information says OpenAI has held preliminary talks about raising money at around a $750B valuation, and that it could raise as much as $100B (again: talks, not a done deal).
A few grounding details from that same report:
Reuters notes it couldn’t immediately verify the report and OpenAI didn’t respond right away.
It would be about a 50% jump from a reported $500B valuation in October, tied to a deal where employees sold about $6.6B of shares.
It also mentions OpenAI laying groundwork to go public, potentially filing as early as 2H 2026, and floats a $1T IPO valuation as a possibility.
My read: even if the exact numbers shift, the signal is consistent—the appetite for compute is still the main character. The report explicitly frames this kind of raise as a response to the sector’s need for massive computing power.
What I’d watch next (because this is where “rumor” turns into “reality”):
who actually leads the round (and what strings come with it),
whether it’s paired with infrastructure commitments,
and what that implies for pricing, capacity, and product release velocity over the next 6–12 months.
Next up: the Genesis science mission. Here’s the core idea: the White House order frames Genesis as a coordinated effort to build an integrated platform that pulls together federal scientific datasets, supercomputing + secure cloud, and AI agents to automate parts of research workflows and speed breakthroughs.
DOE says it’s already signed collaboration agreements with 24 organizations (including Google, NVIDIA, and OpenAI) to advance the mission, and emphasizes that outputs should be architecture-agnostic (a fancy way of saying: not locked to one vendor’s stack by default).
Now the three announcements from the big players are basically the first wave of what each is bringing to the table:
Google DeepMind: They’re backing Genesis by giving scientists across all 17 DOE National Laboratories accelerated access to their AI-for-science tools, starting with an “AI co-scientist” running on Google Cloud (a multi-agent collaborator built on Gemini). The pitch is straightforward: help researchers sift huge literatures, generate hypotheses, and move faster from idea → testable proposal.
OpenAI: OpenAI and DOE signed an MOU to expand collaboration on AI + advanced computing in support of DOE initiatives (explicitly including Genesis). They also point to work already underway inside the national lab system — like a “1,000 Scientist AI Jam Session” across nine labs, deployments of advanced reasoning models on DOE lab supercomputers (including Los Alamos’ Venado), and work with Los Alamos on realistic bioscience evaluations for safe scientific use.
NVIDIA: NVIDIA says it’s joining Genesis as a private industry partner and also signed its own MOU with DOE outlining collaboration priorities. They call out focus areas like open AI science models, digital twins/simulation, robotics and autonomous labs, nuclear (fission/fusion), and quantum computing — basically the “everything that needs serious compute” bucket.
If you zoom out, what I think is genuinely notable here isn’t any single model or brand name. It’s the direction of travel:
We’re watching AI shift from “chat with a smart model” to “wire frontier models into the actual machinery of research” — national lab supercomputers, simulations, autonomous labs, and high-consequence evaluation loops. That’s a very different game than shipping a new app feature.
If you want one thing to watch next: whether this becomes a repeatable pipeline where labs can rapidly test, benchmark, and operationalize models across domains (fusion, materials, biology) without every lab reinventing the safety + tooling stack from scratch. The MOUs are the opening move; the real signal will be what turns into funded projects and running systems.
🧠RESEARCH
This paper introduces Step-GUI, a new AI system that interacts with software interfaces more effectively while drastically cutting training costs. It uses a self-correcting method to learn from its own actions and includes privacy features to keep sensitive data on-device.
Generative AI often confuses details when creating images with multiple specific characters. Scone solves this by teaching the model to explicitly distinguish between subjects before drawing them, ensuring each character retains its unique identity. This approach creates complex scenes without blending different characters' features together, outperforming current open-source models.
RoboTracer helps robots understand and move through physical space by combining visual data with precise distance measurements. Using a massive new dataset of spatial reasoning tasks, it trains robots to plan complex, multi-step paths more accurately. This system significantly beats previous models in handling dynamic, real-world environments and physical interactions.
🛠️TOP TOOLS
Each listing includes a hands-on tutorial so you can get started right away, whether you’re a beginner or a pro.
Audyo – Create Audio Like Writing a Doc - browser‑based text‑to‑speech tool that lets you create natural‑sounding voiceovers “as easy as typing.”
Augie AI – Revolutionizing Video Creation and Editing - AI‑assisted video editor that lets you turn ideas, scripts, and existing footage into polished social‑ready videos—fast.
Auto Seduction AI – AI Dating Assistant - helps you spark and steer dating‑app conversations.
🗞️MORE NEWS
Mistral's New Eye for Detail
Mistral just released OCR 3, a new tool that helps computers read text from images, even messy handwriting. It is significantly cheaper and more accurate than older versions, which is great news for businesses drowning in paperwork. You can try it out now on their website or plug it into your own software systems.
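If the "plug it into your own software systems" part is what you care about, here's a minimal sketch using Mistral's Python SDK and its existing OCR endpoint. The model alias below is the one the previous OCR release used; whether OCR 3 keeps it or gets its own ID is an assumption on my part, so check Mistral's docs first.

```python
# Minimal sketch: run a document through Mistral's OCR endpoint.
# Assumption: the new release is reachable via the "mistral-ocr-latest" alias,
# as the previous OCR model was -- verify the model ID against Mistral's docs.
import os
from mistralai import Mistral

client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

result = client.ocr.process(
    model="mistral-ocr-latest",  # assumed alias for the newest OCR model
    document={
        "type": "document_url",
        "document_url": "https://example.com/scanned-invoice.pdf",  # placeholder URL
    },
)

# Each page comes back with its extracted text as markdown.
for page in result.pages:
    print(page.markdown)
```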
ChatGPT Gets an App Store
You can now connect apps like Spotify and Photoshop directly to ChatGPT, turning the chatbot into a central hub for your tasks. OpenAI also released a set of building blocks that lets developers create their own custom tools to work inside the chat. This move brings us closer to a future where one AI assistant handles everything for you.
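The "building blocks" piece is OpenAI's Apps SDK, which is built on the Model Context Protocol (MCP). To give a feel for the shape of a custom tool, here's a tiny MCP-style server using the open-source mcp Python package; the tool name and logic are invented for illustration, and a real ChatGPT app layers the Apps SDK's registration and UI pieces on top of this.

```python
# Illustrative MCP-style tool server (the protocol ChatGPT apps build on).
# The "lookup_order" tool is made up for this sketch; a real app would follow
# OpenAI's Apps SDK docs for how ChatGPT discovers and renders it.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("order-lookup")  # hypothetical app name

@mcp.tool()
def lookup_order(order_id: str) -> str:
    """Return a human-readable status line for an order ID."""
    # Placeholder logic; a real tool would query your own backend here.
    return f"Order {order_id}: shipped, arriving Thursday."

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio by default
```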
Firefox’s AI Pivot Angers Fans
Firefox is pivoting to become an "AI browser" over the next few years, and its privacy-focused fans are furious. The company promises you can turn these features off, but users feel this betrays the browser's reputation for keeping things simple and secure. It is a bold risk that might alienate the very people who kept the browser alive.
OpenAI Hits Pause to Fix Issues
OpenAI is reportedly hitting the brakes on new experiments to fix speed and reliability issues with ChatGPT. With rivals like Google catching up, the company is worried their lead is slipping and wants to make sure their main product actually works well. This suggests the startup is facing some serious growing pains as it tries to stay on top.
What'd you think of today's edition?

