New Jobs Focus On AI Coordination

PLUS: Apple Reboots LLM-Based Siri Project, Anthropic Apologizes For Legal Error and more.

In partnership with

Find out why 1M+ professionals read Superhuman AI daily.

In 2 years you will be working for AI

Or an AI will be working for you

Here's how you can future-proof yourself:

  1. Join the Superhuman AI newsletter – read by 1M+ people at top companies

  2. Master AI tools, tutorials, and news in just 3 minutes a day

  3. Become 10X more productive using AI

Join 1,000,000+ pros at companies like Google, Meta, and Amazon that are using AI to get ahead.

Today:

  • New Jobs Focus On AI Coordination

  • OpenAI Launches Codex Coding Assistant

  • Nvidia Eyes Investment In PsiQuantum

  • Apple Reboots LLM-Based Siri Project

  • Anthropic Apologizes For Legal Error

Controlling Agent Swarms is your ONLY job...

AI agents are getting better at complex tasks, but the real value lies with the people who can manage and coordinate them. As in the game Factorio, future jobs will center on designing systems that use AI efficiently.

Rather than knowing how to perform every task yourself, the key skill will be turning cheap AI compute and limited resources into valuable results. Orchestrating AI agents will be the next in-demand profession.
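
To make "orchestration" concrete, here is a minimal sketch of the fan-out/fan-in pattern a coordinator might use. It assumes a hypothetical call_agent helper standing in for whatever LLM or agent API you actually use:

```python
# Minimal orchestration sketch: fan subtasks out to worker agents and merge results.
# `call_agent` is a hypothetical stand-in for a real LLM/agent API call.
from concurrent.futures import ThreadPoolExecutor


def call_agent(role: str, task: str) -> str:
    """Placeholder for a real agent call (e.g., an LLM API request)."""
    return f"[{role}] completed: {task}"


def orchestrate(goal: str, subtasks: list[str]) -> str:
    # Fan out: each subtask goes to its own worker agent, run concurrently.
    with ThreadPoolExecutor(max_workers=len(subtasks)) as pool:
        results = list(pool.map(lambda t: call_agent("worker", t), subtasks))
    # Fan in: a reviewer agent merges the partial results into one deliverable.
    summary = call_agent("reviewer", f"Combine these into '{goal}':\n" + "\n".join(results))
    return summary


if __name__ == "__main__":
    print(orchestrate(
        goal="launch landing page",
        subtasks=["draft copy", "generate hero image prompt", "write signup form HTML"],
    ))
```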

OpenAI today launched Codex, a cloud-based coding agent that handles many tasks in parallel. Powered by codex-1, a model trained on real-world software projects with reinforcement learning (learning from feedback on its attempts), Codex reads, edits, and tests code inside a secure, isolated workspace (a locked-down container). It records every step it takes, follows simple project guide files, and is rolling out first to ChatGPT Pro, Enterprise, and Team users, with broader access and pay-as-you-go pricing to follow.

Why this matters

  1. Step toward autonomous software engineering — Codex shows that AI can handle parallel, end-to-end coding tasks, hinting at future self-managed development pipelines.

  2. Emphasis on transparency and safety — The agent logs every action and refuses malicious requests, offering a practical blueprint for responsible AI deployment.

  3. Broader ecosystem impact — With pricing, CLI support, and GitHub integration, Codex could accelerate AI adoption in everyday developer workflows, influencing tooling standards industry-wide.
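
None of Codex's own API appears below; as a rough, hypothetical illustration of the workflow described above (many isolated jobs in parallel, every step logged), here is a generic sketch that uses temporary directories as stand-ins for locked-down containers:

```python
# Illustrative only: a generic "run each job in its own isolated workspace and
# log every step" pattern. This is NOT the Codex API; temp directories stand in
# for the sandboxed containers a real service would use.
import logging
import subprocess
import tempfile
from concurrent.futures import ThreadPoolExecutor

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")


def run_job(name: str, commands: list[list[str]]) -> int:
    """Run a job's commands inside a throwaway working directory, logging each step."""
    with tempfile.TemporaryDirectory(prefix=f"{name}-") as workdir:
        for cmd in commands:
            logging.info("job=%s dir=%s running %s", name, workdir, cmd)
            result = subprocess.run(cmd, cwd=workdir, capture_output=True, text=True)
            logging.info("job=%s exit=%d stdout=%r", name, result.returncode, result.stdout[:200])
            if result.returncode != 0:
                return result.returncode
    return 0


# Two toy jobs; assumes a `python` executable is on PATH.
jobs = {
    "write-readme": [["python", "-c", "open('README.md','w').write('# demo\\n')"]],
    "run-tests": [["python", "-c", "print('tests would run here')"]],
}

with ThreadPoolExecutor() as pool:
    exit_codes = dict(zip(jobs, pool.map(lambda kv: run_job(*kv), jobs.items())))
print(exit_codes)
```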

Nvidia is close to funding PsiQuantum, a startup building quantum computers from particles of light (photons). PsiQuantum is seeking at least $750 million at a roughly $6 billion valuation. The deal would be Nvidia's first bet on hardware that could one day outpace today's chips, and it follows CEO Jensen Huang's recently softened stance on quantum computing. PsiQuantum hopes that machines with millions of "qubits" (units that can represent 0 and 1 at the same time) will eventually solve chemistry and other problems far faster than classical computers.

Why it matters

  1. Signals a new hardware path — Nvidia's backing shows the top AI-chip maker now sees quantum machines as future partners for AI processors, hinting at mixed AI-quantum systems.

  2. Speeds up light-based quantum research — Added cash could help PsiQuantum reach reliable “photonic” (light-powered) qubits sooner, unlocking faster drug-discovery and materials modeling that feed AI science.

  3. Boosts investor confidence — When the leading GPU supplier invests, it encourages more funding for startups that blend quantum and AI ideas, widening the innovation pipeline.
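
To make the "both 0 and 1" idea concrete, here is a toy classical simulation of a single qubit's measurement statistics. It says nothing about PsiQuantum's photonic hardware; it only illustrates what superposition means:

```python
# A qubit's state is a pair of amplitudes (a, b) with |a|^2 + |b|^2 = 1:
# |a|^2 is the probability of measuring 0 and |b|^2 of measuring 1, so the
# qubit is "both 0 and 1" until measured. Toy illustration only.
import math
import random

# Equal superposition (what a Hadamard gate produces from the |0> state).
a, b = 1 / math.sqrt(2), 1 / math.sqrt(2)


def measure(a: float, b: float) -> int:
    """Collapse the state: return 0 with probability |a|^2, else 1."""
    return 0 if random.random() < abs(a) ** 2 else 1


samples = [measure(a, b) for _ in range(10_000)]
print("P(0) ≈", samples.count(0) / len(samples))  # ~0.5
print("P(1) ≈", samples.count(1) / len(samples))  # ~0.5
```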

Apple’s reboot of its troubled Apple Intelligence effort centers on a new LLM-based Siri (an LLM is a large language model trained on vast amounts of text). Leadership hesitated to fund AI work, started late, and tried bolting new chatbot code onto the old Siri, spawning bugs, while marketing over-hyped unfinished features. A team in Zurich is now rewriting Siri around the new model, drawing on private on-device data and web lookups, and AI chief John Giannandrea has been sidelined for now as the effort tries to recover.

Why it matters

  1. Proof that design trumps patchwork – Apple’s move from “bolt-on” fixes to a full LLM rebuild shows large firms increasingly see clean-slate AI architecture as the only way to compete.

  2. Privacy-first training experiment – Using on-device email language to create synthetic training data could set a precedent for privacy-preserving data collection in consumer AI (a toy sketch of this pattern follows the list).

  3. Market ripple effect – Apple’s course correction may spur rivals and investors to fund deeper, model-centric assistant upgrades across the industry.
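
The privacy point in item 2 can be illustrated with a toy sketch of one common pattern; this is not Apple's actual pipeline. A server proposes synthetic examples, each device votes for the candidate closest to its private local text, and noise is added before any vote leaves the device:

```python
# Toy sketch of privacy-preserving data selection (NOT Apple's actual pipeline):
# the server proposes synthetic candidates; each device votes for the candidate
# closest to its private local text and adds noise (randomized response) before
# the vote leaves the device. Only noisy votes are aggregated server-side.
import random
from collections import Counter

CANDIDATES = [
    "Lunch tomorrow at noon?",
    "Please review the attached quarterly report.",
    "Flight delayed, landing around 9pm.",
]


def similarity(a: str, b: str) -> float:
    """Crude word-overlap similarity, standing in for an on-device embedding."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)


def device_vote(private_text: str, keep_prob: float = 0.75) -> int:
    best = max(range(len(CANDIDATES)), key=lambda i: similarity(private_text, CANDIDATES[i]))
    # Randomized response: sometimes report a random candidate instead of the true one.
    return best if random.random() < keep_prob else random.randrange(len(CANDIDATES))


# Simulated fleet of devices; each private text never leaves its device.
fleet = ["are we still on for lunch tomorrow?", "lunch at noon works", "report attached for review"]
votes = Counter(device_vote(text) for text in fleet)
print("Noisy aggregate votes per synthetic candidate:", dict(votes))
```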

🧠RESEARCH

Researchers improved how AI models reason by training them to think in three distinct ways: deduction (drawing logical conclusions from rules), induction (generalizing from patterns), and abduction (inferring the most plausible explanation). This makes the models more consistent and accurate, boosting performance on math, coding, and science tasks without relying on random "aha!" moments.
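
As a loose illustration (not the paper's training method), the sketch below prompts one model in each of the three modes and checks whether the answers agree; ask_model is a hypothetical stand-in for any chat-completion call:

```python
# Illustrative sketch only: query the same model in three reasoning modes and
# check whether the answers agree. `ask_model` is a hypothetical placeholder.

MODES = {
    "deduction": "Apply the given rules step by step and state what must follow.",
    "induction": "Look for a pattern in the examples and generalize it.",
    "abduction": "Propose the most plausible explanation for the observation.",
}


def ask_model(prompt: str) -> str:
    """Placeholder: swap in a real LLM call here."""
    return "42"


def reason_three_ways(question: str) -> dict[str, str]:
    answers = {
        mode: ask_model(f"{instruction}\n\nQuestion: {question}\nAnswer:")
        for mode, instruction in MODES.items()
    }
    # Simple consistency check across the three modes.
    answers["consistent"] = str(len(set(answers.values())) == 1)
    return answers


print(reason_three_ways("What number continues the sequence 6, 18, 30?"))
```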

Parallel Scaling is a new way to boost language model performance without growing the model or adding much inference cost. By running several transformed copies of the input through the same model in parallel and combining their outputs, researchers achieved quality gains with less memory and latency overhead than simply training a bigger model. It is efficient, flexible, and can be applied to existing models.
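
Here is a rough sketch of the idea as summarized above (details differ from the actual paper): several learned transformations of the same input share one backbone, run side by side, and their outputs are combined with learned weights.

```python
# Rough sketch of the parallel-scaling idea (not the paper's exact method):
# P learned input transformations share ONE backbone and their outputs are
# combined with learned weights. Streams are shown sequentially for clarity;
# in practice they would be batched together.
import torch
import torch.nn as nn


class ParallelScaled(nn.Module):
    def __init__(self, base: nn.Module, d_model: int, num_streams: int = 4):
        super().__init__()
        self.base = base  # shared backbone; its parameters are NOT duplicated
        self.input_transforms = nn.ModuleList(
            nn.Linear(d_model, d_model) for _ in range(num_streams)
        )
        self.mix_logits = nn.Parameter(torch.zeros(num_streams))  # learned aggregation weights

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        streams = [self.base(t(x)) for t in self.input_transforms]  # P passes through one model
        weights = torch.softmax(self.mix_logits, dim=0)
        return sum(w * s for w, s in zip(weights, streams))         # weighted combination


# Tiny demo with a stand-in backbone.
backbone = nn.Sequential(nn.Linear(16, 16), nn.ReLU(), nn.Linear(16, 16))
model = ParallelScaled(backbone, d_model=16, num_streams=4)
print(model(torch.randn(2, 16)).shape)  # torch.Size([2, 16])
```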

This paper proposes a new way to improve AI performance by optimizing the system prompt—the base instructions guiding an AI model—using meta-learning. Unlike previous methods focused on task-specific prompts, this approach creates general prompts that work well across many tasks, boosting adaptability and performance with fewer adjustments.
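
A toy sketch of the general selection idea, not the paper's meta-learning algorithm: score each candidate system prompt across a mix of tasks and keep the one that transfers best. run_task is a hypothetical placeholder for an LLM call plus an automatic grader.

```python
# Toy sketch of optimizing a SYSTEM prompt across many tasks (not the paper's
# meta-learning algorithm): average each candidate's score over all tasks and
# keep the one that generalizes best.
import random

CANDIDATE_SYSTEM_PROMPTS = [
    "You are a careful assistant. Think step by step and verify your answer.",
    "Answer as briefly as possible.",
    "Restate the problem, make a plan, then solve it.",
]

TASKS = ["math word problem", "code bug fix", "science QA"]


def run_task(system_prompt: str, task: str) -> float:
    """Placeholder: call an LLM with `system_prompt` on `task`, grade the result in [0, 1]."""
    random.seed(hash((system_prompt, task)) % (2**32))  # deterministic fake score
    return random.random()


def select_general_prompt(candidates: list[str], tasks: list[str]) -> str:
    # Averaging across *all* tasks rewards prompts that transfer, not ones that overfit.
    scores = {p: sum(run_task(p, t) for t in tasks) / len(tasks) for p in candidates}
    return max(scores, key=scores.get)


print(select_general_prompt(CANDIDATE_SYSTEM_PROMPTS, TASKS))
```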

🛠️TOP TOOLS

MathGPT - AI math solver and calculator that provides instant solutions and step-by-step explanations for a wide range of mathematical problems.

Imagica - No-code AI development platform that empowers users to create AI applications quickly and easily.

Photosonic - AI-powered art generator developed by Writesonic that transforms text descriptions into unique digital images.

HitPaw Watermark Remover - AI-powered tool designed to efficiently remove watermarks, logos, text, and other unwanted elements from both images and videos.

v0 dev - Generative user interface system developed by Vercel Labs that leverages AI to create React code compatible with Shadcn UI and Tailwind CSS. 

🗞️MORE NEWS

  • Anthropic admitted its Claude AI chatbot caused a citation error in a legal filing, mistakenly altering details that made a real article seem fake. The company apologized, calling it an “embarrassing and unintentional mistake.”

  • Stability AI and Arm launched a small, open-source AI model that turns text into sound on smartphones. It creates 11-second stereo clips in 7 seconds, excels at sound effects, and runs efficiently on consumer devices.

  • Nvidia unveiled NVLink Fusion, a new tech to speed communication between AI chips, which it will sell to other chipmakers. CEO Jensen Huang also announced a Taiwan HQ and new desktop AI systems launching soon.

  • A Chinese startup, Synyi AI, has launched the world’s first AI-run clinic in Saudi Arabia. The system, called “Dr. Hua,” independently diagnoses and prescribes treatment, marking a major shift toward AI replacing human doctors for basic care.

  • China has launched the first 12 of 2,800 planned satellites to build an AI-powered supercomputer in space. Each satellite can process data on its own, reducing reliance on Earth stations and enabling real-time insights.

What'd you think of today's edition?
