NATURAL 20

The Mysterious AI That Just Beat the Market

Deep Reasoning, Automated Research, Privacy Losses, and a Pivot to Hardware.

In partnership with

A free newsletter read by 117,000 marketers

The best marketing ideas come from marketers who live it.

That’s what this newsletter delivers.

The Marketing Millennials is a look inside what’s working right now for other marketers. No theory. No fluff. Just real insights and ideas you can actually use—from marketers who’ve been there, done that, and are sharing the playbook.

Every newsletter is written by Daniel Murray, a marketer obsessed with what goes into great marketing. Expect fresh takes, hot topics, and the kind of stuff you’ll want to steal for your next campaign.

Because marketing shouldn’t feel like guesswork. And you shouldn’t have to dig for the good stuff.

Hi there!

If your week has felt like “wait… didn’t we just have a huge AI drop yesterday?”, you’re not alone. We’re seeing a shift from AI that just “chats” to AI that thinks, and a whole new way of building software that’s more about the “vibe” than the code.

Today:

  • The Mysterious AI That Just Beat the Market

  • Gemini 3 “Deep Think”: Google’s Slow-Burn Superbrain Mode

  • Google & Replit Forge the Next-Gen Dev Hub

  • Anthropic’s AI Now Interviews Humans at Scale

  • 20 Million Chat Logs Headed to the NYT

  • Trump’s 2026 Bet on Bots

New AI Might Crash The Markets

A new AI competition called Alpha Arena gave $320,000 in real capital to various large language models (LLMs) to trade actual stocks and crypto. While most models underperformed, a mysterious model, likely named “Profit,” outperformed them all, returning over 12% in two weeks.

Unlike past hype, this model uses an evolutionary strategy to self-improve trading code in a feedback loop, echoing techniques from Google DeepMind and Nvidia. It's early, but if results hold, this could mark a breakthrough in AI-managed investing. Still, the creators caution: this isn’t financial advice, and transparency will be key to proving it’s not a scam.
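The “Profit” model’s internals are not public, so the following is only a minimal illustration of the general pattern described above: an evolutionary loop that mutates candidate strategy parameters, scores them against historical data, and keeps the best performers for the next generation. The toy `backtest` fitness function and the price series are invented for the example.

```python
import random

# Toy evolutionary feedback loop (NOT the actual "Profit" system):
# mutate strategy parameters, score them on historical prices,
# keep the top performers, repeat.

def backtest(threshold, prices):
    """Hypothetical fitness: buy below the threshold, sell above it,
    and return the final portfolio value starting from 100 cash."""
    cash, units = 100.0, 0.0
    for p in prices:
        if p < threshold and cash > 0:     # buy signal
            units, cash = cash / p, 0.0
        elif p > threshold and units > 0:  # sell signal
            cash, units = units * p, 0.0
    return cash + units * prices[-1]       # mark open position to market

def evolve(prices, generations=30, pop_size=20):
    lo, hi = min(prices), max(prices)
    population = [random.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=lambda t: backtest(t, prices), reverse=True)
        elite = ranked[: pop_size // 4]    # keep the top quarter unchanged
        # refill the population with mutated copies of the elite
        population = elite + [t + random.gauss(0, 1.0) for t in elite for _ in range(3)]
    return max(population, key=lambda t: backtest(t, prices))

random.seed(0)
prices = [10, 12, 9, 14, 8, 15, 11, 16, 10, 18]
best = evolve(prices)
```

Because the elite survive each generation unchanged, the best fitness in the population can only go up, which is the self-improving feedback loop in miniature. A real system would evolve code or model prompts rather than a single number, and would need out-of-sample validation to rule out overfitting.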

Google just flipped on a new “Deep Think” mode for Gemini Ultra subscribers in the Gemini app, built on the Gemini 3 model. The idea: instead of answering quickly, the model spins up “advanced parallel thinking,” exploring multiple possible explanations or solution paths at once before settling on an answer.

This isn’t aimed at “draft me a polite email” level tasks. Google explicitly frames it as a mode for more complex work: math-heavy reasoning, scientific problems, and gnarly coding questions. Under the hood, it’s an evolution of the earlier Gemini 2.5 Deep Think variant that recently posted strong results on an International Mathematical Olympiad-style benchmark and a major programming contest. 

A few practical details:

  • It’s currently available to Google AI Ultra customers (about $250/month in the US for the standard Ultra plan).

  • To use it, you pick “Deep Think” in the input bar and select Gemini 3 Pro inside the Gemini app.

  • The rollout clearly comes as Google is feeling competitive pressure from DeepSeek’s open-source math models and rumors of a new OpenAI reasoning model landing very soon.

What this means for you: this is another clear step into the “slow reasoning” era, where you’ll increasingly trade speed for depth. Expect modes like this to show up everywhere: code agents, research copilots, maybe even long-form content planning. We’re moving away from “chatbot that answers quickly” toward “assistant that takes a beat, thinks hard, and then shows its work.”
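Google has not published Deep Think’s internals, but the “explore multiple paths, then settle” idea resembles the well-known self-consistency pattern, sketched below with a hypothetical `sample_solution_path` stand-in for a single sampled model call:

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

# Illustrative sketch only, not Deep Think's actual algorithm:
# sample several independent solution paths in parallel, then
# settle on the answer most of them agree on (majority vote).

def sample_solution_path(question: str, seed: int) -> str:
    # Hypothetical stand-in: a real system would call the model
    # with sampling (temperature > 0) so each path can differ.
    candidates = ["42", "42", "41", "42", "40"]
    return candidates[seed % len(candidates)]

def deep_answer(question: str, n_paths: int = 5) -> str:
    with ThreadPoolExecutor(max_workers=n_paths) as pool:
        paths = list(pool.map(lambda s: sample_solution_path(question, s),
                              range(n_paths)))
    # Settle on the most common final answer across paths.
    return Counter(paths).most_common(1)[0][0]

result = deep_answer("What is 6 * 7?")
```

The trade-off is exactly the one described above: n parallel paths cost roughly n times the compute of a single quick answer, in exchange for depth and reliability.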

On the tools side, Google Cloud just locked in a multi-year partnership with Replit, the poster child of “vibe coding,” in a move explicitly framed as a challenge to Anthropic and Cursor. 

The gist of the deal:

  • Replit doubles down on Google Cloud infrastructure and will plug Google’s models (Gemini) more deeply into its platform.

  • Google gets a strong foothold in the fast-growing “vibe coding” world: tools where you mostly describe what you want and let the AI do the scaffolding, boilerplate, and a big chunk of the logic.

Why this matters more than just “yet another partnership”:

  • Vibe coding is becoming a real workflow, not a toy. A single motivated dev with tools like Replit or Cursor can now ship the kind of software that used to require a small team.

  • Google vs Anthropic vs Cursor is no longer just about raw model benchmarks; it’s about owning the place where builders actually live all day. Whoever controls the “home base” for coding (VS Code + agents, Cursor, Replit, etc.) controls a lot of demand for compute and models.

  • For you (or anyone building products), it means there’s a strong chance your “IDE” in 12–18 months will look more like a chatty agentic environment than a plain text editor.

So if it feels like the AI dev-tool space is consolidating into a few “gravity wells”, you’re not imagining it. This Google–Replit alignment is one of those moves that will quietly shape where future developers spend their time.

Anthropic dropped something a bit different and honestly, pretty meta: Anthropic Interviewer, a Claude-powered system that runs large numbers of in-depth interviews with people about how they use AI and how it affects their lives.

Here’s how it works:

  • They built an automated interviewer that runs 10–15 minute adaptive interviews with participants directly in Claude.ai.

  • Under the hood there’s a three-phase workflow:

    • Planning: the AI, guided by a system prompt and hypotheses from Anthropic’s researchers, generates an interview rubric and question flow.

    • Interviewing: Claude conducts real-time interviews, asking follow-ups based on what each person says.

    • Analysis: Human researchers then collaborate with the AI to cluster themes, pull out quotes, and answer research questions at scale.
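Anthropic hasn’t released the Interviewer’s code or prompts, so here is only a hedged sketch of the three-phase workflow above, with a hypothetical `chat(prompt)` stand-in for a model call and an invented rubric:

```python
# Sketch of the plan / interview / analyze phases described above.
# `chat` is a hypothetical stand-in for a real model call.

def chat(prompt: str) -> str:
    return f"[model response to: {prompt[:40]}...]"

def plan(hypotheses: list[str]) -> list[str]:
    # Phase 1: turn researcher hypotheses into an interview rubric.
    return [chat(f"Write one open-ended interview question probing: {h}")
            for h in hypotheses]

def interview(rubric: list[str], get_reply) -> list[tuple[str, str, str]]:
    # Phase 2: adaptive interviewing, where each answer drives a follow-up.
    transcript = []
    for question in rubric:
        answer = get_reply(question)
        follow_up = chat(f"Given the answer {answer!r}, ask a follow-up to {question!r}")
        transcript.append((question, answer, follow_up))
    return transcript

def analyze(transcripts) -> str:
    # Phase 3: researchers and the model cluster themes across transcripts.
    return chat(f"Cluster the recurring themes in {len(transcripts)} transcripts")

rubric = plan(["AI anxiety at work", "which tasks feel like real craft"])
transcript = interview(rubric, get_reply=lambda q: "I use AI for notes, not judgment.")
summary = analyze([transcript])
```

The structural point is that only the interviewing phase scales with the number of participants; planning happens once and analysis runs over the pooled transcripts, which is what makes 1,250 in-depth interviews tractable.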

For the first study, Anthropic Interviewer talked to 1,250 professionals:

  • 1,000 from the general workforce

  • 125 scientists

  • 125 creatives (artists, writers, etc.)

A few early findings that jumped out:

  • Overall sentiment about AI at work is more positive than negative, but anxieties cluster around education, artist displacement, and security.

  • Workers want to keep the parts of their job that feel like their “real” craft while offloading repetitive, identity-less tasks to AI.

  • Creatives are using AI to boost output but feel real stigma and worry about what it means for human creativity and income.

  • Scientists like AI for “side tasks” (writing, debugging code), but still don’t trust it fully for core research like designing experiments.

Anthropic is also releasing the anonymized interview data publicly on Hugging Face so other researchers can study it, a pretty big deal if you care about the social science side of AI. 

What I love about this one: it’s AI being used not to replace people, but to listen to them at scale. It’s like giving a sociology department a thousand tireless grad students who never get bored of conducting interviews.

🧠RESEARCH

This report introduces Qwen3-VL, an advanced AI model that understands text, images, and video simultaneously. It can process massive amounts of information—like long movies—in a single session. By improving how it connects visual details with language, the model excels at complex tasks like analyzing documents and navigating computer screens.

This paper explores "Deep Research" agents—AI designed to solve open-ended problems by actively searching the web. Unlike simple chatbots, these systems plan queries, gather evidence, and verify facts before answering. The authors outline how these tools work and identify the challenges in building reliable, autonomous research assistants.

MG-Nav enables robots to find their way through unfamiliar buildings without prior practice. It builds a simplified memory map of the area to plan long paths while dodging immediate obstacles. This two-part system helps robots reach specific visual targets, like finding a specific room, even in homes they have never seen.


🗞️MORE NEWS

OpenAI Loses Privacy Fight
A judge has ordered OpenAI to hand over 20 million chat records to the New York Times for an ongoing copyright lawsuit. The court rejected the company's privacy concerns, ruling that removing user names provides enough safety. This evidence will help determine if the AI illegally used newspaper articles to create its responses.

Trump’s 2026 Robotics Plan
The Trump administration plans to launch a major initiative in 2026 to boost American robot manufacturing. This strategy shifts focus from computer software to physical machines in hopes of competing more effectively with China. Officials believe that strengthening the robotics industry is essential for rebuilding the nation's factories.

Altman Eyes Space Rivalry
OpenAI leader Sam Altman explored a deal to build a rocket company that would compete with Elon Musk’s SpaceX. His goal is to eventually launch computer data centers into orbit to support the massive energy needs of artificial intelligence. While the talks have stalled, the move highlights his desire to expand into space exploration.

AI Helps Neurodiverse Workers
Employees with conditions like ADHD and dyslexia report that AI tools are helping them succeed in their careers. These digital assistants handle difficult tasks like note-taking and organization, which creates a fairer work environment. Research shows that these workers often value AI help even more than their colleagues do.

Cloudflare CEO’s Warning
Cloudflare CEO Matthew Prince warns that new AI tools could hurt the internet by keeping users away from original websites. He argues that if computer programs answer questions without sending traffic to creators, those writers and artists will lose money. Prince suggests that AI companies must pay for the content they use to prevent the web from dying.

What'd you think of today's edition?
