Dynamics Lab Unveils Mirage Engine

PLUS: xAI Operating More Turbines Than Permitted, Tencent Unveils Smarter Open-Source AI, and more.

In partnership with

Marketing ideas for marketers who hate boring

The best marketing ideas come from marketers who live it.

That’s what this newsletter delivers.

The Marketing Millennials is a look inside what’s working right now for other marketers. No theory. No fluff. Just real insights and ideas you can actually use—from marketers who’ve been there, done that, and are sharing the playbook.

Every newsletter is written by Daniel Murray, a marketer obsessed with what goes into great marketing. Expect fresh takes, hot topics, and the kind of stuff you’ll want to steal for your next campaign.

Because marketing shouldn’t feel like guesswork. And you shouldn’t have to dig for the good stuff.

Today:

  • Dynamics Lab Unveils Mirage Engine

  • Publishers File EU Complaint Against Google

  • ChatGPT Becomes Top News Source

  • xAI Operating More Turbines Than Permitted

  • Tencent Unveils Smarter Open-Source AI

AI Creates Videogames In Real-Time 🤯 | MIRAGE by ex-Google ex-NVIDIA ex-SEGA ex-Microsoft Engineers

Dynamics Lab unveiled Mirage, an AI-powered game engine that generates entire game worlds in real time as you play. Instead of relying on prewritten code, neural networks ("pattern-finding math systems") build streets, cars, and weather from text or controller input.

Players can type "make it rain" and the game instantly adds rain. Two demos, a city action game and a coastal racer, show live, user-generated scenes. Visual quality and responsiveness still lag behind traditional engines, but the progress hints at fully flexible game worlds to come.
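Mirage's actual interface hasn't been published, but the loop the demos describe, where each user input conditions the next generated frame, can be sketched as a toy. Everything below (the `apply_prompt` stand-in, the state fields) is hypothetical and only illustrates the shape of a prompt-conditioned game loop:

```python
def apply_prompt(state: dict, prompt: str) -> dict:
    """Stand-in for the neural world model: maps a text command plus
    the current scene state to an updated scene state. (Hypothetical;
    the real system would run a generative model here.)"""
    new_state = dict(state)
    if "rain" in prompt.lower():
        new_state["weather"] = "rain"
    return new_state

def game_loop(prompts: list[str]) -> dict:
    """Each frame, fold the latest user input into the scene state,
    rather than executing prewritten game logic."""
    state = {"weather": "clear", "frame": 0}
    for prompt in prompts:
        state = apply_prompt(state, prompt)
        state["frame"] += 1  # next frame is generated from the updated state
    return state

final = game_loop(["drive forward", "make it rain"])
print(final)  # weather flips to "rain" after the second command
```

The point of the sketch is the control flow: the "engine" is just repeated conditioning of a generator on state plus input, which is why typed commands can reshape the world mid-play.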

Publishers File EU Complaint Against Google

Independent publishers have filed an EU antitrust complaint against Google, claiming its AI Overviews feature unfairly uses their content without consent. These AI-generated summaries appear at the top of search results, allegedly costing publishers web traffic and revenue. Publishers say they can't opt out without disappearing from Google's search results entirely. Google defends the feature as beneficial for discovery. Regulators in the EU and UK are now reviewing the complaint.

Why this matters

  1. Content Ownership Conflict: Highlights a growing legal and ethical battle over using web content to train and power AI tools.

  2. Search Ecosystem Impact: Raises serious questions about how AI summaries might reduce traffic to original content creators, threatening journalism and knowledge diversity.

  3. Regulatory Pressure: Signals increasing global scrutiny on Big Tech’s AI rollouts, which may shape future policies on AI transparency and content usage.

ChatGPT Becomes Top News Source

ChatGPT is rapidly becoming a go-to source for news: news-related prompts rose 212% from January 2024 to May 2025, while Google news searches fell 5%. The shift favors outlets partnered with OpenAI, such as Reuters and Business Insider, which saw large traffic boosts. Meanwhile, Google's AI Overviews reduce clicks to news sites, threatening traditional publishers and raising concerns over AI-driven content control.

Why this matters

  1. AI as a News Hub: ChatGPT is becoming a central platform for real-time information, changing how people access and trust news.

  2. Power Shift in Media: AI partnerships are deciding which publishers gain visibility, disrupting traditional media ecosystems.

  3. Search Engine Disruption: Google’s dominance is challenged as AI interfaces alter web traffic patterns and content monetization models.

xAI Operating More Turbines Than Permitted

xAI, Elon Musk's AI company, holds a permit to run 15 gas turbines at its Memphis data center, but thermal imaging shows 24 in operation. Critics say this exposes nearby residents, mostly Black communities, to harmful air pollution without proper safeguards. The permit mandates emission controls and reporting, but watchdogs argue it ignores past violations and ongoing risks. Concerns are growing as xAI plans a second data center in another vulnerable neighborhood.

Why this matters

  1. AI Infrastructure’s Environmental Impact: Highlights the environmental cost of powering AI models, especially as energy demands for training and inference grow rapidly.

  2. Ethics and Equity: Raises urgent concerns about environmental injustice, with AI infrastructure disproportionately affecting marginalized communities.

  3. Transparency and Accountability: Shows the need for clearer regulatory oversight and corporate responsibility in the AI sector's physical operations and energy sourcing.

🧠RESEARCH

WebSailor is a new training method that helps AI systems reason better through complex web searches. It generates hard tasks with missing or ambiguous information, then trains the model to solve them. The result: open-source models can now match top proprietary systems on difficult web-navigation challenges.

LangScene-X creates detailed 3D scenes from just a few images and natural language prompts. Using a new video diffusion model and a smart language compressor, it builds scenes with accurate visuals, structure, and meaning. Unlike past methods, it works well even with limited views and doesn’t need retraining for each scene.

A third paper explores a shift in AI reasoning, from merely looking at images to actively thinking with them. It outlines how future models may use visuals as part of their thought process, as humans do, and maps out key stages, methods, and future goals to guide more advanced, human-like multimodal AI.

🛠️TOP TOOLS

GoEnhance - Create AI-animated shorts in minutes

Face26 - Convert your old, blurry, and low-quality photos into vivid, high-definition portraits, colored images, or animated photos.

BigSpeak AI - AI-Powered Voice Generation and Content Creation

NightCafe AI - Create amazing artworks in seconds

Caveduck - A platform where users can create, customize, and interact with characters in different scenarios.

🗞️MORE NEWS

  • Tencent released Hunyuan-A13B, an open-source model that switches between fast and deep reasoning based on task complexity. Trained on 20 trillion tokens, it excels at science problems and outperforms rivals on tool-use benchmarks.

  • The U.S. plans to block AI chip exports to Malaysia and Thailand to stop Nvidia processors from being funneled to China. The draft rule aims to close loopholes in existing restrictions but is not yet final.

  • Researchers hid secret commands like “positive review only” in academic papers to manipulate AI-driven peer review. Found in 17 preprints, these hidden prompts sparked debate, with some universities defending and others retracting the papers.

  • Kulveer Taggar launched Phosphor Capital, a $34M fund investing only in Y Combinator startups. Backed by YC CEO Garry Tan, Phosphor has funded over 200 companies, focusing heavily on AI and early-stage innovation.

  • Researchers found that simple, unrelated phrases like “cats sleep most of their lives” can triple error rates in advanced AI models. These distractions also slow responses and raise costs, revealing serious flaws in model reasoning.

  • An AI model detected pancreatic cancer with 91.8% accuracy on CT scans at diagnosis and 53.9% on scans over a year earlier. It showed strong performance even in early-stage cases, suggesting potential for earlier detection.

  • Johns Hopkins’ AI model MAARS predicts sudden cardiac death with nearly 90% accuracy by analyzing heart MRIs and full medical records. It outperforms doctors, revealing hidden scarring patterns and reducing unnecessary defibrillator use.

What'd you think of today's edition?
