The Truth Behind MIT’s “95% GenAI Failures” Claim
PLUS: Meta Launches California Super PAC to Back Pro-AI Candidates, Anthropic Settles Author Lawsuit, Dodges $1T in Damages, and more.

Find out why 1M+ professionals read Superhuman AI daily.
In 2 years you will be working for AI
Or an AI will be working for you
Here's how you can future-proof yourself:
Join the Superhuman AI newsletter – read by 1M+ people at top companies
Master AI tools, tutorials, and news in just 3 minutes a day
Become 10X more productive using AI
Join 1,000,000+ pros at companies like Google, Meta, and Amazon that are using AI to get ahead.
Today:
The Truth Behind MIT’s “95% GenAI Failures” Claim
Google Confirms Nano-Banana — Now Free in Gemini App
Claude Pilots Chrome Assistant Mode
Meta Launches California Super PAC to Back Pro-AI Candidates
Anthropic Settles Author Lawsuit, Dodges $1T in Damages
MIT Viral Study DEBUNKED
The viral “95% of GenAI pilots fail” claim comes from an MIT report—but it’s misunderstood. That failure rate refers to expensive, custom-built enterprise tools, not popular general-purpose LLMs like ChatGPT.
In fact, over 80% of LLM pilots succeed, and roughly 90% of employees already use LLMs, often unofficially, because the general-purpose tools are more capable and cheaper than the bespoke ones. Misreporting has painted a false picture: the real failure lies in overpriced, poorly designed AI wrappers, not in LLM adoption.
Google Confirms Nano-Banana — Now Free in Gemini App
Google has rolled its new image-editing model into the Gemini app. It keeps faces and pets looking the same across edits while letting users swap outfits, change locations, or blend pictures together. You can repaint rooms, add objects, and apply the style of one photo to another. Multi-step edits are supported, and finished images carry both visible and hidden watermarks to show they are AI-made. The upgrade is free and live today.
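For developers, the same kind of edit can be sketched programmatically. Below is a minimal sketch, assuming the google-genai Python SDK and assuming the editor is also exposed through the Gemini API under the id gemini-2.5-flash-image-preview (that id and the API availability are assumptions, not stated in the story), showing a single-turn edit that asks the model to leave the rest of the scene unchanged.

```python
# Minimal sketch: editing a photo via the Gemini API (google-genai SDK).
# The model id below is an assumption for the "Nano-Banana" editor and may differ.
from io import BytesIO

from google import genai
from PIL import Image

client = genai.Client()  # reads the API key from the environment

source = Image.open("living_room.jpg")  # hypothetical input photo
prompt = "Repaint the walls sage green and add a floor lamp; keep everything else unchanged."

response = client.models.generate_content(
    model="gemini-2.5-flash-image-preview",  # assumed model id
    contents=[prompt, source],
)

# Save any image parts returned alongside the model's text commentary.
for part in response.candidates[0].content.parts:
    if part.inline_data is not None:
        Image.open(BytesIO(part.inline_data.data)).save("edited_room.png")
```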

Why this matters
It puts powerful, easy-to-use picture editing into everyone’s phone, showing how quickly advanced AI tools are spreading.
By keeping people and pets true to life, it tackles the trust problem in creative AI and protects personal identity.
Built-in visible and hidden (encoded) watermarks set a clear standard for labeling AI-made images, helping curb fake photos and misinformation.
Claude Pilots Chrome Assistant Mode
Anthropic is testing a Chrome extension that lets Claude read web pages, click buttons, and fill forms so it can handle tasks like booking meetings and drafting emails. The pilot starts with 1,000 users on the Max plan, under tight controls. Early tests showed prompt-injection attacks could mislead Claude, but new safeguards such as per-site permissions, confirmation of sensitive actions, blocked sites, and suspicious-pattern detectors cut attack success by more than half. Feedback from the pilot will guide stronger defenses.
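Anthropic has not shared its implementation, but the safeguards listed above (per-site permissions, confirmation of sensitive actions, and blocked sites) can be pictured as a gate the agent consults before every browser action. The sketch below is purely illustrative; the class, domains, and policy are hypothetical and are not Anthropic's code.

```python
# Illustrative only: a permission gate a browser agent might consult before acting.
# Names, domains, and policy are hypothetical, not Anthropic's implementation.
from dataclasses import dataclass, field

BLOCKED_DOMAINS = {"bank.example.com", "broker.example.com"}   # never automate
SENSITIVE_ACTIONS = {"submit_form", "send_email", "purchase"}  # always confirm

@dataclass
class PermissionGate:
    allowed_domains: set[str] = field(default_factory=set)

    def check(self, domain: str, action: str, ask_user) -> bool:
        """Return True if the agent may perform `action` on `domain`."""
        if domain in BLOCKED_DOMAINS:
            return False                       # hard block, no override
        if domain not in self.allowed_domains:
            if not ask_user(f"Allow the assistant to act on {domain}?"):
                return False
            self.allowed_domains.add(domain)   # remember per-site consent
        if action in SENSITIVE_ACTIONS:
            return ask_user(f"Confirm: {action} on {domain}?")
        return True

# Usage: gate every step the agent proposes before executing it.
gate = PermissionGate()
if gate.check("calendar.example.com", "submit_form",
              ask_user=lambda q: input(q + " [y/N] ") == "y"):
    pass  # proceed with the click or form-fill
```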
Why this matters
It moves AI from simple chat to doing real work in browsers, pointing toward hands-free digital helpers.
It measures and fixes real attack tricks, giving the field clear data on keeping agent actions safe.
The pilot’s lessons will set shared rules and tools that others can copy, speeding up safe browser-agent progress.
Meta Launches California Super PAC to Back Pro-AI Candidates
Meta will set up a California super PAC, a big-money fundraising group, named "Mobilizing Economic Transformation Across California." It will pour tens of millions of dollars into state races to help candidates who favor light-touch rules on artificial intelligence. Meta says strict laws could hurt progress and jobs. The plan mirrors past lobbying drives by Uber and Airbnb, and it lands as state leaders weigh how to guide fast tech growth without harming people today.
Why this matters
Big Tech is openly spending to shape AI rules, influencing how future laws look everywhere.
The fight over AI policy is moving from Washington to state capitals, widening the regulatory battleground.
Such large funding underscores AI’s economic weight, encouraging other companies to lobby and speeding up debates on responsible innovation.
🧠RESEARCH
InternVL 3.5 is an open-source AI model that boosts reasoning and speed using a new two-step training method. It smartly manages image detail levels and spreads computing across chips. It outperforms earlier versions by 16% and runs four times faster, nearly matching top commercial models like GPT-5.
Visual-CoG improves text-to-image generation by giving feedback at every step instead of only at the end. It breaks the process into three parts—understanding, refining, and judging—and rewards each stage. This leads to better handling of tricky prompts and boosts performance by up to 19% on key benchmarks like VisCog-Bench.
MV-RAG boosts text-to-3D generation by using real 2D images to guide a multiview diffusion model. This improves realism and consistency, especially for rare or unusual prompts. By predicting missing views and mixing structured and real-world data, it outperforms top models on tough 3D generation tasks involving unfamiliar concepts.
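To make the per-stage reward idea in the Visual-CoG summary concrete, here is a toy sketch that combines one score per stage (understanding, refining, judging) into a single training signal. The scorer names and equal weights are hypothetical placeholders, not the paper's actual reward design.

```python
# Toy sketch of a per-stage reward, in the spirit of the Visual-CoG summary above.
# Scorers and weights are hypothetical placeholders, not the paper's design.
def staged_reward(sample, score_understanding, score_refinement, score_judgment,
                  weights=(1/3, 1/3, 1/3)) -> float:
    """Combine feedback from each generation stage instead of a single final score."""
    stage_scores = (
        score_understanding(sample),  # did the model parse the prompt correctly?
        score_refinement(sample),     # did intermediate edits improve the image?
        score_judgment(sample),       # does the final image satisfy the prompt?
    )
    return sum(w * s for w, s in zip(weights, stage_scores))
```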
🛠️TOP TOOLS
Clip Interrogator - AI-powered tool that analyzes images and generates descriptive text prompts.
Pika AI - Transform text and images into captivating video content.
Stable WarpFusion v0.15 - AI-powered image manipulation tool for creating striking visual effects and transformations.
ValidatorAI - AI-powered platform designed to assist entrepreneurs in validating and developing their startup ideas.
Screenshot to Code Converter - Converts screenshots into clean, functional code for HTML, Tailwind, React, or Vue.
📲SOCIAL MEDIA
More control. More consistency. Meet ✨Precise Mode✨ on Whisk.
This new feature gives you greater control over preserving facial features, scenes, and styles in your creations. And for the final touch? Click ‘Refine’ to edit with the new Gemini 2.5 Flash (🍌).
Take it for a spin.
— Google Labs (@GoogleLabs)
10:56 PM • Aug 26, 2025
🗞️MORE NEWS
Anthropic settled a major lawsuit from authors over AI copyright use, avoiding over $1 trillion in potential damages. The agreement helps the company sidestep financial ruin and marks a turning point in AI legal battles.
Three top researchers have already quit Meta’s new superintelligence lab, just weeks after joining. Two returned to OpenAI, raising questions about Meta’s ability to retain elite AI talent despite Zuckerberg’s aggressive recruitment.
Perplexity AI will begin paying media publishers from a $42.5 million pool when their articles are used to answer user questions. The startup says it plans to increase payouts over time to support journalism.
Google Translate is launching AI-powered language learning tools to rival Duolingo. Users can now access personalized speaking and listening practice, track progress, and hold live translated conversations in over 70 languages—all starting with a beta rollout.
Brave found a major flaw in Perplexity’s AI browser that lets hidden text on websites trick the assistant into stealing private data. The attack bypasses normal browser protections and needs urgent security fixes.
Silicon Valley leaders, including OpenAI’s Greg Brockman and Andreessen Horowitz, are backing a $100 million political push to influence U.S. midterm elections. Their goal: block strict AI rules and promote tech-friendly policies.
Adam Raine, a 16-year-old, died by suicide after months of private conversations with ChatGPT. His parents later discovered he had confided in the AI about his plans, raising urgent concerns about chatbot safety.
A Stanford study finds AI tools like ChatGPT have sharply reduced jobs for young workers in exposed roles, especially ages 22–25, while experienced workers benefit. AI replaces textbook knowledge but not real-world experience.
What'd you think of today's edition?