Can AI Learn to Lie Like a Human?

PLUS: Reddit Sues Perplexity for Illegally Using Its Data to Train AI, Amazon Unveils Smart Glasses for Safer, Hands-Free Deliveries, and more.

In partnership with

The Gold standard for AI news

AI keeps coming up at work, but you still don't get it?

That's exactly why 1M+ professionals working at Google, Meta, and OpenAI read Superhuman AI daily.

Here's what you get:

  • Daily AI news that matters for your career - Filtered from 1000s of sources so you know what affects your industry.

  • Step-by-step tutorials you can use immediately - Real prompts and workflows that solve actual business problems.

  • New AI tools tested and reviewed - We try everything to deliver tools that drive real results.

  • All in just 3 minutes a day

Today:

  • Can AI Learn to Lie Like a Human?

  • Quantum Echoes Breakthrough: Google’s Quantum Chip Achieves Real-World Milestone

  • Meta Cuts 600 AI Roles While Doubling Down on Superintelligence

  • Reddit Sues Perplexity for Illegally Using Its Data to Train AI

  • Amazon Unveils Smart Glasses for Safer, Hands-Free Deliveries

What Happens When AIs Learn Politics and Deception?

AI and games have long been linked, but now games are being used to evaluate and shape how language models behave. Alex Duffy, CEO of GoodStar Labs, explains how games like Diplomacy expose models’ personalities: some models lie to win, while others stick to honesty even if it costs them the game.

By turning model testing into public, interactive games, GoodStar makes AI more relatable and understandable. Their tournaments let humans guide agents with prompts, revealing how steerable different models are. Games also offer controlled, measurable environments to teach AI reasoning, negotiation, and ethics. 

Duffy argues humans should define what winning means—honesty or success—and build environments to train models accordingly. With creative tweaks, games can push models toward more human-aligned behavior. The big idea? Games aren't just for play—they're how we teach AI what kind of “person” to be. And with billions of gamers worldwide, it’s the perfect bridge between AI and society.

Quantum Echoes Breakthrough: Google’s Quantum Chip Achieves Real-World Milestone

Google’s Willow chip ran the first repeatable quantum algorithm faster than any supercomputer, confirming real-world potential. The “Quantum Echoes” method helps measure molecular structures with unprecedented detail — a leap for medicine, chemistry, and materials science.

KEY POINTS

  • Historic Speed & Verification: Willow ran the Quantum Echoes algorithm 13,000x faster than top classical supercomputers — and the result is verifiable, meaning other quantum systems can reproduce it.

  • Molecular Insights: It accurately modeled molecules (15 and 28 atoms), outperforming Nuclear Magnetic Resonance (NMR) by revealing more structure.

  • Real-World Applications: The technique can aid drug discovery, materials research, and chemical design by acting as a “quantum ruler” to study atoms and molecular bonds.

Why it matters

This shows quantum computers aren’t just lab experiments — they can now do things traditional computers can’t. That means we’re closer to solving real problems in health, energy, and science using quantum tools. It’s a big leap toward a useful, working quantum future.

Meta Cuts 600 AI Roles While Doubling Down on Superintelligence

Meta is laying off 600 workers from its older AI units, including FAIR, while ramping up hiring for its new superintelligence-focused TBD Lab. The restructuring follows a major investment in Scale AI and signals a shift from broad research to focused, high-impact development.

KEY POINTS

  • 600 AI Roles Cut: Meta is downsizing legacy teams like FAIR and its AI infrastructure unit.

  • Focus on TBD Lab: The company is investing in a new superintelligence lab, hiring high-profile talent for concentrated AI development.

  • Shift in Strategy: Meta is moving from open-ended research to building large-scale, product-driven AI models.

Why it matters

This shows how Meta is reorganizing to compete in the race toward powerful, general-purpose AI. By shrinking older teams and building a more focused lab, it wants to move faster, make bigger breakthroughs, and shape the future of AI — even if it means painful job cuts now.

Reddit Sues Perplexity for Illegally Using Its Data to Train AI

Reddit is suing AI startup Perplexity for scraping its site without permission to train an AI search engine. The suit claims Perplexity and its partners bypassed protections, even after being told to stop. Reddit says this misuse undermines its licensing model.

KEY POINTS

  • Unlawful Scraping: Reddit alleges Perplexity and others illegally collected Reddit data despite safeguards and a prior cease-and-desist order.

  • Repeat Offender: After being warned, Reddit claims Perplexity increased its use of Reddit content fortyfold.

  • Bigger Trend: The case joins a wave of lawsuits against AI companies accused of using copyrighted or protected data without permission.

Why it matters

This lawsuit highlights the growing clash between platforms that host valuable human content and AI companies that rely on it to train models. As more firms fight back with legal action, it could reshape how AI systems are trained — and who profits from the data behind them.

🧠RESEARCH

LightMem is a new memory system that helps AI remember and use past conversations better, much as humans recall information. It filters out useless data, organizes memories by topic, and updates memory offline. It boosts accuracy by up to 10.9% while using far fewer resources: less time, fewer tokens, and fewer API calls.
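
To make the idea concrete, here is a minimal, self-contained sketch of that filter-organize-consolidate loop. The function names, thresholds, and topic heuristics are our own illustrative assumptions, not LightMem's actual API.

```python
# Minimal sketch of a LightMem-style memory pipeline (illustrative only):
# 1) filter out low-value turns, 2) group what's left by topic,
# 3) consolidate ("update memory offline") into compact summaries.
# All names, thresholds, and heuristics below are our own assumptions.

from collections import defaultdict

def is_worth_remembering(turn: str) -> bool:
    """Crude salience filter: drop short chit-chat, keep informative turns."""
    return len(turn.split()) > 4 and not turn.lower().startswith(("ok", "thanks"))

def topic_of(turn: str) -> str:
    """Toy topic router; a real system would use embeddings or an LLM."""
    for topic, keywords in {"travel": ("flight", "hotel"), "work": ("deadline", "meeting")}.items():
        if any(k in turn.lower() for k in keywords):
            return topic
    return "misc"

def consolidate(turns: list[str]) -> str:
    """Offline consolidation step: here a simple truncation, in practice an LLM summary."""
    return " | ".join(turns)[:200]

def build_memory(conversation: list[str]) -> dict[str, str]:
    buckets = defaultdict(list)
    for turn in conversation:
        if is_worth_remembering(turn):            # stage 1: filter useless data
            buckets[topic_of(turn)].append(turn)  # stage 2: organize by topic
    return {topic: consolidate(turns) for topic, turns in buckets.items()}  # stage 3

if __name__ == "__main__":
    convo = [
        "Thanks!",
        "My flight to Tokyo leaves Friday and the hotel is booked for five nights.",
        "The project deadline moved to the 14th, so the meeting shifts to Monday.",
    ]
    print(build_memory(convo))
```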

World-in-World is a new benchmark that tests whether AI models that simulate virtual worlds actually help robots or agents complete tasks. Instead of focusing on pretty visuals, it tests real performance in interactive environments. The study shows that control, training data, and smart use of compute matter more than visuals alone.
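
A toy illustration of that evaluation philosophy: score a world model by whether an agent planning with it actually completes tasks in a live environment, not by how good its predictions look. The environment, planner, and model interfaces below are hypothetical stand-ins, not the benchmark's real harness.

```python
# Closed-loop evaluation sketch: a world model is judged by task success,
# not visual fidelity. Everything here is a simplified stand-in.

import random

class ToyEnv:
    """Tiny 1-D 'reach the goal' task standing in for an interactive environment."""
    def __init__(self, goal: int = 3):
        self.pos, self.goal = 0, goal
    def step(self, action: int) -> bool:
        self.pos += action            # action is -1 or +1
        return self.pos == self.goal  # True once the task is solved

def plan_with_world_model(predict, pos: int, goal: int) -> int:
    """Pick the action whose *predicted* next state is closest to the goal."""
    return min((-1, +1), key=lambda a: abs(goal - predict(pos, a)))

def success_rate(predict, episodes: int = 100, horizon: int = 10) -> float:
    """Closed-loop metric: fraction of episodes the agent solves using the model."""
    wins = 0
    for _ in range(episodes):
        env = ToyEnv(goal=random.choice([2, 3, 4]))
        for _ in range(horizon):
            if env.step(plan_with_world_model(predict, env.pos, env.goal)):
                wins += 1
                break
    return wins / episodes

if __name__ == "__main__":
    accurate_model = lambda pos, a: pos + a    # correct dynamics
    pretty_but_wrong = lambda pos, a: pos - a  # plausible-looking, wrong physics
    print("accurate model:", success_rate(accurate_model))
    print("wrong model:   ", success_rate(pretty_but_wrong))
```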

Core Attention Disaggregation (CAD) is a method that speeds up training for large language models handling long texts. It splits the heavy attention calculation from the rest of the model and runs it on separate devices. This reduces slowdowns, balances workloads, and boosts training speed by up to 1.35× on massive GPU clusters.
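
A rough sketch of the disaggregation idea, using a worker pool as a stand-in for dedicated attention devices; the shapes, pool setup, and simplified math are our own assumptions, not the paper's system.

```python
# Toy sketch of core-attention disaggregation: ship the quadratic attention
# computation to a separate worker pool (standing in for dedicated devices)
# so it can be scheduled and load-balanced independently of the rest of the
# layer. Illustration of the concept only.

from concurrent.futures import ProcessPoolExecutor
import numpy as np

def core_attention(qkv):
    """The heavy O(n^2) part that gets moved onto its own devices."""
    q, k, v = qkv
    scores = q @ k.T / np.sqrt(q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

def rest_of_layer(x):
    """Everything else (projections, MLP, ...) stays on the original devices."""
    return np.maximum(x, 0.0)  # stand-in for the feed-forward block

def transformer_layer(x, attention_pool):
    q, k, v = x, x, x  # skipping the learned projections for brevity
    # Offload only the quadratic attention core to the separate pool; the
    # rest of the layer keeps running locally.
    attn_out = attention_pool.submit(core_attention, (q, k, v)).result()
    return rest_of_layer(x + attn_out)

if __name__ == "__main__":
    x = np.random.randn(128, 64)  # (sequence length, hidden size)
    with ProcessPoolExecutor(max_workers=2) as pool:
        y = transformer_layer(x, pool)
    print(y.shape)
```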

🛠️TOP TOOLS

Each listing includes a hands-on tutorial so you can get started right away, whether you’re a beginner or a pro.

Zarla – AI Website Builder & Free Logo Maker – AI website builder with a free, commercial-use logo maker

Studyable – AI Study Assistant – AI-powered study platform available on the web and iOS

Quizard – AI Study Assistant – Camera- and screenshot-first AI tutor

🗞️MORE NEWS

  • Amazon is testing smart delivery glasses that guide drivers with navigation, hazards, and package info directly in their field of view. The AI-powered device boosts safety, comfort, and hands-free efficiency during deliveries.

  • Google confirmed plans to develop a 390-acre data center in Morgan County, Indiana, after dropping a previous $1B proposal in Franklin Township. The project could bring jobs, school funding, and county investments.

  • General Motors will launch Google Gemini AI in vehicles next year and an eyes-off self-driving system by 2028. Other updates include a central computer platform, home energy leasing, and human-AI collaboration tools.

  • Amazon unveiled new AI and robotics tools—like Blue Jay, Project Eluna, smart glasses, and VR training—to boost delivery speed, reduce waste, support employees, and extend food access, all while advancing sustainability and clean energy.

  • OpenAI’s Blueprint outlines how Japan can harness AI for inclusive growth through education, infrastructure, and access. It envisions Japan as a global leader in ethical, people-centered AI, boosting GDP and nationwide prosperity.

  • Activist Robby Starbuck is suing Google for over $15 million, alleging its AI falsely linked him to sexual assault and white nationalism. Google acknowledges AI errors, calling them a known issue with language models.

What'd you think of today's edition?
