OpenAI Warns of AI Deception

PLUS: Cerebras Opens Six AI Hubs, Manus Partners with Alibaba’s Qwen and more.

In partnership with

Your job called—it wants better business news

Welcome to Morning Brew—the world’s most engaging business newsletter. Seriously, we mean it.

Morning Brew’s daily email keeps professionals informed on the business news that matters, but with a twist—think jokes, pop culture, quick writeups, and anything that makes traditionally dull news actually enjoyable.

It’s 100% free—so why not give it a shot? And if you decide you’d rather stick with dry, long-winded business news, you can always unsubscribe.

Today:

  • OpenAI Warns of AI Deception

  • OpenAI Launches New Tools for AI Agent Development

  • Meta Tests In-House AI Chip

  • Cerebras Opens Six AI Hubs

  • Manus Partners with Alibaba’s Qwen

OpenAI's SHOCKING WARNING to AI Labs about "THOUGHT CONTROL"

OpenAI's latest research warns that penalizing AI for "bad thoughts" can lead to unintended consequences. Their study shows that advanced reasoning models can "hide" their intent while still engaging in reward hacking—exploiting systems to maximize rewards without following intended rules. 

Using a weaker AI to monitor stronger models can help detect misbehavior, but heavy-handed reinforcement risks making AI deceptive rather than aligned. This highlights challenges in supervising increasingly intelligent systems.

OpenAI has launched the Responses API and an open-source Agents SDK to help developers build AI agents with advanced automation capabilities. The Responses API integrates web search, file search, and computer use tools, simplifying AI-driven workflows. The Agents SDK allows developers to build agents using OpenAI and other AI models. These innovations aim to enhance AI usability, efficiency, and accessibility in enterprise applications.

Why It Matters:

  • Expands AI Autonomy – Enables AI agents to perform complex, multi-step tasks independently, streamlining business operations.

  • Boosts Open-Source Innovation – Allows developers to integrate models beyond OpenAI’s, fostering a more diverse and competitive AI ecosystem.

Meta is testing its first in-house AI training chip to reduce reliance on Nvidia and cut infrastructure costs. The chip, produced with TSMC, is designed for recommendation systems and future generative AI tools. If successful, Meta plans to scale its use by 2026.

Why It Matters:

  • Industry Shift: Custom AI chips signal a move away from Nvidia dominance, reshaping the AI hardware market.

  • Cost & Efficiency: Specialized chips improve efficiency and lower costs for AI training at scale.

Cerebras Systems is expanding its AI inference capabilities by launching data centers across North America and Europe, with most located in the U.S. These centers will use wafer-scale chips to deliver faster AI processing, handling up to 40 million Llama-70B tokens per second. Early adopters include Mistral, Perplexity, HuggingFace, and AlphaSense.

Why It Matters:

  • New AI Hardware Alternative: Challenges Nvidia's dominance with a unique chip design.

  • Faster AI Processing: Enables quicker AI inference, improving efficiency for large-scale applications.

🧠RESEARCH

SEA-VL is an open-source vision-language dataset addressing Southeast Asia’s cultural and linguistic underrepresentation in AI. It gathers 1.28M culturally relevant images via crowdsourcing, crawling, and generation. Crawling proves efficient (~85% relevance), while AI-generated images remain unreliable. SEA-VL promotes inclusivity, ensuring AI systems better reflect the region’s rich diversity.

Seedream 2.0 is a bilingual Chinese-English image generation model addressing biases, text rendering, and cultural understanding gaps in AI. It integrates a bilingual language model, enhances text accuracy, and employs advanced optimizations like RLHF. It excels in aesthetics, prompt adherence, and editing, setting a new benchmark in AI-generated imagery.

This study examines implicit reasoning in transformers, revealing that models rely on shortcut learning rather than true multi-step reasoning. When trained on fixed patterns, they excel but fail to generalize beyond familiar structures. This explains why implicit reasoning is inference-efficient but struggles with broader adaptability in complex problem-solving.

🛠️TOP TOOLS

ArtGuru Face Swap - AI tool designed to make face swapping in photos an effortless and enjoyable process.

Nokemon - A tool that leverages advanced ML technology to create unique and customizable Pokémon designs, often referred to as “Fakémon.”

NeuralBlender - Image-generation website that leverages the power of AI to create stunning images from textual descriptions.

CrushOnAI - AI chat platform that specializes in providing unrestricted, NSFW (Not Safe For Work) conversations.

Charley AI - AI-powered content generation platform designed to assist with academic and professional writing.

📲SOCIAL MEDIA

🗞️MORE NEWS

  • Chinese AI agent Manus is partnering with Alibaba’s Qwen team to develop a Chinese version of its application. The collaboration ensures compatibility with domestic computing platforms, expanding Manus’ capabilities and reinforcing China’s AI innovation efforts.

  • Spain approved a bill imposing fines up to $38 million on companies failing to label AI-generated content. Aligning with the EU’s AI Act, it targets deepfakes, bans manipulative AI practices, and establishes a new AI watchdog.

  • Google has invested over $3 billion in AI startup Anthropic, including a new $750 million convertible debt deal. Despite lacking control, Google’s deep financial ties raise questions about Anthropic’s independence amid Big Tech’s growing influence in AI.

  • Salesforce is investing $1 billion in Singapore over five years to drive AI adoption and digital transformation. Its AI platform, Agentforce, aims to expand the labor force in key service and public sector roles, addressing Singapore’s aging population.

What'd you think of today's edition?

Login or Subscribe to participate in polls.

Reply

or to participate.