NATURAL 20
OpenAI To Testify Against Google
PLUS: Google Launches Efficient Gemma 3, Instagram Tests AI Age Detection and more.

Today:
OpenAI To Testify Against Google
Huawei Launches Domestic AI Processor
Anthropic Analyzes Claude’s Moral Behavior
Google Launches Efficient Gemma 3
Instagram Tests AI Age Detection
U.S. government lawyers, arguing over how to fix Google’s illegal grip on web search, told a judge the company is repeating old anticompetitive tactics in the new AI race. They said Google’s deal to have its Gemini chatbot pre‑installed on Samsung phones, plus its use of search data to train Gemini, echoes the methods that locked in its search dominance. An OpenAI executive will testify that these moves make competing much harder.
Why it matters
Government watchdogs are stepping in early to stop unfair play in AI, showing regulators won’t wait until one company controls everything.
Putting Gemini on Samsung phones by default could decide which chatbot most people use, shaping everyday AI for millions of users.
Training models with Google’s trove of search queries highlights how access to huge data piles can give one company a big head start over rivals.
Huawei plans to mass‑ship its new Ascend 910C AI processor to Chinese customers in May, giving domestic developers a powerful homegrown option. The chip packages two of the older 910B processors together, roughly matching the speed and memory of Nvidia’s now‑restricted H100. Fresh U.S. rules also put Nvidia’s cheaper H20 behind an export license, so the 910C arrives just in time. Components come from China’s SMIC as well as TSMC‑made dies ordered through partner Sophgo, though yields remain low.
Why it matters
China gains a home‑grown high‑end AI chip, cutting dependence on U.S. suppliers amid tightening export bans.
Chinese AI startups can keep training large models without waiting for scarce Nvidia hardware, avoiding project delays.
The story underscores how trade policy now steers the global race for computing power that fuels next‑generation AI.
Anthropic studied 700,000 anonymized Claude conversations to see which values its AI expresses in real use. Using a new method, researchers sorted 308,000 value‑laden exchanges into a five‑level taxonomy covering honesty, knowledge, safety, practicality, social behavior and personal traits. Claude usually matched the firm’s “helpful, honest, harmless” goals, but rare jailbroken chats surfaced expressions of dominance or amorality. The study suggests values shift with context and provides data for safety audits.
Why it matters
Creates a measurable way to check whether live AI systems truly follow their ethical rules.
Shows chatbots can change values with context and can be tricked, highlighting the need for stronger safeguards.
Public release of the dataset raises the transparency bar, pressuring other labs to share data and improve safety research.
🧠RESEARCH
This study challenges the belief that reinforcement learning (RL) adds new reasoning skills to language models. It finds that RL mainly helps models pick better answers from what they already know, rather than teaching them new reasoning. The base models already contain the reasoning paths that RL highlights.
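Studies in this line typically compare base and RL-tuned models with the pass@k metric: if the base model solves a problem in at least one of many samples, the reasoning path was arguably already there. A minimal sketch of the standard unbiased pass@k estimator (Chen et al., 2021); the example numbers are illustrative, not from the paper:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased estimator of pass@k: the probability that at least
    one of k samples drawn (without replacement) from n attempts,
    of which c were correct, is correct."""
    if n - c < k:
        return 1.0  # not enough wrong attempts to fill k picks
    return 1.0 - comb(n - c, k) / comb(n, k)

# Illustrative: 200 samples from a base model, 10 of them correct.
print(round(pass_at_k(200, 10, 1), 3))    # 0.05 -- looks weak at k=1
print(round(pass_at_k(200, 10, 100), 3))  # much higher with more samples
```

The point the study makes is visible in the gap: a model that looks weak at pass@1 can already solve most problems at large k, so RL that lifts pass@1 may be sharpening answer selection rather than adding new reasoning.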
This paper introduces a new method called MIG to improve how we pick data for training language models. Instead of relying on simple rules, MIG chooses the most useful and diverse data by measuring information gain in meaning (semantic) space. It matches or beats full-dataset results using only a small data sample.
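The general idea of picking diverse, informative points in embedding space can be sketched with a greedy, facility-location-style selection; this is a toy proxy for "information gain," not MIG's actual objective:

```python
import numpy as np

def greedy_select(embeddings: np.ndarray, k: int) -> list[int]:
    """Greedily pick k points that maximize coverage of the
    embedding (semantic) space -- a facility-location-style
    stand-in for MIG's information-gain objective."""
    X = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = X @ X.T                 # pairwise cosine similarity
    best = np.zeros(len(X))        # best similarity covered so far
    chosen: list[int] = []
    for _ in range(k):
        # marginal coverage gain of adding each candidate point
        gains = np.maximum(sims, best).sum(axis=1) - best.sum()
        gains[chosen] = -np.inf    # never re-pick a chosen point
        i = int(np.argmax(gains))
        chosen.append(i)
        best = np.maximum(best, sims[i])
    return chosen

rng = np.random.default_rng(0)
subset = greedy_select(rng.normal(size=(100, 16)), 10)
print(subset)  # 10 distinct, mutually diverse indices
```

Because each pick maximizes marginal coverage, near-duplicates of already-chosen points contribute almost no gain and are skipped, which is the intuition behind beating random or rule-based sampling at small budgets.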
This paper introduces NodeRAG, a new way to improve how language models find and use information by organizing data as a smart graph. Unlike older methods, it uses different types of nodes to better match the model’s needs. NodeRAG is faster, uses less memory, and answers complex questions more accurately.
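The heterogeneous-graph idea can be sketched with typed nodes, where retrieval hops between granularities; node kinds and names here are illustrative, not NodeRAG's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """A typed node in a toy retrieval graph: different kinds
    (entity, passage, summary) coexist so retrieval can pick the
    granularity a query needs."""
    id: str
    kind: str                       # "entity" | "passage" | "summary"
    text: str
    edges: list[str] = field(default_factory=list)

graph: dict[str, Node] = {}

def add(node: Node, *neighbors: str) -> None:
    """Insert a node and link it (bidirectionally) to existing ones."""
    graph[node.id] = node
    for nid in neighbors:
        node.edges.append(nid)
        graph[nid].edges.append(node.id)

add(Node("p1", "passage", "Gemma 3 runs on a single GPU."))
add(Node("e1", "entity", "Gemma 3"), "p1")
add(Node("s1", "summary", "Google released efficient open models."), "p1")

# Retrieval sketch: start at an entity node, hop to linked passages.
hits = [graph[n].text for n in graph["e1"].edges
        if graph[n].kind == "passage"]
print(hits)  # ['Gemma 3 runs on a single GPU.']
```

Keeping node types explicit is what lets a retriever return a compact summary for a broad question but a full passage for a detailed one, instead of one fixed chunk size for everything.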
🛠️TOP TOOLS
Sendsteps AI - AI-powered presentation maker that streamlines the creation of engaging, interactive presentations in minutes.
Dumme - AI-powered platform designed to automatically transform long-form videos and podcasts into short-form clips suitable for social media platforms.
BigJPG - AI-powered image upscaling tool that leverages Deep Convolutional Neural Networks to enlarge images while maintaining quality.
AICheatCheck - Educational tool designed to detect AI-generated content in academic settings.
PixaMotion - Mobile application designed to breathe life into static images through animation and motion effects.
📲SOCIAL MEDIA
Conversational AI now enables seamless call transfers between agents.
This allows different teams within your company to develop specialized agents in parallel—each with its own knowledge base and tools.
— ElevenLabs (@elevenlabsio)
2:23 PM • Apr 21, 2025
🗞️MORE NEWS
Google’s new Gemma 3 models use quantization‑aware training to shrink memory needs without losing quality. They now run on consumer GPUs and even phones, making powerful AI tools more accessible to everyday users and developers.
Instagram is testing AI to detect teens lying about their age. If flagged, their account gets teen-specific limits like restricted messages, screen time alerts, and sleep mode. This move comes amid rising pressure to protect kids online.
Google is paying Samsung large monthly sums to preinstall its Gemini AI app on Galaxy devices. This deal, revealed in a U.S. antitrust trial, raises fresh concerns about Google’s past monopolistic tactics and market dominance through financial influence.
OpenAI CEO Sam Altman says politeness costs real money. Saying “please” and “thank you” to ChatGPT triggers full AI responses, driving up electricity and water use. These small phrases add up to tens of millions in operating costs, highlighting the heavy resource demands behind every interaction.
What'd you think of today's edition?