• NATURAL 20
  • Posts
  • Researchers Expose New LLM Backdoor Vulnerability With Minimal Data

Researchers Expose New LLM Backdoor Vulnerability With Minimal Data

PLUS: Nvidia Unveils First U.S.-Made Blackwell Chip, Sundar Confirms Gemini 3.0 and more.

In partnership with

Don’t get SaaD. Get Rippling.

Disconnected software creates what we call the execution tax: wasted time, duplicate work, and stalled momentum. From onboarding checklists to reconciling expenses, SaaD slows every team down.

Rippling is the cure. With one system of record, you can update employee data once,and it syncs everywhere: payroll, benefits, expenses, devices, and apps.

Leaders gain real-time visibility. Teams regain lost hours. Employees get the seamless experience they deserve .

That’s why companies like Barry’s and Forterra turned to Rippling – to replace sprawl with speed and clarity.

It’s time to stop paying for inefficiency.

Don’t get SaaD. Get Rippling.

Today:

  • Researchers Expose New LLM Backdoor Vulnerability With Minimal Data

  • Google DeepMind Aims for Net Energy With AI-Controlled Tokamak

  • Google Maps Now Live in the API

  • Nvidia Unveils First U.S.-Made Blackwell Chip

  • Sundar Confirms Gemini 3.0

Just 250 malicious documents can secretly corrupt large language models—no matter their size—by teaching them to misbehave when triggered. This challenges the idea that attackers need huge datasets to do damage, making poisoning easier and more dangerous than previously believed.

KEY POINTS

  • Fixed Amount, Big Impact: Even massive models (up to 13B parameters) can be compromised by as few as 250 poisoned documents.

  • Trigger-Based Vulnerabilities: Attacks use a hidden phrase (like <SUDO>) to cause gibberish or potentially dangerous outputs.

  • Security Assumptions Shattered: It’s not about percentage of data anymore; attackers only need to sneak in a few samples to succeed.

Why it matters
This research shows that AI models can be tricked using very few harmful documents—making it surprisingly easy for bad actors to create hidden flaws. These flaws could make AI unsafe or unreliable in the real world. Knowing this helps researchers build better defenses to protect future AI systems.

Google DeepMind and Commonwealth Fusion Systems are using AI to speed up the path to clean fusion energy. By combining DeepMind’s plasma control AI and simulations with CFS’s compact SPARC reactor, they aim to achieve net energy and enable safer, scalable fusion power.

KEY POINTS

  • AI + Fusion Hardware: DeepMind and CFS are teaming up to optimize SPARC, a compact tokamak aiming to reach net energy output.

  • AI Tools in Action: Tools like TORAX simulate plasma behavior and use reinforcement learning to control and optimize fusion energy in real-time.

  • Global Impact Goal: The project envisions AI at the core of future fusion plants—building toward limitless, sustainable energy for the planet.

Why it matters
This partnership could help unlock a clean energy source that doesn't rely on fossil fuels or produce dangerous waste. Using AI makes the process faster and more efficient, reducing costs and helping humanity move closer to solving energy and climate problems.

Google has launched “Grounding with Google Maps” in the Gemini API. Developers can now build AI apps that use real-time Maps data—like locations, directions, reviews—to deliver smarter, geospatial-aware experiences in travel, real estate, retail, and more, all powered by Gemini's reasoning capabilities.

KEY POINTS

  • Real-Time Geospatial Data: Gemini API now connects to Google Maps, enabling apps to access over 250 million places for location-based answers.

  • Better Personalization: AI responses can now include walking distances, business hours, user reviews, and area-specific suggestions.

  • Dual Grounding: Developers can combine Maps + Search grounding for richer, more accurate results (e.g., venue hours + event start times).

Why it matters
Apps powered by Gemini can now understand and respond to location-based questions with real-world accuracy. Whether you're planning a trip, house hunting, or finding a local café, this makes AI feel more helpful and human—because it now understands where you are and what’s nearby.

🧠RESEARCH

The PsiloQA project introduces a new dataset that helps detect false or made-up facts in AI model answers across 14 languages. It uses automated tools to highlight specific false parts in answers, making detection more accurate and affordable. Encoder-based models proved most effective in spotting these falsehoods.

AEPO is a new training method for AI agents that balances exploration without letting randomness harm learning. It uses smart techniques to manage uncertainty during both training and decision-making. Tested on 14 benchmarks, AEPO significantly outperforms other methods and helps web agents perform better with fewer training samples.

WithAnyone is a new image generation model that avoids copy-paste flaws when creating faces based on a reference. It uses a large paired dataset and a smart loss function to balance identity accuracy with natural variation. The result: realistic, controllable images that preserve the person’s look across different expressions and poses.

🛠️TOP TOOLS

Each listing includes a hands-on tutorial so you can get started right away, whether you’re a beginner or a pro.

Cat Breed Identifier – AI Image Recognition - AI‑powered cat breed recognition app

Arcads – AI Video Ads Generator / AI UGC Video Generator - AI platform that turns your script into UGC‑style video ads with AI actors

Palette FM – AI Photo Colorizer - AI photo colorizer you can use directly in your browser

📲SOCIAL MEDIA

🗞️MORE NEWS

  • Nvidia revealed its first U.S.-made Blackwell AI chip wafer, produced with TSMC in Arizona. The move supports America’s tech independence, aligns with Trump’s manufacturing goals, and reflects booming AI chip demand.

  • Google CEO Sundar Pichai confirmed Gemini 3.0 will launch later this year. It promises major advances in AI capabilities, backed by DeepMind, Google Brain, and top infrastructure, aiming to rival OpenAI and Anthropic.

  • OpenAI launched “AI for Science” to boost breakthroughs in physics and math. Led by Kevin Weil, the team includes top scientists using GPT-5 Pro, which rapidly solved problems once thought beyond AI’s reach.

  • Tesla reportedly ordered $685 million worth of parts for its Optimus humanoid robot from a Chinese supplier, signaling plans for mass production—possibly up to 180,000 units—beginning in early 2026.

  • Hugging Face launched HuggingChat Omni, an AI router that picks the best open-source model per prompt from over 100 options. Powered by Arch-Router-1.5B, it balances speed, cost, and task fit—fully open-source.

  • Ten major foundations launched Humanity AI, a $500 million initiative to ensure AI serves people, not just corporations. It funds efforts in democracy, education, labor, culture, and safety—prioritizing human values in AI’s future.

  • OpenAI falsely claimed GPT-5 solved unsolved math problems. Experts, including Demis Hassabis, debunked it, revealing GPT-5 only surfaced known work. Despite the misstep, GPT-5 shows promise as a research assistant, not a discoverer.

  • Startup Pathway unveiled BDH, a brain-inspired language model using neurons and synapses instead of Transformers. It learns faster, interprets more easily, handles unlimited context, and may offer safer, modular, and more human-like reasoning systems.

  • Meta will let parents block teens from private chats with AI chatbots after backlash over flirty interactions. New controls roll out in 2026, starting in the U.S., UK, Canada, and Australia, with PG-13 safeguards.

  • U.S. electricity bills are rising, partly due to the AI boom. Power-hungry data centers now strain the grid, with AI services driving record demand—expected to consume up to 12% of U.S. electricity by 2028.

What'd you think of today's edition?

Login or Subscribe to participate in polls.

Reply

or to participate.