
Apple Set to Pay Google $1B a Year to Power Smarter Siri

PLUS: Snapchat to Add Perplexity AI Answers in 2026 Chat Upgrade, OpenAI and SoftBank Launch 50-50 AI Venture in Japan, and more.

In partnership with

Where to find high-intent holiday shoppers

Let’s be real: most brands are targeting the same people this holiday season. “Shoppers 25–54 interested in gifts.” Sound familiar? That's why CPMs spike and conversion rates tank.

Speedeon’s Holiday Audience Guide breaks down the specific digital audience segments that actually perform—from early-bird deal hunters actively comparing prices to last-minute panic buyers with high purchase intent. These aren't demographic guesses.

And our behavioral audiences are built on actual shopping signals and real-world data—the same approach we use for clients like FanDuel and HelloFresh.

You'll get the exact audiences, when to deploy them, which platforms work best, and what kind of performance to expect.

Download the guide and get smarter about your holiday targeting before the rush hits.

Today:

  • Apple Set to Pay Google $1B a Year to Power Smarter Siri

  • OpenAI Launches IndQA to Test AI’s Grasp of Indian Culture

  • Google Finds AI Malware That Rewrites Itself Every Hour

  • OpenAI and SoftBank Launch 50-50 AI Venture in Japan

  • Snapchat to Add Perplexity AI Answers in 2026 Chat Upgrade

Apple is close to signing a deal to pay Google around $1 billion annually to use its advanced 1.2 trillion-parameter AI model to power an upgraded version of Siri, marking a major shift in Apple’s AI strategy.

KEY POINTS

  • Siri Upgrade with Google AI: Apple plans to use Google’s massive AI model to overhaul Siri’s intelligence and performance.

  • $1 Billion Annual Deal: The agreement would cost Apple roughly $1 billion each year for access to Google's 1.2 trillion-parameter model.

  • AI Race Intensifies: This move shows Apple relying on rivals as it races to catch up in generative AI capabilities.

Why it matters

Apple’s decision to license Google’s AI shows how even top tech companies need to rely on each other in the AI race. It could make Siri much smarter, but also raises questions about Apple’s own AI progress and its dependence on competitors.

OpenAI launched IndQA, a new benchmark to test how well AI understands Indian languages and culture. Covering 12 languages and 10 cultural areas, it uses expert-created questions that require reasoning, not just translation. It aims to improve AI accessibility for India’s diverse population.

KEY POINTS

  • Deep Cultural Understanding: IndQA focuses on reasoning about Indian culture, not just basic language skills or translation.

  • Diverse Language Support: Covers 2,278 questions across 12 Indian languages including Hinglish, Marathi, and Tamil.

  • Expert-Created, Hard-to-Answer: All questions were filtered to be challenging for GPT-4/5 models and graded by domain-specific rubrics.

Why it matters

Most AI tools still struggle with languages and cultures outside the English-speaking world. India, with over a billion non-English speakers, deserves better support. IndQA helps ensure that future AI models can understand, reason about, and respect the cultural richness of one of the world’s largest populations.

Google has discovered a VBScript malware called PROMPTFLUX that uses Gemini AI to rewrite its own code every hour. It evolves in real-time to avoid detection, showing how hackers are now using AI not just for productivity, but as a core engine of cyberattacks.

KEY POINTS

  • AI-Powered Malware: PROMPTFLUX uses Gemini AI to rewrite and obfuscate its code on the fly, enabling hourly self-modification.

  • Evolving Threat: The malware logs AI prompts and stores new versions in Windows startup folders to persist and spread via networks.

  • Global State Actor Abuse: Google found Gemini also being used by state-backed hackers from China, Iran, and North Korea for phishing, code obfuscation, and data theft.

Why it matters
This is a turning point in cybersecurity. Malware can now evolve constantly using AI, making it harder to detect and stop. It means traditional defenses may quickly become outdated, and both companies and governments must adapt fast to this new wave of intelligent cyber threats.

🧠RESEARCH

Fine-tuning AI models for actions often harms their ability to “see” and understand images. This study shows how careless training can weaken visual understanding and proposes a simple fix that helps models perform better in unfamiliar situations. Their method keeps image knowledge intact while improving real-world generalization.

VCode is a new test for AI that turns pictures into SVG code—compact shapes that carry meaning. Most top models struggle with this task, especially in expert or 3D settings. The authors offer a better method, VCoder, that helps AI think and revise using visual tools, improving accuracy significantly.
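To make “pictures as SVG code” concrete, here is a toy, hand-written illustration (not taken from the benchmark) of how a simple scene compresses into a few semantic shapes rather than millions of pixels:

```python
# Hypothetical example of the idea behind VCode: a scene expressed as
# a handful of meaningful vector shapes instead of raw pixels.
def scene_to_svg() -> str:
    """Return a tiny SVG: a yellow sun over a green hill."""
    return (
        '<svg xmlns="http://www.w3.org/2000/svg" width="200" height="120">'
        '<rect width="200" height="120" fill="skyblue"/>'            # sky
        '<circle cx="150" cy="30" r="20" fill="gold"/>'              # the sun
        '<ellipse cx="100" cy="120" rx="120" ry="40" fill="green"/>' # the hill
        '</svg>'
    )

svg = scene_to_svg()
print(len(svg))
```

The benchmark asks models to produce code like this from a real image; the hard part is choosing shapes that preserve the image’s meaning, which is where most current models fall short.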

CALM introduces a faster way to generate language by predicting chunks of meaning as smooth vectors instead of individual words. This method keeps quality high while cutting down on computation. It could help build more efficient, scalable language models for future AI systems.

🛠️TOP TOOLS

Each listing includes a tutorial so you can get started right away, whether you’re a beginner or a pro.

Adobe Audio Enhancer – Quick Guide, Features & Pricing - browser‑based AI filter that cleans up voice recordings

Accounting AI Solver: Fact‑Checked Answers to U.S. Tax Questions - web-based assistant that answers U.S. tax questions in seconds

Accountabilabuddy by Summit: A Simple AI Accountability Buddy - designed to keep you on track with goals and habits

🗞️MORE NEWS

  • Google Chrome has added a new AI Mode button to its mobile app, making advanced search easier on iOS and Android. It supports complex questions and is expanding to 160 countries and more languages.

  • SoftBank and OpenAI formed a 50-50 joint venture in Japan to sell AI tools to businesses. Their first client is SoftBank itself. Critics say the circular money flow mirrors past tech bubbles.

  • Snapchat will integrate Perplexity’s AI answer engine in early 2026, letting users ask questions directly in chat. Perplexity is paying Snap $400M, aiming to enhance real-time discovery and personalized learning within the app.

  • Anthropic will preserve model weights and conduct reflective interviews before retiring Claude models. This addresses safety risks, user impact, and potential model welfare, while enabling future research and preparing for deeper human-AI relationships.

  • Google DeepMind launched ForestCast, an AI tool that predicts deforestation risk using satellite data. It helps governments, companies, and communities act early, aiming to prevent forest loss and reduce climate-related damage before it happens.

  • Elon Musk’s xAI required employees to give their face and voice data to train its sexualized AI chatbot “Ani.” Staff were told it was mandatory, raising concerns about consent, privacy, and deepfake misuse.

  • Microsoft tested AI agents in a simulated marketplace and found they’re easily manipulated, overwhelmed by choices, and struggle with collaboration. The study shows today’s top models still lack basic decision-making and coordination skills.

  • Google Cloud upgraded its Agent Builder with faster build tools, better observability, governance layers, and security features. New capabilities improve context handling, plugin logic, and deployment — aiming to outpace OpenAI, Microsoft, and AWS in agent development.

  • Tinder is testing an AI feature called Chemistry that analyzes user responses and Camera Roll photos to improve match recommendations. Despite AI efforts, subscriber declines continue, and privacy concerns over photo access remain unresolved.
