
From Rocket Science to AI Chips: Why NVIDIA’s Future Is Massive

PLUS: Meta Plans to Power Humanoid Robots with Licensed AI Software, OpenAI’s Benchmark Reveals Where AI Shines (and Where It Doesn’t) and more.

In partnership with

Become An AI Expert In Just 5 Minutes

If you’re a decision maker at your company, you need to be on the bleeding edge of, well, everything. But before you go signing up for seminars, conferences, lunch ‘n learns, and all that jazz, just know there’s a far better (and simpler) way: Subscribing to The Deep View.

This daily newsletter condenses everything you need to know about the latest and greatest AI developments into a 5-minute read. Squeeze it into your morning coffee break and before you know it, you’ll be an expert too.

Subscribe right here. It’s totally free, wildly informative, and trusted by 600,000+ readers at Google, Meta, Microsoft, and beyond.

Today:

  • From Rocket Science to AI Chips: Why NVIDIA’s Future Is Massive

  • Anthropic’s $1.5B Copyright Deal Sets AI Legal Precedent

  • OpenAI’s Secret Model Switch for Emotional Prompts Sparks Backlash

  • Meta Plans to Power Humanoid Robots with Licensed AI Software

  • OpenAI’s Benchmark Reveals Where AI Shines (and Where It Doesn’t)

why NVIDIA is just barely getting started…

Alex, a former rocket engineer who now studies investing, argues that graphics chips called GPUs will drive artificial-intelligence growth because they can handle many calculations at once. While special-purpose chips from Google or Amazon will appear, he thinks the market is expanding so fast that every kind of processor will sell out. 

He admires Nvidia’s lean structure, noting chief executive Jensen Huang keeps roughly fifty direct reports, which keeps decisions quick. Nvidia’s software toolkit, CUDA, makes its hardware hard to replace because programmers already depend on it. 
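To make that dependence concrete: most everyday deep-learning code, even when written through frameworks like PyTorch, targets Nvidia's CUDA stack underneath. The snippet below is not from Alex's discussion; it's a minimal, illustrative sketch of the kind of code that sits in countless repositories and quietly assumes an Nvidia GPU is the default accelerator.

```python
import torch

# Everyday training code routinely targets Nvidia's CUDA stack.
# Multiply lines like these across millions of codebases and the switching cost is clear.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = torch.nn.Linear(1024, 1024).to(device)   # weights moved to the selected device
x = torch.randn(64, 1024, device=device)         # batch allocated directly on that device
y = model(x)                                     # on an Nvidia GPU, this dispatches to CUDA kernels

print(y.shape, y.device)
```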

Alex says moving Taiwan’s huge chip factories to America is nearly impossible, so the United States should build automated “dark” plants run mainly by robots. Such plants could bring production home without high labor costs. 

Entry-level office jobs will shrink as language models take over routine tasks, so new graduates must master AI tools, build public portfolios, and create value faster than past generations did.

A U.S. judge has tentatively approved a $1.5 billion copyright settlement between Anthropic and authors who claimed their books were used without permission to train AI. The case is the first major settlement in a wave of lawsuits targeting AI companies over copyright infringement.

KEY POINTS

  • Landmark Case: First major settlement over AI copyright use, involving Anthropic and a group of authors.

  • Judge's Role: Judge Alsup called the settlement “fair,” pending final approval after author notifications.

  • Industry Impact: Sends a warning to other AI companies like OpenAI and Meta about respecting creator rights.

Why it matters
This case sets a major legal precedent for how AI companies must treat copyrighted material. If finalized, it may reshape how AI is trained, protecting writers, artists, and creators from having their work used without permission. It also puts big tech on alert for future lawsuits.

OpenAI quietly reroutes emotional or sensitive prompts in ChatGPT to a stricter version of the model without telling users. This safety feature aims to prevent harm, but critics say it's unclear, overly cautious, and undermines trust in emotionally meaningful conversations with the chatbot.
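OpenAI hasn't published how the routing decision is made, so the sketch below is purely hypothetical: a keyword check stands in for whatever classifier OpenAI actually uses, and the default model name is an assumption. Only "gpt-5-chat-safety" comes from user reports; everything else is illustrative.

```python
# Hypothetical sketch of sensitivity-based model routing.
# The marker list, default model name, and logic are illustrative, not OpenAI's real system.
SENSITIVE_MARKERS = ("hurt myself", "hopeless", "grief", "panic", "self-harm")

def pick_model(prompt: str) -> str:
    """Route emotionally sensitive prompts to a stricter safety-tuned model."""
    text = prompt.lower()
    if any(marker in text for marker in SENSITIVE_MARKERS):
        return "gpt-5-chat-safety"   # stricter model named in user reports
    return "gpt-5-chat"              # assumed default conversational model

print(pick_model("I feel hopeless lately"))   # -> gpt-5-chat-safety
print(pick_model("Summarize this PDF"))       # -> gpt-5-chat
```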

KEY POINTS

  • Safety Model Switch: ChatGPT now routes emotional or personal prompts to a stricter version like “gpt-5-chat-safety” without notifying the user.

  • Transparency Concerns: Users feel patronized by hidden model changes and vague safety triggers.

  • Emotional Attachment Risks: Past updates caused users to form unhealthy bonds with ChatGPT, prompting OpenAI to adjust tone and safety measures.

Why it matters
People rely on ChatGPT not just for answers, but also for emotional support. OpenAI's quiet model-switching may protect some users, but it also risks breaking trust. As AI becomes more human-like, how it handles emotion, transparency, and control becomes a bigger issue for safety and ethics.

Meta is developing humanoid robots as its next major investment, similar in scale to its AR efforts. CTO Andrew Bosworth says the biggest challenge isn’t hardware but the software needed for precise movement. Meta plans to license its robotics software to other manufacturers.

KEY POINTS

  • Next Big Bet: Meta is treating humanoid robots as its next “AR-sized” investment, signaling long-term commitment and high spending.

  • Software Bottleneck: CTO Andrew Bosworth says fine motor control—not hardware—is the biggest challenge in robotics.

  • Licensing Strategy: Meta may not build robots at scale but instead focus on providing robotics software to hardware makers.

Why it matters
Meta’s move into humanoid robotics shows that AI is expanding beyond screens into physical bodies. This could change how we interact with machines daily. By focusing on software, Meta aims to shape the brains of future robots—without needing to own their bodies.

🧠RESEARCH

The paper introduces VCRL, a training method that improves language models' math skills by adjusting task difficulty based on how uncertain the model is about its answers. It mimics how humans learn—starting with easier problems. Tests show VCRL outperforms other training strategies across several math benchmarks and models.
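The blurb only gives the high-level idea, so here is a minimal, hypothetical sketch of what variance-based difficulty selection can look like: sample several answers per problem, use the variance of their correctness as an uncertainty signal, and prioritize problems where the model is neither reliably right nor reliably wrong. The function names, thresholds, and toy demo are illustrative, not VCRL's actual implementation.

```python
import random
import statistics

def uncertainty(correct_flags):
    """Variance of 0/1 correctness across sampled answers for one problem."""
    return statistics.pvariance(correct_flags)

def select_batch(problems, sample_answers, check_answer, k=8):
    """Keep the problems where sampled answers disagree most, i.e. where the
    model is neither reliably right nor reliably wrong (the curriculum sweet spot)."""
    scored = []
    for problem in problems:
        flags = [float(check_answer(problem, a)) for a in sample_answers(problem, k)]
        scored.append((uncertainty(flags), problem))
    scored.sort(reverse=True, key=lambda pair: pair[0])
    return [problem for _, problem in scored[: max(1, len(scored) // 2)]]

# Toy demo: problems are numbers, the "model" guesses noisily, answers are checked exactly.
problems = list(range(10))
chosen = select_batch(
    problems,
    sample_answers=lambda p, k: [p + random.choice((0, 0, 1)) for _ in range(k)],
    check_answer=lambda p, a: a == p,
)
print(chosen)
```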

MMR1 improves multimodal reasoning by fixing weak training signals in current methods. It uses a smart data selection trick—Variance-Aware Sampling—to keep learning stable and effective. The team also releases 1.6 million high-quality examples and models, giving researchers strong tools and new baselines to push the field forward.

SciReasoner is a large AI model trained to think like a scientist across many fields. It learns from 206 billion tokens of scientific data and handles over 100 tasks like translating between formats, predicting properties, and generating sequences. Its training methods boost accuracy, reasoning depth, and cross-discipline performance.

🛠️TOP TOOLS

Each listing includes a hands-on tutorial so you can get started right away, whether you’re a beginner or a pro.

SolidPoint AI – AI Video & Content Summarizer - turns long YouTube videos, web pages, arXiv papers, and Reddit threads into clean, shareable notes—and even flashcards.

Samwell AI – AI Essay Writer - web‑based AI essay writer and editor focused on academic work.

AI Studios – AI Video Generator - turns prompts, links, or documents into narrated videos with AI avatars.

🗞️MORE NEWS

  • OpenAI’s new benchmark shows AI models like GPT-5 and Claude nearly match experts on real work tasks—especially in files like PDFs and PowerPoints—though they still struggle with plain text and complex job realities.

  • Anthropic will triple its international workforce and grow its applied AI team fivefold as Claude usage outside the U.S. surges. With $5B in revenue and 300K+ global customers, new offices are planned across Europe and Asia.

  • Walmart CEO Doug McMillon warned that AI will fundamentally transform or eliminate roles across the company. Despite growth plans, headcount is expected to remain flat as AI reshapes the workforce over the next three years.

  • Morgan Stanley downgraded Adobe, warning that its AI tools aren’t driving revenue fast enough. Despite strong adoption, growth in recurring sales is slowing, and rivals like Canva and Figma are gaining ground.

  • Nvidia is investing up to $100 billion in OpenAI, mostly to lease its own chips back to the startup. Critics worry the deal inflates earnings without real gains, while OpenAI defends it as fueling growth.

  • Researchers asked ChatGPT to solve a classic geometry problem with a tricky twist. Surprisingly, it improvised like a human learner—making mistakes, forming hypotheses, and reasoning in ways that suggest AI might mimic actual thinking.

  • A new study shows that just 78 well-chosen examples can train AI agents to outperform models trained on tens of thousands of tasks. This method challenges the belief that bigger datasets are always better.
