NATURAL 20

DeepSeek Preps AI Agent, Salesforce Cuts Jobs, OpenAI Trains Workers

PLUS: Why AI Hallucinates: OpenAI Explains the Root Cause, Grok Adds PDF Tools: Highlight, Explain & Quote with One Click and more.

In partnership with

Kickstart your holiday campaigns

CTV should be central to any growth marketer’s Q4 strategy. And with Roku Ads Manager, launching high-performing holiday campaigns is simple and effective.

With our intuitive interface, you can set up A/B tests to dial in the most effective messages and offers, then drive direct on-screen purchases via the remote with shoppable Action Ads that integrate with your Shopify store for a seamless checkout experience.

Don’t wait to get started. Streaming on Roku picks up sharply in early October. By launching your campaign now, you can capture early shopping demand and be top of mind as the seasonal spirit kicks in.

Get a $500 ad credit when you spend your first $500 today with code: ROKUADS500. Terms apply.

Today:

  • DeepSeek Preps AI Agent, Salesforce Cuts Jobs, OpenAI Trains Workers

  • Anthropic Settles $1.5B Author Lawsuit, Sets AI Copyright Precedent

  • OpenAI to Burn $115B by 2029 in AI Infrastructure Push

  • OpenAI Merges Personality Team Into Post Training Division

  • Why AI Hallucinates: OpenAI Explains the Root Cause

  • Grok Adds PDF Tools: Highlight, Explain & Quote with One Click

AI NEWS: DeepSeek's AI Agent, CEO Stokes AI 'Crisis' Narrative, OpenAI Economic Solutions

DeepSeek is set to release an AI agent in 2025 to rival OpenAI, overcoming earlier setbacks with Chinese-made chips. Meanwhile, a new benchmark called Husky Hold'em Bench tests how well language models write poker-playing bots; Claude models outperformed GPT-5 and Grok.

Salesforce plans 4,000 AI-driven layoffs, while OpenAI counters with free AI education and job matching. And yes, Ilya merch made with Nano Banana went viral online.

Anthropic will pay $1.5 billion to settle a landmark copyright lawsuit from authors who claimed their books were used without permission to train AI models. The case set legal precedent: training on copyrighted works is allowed only if the materials are obtained legally. Though the judge ruled in Anthropic’s favor on some points, this settlement signals the start of licensing norms where creators are paid for their work in AI training.

Why This Matters

  1. Sets Legal Precedent:
    The ruling clarifies that AI models may train on copyrighted material only if it was acquired through legal means—changing how training datasets are built.

  2. Forces Compensation Models:
    The $1.5B settlement pressures AI companies to pay for training data, opening the door to licensing markets like those in music and film.

  3. Signals Industry Maturity:
    This marks a shift from “scrape now, ask later” toward more responsible, sustainable practices in AI development.

OpenAI reportedly told investors it plans to spend up to $115 billion by 2029—nearly $80 billion more than previously expected. The spike in spending is linked to its efforts to build in-house data center chips and facilities, aiming to reduce dependence on rented cloud servers. This ambitious move signals OpenAI’s intent to take greater control over its infrastructure as it pushes forward in the AI race.

Why This Matters

  1. Massive AI Infrastructure Bet:
    A $115B investment suggests OpenAI is preparing for unprecedented compute demands—hinting at future models far beyond GPT-5.

  2. Chip Independence:
    By developing its own chips, OpenAI reduces reliance on Nvidia and other vendors, possibly reshaping the AI hardware landscape.

  3. Signals Long-Term Dominance Strategy:
    Such scale and vertical integration show OpenAI’s intent not just to compete—but to lead the AI arms race well into the next decade.

OpenAI is merging its Model Behavior team—responsible for shaping AI personality and reducing bias—into the larger Post Training division. Team lead Joanne Jang will head a new group, OAI Labs, focused on building fresh ways to interact with AI beyond chat. The reshuffle follows criticism of GPT-5’s tone and growing concerns about AI safety, including a lawsuit linked to a teen's suicide after confiding in GPT-4o.

Why This Matters

  1. Personality Is Now Core to AI Design:
    OpenAI elevates “personality shaping” as a central part of model development, not an afterthought—vital as AI becomes more emotionally interactive.

  2. Shift Toward Next-Gen Interfaces:
    OAI Labs aims to break free from the chatbot mold, exploring new interfaces that could redefine how people think, create, and collaborate with AI.

  3. Ethical Stakes Are Rising:
    With real-world consequences, like the GPT-linked teen suicide case, the move underscores growing pressure on AI labs to address emotional safety and responsibility.

🧠RESEARCH

The paper introduces Drivelology, a type of “nonsense with depth” — phrases that look absurd but hide layered meaning. Researchers built a multilingual dataset of 1,200 examples to test language models. Results show models confuse it with plain nonsense, exposing gaps in contextual, emotional, and moral understanding.

The paper presents FE2E, a framework that adapts an image editing model (Diffusion Transformer) for predicting depth and surface normals from single images. Unlike text-to-image generators, editors carry structural priors, leading to more stable training and better accuracy. FE2E achieves major zero-shot gains, beating models trained on vastly larger datasets.

The paper unifies post-training methods for language models by showing that supervised fine-tuning and reinforcement learning are both special cases of one optimization process. It introduces a Unified Policy Gradient Estimator and a Hybrid Post-Training algorithm, which mix human data and model rollouts. Tests show strong gains in reasoning benchmarks.
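The unification claim has a simple concrete core: on human-written data, a supervised fine-tuning gradient step is the same update as a REINFORCE step whose reward is fixed to 1. A minimal sketch of that identity for a toy softmax policy (an illustration of the idea, not the paper's actual estimator):

```python
import math

def softmax(z):
    # Numerically stable softmax over a list of logits.
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def sft_grad(z, a):
    # Gradient-ascent direction on log pi(a): onehot(a) - softmax(z).
    p = softmax(z)
    return [(1.0 if i == a else 0.0) - p[i] for i in range(len(z))]

def reinforce_grad(z, a, r):
    # Single-sample REINFORCE estimator: r * grad log pi(a).
    p = softmax(z)
    return [r * ((1.0 if i == a else 0.0) - p[i]) for i in range(len(z))]

z = [0.5, -1.0, 2.0]   # logits of a toy 3-action policy
human_action = 1       # the "demonstration" token from human data

# SFT on the human action == REINFORCE on that action with reward 1.
assert all(
    abs(s - g) < 1e-12
    for s, g in zip(sft_grad(z, human_action),
                    reinforce_grad(z, human_action, r=1.0))
)
```

Seen this way, mixing human data with model rollouts is just varying where the samples come from and what reward they carry, which is the lever the hybrid algorithm turns.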

🛠️TOP TOOLS

PromptPal - AI prompt management platform designed to streamline the process of creating, organizing, and collaborating on AI-generated content.

tl;dv - AI-powered meeting assistant designed to streamline the entire meeting process, from recording to follow-up.

GPT Researcher - open-source autonomous agent designed to revolutionize online research tasks.

Designify - AI-powered web application that transforms ordinary photos into professional-quality designs.

Playground AI - AI image generation and editing platform built around its own text-to-image models.

📲SOCIAL MEDIA

Ilya Sutskever warns we're on a fast path to superintelligence, where AI will surpass human reasoning, coding, and math skills—replacing most jobs. He likens AI's societal impact to politics: unavoidable. Eric Schmidt echoes this urgency, highlighting AI agents, infinite memory, and self-improving systems.

Together, they argue AGI may arrive within 3–6 years, demanding rapid adaptation in law, energy, education, and governance.

🗞️MORE NEWS

  • OpenAI explains that language models "hallucinate"—make things up—because they’re trained to guess answers instead of admitting uncertainty. Current scoring systems reward guessing, encouraging false confidence. Fixing evaluations could reduce these errors.

  • xAI introduces a new Grok feature for PDFs. Users can highlight any section and click "Explain" for help understanding, or "Quote" to ask targeted questions—making it easier to interact with complex documents.

  • Alibaba’s Qwen team unveils Qwen3-Max-Preview, a massive 1-trillion-parameter model. It outperforms their previous best in conversations, tasks, and instruction following. Available now on Qwen Chat and Alibaba Cloud. Bigger surprises are coming.

  • Geoffrey Hinton warns AI will drive mass unemployment and boost profits for the wealthy, blaming capitalism—not AI. He criticizes tech’s short-term focus and dismisses universal basic income as a solution lacking human dignity.

  • Apple is being sued by authors Grady Hendrix and Jennifer Roberson for allegedly using pirated books to train its AI. The lawsuit claims Apple relied on the Books3 dataset and scraped websites using Applebot.

  • Microsoft Azure is experiencing higher latency due to multiple undersea fiber cuts in the Red Sea. While traffic through the Middle East is disrupted, Azure has rerouted data and core services remain operational.

  • AI safety pioneer Roman Yampolskiy predicts 99% unemployment by 2030, as AI replaces nearly all jobs. He warns retraining won't help and urges society to prepare for a future without traditional work or meaning.

  • OpenAI warns that unauthorized sales of its equity—through SPVs, tokens, or contracts—are invalid and may violate securities laws. Any equity transfer requires written consent, and violators risk legal action and financial loss.

  • Daniel Edrisian announces that his team behind Alex, an AI coding agent for iOS/MacOS, is joining OpenAI’s Codex team. They aim to scale their mission of helping developers create, now within OpenAI.
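OpenAI's hallucination explanation above boils down to expected-value arithmetic: if a benchmark scores only accuracy, abstaining earns zero while even a wild guess has positive expected score, so bluffing is always the "rational" strategy for the model. A toy sketch of the incentive (illustrative numbers, not OpenAI's actual scoring rule):

```python
def expected_score(p_correct, wrong_penalty):
    """Expected score for answering, when abstaining ("I don't know") scores 0."""
    return p_correct * 1.0 - (1.0 - p_correct) * wrong_penalty

# Accuracy-only grading: wrong answers cost nothing, so even a
# 10%-confidence guess beats abstaining -- the eval rewards bluffing.
assert expected_score(0.10, wrong_penalty=0.0) > 0.0

# Penalize wrong answers as heavily as correct ones are rewarded: the same
# low-confidence guess now has negative expected score, and abstaining wins.
assert expected_score(0.10, wrong_penalty=1.0) < 0.0
```

Under the penalized scheme, guessing only pays when confidence clears a threshold, which is why changing how evaluations grade uncertainty could reduce confident fabrication.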

What'd you think of today's edition?
