Microsoft Unveils Copilot Tuning Tools
PLUS: Google Brings NotebookLM To Mobile, Salesforce Launches Agentforce In Slack and more.

Prompt, run, edit, and deploy full-stack web and mobile apps
Say hello to Bolt.new, your AI-powered web development agent that brings full-stack coding to your browser with no setup required. Built on StackBlitz’s WebContainers, Bolt lets you build, edit, and test web applications in real time with simple, chat-based prompts. Just tell the agent what you want to build or change, and watch your code update instantly.
Bolt supports today’s most popular web languages and frameworks, so you can get started fast whether you’re building a React app, deploying with Netlify, or integrating a backend with Supabase for auth, storage, and more. Need a mobile version too? Bolt works with Expo to streamline mobile app development.
Whether you’re a solo builder or part of a product team, Bolt is your fast lane to production-ready code right from your browser.
Today:
Microsoft Unveils Copilot Tuning Tools
Nvidia Plans AI Supercomputer In Taiwan
GitHub Copilot Adds AI Agent
Google Brings NotebookLM To Mobile
Salesforce Launches Agentforce In Slack
Microsoft’s Build 2025 keynote rolled out Copilot Tuning, a simple drag-and-drop way for firms to train Copilot on their own files. It also unveiled multi-agent teamwork, letting different AI helpers share tasks under human control. Developers gain new toolkits, model choices, and security controls, while the Copilot Wave 2 apps and first reasoning agents enter general release. The goal: make tailored, secure workplace AI faster, cheaper, and easier for every business.
Why this matters to the AI world
Hands-on customization for everyone – Non-technical staff can now fine-tune (adjust) strong models with company data, speeding real-world AI use.
Rise of cooperating agents – Built-in multi-agent orchestration (helpers working together) shows a shift toward AI “teams” tackling bigger, linked tasks.
Secure, open ecosystem – Bring-your-own-model options plus Entra identity and Purview safeguards spotlight a broader push for safe, plug-and-play enterprise AI.
Nvidia will build Taiwan’s first AI supercomputer, deepening its ties with Foxconn and TSMC, the island’s manufacturing giants. Announced at Taipei’s Computex conference, the plan comes with an open server platform letting partners craft semi-custom AI hardware and ease tariff-hit supply chains. CEO Jensen Huang positioned the move as a way to anchor production, speed model training, and broaden access to Nvidia’s chip designs while keeping pace with AI demand.
Why it matters for AI
More compute, faster progress – A new supercomputer adds huge processing power in Asia, helping researchers and companies train bigger, better models without long waits.
Open, flexible hardware – By letting others tweak its server design, Nvidia encourages specialized chips and competition, potentially lowering costs and spurring fresh AI innovations.
Supply-chain resilience – Locating advanced infrastructure in Taiwan and partnering with Foxconn and TSMC diversifies production and reduces geopolitical or tariff-related bottlenecks for the global AI industry.
GitHub has added a hands-on AI helper to Copilot that can be told to fix bugs, add features, or improve docs. Once assigned, the agent spins up a private virtual machine, clones the code, and works through tasks, logging its reasoning and saving progress. When done, it requests a review and refines its work based on feedback. Available to Copilot Enterprise and Pro Plus users, the agent joins rivals from Google, OpenAI, and others.
Why this matters to AI
Practical autonomy – The agent tackles real coding chores end-to-end, proving AI can manage full software tasks rather than just suggest snippets.
Platform integration – Embedding the helper inside GitHub signals a shift toward built-in, turnkey AI tools that every developer can tap without extra setup.
Impact on workflows and roles – Faster bug fixes and feature work may raise productivity but also force teams to rethink testing, oversight, and developer job scopes.
🧠RESEARCH
Qwen3 is a new family of language models that combines fast response and deep reasoning in one system. It supports smarter use of computing power with a "thinking budget" and performs well across tasks like coding and math. It now understands 119 languages and is open-source for community use.
GuardReasoner-VL is a new safety system for vision-language models (VLMs) that reasons before moderating content. It was trained on a large, diverse dataset and improved using reinforcement learning. The model balances accuracy and efficiency, outperforming others by over 19% in safety tasks. Both code and models are publicly available.
MMLongBench is a new benchmark designed to test how well vision-language models handle long inputs—up to 128,000 tokens of images and text. It includes over 13,000 examples across diverse tasks and image types. Results show current models still struggle with long-context reasoning, revealing big opportunities for improvement in this area.
🛠️TOP TOOLS
GeoSpy AI - AI-powered geolocation tool that analyzes images to determine where they were taken, without relying on metadata or GPS information.
Teach Anything - AI-powered educational platform that provides instant answers and explanations on a wide range of topics.
Cleanvoice AI - Podcast editing tool that leverages artificial intelligence to streamline the post-production process for audio and video content creators.
Huberman AI - Provides users with easy access to the wealth of information from the Huberman Lab podcast.
HeadlinesAI - Generates compelling headlines for various content platforms.
📲SOCIAL MEDIA
this professor has some strong opinions about LLMs.
I asked o3 to go through it line by line and see if there's any faulty reasoning there.
below is the response.
between the professor and o3, who shows better reasoning abilities?

o3 model: Line-by-line takedown
1. “It is […]
— Wes Roth (@WesRothMoney)
4:35 AM • May 19, 2025
🗞️MORE NEWS
Google launched stand-alone NotebookLM apps for Android and iOS, bringing its AI-powered note tool to mobile. Users can now access smart summaries, AI audio overviews, offline playback, and easily add web or video sources on the go.
Salesforce launched Agentforce in Slack, introducing AI “digital teammates” that handle tasks like onboarding, customer insights, and workflow automation. Unlike broad assistants, these specialized agents deliver faster results, save time, and integrate deeply with Slack — aiming to challenge Microsoft Copilot and Google’s Gemini in workplace AI.
Microsoft launched Discovery, an AI platform that helps scientists accelerate research using specialized agents and graph-based reasoning. Built on Azure, it enabled a coolant breakthrough in 200 hours, aiming to transform R&D across industries.
Nvidia unveiled humanoid robotics tools, RTX Pro Blackwell servers, and NVLink Fusion tech for custom AI infrastructure. The goal: power agentic AI and data centers, pushing toward the trillion-dollar physical AI future.
Microsoft is now hosting xAI’s Grok 3 and Grok 3 Mini models on Azure AI Foundry, offering them to customers with full Microsoft support. The move signals growing tensions with OpenAI as Microsoft backs rival models.
A Korean study found that Sybil, an AI model, can predict lung cancer risk from a single low-dose CT scan with up to 86% accuracy. It could help target screening, especially among rising non-smoker cases in Asia.
What'd you think of today's edition?