
Google Unveils Gemini 3.1 Flash Live for Faster, More Natural AI Voice Conversations

PLUS: Google Simplifies Chat History Migration from Competitors, Meta Constructs Sustainable $1.5B AI Facility and more.

In partnership with

Here’s how I use Attio to run my day.

Attio is the AI CRM with conversational AI built directly into your workspace. Every morning, Ask Attio handles my prep:

  • Surfaces insights from calls and conversations across my entire CRM

  • Updates records and creates tasks without manual entry

  • Answers questions about deals, accounts, and customer signals that used to take hours to find

All in seconds. No searching, no switching tabs, no manual updates.

Ready to scale faster?

Hey there!

I want to make sure the content here is actually valuable to you. To do that, I need to know who's in the room! If you have a quick minute, drop your thoughts in this brief survey about your role, industry, and passions. Help me tailor this space entirely to your interests.

Today:

  • Google Unveils Gemini 3.1 Flash Live for Faster, More Natural AI Voice Conversations

  • OpenAI Streamlines Development Workflows with New Codex Plugins

  • Meta Introduces TRIBE v2: A Groundbreaking Digital Replica of the Human Brain

  • Google Simplifies Chat History Migration from Competitors

  • Meta Constructs Sustainable $1.5B AI Facility

Google just launched Gemini 3.1 Flash Live in preview through the Gemini Live API and Google AI Studio, and the pitch is pretty clear: developers can now build agents that see, listen, respond quickly, and hold more natural conversations in real time. Google is emphasizing lower latency, better reliability in noisy environments, stronger instruction-following, and support for more than 90 languages.

Google is aiming at something much more interactive here, where the model can stay inside the flow of a live conversation, understand tone and pacing, use tools during the exchange, and react more like a system that is present instead of one that is just called on demand. That sounds small on paper, but it is a huge difference in practice.

The part that makes this feel real rather than theoretical is the early use-case direction.

Google highlighted examples like voice-based design feedback in Stitch, multilingual companionship experiences, and RPG-style interactive characters, plus integrations with partners for things like WebRTC scaling and live audio/video pipelines. In other words, this is not being framed as just “a better chatbot.” It is being framed as infrastructure for live AI products.
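The announcement itself doesn't include code, but the shape of a Live API session is worth making concrete. The sketch below uses field names from the existing, publicly documented Gemini Live API; the model id is my assumption based on the announcement and may not match the preview's actual identifier.

```python
import json

# Sketch of a Gemini Live API session config. Field names follow the
# documented Live API; the model id is an assumption from the announcement.
session_config = {
    "model": "gemini-3.1-flash-live-preview",  # assumed id, check AI Studio
    "config": {
        "response_modalities": ["AUDIO"],  # stream spoken replies back
        "system_instruction": "You are a concise, friendly voice agent.",
        "speech_config": {
            "voice_config": {"prebuilt_voice_config": {"voice_name": "Puck"}}
        },
    },
}

print(json.dumps(session_config, indent=2))
```

In practice this config is passed when opening a bidirectional session (a websocket under the hood), over which the client streams microphone audio up and receives audio back in real time.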

OpenAI’s new Codex plugins are installable bundles for reusable Codex workflows. A plugin can package skills, optional app integrations, and MCP server configurations together, which means teams can create a repeatable setup once and then reuse it across projects or across an entire organization.

Instead of every developer or team reinventing the same agent setup over and over, OpenAI wants those workflows to become portable.

You can install curated plugins in the Codex app, open the plugin surface from the CLI with /plugins, and even use the built-in @plugin-creator skill to scaffold local plugins and test them in a marketplace setup.
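OpenAI hasn't published the manifest format in what's covered here, so every field below is hypothetical. It's only meant to make "skills, app integrations, and MCP server configs in one bundle" concrete:

```json
{
  "name": "team-release-workflow",
  "version": "0.1.0",
  "skills": ["changelog-writer", "release-notes-review"],
  "mcp_servers": {
    "issue-tracker": {
      "command": "npx",
      "args": ["-y", "example-issue-tracker-mcp"]
    }
  },
  "integrations": ["github"]
}
```

Whatever the real schema turns out to be, the idea is the same: one installable artifact that gives every developer on a team the identical agent setup.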

Once workflows, tools, approvals, and context can be bundled together cleanly, the agent stops being just an individual productivity trick and starts becoming a team asset. That is a much bigger shift.

Meta released TRIBE v2, which it describes as an AI model of human brain responses to sight, sound, and language. The demo and research materials position it as a multimodal foundation model that predicts fMRI brain responses, with Meta also releasing the paper, code, weights, and interactive demo for researchers.

The broader idea is “in-silico neuroscience,” meaning neuroscience done through computer simulation rather than only through slow, expensive lab experiments. Meta says the system is meant to act like a digital mirror of brain activity in response to images, audio, and language, which could help researchers explore hypotheses faster.
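TRIBE's architecture isn't described in this blurb, but the core "encoding model" idea behind predicting fMRI responses can be sketched in a few lines: map stimulus features to per-voxel responses with a regularized linear model. Everything below is synthetic and illustrative, not Meta's method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: 500 stimuli, 100-dim stimulus features (in real work,
# embeddings from a vision/audio/language model), 200 "voxels" of fMRI signal.
n_stimuli, n_features, n_voxels = 500, 100, 200
features = rng.standard_normal((n_stimuli, n_features))
true_weights = rng.standard_normal((n_features, n_voxels)) * 0.1
responses = features @ true_weights + 0.5 * rng.standard_normal((n_stimuli, n_voxels))

# Ridge regression: closed-form encoding model mapping features -> voxels.
lam = 10.0
W = np.linalg.solve(
    features.T @ features + lam * np.eye(n_features),
    features.T @ responses,
)

# Score on held-out synthetic stimuli with the standard metric:
# correlation between predicted and measured response, per voxel.
test_features = rng.standard_normal((50, n_features))
test_responses = test_features @ true_weights + 0.5 * rng.standard_normal((50, n_voxels))
pred = test_features @ W

def voxelwise_corr(a, b):
    a = a - a.mean(axis=0)
    b = b - b.mean(axis=0)
    return (a * b).sum(axis=0) / (np.linalg.norm(a, axis=0) * np.linalg.norm(b, axis=0))

corr = voxelwise_corr(pred, test_responses)
print(f"mean held-out voxel correlation: {corr.mean():.2f}")
```

A multimodal foundation model like TRIBE replaces the linear map with something far more expressive, but held-out voxelwise correlation remains the common yardstick.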

Even if this stays mostly in research for a while, it is still a meaningful signal.

AI labs are not only racing to build better assistants and better coding agents.

They are also trying to build models that explain perception itself, which could eventually matter for neuroscience, brain-computer interfaces, and how future systems are designed to better match human cognition.

🧠RESEARCH

Artificial intelligence models that try to improve their own work—like rewriting code until it works—often fail. This paper shows that success depends on hidden human choices, such as how much past feedback the AI remembers. The researchers provide a practical guide to help engineers build more reliable, self-improving AI systems.

This research introduces a fast new method for generating 3D scenes from flat 2D images. Instead of building a complex, traditional 3D model, the system uses "3D-aware" artificial intelligence to guess the depth and shape of objects. This allows computers to instantly and accurately create realistic new camera angles.

Training AI to use a computer like a human requires watching humans work. Researchers created a massive collection of 55 hours of screen recordings, capturing every mouse click and decision across 87 apps. This detailed data helps teach AI assistants how to navigate complex software and complete everyday digital tasks.


🗞️MORE NEWS

Google Gemini App Switch: Google updated the Gemini app to let users easily import their chat history and personal details from other artificial intelligence tools. By uploading a single file or copying a prompt, users can keep their saved preferences and pick up right where they left off. This means you do not have to start from scratch to teach the AI what you like when switching to Gemini.

Meta’s El Paso Data Center: Meta is building a $1.5 billion computing facility in El Paso, Texas, specifically designed to power advanced artificial intelligence. The center will run entirely on renewable electricity and use a water-free cooling system for most of the year. This setup helps Meta train complex AI systems while minimizing its environmental footprint.

Google Research TurboQuant: Google researchers developed "TurboQuant," a new method that drastically shrinks the massive memory files needed to run large artificial intelligence systems. It uses advanced math to pack data tightly together without losing accuracy or performance. This breakthrough makes AI much faster and cheaper to operate, especially for search engines and complex text generators.
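TurboQuant's actual math isn't described here, so as a baseline for what vector quantization buys you, here is plain symmetric int8 scalar quantization of embedding vectors: 4x smaller than float32, at the cost of a small reconstruction error. This is the textbook technique, not TurboQuant's method.

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-in for a search index: 1000 embedding vectors of dimension 256.
embeddings = rng.standard_normal((1000, 256)).astype(np.float32)

def quantize_int8(x):
    # One scale per vector: map the largest |value| in each row to 127.
    scale = np.abs(x).max(axis=1, keepdims=True) / 127.0
    q = np.round(x / scale).astype(np.int8)
    return q, scale.astype(np.float32)

def dequantize(q, scale):
    return q.astype(np.float32) * scale

q, scale = quantize_int8(embeddings)
recon = dequantize(q, scale)

# Storage drops roughly 4x (int8 vs float32, plus one float scale per vector)...
bytes_fp32 = embeddings.nbytes
bytes_int8 = q.nbytes + scale.nbytes

# ...at the cost of a small relative reconstruction error.
rel_err = np.linalg.norm(recon - embeddings) / np.linalg.norm(embeddings)
print(f"{bytes_fp32} -> {bytes_int8} bytes, relative error {rel_err:.4f}")
```

Methods like TurboQuant aim to push that error down (or eliminate its impact on downstream accuracy) while keeping the compression, which is what makes quantization viable for latency-sensitive systems like search.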

Suno v5.5 Release: The music generation tool Suno released its newest version, which lets users safely copy their own voice to sing the songs they generate. It also introduces custom features that learn a user's unique musical style from their previous uploads to the platform.

Mistral AI Voxtral TTS: Mistral AI launched Voxtral, a new system that turns typed text into highly realistic, human-sounding speech. It can closely mimic a person's voice, accent, and emotion using just a three-second audio clip of them speaking. Test listeners preferred Voxtral's natural sound and emotional range over competing voice-copying tools.

OpenAI Ads Revenue: OpenAI has officially reached a $100 million yearly revenue pace from its new advertising tests. This means the company is quickly finding highly profitable ways to show ads to users interacting with its artificial intelligence tools. It marks a major step in the tech industry's push to turn AI chatbots into reliable money-making platforms.

Apple iOS 27 Siri Update: Apple plans to open up Siri in the upcoming iOS 27 software update to work alongside competing artificial intelligence assistants. Users will be able to connect outside tools, like Google's Gemini or Anthropic's Claude, directly into their iPhone's main voice system. This move gives iPhone owners much more freedom to choose which AI answers their daily questions.

What'd you think of today's edition?
