• NATURAL 20

Microsoft Custom AI Chips, Meta's AI Network, Microsoft 365 Copilot Features

Microsoft's strategic move in the AI industry with the introduction of Cobalt and Maia chips

Today:

Microsoft launches AI chips as developers seek Nvidia alternatives

Microsoft just showed off its own AI chips, aiming to shake up a market where Nvidia's been running the show. They're rolling out two chips next year: Cobalt, a general-purpose one, and Maia, made specially for AI. This move is big news at their Ignite shindig, especially since Nvidia's chips are in such high demand that even Microsoft had to look elsewhere to keep their AI stuff running smoothly.

Their Maia chip is tailored for OpenAI's big brain, GPT. Microsoft's been doing more DIY with their data center gear since 2016, and these chips are their latest and boldest move. It's all about making things work better and more efficiently, which could mean better deals for Azure customers and fatter margins for Microsoft.

The new chips got a thumbs up from OpenAI's head honcho, Sam Altman. He's all about how this could lead to smarter models and cheaper costs for users. Up until now, OpenAI's been relying on Nvidia's hardware. Microsoft's Maia will be part of their Bing, Microsoft 365, and Azure OpenAI services. Customers won't interact with it directly, but it'll be working its magic behind the scenes.

Rani Borkar from Microsoft says they'll eventually roll Maia out for public cloud customers, but they're starting with internal testing. Microsoft's been in the chip-making business for a while with Xbox and HoloLens, but they only got serious about Azure-specific chips in 2020.

Meta’s engineers on building network infrastructure for AI


Meta's goin' all-in on AI, rolling out some serious hardware and software to keep up. They've got this new chip called MTIA v1, built just for AI, and they're whipping up fancy AI models like Llama 2. They're also tinkering with tools for coding with AI, called Code Llama.

They've got a couple of brainy types, Jongsoo Park and Petr Lapukhov, who are sweating the details on how to get their networks ready for these monster AI models. It's not just about more power; it's about smarter networks.

Then there's Hany Morsy and Susana Contrera, who've been shifting Meta's network from old-school CPU clusters to GPUs. They're using RoCE (RDMA over Converged Ethernet) networks in a CLOS topology, which is basically a multi-stage switch fabric that gives every GPU a fast path to every other one.
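Meta hasn't published the exact dimensions of these fabrics, but the capacity math for a folded Clos (a "fat-tree") is easy to sketch. A toy calculator, assuming a classic 3-tier fat-tree built from identical k-port switches (the function and field names here are illustrative, not Meta's):

```python
def fat_tree_capacity(k: int) -> dict:
    """Capacity of a classic 3-tier fat-tree (folded Clos) built
    from identical k-port switches. k must be even."""
    assert k % 2 == 0, "fat-tree radix must be even"
    pods = k                       # a k-port design yields k pods
    hosts_per_pod = (k // 2) ** 2  # k/2 edge switches * k/2 host ports each
    return {
        "pods": pods,
        "hosts": pods * hosts_per_pod,      # k^3 / 4 hosts total
        "core_switches": (k // 2) ** 2,
        "pod_switches": pods * k,           # k/2 edge + k/2 agg per pod
    }

# Tiny example: 4-port switches support 16 hosts; datacenter-class
# 48-port switches support 27,648 hosts at full bisection bandwidth.
print(fat_tree_capacity(4)["hosts"])   # 16
print(fat_tree_capacity(48)["hosts"])  # 27648
```

The point of the topology is that every host-to-host path crosses the same number of hops, so GPUs doing all-to-all collective operations don't bottleneck on any single link.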

Zhaodong Wang and Satyajeet Singh Ahuja are showing off Arcadia, their crystal ball for predicting how AI systems will run in the future. It's like a test track for AI, letting them see how different setups will perform before they roll them out for real. Meta's pulling out all the stops to make sure their AI can run fast, run smart, and keep growing.

Microsoft 365’s Copilot AI moves out of beta and into… everywhere

Microsoft 365's Copilot AI is stepping up its game: no longer just a beta thing, but a big deal in Microsoft's world. They're planning to roll it out next year, making it a regular feature in Office apps like Outlook and Teams. The idea is to have this AI sidekick in all these apps, helping out with stuff like taking notes during meetings and offering insights afterward.

This Copilot isn't just for show, it's got some serious skills. For instance, in Teams, it's gonna help turn spoken ideas into visuals on Whiteboard, and then summarize it all so you can share it easily. It's like having a super assistant who can keep track of what's said in meetings, and even make sense of it all for you.

Now, Outlook and Word are also getting in on the action. Outlook's gonna be smart enough to summarize meetings, plan follow-ups, and get a grip on everyone's schedules. And Word? It's gonna make tracking changes a breeze, especially if someone's been messing with your drafts.

Microsoft's also mixing in some of its other tech, like Loop and Power Automate, to make everything more integrated and, well, smarter. And they're not stopping there. They're bringing in all sorts of plugins and connectors to beef up Copilot's brainpower.

An AI Doctor In A Box Coming To A Mall Near You

Adrian Aoun's company, Forward, is shaking up healthcare with its AI-powered CarePods, set to pop up in malls across the country. Think of these as doctor's offices in a box, but without the actual doctor. For $100 million, they're rolling out 25 of these high-tech kiosks where you can do your own health check-ups, guided by a robotic voice. You'll handle stuff like body scans and blood pressure readings solo.

Here's the deal: unlimited pod use costs $99 a month, but you can't mix it with Forward's regular doctor visits. Speaking of which, Forward isn't exactly a small fry – they're in 17 clinics across 9 states and D.C., charging $149 a month for virtual and in-person care.

Some big names are backing this idea, like SoftBank and Samsung. They think CarePods could change how we handle routine health checks, kind of like how we maintain our cars. But, there are skeptics. Some experts worry that this tech-heavy approach might miss the human touch needed in healthcare.

NVIDIA Introduces Generative AI Foundry Service on Microsoft Azure for Enterprises and Startups Worldwide

NVIDIA's rolling out a slick new AI foundry service, and they're teaming up with Microsoft Azure to make it happen. This service is like a one-stop shop for businesses, big or small, to create their own AI models that can do some pretty nifty things like smart searches and generating content.

The cool part? NVIDIA's packing in a bunch of their own AI tech, like their AI Foundation Models and this thing called the NeMo framework. Plus, they're using their DGX Cloud, which is basically a powerhouse for AI stuff. Companies can use all this to cook up their own AI models and then launch them with NVIDIA's AI Enterprise software.

Big names like SAP, Amdocs, and Getty Images are already on board, crafting their custom AI models with this setup. NVIDIA's CEO, Jensen Huang, is all about helping businesses use their unique data to make specialized AI models. And Microsoft's CEO, Satya Nadella, is hyped about how this partnership is bringing new AI capabilities to the cloud.

SAP's first in line to use NVIDIA's DGX Cloud on Azure. They're working on this AI copilot called Joule that's gonna make handling business tasks a breeze. Amdocs is also in the game, tuning their AI to boost services for communication companies.

Be My Eyes AI offers GPT-4-powered support for blind Microsoft customers

Microsoft's teaming up with Be My Eyes to give a boost to their customer service for blind and low-vision folks. Be My Eyes has this cool AI tool, Be My AI, powered by OpenAI's GPT-4, and it's getting hitched to Microsoft's Disability Answer Desk. This means folks with visual impairments can fix tech problems or update software without needing to call a human for help. The AI is quick, solving issues in about four minutes, way faster than chatting with a human agent. Most people (90%) didn't even feel the need to talk to a real person after using it.

Be My AI can describe photos, like showing how to set up a new computer, using GPT-4's brainpower. It chats in a way that makes sense and gives helpful advice for different problems. Jenny Lay-Flurrie at Microsoft is stoked about this, saying it's a big step for helping people with disabilities.

OpenAI's COO, Brad Lightcap, gave Be My Eyes some props for how they're using AI to make life better for folks with vision issues. Microsoft's not stopping here, though. They're on a roll with making their stuff more user-friendly for everyone, like with their new "Accessibility Assistant" in Microsoft 365 that flags if your writing is hard for some people to get.

New partnership aims to help doctors harness AI to diagnose patients

Elsevier, a big-shot in science data, is teaming up with OpenEvidence, a startup, to make AI a game-changer for doctors. This new tool, ClinicalKey AI, lets doctors use the latest medical research on the fly. It's huge because it could make healthcare better, cheaper, and more accessible.

Daniel Nadler from OpenEvidence says this isn't a zero-sum game; it's a win-win for everyone. Doctors feed in symptoms and get info from tons of medical journals, making decisions smarter. Elsevier's already testing this with 1,000 docs and plans to roll it out big time next year.

AI in medicine isn't just about using any AI tool; it's about blending AI smarts with super-specific medical data. This cuts down on mistakes and wild guesses. Danny Tobey, a doc and lawyer, thinks this will raise the bar in healthcare, especially for folks who don't have top-tier docs.

AI System Beats Chess Puzzles With ‘Artificial Brainstorming’

So, this computer whiz Tom Zahavy got back into chess during the 2020 lockdown. He was more into chess puzzles than playing the game itself. Turns out, these puzzles are not just brain teasers for humans; they're also a tough nut to crack for chess AIs. There's this famous puzzle by Sir Roger Penrose, where it's easy for a human to see a draw, but the chess computers are like, "Nah, black's winning." This got Zahavy thinking about the limits of these AIs.

Zahavy works at Google DeepMind, where they're big on making AI smarter. They took AlphaZero, their champ chess AI, and mixed it with up to 10 different AIs, each trained in their own way. This supergroup AI was better at handling those tricky puzzles than AlphaZero alone. It's like AI teamwork - if one method hits a snag, they try another.

Experts think this team-AI approach could solve big problems outside of chess too. Zahavy and his team noticed that while AlphaZero could play chess like a boss, it sometimes missed the bigger picture. It would get stuck on weird strategies because it didn't know how to fail and try something new.

To fix this, they trained AlphaZero with different starting positions, like those in Penrose puzzles. This helped it see the game from new angles. When they put this new AI to the test, it was like a chess prodigy - trying out new moves and solving puzzles that the old AlphaZero couldn't.
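DeepMind's diversified AlphaZero is far more involved than this, but the "brainstorming" control flow — try one specialized agent, and if its plan stalls, hand the position to the next — fits in a few lines. Everything below (the agent names, the toy positions and moves) is hypothetical, just to show the shape of the idea:

```python
def brainstorm(position, agents):
    """Try each specialized agent in turn and return the first plan
    that works. This mirrors the idea that a diverse team of policies
    recovers where a single policy gets stuck on an unfamiliar position."""
    for name, solve in agents:
        plan = solve(position)  # None means this agent is stuck
        if plan is not None:
            return name, plan
    return None, None

# Hypothetical agents: a standard policy that fails on fortress-style
# Penrose puzzles, and one trained from Penrose-like starting positions.
agents = [
    ("standard", lambda pos: None if pos == "penrose_fortress" else ["Qd5"]),
    ("penrose_trained", lambda pos: ["Kg2", "hold the fortress"]),
]
print(brainstorm("penrose_fortress", agents)[0])   # penrose_trained
print(brainstorm("normal_middlegame", agents)[0])  # standard
```

In the real system the "agents" are copies of AlphaZero trained with different objectives and starting positions, and the selection is learned rather than a fixed fallback order.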

Databricks’ New AI Product Adds A ChatGPT-like Interface To Its Software

Databricks, a big deal in the tech world, just made a major move in the AI game. They scooped up MosaicML for a whopping $1.3 billion back in June, a deal that's making waves. Databricks' CEO, Ali Ghodsi, reckons it's a smart buy and would've even shelled out more cash if he had to.

Now, Databricks is rolling out a new product called the Data Intelligence Platform. It's like adding some AI muscle to their existing data storage and analysis software, known as the lakehouse. This new thing lets people without coding chops — like your regular Joe — ask questions in simple English and get smart answers about their data. Think of it as a ChatGPT-lite, using MosaicML's brainy tech.

This new tool is a big deal because now more folks in a company can make sense of data without needing a PhD in computer science. It's already getting some real-world use, like with doctors at Tufts Medicine checking out patient data. Ghodsi is betting big that this is the future for all data platforms.

Adobe is using AI to break apart messy audio

Adobe's cooking up this nifty gadget called Project Sound Lift. It's like a magic wand for messy audio. You got a recording with clapping, chit-chat, or car honks all jumbled up? No sweat. This thing uses AI to pick apart those sounds and sort 'em out. Just toss your audio into the tool, pick what noises you wanna keep or ditch, and bam – you get neat, separated tracks.

You're not just stuck with the AI's choices either. Once it does its thing, you can mosey on over to Adobe Premiere Pro and fiddle with the tracks yourself. Crank up the volume on the stuff you care about and tone down the rest.

There are a couple of other gadgets out there that can split audio, but they're not as sharp as Adobe's when it comes to nailing specific sounds. And get this – the same tech Adobe's using? It helped bring a Beatles song back from the dead. Producers used it to clear up an old John Lennon tape, making his voice shine through over some rowdy piano tunes. Pretty cool, huh?
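Adobe hasn't said how Project Sound Lift's models work, but most AI source separation boils down to masking in a frequency representation: transform the mix, zero out the bins that belong to the unwanted source, transform back. A toy illustration with a fixed FFT mask — the part the neural net actually does is learning *which* bins belong to which source:

```python
import numpy as np

def split_by_frequency(mix: np.ndarray, cutoff_bin: int):
    """Toy 'source separation': split a signal into low- and
    high-frequency components by masking FFT bins. Real separators
    learn the mask; here it's a hard cutoff for illustration."""
    spectrum = np.fft.rfft(mix)
    low_spec = np.zeros_like(spectrum)
    low_spec[:cutoff_bin] = spectrum[:cutoff_bin]
    high_spec = spectrum - low_spec
    return (np.fft.irfft(low_spec, n=len(mix)),
            np.fft.irfft(high_spec, n=len(mix)))

# A low "voice" tone mixed with a higher "piano" tone
n = 1024
t = np.arange(n)
voice = np.sin(2 * np.pi * 10 * t / n)
piano = 0.5 * np.sin(2 * np.pi * 100 * t / n)
low, high = split_by_frequency(voice + piano, cutoff_bin=50)
print(np.allclose(low, voice, atol=1e-8))  # True: the tones separate cleanly
```

Real mixtures overlap in frequency, which is exactly why a learned, time-varying mask beats a fixed cutoff like this one.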

GPT-4V(ision) as A Social Media Analysis Engine

So, there's this new research paper about how GPT-4V (that's GPT-4 with vision, so it can look at images as well as text) can help us understand social media better. Social media's got all kinds of stuff - pictures, text, videos, you name it. The big question here is, can GPT-4V really get what's going on in these posts?

The team behind this paper, a bunch of smart folks like Hanjia Lyu and Jiebo Luo, tested GPT-4V on five key things: figuring out the mood of posts, spotting hate speech, calling out fake news, guessing who's who in terms of demographics, and pinpointing political leanings. They started with some basic number-crunching using known data and then took a deeper dive into specific examples.

Turns out, GPT-4V is pretty solid at this stuff. It's good at piecing together what's happening in pictures and text at the same time, knows a bunch about different cultures and contexts, and has a lot of general smarts. But it's not perfect. It trips up when dealing with social media in multiple languages and can't always keep up with the latest internet trends. Plus, it sometimes gets facts wrong, especially about celebs and politicians.

JARVIS-1: Open-world Multi-task Agents with Memory-Augmented Multimodal Language Models

Traditional AI agents, even with recent advances, still struggle in three big areas. First, they have a hard time understanding and planning based on different kinds of inputs, like pictures, videos, and spoken instructions. Second, they're not the best at making long-term plans - they need to chat and think things through, which is hard for them. Lastly, they need to get better at learning continuously, figuring out new tasks on their own, and getting better over time.

Enter JARVIS-1. This AI agent is different because it's great at making plans for long tasks using various inputs, like images and language, and then turning those plans into actions in Minecraft. It uses a couple of fancy models: one called MineCLIP to understand different types of tasks and situations, and a language model to plan out actions.

JARVIS-1 doesn't just follow instructions; it can come up with its own tasks (self-instruct) and gets better over time by storing its experiences in this memory. This is huge because it means the AI can keep learning and improving by itself.
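JARVIS-1's real memory keys on multimodal state and feeds a language-model planner, but the retrieve-plan-act-store loop it describes can be sketched with a simple word-overlap memory. All the names and tasks below are illustrative, not from the paper:

```python
class ExperienceMemory:
    """Toy memory-augmented agent store: keep past (task, plan,
    succeeded) episodes and retrieve the most similar *successful*
    one by word overlap with the new task."""
    def __init__(self):
        self.episodes = []

    def store(self, task: str, plan: list, succeeded: bool):
        self.episodes.append((task, plan, succeeded))

    def retrieve(self, task: str):
        words = set(task.lower().split())
        best, best_overlap = None, 0
        for past_task, plan, ok in self.episodes:
            overlap = len(words & set(past_task.lower().split()))
            if ok and overlap > best_overlap:
                best, best_overlap = plan, overlap
        return best  # None if nothing relevant ever succeeded

memory = ExperienceMemory()
memory.store("craft wooden pickaxe",
             ["chop tree", "craft planks", "craft pickaxe"], True)
memory.store("craft iron pickaxe", ["mine iron"], False)  # failures aren't reused
# A new task reuses the closest successful plan as a starting point
print(memory.retrieve("craft stone pickaxe"))
```

The payoff is the same as in the paper: each completed task enriches the memory, so later plans start from relevant prior experience instead of from scratch.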

The team tested JARVIS-1 in Minecraft with over 200 different tasks. The results were impressive - it did up to 5 times better than previous attempts. It's especially good at tough tasks, like making a diamond pickaxe, which it can now do with a 12.5% success rate. The longer it plays, the better it gets at these tasks.

Training of 1-Trillion Parameter Scientific AI Begins

Argonne National Laboratory in the US is working on this huge AI project called AuroraGPT. Think of it as a super-smart AI brain, loaded up with a ton of scientific info. They're using this beast of a supercomputer called Aurora to make it happen, and it's got these fancy Intel GPUs giving it the muscle.

Intel's teaming up with folks both in the US and around the globe to make this scientific AI dream a reality. The idea is to cram all kinds of scientific texts, results, and papers into this AI so that scientists can ask it questions and get quick answers. This could be a game-changer for research in all sorts of fields, like biology, cancer, and climate change.

Training this AI, AuroraGPT, is just kicking off and it's going to be a long haul. They're starting small with 256 of the supercomputer's nodes and plan to ramp up to all 10,000 of them.

They mention how Google's doing something similar with their own big AI model, and how these giant AI projects need a ton of memory and have to break down the training across lots of GPUs. Intel's also working with Microsoft to make sure everything runs smooth and fast as they expand the training.
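The details of AuroraGPT's setup aren't public, but "breaking the training across lots of GPUs" usually starts with data parallelism: each worker computes gradients on its own shard of the batch, the gradients are averaged (the all-reduce step), and every worker applies the same update. A minimal sketch with a one-parameter linear model, no real GPUs involved:

```python
def gradient(w, shard):
    """Gradient d/dw of mean((w*x - y)^2) over one worker's shard."""
    return sum(2 * (w * x - y) * x for x, y in shard) / len(shard)

def data_parallel_step(w, shards, lr=0.01):
    """One synchronous data-parallel step: each 'worker' computes a
    local gradient, the gradients are averaged (standing in for the
    all-reduce), and all workers apply the identical update."""
    grads = [gradient(w, shard) for shard in shards]
    avg = sum(grads) / len(grads)
    return w - lr * avg

# Data drawn from y = 2x, split evenly across 4 "workers"
data = [(x, 2 * x) for x in range(1, 9)]
shards = [data[i::4] for i in range(4)]

w = 0.0
for _ in range(200):
    w = data_parallel_step(w, shards)
print(round(w, 3))  # 2.0 -- same answer full-batch training would give
```

With equal shard sizes, the averaged gradient equals the full-batch gradient, which is why the distributed run converges to the same weights. At trillion-parameter scale this gets combined with tensor and pipeline parallelism, since the model itself no longer fits on one device.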

Underage Workers Are Training AI

Hassan, a 15-year-old kid from Pakistan, started working from his bedroom during the COVID lockdowns, but not just goofing off online – he was part of the big AI game, training algorithms for top AI companies. The deal is, humans like him label data to help machine learning algorithms learn their stuff. Hassan got into this gig through a site called Toloka, making a couple of bucks an hour – way more than the local minimum wage.

This kid's not alone. Loads of other minors are doing the same, fudging their age to get on these platforms. They're doing all sorts of tasks, from simple stuff like tagging pictures to heavy-duty content moderation, which means sorting through some pretty rough material.

Big names in tech outsource these tasks to places like Pakistan, India, Kenya – where folks are paid peanuts to do these jobs. The industry's booming, expected to hit over $17 billion by 2030. But the work's tough and the pay's lousy. Hassan calls it "digital servitude."

Some of these tasks get real personal, like uploading pics of your kid for AI training. And then there's the dark side – moderating content that's violent, explicit, or just plain disturbing. Hassan, who's now 18, still feels the weight of the stuff he saw as a minor.

Empowering the next generation for an AI-enabled world

Google DeepMind and Raspberry Pi Foundation are beefing up this program called Experience AI. It's this cool course for teachers to help kids aged 11-14 get the lowdown on AI. They're rolling this out all over the globe, aiming to get more kids prepped for an AI future.

Last year, this program hit 200,000 students, and now they're translating it into more languages because it's getting real popular. They're investing some serious cash – like a million pounds – to make this happen in more places, including training teachers.

They've got partners all over the map, like in Romania, Canada, and Kenya, each doing their own awesome stuff with tech and social change. The goal? Get more diverse voices into AI. That means scholarships, fellowships, and all sorts of programs to bring different people into the AI field.
