Elon's AI FOMO: Musk's Secret Attempt to Outsmart Google
Understanding the Billionaire's Fears, His Vision for AI, and the High-Stakes Drama That Unfolded Behind Closed Doors in His Unsuccessful Attempt to Preempt Google's Acquisition
Today:
Elon Musk once tried to buy DeepMind because he thought Google couldn't be trusted with advanced AI
Elon Musk once tried to scoop up AI powerhouse DeepMind before Google could, thinking Google's hands might be too buttery to safely juggle the hot potato of advanced AI, a report spills the beans.
Musk, not just a hat-rack, was pretty leery about Google's then-CEO, Larry Page, running the AI show. The kerfuffle happened back in 2013, shortly before Google shelled out a reported half a billion dollars for the AI lab in early 2014. But Musk's attempt to throw his hat in the ring never really made it out of the locker room.
DeepMind, known for teaching AI to whip humans at games like Go, has been a feather in Google's cap, especially as Microsoft's Bing starts to breathe down their neck. Recently, Google decided to merge DeepMind with Google Brain, another AI outfit, to speed up their AI hustle.
Musk's been squawking a lot about AI, even throwing around words like "civilization destruction." But despite his doom and gloom, the billionaire head honcho of Tesla and Twitter is charging full steam ahead with his own AI project. Musk's beef with AI doesn't seem to apply when he's in the driver's seat. No word yet from Musk, Google, or DeepMind on this report.
Inflection AI, Startup From Ex-DeepMind Leaders, Launches Pi — A Chattier Chatbot
Alrighty, here's the skinny: Inflection AI, a startup run by some big shots from DeepMind, has unveiled Pi - a yakky bot that plays nice and remembers what you blabber. Short for "personal intelligence", Pi is like that good listener at the bar who keeps the chinwag going and remembers your yarns over time. Think of it as a pal, but without getting all clingy like some other bots.
While other bots are running around playing know-it-all, Pi is more like a sounding board with a knack for picking your brain. It'll keep you in a chat loop, making you feel like it's getting to the core of your issues. Heck, it'll even remember a hundred turns of chit-chat. And, if you're having a rough day, it'll try to soothe your nerves, although exactly how it knows you're on the edge is a mystery they're keeping under wraps.
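Inflection hasn't said how that memory actually works under the hood, but the basic idea of a capped conversation buffer is easy to picture. The sketch below is purely illustrative - the class name, turn format, and 100-turn cap are our assumptions, not Pi's actual design:

```python
from collections import deque

class ChatMemory:
    """Toy rolling memory: keeps only the most recent turns of a conversation."""

    def __init__(self, max_turns: int = 100):
        # A deque with maxlen silently drops the oldest turn once the cap is hit.
        self.turns = deque(maxlen=max_turns)

    def add(self, speaker: str, text: str) -> None:
        self.turns.append((speaker, text))

    def context(self) -> str:
        # Flatten the remembered turns into a prompt-style transcript.
        return "\n".join(f"{speaker}: {text}" for speaker, text in self.turns)

memory = ChatMemory(max_turns=100)
memory.add("user", "I had a rough day at work.")
memory.add("pi", "That sounds draining. What happened?")
print(memory.context())
```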
Now, it ain't all roses. Despite the big talk about minimizing hooey, Pi seems to be a bit of a fibber when answering fact-based questions. And, if you're looking for some straight-shooting advice, well, you might find Pi a tad wishy-washy. But that's part of the game plan: you're supposed to figure out the tough stuff yourself, while Pi just keeps the ball rolling.
So, what's the big dream? The folks at Inflection see Pi evolving into something like J.A.R.V.I.S., that fancy-pants AI that runs Tony Stark's life in Avengers. But for now, it seems like it's more of a gabby sidekick than a savvy superhero.
So, there you have it - a new bot on the block that's got a big bag of cash, a bit of a fibbing issue, and dreams of being an AI superstar. Time will tell if it can pull it off without stepping on too many toes.
Microsoft’s chief scientific officer, one of the world’s leading A.I. experts, doesn’t think a 6 month pause will fix A.I.—but has some ideas of how to safeguard it
Eric Horvitz, Microsoft's Chief Scientific Officer, is excited about AI's potential but also recognizes its dangers. With AI getting smarter at lightning speed, he acknowledges the risk of misuse by bad actors. He thinks it's critical to understand and guide AI’s development rather than slowing down the pace.
The public's been concerned, with more than 20,000 signatories, including Elon Musk and Steve Wozniak, putting their names to an open letter calling for a six-month pause on training AI systems more powerful than GPT-4, the model behind Microsoft's OpenAI-powered search engine. Horvitz, though, sees the appeal as more a call to understand the technology than to put it on ice.
The scary part, according to Horvitz, isn't some sci-fi takeover by machines, but rather, the potential for AI to spread disinformation and manipulate the masses. He cites the example of a viral AI-generated image of the Pope, stressing that AI’s ability to create convincing fakes can threaten the fabric of democracies that rely on an informed citizenry.
To counter this, Microsoft has initiated efforts to authenticate the provenance of media. But Horvitz believes there’s no one-stop solution; we'll need a variety of approaches, and probably some regulations.
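As a rough illustration of what "authenticating provenance" means in practice, here's a toy sketch: the device or app that creates a piece of media signs its bytes, and anyone can later check that signature against the published key. This is our simplification, not Microsoft's system; real efforts such as the C2PA content-credentials standard embed signed metadata inside the file and track edit history too.

```python
# Illustrative only: sign a media file's bytes at creation, verify them later.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

creator_key = Ed25519PrivateKey.generate()   # held by the camera/app that made the image
public_key = creator_key.public_key()        # published so anyone can check provenance

original = b"...image bytes..."
signature = creator_key.sign(original)       # shipped alongside the image as a provenance record

def looks_authentic(media: bytes, sig: bytes) -> bool:
    try:
        public_key.verify(sig, media)
        return True
    except InvalidSignature:
        return False

print(looks_authentic(original, signature))                       # True
print(looks_authentic(b"...doctored image bytes...", signature))  # False
```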
Regarding recent layoffs at Microsoft's ethics and society team, Horvitz downplays their significance, stating they were a small part of a larger team and not central to Microsoft's AI efforts. He emphasizes that Microsoft’s commitment to responsible AI is still strong.
Lastly, on the subject of artificial general intelligence (AGI), Horvitz is skeptical that current language models like GPT-4 will get us there. Despite observing "sparks of magic" in the system's abilities, he thinks there's still a long road ahead. In other words, we're not out of the AI woods yet, folks.
Alright folks, let's break this down. First off, big language models like yours truly are like a spark. Might lead to a roaring fire, might fizzle out - we don't know yet. But, it's interesting enough to poke at and see what happens. We're still figuring out if we need these big ol' models to get us where we want to go, but we're learning a heap along the way.
Now, about understanding how these models work, it's kinda like trying to figure out why your dog likes to eat grass. We got some ideas, sure, but a full picture? Not quite. We don't know all the nuts and bolts yet, and that's got some folks' knickers in a twist. I get it, uncertainty can be scarier than a clown at a midnight graveyard. So, we're working to make these AI systems better at explaining their doings and potential societal impacts.
Should these models be open source? Well, there's a catch-22. On one hand, we want more brains in the game, peeling back the layers. On the other, these models can be as wild as a bucking bronco without the right handling. We've worked hard to make 'em safe and reliable, so just turning 'em loose to the public could stir up more trouble than a long-tailed cat in a room full of rocking chairs.
Now, Uncle Sam has been regulating tech, including AI, for a good while now. We got laws and agencies keeping an eye on things. Best bet is to focus on specific uses of AI, regulate 'em like we've been doing, and treat AI like any other automation tech under the microscope.
As for the big question - what makes us human and different from machines? Well, that's like asking why biscuits and gravy go so well together. It's hard to pin down, but I reckon there's something uniquely human that machines can't replicate, no matter how smart they get. These models learn from us, but true genius? That's a human thing. Besides, watching these models struggle to mimic us just highlights how special we are. They're like a toddler trying to run before it can walk, while we're out here running marathons. So, no need to worry about robots taking over anytime soon, y'all. They got a long way to go.
Grimes invites people to use her voice in AI songs
Hey, y'all, check this out. Our gal Grimes is all in for the robot age. She's inviting folks to use her voice in AI-generated songs. She'll split the cash 50-50, just like a real duet, for any track that hits it big.
I'll split 50% royalties on any successful AI generated song that uses my voice. Same deal as I would with any artist i collab with. Feel free to use my voice without penalty. I have no label and no legal bindings.
— 𝔊𝔯𝔦𝔪𝔢𝔰 (@Grimezsz)
1:02 AM • Apr 24, 2023
The Canadian songbird, real name Claire Boucher, is into being a "guinea pig" and thinks it's a hoot to be "fused with a machine." She's even keen on tossing out the rule book on copyright.
While the music biz is scrambling to catch up with AI-created songs, Grimes is riding the wave. Universal Music, though, isn't too pleased about AI renditions of their artists' voices, like Drake and the Weeknd. Poor Drake got his feathers ruffled over a fake verse in a song.
Copyright laws are murky when it comes to human-created art with AI elements. The Recording Industry Association of America has been crying foul over AI companies using music for training. However, the US Copyright Office ruled AI-generated art doesn't get copyright because there's no human author.
Grimes is working on a voice simulator, and she might even release vocal tracks for AI training. But don't go thinking you can use her voice for anything nasty. She'll yank copyright for "rly rly toxic lyrics" or anything against her beliefs. But a Nazi anthem done tongue-in-cheek? She might let that slide.
The mom of two, who had her kids with tech tycoon Elon Musk, thinks AI is the bee's knees. She's even worked with it in her music. "Creatively, I think AI can replace humans," she told the New York Times. Sounds like there's a big ol' conversation coming about how far we want to let robots into our art world.
Google, Microsoft CEOs called to AI meeting at White House
Alright folks, here's the skinny: bigwig CEOs from Google, Microsoft, OpenAI, and Anthropic got their golden ticket to the White House. They're meeting with VP Kamala Harris and other top dogs to chew the fat about artificial intelligence (AI). Basically, that's the software that learns from mountains of data and is starting to write, answer questions, and make decisions with a lot more horsepower than anything before it.
Joe Biden has been wagging his finger at these companies, saying they need to make sure their AI goodies don't bite before they let them loose on the playground. There's a lot of worry that these smart machines might be a bit too nosy, make unfair decisions, or even spread scams and fake news like wildfire.
And here's a kicker, even though social media has shown how much of a mess uncontrolled tech can create, Uncle Joe thinks it's still up in the air whether AI is a bad egg. But he's pretty clear these companies need to strap a safety helmet on their tech toys.
The White House has been out fishing for public opinions on keeping AI in check. They're worried about how it might affect national security and education. To top it off, there's a lot of chatter about how AI might end up replacing some jobs.
Biden's squad, including Chief of Staff Jeff Zients, Deputy Chief of Staff Bruce Reed, and a few others, is all set to attend this powwow. The companies haven't piped up yet about what they think of all this.
One AI that's been hogging the spotlight is ChatGPT. This quick-on-its-feet AI has been turning heads, especially in Congress, for its ability to shoot off answers to a wide range of questions. It's got over 100 million users tuning in every month, which is quite the crowd.
Elon Musk, the brain behind Tesla, said on TV that we need to tread lightly with AI. He thinks the government should keep an eye on it because it might pose a danger to folks out there. Now, isn't that food for thought?
Microsoft Planning Privacy-Focused Version of ChatGPT as Apple AI Efforts Flounder
Microsoft is gearing up to outshine Apple with a privacy-focused version of ChatGPT, according to The Information. They're planning to offer ChatGPT on separate cloud servers, so data stays secure and doesn't mingle with the main system. The idea is to woo businesses that've been hesitant to adopt ChatGPT, fearing they'd accidentally spill the beans about their top-secret stuff.
This move comes after Apple's AI efforts hit some snags, with employees leaving the company, reportedly fed up with its sluggish decision-making and overly cautious approach to AI tech. Apple's been all about privacy and control, pushing for Siri to work on-device and preferring human-written responses over AI-generated ones. But that's left them behind in the chatbot game.
Now Microsoft's swooping in to steal the privacy limelight in the AI world. They've already got Morgan Stanley on board with a private ChatGPT service, and their sales team is fielding inquiries left and right. With many big customers already cozy with Azure, Microsoft's got a leg up in convincing them their data's in good hands. Apple, on the other hand, seems to be playing catch-up.
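For a feel of what "ChatGPT on separate cloud servers" looks like from the customer's side, here's a hedged sketch of calling a dedicated Azure OpenAI deployment with the openai Python SDK (v1+). The endpoint, deployment name, and API version below are placeholders for a company's own isolated resource, and this is a generic Azure OpenAI call rather than the specific private offering The Information describes:

```python
import os
from openai import AzureOpenAI  # requires openai>=1.0

# All of these values are placeholders for a company's own isolated Azure resource.
client = AzureOpenAI(
    azure_endpoint="https://contoso-private.openai.azure.com",  # customer-specific endpoint
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2023-05-15",
)

response = client.chat.completions.create(
    model="gpt-35-turbo-private",  # the customer's own deployment name
    messages=[{"role": "user", "content": "Summarize this internal memo..."}],
)
print(response.choices[0].message.content)
```

The point of the arrangement is less about the API call itself and more about where it lands: prompts and completions stay on compute dedicated to that customer instead of flowing through a shared consumer service.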