
BOARD RESIGNS! Sam Altman back inside OpenAI. Satya Nadella appears. Sam building NVDA competitor?

Sam Altman's reinstatement, including insider leaks, boardroom politics, and the rumored negotiations with a Middle East fund for AI chip development

Today:


Sam Altman, the big cheese at OpenAI, got booted out but now he's back in the game. The board, who gave him the boot, agreed to step down. Sam's at OpenAI's digs, and Satya Nadella from Microsoft's running the show for these talks. Sam hasn't said "yes" to coming back yet, but if the deal's sweet, he might be back as CEO by Monday.

Jimmy Apples seems to know what's cooking before anyone else. Folks think he's an insider at OpenAI, leaking stuff for kicks. Back in October, he hinted at some bad vibes at OpenAI, and it turns out he was spot on. He even called Satya Nadella getting involved.

Altman Sought Billions For Chip Venture Before OpenAI Ouster

The Computer Chip

Sam Altman, the guy who recently got the boot as the head honcho of OpenAI, is cooking up something new in the AI world. He's been chatting up some investors about launching a fresh AI venture. Word on the street is that Greg Brockman, another big name from OpenAI, might join him in this new gig. But, here's the kicker: nobody's really sure what this new project is all about yet.

He's been rubbing elbows with some top brass in the semiconductor industry, including the folks at Arm. They're brainstorming some new chip designs that could make it cheaper for companies like OpenAI to run their big language models. But don't hold your breath; this kind of tech magic takes years to pull off. It's a bit fuzzy whether Altman's doing this for OpenAI or his new venture.

Elon Musk says the risk of advanced AI is so high that the public needs to know why OpenAI fired Sam Altman

Elon Musk is making waves again. He's calling out OpenAI, a top AI company, for firing their CEO, Sam Altman, without giving a clear reason. Musk, who used to be on OpenAI's board but left in 2018, is worried about the dangers AI might bring. He thinks the public deserves to know why Altman got the boot, especially with AI being such a big deal.

OpenAI didn't say much when they let Altman go. They just mentioned losing trust in his leadership. But there's a lot going on behind the scenes. Some folks think the firing might be because of a split in the company. Some leaders, like Altman, wanted to push AI tech fast and hard. Others, like Ilya Sutskever, another big name at OpenAI, wanted to pump the brakes, worried about AI's risks.

Sutskever even set up a special team, the "Superalignment" squad, to make sure their new AI stuff, like GPT-4, won't cause trouble. With all this drama, Musk's own AI ventures might end up benefiting. He's hinting that things aren't all rosy at OpenAI and that we should all keep an eye on what's happening there.

Billionaire ‘techno-optimist’ Marc Andreessen is tweeting up a storm about Sam Altman and OpenAI and he’s furious at the AI ‘doomer’ movement

Marc Andreessen, a big-time tech investor, was all over Twitter the weekend before Thanksgiving, basically saying "I knew it!" about Sam Altman getting booted from OpenAI. He's been a huge cheerleader for fast-moving tech, what some folks call "accelerationism," for a long time. Back in October, he even wrote this massive piece, like a poem almost, slamming anyone who's scared of tech taking jobs or messing up society.

Over that weekend, Andreessen was all about "effective accelerationism," a fancy term for pushing innovation and capitalism to the max, even if it shakes things up now. He retweeted some posts that basically said the doomer crowd had their chance and blew it, and now they should be kicked out of the serious business talks.

This whole drama reminded folks of when Silicon Valley Bank went belly-up earlier in the year. Another big venture capitalist, David Sacks, who's buddies with Musk, said Altman should get his job back, the board should be overhauled, and OpenAI should switch from a non-profit to a regular company. And he thinks Elon Musk should get a piece of the pie for being an early investor.

What Sam Altman said about AI at a CEO summit the day before OpenAI ousted him as CEO

Altman's been a key player in this AI boom, especially since OpenAI dropped ChatGPT. He's been jet-setting, chatting with world leaders about AI's good and bad sides. He's pretty much become the AI poster guy.

Before he got shown the door, Altman's last gig as CEO was at the APEC CEO summit in San Francisco, which is where OpenAI calls home. On stage, he chatted with Laurene Powell Jobs, Steve Jobs' widow, about balancing AI regulation and keeping up with tech changes.

Altman shared a dinner story with Yuval Noah Harari, the guy who wrote Sapiens. Harari's pretty wary of AI, even suggesting jail time for tech heads who let AI trick people. Altman gets the concern, saying AI's progress has been a wild ride.

According to Altman, chatting with ChatGPT feels like talking to a creature, not just a tool, though people get used to it. He's thinking about how to make AI safe as it gets stronger. He admits today's AI isn't super strong, but it's on a roll. The big questions are about setting limits, deciding who's in charge, and making it work worldwide.

Altman's been wrestling with these issues, believing the world will step up and do the right thing. He thinks today's AI doesn't need heavy regulation, but down the line, when AI can match a company, a country, or the world, we might need some global oversight.

Meta disbanded its Responsible AI team

Meta, the big tech company, has ditched its Responsible AI team. Most folks from this team are now joining Meta's squad working on creating new AI stuff, while some are helping build Meta's AI backbone.

Meta's always been talking up a big game about making AI the right way, with a focus on being accountable, clear, safe, and keeping things private. But now, they're shaking things up by spreading this Responsible AI team across different parts of the company. Jon Carvill, speaking for Meta, says they're still all about safe and responsible AI, but they're just mixing up how they do it.

Earlier this year, there was already some shuffling and job cuts in this team, leaving it pretty thinned out. This Responsible AI team was supposed to make sure Meta's AI wasn't messing up, like making sure it wasn't just learning from a narrow set of info and avoiding goofs like bad translations or dodgy content showing up on its platforms.

A Stability executive quit after saying that generative AI 'exploits creators.' It could spell trouble for AI companies.

Ed Newton-Rex, the big audio boss at Stability AI, just up and quit. He's got beef with how the company wants to use other people's stuff—like art and music—for free to train their AI. This is a hot issue in the tech world right now, with a lot of arguing over whether companies should cough up cash for using copyrighted material to make their AI smarter.

Stability AI and other tech heavyweights are scared they'll lose big bucks if they have to pay up. These AI models, like ChatGPT and Stable Diffusion, get their smarts from a ton of info grabbed from the web, including creative works. Some artists are ticked off, saying their work's being used without permission and it's even replacing them. They're taking companies like Stability and OpenAI to court over it.

Newton-Rex, who's been in the AI game for 13 years, says he's all for AI, but not the kind that rips off creators. He even built an AI music-maker at Stability using legally okay music. But now he's out, saying he can't back a system that uses people's work without asking.

New Tool for Building and Fixing Roads and Bridges: Artificial Intelligence

Pennsylvania is getting smart with fixing their bridges and roads by using artificial intelligence, or A.I. for short. You know, those computer brains that are all the rage these days. A big chunk of their bridges are falling apart – like 13% of them. So, they're using A.I. to whip up lighter concrete blocks to build better bridges. Plus, they've got this other cool idea to make highway walls that suck up car noise and some of the bad stuff cars pump out into the air.

The U.S. government is pouring billions into fixing up old bridges, tunnels, and roads, but that's just a drop in the bucket compared to what's really needed. Enter A.I., the new handy tool for engineers to build stronger stuff for less cash. Amir Alavi, a big brain at the University of Pittsburgh, is part of the team working on this. He says using A.I. is a game-changer for saving materials and money.

Now, here's why it's a big deal: making cement is a major pollution problem. It's responsible for at least 8% of the world's carbon emissions. And we're talking about using 30 billion tons of concrete every year globally. So, if A.I. can make concrete production more efficient, it's not just good for our wallets, it's huge for the planet too.

AI to See “Major Second Wave,” NVIDIA CEO Says in Fireside Chat With iliad Group Exec

NVIDIA's big cheese, Jensen Huang, dished on the future of AI during a cozy chat with Aude Durand from iliad Group. He's jazzed about AI's big second wave, especially for Europe's startup scene. Huang's all about each country having its own AI mojo and seeing AI shake things up in different industries, from language to digital biology to manufacturing.

France is getting some high-fives for its AI game, pouring millions into AI research and development. Huang gives a shout-out to NVIDIA's European startup buddies, over 4,000 of them, with 400 in France alone. He's nudging them to beef up their computing power, and that's where Scaleway comes in. They're dishing out cloud credits and flaunting a beast of a supercomputer with NVIDIA's tech.

Scaleway's not just about raw power; they're keeping it real with EU data laws, which is a big deal for businesses in Europe. NVIDIA's Inception program is also in the mix, helping startups with NVIDIA's fancy AI software.

NVIDIA's not just talking the talk; they're walking the walk with some serious supercomputing muscle. They've got this Nabu supercomputer and a new service on Microsoft Azure for cooking up custom AI apps. They're targeting a wide audience, from big names like SAP to Getty Images.

Hypotheses devised by AI could find ‘blind spots’ in research

A bunch of smart folks, including a Nobel Prize winner, got together in Stockholm and chatted about how artificial intelligence (AI) could really shake things up in science. They even talked about giving awards to AI or AI-human teams that make big scientific discoveries. They're aiming for AI systems that can figure out stuff worth a Nobel Prize by 2050.

AI's already helping researchers sift through studies, collect data, crunch numbers, and even start writing papers. The big challenge? Getting AI to come up with smart questions – the kind that lead to big discoveries. This type of work has been super exciting for some researchers.

This idea isn't brand new. Back in the 1980s, a guy named Don Swanson was already using computers to dig up hidden connections in scientific papers. For instance, his program guessed that fish oil might help with Raynaud's disease, a circulatory condition, and it turned out to be right.

Today, AI's helping find new drugs and figure out what genes do. It's looking at networks of data to spot connections we haven't noticed yet. In medicine, AI might help us think up totally new ideas by making "alien" hypotheses – stuff humans wouldn't normally think of. There's this cool study showing AI can predict future discoveries, like new uses for drugs, years before humans do.

This Company Is Building AI For African Languages

Jade Abbott, a computer scientist in Johannesburg, tested ChatGPT in isiZulu and it flopped big time. Turns out, AI's pretty lousy with African languages. Enter Lelapa AI, co-founded by Abbott and Pelonomi Moiloa. They're shaking things up with Vulavula, a new AI that understands and writes in isiZulu, Afrikaans, Sesotho, and English, aiming to add more African tongues.

Vulavula, meaning "speak" in Xitsonga, is a big deal 'cause it can plug into things like ChatGPT, making these tools more useful for folks who speak African languages. Moiloa, the CEO, is all about giving Africans the tools they need to get ahead in the AI world.

These startups, like Lelapa AI, Lesan, and others, are doing the heavy lifting 'cause big tech's not paying enough attention to African languages. Lelapa AI's approach is cool 'cause they're working with linguists and local communities, not just relying on internet data. This way, they make sure their AI is culturally tuned in and useful.

Who is Ilya Sutskever, the man at the center of OpenAI’s leadership shakeup—and why is he so worried about AI superintelligence going rogue? 

Ilya Sutskever, OpenAI's big-brain guy and board member, played a major role in booting CEO Sam Altman, apparently for not being straight with them. Sutskever, a media-shy fella, recently chatted with MIT Technology Review about his focus on keeping AI from going haywire.

Born in Soviet Russia and raised in Jerusalem, Sutskever is a brainy student of Geoffrey Hinton, an AI hotshot. Hinton, worried about AI's risky path, recently left Google, concerned about its potential misuse.

Sutskever, Hinton, and fellow student Alex Krizhevsky created AlexNet in 2012, a game-changer in AI recognizing stuff in pictures. Google, impressed, scooped up the project and hired Sutskever. There, he showed how this tech works not just for images but also words.

Elon Musk, wary of AI dangers and Google's AI clout, convinced Sutskever to jump ship from Google in 2015 to help start OpenAI, aiming to balance the AI power field.

At OpenAI, Sutskever's been a whiz, developing stuff like GPT-2, GPT-3, and DALL-E. ChatGPT, his recent hit, blew up with 100 million users in just months but also raised eyebrows with some of its goofs.

Lately, Sutskever's sweating over AI superintelligence—smarter-than-human AI—which he thinks could pop up in 10 years. He's all about "alignment," making sure AI does what we want, especially when it comes to this super-smart AI. He's warned about the big risks, like human extinction, if AI goes off the rails.

What'd you think of today's edition?


What are you MOST interested in learning about AI?

What stories or resources will be most interesting for you to hear about?
