SAM IS BACK! BOARD IS FIRED!
Sam Altman's unexpected return and the introduction of new board members
Today:
SAM IS BACK! BOARD IS FIRED! We now know what ACTUALLY happened at OpenAI
Sam Altman's back in the saddle at OpenAI, and he's got a brand-new crew on the board. Some familiar faces from the old board are sticking around too. Looks like we've got a clearer picture now of who was stirring the pot in the recent shake-up and what their game plan was. There's a heap to unpack, so let's dive in.
The board now includes big names like Bret Taylor and Larry Summers, bringing a mix of tech and economics expertise. Ilya Sutskever, a major brain behind OpenAI's tech, isn't on the board, but hopefully he's still in the game.
Introducing Stable Video Diffusion
Stability AI's got this tool called Stable Video Diffusion Image-to-Video. It's pretty neat – you give it a picture, and it turns that picture into a short video, about 4 seconds long. It's like taking a snapshot and bringing it to life!
They trained this thing to make videos that are 25 frames long and look pretty sharp (576x1024 resolution). It's an upgrade from an earlier version that did 14 frames. They also tweaked another part of it, the f8-decoder, to make sure the video looks smooth and not choppy.
Stability AI both made and paid for this. It's a generative model, meaning it's all about creating new stuff, like videos from images. They suggest checking out their GitHub for more techy details and other cool projects.
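If you want to poke at it yourself, here's a rough sketch of what running it through Hugging Face's diffusers library looks like. Treat it as a starting point, not gospel: the checkpoint name is the one published on the Hugging Face Hub, the input file name is just an example, and you'll need a GPU with a decent chunk of VRAM.

```python
import torch
from diffusers import StableVideoDiffusionPipeline
from diffusers.utils import load_image, export_to_video

# Load the image-to-video pipeline (the "xt" checkpoint is the 25-frame version).
pipe = StableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid-xt",
    torch_dtype=torch.float16,
    variant="fp16",
)
pipe.to("cuda")

# Start from a single still image, resized to the resolution the model was trained on.
image = load_image("snapshot.png").resize((1024, 576))  # example input file

# Generate the frames; decode_chunk_size trades VRAM for decoding speed.
frames = pipe(image, decode_chunk_size=8).frames[0]

# Stitch the frames into a short clip (~4 seconds at 7 fps).
export_to_video(frames, "generated.mp4", fps=7)
```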
ChatGPT suffers widespread outage as company chaos unfolds
ChatGPT's been on the fritz big time. Users have been hitting brick walls with error messages left and right. The brains at OpenAI are on it, saying they've pinpointed the glitch and are hustling to fix it. This ain't their first rodeo with outages, though. Just a bit ago, some cyber troublemakers slammed them with a DDOS attack, basically overwhelming their system with bogus traffic.
Hey folks: We're aware of elevated error rates in ChatGPT and for some API users.
The team is working on a fix. Should be resolved shortly, you can follow along on:
— Logan.GPT (@OfficialLoganK)
11:17 PM • Nov 21, 2023
The trouble started popping up around 2 p.m. Pacific Time, according to Down Detector, which keeps tabs on this kind of thing. OpenAI's own status page fessed up to the problem, saying it's messing with more than just the chatbot, but some API stuff too. They're blaming it on some hiccups with their database replicas.
A company spokesperson pointed to a tweet from one of their team members, basically saying, "Yeah, we know it's busted. Hang tight, we're on it." Turns out, ChatGPT often hits snags, especially when OpenAI rolls out new bells and whistles. Speaking of new, around 3 p.m. Eastern Time they launched a voice feature that lets you talk to ChatGPT.
Anthropic’s Claude 2.1 release shows the competition isn’t rubbernecking the OpenAI disaster
Anthropic, a competitor of OpenAI, is not just sitting around watching OpenAI's current troubles. They've rolled out a new version of their language model, Claude 2.1, which keeps up the heat in the AI race. This new Claude can handle a whopping 200,000 tokens at once - that's like remembering whole books or massive financial reports. They're ahead of OpenAI's last big reveal of a 128,000-token window.
Claude 2.1's got a few key upgrades: it's better at understanding a ton of data at once, it's more on the nose with its answers, and it's got some neat tricks like using web tools if that's the smartest way to answer a question. Like, if it doesn't know the best car to suggest, it can now look it up online or check out a database that's got the info.
One cool thing about Claude 2.1 is it's less likely to make stuff up and more likely to admit when it doesn't know something. But how good that is in real life is still up in the air.
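For the curious, here's roughly what stuffing a huge document into Claude 2.1 looks like with Anthropic's Python SDK. It's just a sketch: the report file is a stand-in, and it assumes your ANTHROPIC_API_KEY is set in the environment.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Claude 2.1 accepts up to ~200K tokens of context, so you can paste in
# something the size of a full financial report or a short book.
long_report = open("annual_report.txt").read()  # placeholder document

completion = client.completions.create(
    model="claude-2.1",
    max_tokens_to_sample=1024,
    prompt=(
        f"{anthropic.HUMAN_PROMPT} Here is a long report:\n\n{long_report}\n\n"
        "Summarize the three biggest risks it mentions, and say you don't know "
        f"if the report doesn't cover them.{anthropic.AI_PROMPT}"
    ),
)

print(completion.completion)
```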
Greg Brockman is still announcing OpenAI products for some reason
Greg Brockman, who used to be a big shot at OpenAI but isn't working there now, is still posting updates about OpenAI stuff. Today, he's talking about a cool new feature for ChatGPT that lets it talk with a human-like voice. This feature is now available for everyone using ChatGPT.
ChatGPT Voice rolled out for all free users. Give it a try — totally changes the ChatGPT experience:
— Greg Brockman (@gdb)
8:51 PM • Nov 21, 2023
It's kinda odd that Brockman is still talking about OpenAI things since he quit last Friday to back up Sam Altman, who was the CEO until the board gave him the boot. Nobody's really sure what's up with Brockman – maybe he's just proud of his old team, or maybe he's got something cooking with OpenAI's board to come back.
Putting that drama aside, the voice feature in ChatGPT is pretty neat. They announced it back in September, and some folks who pay for ChatGPT got to try it a few weeks ago. It works by turning text into speech that sounds like a real person. OpenAI worked with some voice actors to make five different voices. They also use this tool called Whisper to turn spoken words into text.
If you want to try it out, just go to the settings in the ChatGPT app on your phone and hit the "headphones" icon. Voila! ChatGPT will start talking to you.
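Under the hood it's the same text-to-speech and Whisper models OpenAI exposes through its API, so you can wire up a similar round trip yourself. A minimal sketch, assuming an OPENAI_API_KEY in the environment; the voice name ("alloy" is one of the API's preset voices) and the file paths are just examples.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Text -> speech: generate spoken audio from a reply.
speech = client.audio.speech.create(
    model="tts-1",
    voice="alloy",
    input="Hey there! This is what ChatGPT Voice sounds like.",
)
speech.stream_to_file("reply.mp3")

# Speech -> text: Whisper turns spoken audio back into text.
with open("reply.mp3", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio_file,
    )
print(transcript.text)
```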
Nvidia’s Sales Surge, With No End In Sight for AI Boom
Nvidia's Q3 in fiscal 2024 was off the charts! They raked in $18.12 billion in revenue, a whopping 206% jump from last year and 34% more than the previous quarter. Their earnings per share shot up to $3.71 from just $0.27 last year. The big moneymaker? Their Data Center biz, pulling in $14.51 billion.
They've got their eyes on about $20 billion in revenue for Q4, with solid margins expected. Their financial health is rock solid, with over $18 billion in cash and assets worth $54 billion.
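If you want to sanity-check those growth figures, the arithmetic is simple enough to do on a napkin. A quick back-of-the-envelope in Python, using only the numbers quoted above:

```python
# Back out the prior periods from the quoted growth rates.
q3_fy24_revenue = 18.12   # billions of dollars
yoy_growth = 2.06         # +206% year over year
qoq_growth = 0.34         # +34% quarter over quarter

year_ago_revenue = q3_fy24_revenue / (1 + yoy_growth)
prior_quarter_revenue = q3_fy24_revenue / (1 + qoq_growth)
print(f"Implied year-ago quarter: ${year_ago_revenue:.2f}B")       # ~$5.92B
print(f"Implied prior quarter:    ${prior_quarter_revenue:.2f}B")  # ~$13.52B

# Data Center's share of the total, and the EPS multiple.
print(f"Data Center share of revenue: {14.51 / q3_fy24_revenue:.0%}")  # ~80%
print(f"EPS growth: {3.71 / 0.27:.1f}x year over year")                # ~13.7x
```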
Their CEO, Jensen Huang, is pumped about the shift to accelerated computing and generative AI. It's like they've hit the jackpot with their GPUs, CPUs, and AI services, and they're not slowing down anytime soon.
Off/Script launches an app to create and buy AI-designed fashion
Jonathan Brun and Justine Massicotte are shaking things up with their new app, Off/Script. This thing lets anyone, pro or no-pro, design cool stuff like clothes, bags, furniture, you name it. People vote on their favorite designs, and if your idea gets 100 thumbs up, Off/Script might just make it real and sell it. They're using some fancy AI, Stable Diffusion, to help folks whip up these designs.
You don't need to be a design whiz, either. There's a bunch of ready-to-tweak templates for all sorts of gear. They've even got connections with over a thousand manufacturers to actually make this stuff. And if you're the brains behind a winning design, you pocket 20% of the sales and a sweet $500 upfront.
Off/Script's not just about making money, though. They're big on supporting local economies and letting creators sell in their own backyards. They've already got 30 products up for grabs, ranging from everyday wear to fancy items, all thanks to some talented designers and even a famous MMA coach.
MediaTek Joins Qualcomm in Bringing ChatGPT-Like AI to More Affordable Phones
MediaTek, a big chip maker, just rolled out a new chip called the Dimensity 8300. This bad boy is set to bring fancy AI features to more wallet-friendly Android phones. It's like a follow-up to their high-end Dimensity 9300, but for the regular Joe's phone.
MediaTek's keeping mum on exactly what this new chip can do in terms of AI. Qualcomm's been more chatty, bragging about smarter assistant suggestions and some cool photo and video tricks with their chips. But here's the kicker: both MediaTek and Qualcomm are big on doing this AI stuff right on the phone itself, instead of relying on some far-off data center. That means quicker responses and better privacy since it's not broadcasting your data all over.
As for the nitty-gritty, MediaTek's new chip is a step down from their Dimensity 9300, but it's still no slouch. It's got the chops to handle some heavy-duty AI tasks, just not as fast as the 9300. On the plus side, this chip also promises better battery life and smoother connections with stuff like earbuds and gaming gear.
Tom Siebel’s C3.ai taps Amazon AWS to scale its business
Tom Siebel's company, C3.ai, is shaking things up by partnering with Amazon's AWS to sell its AI software. This big move means they're now offering their tech, usually sold to big shots like corporations and the government, to smaller fish like startups and individual workers. Their new product, C3 Generative AI: AWS Marketplace Edition, is hitting the shelves next week with a 14-day free trial, then it's $6,000 a month for 10,000 queries.
Siebel's making sure this isn't your average AI tool. It's loaded with neat features like data source citations, tight data access control, and it won't make up stuff when it's stumped. It's user-friendly too, with a search and chat interface to dig into business data. In a demo, Siebel showed off how it can spot issues like bad suppliers or offices where folks are quitting a lot.
Teaming up with Amazon means C3.ai can reach a ton of people, just like Snowflake did. They used to add just a few big customers at a time, but now they're aiming for thousands. Siebel's also keen on flexibility, saying their tool works with various AI models from OpenAI, Anthropic, and others. If one AI model goes belly up, they can switch to another without skipping a beat.
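That "switch to another without skipping a beat" idea is basically a provider-agnostic wrapper around several LLM backends. C3.ai hasn't published how it does this, so the sketch below is just the general pattern: the function names, model choices, and fallback order are made up for illustration, and it assumes API keys for both providers are set in the environment.

```python
import anthropic
from openai import OpenAI

openai_client = OpenAI()                   # assumes OPENAI_API_KEY is set
anthropic_client = anthropic.Anthropic()   # assumes ANTHROPIC_API_KEY is set


def ask_openai(question: str) -> str:
    response = openai_client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content


def ask_anthropic(question: str) -> str:
    completion = anthropic_client.completions.create(
        model="claude-2.1",
        max_tokens_to_sample=512,
        prompt=f"{anthropic.HUMAN_PROMPT} {question}{anthropic.AI_PROMPT}",
    )
    return completion.completion


def ask_with_fallback(question: str) -> str:
    """Try one provider; if it's down or erroring, fall back to the next."""
    for backend in (ask_openai, ask_anthropic):
        try:
            return backend(question)
        except Exception:
            continue  # provider outage or error: move on to the next one
    raise RuntimeError("All model providers failed")


print(ask_with_fallback("Which suppliers show the most late deliveries this quarter?"))
```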
NVIDIA Collaborates With Genentech to Accelerate Drug Discovery Using Generative AI
NVIDIA, the big shots in tech, are teaming up with Genentech, a major player in biotech. They're aiming to make finding new medicines faster and smarter. Genentech's bringing its brainy algorithms to the table, while NVIDIA's got this supercomputer setup in the cloud to speed things up.
They're gonna use this thing called NVIDIA BioNeMo. It's like a custom toolkit for biotech folks to build their own AI models for drug discovery. BioNeMo's now up and running, making it easier to train AI to find new drugs.
The focus at first is to get Genentech's AI smarter and faster in their "lab in a loop" system. This is all about understanding the complex stuff in our bodies to make better drugs quicker. Genentech's got a track record of being ahead of the game in using tech for medicine, and they're betting big on AI to be the next game-changer.
Finding new drugs has always been a tough and pricey business. AI could really shake things up by finding drug targets and interactions more quickly. Genentech's already got some runs on the board using AI to get ahead in the drug game.
Screenshots show xAI’s chatbot Grok on X’s web app
Elon Musk's getting ready to spice up his X app with a new AI chatbot named Grok, part of the priciest subscription level, X Premium+. This isn't your regular chatbot; it's got sass, modeled after "The Hitchhiker's Guide to the Galaxy," and won't shy away from the edgy questions other AIs dodge. Plus, it's got web browsing skills for up-to-the-minute info.
The @xai Grok AI assistant will be provided as part of 𝕏 Premium+, so I recommend signing up for that.
Just $16/month via web.
— Elon Musk (@elonmusk)
4:24 PM • Nov 4, 2023
Another image that shows how the Premium+ subscribers will be able to chat with @grok!
— Nima Owji (@nima_owji)
10:56 PM • Nov 20, 2023
Grok's already being tested and will be available to all the big spenders on X Premium+. Musk's keeping mum on when Grok's hitting the streets for everyone, but a peek at the X app code shows they're already fitting Grok in there. The chatbot will be easy to spot on the app with its standout icon, and only the Premium+ folks will get to chat with it.
The World Press Photo Contest’s updated AI rules help define what a modern photograph is
The World Press Photo Contest just changed their rules to kick out AI-made images, right after they said they were cool with them in their Open Format category. This flip-flop happened because real-deal photojournalists got real mad, saying AI snaps have no place in a contest about capturing true-life moments. The contest folks said "thanks but no thanks" to AI after hearing from the crowd, making it clear that AI-generated stuff is a no-go in all their categories, including Singles, Stories, and Long-Term Projects.
They also got more specific about what counts as cheating with AI in regular photos. Simple tweaks like fixing noise, color, and contrast with AI are okay, but using fancy AI tricks to add new stuff or make the image super sharp is a big no-no.
The whole idea is to keep photos legit and not let AI mess with what's real. They're working with big shots in photojournalism to set up rules that keep photos honest and don't fool people.
Students pitch transformative ideas in generative AI at MIT Ignite competition
MIT threw open its doors to its brainy students and postdocs for their first big Generative AI Entrepreneurship Competition, called MIT Ignite. More than 100 teams jumped in with startup ideas using fancy AI tech for stuff like health, climate, education, and work.
The event was part of a bigger push on generative AI at MIT by President Sally Kornbluth. Students and researchers have been diving into AI, finding new uses, ironing out kinks, and making sure it's good for society. The MIT-IBM Watson AI Lab and the Martin Trust Center for MIT Entrepreneurship ran the show, with help from MIT's School of Engineering and the Sloan School of Management.
Post-competition, all 12 teams got invited to a networking gig to turn their prototypes into reality. They also got a chance to develop their ideas further with the Trust Center and the MIT-IBM Watson AI Lab.
AI is supercharging child surveillance and the school-to-prison pipeline
The White House dropped the ball on tackling how AI is messing with the rights of Black communities in the U.S. Public schools are using some high-tech surveillance gear like facial recognition and even drones. Over 88% of schools keep tabs on kids’ devices, and a third use facial recognition. Bad news is, these gizmos are hitting Black and minority kids harder, and it's not cool.
There's this messed-up story from Florida where a sheriff's office used AI to tag kids they thought might turn to crime. They checked out private stuff like grades and even child abuse histories. In Philly, they're planning to use drones to watch over schools, and in places like New York and L.A., they're using tech to make big lists of supposed gang members, mostly targeting Black and Hispanic kids.
The solution? Stop using federal cash to buy this tech for schools. New York State already told schools to cut out facial recognition. The feds have the power to stop schools from using tech that stomps on kids' rights.
Australia to force social media companies to crack down on ‘emerging harms’ of AI deep fakes and hate speech
Australia's cracking down on the bad stuff happening on social media, like fake videos and hate speech. The government's updating their online safety rules, putting pressure on big tech and social media to get their act together. They're worried about AI tools being used to make harmful content, like fake explicit images or messages that spread hate.
The main player here, Communications Minister Michelle Rowland, is saying it's getting tough to tell real from fake online, and while AI's got some cool uses, it's also being used in nasty ways. She's planning to review online safety laws and tweak the guidelines for tech companies.
These new rules will make companies watch their algorithms, so they don't end up showing users harmful content. They're especially concerned about stuff like racism and other forms of hate speech getting more airtime. The eSafety Commissioner's gonna make sure companies report on their progress.
As A.I.-Controlled Killer Drones Become Reality, Nations Debate Limits
We're looking at a future that feels like a sci-fi movie, with killer robots flying in for the kill without needing a human to say "go." The big dogs like the U.S., China, and others are pushing hard on this tech, making drones that think for themselves. It's got a bunch of countries spooked, so they're hollering for new rules at the U.N. to keep a leash on these lethal autonomous weapons.
But here's the rub: the U.S., Russia, and some other heavy hitters don't want new laws. They're cool with the old rules, and China's playing it sly, pushing for rules so tight they won't make a dent. This whole mess is stuck in a big ol' bureaucratic tangle with not much hope for real change soon.
Meanwhile, the big debate's heating up. The U.S. is like, "We're already playing nice with AI," pointing to their own rules and asking others to follow suit. But smaller countries are sweating bullets, worried these death-dealing drones will hit the battlefield with no playbook.
Michigan deploys AI to detect guns at state Capitol
Michigan's rolling out some high-tech AI at their state Capitol to spot guns. With things getting rough politically, they're the first state to use this tech by a company called ZeroEyes. The big cheese at the Michigan State Capitol Commission, Rob Blackshaw, is saying it's key to stay sharp and keep the place safe.
This AI thing will work with the cameras they already got, scanning a ton of images super fast. If it spots a gun, it lets the cops and security know right away. ZeroEyes says they're the only ones with this kind of AI gun-spotting tech that's got the thumbs up from the Department of Homeland Security. They're really aiming to make the Capitol a safe spot, no matter what's happening outside.
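ZeroEyes hasn't published the details of its detector, but the general shape of this kind of system is a standard object-detection loop over camera frames. A rough sketch, assuming a custom-trained weapon-detection model; the weights file, camera URL, class name, and confidence threshold here are all hypothetical, not ZeroEyes' actual setup.

```python
import cv2
from ultralytics import YOLO

# Hypothetical custom-trained weapon detector; not ZeroEyes' actual model.
model = YOLO("weapon-detector.pt")

# Pull frames from an existing CCTV feed (example URL).
capture = cv2.VideoCapture("rtsp://capitol-camera-01/stream")

while True:
    ok, frame = capture.read()
    if not ok:
        break

    # Run detection on the frame; each box carries a class id and a confidence score.
    results = model(frame, verbose=False)
    for box in results[0].boxes:
        label = model.names[int(box.cls)]
        confidence = float(box.conf)
        if label == "gun" and confidence > 0.8:
            # In a real deployment this is where security and police get alerted,
            # ideally after a human reviews the flagged frame.
            print(f"Possible firearm detected (confidence {confidence:.2f})")
```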
Why you should care about this week’s AI drama
Sam Altman, the big cheese at OpenAI, got the boot, but then there was talk about him starting a new gig and even Microsoft wanting to snag him. It's been a wild ride with lots of twists and turns, and it's not just juicy gossip – it's big news for folks with money in the game, especially if you've got stock in Microsoft.
Microsoft, which had already buddied up with OpenAI, saw its stock take a hit when rumors flew about Altman leaving. But now, they're bringing Altman and another top dog from OpenAI, Greg Brockman, directly into their team. This is a big win for Microsoft, putting them ahead in the AI race and maybe leaving rivals like Alphabet and Meta playing catch-up.
The real meat of the story, though, is what all this means for the future of AI. There's been a lot of chatter about how fast to push this tech without risking some kind of sci-fi nightmare. Big names in the AI world, including Altman himself, have been flagging these dangers.
Now, with Altman at a company that's all about making dough for shareholders, there's a big question mark: Will he keep sounding the alarm on AI's risks, or will the chase for profits win out?
What'd you think of today's edition?
What are you MOST interested in learning about with AI? What stories or resources would be most interesting for you to hear about?