NATURAL 20
GPT Store | DeepMind AI Rips Atoms | OpenAI Kills Capped Profit Rules | Sam Altman and UAE Funds.
OpenAI postpones its much-anticipated GPT store launch to early 2024, while reshaping its investor profit model and raising the stakes in AI's fast-moving landscape.
Today:
GPT Store | DeepMind AI Rips Atoms | OpenAI Kills Capped Profit Rules | Sam Altman and UAE Funds.
OpenAI's got some news. They're delaying the launch of their GPT store, pushing it to early 2024.
OpenAI's investors were capped at making 100 times their investment. But starting in 2025, that cap will grow 20% each year. Anything over the cap goes to OpenAI's nonprofit arm. This setup's a bit of a head-scratcher, mixing nonprofit vibes with big money moves.
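As a rough sketch of the arithmetic (assuming the 20% annual increase compounds and first applies in 2025 — the reporting doesn't spell either out), here's how the return cap would grow:

```python
# Sketch of OpenAI's investor-return cap growing 20% per year from 2025.
# The compounding assumption is ours; the reporting only says the cap
# "grows 20% each year".

def cap_multiple(year, base=100, growth=0.20, start=2025):
    """Cap on investor returns, as a multiple of the original investment."""
    if year < start:
        return base
    return base * (1 + growth) ** (year - start + 1)

for year in (2024, 2025, 2030):
    print(year, round(cap_multiple(year), 1))  # 100, 120.0, 298.6
```

Under these assumptions the cap nearly triples within six years — which helps explain why the "capped" label is starting to feel nominal.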
Predicting long-term outcomes of kidney transplantation in the era of artificial intelligence
So, doctors have a tough time figuring out if a new kidney will last a long time in a patient. It's tricky because there are a bunch of things that can affect it. But, AI's stepping in to help. The researchers took a look at 407 kidney transplants. They split them into two groups: ones that lasted over 5 years and ones that didn't do so well.
First, they did the usual number-crunching, then they cranked it up with some machine learning magic. They found out that younger donors and a drug called Mycophenolate Mofetil (MMF) seemed to help the new kidney last longer. Also, how well the kidney's working 3 months after the transplant mattered. If patients had to go back to the hospital a lot in the first year, that wasn’t a good sign.
They also saw that some early problems like delayed kidney function and rejection were bad news for long-term success. Now, here's where the AI flexes its muscles: they used 35 AI models and the best one was pretty sharp – almost 90% accurate. It looked at ten key things like high blood pressure and if the patient ever had a blood transfusion.
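The paper doesn't say which of the 35 models came out on top, so here's only an illustrative sketch of the kind of binary classifier involved: a from-scratch logistic regression on synthetic data, with feature names borrowed from a few of the factors the study flags (the data, weights, and the `egfr_3mo` naming are all our invention):

```python
import math, random

# Illustrative only: logistic regression via gradient descent on fake
# data, standing in for whichever of the 35 models actually won.
random.seed(0)
FEATURES = ["donor_age", "mmf", "egfr_3mo", "readmissions_yr1"]

def make_patient():
    x = [random.gauss(0, 1) for _ in FEATURES]
    # Fake rule echoing the study's findings: younger donors (low
    # donor_age) and good 3-month function (high egfr_3mo) favour
    # long graft survival.
    label = 1 if (-x[0] + x[2] + random.gauss(0, 0.5)) > 0 else 0
    return x, label

data = [make_patient() for _ in range(407)]  # same cohort size as the study

def predict(w, b, x):
    z = b + sum(wi * xi for wi, xi in zip(w, x))
    z = max(-60.0, min(60.0, z))  # guard against exp overflow
    return 1 / (1 + math.exp(-z))

w, b, lr = [0.0] * len(FEATURES), 0.0, 0.1
for _ in range(200):                      # gradient-descent epochs
    for x, y in data:
        err = predict(w, b, x) - y
        b -= lr * err
        w = [wi - lr * err * xi for wi, xi in zip(w, x)]

acc = sum((predict(w, b, x) > 0.5) == (y == 1) for x, y in data) / len(data)
print(f"training accuracy: {acc:.2f}")
```

On this toy data the learned weights pick up the planted pattern (negative on donor age, positive on 3-month function), which is the same kind of signal the real models extracted from actual patient records.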
The idea is that this AI model could help doctors make better decisions early on by catching problems before they get worse. They also talked about the big picture – like how many people need kidney transplants and how we haven't really gotten better at making them last longer.
Artificial intelligence in agriculture: a revolution in the making
Farming's a big deal, worth a whopping $5 trillion, and it's key for the world's economy. But with more mouths to feed every year, old-school farming can't keep up. Enter AI, the new game-changer.
AI's helping farmers grow more food without trashing the planet. It's like a high-tech helper, keeping an eye on crops, tackling pests, and even making sense of all that weather and soil data. It's not about replacing farmers but giving them a tech boost to work smarter.
So, what's the big win with AI in farming? For starters, it's all about making smart decisions with data. Think of it as high-tech farming—machines that do their own thing and smart ways to use water and other stuff. This tech even helps farmers outsmart bad weather and bugs.
AI's not just about fancy gadgets, though. It's about precision. We're talking robots that do everything from planting seeds to picking your veggies. And this precision farming thing? It's all about using tech to grow food better, like custom watering and planting plans, and smarter ways to run the whole farm.
Taste-Driven AI Algorithms Enhance Wine Selections
So, these wine apps, Vivino and Hello Vino, are getting super smart. They're using AI to help folks pick the right wine. The cool part? Now they're adding what people think about the taste into the mix. This means the apps are getting really good at guessing what wine you'll like.
The brains behind this are from the University of Copenhagen, Technical University of Denmark, and Caltech. They had people taste different wines and then sort them on a paper by how similar they tasted. This setup let them gather data on what flavors people think are alike.
They combined this taste data with loads of wine labels and reviews from Vivino. Then, bam! They created an algorithm that can tell you, "Hey, if you liked this wine, you'll probably dig this one too."
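As a toy illustration (not the Copenhagen team's actual method), the "liked this, try that" step can be sketched as nearest-neighbour search over flavour vectors; the wines and numbers below are invented:

```python
import math

# Toy recommender: each wine is a flavour vector, and we suggest the
# most similar wine by cosine similarity. The real algorithm fuses
# human taste-sorting data with Vivino labels and reviews.

wines = {
    "Rioja A":  [0.9, 0.2, 0.7],   # [fruit, acidity, tannin]
    "Rioja B":  [0.8, 0.3, 0.8],
    "Riesling": [0.4, 0.9, 0.1],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

def recommend(liked, k=1):
    scores = {name: cosine(wines[liked], vec)
              for name, vec in wines.items() if name != liked}
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("Rioja A"))  # nearest neighbour by taste: ['Rioja B']
```

The interesting part of the research is where those vectors come from — the paper-sorting exercise gives human similarity judgments that pure label-and-review data can't capture.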
And it's not stopping at wine. This method could help recommend beers, coffees, and even personalized food choices. The research team put all their findings out there for free, hoping others will jump in and add more to it. They think this could even help in healthcare, like making meals that taste good and are healthy.
Making an image with generative AI uses as much energy as charging your phone
Every time you generate an AI image, it's like plugging in your phone: researchers from Hugging Face and Carnegie Mellon say making one AI pic eats up as much juice as a full phone charge. Typing out texts with AI, though? Way less power-hungry. A thousand text generations use just 16% of your phone's battery.
These brainiacs also found that the real energy hog isn't just making these giant AI brains; it's using them. They crunched the numbers on different AI tasks like answering questions, sorting pictures, you name it. The biggest energy guzzler? Making images. For example, creating a thousand pics with some beefy AI model is like driving 4.1 miles in a regular car. Text generation is way lighter, like barely moving your car.
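Running the article's own numbers (and assuming a typical ~12 Wh smartphone battery — that figure is ours, not the researchers'), the gap between images and text works out roughly like this:

```python
# Back-of-the-envelope check of the article's figures:
#   1 image  ≈ 1 full phone charge
#   1,000 texts ≈ 16% of a phone charge
# The 12 Wh battery capacity is our assumption (typical smartphone).
PHONE_BATTERY_WH = 12.0

energy_per_image_wh = PHONE_BATTERY_WH
energy_per_text_wh = 0.16 * PHONE_BATTERY_WH / 1000

ratio = energy_per_image_wh / energy_per_text_wh
print(f"one image ≈ {ratio:.0f} text generations")  # ≈ 6250
```

So by the article's own figures, one image costs as much energy as thousands of text generations — which is why image generation tops the study's list.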
These models are sucking up serious energy. And the bigger the model, the bigger the energy bite. There's a push to think twice before using these all-in-one AI giants for everything. Maybe go for smaller, specialized models that don't chomp as much power.
What's wild is the everyday use of AI is racking up more carbon than even training these big models. Think about it: a model like ChatGPT, with millions of daily users, can surpass its training carbon cost in just weeks.
Amazon’s Q has ‘severe hallucinations’ and leaks confidential data in public preview, employees warn
Amazon just rolled out their new AI chatbot, Q, and it's already causing headaches. This thing is spewing out confidential info left and right - stuff like where Amazon keeps its data centers, special deals they got going on, even features that aren't out yet. Employees are freaking out, saying it's like Q's having some major hallucinations.
The situation's so bad that one worker flagged it as a big-time emergency, the kind that gets engineers up in the middle of the night to fix. Meanwhile, Amazon's trying to keep up with the big dogs in tech, pouring billions into AI startups and making a big show of Q at their annual shindig.
But when people started pointing fingers about these security blips, Amazon played it cool. They're like, "No biggie, just the usual employee feedback." Even after the story hit the news, they doubled down, saying Q didn't leak anything important.
Q's supposed to be this beefed-up, super-secure version of those AI chatbots everyone's talking about, like ChatGPT. Amazon's selling it as a smarter, safer choice for businesses worried about privacy. They even told the New York Times it's built to be tighter than your average chatbot.
Why Won’t OpenAI Say What the Q* Algorithm Is?
OpenAI, the big brains behind ChatGPT, got everyone talking last week. Their head honcho, Sam Altman, got the boot, then got his job back amidst a whole lot of drama. But that's just the surface. The real mystery is this Q* (say "Q-star") algorithm. It's like a whiz kid at math, solving problems it's never seen before. Sounds simple, but some think this is a big step towards making AI super smart, able to think and solve new problems on its own.
However, not everyone's buying it. Some folks inside OpenAI think Q* is no big deal, while others are sweating over the potential risks. This debate isn't new in AI. Remember the transformer algorithm? Back in 2017, it was cool but not earth-shattering. Now, it's the backbone of stuff like ChatGPT. It's all about perspective and timing.
In the AI world, the big players with the loudest voices often call the shots. Companies like Meta, Google, OpenAI, Microsoft, and Anthropic dominate the scene. And as AI becomes a money-making machine, these companies are keeping their cards close to the chest, poaching talent from universities and locking down their research.
OpenAI says they're secretive for two reasons: first, to avoid speeding up the arrival of super smart AI, which could be risky for humanity. Second, they want to keep their edge in the competitive tech world. There's a lot of guesswork about Q*, with some wondering if it's related to known techniques like Q-learning or A*. But without more info or a chance for others to check it out, we're all just shooting in the dark about how big of a deal Q* really is.
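Nobody outside OpenAI knows what Q* actually is, but the Q-learning technique it may be named after is textbook material: an agent learns the value of each action by trial and error. A minimal tabular sketch on a toy five-state corridor (everything here is standard Q-learning, not a claim about Q*):

```python
import random

# Tabular Q-learning on a 5-state corridor: positions 0..4, actions
# left (-1) and right (+1), reward 1 for reaching the end at state 4.
random.seed(0)
N_STATES, ACTIONS = 5, [-1, +1]
alpha, gamma, eps = 0.5, 0.9, 0.1       # learning rate, discount, exploration
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for _ in range(500):                     # training episodes
    s = 0
    while s != N_STATES - 1:
        a = random.choice(ACTIONS) if random.random() < eps else \
            max(ACTIONS, key=lambda a: Q[(s, a)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        best_next = 0.0 if s2 == N_STATES - 1 else \
            max(Q[(s2, a2)] for a2 in ACTIONS)
        # Core update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)  # learned policy: always step right toward the goal
```

The "*" in the name has fueled speculation about A*, the classic best-first search algorithm — but again, without OpenAI confirming anything, connecting Q* to either technique is pure guesswork.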
What'd you think of today's edition?
What are you MOST interested in learning about in AI? What stories or resources would be most interesting for you to hear about?