NATURAL 20

Q* Did OpenAI Achieve AGI? OpenAI Researchers Warn Board of Q-Star

OpenAI's quest for Artificial General Intelligence is pushing the boundaries of possibility and shaping the future of global industries, healthcare, and digital landscapes.

Today:

Q* Did OpenAI Achieve AGI? OpenAI Researchers Warn Board of Q-Star | Caused Sam Altman to be Fired?

OpenAI's got some wild stuff brewing with their AI, and it's freaking some folks out. Recently, they've been pushing the envelope with something they're calling "Q-star." This thing might be a massive leap toward what they call "Artificial General Intelligence" (AGI) – basically, a smart-as-humans AI.

Everyone's trying to guess what Q-star really is. Some think it's about Q-learning, a way AI learns from its environment. Others think it's about letting AI figure stuff out on its own. Either way, it's a big step from AI needing our help to it doing its own thing.
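
For a feel of what Q-learning actually is, here's a minimal tabular sketch on a toy problem. Everything here is illustrative (a made-up 5-cell corridor, nothing to do with whatever OpenAI is actually doing); "Q*" in textbooks just denotes the optimal action-value function this kind of loop tries to learn.

```python
import numpy as np

# Toy environment: a 1-D corridor of 5 cells. Reaching the rightmost
# cell gives reward 1 and ends the episode. Actions: 0 = left, 1 = right.
rng = np.random.default_rng(0)
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.5, 0.9

for _ in range(500):
    s = 0
    while s != n_states - 1:
        # Behave uniformly at random; Q-learning is off-policy, so it
        # still learns the optimal Q-values from exploratory behavior.
        a = int(rng.integers(2))
        s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        reward = 1.0 if s2 == n_states - 1 else 0.0
        # Core update: nudge Q(s, a) toward r + gamma * max_a' Q(s', a')
        Q[s, a] += alpha * (reward + gamma * Q[s2].max() - Q[s, a])
        s = s2

# The learned greedy policy walks right from every non-terminal state.
assert all(int(Q[s].argmax()) == 1 for s in range(n_states - 1))
```

The "learns from its environment" part is that update line: the agent never sees the rules, only rewards, and the value of the goal propagates backward through the table as it explores.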

Formula One trials AI to tackle track limits breaches


Formula One's big shots are testing out computer vision AI at the Abu Dhabi Grand Prix. This tech is all about catching drivers who cheat by going off the track. It counts how many parts of their car cross the track's edge.

This is a big deal 'cause it's gonna make things easier for the race officials. They've been swamped with checking a ton of possible cheats – like over a thousand at one race! And sometimes, they even missed some rule-breakers.

The goal? To cut down on the number of potential cheats the officials have to look at, and only pass the serious ones for more checking. The head at FIA, Tim Malyon, is all about using new tech to get ahead. He thinks that even though humans are still better in some ways, eventually, these smart systems will take over.
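
The triage idea above can be sketched in a few lines. This is purely hypothetical logic (the FIA hasn't published its rules or thresholds); it just shows how counting car parts past the edge can filter thousands of candidate frames down to the few worth a human's time. The track edge is simplified to the line y = 0.

```python
# Hypothetical screening step: a vision model (not shown) reports the
# four wheel positions per frame; we keep only likely breaches.

def classify_frame(wheel_positions, edge_y=0.0):
    """Return 'breach', 'review', or 'ok' for one frame.

    wheel_positions: list of (x, y) tuples; y < edge_y means off-track.
    """
    off = sum(1 for _, y in wheel_positions if y < edge_y)
    if off == 4:
        return "breach"   # whole car outside: escalate to officials
    if off >= 2:
        return "review"   # borderline: keep for a second look
    return "ok"           # discard, saving the stewards' time

print(classify_frame([(0, -1), (1, -1), (0, -2), (1, -2)]))  # breach
```

Humans still make the final call, as Malyon says; the model only shrinks the pile.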

Bill Gates teases the possibility of a 3-day work week where ‘machines can make all the food and stuff’


Bill Gates is jazzed about a three-day work week, all thanks to AI. He's thinking, with robots doing the heavy lifting, we won't have to grind as much. On Trevor Noah's podcast, he said AI's making the boring stuff disappear, which could mean less work time for us.

But not everyone's singing the same tune. Jamie Dimon from JPMorgan's betting on a 3.5-day week and a longer life, thanks to tech. Elon Musk, the head at Tesla and X, even thinks AI might make working optional, offering a "universal high income". He told the UK's PM, Rishi Sunak, we could work just for kicks.

On the flip side, Goldman Sachs is worried AI could snatch up 300 million jobs. But IBM's big guy, Arvind Krishna, says chill out – it's like when the internet popped up and suddenly we needed web designers. Carl Eschenbach from Workday and Suumit Shah from Dukaan think if AI does the work, we'll need fewer people on the payroll. Shah even canned 90% of his crew for AI.

Researchers develop AI model that uses satellite images to detect plastic in oceans

A bunch of smart folks from Wageningen University and EPFL cooked up a new AI that's really good at spotting plastic trash in the ocean using satellite pics. Normally, it's tough to see this junk, especially when clouds or haze mess things up. But this new tech, it's on another level.

We're talking about a huge amount of plastic getting dumped in the oceans, and it's not slowing down. This trash ends up floating around, mixing with stuff like driftwood and seaweed. The AI these researchers made can sift through tons of satellite images and pinpoint where this garbage is. It's smart enough to learn from examples and get better at finding the trash.

What's cool is that this AI isn't just good when the weather's clear. It can spot the junk even when it's cloudy or hazy, which is when a lot of this trash ends up in the ocean, like after big storms. Remember the 2019 Durban floods? Tons of litter got washed out to sea then.

The AI doesn't just find the trash; it can also figure out which way it's drifting, thanks to images taken just minutes apart. This is big for keeping tabs on ocean trash and maybe even cleaning it up more efficiently.
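
The drift-from-paired-frames idea can be sketched simply: find the pixel shift that best aligns two frames taken minutes apart, and divide by the time gap to get a velocity. This is a brute-force toy on synthetic data, not the researchers' actual method (which works on real multispectral imagery), but the principle is the same.

```python
import numpy as np

def estimate_shift(frame_a, frame_b, max_shift=5):
    """Find the (dy, dx) pixel shift of frame_b relative to frame_a
    by exhaustively testing small shifts and minimizing squared error."""
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(frame_b, -dy, axis=0), -dx, axis=1)
            err = float(np.sum((frame_a - shifted) ** 2))
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

# Synthetic test: a bright debris "patch" drifts 2 px down, 1 px right
# between two frames captured minutes apart.
a = np.zeros((32, 32)); a[10:14, 10:14] = 1.0
b = np.roll(np.roll(a, 2, axis=0), 1, axis=1)
dy, dx = estimate_shift(a, b)
print(dy, dx)  # prints "2 1"
```

Shift divided by the capture interval (and scaled by the satellite's ground resolution) gives a drift velocity, which is what makes cleanup routing possible.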

AI model predicts lung cancer risk in never-smokers using routine chest X-rays

There's this new AI tool, kinda like a super-smart computer program, that can look at regular chest X-rays – yeah, the kind you get when you're feeling under the weather – and figure out if someone who's never really smoked is at risk for lung cancer. This is a big deal because lung cancer is a major killer, and a chunk of folks who get it never even touched a cigarette.

The team behind this study, led by a med student named Anika S. Walia, is saying, "Hey, we've got a lot of folks getting lung cancer who've never smoked, and our current screening guidelines are missing them." Their AI tool, called "CXR-Lung-Risk," learned to spot trouble by looking at a boatload of chest X-rays from both smokers and non-smokers.

They tested this out and found that about 28% of folks who never smoked were at high risk, and a good number of them actually ended up with lung cancer. This AI thing could be a game-changer for catching lung cancer early in people who don't smoke, just by using the X-rays that are already in their medical records.

ChatGPT generates fake data set to support scientific hypothesis


Some brainy folks used ChatGPT's GPT-4 to cook up a bunch of fake medical data. They pretended to compare two eye surgeries, claiming one was better, but it was all made up. They did this to show how easy it is to fake data that looks real. This has got a lot of smart people in the science world worried because it's getting tough to tell what's real and what's AI-made-up. This could lead to a lot of bogus science.

In detail, they focused on an eye problem called keratoconus. They told the AI to make up stuff showing that one treatment, DALK, was better than another, PK. But real studies show both treatments are pretty much the same. Some experts checked out this fake data and found some fishy things, like mismatched names and sexes, and weird patterns in ages and test scores.
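
Checks like the ones the experts ran can be automated. Here's a toy sketch of two of them: flagging records whose recorded sex doesn't match the name, and looking for suspicious clustering in the last digits of ages (real cohorts tend to have roughly uniform last digits). The records and the name-to-sex lookup below are entirely hypothetical.

```python
# Hypothetical lookup and records, for illustration only.
NAME_SEX = {"maria": "F", "giulia": "F", "marco": "M", "luca": "M"}

records = [
    {"name": "Maria",  "sex": "M", "age": 38},   # sex mismatch
    {"name": "Luca",   "sex": "M", "age": 48},
    {"name": "Marco",  "sex": "M", "age": 28},
    {"name": "Giulia", "sex": "F", "age": 58},
]

def flag_sex_mismatches(rows):
    """Names whose recorded sex contradicts the lookup table."""
    return [r["name"] for r in rows
            if NAME_SEX.get(r["name"].lower()) not in (None, r["sex"])]

def age_last_digit_counts(rows):
    """Distribution of age last digits; heavy clustering is a red flag."""
    counts = {}
    for r in rows:
        d = r["age"] % 10
        counts[d] = counts.get(d, 0) + 1
    return counts

print(flag_sex_mismatches(records))    # ['Maria']
print(age_last_digit_counts(records))  # {8: 4} -- every age ends in 8
```

No single check proves fabrication, but several weak signals together were what tipped off the reviewers here.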

The big worry is that this fake data looks legit at first glance. It's like a magic trick in the science world. Journals and peer reviewers might not catch these fakes unless they really dig deep. So, there's a race to create tools to spot these AI-made fibs, but the AI is getting better at fooling us. It's like a high-tech game of cat and mouse.

Michigan deploys AI to detect guns at state Capitol

Michigan's rolling out some high-tech AI at their state Capitol to spot guns. With things getting rough politically, they're the first state to use this tech by a company called ZeroEyes. The big cheese at the Michigan State Capitol Commission, Rob Blackshaw, is saying it's key to stay sharp and keep the place safe. 

This AI thing will work with the cameras they already got, scanning a ton of images super fast. If it spots a gun, it lets the cops and security know right away. ZeroEyes says they're the only ones with this kind of AI gun-spotting tech that's got the thumbs up from the Department of Homeland Security. They're really aiming to make the Capitol a safe spot, no matter what's happening outside.

Equivalence Between Policy Gradients and Soft Q-Learning

This paper's all about comparing two big-shot methods in the world of teaching computers to learn by themselves, called "policy gradient methods" and "soft Q-learning." These are ways to get computers to make decisions without a human telling them what to do each step of the way. The cool part? The researchers found out that these two methods are pretty much two sides of the same coin when you throw in this thing called "entropy regularization" — it's like a fancy way of making sure the computer doesn't get stuck in a rut and keeps trying new things.

They start by breaking down a simple problem, the "bandit setting," to show how these methods work and then get into more complex scenarios. They also tried out their ideas with some experiments to see if their theory holds up in practice. Turns out, it does, which is pretty neat because it means we can mix and match these methods in different ways to make computers learn better.
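
The bandit case the paper starts from can be demonstrated in a few lines. Under entropy regularization at temperature tau, the optimal policy over arms with rewards r is softmax(r / tau), and both an entropy-regularized policy gradient and soft Q-learning land on it. The rewards, temperature, and step sizes below are made-up numbers; this is a sketch of the equivalence, not the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(0)
r = np.array([1.0, 2.0, 0.5])    # expected reward of each arm
tau = 0.5                        # entropy-regularization temperature

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

target = softmax(r / tau)        # closed-form optimum for this bandit

# 1) Entropy-regularized policy gradient: ascend
#    J(theta) = sum_a pi_a * r_a + tau * H(pi), with pi = softmax(theta).
theta = np.zeros(3)
for _ in range(5000):
    pi = softmax(theta)
    g = r - tau * np.log(pi)             # per-arm "soft" payoff
    theta += 0.1 * pi * (g - pi @ g)     # exact gradient of J w.r.t. theta
pg_policy = softmax(theta)

# 2) Soft Q-learning: estimate Q by running averages of sampled rewards,
#    acting with the soft-greedy policy softmax(Q / tau).
Q = np.zeros(3)
for _ in range(3000):
    a = rng.choice(3, p=softmax(Q / tau))
    Q[a] += 0.1 * (r[a] - Q[a])
sq_policy = softmax(Q / tau)

# Both methods converge to the same softmax(r / tau) policy.
assert np.allclose(pg_policy, target, atol=1e-2)
assert np.allclose(sq_policy, target, atol=5e-2)
```

The entropy bonus is what keeps both policies stochastic (never collapsing onto one arm), and it's exactly what makes the two updates fixed points of the same objective.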

What'd you think of today's edition?


What are you MOST interested in learning about AI?

What stories or resources will be most interesting for you to hear about?
