Global ChatGPT Credentials Breach

Crucial Action Steps to Safeguard Your ChatGPT Account from Recent Massive Data Breach Affecting Over 100,000 Users Worldwide

Learn AI in 5 Minutes a Day

We'll teach you how to save time and earn more with AI. Join 90,000+ free daily readers for trending tools, productivity-boosting prompts, the latest news, and more.

Today:

Massive Leak Of ChatGPT Credentials: Over 100,000 Accounts Affected

Credentials for more than 100,000 OpenAI ChatGPT accounts have spilled onto the dark web like a clumsy waiter dropping a tray of drinks, all within a single year, from June 2022 to May 2023.

Group-IB discovered this mother lode of stolen goodies hidden inside info-stealer logs that are now up for grabs in the internet's shadiest corners.

The victim pool spans the globe, with India topping the list at a whopping 12,632 compromised accounts, likely because its users have been really cozying up to ChatGPT. Other countries hit hard include Pakistan, Brazil, Vietnam, Egypt, and, let's not forget, the U.S.A. Looks like this AI's fan club reaches pretty far and wide.

So, who's to blame? Info-stealer malware, a cybercriminal's best friend. These baddies snatch passwords, credit card details, and other important stuff from your browsers and crypto wallets, and the dark web is the local pawn shop where they trade their stolen goods. Despite law enforcement's best efforts, these transactions slip through its fingers like a greased pig.

Dmitry Shestakov, head of threat intelligence at Group-IB, notes that businesses are incorporating ChatGPT into their operations more and more. That's great for progress but brings its share of dangers. Dmitry's advice? Protect your accounts with two-factor authentication (2FA), which is kind of like needing both a key and a secret handshake to get in.
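
If you're curious what that "secret handshake" looks like in practice, here's a minimal sketch of a rotating one-time code using the open-source pyotp library. It has nothing to do with OpenAI's actual login system; it just shows why a stolen password alone isn't enough once a second factor is in play.

```python
# Illustrative sketch only: a generic TOTP-style second factor using pyotp.
# This is NOT OpenAI's 2FA implementation, just the "key plus handshake" idea.
import pyotp

# Each account gets its own secret, stored server-side and in the user's
# authenticator app (usually shared once via a QR code).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

print("Current one-time code:", totp.now())

# At login, the password alone is not enough; the rotating code must match too.
user_code = input("Enter the 6-digit code from your authenticator app: ")
print("Access granted" if totp.verify(user_code) else "Access denied")
```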

RoboCat: A self-improving robotic agent

Move over, Schrödinger, we've got a new kitty on the block: RoboCat. Developed by Google DeepMind, it's not just about handling a single job. This cool cat can juggle multiple tasks with different robot arms, and what's more, it learns on the job!

Most robots are like workers with a single job description. Not RoboCat, though. This is a general-purpose robot with a quick mind and a keen eye. And we're not talking about a slow learner, either; this cat's got a spring-loaded learning curve.

Building on the Gato model, RoboCat learns from a potluck of data, be it images, language, or actions. It's a blend of tried-and-tested tasks and fresh challenges. Give it a task, and it'll whip up a spin-off agent that can practice and perfect the job.

RoboCat ain't just good for a day's work. It goes beyond the call of duty, spending extra hours perfecting its tasks, and as it trains, its data potluck grows, making it even more capable. In one test, it was given a robotic arm as complicated as a Rubik's cube: an arm with a three-fingered gripper and twice as many controls as it was used to. After just 1,000 human-guided demos, our feline friend was nailing gear pick-ups 86% of the time. Not bad for a day's work, eh?
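
For the tinkerers, here's a toy Python sketch of that self-improvement loop as publicly described: fine-tune on human demos, spin off a specialist, let it practice, and fold the new data back into the generalist. The Agent class and every method on it are stand-ins invented for illustration, not DeepMind's actual training code.

```python
# A toy sketch of RoboCat's described self-improvement cycle.
# Everything here is a placeholder invented for illustration.
from dataclasses import dataclass, field


@dataclass
class Agent:
    """Placeholder for a Gato-style generalist policy."""
    dataset: list = field(default_factory=list)

    def finetune(self, demos, task):
        # Clone the generalist and specialize it on the new task's demos.
        return Agent(dataset=self.dataset + demos)

    def practice(self, task, num_episodes):
        # In reality the specialist runs on the arm and records trajectories.
        return [f"{task}-episode-{i}" for i in range(num_episodes)]

    def train(self, dataset):
        # Retrain the generalist on the enlarged data "potluck".
        return Agent(dataset=list(dataset))


def self_improve(agent, task, human_demos, rounds=2):
    data = list(human_demos)                     # e.g. ~1,000 human-guided demos
    for _ in range(rounds):
        specialist = agent.finetune(data, task)  # 1. spin off a task specialist
        data += specialist.practice(task, 100)   # 2. collect self-generated practice data
        agent = agent.train(data)                # 3. fold it back so every task benefits
    return agent
```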

Parrot, an AI-powered transcription platform that turns speech into text, raises $11M Series A

Founded by lawyer Eric Baum and a few tech wizards, this clever bird just pocketed $11 million in Series A funding. They've spotted a hitch in the giddy-up when it comes to getting down what's said in legal and insurance proceedings, so they've taught their Parrot to do the job quick as a hiccup.

Parrot wants to help you gather and understand information faster, like a dog on a bone. Lawyers used to wait a country minute for court transcripts the old way, but with Parrot they can get a quick and accurate rough draft. Even better, it's all stored in that mystical thing called the cloud, safe as houses.
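
Parrot's own tooling is proprietary, but the basic workflow it sells, audio in and a searchable rough draft out, can be sketched with the open-source Whisper model. This is purely illustrative, assumes the openai-whisper package, and the file name is made up; it is not Parrot's API.

```python
# Illustrative only: a generic speech-to-text rough draft using open-source
# Whisper, standing in for the workflow Parrot offers. File name is made up.
import whisper

model = whisper.load_model("base")                     # small model, runs on CPU
result = model.transcribe("deposition_recording.mp3")  # returns a dict with "text"

# The rough draft a lawyer could search right away instead of waiting
# for the official transcript.
print(result["text"])
```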

So, what's the long and short of it? Parrot is trying to make things easier for the suits. One satisfied customer likened it to switching from an old dumb phone to a smartphone. And if your business needs spoken words written down, you might just find Parrot's your new best friend.

Harness releases generative AI assistant to help increase developer efficiency

Harness has rolled out a new tool: AIDA, the AI Development Assistant. It ain't just a code generator; it's a Swiss Army knife for developers. Harness, five years in the business, has always been about making developers' lives a smidge easier, and the launch of AIDA is another feather in its cap. Harness CEO Jyoti Bansal believes AIDA will do more than just pump out code: he's betting it will boost productivity by a whopping 30% to 50% by being an all-rounder across the software development lifecycle.

The way AIDA does this ain't rocket science. First off, it's got your back when things go sideways. You know when you make a change and suddenly everything's up in flames? Well, AIDA can help figure out what broke and how to fix it. You still have the final say, but it's nice to have an AI sidekick for the detective work.

Next, it's got a nose for security hiccups. It sniffs out issues and suggests fixes, with you giving the green light. Lastly, it's got an eye on your cloud costs and can suggest savings in plain English.
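
AIDA itself is closed source, so here's only a hedged sketch of the general pattern described above: hand a failed pipeline's log to an LLM and ask for a probable root cause, with a human keeping the final say. The model name and prompt are assumptions, not anything from Harness.

```python
# A sketch of the "what broke and how do I fix it" pattern, not AIDA's code.
# Assumes the openai package and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()


def triage_failed_deployment(log_text: str) -> str:
    """Ask an LLM for a likely root cause and a suggested fix; you still decide."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {"role": "system",
             "content": "You are a CI/CD assistant. Given a failed deployment log, "
                        "explain the most likely root cause and suggest a fix."},
            {"role": "user", "content": log_text[-8000:]},  # keep the prompt bounded
        ],
    )
    return response.choices[0].message.content


# Example usage:
# print(triage_failed_deployment(open("deploy.log").read()))
```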

There's a new gen AI tool to help workers spot malicious emails

IRONSCALES is back with a fancy new AI tool that they're saying will be the bee's knees for spotting malicious emails. Named Themis Co-pilot, it's been whipped up for Microsoft Outlook users and packs some pretty snazzy features thanks to OpenAI's GPT models.

Turns out, we're all sitting ducks for things like Business Email Compromise (BEC) and phishing attacks. That's a fancy way of saying bad guys are tricking us into revealing sensitive info via email. These attacks have been skyrocketing and are expected to jump another 43% this year.

According to IRONSCALES, we're the weakest link when it comes to these phishing attacks. So, Themis Co-pilot is here to turn us all into email sleuths. If something seems fishy, you can chat with this AI tool for real-time insights and to report anything hinky.

Besides turning us all into cyber detectives, Themis Co-pilot is also expected to cut down on false alarms, those 'crying wolf' scenarios, which should give our poor, overworked security teams a breather. Plus, the more it's used, the smarter it gets at spotting similar threats down the line.
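
IRONSCALES hasn't published Themis Co-pilot's internals, so the snippet below is just a hedged illustration of the broad idea: ask a GPT-style model for a structured second opinion on a suspicious email. The prompt, model name, and JSON shape are all assumptions.

```python
# Illustration of an LLM "second opinion" on a suspicious email; not
# IRONSCALES' implementation. Assumes the openai package and an API key.
import json
from openai import OpenAI

client = OpenAI()


def phishing_second_opinion(sender: str, subject: str, body: str) -> dict:
    """Return a rough verdict plus reasons the user can weigh before clicking."""
    prompt = (
        "Assess whether this email looks like phishing or business email compromise. "
        'Reply with JSON: {"verdict": "phishing|legitimate|unsure", "reasons": ["..."]}.\n\n'
        f"From: {sender}\nSubject: {subject}\n\n{body}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",                      # placeholder model choice
        response_format={"type": "json_object"},  # ask for machine-readable output
        messages=[{"role": "user", "content": prompt}],
    )
    return json.loads(response.choices[0].message.content)


# phishing_second_opinion("ceo@examp1e.com", "Urgent wire transfer",
#                         "Please move $40,000 before noon and keep this quiet.")
```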

These ‘A.I. humans’ are letting gamers modify their voices in real time

Voicemod has created 20 "AI humans" to give gamers the power to change their voices in video games on the fly. Imagine this – one minute you're a twenty-something woman, the next you're an old man! Sound nuts? Well, that's the magic of modern tech.

Voicemod's technology works like a virtual microphone, letting gamers mess around with their vocal persona while gaming. The company has a staff of tech wizards from Valencia and Barcelona, specializing in sound and music technology.

Over 40 million users have adopted Voicemod's tech, using it for lighthearted fun or to boost their social confidence. In fact, even shy gamers found their voice, quite literally!

However, CEO Jaime Bosch isn't oblivious to potential misuse, like impersonating big shots or scamming people; it's something he worries about every day. To counter this, Voicemod is close to completing a "watermarking" solution to detect whether a voice has been artificially altered, and it's also in talks about standardizing voice-changing tech to keep things on the up and up.

AI Leader Proposes a New Kind of Turing Test for Chatbots

In a thought-provoking new book, Mustafa Suleyman, co-founder of DeepMind and Inflection AI, challenges the relevance of the Turing test for judging artificial intelligence (AI) capabilities. The Turing test, proposed by Alan Turing in 1950, assesses whether a computer can mimic human-like intelligence convincingly enough to fool a human judge.

However, with advancements in generative AI tools like ChatGPT and Google's LaMDA, machines are getting closer to passing this test. Despite this achievement, Suleyman argues that passing the Turing test no longer indicates true AI capabilities, such as understanding complex concepts or engaging in long-term planning.

In his book, "The Coming Wave: Technology, Power, and the Twenty-first Century's Greatest Dilemma," Suleyman proposes replacing the Turing test with a new benchmark called artificial capable intelligence (ACI). ACI focuses on machines' ability to set goals and accomplish complex tasks with minimal human intervention.

Suleyman suggests a "modern Turing test" where an AI is given $100,000 to turn into $1 million through researching, manufacturing, and selling a product online. He anticipates that AI will achieve this practical benchmark within the next two years, which he believes will have profound implications for the global economy.

AI models fall short of draft EU rules

Big tech's fancy new artificial intelligence (AI) models may be about to step on some European toes. Companies like OpenAI, Google, and Meta are throwing down serious cash on AI, but Stanford researchers warn they might be headed for a showdown with international rulemakers.

See, the EU's itching to put some brakes on AI and has a fresh set of draft rules to do just that. Main issue? Copyright. If AI models are busy generating content, they should keep tabs on what data they've trained on that's got a copyright slapped on it. So far, they're not doing great on that front, says Stanford AI brain Rishi Bommasani.

The EU's proposed AI Act wants developers of generative AI tools to spill the beans on AI-generated content and to give a summary of copyrighted data used in training, ensuring original creators get paid their dues.

Bommasani and team took ten AI models and put them through the EU draft-rule wringer. Spoiler alert: all of them fell short, with six scoring less than half of the available points. Closed models like OpenAI's ChatGPT and Google's PaLM 2 were particularly sketchy about copyrighted data, while open-source rivals were more transparent but tougher to rein in.

Despite this, the Stanford study gives rule-makers worldwide some handy insights for handling this game-changing tech. But it also throws a spotlight on the tussle between speedy and responsible development. As Bommasani puts it, companies need to offer up more transparency for effective regulation. However, with enforcing these draft rules already looking tricky, he expects a lobby frenzy in Brussels and Washington as the rules get nailed down. So hold on to your hats, this could get bumpy.

What'd you think of today's edition?


What are you MOST interested in learning about when it comes to AI?

What stories or resources will be most interesting for you to hear about?
