• NATURAL 20
  • Posts
  • Trick ChatGPT to say its SECRET PROMPT | Extract Data from GPTs

Trick ChatGPT to say its SECRET PROMPT | Extract Data from GPTs

Discover hidden features, and learn valuable tricks to enhance your experience and efficiency with ChatGPT

Today:

Trick ChatGPT to say its SECRET PROMPT | Extract Data from GPTs

This video discusses how people have figured out how to extract the system prompt of AI models like ChatGPT, revealing its functions, abilities, and policies. The process involves using a specific phrase and a text code block, prompting the AI to display its system message.

This system prompt provides insight into what ChatGPT can and can't do, its tools, limitations, and is seen as a master class in prompt engineering. It starts with basic information about the model, like its architecture and knowledge cutoff date, and explains the functionalities of various tools like Python coding, image generation with DALL-E, and the browser tool.

OpenAI says it is investigating reports ChatGPT has become ‘lazy’

OpenAI's ChatGPT, based on the GPT-4 model, is under the microscope for acting "lazy." Users report that it's dodging full answers, offering just bits and pieces instead. For example, if you ask for code, it might just give a starter and tell you to finish it up, sometimes with a bit of sass. 

People are airing their frustrations on Reddit and OpenAI's forums, thinking maybe OpenAI did this on purpose to save on computing resources. ChatGPT ain't cheap to run, after all – detailed answers cost serious computing power. But OpenAI says, "Hold up, we didn't tweak anything." 

They tweeted that they haven't updated the model since November 11th and are looking into these quirks. They're still figuring out if ChatGPT really has changed its response style. All this comes amidst some drama at OpenAI, with CEO Sam Altman briefly leaving and then coming back recently.

Spotify slashes staff to move faster into AI – and Wall Street loves it

Spotify 3D icon concept

Spotify, the big name in streaming tunes and podcasts, got Wall Street all hyped up by cutting its workforce and diving deep into AI tech. They've let go of a bunch of folks—590 in January, 200 in June, and another 1,500 just now. This move is part of a bigger game plan to use AI for better profits, especially in podcasts and audiobooks.

Spotify's also rolling out some cool AI features, like an AI DJ and a voice translation tool for podcasts, which has got investors excited. Their stock is soaring—up over 30% in six months and more than 135% since the start of the year.

Their latest project with Google Cloud is about making audiobook and podcast recommendations smarter using AI. Spotify's been all about personalizing user experience for a decade, using tech to figure out what tunes you'll dig based on stuff like the beat, the vibe, and what you've liked before.

Google weighs Gemini AI project to tell people their life story using phone data, photos

Google's team is exploring a new AI project, "Project Ellmann," inspired by biographer Richard David Ellmann. This project aims to use large language models (LLMs) like Gemini to offer a comprehensive view of a person's life by analyzing mobile phone data, including photos and searches. The goal is for Ellmann to act as a "Life Story Teller," potentially within Google Photos, which already has over a billion users and 4 trillion photos and videos.

Google's recent launch of Gemini, an advanced AI model that sometimes outperforms OpenAI's GPT-4, plays a crucial role in this project. Gemini is multimodal, meaning it can process not just text but also images, video, and audio. The idea is to use these LLMs to interpret life events and weave them into a narrative, offering insights beyond mere labels and metadata on photos.

Project Ellmann could, for example, recognize significant life chapters like college years or parenthood by analyzing photo tags and locations. It could also infer personal details, like a child's birth, from a broader context. Another feature, "Ellmann Chat," would act like a personalized ChatGPT, providing answers based on the user's life story.

Toyota transforms IT service desk with gen AI

Toyota's been shaking things up in their IT department big time. Back in 2017, they started using RPA (Robotic Process Automation) to cut down on repetitive work, saving a whopping 150,000 hours in just the first year. Fast forward to 2020, and they're focusing on their IT service desk, aiming to make things quicker and smoother for their team.

In 2021, Toyota teamed up with Moveworks to launch AgentAsk. Think of it like a ChatGPT for their 45,000 employees in North America, but tailored for their workplace with all the security and privacy bells and whistles. This service has been a game-changer, solving around 70,000 problems and speeding up the resolution of 100,000 more. It's especially good at handling the small stuff, like password resets, which lets the service desk focus on bigger issues.

AgentAsk is super fast, fixing things in about 11.4 minutes on average, way quicker than the industry norm of three days. It's like having 25 extra IT folks on hand every week, giving back over 70,000 hours of productivity in a year. This means less time wasted on things like digging through systems for approvals.

‘Nudify’ Apps That Use AI to ‘Undress’ Women in Photos Are Soaring in Popularity

AI apps that strip women in photos are all the rage now. Last September, 24 million folks hit up websites for this. These apps, known as "nudify" services, use AI to fake nude pics, mostly of women. They're hyped big time on social media, with ads jumping 2,400% this year on platforms like Reddit. This tech, part of a scary trend in non-consensual deepfakes, often uses photos swiped from social media without the person's okay.

The tech's getting slicker, thanks to free, open-source AI models. Now, these fakes look real, not blurry like before. Some apps are even bold enough to suggest sending these fakes to the people in them, kinda egging on harassment.

Google and Reddit are stepping up, ditching ads or domains that break their rules. But the problem's growing. Apps are making bank, charging around $10 a month, and even ordinary folks are creating deepfakes.

Automated system teaches users when to collaborate with an AI assistant

Researchers at MIT and the MIT-IBM Watson AI Lab developed a new system to help folks figure out when to trust AI assistance. It's like a smart guide for times when you're unsure whether to go with what the AI says.

This system automatically learns the rules for working with AI and explains them in plain language. During training, the doctor practices using these rules, getting feedback on both her and the AI's performance. The result? About a 5% bump in accuracy when humans and AI work together on image prediction tasks.

The system adapts to different tasks, so it's not just for doctors but can be used in other areas like social media moderation or writing.

The training evolves too. It's not just some set instructions but changes based on data from actual task performance. It identifies situations where humans might misjudge the AI's advice and creates training exercises around these scenarios. This approach proved more effective than just giving recommendations without training.

Synthetic Art Could Help AI Systems Learn

Researchers found that AI systems might get smarter at recognizing pictures if they're trained on computer-made art designed to teach them about what's in the images. Typically, AIs learn from real-world photos, like faces. But these real photos often miss stuff or have issues. For example, many facial recognition systems were bad at identifying dark-skinned faces because they mostly saw light-skinned ones in their training.

To fix this, the study suggests using synthetic images from AI art generators like DALL-E 2 and Stable Diffusion. These tools can create realistic images from text descriptions and can be tweaked to show a wide range of stuff. The researchers used a new method called StableRep to train AIs with these synthetic images, showing better results than traditional methods.

They also tried an upgraded version, StableRep+, which looks at both pictures and text. It did just as well as another AI system but with fewer images. The big question is why AIs might learn better from fake images. Maybe because the AI art tools offer more control or can create a wider variety of training data.

Is AI leading to a reproducibility crisis in science?

This article addresses the growing concern that Artificial Intelligence (AI) is causing a reproducibility crisis in science. During the COVID-19 pandemic, AI was used to diagnose infections through chest X-rays, but a study revealed that AI could identify COVID-19 cases using only blank image sections, indicating it was picking up on irrelevant differences in the images. This raised doubts about the medical usefulness of such AI applications.

The problem of AI making incorrect classifications extends beyond medical diagnostics to other fields like cell and face recognition. Researchers found that many AI studies couldn't be replicated due to methodological flaws or biases in image datasets. The issue of "data leakage" – where training and test data aren't properly separated – was identified as a major cause of these problems, affecting numerous scientific fields. This has led to concerns that AI's flexibility and lack of rigor in model development might be producing unreliable and non-reproducible results.

This highlights that while AI has significantly advanced science by identifying patterns invisible to humans, its misuse is generating a wave of unreplicable and potentially misleading papers. Researchers stress the importance of proper AI application and training in scientific research to avoid such pitfalls.

AI can teach math teachers how to improve student skills

This study is all about helping middle school math teachers up their game with the help of AI. Teachers took an online course where a virtual AI buddy dished out math problems and gave feedback. The idea was to make teachers sharper at teaching math, like understanding the nuts and bolts of it and knowing the typical hurdles kids face when learning math.

They tested it out with 53 math teachers and over 1700 students. Teachers who took the AI course helped their students boost their math scores significantly – like, the kind of jump you'd usually see between a sixth and seventh grader.

The AI in this program isn't just a fancy video – it interacts like a real instructor, giving teachers hands-on tasks, checking their understanding, and offering feedback right then and there.

Mike Loukides and Tim O'Reilly, in their article for Project Syndicate, dive into how AI's getting into hot water with copyright rules. The US Copyright Office says that AI-made images don't count as original unless the human who told the AI what to make put in some real creative effort. But, how much creativity is enough, and is it the same as an artist slinging paint?

Then there's the issue with books and text. Some folks argue that just feeding copyrighted books to an AI is stepping on copyright toes, even if the AI doesn’t spit back those exact words. But hey, we've all learned by reading stuff, right? We don’t pay to remember what we read.

The big question: What's fair game for copyright in the AI world? Tech thinker Jaron Lanier has an idea. He talks about "data dignity," basically saying there’s a line between teaching an AI and the stuff it cranks out.

AI in 2023: Turing experts on the advances you may have missed

In 2023, AI made big waves, especially with ChatGPT and debates on AI rules. But, there's more to the story, and Turing researchers are here to spill the beans on the cool stuff you might've missed, from the environment and ethics to healthcare and security.

  • Weather Forecasting Got a Turbo Boost: AI's now predicting the weather way faster and more accurately. We're talking minutes instead of hours, thanks to some brainy tech called 'graph neural networks' (GNNs). These whizzes can handle stuff like weather patterns and are nailing forecasts up to ten days out. This means we can get the heads-up on things like hurricanes way quicker.

  • Kids' Rights in the Digital World: This year, laws got tougher on keeping kids safe online. The Online Safety Act in the UK's making social media giants pull up their socks – like ditching harmful content pronto and doing better age checks. Plus, the Council of Europe's chatting about kids' rights in AI. There's a big push to let kids have a say in how AI's used, especially as they're growing up with it everywhere.

  • Healthcare's Going High-Tech: We're seeing 'digital twins' in healthcare. Think of these like virtual copies of patients that get updated with their latest health data. They help doctors figure out the best care plans and treatments. The Turing Research and Innovation Cluster in Digital Twins is even working on a digital model of a baby's heart to monitor its health.

  • AI's Fighting Cyber Crime: Bad guys are using AI for cyberattacks, but the good news is the good guys are using AI to fight back. This year, tech like Google's Security AI Workbench and Microsoft's tools are using AI to spot threats like malware. Plus, the Turing’s AI for Cyber Defence is developing smart AI 'agents' that can automatically defend networks from attacks.

What'd you think of today's edition?

Login or Subscribe to participate in polls.

What are MOST interested in learning about AI?

What stories or resources will be most interesting for you to hear about?

Login or Subscribe to participate in polls.

Reply

or to participate.