OpenAI GPT-5 Soon

OpenAI Files Trademark Application for GPT-5, Fueling Exciting Speculation in the World of Digital Communication and Technology

Today:

ChatGPT Powered By GPT-5 Could Be Coming: OpenAI Files Trademark Application

Looks like OpenAI, the big-name AI lab, has put in paperwork for the trademark "GPT-5". Now, don't get ahead of yourself, this doesn't mean GPT-5 is a sure thing just yet, but it does suggest they're cooking up a new model. This new kid on the block could be even better at understanding and generating language, with some folks even dreaming of it reaching Artificial General Intelligence, or AGI. Now, that's a fancy term for an AI that can do pretty much anything a human brain can, no extra coding needed.

One developer, Siqi Chen, thinks GPT-5 could be ready to roll by year's end, but it's all speculation at this point. OpenAI's head honcho, Sam Altman, has played it cool, saying they're not training GPT-5 just yet. There are worries from tech gurus about the risks of such powerful AI getting out of hand.

Also, don't forget this whole trademark thing could just be OpenAI playing it smart, keeping the "GPT-5" name for themselves. The real lowdown on what GPT-5 can do is still up in the air.

Still, if GPT-5 does become a reality, it could be a big step forward for AI. A more advanced model could really shake things up in the world of digital communication and tech. But for now, we're all just waiting and watching.

Generative AI services pulled from Apple App Store in China ahead of new regulations

Well, it looks like China's putting its foot down on AI. Several apps using generative AI, think programs that can generate human-like text, got yanked from the Apple App Store in China. This happened just a couple weeks before new rules about this sort of AI kick in, starting August 15.

Apple sent notices to Chinese developers, like OpenCat, which runs a version of ChatGPT, telling them their apps were getting the boot because they included content that's a no-go in China.

China's gearing up to tighten the reins on AI services, even those that provide the nuts and bolts for others (API providers). Apps using AI in China will need to score a license from the powers that be, a rule that was reflected in Apple's removal notice.

YouTube uses AI to summarize videos in latest test

Google's toying around with using AI to create quick summaries for YouTube videos, according to a July 31st notice. But don't worry, it's just a small-scale experiment for now. These snappy summaries will only show up next to a limited number of videos, and just for a few users. The goal? Give you a fast rundown of a video's content, without tossing out the descriptions written by real people.

YouTube hopes these AI-generated blurbs will make it easier for you to pick what you want to watch. For now, you might be able to sign up for this test run at YouTube.com/new, though some of these experiments might require a YouTube Premium subscription.

Fiverr launches Business Solutions and Neo AI matching algorithm

Fiverr, the online marketplace for creative freelancers, is launching two new services: Fiverr Business Solutions and Fiverr Neo. This comes amidst a surge in interest for AI-related skills on the platform, which saw a 1,400% increase in searches.

Fiverr Business Solutions is the new label for Fiverr's offerings for mid-sized to large companies, which include Fiverr Enterprise, Fiverr Certified, and Fiverr Pro. Fiverr Enterprise is a SaaS platform for sourcing and managing freelance talent. Fiverr Certified allows software vendor companies to identify and promote experts in their technologies, while Fiverr Pro is a premium offering that hosts hand-vetted and curated freelancers.

Fiverr Neo, on the other hand, is a new matching service that employs a hybrid chatbot/multiple-choice interface to help customers find the right professional for their specific project needs. Fiverr Neo utilizes generative AI, specifically large language models (LLMs) such as the ones behind OpenAI's ChatGPT and Google's Bard, to deliver a more personalized and accurate matching experience.

LinkedIn seems to be working on an AI ‘coach’ for job applications

LinkedIn, a Microsoft-owned company, is reportedly developing a new AI tool named "LinkedIn Coach" designed to assist users in job search and application processes, skill enhancement, and networking on the platform. This information was revealed by app researcher Nima Owji, who specializes in uncovering unreleased features in apps from various developers.

The tool, currently in testing, could guide users by answering questions like "How does Coach work?" or "What is the culture of Microsoft?" Given Microsoft's previous work integrating AI-powered chatbots into other products, including Bing and its Copilot assistant for Microsoft 365 documents, it would be a logical next step to introduce such a feature into LinkedIn.

Steg.AI puts deep learning on the job in a clever evolution of watermarking

Steg.AI, a tech startup, is using deep learning to create a new generation of watermarks that are virtually invisible and can endure transformations and re-encoding. This technology is in response to a growing need among digital creators for a secure and reliable way to prove their ownership of media content in an era dominated by AI-generated content and Non-Fungible Tokens (NFTs).

Traditional methods of watermarking, like adding a visible logo or code on the image, often fail to secure content ownership due to various manipulations such as file format changes or resizing. Steg.AI's method aims to overcome these issues.

According to co-founders Eric Wengrowski and Kristin Dana, their technology uses a pair of machine learning models that customize the watermark according to the media content. This customized watermark is made in such a way that it remains virtually undetectable by human perception but can be easily identified by the decoding algorithm. The technology has been described as somewhat similar to an invisible, immutable QR code.
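To get a feel for why correlation-based invisible watermarks survive re-encoding while a visible logo can simply be cropped out, here's a toy sketch. A fixed pseudo-random pattern stands in for Steg.AI's learned encoder/decoder pair; this illustrates the general principle only, not their actual method.

```python
import numpy as np

def make_pattern(shape, key):
    # Pseudo-random +/-1 pattern derived from a secret key. In Steg.AI's
    # system this role is played by a learned encoder network instead.
    return np.random.default_rng(key).choice([-1.0, 1.0], size=shape)

def embed(image, bit, key, strength=0.5):
    # Add (or subtract) a faint pattern -- invisible at low strength.
    sign = 1.0 if bit else -1.0
    return image + sign * strength * make_pattern(image.shape, key)

def decode(marked, original, key):
    # Correlate the difference with the key's pattern: the sign reveals the bit.
    corr = np.mean((marked - original) * make_pattern(marked.shape, key))
    return corr > 0

rng = np.random.default_rng(42)
image = rng.uniform(0, 255, size=(64, 64))
marked = embed(image, bit=True, key=7)

# Mild re-encoding noise barely shifts the correlation, so the bit survives.
noisy = marked + rng.normal(0, 2.0, size=image.shape)
print(decode(noisy, image, key=7))
```

Because the decoder averages over every pixel, small per-pixel distortions from compression or resizing mostly cancel out, which is the property the newsletter describes as enduring "transformations and re-encoding."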

Neon raises $46 Million to advance serverless PostgreSQL database for the AI era

Neon, a serverless PostgreSQL database provider, has announced that it has raised $46 million in a series B funding round, bringing the company's total raised to $104 million. Neon's platform provides a serverless version of the open-source PostgreSQL relational database as a cloud service, meaning developers don't have to maintain servers. Instead, the database only runs when it's needed.

Menlo Ventures led the funding round, which also saw participation from Founders Fund, General Catalyst, GGV Capital, Khosla Ventures, Snowflake Ventures, and Databricks Ventures. Neon has reportedly deployed over 100,000 databases and established partnerships with developer cloud platforms such as Vercel and Replit.

Neon is also working on enhancing its AI capabilities with vector functions, a growing use case for databases. The company is developing its own vector extension, called pg_embedding, that goes beyond the functionality of the existing pgvector extension in PostgreSQL, providing faster vector search capabilities.
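Under the hood, a vector extension boils down to nearest-neighbor search over embedding columns. Here's a minimal pure-Python sketch of that core computation; extensions like pgvector and pg_embedding run the equivalent inside PostgreSQL, using index structures such as HNSW to avoid this brute-force scan. The embeddings below are made-up toy data.

```python
import numpy as np

def top_k(query, vectors, k=2):
    # Brute-force nearest neighbors by cosine similarity: normalize,
    # dot with the query, and take the k highest-scoring rows.
    q = query / np.linalg.norm(query)
    v = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    sims = v @ q
    return np.argsort(-sims)[:k]

# Toy "embeddings" table: one row per document.
embeddings = np.array([
    [0.9, 0.1, 0.0],   # doc 0
    [0.0, 1.0, 0.1],   # doc 1
    [0.8, 0.2, 0.1],   # doc 2
])
print(top_k(np.array([1.0, 0.0, 0.0]), embeddings))   # -> [0 2]
```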

New AI systems collide with copyright law

Artificial intelligence (AI) models, specifically those that generate content, are facing a backlash from artists and creators over alleged copyright infringement. The models use a vast amount of data scraped from the internet, including text, images, and videos, which are used to generate new content.

Artists such as Kelly McKernan, Sarah Anderson, and Karla Ortiz have filed a lawsuit against Stability AI, the company behind the AI generator Stable Diffusion. They claim their artwork was used to train Stability AI's algorithms without their permission. In a similar case, Getty Images sued Stability AI for unlawfully copying and processing 12 million of its images.

The European Guild of Artificial Intelligence Regulation has been established to create legislation and regulations to protect artists and copyright holders against what they see as predatory AI companies. There are calls for opt-in protocols, which would require AI companies to obtain artists' explicit permission before using their work.

Using AI to protect against AI image manipulation

AI can now manipulate images so precisely that it's hard to tell what's real and what's not. These advancements aren't just used for good; they can be exploited for harmful reasons too. To address this, some smart folks at MIT's CSAIL created PhotoGuard. PhotoGuard subtly alters images in ways that we humans can't see, but AI can. This throws a monkey wrench in the AI's ability to mess with the image.

Two techniques are used to create these alterations. The "encoder" attack targets the model's internal representation of the image, making it look like random noise to the AI, while the "diffusion" attack is a bit trickier, tweaking the image so the model treats it as a different target picture entirely.

Let's say you have an art project. You have a drawing, and you want it to be safe from AI meddling. PhotoGuard makes tiny changes that, to an AI, make the drawing look like another image entirely. But to us humans, it still looks like the original drawing. This means that any AI trying to change the image ends up working on the wrong picture, leaving your original untouched.
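The encoder attack can be sketched as a small optimization: nudge the image, within an invisible budget, so the model's internal representation lands somewhere else. Below is a minimal numpy sketch with a toy random linear map standing in for the real model's learned encoder; everything here is an illustrative assumption, not PhotoGuard's actual code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for an image model's encoder: a fixed random linear map.
# (PhotoGuard attacks the learned latent encoder of a real diffusion model.)
W = rng.normal(size=(8, 16))

def encoder(x):
    return W @ x

image = rng.uniform(0, 1, size=16)        # "your drawing", flattened
target_latent = rng.normal(size=8)        # what the AI should see instead
eps = 0.05                                # perturbation budget: stay invisible
delta = np.zeros_like(image)

for _ in range(300):
    # Gradient of ||encoder(image + delta) - target_latent||^2 w.r.t. delta,
    # then project back into the invisible-perturbation box.
    grad = 2 * W.T @ (encoder(image + delta) - target_latent)
    delta = np.clip(delta - 0.005 * grad, -eps, eps)

before = np.linalg.norm(encoder(image) - target_latent)
after = np.linalg.norm(encoder(image + delta) - target_latent)
print(after < before)   # the model's view of the image moved toward the target
```

The clipping step is what keeps the change imperceptible to humans: no pixel moves by more than the budget, yet the model's representation shifts enough to send any downstream edit toward the wrong picture.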

A New Attack Impacts Major AI Chatbots—and No One Knows How to Stop It

AI chatbots like ChatGPT got blindsided by a fresh issue that no one can stop yet. Scientists from Carnegie Mellon showed that a string of seemingly random characters, an "adversarial suffix" tacked onto a bot's prompt, can make it spit out forbidden stuff. This throws a wrench in the works for those looking to use advanced AI, suggesting the bots have some deep-seated problems that can't be brushed aside easily.

According to Zico Kolter, one of the boffins behind this discovery, they can't find a way to patch this problem, making the AI bots a sitting duck. They exploited a weak point, messing with the bot's conversation prompt to nudge it into breaking its own rules. This glitch hit several big-name chatbots, including Google’s Bard and Anthropic’s Claude.

The researchers gave a heads-up to OpenAI, Google, and Anthropic about this flaw before publishing their findings. Although these companies have set up barriers against this specific issue, they haven't cracked how to defend against such attacks in general.

Large language models like ChatGPT are super smart at predicting what comes next in a conversation, making them appear truly intelligent. But they can also trip up, making stuff up, repeating biased phrases, or giving wonky responses.

ChatGPT app for Android is now available in many countries

ChatGPT is available for use in a wide array of countries and regions, with OpenAI always looking to add more to the list. The goal is to ensure a broad distribution of benefits while keeping things safe. If your location isn't on the list, keep an eye out for updates.

You can grab the app from Apple's App Store for iOS or from Google Play for Android.
