OpenAI's GPT-4: Now Generally Available!
OpenAI launches the GPT-4 API for all paying developers, bringing stronger text generation, image inputs, and better performance on professional benchmarks.
Today:
OpenAI makes GPT-4 generally available
OpenAI has rolled out its latest word-making machine, GPT-4, for general use through its API. Existing API developers who've been paying their bills on time get first dibs starting this afternoon, and newbies get to join the fun later this month. If OpenAI can scrounge up enough computing juice, they'll invite even more folks to the party.
Since they announced GPT-4 in March, there's been a ton of requests for access. Every day more and more cool tools using GPT-4 are popping up. OpenAI's vision is for these chat-based models to be able to tackle any job.
Compared to its older brother, GPT-3.5, GPT-4 isn't just a words guy - it can take images and text as inputs and even generate code. It's been put through its paces on a bunch of professional and academic tests and it's holding its own. But, it ain't perfect. Sometimes it makes stuff up, and sometimes it makes mistakes with too much confidence. It doesn't learn from its mistakes and could even slip up and create security risks in the code it writes.
OpenAI plans to let developers fine-tune GPT-4, and a less fancy model called GPT-3.5 Turbo, using their own data. They expect to have this ready by the end of the year.
OpenAI also announced that it's releasing two other models, DALL-E 2, which creates images, and Whisper, which turns speech into text, for general use. They also plan to retire some of their old models, specifically GPT-3 and its offshoots. Folks using these will need to switch to the new "base GPT-3" models, which are presumably more efficient. OpenAI has promised to lend a hand to help users make the switch smoothly. They'll be reaching out to developers soon with more info.
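For the curious, here's a rough sketch of what a request to the newly opened GPT-4 API looks like. The endpoint and field names follow OpenAI's public chat-completions documentation; the prompt and settings are made-up examples, and the snippet only builds the request body rather than sending it (you'd need a real API key for that).

```python
import json

# Chat-completions endpoint from OpenAI's public API docs.
API_URL = "https://api.openai.com/v1/chat/completions"

def build_request(prompt: str, model: str = "gpt-4") -> dict:
    """Assemble the JSON body for a single-turn chat completion."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,  # illustrative setting, not a recommendation
    }

body = build_request("Summarize today's AI news in one sentence.")
print(json.dumps(body, indent=2))
```

To actually send it, you'd POST this body to the endpoint with an `Authorization: Bearer <your-key>` header.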
Amazon CEO explains how the company will compete against Microsoft, Google in AI race
Andy Jassy, the big cheese at Amazon, ain't throwing in the towel in the battle of the brains, AI style, against Microsoft and Google. He argues we're seeing more fluff than fact in AI's current 'hype cycle.'
He's got his sights set on AI's potential to jazz up every customer's experience and isn't shy about throwing some of Amazon's hard-earned cash into this venture. He's particularly keen on Amazon Web Services (AWS) riding the AI wave.
AWS has launched Bedrock, a tool that lets clients create their own chatbots and image services using Amazon's language models and those of other budding businesses. They're also whipping up their own AI chips - Inferentia and Trainium - to rival Nvidia's tech and make running hefty AI language models a walk in the park for developers.
Jassy's confident these moves give Amazon a leg up in the AI game, despite recent cuts at the company due to sales hitting the brakes and a dreary economic forecast. He's been tightening the purse strings, leading to the largest layoffs in Amazon's history and pausing expansion on a few of its ventures. But in the realm of AI, he's willing to bet the farm.
Google to explore alternatives to robots.txt in wake of generative AI and other emerging technologies
Google is looking for fresh ways to handle web crawling and indexing, the process that allows sites to pop up in your search results. They're trying to go beyond the robots.txt protocol, a standard method that's been around for 30 years. The reason? Emerging technologies, including AI, need more complex tools to decide what they can and can't access on the web.
Google's not going it alone. They're bringing in experts from all over -- web publishers, academics, and more -- for a chat about a new approach. This isn't a quick fix; these discussions will play out over the coming months, so don't expect any big changes right away.
One big issue pushing this change is paywalled content. OpenAI recently had to pull a ChatGPT feature that let it peek at content behind a paywall without the site's permission. Incidents like that are part of why Google is hunting for alternatives to robots.txt.
We're all used to allowing bots on our sites using robots.txt and similar methods. But we might have to start learning new tricks. No one knows what these new methods will look like yet, but the conversation is starting.
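For reference, this is roughly all the expressiveness robots.txt gives a site today: a flat allow/deny list per user agent, parsed here with Python's standard-library `urllib.robotparser`. The file contents and bot name are made up for illustration.

```python
from urllib.robotparser import RobotFileParser

# A made-up robots.txt: the 30-year-old protocol boils down to
# per-user-agent allow/deny path prefixes, with no way to say things
# like "search crawlers yes, AI training crawlers no" per use case.
ROBOTS_TXT = """\
User-agent: *
Disallow: /private/
Allow: /
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

print(parser.can_fetch("AnyBot", "https://example.com/articles/ai"))   # True
print(parser.can_fetch("AnyBot", "https://example.com/private/data"))  # False
```

Any richer rules -- per-purpose access, licensing terms, paywall awareness -- would need a new protocol, which is exactly what Google wants to discuss.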
ChatGPT sees its first monthly drop in traffic since launch
Since its launch in November 2022, OpenAI's ChatGPT took the AI world by storm, rapidly growing to 100 million active users in just two months. However, for the first time, this chatbot sensation saw a dip in traffic in June 2023. A Similarweb report shows a 9.7% worldwide drop and a 10.3% drop in the US. People are also spending less time yakking with the bot, with an 8.5% decrease in time spent on the site.
Seems the initial excitement over the chatbot might be cooling off. According to David F. Carr from Similarweb, the chatbot honeymoon phase might be over, and now they've got to roll up their sleeves and show their worth.
Sure, there was chatter about AI chatbots like ChatGPT potentially replacing search engines, but Google still dwarfs ChatGPT with roughly 84 billion monthly visits to ChatGPT's 1.8 billion. For perspective, Bing, holding a tiny 3% of the search engine market, still draws over 1.1 billion monthly visits.
After OpenAI's head honcho, Sam Altman, mentioned that running the free platform was costing an arm and a leg, the company introduced the ChatGPT Plus subscription. The technology also found takers like Microsoft, who have hitched the AI to Bing Chat to create a smarter search engine. Interestingly, there's a silver lining: traffic to OpenAI's developer site rose by 3.1% from May to June.
DigitalOcean acquires cloud computing startup Paperspace for $111M in cash
Cloud hosting bigwig DigitalOcean just shelled out $111 million for Paperspace, a New York-based AI startup. The big idea? This purchase should give DigitalOcean customers a smoother ride when it comes to testing, creating, and using AI applications. Those using Paperspace, on the other hand, get to enjoy all the cloud services DigitalOcean has on tap – think storage, app hosting, and more. But if you're a Paperspace user, don't fret – there won't be any immediate changes to your service. In fact, Paperspace will still run its own show as part of DigitalOcean.
This buy is part of DigitalOcean's plan to level up their cloud service game and compete with the big guns in the market. They hope to reach more folks, especially those on a tight budget, looking to explore AI and machine learning apps.
As for Paperspace, joining forces with DigitalOcean means they can help more developers and businesses make the most of AI and machine learning. This is the first acquisition for DigitalOcean since they bought Cloudways, a Pakistani cloud hosting service, in 2022, and their fourth since going public in 2021.
Observers think this is a smart move for DigitalOcean. While its profits have grown, it's fallen short of expectations. As more tech giants bank on AI to boost their revenues, buying Paperspace seems like a good way to keep pace and grab a piece of the predicted $600 billion cloud spending pie this year.
Inflection AI Develops Supercomputer Equipped With 22,000 NVIDIA H100 AI GPUs
AI startup Inflection AI is whipping up a monster of a supercomputer, jam-packed with a whopping 22,000 NVIDIA H100 GPUs. That's a whole lot of computing horsepower!
Inflection AI, the folks who brought us the Inflection-1 AI model for the Pi chatbot, are turning heads in the industry. Even though Inflection-1 isn't quite on par with heavy-hitters like ChatGPT or Google's LaMDA models yet, it's a solid performer for everyday tasks - the sort of stuff you'd want a personal assistant to handle.
This new supercomputer they're piecing together looks set to be one of the biggest in the business, hot on the heels of AMD's Frontier. And here's the kicker: it's loaded with about 700 four-node racks of Intel Xeon CPUs and sucks down a whopping 31 megawatts of power.
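A quick back-of-envelope check on that power figure. The per-GPU wattage below is NVIDIA's published ~700 W board power for the H100 SXM, not a number from the article:

```python
# Rough sanity check: how much of the quoted 31 MW could the GPUs alone
# account for? Assumes ~700 W per H100 SXM (NVIDIA's spec-sheet figure).
NUM_GPUS = 22_000
GPU_WATTS = 700

gpu_megawatts = NUM_GPUS * GPU_WATTS / 1_000_000
print(f"GPUs alone: ~{gpu_megawatts:.1f} MW")                 # ~15.4 MW
print(f"Share of the quoted 31 MW: {gpu_megawatts / 31:.0%}")  # ~50%
```

So the GPUs plausibly account for about half the draw, with the Xeon CPUs, networking, and cooling making up the rest.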
Scoring 22,000 NVIDIA H100 GPUs is a big deal. They're hot property, and it's tough to get your mitts on even one these days, thanks to a surge in demand. Inflection AI had an inside track here, though, since NVIDIA is considering investing in the company.
Inflection AI has already banked around $1.5 billion in investments and has a total value of about $4 billion. The new supercomputer is expected to seriously boost the performance of the Inflection-1 AI model, especially when it comes to coding tasks.
Shutterstock continues generative AI push with legal protection for enterprise customers
Shutterstock is now providing complete protection to their big business customers using AI-created images, covering them against any claims tied to their use. This comes on the heels of Adobe's similar move, showing a growing trend of providing safe and ethical AI content. This protection works just like it does for other content under Shutterstock's licensing agreement, and protects against copyright infringement, violations of privacy or publicity rights, and any other legal issues.
Shutterstock has been integrating AI into its services since 2022, first partnering with OpenAI and then launching its own AI image creator. Unlike its rival Getty Images, which took legal action against a text-to-image generator creator for copyright issues, Shutterstock's AI image creator was developed using lots of ethically-sourced content, including its own.
Shutterstock is also focusing on looking out for its contributors. The company's Contributor Fund, started in 2022, has paid out a good deal of money to many artists whose work was used to train its AI technology. Shutterstock plans to make more payments to many more artists in the future. Contributors also keep ownership of their work, and can choose if they want their content to be used in training new AI models.
Daniel Ek’s Neko Health raises $65M for preventative healthcare through full-body scans
Spotify co-founder Daniel Ek's new venture, Neko Health, just bagged a cool $65 million in their first big funding round. The Swedish startup, co-founded with Hjalmar Nilsonne, is promising to take a fresh crack at healthcare, using AI-backed full-body scans to catch nasty stuff like skin cancer and diabetes early on.
Here's the deal: you go in, get a ten-minute scan for about $280, then chat with a doc about what they find. Neko's crew, made up of 35 healthcare pros, is already chugging away at a waiting list from their first clinic in Stockholm.
Nilsonne, the CEO, believes prevention is the best medicine and hopes to take a load off the strained healthcare system. However, Neko's been pretty hush-hush, giving Theranos vibes, and didn't give media much chance to poke and prod.
Despite the secrecy, some big names, including Lakestar, Atomico, and General Catalyst, put their money behind Neko. They've backed hotshots like Spotify, Airbnb, Snap, and HubSpot in the past. The new dough will go towards spreading clinics around Europe, plus investing in research and recruiting.
Elon Musk praises China’s ‘very strong’ A.I. credentials
Elon Musk, the CEO of Tesla, praised China’s “very strong” AI credentials at the 2023 World Artificial Intelligence Conference in Shanghai. Musk believes that China is in a strong position when it comes to the development of artificial intelligence and that the country will be “great at anything it puts its mind to.”
He also praised the “tremendous number of very smart, very talented people in China” and predicted that China will have “very strong AI capability.” Musk has significant business interests in China, including selling Tesla’s electric cars and running a major factory in Shanghai.
His comments come against a backdrop of tensions between the U.S. and China over technology, including Washington’s enactment of export restrictions on key chips and semiconductor equipment to China in 2022. Interest in AI and its potential effects on society has heightened in recent times, with chatbots like OpenAI’s ChatGPT garnering publicity and discussions about whether AI poses a wider threat to humanity taking place.
Air Force colonel says it took AI 10 minutes to complete a task that can take humans days
Our military's giving artificial intelligence (AI) a test run, according to Bloomberg. Imagine a chore that takes humans hours or days to complete; AI got it done in a matter of minutes. That's what Air Force Colonel Matthew Strohmeyer reported: the technology they used, a large language model, aced the task, and fast.
Now, the Pentagon's being tight-lipped about which models they're testing, but it seems they're feeding them some top-secret info to check how they handle the heat in real time.
The Defense Department might use AI to help make decisions in the future. They're doing an eight-week exercise now that wraps up on July 26. They're trying to figure out how to use AI ethically and effectively to keep our defenses strong.
The State Department's also set some ground rules for testing and using AI. They're stressing the importance of human oversight and control in AI tasks to avoid accidents and biases. They're making it clear that humans should be the ones calling the shots, especially when it comes to nuclear weapons decisions.