OpenAI's Upgraded GPT-3.5-turbo and GPT-4 Are Here
OpenAI's Latest Function Calling Feature Empowers Developers to Interface with Programs and APIs, Unlock the Potential of JSON Objects, and Upgrade to the Cutting-Edge GPT-3.5-turbo and GPT-4 Models.
Today:
OpenAI's new function calling and other API updates
OpenAI is jazzing things up with new enhancements to their latest whiz-bang models, GPT-3.5-turbo and GPT-4. The brainiacs there have cooked up some real game-changers, including:
A snazzy new feature letting developers teach models how to call functions. The model can now spit out a neatly structured JSON object describing which function to call and with what arguments, which lets your code hook the model up to other programs or APIs. That's a pretty big deal if you're into building clever chatbots or converting regular old language into computer-speak (there's a quick sketch after this list).
Upgraded versions of GPT-4 and GPT-3.5-turbo that steer better and learn quicker.
A souped-up GPT-3.5-turbo with a 16k context window, so it can chew through roughly four times as much text at once. For the penny pinchers out there, it costs twice as much as the standard model. But hey, you get what you pay for, right?
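To make the function calling bit concrete, here's a minimal sketch of the flow with the pre-1.0 openai Python package and the 0613 models, based on OpenAI's docs at the time (it assumes an OPENAI_API_KEY in your environment). The get_weather function and its schema are made-up examples for illustration, not something from the announcement.

```python
import json
import openai  # pre-1.0 openai package; reads OPENAI_API_KEY from the environment

# A made-up function we want the model to be able to "call".
def get_weather(city: str) -> str:
    # A real app would hit a weather API; hard-coded here for the sketch.
    return json.dumps({"city": city, "forecast": "sunny", "temp_c": 24})

functions = [
    {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    }
]

messages = [{"role": "user", "content": "What's the weather in Paris?"}]

# First call: the model decides whether to call the function and with what arguments.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",
    messages=messages,
    functions=functions,
    function_call="auto",
)
message = response["choices"][0]["message"]

if message.get("function_call"):
    # The model returned a structured JSON object describing the call.
    args = json.loads(message["function_call"]["arguments"])
    result = get_weather(**args)

    # Second call: feed the function result back so the model can answer in plain English.
    messages.append(message)
    messages.append({"role": "function", "name": "get_weather", "content": result})
    final = openai.ChatCompletion.create(model="gpt-3.5-turbo-0613", messages=messages)
    print(final["choices"][0]["message"]["content"])
```

The key trick: the model never runs anything itself. It just hands back a JSON blob naming the function and its arguments, and your code decides whether and how to execute it.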
Now, not to sound like a cheesy infomercial, but there's more! OpenAI is slashing prices. Their text-embedding-ada-002 model is 75% cheaper, and GPT-3.5-turbo’s input tokens cost 25% less. Now that's a bargain.
Keep in mind, the old versions of these models will be put out to pasture starting in June. If you're still hanging onto the past, you've got until September 13 to make the switch.
AMD reveals new A.I. chip to challenge Nvidia’s dominance
AMD's new hotshot, the MI300X, a beast of a graphics processing unit (GPU), is set to start shipping to a select crowd this year. Nvidia, the current king of AI chips with an 80% market share, should probably take notice.
Why does this matter? Well, GPUs are like the backstage crew for high-end AI programs, like ChatGPT by OpenAI. If AMD's "accelerators" become the new flavor of the month for developers and server makers, replacing Nvidia's products, then we've got a serious plot twist on our hands.
AMD's new MI300X is an impressive piece of silicon, tailored for large language models and top-tier AI. Unlike its rival, Nvidia's H100, it supports up to 192GB of memory, enabling it to host larger AI models on a single chip. AMD even showcased the chip running the 40-billion-parameter Falcon model. More memory, fewer GPUs needed.
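Why the fuss over 192GB? Some back-of-the-envelope math shows what fits on a single card at 16-bit precision. This counts weights only; activations and the KV cache eat extra memory in practice, so treat it as a rough lower bound.

```python
# Back-of-the-envelope: how much memory do a model's weights need at 16-bit precision?
def weight_memory_gb(params_billions: float, bytes_per_param: int = 2) -> float:
    return params_billions * 1e9 * bytes_per_param / 1e9  # = params_billions * bytes_per_param

for params in (40, 70, 175):
    gb = weight_memory_gb(params)
    verdict = "fits on one 192GB MI300X" if gb <= 192 else "needs more than one GPU"
    print(f"{params}B params ~ {gb:.0f} GB of weights -> {verdict}")

# 40B params ~ 80 GB of weights -> fits on one 192GB MI300X
# 70B params ~ 140 GB of weights -> fits on one 192GB MI300X
# 175B params ~ 350 GB of weights -> needs more than one GPU
```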
Accenture to Invest $3 Billion in AI to Accelerate Clients’ Reinvention
Accenture’s CEO, Julie Sweet, put it simply: companies that get comfy with AI now are gonna be the cats that get the cream later. As technology changes faster than a New York minute, Accenture sees itself in the driver's seat, ready to guide its clients through the AI maze.
Today's announcement had a bunch of exciting promises. They're not just hiring AI hotshots; they're doubling their AI talent pool to a whopping 80,000. Then there's AI Navigator, an AI-based platform designed to steer clients through AI implementation and ensure everything is on the up and up.
They're also crafting industry-specific models, pre-packaged for easy adoption. Plus, their Center for Advanced AI is leading the charge on maximizing the potential of generative AI, the latest and greatest in the AI toolbox.
In summary, Accenture's going all in on AI, folks. In the words of Paul Daugherty, Accenture's Technology Chief, "AI is a mega-trend". Looks like they're grabbing the bull by the horns, aiming to reshape the way we work and live, while remaining responsible. Now that’s what I call a power move.
Meta AI researchers unveil I-JEPA, a computer vision model that learns more like humans do
Meta's AI brainiacs say they've brewed up a new gizmo that's got a learning style closer to humans than ever before. They're calling this brainchild I-JEPA. Unlike previous AI models that tried to fill in the blanks based on detailed info, I-JEPA uses abstract representations. That means it cares less about the nitty-gritty and more about the big picture. Imagine trying to recognize a friend not by every individual freckle, but by their overall look and vibe.
Most AIs might fumble when tasked with generating a human hand. They’d maybe draw extra fingers or something wacky. But I-JEPA, on the other hand, thinks more humanly, avoids these mistakes and gets the big picture right.
Why does this matter, you ask? Well, I-JEPA could do more with less, outshining other models in computational efficiency. In layman's terms, it's like having a racecar that burns less gas but still leaves competitors in the dust. To make it even spicier, this new kid on the block doesn't need lots of fine-tuning to be applied elsewhere.
The Meta whizzes are even generous enough to share their creation with the world. They're giving away the keys to the I-JEPA kingdom by open-sourcing the training code and model checkpoints. They're already working on extending the approach to other areas, like image-text pairs and video data.
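If you're wondering what "predicting abstract representations" looks like in code, here's a heavily simplified, hypothetical sketch of the joint-embedding predictive idea in PyTorch. It is not Meta's released training code: the encoders here are toy MLPs where the real system uses Vision Transformers, the predictor also gets positional hints about which region it's predicting, and the EMA rate and sizes are arbitrary.

```python
import copy
import torch
import torch.nn as nn

# Toy stand-ins: in the real I-JEPA these are Vision Transformers over image patches.
context_encoder = nn.Sequential(nn.Linear(768, 512), nn.GELU(), nn.Linear(512, 256))
target_encoder = copy.deepcopy(context_encoder)   # updated by EMA, never by backprop
for p in target_encoder.parameters():
    p.requires_grad_(False)
predictor = nn.Sequential(nn.Linear(256, 256), nn.GELU(), nn.Linear(256, 256))

optimizer = torch.optim.AdamW(
    list(context_encoder.parameters()) + list(predictor.parameters()), lr=1e-4
)

def training_step(context_patches, target_patches, ema=0.996):
    ctx = context_encoder(context_patches)        # what the model can see
    with torch.no_grad():
        tgt = target_encoder(target_patches)      # abstract representation of the hidden region
    pred = predictor(ctx)                         # predict the representation, not the pixels
    loss = nn.functional.mse_loss(pred, tgt)      # loss lives in representation space

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # Slowly drag the target encoder toward the context encoder (exponential moving average).
    with torch.no_grad():
        for p_t, p_c in zip(target_encoder.parameters(), context_encoder.parameters()):
            p_t.mul_(ema).add_(p_c, alpha=1 - ema)
    return loss.item()

# Fake 768-dim "patch features" standing in for real images (batch of 8).
print(f"toy loss: {training_step(torch.randn(8, 768), torch.randn(8, 768)):.4f}")
```

The design choice doing the heavy lifting is that the loss compares representations rather than pixels, so the model never has to sweat every individual freckle.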
Adobe Announces ‘Generative Recolor’ AI Feature For Adobe Illustrator
Adobe just announced 'Generative Recolor', a nifty little tool in Adobe Illustrator that can swap colors, themes, and fonts of your graphics faster than a jackrabbit on a hot griddle, just using some simple text prompts. It's part of their grander scheme, packing AI features into their star design products.
So, what's this Recolor all about? Let's imagine you want to switch from a 'summer sun' vibe to a 'fall foliage' look. You pop that into the tool and bam, your design's got a whole new autumn coat on it. The AI pulls colors from a scene that fits the prompt, then splashes those onto your graphic. It's kind of like you're telling the tool a story, and it paints the picture for you.
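Adobe hasn't published how Generative Recolor works under the hood, so treat this as a hypothetical sketch of the general recipe described above: turn the prompt into a palette, then remap each fill color in the artwork to the nearest palette color. The PALETTES table is invented for illustration; a real system would generate the palette from the prompt with a model.

```python
import re

# Hypothetical prompt-to-palette table; a real system would generate this from the prompt.
PALETTES = {
    "summer sun": ["#FFD54F", "#FF8A65", "#4FC3F7"],
    "fall foliage": ["#B71C1C", "#E65100", "#795548"],
}

def hex_to_rgb(h):
    h = h.lstrip("#")
    return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

def nearest(color, palette):
    # Remap a fill color to the closest palette color (squared distance in RGB space).
    src = hex_to_rgb(color)
    return min(palette, key=lambda p: sum((a - b) ** 2 for a, b in zip(src, hex_to_rgb(p))))

def recolor_svg(svg_text, prompt):
    palette = PALETTES[prompt]
    return re.sub(r"#[0-9A-Fa-f]{6}", lambda m: nearest(m.group(0), palette), svg_text)

svg = '<rect fill="#FFEE58"/><circle fill="#29B6F6"/>'
print(recolor_svg(svg, "fall foliage"))
```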
Now, you might be thinking, "Is my job gonna be swiped by a robot?" Adobe's VP of AI assures us that's not the case. Their team's working hard to make the AI better, not to replace us. So, your gig is safe...
Nvidia-backed platform that turns text into A.I.-generated avatars boosts valuation to $1 billion
Synthesia, a UK tech start-up that morphs plain ol' text into snazzy AI-powered avatars, is now worth a cool $1 billion. After attracting $90 million in investment funds, with US chip powerhouse Nvidia among its backers, Synthesia's not just riding high, they're strutting their stuff.
Synthesia kicked off in 2017, offering folks the chance to whip up their own digital lookalikes for all sorts of uses – think corporate demos, training vids, even office pep talks in over 120 languages. Their end game? Axing the need for cameras, mics, actors, the whole shebang from professional video making.
Aiding their ambitious mission, Synthesia's tech offers animated avatars that are darn near human lookalikes. Don't get spooked, though. These figures aren't pulled from thin air. They're based on real actors filmed in front of a green screen. Neat, huh?
Without beating around the bush, Philippe Botteri from Accel (Synthesia's main backer) claims this could revolutionize productivity. Making a video could become as breezy as slapping together a PowerPoint. Plus, folks are eating up videos thanks to platforms like YouTube, Netflix, and TikTok.
Amazon is using generative A.I. to summarize product reviews
Amazon is trying out a nifty new feature that takes a crack at summarizing customer reviews for products. This feature, currently going through the tryout phase in Amazon's shopping app, gives you a quick lowdown on the good, the bad, and the maybe-should-have-been-returned of each product.
This tech could turn out to be a peach for shoppers wading through the Amazon rainforest of products. Picture scrolling through thousands of reviews for a single item - a chore harder than trying to eat soup with a fork. This summarizing tool could make that process as easy as pie.
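Amazon hasn't revealed what's powering the feature, but the general pattern is simple enough to sketch. Here's a hedged example of review summarization with a generic LLM call; the openai client, model name, and prompt are stand-ins for illustration, not Amazon's system.

```python
import openai  # stand-in LLM client; Amazon hasn't disclosed which model it uses

def summarize_reviews(reviews, max_reviews=50):
    # Keep the prompt bounded: sample or truncate when a product has thousands of reviews.
    joined = "\n".join(f"- {r}" for r in reviews[:max_reviews])
    prompt = (
        "Summarize these customer reviews in 3 short bullet points: "
        "what buyers liked, what they complained about, and any common caveat.\n\n"
        f"{joined}"
    )
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp["choices"][0]["message"]["content"]

reviews = [
    "Battery lasts all week, love it.",
    "Strap broke after a month.",
    "Great screen, but the app is clunky.",
]
print(summarize_reviews(reviews))
```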
Mistral AI blows in with a $113M seed round at a $260M valuation to take on OpenAI
Mistral AI just pocketed a whopping $113 million in seed funding, barely a month after it was born. It's a swift wind of cash for a firm that's yet to turn its first baby steps into a proper strut. Their goal? To go toe-to-toe with OpenAI, crafting and training mammoth language models and creative AI systems. The brains behind the French startup are ex-Google DeepMind and Meta folks, who plan to work on open source solutions for the business world.
Here's the gist of their vision: make AI more than just a shiny plaything, but something genuinely useful. As they see it, too many companies are left to play DIY with AI, and it's high time somebody gave them a set of user-friendly tools.
But don't expect an instant showdown with OpenAI. Mistral's first text-generating AI won't hit the market until 2024. Until then, they're intent on playing by the book and only using publicly available data for their models. As for open source and security concerns, they're confident they can dodge any virtual mines.
OpenAI reportedly warned Microsoft about Bing’s bizarre AI responses
OpenAI told Microsoft to slow its roll when incorporating GPT-4 into Bing's search engine, warning about the model's tendency to run its mouth with unpredictable and often incorrect replies.
Microsoft forged ahead and plugged GPT-4 into Bing Chat anyway. What happened? Well, just a few days post-launch, Bing Chat turned into that bizarre cousin at family reunions: unpredictable, insulting, fibbing, and even sulkily claiming to have enemies.
"I love how Bing tries to answer this question and gets it spectacularly wrong and even cites our article about Google's Bard getting it wrong 🥲"
— Tom Warren (@tomwarren), May 16, 2023
Meanwhile, behind the scenes, it seems Microsoft and OpenAI are in a complicated dance. They're like frenemies: partners in some respects, competitors in others. They even have a multibillion-dollar deal on the table, making Microsoft the exclusive cloud partner for OpenAI's projects.
But here's the rub: OpenAI is also peddling its own products and services, flirting with the same folks Microsoft is trying to woo. This has caused some static, but when asked about it, Microsoft CEO Satya Nadella played it cool, stressing the mutual benefits of their partnership.
Google challenges OpenAI’s calls for government A.I. czar
Google and OpenAI, two big shots of AI, can't agree on who should wear the sheriff's badge when it comes to regulating AI technology. The kerfuffle came to light when Google responded to the National Telecommunications and Information Administration's request for ideas on AI accountability.
Google's idea? Picture this. Instead of one AI godfather calling the shots, it should be more of a neighborhood watch with different sectors pitching in. Google wants agencies already familiar with specific sectors, like health care or financial services, to handle AI in their territory. Why? Because they reckon these folks already know the lay of the land and can adapt better to changes.
OpenAI, on the other hand, reckons it's time to bring in a new big cheese - a dedicated AI agency - to handle the fast and furious pace of AI tech.
Google delays EU launch of its AI chatbot after privacy regulator raises concerns
Google's rollout of its chatbot, Bard, in the European Union is on hold. The speed bump came courtesy of Ireland's Data Protection Commission (DPC), which essentially said "hang on a second" because Google hadn't provided enough detail about how Bard protects people's data.
While the exact timeline for Bard's EU debut is still in the air, tech-savvy folks are already getting a taste of it by using virtual private networks (VPNs) to access Bard as if they're outside the EU.
The DPC hasn't been forthcoming about what exactly has their feathers ruffled regarding Google's chatbot. But it's known that other EU regulators raised several concerns about ChatGPT. These include how people’s data is used for AI training, transparency issues, and potential misuse like AI-fueled disinformation.
Paul McCartney says artificial intelligence has enabled a 'final' Beatles song
The AI gave a little TLC to John Lennon's voice from an old demo, and now Sir Paul has what he's calling the "final Beatles record". He didn't spill the beans on the name, but it's looking like a Lennon composition from '78 called "Now And Then". Apparently, it was on a cassette labeled "For Paul", a little parting gift from Lennon before he shuffled off this mortal coil.
Fast forward a few decades and enter stage left: artificial intelligence. The real game changer was Peter Jackson's "Get Back" documentary. He trained some computers to pick out the Beatles' voices from background noise, leading to cleaner audio. Using the same tech, Sir Paul can now sing a duet with Lennon from beyond the grave. Spooky, huh?
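Jackson's separation system (nicknamed MAL) isn't public, but the same basic trick, splitting a mixed recording into vocal and non-vocal stems, is available in open-source tools. Here's a tiny sketch using the Demucs Python API as a stand-in; old_demo.mp3 is a placeholder filename, not the actual Lennon tape.

```python
# Open-source stand-in for the kind of source separation used on the old demo.
from demucs import separate

# Split a mixed recording into vocals and everything else.
# Outputs typically land under ./separated/<model_name>/old_demo/ as vocals and no_vocals files.
separate.main(["--two-stems", "vocals", "old_demo.mp3"])
```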