Anthropic's $15B Valuation
Anthropic is reportedly raising a massive $750 million in a round led by Menlo Ventures, a deal that would push its valuation toward an astounding $15 billion.
Today:
Anthropic in Talks to Raise $750 Million, Led by Menlo Ventures
AI startup Anthropic is reportedly in the midst of raising a whopping $750 million in a funding round spearheaded by Menlo Ventures. This hefty investment is projected to ramp up the company's valuation to an eye-popping $15 billion, with potential to push even beyond $18 billion. Founded by former OpenAI researchers in 2020, Anthropic's main gig is crafting cutting-edge generative AI software. Their latest brainchild, Claude 2, is designed to go toe-to-toe with OpenAI's GPT-4, flaunting abilities like crafting marketing copy, solving math problems, and turning conversational prompts into software code.
Anthropic's funding journey has been nothing short of impressive. Starting with a $124 million Series A in May 2021, the company has since attracted a series of hefty investments, including $580 million in Series B in April 2022, $300 million from Google in March, and $450 million in Series C in May 2023. On top of these, the company secured commitments exceeding $1 billion from Sapphire Ventures in June, $100 million from South Korean telecom giant SK Telecom in August, and up to $4 billion from Amazon in September, followed by another $2 billion from Google in October.
Largest Dataset Powering AI Images Removed After Discovery of Child Sexual Abuse Material
LAION, a big name in open AI datasets, had to yank its LAION-5B dataset, used by many AI tools including Stable Diffusion, after a Stanford study found it packed with over 3,000 suspected child sexual abuse images, roughly 1,000 of which were externally verified. This dataset, a whopping five billion image links scraped from the web, has been used to train top AI models. Stanford's researchers used hash-based matching to spot these images without viewing them directly. LAION said they're pulling the dataset temporarily to clean things up.
Turns out, LAION has known about this risk since 2021 but never fully fixed it. Researchers in the U.S. can't legally view suspected abuse images directly, so they rely on hash-matching tools like PhotoDNA to detect and report them. LAION could've done more from the get-go, but they dropped the ball. And now, other big players in AI are also scrambling to make sure their own training data is clean.
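At its core, hash-based matching means comparing a content fingerprint against a list of known-bad fingerprints, never the content itself. The sketch below illustrates that idea with Python's standard `hashlib`; note this is a simplification, since exact hashes like SHA-256 only catch byte-identical files, while tools like PhotoDNA use perceptual hashes that also match re-encoded or resized images. All data here is invented for illustration.

```python
import hashlib

# Hypothetical "known-bad" hash list, standing in for the databases
# maintained by child-safety organizations. (Invented example data.)
known_hashes = {hashlib.sha256(b"example-flagged-content").hexdigest()}

def is_flagged(data: bytes) -> bool:
    """Return True if the content's fingerprint appears in the known-bad set."""
    return hashlib.sha256(data).hexdigest() in known_hashes

print(is_flagged(b"example-flagged-content"))  # exact match is caught
print(is_flagged(b"some benign file"))
```

The key property for researchers is that the check operates entirely on hashes, so flagged material can be detected and reported without anyone having to view it.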
Using AI, MIT researchers identify a new class of antibiotic candidates
MIT's AI experts have hit a home run, finding a new batch of potential antibiotics. They've trained a deep learning AI to spot drug candidates that could take down MRSA, a nasty bug causing over 10,000 deaths yearly in the U.S. Their latest study, published in Nature, shows these new compounds can knock out MRSA in lab dishes and mice, without harming human cells much.
What's super cool is they've cracked open the AI's "brain" to understand how it picks these winners. This peek inside helps them design even better drugs. James Collins and his team at MIT are the brains behind this. They're part of the Antibiotics-AI Project, aiming to discover new antibiotics against seven deadly bacteria types over seven years.
Their method? Train AI on thousands of compounds, letting it learn what makes a good antibiotic. The AI then sifts through millions more, picking out the best candidates. They've also made the AI's decision-making clearer, which is a big step forward.
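The train-then-sift loop described above can be sketched in miniature: fit a scoring model on labeled compounds, then rank a much larger virtual library by predicted activity. This toy uses random binary "fingerprint" vectors and plain logistic regression in NumPy; the data, sizes, and model are all invented stand-ins (the MIT team trained deep networks on real assay results).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training set: each compound is a binary fingerprint vector,
# labeled 1 if it showed antibacterial activity. (Invented data.)
n_train, n_bits = 2000, 64
X = rng.integers(0, 2, size=(n_train, n_bits)).astype(float)
true_w = rng.normal(size=n_bits)        # hidden "rule" generating toy labels
y = (X @ true_w > 0).astype(float)

# Train a simple logistic-regression scorer by gradient descent.
w = np.zeros(n_bits)
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w)))      # predicted activity probability
    w -= 0.1 * X.T @ (p - y) / n_train  # logistic-loss gradient step

# "Sift through" a far larger virtual library and keep the top hits.
library = rng.integers(0, 2, size=(100_000, n_bits)).astype(float)
scores = 1 / (1 + np.exp(-(library @ w)))
top_hits = np.argsort(scores)[::-1][:50]  # 50 highest-scoring compounds
print(len(top_hits))
```

The payoff is the asymmetry: lab assays are run on thousands of compounds, but the trained scorer ranks millions essentially for free, so only the top-ranked candidates go back to the bench.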
By testing these AI-chosen compounds, they've already found two promising ones that put MRSA in its place in mice. These new drugs work by messing with the bacteria's cell membranes, similar to how another MIT-found antibiotic, halicin, works, but on different types of bacteria. The team's now sharing this gold with Phare Bio, a nonprofit, to dig deeper into these discoveries and design more drugs.
Medical Text Written By Artificial Intelligence Outperforms Doctors
Artificial intelligence (AI) might soon be answering your health questions, easing the workload for docs. A study by Dr. Cheng Peng's team at the University of Florida, published in a Nature-portfolio journal, tested their AI, GatorTronGPT, against real doctors.
This AI, built on ChatGPT's framework but trained on heaps of clinical text, matched doctors in making sense and being relevant medically. They checked its readability and relevance with special tests, including a doc's version of the Turing test, where it fooled docs more than half the time into thinking its writing was human.
Japan startup Preferred Networks designs own AI chips to beat bottleneck
The Japanese startup Preferred Networks is making waves by creating its own AI chips to stay ahead in the tech game. Backed by big names like Toyota and Fanuc, they started this journey in 2016 to power their supercomputers.
Their latest chip, designed for AI tasks, boasts lower energy use and better performance, shifting some functions from hardware to software. They plan to apply this tech to things like language models and drug discovery soon, and aim to sell raw computing power by 2027. It's a bold move in a world where companies like Amazon and Microsoft are also designing AI chips in-house.
Singapore's Atomionics taps gravity, AI in hunt for critical minerals
Atomionics, a Singapore-based startup, is revolutionizing mineral hunting using a mix of gravity and AI. They've developed a "virtual drill" called Gravio, which identifies ore bodies more precisely and quickly than traditional methods. This tech is a game-changer for the mining industry, drastically reducing exploration costs and time.
Atomionics has already inked deals with major mining companies, deploying in Australia and the U.S. They aim to halve the number of failed drill samples, focusing on critical minerals like copper, nickel, and zinc. This innovation represents a big leap forward in efficient and cost-effective mineral exploration.
Sacramento State launches National Institute for Artificial Intelligence in Education
Sacramento State's stepping up big time in the AI game, launching the National Institute for Artificial Intelligence in Education. They're one of the first U.S. colleges diving into using AI in learning, but they're not just about the tech – they're big on doing it right and ethically.
They've got Alexander M. "Sasha" Sidorkin, a real brainiac in AI and education, leading the charge as the new AI boss. He's writing a book on using chatbots in colleges and is big on closing the gap in student success with AI. Plus, he's setting up programs to teach students the right way to use AI.
They're hiring seven new computer whizzes focused on AI and quantum computing to create cool tools and figure out how to use this tech to help students learn better. In short, Sac State's making big moves to be at the forefront of AI in education, making sure it's used smartly and for the good of all.
NVIDIA to Reveal New AI Innovations at CES 2024
NVIDIA's gearing up to wow at the CES 2024 in Vegas next month, showcasing their latest AI and tech goodies. They've got a big event on Jan. 8, streaming live for everyone. They're diving into consumer tech, robots, and more. Their agenda's packed with 14 sessions, covering cool stuff like retail AI and smart cars.
Develop Your First AI Agent: Deep Q-Learning
This article guides you step-by-step to build your own AI agent using Deep Q-Learning. It's aimed at folks with some Python know-how, but you don't need to be an AI whiz. The project? A reinforcement learning gym made from scratch. It’s not just theory; you'll get your hands dirty making an environment, an agent, and setting up the learning process.
First, it breaks down reinforcement learning basics – how agents learn from actions to achieve goals, and introduces Deep Q-Learning. Next, it dives into creating your gym. You'll set up the environment, where your AI will live and learn. The agent's brain, a neural network, comes next, teaching it to make smart moves. Then, it's all about training: your agent learns from experiences, improving over time. You'll also fiddle with learning rates and explore how tweaking them impacts learning.
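The heart of the training step described above is the Q-learning update, which Deep Q-Learning approximates with a neural network instead of a table. The sketch below shows that update in its tabular form on a toy five-state corridor (an invented stand-in for the article's gym, not its actual code): the agent moves left or right, earns a reward of 1 at the rightmost state, and learns via epsilon-greedy exploration.

```python
import numpy as np

n_states, n_actions = 5, 2          # actions: 0 = left, 1 = right
alpha, gamma, eps = 0.5, 0.9, 0.3   # learning rate, discount, exploration rate
Q = np.zeros((n_states, n_actions))
rng = np.random.default_rng(0)

for episode in range(500):
    s = 0
    while s != n_states - 1:                      # episode ends at the goal
        # epsilon-greedy: mostly exploit the best-known action, sometimes explore
        if rng.random() < eps:
            a = int(rng.integers(n_actions))
        else:
            a = int(np.argmax(Q[s]))
        s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        r = 1.0 if s2 == n_states - 1 else 0.0
        # the Bellman update that DQN replaces with neural-network gradient steps
        Q[s, a] += alpha * (r + gamma * np.max(Q[s2]) - Q[s, a])
        s = s2

# After training, the greedy policy should be "always move right".
policy = np.argmax(Q, axis=1)
print(policy[:-1])
```

In the deep version, `Q` becomes a network mapping states to action values, the bracketed term becomes a regression target, and tricks like replay buffers and target networks keep that regression stable; the learning signal, though, is this same update.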
The 8 Major AI Moments That Defined 2023
In 2023, AI became a big deal in our lives, thanks to tools like ChatGPT. Here are the standout moments:
GPT-4 Hits ChatGPT: This March upgrade made ChatGPT much smarter, and later in the year it gained web browsing and image generation too.
Pope's Fake Puffer Jacket: A fake photo of the Pope in a huge jacket showed how AI can trick us.
New AI Laws in Europe and China: These aimed to make AI safe and ethical.
Google's AlphaMissense: A big step in using AI to find bad gene mutations causing diseases.
Drama at OpenAI: CEO Sam Altman got fired, then rehired, stirring up talk about how AI companies are run.
Pause AI Development Petition: A big petition, with famous names, made people think more about AI’s future.
ChatGPT in Microsoft Bing: ChatGPT got added to Bing, kicking off a trend of AI in many apps and tools.
The Bletchley Declaration: 28 countries, including the US and China, agreed on rules for future super-powerful AI, though it wasn’t perfect.
These moments show AI’s big impact on how we live, work, and think about tech.
What'd you think of today's edition?
What are you MOST interested in learning about in AI? What stories or resources would be most interesting for you to hear about?
Reply