AI News is Getting WEIRD Human Brain Matter in Chips
An Exploration of DishBrain - A Biocomputer Chip that Blends Neuronal Structures with Electronics for Next-Level AI
Today:
AI News is Getting WEIRD Human Brain Matter in Chips. OpenAI tutorial. Amazon unleashed its AI.
This video talks about the exciting advancements in AI technology, like a computer chip made from human brain material developed by scientists at Monash University. This "biocomputer" chip learned to play Pong and got a big grant for further development. It works by connecting human and mouse brain cells, using a reward system to reinforce correct predictions.
It also explains how neurons work and how their connections strengthen or weaken based on activity. This concept, called "synaptic plasticity," is the biological counterpart of how artificial neural networks adjust their connection weights during training.
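To make the plasticity idea concrete, here's a minimal toy sketch in Python of a reward-modulated, Hebbian-style weight update: connections that contribute to a rewarded response get strengthened, and ones that contribute to a punished response get weakened. This illustrates the general principle only, not the DishBrain system; the task, learning rate, and update rule are all assumptions.

```python
import numpy as np

# Toy reward-modulated plasticity sketch (illustrative only, not DishBrain).
# A single "neuron" learns a simple input-output mapping from reward feedback.
rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.1, size=2)  # synaptic strengths
learning_rate = 0.05                    # assumed value, purely illustrative

for step in range(2000):
    inputs = rng.choice([-1.0, 1.0], size=2)        # toy "sensory" signal
    target = 1 if inputs.sum() > 0 else -1          # correct response for this toy task
    output = 1 if inputs @ weights > 0 else -1      # the neuron fires or stays silent
    reward = 1.0 if output == target else -1.0      # feedback signal
    # Hebbian term (pre * post) gated by reward: co-activity is reinforced
    # when the outcome was rewarded and weakened when it was punished.
    weights += learning_rate * reward * inputs * output
```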
The goal is to blend biological computing with AI, potentially outperforming current silicon-based hardware. Applications range from robotics to drug discovery, and some even speculate about training the tech to play Doom.
DeepMind Builds A Precise Mathematical Foundation of Continual Reinforcement Learning
DeepMind's latest research is shaking up the world of artificial intelligence (AI) by redefining how machines learn. Normally, an AI learns a task and calls it a day. But now, DeepMind is pushing the concept of Continual Reinforcement Learning (CRL), where agents keep learning instead of stopping once they've got the job done.
The DeepMind team has laid down some fancy math to explain this concept, setting out the conditions that define this "never-stop-learning" state of an AI. One of the key ideas is that a continual agent will keep searching for new behaviors, either forever or until it simply can't learn anymore.
This is big news, because it changes the way we think about AI. Instead of creating a machine that's all set once it solves a problem, DeepMind is suggesting we build AIs that keep tweaking their behaviors based on what they experience - they never stop learning.
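To get a feel for the distinction, here is a hypothetical Python sketch, invented purely for illustration (it is not DeepMind's formalism or code): a conventional agent stops updating once it clears a performance bar, while a continual agent has no stopping condition at all and keeps revising its behavior for as long as experience keeps arriving.

```python
import random

# Hypothetical agent/environment, invented purely to contrast the two views.
class Agent:
    def __init__(self):
        self.policy = 0.5

    def act(self):
        return self.policy

    def update(self, reward, lr=0.01):
        # Nudge the policy toward whatever is currently paying off.
        self.policy += lr * (reward - 0.5)

def toy_reward(action):
    # The environment drifts over time, so no fixed policy stays optimal forever.
    return (action + random.random()) % 1.0

def converge_and_stop(agent, target=0.9, max_steps=10_000):
    """Classic framing: learn until 'good enough', then freeze the policy."""
    for _ in range(max_steps):
        reward = toy_reward(agent.act())
        if reward >= target:
            return agent            # learning ends here
        agent.update(reward)
    return agent

def continual_learning(agent, steps=10_000):
    """Continual RL framing: there is no terminal 'solved' state; the agent
    never treats its current behavior as final."""
    for _ in range(steps):          # in the idealised setting this loop never ends
        agent.update(toy_reward(agent.act()))
    return agent
```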
This research paper is available on arXiv for those who fancy a deep dive into the details.
GPT-4 passes first Harvard semester in humanities and social sciences experiment
AI just bagged a semester's worth of good grades at Harvard! Maya Bodnick, a student there, decided to check if GPT-4, an AI model, could pass first-year humanities and social science classes. She gave GPT-4 some essay topics, like economic concepts and Latin American politics, and let it do its thing. The essays were then handed over to professors for grading, without revealing that GPT-4 had written them all.
The AI managed to score an impressive GPA of 3.57 with grades like A, A-, and B. Bodnick did have to stitch together multiple answers to meet the word limit, since GPT-4 can only generate about 750 words at a go. She also asked the graders to overlook the missing citations, which GPT-4 can't reliably provide.
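For anyone curious how the stitching could work in practice, here's a hypothetical sketch using the 2023-era OpenAI Python library. This is not Bodnick's actual script; the prompts, section count, and continuation strategy are assumptions.

```python
import openai  # 2023-era (0.x) OpenAI Python library

# Hypothetical sketch of assembling an essay from several ~750-word completions.
# Not the workflow used in the experiment; prompts and section count are invented.
def draft_essay(essay_prompt, sections=3, model="gpt-4"):
    parts = []
    for _ in range(sections):
        tail = " ".join(" ".join(parts).split()[-300:])  # last ~300 words as context
        messages = [
            {"role": "system", "content": "You are writing a first-year college essay."},
            {"role": "user", "content": f"{essay_prompt}\n\nContinue the essay from here:\n{tail}"},
        ]
        response = openai.ChatCompletion.create(model=model, messages=messages)
        parts.append(response["choices"][0]["message"]["content"])
    return "\n\n".join(parts)
```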
Feedback from professors was generally positive, praising the AI's structured approach and level of detail. There was some criticism of its flowery language and of essays that glossed over certain aspects of the prompts.
Stack Overflow jumps into the generative AI world with OverflowAI
Stack Overflow has announced its venture into generative AI with the launch of OverflowAI, a series of initiatives aimed at improving search, aiding knowledge discovery, and making the platform more accessible for users of all skill levels. The new OverflowAI offerings follow the company's recent developer survey that found most developers want to use AI tools, but only 40% trust them.
OverflowAI, which includes updated AI search on both the public and enterprise platforms, aims to make it easier for users to ask conversational questions and receive generative responses drawn from the existing 58 million questions and answers on Stack Overflow.
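Stack Overflow hasn't published the internals, but "generative responses drawn from existing questions and answers" is the classic retrieval-augmented generation pattern. Here's a generic, simplified Python sketch of that pattern; the scoring and prompt format are illustrative assumptions, not OverflowAI's actual architecture.

```python
from dataclasses import dataclass

# Generic retrieval-augmented generation sketch -- NOT OverflowAI's actual design.
@dataclass
class QA:
    question: str
    answer: str

def retrieve(query, corpus, top_k=3):
    """Rank stored Q&As by naive keyword overlap and keep the best matches."""
    q_words = set(query.lower().split())
    scored = sorted(corpus, key=lambda qa: -len(q_words & set(qa.question.lower().split())))
    return scored[:top_k]

def build_prompt(query, matches):
    """Pack the retrieved answers into a prompt so the language model's reply
    is grounded in existing, human-written content rather than invented."""
    context = "\n\n".join(f"Q: {m.question}\nA: {m.answer}" for m in matches)
    return (
        "Answer the question using only the context below.\n\n"
        f"{context}\n\nQuestion: {query}"
    )
```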
In addition, OverflowAI includes a Visual Studio Code extension and a Slack integration for Stack Overflow for Teams. These tools aim to further integrate Stack Overflow into developers' workflows, allowing for direct querying and generation of code in the development environment.
The OverflowAI capabilities are set to be launched as alpha releases in August 2023.
ImageNet-1K Compressed 20x with Exceptional 60.8% Accuracy by MBZUAI & CMU’s Data Condensation Method
Mohamed bin Zayed University of Artificial Intelligence (MBZUAI) and Carnegie Mellon University have come up with a new way to slim down large-scale, high-res datasets. They've named it SRe^2L, which stands for Squeeze, Recover and Relabel. Imagine this - they've shrunk the original ImageNet-1K dataset, which had 1.2 million samples, by 20 times. Even then, a model trained only on the condensed data nailed 60.8% top-1 accuracy. That's big stuff!
This method's got three steps. First they squeeze the key information from the original data into a trained model, then they recover it by generating synthetic samples from that model, and finally they relabel the synthetic samples so they carry the right training signal. This keeps the important stuff from the original data without needing a whole lot of memory.
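Here's a heavily simplified Python/PyTorch sketch of those three stages on toy tensors. It compresses the paper's method (for instance, it skips the BatchNorm-statistic matching used in the real recovery step), and the model, shapes, and hyperparameters are placeholders rather than the authors' settings.

```python
import torch
import torch.nn as nn

# Simplified sketch of Squeeze / Recover / Relabel on toy data (not the paper's code).

def squeeze(train_x, train_y, num_classes, epochs=5):
    """Squeeze: compress the original dataset into the weights of a trained model."""
    model = nn.Sequential(nn.Flatten(), nn.Linear(train_x[0].numel(), num_classes))
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(train_x), train_y).backward()
        opt.step()
    return model

def recover(model, num_classes, per_class=1, shape=(3, 8, 8), steps=100):
    """Recover: synthesize images by optimizing random noise until the trained
    model confidently assigns each image its target class."""
    targets = torch.arange(num_classes).repeat_interleave(per_class)
    syn = torch.randn(len(targets), *shape, requires_grad=True)
    opt = torch.optim.Adam([syn], lr=0.1)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(model(syn), targets).backward()
        opt.step()
    return syn.detach(), targets

def relabel(model, syn_images):
    """Relabel: replace hard labels with the trained model's soft predictions,
    which carry richer supervision for whoever trains on the condensed set."""
    with torch.no_grad():
        return torch.softmax(model(syn_images), dim=1)
```

A downstream model would then be trained on the recovered images and their soft labels instead of the original 1.2 million samples.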
In their test run, they tried out SRe^2L on Tiny-ImageNet and ImageNet-1K datasets. They scored big, outdoing previous top-notch results by 14.5% and 32.9% respectively. Their method is way more efficient and needs less time and memory, making it a pretty sweet deal.
Text-to-Video is improving rapidly, here are some examples created with Runway Gen-2
Runway Gen-2 is shaking things up in the world of AI video generation. It's a new AI model that can turn text or images into short videos. This puppy was launched in June, and it's already getting a lot of attention.
You can play with it in your browser or on your iOS device, and it gives you 125 credits a month for free, with each credit equal to a second of video. Don't worry about wasting credits on bum results: there's a "Preview" button that shows four frames of what the AI might spit out based on your prompt.
Videos you generate can be up to four seconds long, and they get saved in your personal library. The resolution isn't sky high - it's 896 x 512 pixels, but it's still pretty neat.
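Some quick arithmetic on that free tier: at one credit per second and clips capped at four seconds, the 125 monthly credits work out to roughly 31 full-length clips (about 125 seconds of footage) per month, assuming every generation you keep runs the full four seconds.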
How AI Is Powering the Future of Clean Energy
AI technology is helping us take big leaps towards a cleaner, more sustainable future. Innovations powered by AI are optimizing solar and wind farms, simulating climate and weather, improving power grid reliability and resilience, and advancing carbon capture and fusion breakthroughs. Tech giant NVIDIA and its partners are leading the charge.
While renewable energy sources like sunlight, wind, and water are ramping up, they pose challenges to old-school power grids designed for traditional one-way power flow. This is where AI and accelerated computing come in, helping energy companies and utilities balance power supply and demand in real time, manage distributed energy resources, and reduce consumer costs.
One way AI is being used is to improve maintenance of renewable power-generation sites. DroneDeploy is using AI to optimize solar farm layouts and automatically monitor the health of solar panels. Siemens Gamesa and NVIDIA are working together to apply AI to enhance the efficiency of offshore wind farms. The companies are using NVIDIA's Omniverse and Modulus platforms to speed up high-resolution wake simulation by a whopping 4,000x, from 40 days to just 15 minutes.
More than 70% of companies are experimenting with generative AI
Over half of companies are playing around with generative AI (creating new content or predictions from scratch), but they're not all ready to bust open the piggy bank just yet. According to a recent poll by VentureBeat, about 55% of firms are dabbling in the tech, and just 18% are putting it to work. And when it comes to coughing up more green for it in the coming year, only that same 18% or so plan to spend more. The survey points out a couple of speed bumps: tight budgets and convincing decision-makers to back gen AI.
Even with some big names like Bill Gates waxing poetic about the transformative power of AI, companies are taking it slow and steady. More than a third say they don't have the right talent or resources, while about 18% say they're getting the cold shoulder from the top dogs.
The poll also shed light on what companies are using gen AI for in their first steps. The biggest use (46%) is for natural language processing tasks (think chat and messaging), followed by content creation (32%).
AI-Generated Data Can Poison Future AI Models
This article raises concerns about the increasing use of AI-generated content in training new AI models, as it could introduce errors that build up with each subsequent generation. This phenomenon is being referred to as "model collapse." The concern stems from the observation that a training diet of AI-generated text, even in small quantities, could be "poisonous" to the model being trained.
The analogy provided compares this situation to the contamination of newly-made steel with atmospheric radioactive fallout following nuclear testing. As air with elevated radiation levels entered the steel-making process, it rendered the steel unsuitable for radiation-sensitive applications. Similarly, the use of AI-generated content for training new models could lead to a diminishing quality of AI outputs.
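A toy numerical sketch of the effect (not the paper's experiment): repeatedly fit a simple model to samples drawn from the previous generation's model, and the fitted distribution tends to lose its spread as rare, tail examples drop out. The sample size and number of generations below are arbitrary assumptions.

```python
import numpy as np

# Toy illustration of model collapse: each "generation" is fit only to data
# sampled from the previous generation's model, never to the original data.
rng = np.random.default_rng(0)
real_data = rng.normal(loc=0.0, scale=1.0, size=50)    # the original, human-made data

mu, sigma = real_data.mean(), real_data.std()
print(f"generation 0: mean={mu:+.3f}, std={sigma:.3f}")
for generation in range(1, 21):
    synthetic = rng.normal(mu, sigma, size=50)          # data produced by the current model
    mu, sigma = synthetic.mean(), synthetic.std()       # the next model sees only this
    print(f"generation {generation}: mean={mu:+.3f}, std={sigma:.3f}")
# Over many generations the spread typically shrinks and the mean drifts:
# tail examples are under-sampled each round, so every new fit forgets a bit
# more of the original distribution.
```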