ORCA 2 | Microsoft's BREAKTHROUGH in Open Source LLMs - AI training the next gen AI 🔥
Orca 2 is surpassing its predecessors in AI reasoning and learning, signaling a major shift in AI development toward quality data and progressive learning strategies.
Today:
ORCA 2 | Microsoft's BREAKTHROUGH in Open Source LLMs - AI training the next gen AI 🔥
Microsoft just dropped a bombshell with their new Orca 2 research. Orca 2 focuses on quality over quantity. It's not about how big the AI model is, but the kind of data it chews on. Orca 2 suggests that AI-generated data could sometimes be even better than what humans create. That's huge because it means these models could keep getting smarter on their own, without needing endless human input.
Microsoft's Orca 2 is a big deal in the AI world. It's showing that we don't just need giant, expensive AI models to make progress. Smaller, smarter, and more accessible models can do some serious heavy lifting too. This could be a game-changer in spreading AI power to more people, not just those with deep pockets.
Share sale set to test financial impact of OpenAI’s leadership turmoil
OpenAI’s about to sell some shares, and everyone's watching to see if their recent leadership drama is gonna cost them big. Their CEO, Sam Altman, got the boot last week, but they're still aiming to hit a whopping $86 billion value.
Despite the shakeup, the big-money folks like Thrive Capital and Sequoia Capital are still betting on OpenAI to triple its value since Microsoft pumped in $10 billion earlier this year. But there's a catch. Vinod Khosla, one of the early backers, says the company's worth depends a lot on what investors think, especially after this week's mess.
Some experts reckon OpenAI's value took a hit with rivals like Google and Amazon breathing down their necks. Anat Alon-Beck, a hotshot professor, thinks OpenAI needs to straighten up and fly right if they wanna keep climbing in value.
Neuralink, Elon Musk’s brain implant startup, quietly raises an additional $43M
Elon Musk's brain chip company, Neuralink, just bagged an extra $43 million. They're making these high-tech implants that can read your brain activity. The cash boost bumps their total to $323 million, thanks to 32 investors, including big names like Peter Thiel.
Neuralink's been kinda hush-hush about what they're worth, but word on the street back in June was about $5 billion. They started in 2016 and are working on a gadget that sews super thin threads into your brain. These threads hook up to a special chip that listens in on your brain cells.
Now, this idea of brain implants isn't new, but Neuralink's trying to go wireless and pack more electrodes into the mix. They got the green light from the FDA for human testing, which they started recruiting for not too long ago.
Artificial Intelligence Comes To Cybersecurity
AI's been a big deal in tech since OpenAI rolled out ChatGPT a year back. But recently, some cyber troublemakers hit OpenAI with a cyber-attack right after they updated ChatGPT. Richard Stiennon, a cybersecurity whiz, says these hackers are now using AI to break into accounts, even those with fancy fingerprint or face locks.
Richard Stiennon spills the beans on what's up with ChatGPT and its new competitors on this show, MITech TV, hosted by Mike Brennan and Matt Roush.
India seeks to regulate deepfakes amid ethical concerns
India's getting serious about handling deepfakes – those super-realistic fake videos. The big cheese of India's IT, Ashwini Vaishnaw, says they've been chatting with major social media folks, tech industry leaders, and smart academic types about how these deepfakes are bad news. They all agree it's time to lay down some rules to stop these fakes from spreading like wildfire.
Vaishnaw's team is hustling to come up with a plan in just 10 days. They're even thinking about hitting rule-breakers with fines and holding the creators of these deepfakes accountable. There's a lot of concern because these fake videos can really mess with people's heads and spread lies super fast. Just last week, one of these videos tried to trick folks into voting for the wrong party!
The new rules aren't just about slapping wrists; they're also about making it easier for people to report these shady videos and pushing social media companies to act fast before any real damage is done. The IT ministry is on high alert, especially after India's Prime Minister, Narendra Modi, raised the red flag about these deepfakes.
DeepMind Defines Artificial General Intelligence and Ranks Today’s Leading Chatbots
DeepMind is shaking things up by trying to pin down what exactly Artificial General Intelligence (AGI) means. They're pushing for a fresh take on AGI, not as some far-off dream but as something with different levels, kind of like steps on a ladder.
They've rolled out a new rating system, called "Levels of AGI," that sorts AI based on how well they do stuff and how many different things they can handle. It's like a leaderboard, from "emerging" to "superhuman." They even say some AI systems, like DeepMind's own AlphaFold, are already at the superhuman level. And get this, they think top chatbots like ChatGPT and Google's Bard might just be the new kids on the AGI block.
Some experts say it's a good start to get our heads around AGI, but there's a lot of back-and-forth about how to rank these smartypants programs. DeepMind's hoping this will get people talking more about what AGI really means and how we can get there.
US, Britain, other countries ink agreement to make AI 'secure by design'
The U.S., Britain, and a bunch of other countries just agreed to make sure AI tech is safe from the get-go. This means when companies make AI, they gotta build it to be safe for everyone, keeping it out of the wrong hands. The deal's got about 18 countries on board, including big names like Germany and Australia, and they're all saying AI needs to be built with security in mind.
The head of the U.S. Cybersecurity and Infrastructure Security Agency (CISA), Jen Easterly, is pretty stoked about it. She's saying it's a big deal that all these countries agree security can't take a backseat to cool new features or cutting costs.
Europe's already ahead of the game with AI laws, and the Biden administration's been trying to get the U.S. Congress to step up too. They even dropped an executive order in October to make sure AI doesn't hurt consumers, workers, or minority groups, while also keeping the country safe.
AI detects methane plumes from space, could be powerful tool in combating climate change
Researchers from Oxford, working with Trillium Technologies, whipped up this AI tool that spots methane leaks from space. It's a game changer in fighting climate change. Methane's a big deal because it traps heat like nobody's business—way more than CO2—but it doesn't stick around as long. If we can cut down methane fast, we can cool things down quicker.
The old way of finding these methane leaks was a real headache. It was like trying to spot something invisible and getting lost in a sea of noisy data. But this new AI tool uses hyperspectral satellites, which are like super-detailed cameras in space. They can pick up the specific vibes of methane and cut through the noise.
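A classical way to "cut through the noise" in hyperspectral imagery is a matched filter: score each pixel's spectrum against the gas's known absorption signature, weighted by the background statistics. Here's a rough numpy sketch of that idea using synthetic stand-in data — the study's actual detector is a trained neural network, and the "signature" below is made up for illustration:

```python
import numpy as np

def matched_filter_scores(pixels, target_signature):
    """Score each pixel spectrum against a target signature.
    pixels: (n_pixels, n_bands); target_signature: (n_bands,).
    Higher scores mean the spectrum looks more like the target
    once the background mean and covariance are accounted for."""
    mu = pixels.mean(axis=0)
    cov = np.cov(pixels, rowvar=False) + 1e-6 * np.eye(pixels.shape[1])
    cov_inv = np.linalg.inv(cov)
    centered = pixels - mu
    s = target_signature
    return centered @ cov_inv @ s / np.sqrt(s @ cov_inv @ s)

# Synthetic demo: 500 noisy background pixels, plus one pixel
# that carries an added copy of the (made-up) target signature.
rng = np.random.default_rng(0)
bg = rng.normal(size=(500, 8))          # fake 8-band background spectra
sig = np.linspace(1.0, 0.2, 8)          # made-up "methane" signature
bg[42] += 4.0 * sig                     # inject a plume pixel
scores = matched_filter_scores(bg, sig)
print(int(np.argmax(scores)))           # the injected pixel stands out
```

The filter's trick is the inverse-covariance weighting: it down-weights spectral directions where the background already varies a lot, which is exactly the "sea of noisy data" problem described above.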
They trained this AI with a ton of satellite pictures over the Four Corners in the U.S. and got it really good at spotting methane. It's over 81% accurate and way better than the old methods. Plus, it's not fooled as easily, with way fewer false alarms.
They're thinking of loading this AI right onto satellites. This way, satellites can work together, like a team, focusing on places where they might sniff out methane. The idea is to send only the important alerts back to Earth, like a text message with GPS coordinates of a methane party.
AI Sharpens Rainfall Estimates from Satellites
There's this new study saying that AI can really up the game in figuring out how much rain or snow is hitting the ground by looking at it from space. Usually, satellites are great at peeping storms from way up there, but they kinda struggle to tell exactly how much precipitation is falling.
Haonan Chen, who's into remote sensing at Colorado State University, says it's tough to turn what satellites see into accurate rain or snow measurements on the ground. They're using some fancy deep learning tricks, which means they teach a computer to think kinda like a brain, tweaking it until it gets real good at guessing how much rain is falling.
They tested this out with data from some fancy satellites called GOES-R, focusing on the southwestern US over a couple of summers. They also checked if adding lightning data made their rain guesses better. Turns out, their AI was way better at this than the old methods, especially when it came to heavy rain.
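The "tweak it until it gets real good" loop Chen describes is, at its core, gradient descent on a prediction error. Here's a toy sketch of that idea with entirely synthetic stand-ins — the real model is a deep network trained on GOES-R imagery, not this linear toy:

```python
import numpy as np

# Toy version of the training idea: adjust weights by gradient
# descent until predicted rain rates match "gauge" measurements.
# All data here is synthetic; the features, weights, and noise
# level are illustrative assumptions.
rng = np.random.default_rng(1)
X = rng.uniform(size=(200, 3))              # fake satellite features per scene
true_w = np.array([2.0, -1.0, 0.5])         # hidden "true" relationship
y = X @ true_w + rng.normal(0, 0.05, 200)   # fake ground-truth rain rates

w = np.zeros(3)                             # start knowing nothing
lr = 0.1
for _ in range(2000):
    grad = 2 * X.T @ (X @ w - y) / len(y)   # gradient of mean squared error
    w -= lr * grad                          # the "tweak" step

print(np.round(w, 2))                       # close to the true weights
```

A deep network replaces the linear map `X @ w` with many stacked nonlinear layers, but the loop — predict, compare against ground truth, nudge the weights — is the same.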
What's cool is this could go global. By mixing data from satellites around the world, they might map rainfall everywhere, which is a big deal for places that don't have their own weather radars.
Chinese EV maker Nio sees AI, robots replacing 30% of workforce by 2027 to improve efficiency, cut costs
Chinese electric car maker Nio is planning to trim its team by about 30% by 2027, swapping out folks with robots. It already cut 10% of its crew to keep up in the race. Nio's Ji Huaqiang says they're doubling down on AI to make smarter decisions and to halve the need for managers within a couple of years.
They're aiming to cut production-line staffing by 30% over the next four years. Nio had about 7,000 employees at the end of last year, and they're dreaming big about having their factories run entirely on robots and AI, though they're not sure when that'll happen.
They've got two big factories, one can make 150,000 cars a year, and the other can pump out 300,000. They're using 756 robots in one of their plants to fully automate a process. The plan is to make this factory the smartest one around.
Partisan Media Bias Shapes AI Sentiment
Some smart folks at Virginia Tech did a deep dive into how different news outlets talk about AI (artificial intelligence). They checked out over 7,500 articles from both sides of the political spectrum. What they found was that the liberal media seems a bit more down on AI compared to conservative media. This is mostly because they're worried about AI making existing problems in society, like race, gender, and money issues, even worse.
The researchers used a fancy method to figure out the tone of each article. They basically counted the good and bad words and then did some math to give each article a score on how it feels about AI. They found that after George Floyd's death, the media got even more concerned about AI's role in social biases.
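The word-counting method described here is basically lexicon-based sentiment scoring. A minimal sketch of the idea — the word lists and the normalization formula below are illustrative guesses, not the study's actual lexicon or scoring math:

```python
# Toy lexicon-based sentiment scorer: count positive and negative
# words, then normalize the difference into a [-1, 1] score.
# These word sets are made up for illustration.
POSITIVE = {"breakthrough", "progress", "benefit", "improve", "promising"}
NEGATIVE = {"bias", "risk", "threat", "harm", "dangerous"}

def sentiment_score(text: str) -> float:
    words = [w.strip(".,!?\"'()").lower() for w in text.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

print(sentiment_score("AI brings progress but also bias and risk"))  # negative overall
```

Real studies typically use much larger validated lexicons and extra steps (negation handling, article-length normalization), but the core "count and score" mechanic is the same.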
The way the media from different political sides talks about AI can influence how the public and even lawmakers feel about it. This could mean big things for how AI is handled in the future. And just to be clear, the researchers aren't saying who's right or wrong here, just that these differences in opinion exist and are worth understanding.
How AI could power the climate breakthrough the world needs
Farmers in central India, struggling with extreme weather, are getting help from a Silicon Valley startup, ClimateAi. This AI tool predicts how weather changes will affect crop growth, using local climate, water, and soil data. For example, it showed Indian tomato farmers a possible 30% drop in production due to heat and drought, nudging them to switch seeds and planting schedules.
AI, like the stuff in ChatGPT, is not just for chatting; it's also tackling climate change. It's great at crunching a ton of data and making predictions, which can help in all sorts of ways, from cutting pollution to making weather forecasts better. For instance, AI is speeding up research on stuff like new materials for solar panels.
But AI isn't perfect. It needs a lot of power, and that can strain the environment. Experts say we've got to find a balance. Some big companies like Amazon are working on making their data centers less resource-hungry.