Windows 11 Update with Copilot
Microsoft's latest Windows 11 update, featuring the revolutionary Copilot AI assistant
Today:
Microsoft releases big Windows 11 update with Copilot AI assistant included
Microsoft just dropped a major update for Windows 11, and it's packed with cool new features. At the heart of it all is Copilot, a chatty AI assistant that’s ready to help with pretty much anything. It’s like having a buddy who's really good at computers right there with you.
Copilot can whip up emails, answer questions, and even do stuff in Windows for you, all based on just a couple words from you. It’s a bit like the Bing chatbot Microsoft rolled out earlier this year, but now it’s all integrated into the operating system.
For the workaholics, there’s also Microsoft 365 Copilot. It’s an AI add-on designed to make corporate life a bit easier. As for the regular Windows Copilot, it’s got a bunch of handy tricks up its sleeve, like helping you take screenshots or switch to dark mode.
This update is not just about Copilot, though. There's a ton of other cool stuff, like a virtual video editor, a smarter way to read screens in different languages, and some features to help save energy. You can even back up your apps and preferences, and soon, you'll be able to paint with words in the Paint app. Plus, there are a bunch of tweaks to make the taskbar more customizable.
Group representing the New York Times and 2,200 others just dropped a scathing 77-page white paper on ChatGPT and LLMs being an illegal ripoff
The big dogs of the news world, like the New York Times and the Wall Street Journal, are part of a huge alliance that's got beef with generative AI tools like ChatGPT. The tech has been fueling a stock market rally and endless hype since its release in November 2022, but with publishers it's on shaky legal ground.
Now, they've dropped a heavy 77-page report, calling out AI creators like OpenAI and Google for allegedly lifting content from news articles to train their chatbots. They’re saying it’s pretty much like stealing since the responses from the chatbots can end up being a dead ringer for the original copyrighted stuff.
These language models, which spit out text that sounds like a human wrote it, are causing a stir because no one really knows what's being fed into them to train them up. The news alliance did some digging and thinks that news content is being used way more than other stuff on the web. They're calling foul, saying this kind of copying goes well beyond what "fair use" allows.
Teachers in India help Microsoft Research design AI tool for creating great classroom content
Microsoft Research is cooking up something special in India, a tool called Shiksha copilot, designed to give teachers a leg up in crafting classroom content. It's like a digital sidekick for educators, helping them whip up personalized lesson plans and classroom activities in a jiffy. They've even got teachers in the mix, helping to shape the tool to make sure it hits the mark.
The project's a team effort, with folks from different parts of Microsoft putting their heads together. They're testing it out in some schools around Bengaluru, India, working with a local group focused on buffing up public education.
So, what's it all about? Well, a lesson plan is pretty much a game plan for teachers. It lays out what students need to learn and how to get there. And that's where Shiksha copilot comes in—it helps teachers map out that journey, fast.
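To make the lesson-plan idea concrete, here's a minimal sketch of how a tool like this might turn a teacher's inputs into a prompt for a generative model. The fields, wording, and example values are invented for illustration and aren't taken from Shiksha copilot itself.

```python
# Hypothetical sketch: turning a teacher's inputs into a lesson-plan prompt
# for a generative model. None of these fields come from Shiksha copilot itself.
from dataclasses import dataclass

@dataclass
class LessonRequest:
    subject: str          # e.g. "Science"
    grade: int            # e.g. 6
    topic: str            # e.g. "The water cycle"
    duration_minutes: int
    language: str         # e.g. "Kannada" or "English"

def build_prompt(req: LessonRequest) -> str:
    """Assemble a plain-text prompt a teacher-facing tool could send to an LLM."""
    return (
        f"Create a {req.duration_minutes}-minute lesson plan in {req.language} "
        f"for grade {req.grade} {req.subject} on the topic '{req.topic}'. "
        "Include learning objectives, a short warm-up activity, step-by-step "
        "teaching points, one hands-on classroom activity, and three quiz questions."
    )

if __name__ == "__main__":
    request = LessonRequest("Science", 6, "The water cycle", 40, "English")
    print(build_prompt(request))  # This prompt would then go to whatever model the tool uses.
```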
Sam Altman warns AI could kill us all. But he still wants the world to use it
Sam Altman, the head honcho of OpenAI, has raised the alarm bells about AI, saying it could spell doom for us all. Still, he's all in on using it to change the game in pretty much every field you can think of. He went to Washington, DC, hat in hand, asking lawmakers to whip up some smart rules to keep AI in check.
Since launching ChatGPT, Altman's been the face of AI, showing off what it can do, from writing emails to helping folks build websites, and even acing law and business school exams. But, it's not all sunshine and rainbows. Folks are worried about cheating in schools, robots taking our jobs, and AI just getting too darn powerful.
Altman knows the stakes are high. He's told Congress we need to keep an eye on AI, especially when it comes to messing with elections and spreading fake news. He's even signed a letter with other big brains in AI, saying we need to treat AI risks just as seriously as we do pandemics and nuclear war. But while he's preaching caution, his company is hitting the gas, even talking about cooking up an AI gadget that could take the smartphone's throne.
As for the guy himself, Altman's got his bases covered, prepping for all sorts of disasters, including an AI uprising. But some folks in the AI world think he might be focusing too much on the doomsday scenarios and not enough on the real problems AI could cause right now.
DeepMind’s latest AlphaFold model is more useful for drug discovery
DeepMind, a Google research lab, made this thing called AlphaFold a while back that predicts protein shapes in our bodies. They spiced it up and dropped AlphaFold 2 in 2020. Now, there's a newer, cooler version that can predict the structure of nearly all the molecules catalogued in the Protein Data Bank, the big public database of molecular structures. Isomorphic Labs, a sister company of DeepMind, is using this new AlphaFold for drug design, helping it figure out the molecular structures needed to fight diseases.
The new AlphaFold doesn't just guess protein shapes. It can also predict the structure of ligands, molecules that bind to proteins and change how cells communicate, and nucleic acids, the molecules like DNA and RNA that carry our genetic information. It can even predict post-translational modifications, the chemical tweaks proteins pick up after they're made. This is big news for drug discovery because it helps scientists find and design new potential drugs.
Right now, drug researchers work out how proteins and ligands fit together using computer simulations called "docking methods," which need a known protein structure to start from. With the latest AlphaFold, they can skip some of those steps, which is a game-changer. DeepMind's own tests show the new model is better at predicting protein-ligand structures, especially the kinds that matter for making drugs.
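For a rough feel of what that screening step looks like in code, here's a toy sketch that ranks candidate ligands for a target by a predicted binding score. The scoring function is a placeholder for whatever docking tool or structure model a lab actually uses, and the molecule names and numbers are made up.

```python
# Toy virtual-screening sketch: rank candidate ligands for a target protein
# by a predicted binding score. `predict_binding_score` is a placeholder for
# a real docking program or structure model; the values below are invented.
from typing import Dict, List, Tuple

def predict_binding_score(target: str, ligand: str,
                          scores: Dict[Tuple[str, str], float]) -> float:
    """Stand-in for a docking/structure model; here it just looks up a precomputed value."""
    return scores.get((target, ligand), 0.0)

def rank_ligands(target: str, ligands: List[str],
                 scores: Dict[Tuple[str, str], float]) -> List[Tuple[str, float]]:
    """Return ligands sorted from strongest to weakest predicted binding."""
    scored = [(lig, predict_binding_score(target, lig, scores)) for lig in ligands]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

if __name__ == "__main__":
    fake_scores = {  # invented numbers purely for illustration
        ("ProteinX", "ligand_A"): 7.2,
        ("ProteinX", "ligand_B"): 5.9,
        ("ProteinX", "ligand_C"): 8.4,
    }
    for ligand, score in rank_ligands("ProteinX", ["ligand_A", "ligand_B", "ligand_C"], fake_scores):
        print(f"{ligand}: predicted score {score}")
```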
AI better than biopsy at assessing some cancers, study finds
AI is beating out the old needle poke when it comes to checking how aggressive some cancers are. A study found that AI was about twice as good as a biopsy in rating the danger level of certain rare cancers called sarcomas. Catching these bad boys early can mean the difference between life and death. Using this AI method, doctors can better figure out who needs quick treatment and who can chill a bit.
This new tech was tested on hard-to-diagnose sarcomas in the abdomen, and it correctly judged how aggressive the tumors were 82% of the time, compared to biopsies, which got it right only 44% of the time. The cool part? The hope is to roll this AI out globally, making it easier for hospitals everywhere to spot and treat this cancer. The cherry on top? It might work for other cancers too. Bigwigs in the cancer world are hyped about the results and how the approach could help folks get treated faster.
News Group Says AI Chatbots Heavily Rely On News Content
The News Media Alliance, a big group standing up for newspapers, claims A.I. chatbots are leaning too much on news articles instead of other online stuff. This bunch, which includes heavy hitters like The New York Times, points out that these high-tech tools are not just using but actually overusing news content, which is a no-no because of copyright laws.
They've been beating this drum for a year, saying that ChatGPT and its buddies are nibbling too much on copyrighted material. The Alliance even did some homework, comparing the stuff used to train the big brain behind chatbots with a pile of generic web content. Turns out, the news stuff is getting used way more, like 5 to 100 times more. And sometimes, the chatbots are just spitting out chunks of articles word for word.
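To show how an over-representation check like that can be set up, here's a minimal sketch that compares how often news domains appear in a training-data sample versus a generic web sample. The URL lists and domains are invented placeholders; the Alliance's real analysis is far more extensive.

```python
# Minimal sketch of an over-representation check: how much more often do news
# domains appear in a training-data sample than in a generic web sample?
# The URL lists below are invented; a real analysis uses large crawled corpora.
from urllib.parse import urlparse

NEWS_DOMAINS = {"nytimes.com", "wsj.com", "washingtonpost.com"}

def news_share(urls):
    """Fraction of URLs whose domain is on the news list."""
    hits = sum(1 for u in urls
               if urlparse(u).netloc.removeprefix("www.") in NEWS_DOMAINS)
    return hits / len(urls)

training_sample = [
    "https://www.nytimes.com/2023/01/01/article.html",
    "https://www.wsj.com/story",
    "https://example.com/blog",
    "https://www.nytimes.com/another-article",
]
generic_web_sample = [
    "https://example.com/page1",
    "https://shop.example.org/item",
    "https://www.wsj.com/story",
    "https://random-blog.net/post",
]

ratio = news_share(training_sample) / news_share(generic_web_sample)
print(f"News content is {ratio:.1f}x more common in the training sample")
```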
On top of that, there's worry in the news world about losing web traffic and jobs to A.I. If everyone starts chatting with bots instead of clicking on news sites, that's bad news for the news folks. And of course, they're scared of being replaced by robots. So, it's a bit of a hot mess, and the News Media Alliance is looking into ways to get their content licensed collectively to tackle it.
Scientists train AI to illuminate drugs' impact on largest family of cellular targets
Scientists at The Herbert Wertheim UF Scripps Institute are training AI to better understand how drugs affect cells. Why? Because what works wonders for Joe might flop for Jane. The team used some serious tech to watch how over 100 drug targets, including their genetic variations, do their thing in our cells. After collecting this data for over ten years, they taught an AI system to predict how these targets would react to drugs. The good news? The AI was right over 80% of the time!
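As a loose illustration of what "teaching an AI system to predict how targets react to drugs" can look like in code, here's a tiny supervised-learning sketch using scikit-learn. The features and labels are randomly generated stand-ins, not the institute's actual assay data.

```python
# Hedged sketch: train a simple classifier to predict whether a receptor variant
# responds to a drug, from numeric features. The data here is randomly generated;
# the real study drew on a decade of experimental measurements.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))                   # invented receptor/variant/drug features
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)   # synthetic "responds / doesn't respond" label

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(f"Held-out accuracy: {accuracy_score(y_test, model.predict(X_test)):.2f}")
```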
A big chunk of the drugs we use connect with cell-surface receptors called GPCRs. Think of these like docking stations on cells where drugs plug in to work their magic. We've got about 800 types of these receptors in us. Some work as expected, but others had the scientists raising their eyebrows. To get the full picture, they used a tech that's kinda like tagging cells with glow-in-the-dark stickers, and then watched how the cells' light show changed when drugs came into play.
The goal? To create meds that are more dialed in for individuals. Instead of a one-size-fits-all pill, imagine a drug made just for you. This research could help make that dream a reality. The team's aiming to understand better how our genetic differences play into how we react to meds. Future goal? Custom-made prescriptions for everyone. Now, that's what I call next-level healthcare!
Top 12 Medical AI Companies
The article gives us a rundown of the top 12 medical AI companies, highlighting how AI is making big waves in healthcare. MedAware and DXplain are examples of systems that help doctors make better decisions and diagnose diseases more accurately.
We’re seeing major tech players like Google and Microsoft diving into medical AI too. Google’s working on "Med-PaLM 2", an AI doc in a box, and Microsoft's teamed up with Epic to draft patient messages using AI.
The medical imaging field is booming with AI, expected to jump from $1.9 billion in 2022 to a whopping $29.8 billion by 2032. But, it's not all smooth sailing. Companies are grappling with using synthetic data, a stand-in for real patient data, and the privacy issues that come with it.
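For context on how steep that projection is, the implied compound annual growth rate over those ten years comes out to roughly 32%, which you can sanity-check in a couple of lines:

```python
# Back-of-the-envelope check on the market projection quoted above.
start, end, years = 1.9, 29.8, 10           # $ billions, 2022 -> 2032
cagr = (end / start) ** (1 / years) - 1
print(f"Implied compound annual growth rate: {cagr:.1%}")  # about 31.7%
```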
What'd you think of today's edition? What are you most interested in learning about AI? What stories or resources would be most interesting for you to hear about?