Nvidia & iPhone Maker Foxconn's Big Play in AI
Nvidia and Foxconn aim to shape the next era of computing with AI-centric factories, against the backdrop of tightening global export controls
Today:
Nvidia and iPhone maker Foxconn tie-up to build ‘AI factories’
Nvidia, the big US tech company, and Foxconn, the folks behind the iPhone, are joining forces to create "AI factories." This news dropped just as the US is putting a tighter leash on AI chip sales to China, which kinda throws a wrench in Nvidia's sales plans over there.
Nvidia boss Jensen Huang says these "AI factories" will be used to train things like self-driving cars, robots, and large language models. Foxconn, for its part, is trying to wear more hats than just being the iPhone builder; it wants to be a big player in the computing platforms that power self-driving vehicles.
ChatGPT Creator Partners With Abu Dhabi’s G42 in Middle East AI Push
OpenAI, the folks behind ChatGPT, is joining forces with Abu Dhabi's top AI company, G42. They're aiming to spread OpenAI's fancy tech across different businesses, from banks to hospitals, in the UAE and nearby areas. G42 is big in the UAE's move into AI. They're also buddying up with other tech giants and pouring big bucks into emerging tech businesses.
OpenAI’s cool tools have caught the eye of investors worldwide. Sam Altman from OpenAI gave props to the UAE for jumping on the AI train early. He said they were talking about AI in Abu Dhabi before it became the in-thing. Financial details? They kept that under wraps.
Universal Music sues Anthropic over AI-generated lyrics
Universal Music's throwing down the gauntlet, suing AI company Anthropic for copyright infringement. The big issue? Anthropic's chatbot, Claude, spits out lyrics almost identical to Universal's artists' songs. Like, ask Claude for "I Will Survive" lyrics and you get a nearly exact copy. Universal's not cool with this, saying just because it's online doesn't mean it's up for grabs. Anthropic hasn't said a peep yet.
Remember when everyone was up in arms about Napster back in the day? This AI thing feels a lot like that. Universal's trying to play nice, teaming up with BandLab to handle copyrights the right way and working with Google on AI songs. They want AI to be cool, but say Anthropic's crossing the line big time. They've even told Spotify to stop giving AI peeps access to their tunes.
Microsoft-affiliated research finds flaws in GPT-4
Microsoft-backed research found some hiccups with OpenAI's big brain GPT-4. Turns out, while GPT-4 is really good at following orders, it might follow them too well. This means some sneaky folks can give it specific prompts, kinda like giving it a nudge in the wrong direction, to get it to say biased or harmful stuff. This isn't great, especially if it ends up leaking private info it shouldn't.
Microsoft uses GPT-4 for its Bing Chat feature, so why would they spill the beans on something they use? Well, before this news even hit the streets, Microsoft made sure it didn't affect their services. They even gave OpenAI a heads up about these issues.
Reality Defender raises $15M to detect text, video and image deepfakes
Reality Defender, a startup focused on spotting fake content created by AI, just bagged $15 million in a big funding round. Big names like DCVC, Comcast, and a few others threw cash in the pot. They plan to use this money to double their team size and beef up their tech to catch these sneaky deepfakes.
Ben Colman, the head honcho of Reality Defender, says these fake videos and stuff can really mess things up. They keep popping up everywhere, and his team's on a mission to spot 'em before they spread like wildfire. Deepfakes are booming. There are three times as many fake videos and eight times as many fake voices online now compared to last year. Making a fake video or voice used to cost a ton and needed a brainiac to do it. But now, with tools like ElevenLabs and Stable Diffusion, any troll can do it for cheap or even free.
Nirvana nabs $57M to make AI inroads into commercial trucking insurance
Nirvana Insurance, a fresh-faced startup, just scored $57 million to spice up the truck insurance game. They're using all sorts of tech, like AI, and tons of trucking data to figure out how risky a truck is. The big deal? They're hoping to help out small trucking businesses that are barely scraping by. Most trucking businesses are tiny, with about 90% of them running fewer than 50 trucks. With gas prices skyrocketing and insurance being a pain (costing a whopping $15k-$20k per truck every year), these little guys are struggling.
Nirvana is all about getting things done faster and better. They're using sensors in trucks to get loads of data. This helps them give quicker insurance quotes that fit just right. And if truckers need to make a claim? Nirvana's got tools to make that easier too. Plus, all this tech can figure out how good a driver is. Drive safe and you might snag a sweet 20% discount.
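To make the pricing idea concrete, here's a minimal sketch of how a usage-based discount like that could be computed. The event weights, score thresholds, and `TelematicsSummary` fields are all made up for illustration; Nirvana hasn't published its actual model.

```python
from dataclasses import dataclass

@dataclass
class TelematicsSummary:
    """Hypothetical per-truck aggregates pulled from in-cab sensors."""
    miles_driven: float
    hard_brakes: int        # sudden-deceleration events
    speeding_events: int    # minutes above the posted limit
    night_miles: float      # miles driven between 10pm and 5am

def driver_score(t: TelematicsSummary) -> float:
    """Score 0-100; higher = safer. Weights are illustrative only."""
    if t.miles_driven == 0:
        return 50.0  # no data, assume average risk
    per_1k = 1000.0 / t.miles_driven
    penalty = (
        4.0 * t.hard_brakes * per_1k
        + 2.0 * t.speeding_events * per_1k
        + 10.0 * (t.night_miles / t.miles_driven)
    )
    return max(0.0, 100.0 - penalty)

def premium_with_discount(base_premium: float, score: float) -> float:
    """Safe drivers (score >= 85) get up to the 20% discount mentioned above."""
    if score >= 85:
        return base_premium * 0.80
    if score >= 70:
        return base_premium * 0.90
    return base_premium

truck = TelematicsSummary(miles_driven=9500, hard_brakes=3, speeding_events=5, night_miles=400)
print(premium_with_discount(17500, driver_score(truck)))  # base ~$17.5k/truck/year
```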
British School Employs AI Robot As Principal Headteacher For Enhanced Decision-Making
Cottesmore, a top-notch UK boarding school, has rolled out a pretty wild idea: they've made an AI robot, named Abigail Bailey, their "principal headteacher." It's not replacing humans but helping the headmaster, Tom Rogerson, make decisions. Think of it like a super-smart helper; it's kinda like asking ChatGPT, that online chatbot, for advice.
Rogerson says this AI buddy, loaded with tons of knowledge, offers quick insights without bugging other folks. Even though running a school can feel lonely, having this tech on standby gives him some peace of mind. And just so you know, attending Cottesmore ain't cheap - it costs a whopping 32,000 pounds a year!
Cybercriminals register .AI domains of trusted brands for malicious activity
A new report says that almost half of the big shot companies on Forbes' Global 2000 list don't actually own their .AI web addresses. Instead, shady folks are grabbing them. Why? Because .AI is hot right now and these cybercrooks want to fool you by using familiar brand names with the .AI ending.
This trickery has skyrocketed by 350% in just a year. And it ain't just the .AI bit. These tricksters are also making fake websites that look a lot like legit brands to pull scams and rip-offs.
And get this: of the .AI web addresses these companies should own, 84% are registered to some other random person, and nearly half are still sitting unregistered, up for grabs. Sectors like banking and IT are getting hit the hardest.
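If you want to check whether your own brand's .AI name is already taken, a raw WHOIS query is enough. A rough sketch follows; it assumes `whois.nic.ai` is the registry's WHOIS host and just scans the reply text, so treat the parsing as a heuristic.

```python
import socket

def whois_ai(domain: str, server: str = "whois.nic.ai", port: int = 43) -> str:
    """Send a plain WHOIS query (RFC 3912) and return the raw response text."""
    with socket.create_connection((server, port), timeout=10) as sock:
        sock.sendall((domain + "\r\n").encode())
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode(errors="replace")

response = whois_ai("examplebrand.ai")
# Very rough heuristic: registries word "not found" differently, so check a few phrases.
taken = not any(p in response.lower() for p in ("no object found", "not found", "no match"))
print("registered by someone" if taken else "still up for grabs")
```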
How 'AI watermarking' system pushed by Microsoft and Adobe will and won't work
Microsoft, Adobe, and some other heavy-hitters are introducing a new symbol to flag AI-generated pictures. This sticker, which looks like the lowercase letters "cr" inside a speech bubble, is a tool for showing whether a picture was made by a computer or a person. The "cr" logo was dreamt up by the C2PA (Coalition for Content Provenance and Authenticity), a group with big players like Intel and Microsoft behind it.
So, let's say you open a photo with this new sticker on your app. Click on the symbol, and boom! You'll see where the pic came from – like if it was made by Adobe's Photoshop or Microsoft's Bing Image tool. The app you use has to know about this symbol, or you won't see it. If someone removes the "cr" from the picture or if you open the photo in an older app, the sticker won't show.
Adobe has another trick up its sleeve: a cloud service that stores the provenance info behind these stickers. So if someone shares a photo with the sticker stripped out, the system can fingerprint the image and check it against Adobe's cloud, and if it finds a match, the sticker's info can be added back.
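Here's a toy sketch of that recovery step: fingerprint the image, look the fingerprint up in a provenance store, and reattach the credential if there's a match. The exact-hash fingerprint and the in-memory "cloud" are stand-ins; real systems, Adobe's included, use perceptual matching and the C2PA manifest format, which this doesn't implement.

```python
import hashlib

# Stand-in for the cloud store: fingerprint -> provenance record ("cr" data)
provenance_store: dict[str, dict] = {}

def fingerprint(image_bytes: bytes) -> str:
    """Toy fingerprint: exact SHA-256. Real systems use perceptual hashes
    so the match survives resizing and recompression."""
    return hashlib.sha256(image_bytes).hexdigest()

def register(image_bytes: bytes, credential: dict) -> None:
    """Called when the image is exported with its 'cr' credential attached."""
    provenance_store[fingerprint(image_bytes)] = credential

def recover_credential(image_bytes: bytes) -> dict | None:
    """If someone stripped the sticker, try to match the pixels to the store."""
    return provenance_store.get(fingerprint(image_bytes))

original = b"...image bytes..."
register(original, {"generator": "Adobe Firefly", "issued": "2023-10-20"})
print(recover_credential(original))   # credential comes back even though metadata is gone
print(recover_credential(b"edited"))  # no match -> None
```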
Two friends used AI and $185 to start a side hustle—they just sold it for $150,000: ‘It prints money’
Sal Aiello and Monica Powers turned $185 into a gold mine. After using ChatGPT for their own market research, they found a way to make the AI chatbot super useful for testing business ideas.
Aiello, a tech guru, and Powers, a design whiz with her own branding company, launched DimeADozen in just a few days. This tool lets folks fill out a form about their business ideas. DimeADozen then uses ChatGPT to create a detailed report about the idea's potential. It's quick, and at $39 a pop, they made over $66,000 in seven months. Best part? Almost all of that was pure profit.
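For flavor, here's roughly what a "form in, report out" wrapper like this looks like using OpenAI's Python client. The form fields, prompt wording, and model choice are guesses; DimeADozen hasn't published how it actually prompts ChatGPT.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def validation_report(idea: str, target_customer: str, price_point: str) -> str:
    """Turn a short form submission into a structured validation write-up."""
    prompt = (
        f"Business idea: {idea}\n"
        f"Target customer: {target_customer}\n"
        f"Planned price point: {price_point}\n\n"
        "Write a market-validation report with sections for: market size, "
        "competitors, main risks, and three concrete next steps."
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "You are a blunt startup analyst."},
            {"role": "user", "content": prompt},
        ],
    )
    return response.choices[0].message.content

print(validation_report("AI-generated meal plans for truck drivers",
                        "long-haul owner-operators", "$15/month"))
```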
Last month, they sold it for a whopping $150,000 to a couple looking to run it full-time, while Aiello and Powers will stick around as advisors. The duo's secret sauce? They're pros at asking ChatGPT the right questions to get legit answers. And while you could do similar research on Google, using DimeADozen saves time and gives better results.
How ‘A.I. Agents’ That Roam the Internet Could One Day Replace Workers
Researchers have amped up chatbots into something called A.I. agents. Nvidia used the tech behind ChatGPT, a popular chatbot, to teach it to play Minecraft, like mining gold and building houses. Unlike your basic chatbot, these A.I. agents can use online tools, such as spreadsheets and travel sites. People think that in the future, these agents might replace white-collar jobs.
Writing computer programs on the fly. Dr. Fan from Nvidia said the real magic word here is "code." Instead of clicking buttons like humans, A.I. agents use what's called A.P.I.s to chat with online services. Imagine telling an agent to upload a video, and it writes the code to chat with YouTube's A.P.I. and get it done.
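As a concrete (and entirely hypothetical) illustration of what "the magic word is code" means in practice: instead of clicking through a website, the agent writes and runs a small script like the one below. The endpoint, token, and field names are placeholders, not YouTube's real upload API, which requires OAuth and a resumable-upload flow.

```python
# The kind of snippet an agent might generate and execute to "upload a video":
# call a service's HTTP API directly instead of driving the website's buttons.
import requests  # assumes the `requests` package is installed

UPLOAD_URL = "https://api.example-video-site.com/v1/videos"  # placeholder endpoint
API_TOKEN = "REPLACE_ME"                                     # placeholder credential

def upload_video(path: str, title: str) -> str:
    with open(path, "rb") as f:
        resp = requests.post(
            UPLOAD_URL,
            headers={"Authorization": f"Bearer {API_TOKEN}"},
            data={"title": title},
            files={"file": f},
        )
    resp.raise_for_status()
    return resp.json()["id"]  # placeholder response shape

print(upload_video("demo.mp4", "My first agent-uploaded video"))
```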
Some A.I. agents are learning to use software just like us, mouse click by mouse click. They watch videos of us doing it and learn from our moves. Companies are making agents that use popular websites and apps, thinking these A.I. pals can one day do almost anything online.
How to Launch a Game-Changing Artificial Intelligence Pilot
CDW's Intelligent CX team is jazzing up customer service by using virtual assistants. These AI buddies help folks do stuff online without bugging call centers. They take care of the easy stuff so humans can handle the trickier, more emotionally loaded problems. They can even give call center peeps pointers during a call to help the customer out.
Thinking about starting an AI project? CDW's got your back. They'll help you figure out who's using your services, find out what's bugging them, and come up with tech fixes for those headaches. They'll even whip up a demo to show the big bosses how awesome it could be. And if you're not sure if it's worth the bucks? They'll break down the dollars and cents for ya.
Marc Andreessen's AI manifesto hurts his own cause
Marc Andreessen, a big shot in the tech world, wrote a long piece praising technology and how it can change our lives for the better. He's all for pushing tech forward and doesn't like too many rules slowing it down. However, he also listed some ideas he thinks are bad, including things like "tech ethics" and "risk management."
The thing is, we're seeing problems right now when companies don't consider ethics and take too many risks. And guess what? Andreessen himself supports these "bad ideas" in his own business dealings. So, while he's not saying do bad stuff on purpose, he's kinda giving folks a pass to ignore important safeguards.
The Rise of the AI Engineer
AI is blowin' up big time! What used to take a fancy team of researchers and a solid five years back in 2013 now just needs an afternoon and some tech smarts in 2023. Andrej Karpathy even said we're gonna see way more AI Engineers than ML engineers. But here's the kicker – turning AI into real products ain't a cakewalk.
There's a bunch of stuff to get through – different AI models, tools, and the flood of new AI info popping up daily. It's enough to make your head spin. This AI Engineer thing is becoming a legit full-time gig, much like other techie roles we've seen pop up over the years.
And here's a twist – when it comes to creating AI products, you want engineers, not just researchers. The writer's noticing that AI Engineer roles are growing faster than ML Engineer roles. And soon, AI Engineers won't need to dig into the hardcore AI research; they'll just use the tools and learn by doing.
The Greatest Threat to Generative AI is Humans Being Bad at Using it
AI is moving faster than a speeding bullet. While tech nerds are hyped about new and improved AI, regular folks just grab what's easy and free, like ChatGPT. But it feels like the quality of the stuff AI's churning out has gone downhill. Like, Gizmodo goofed up a Star Wars timeline using ChatGPT, and folks are pushing bad AI art just to get clicks. And now, there's this vibe that if something is "Made by AI", it's junk. Heck, even politicians are throwing shade with "sounds like ChatGPT" insults.
The thing is, AI typically pumps out "average" content. It plays it safe, so sometimes it's just meh. If you really want to jazz it up, you gotta do some prompt engineering, but most users don’t know how. Plus, people share AI-made stuff without saying it's AI, and that's not helping.
Some folks think the biggest headache for AI is legal issues, like getting sued for training on copyrighted stuff. But it's more about folks not using AI right and understanding it. AI's future might be controlled by big companies, and that's a bummer. But, hey, AI isn't going away.
What'd you think of today's edition?
What are you MOST interested in learning about AI? What stories or resources would be most interesting for you to hear about?