
The End of GPT-4? New York Times and OpenAI Lawsuit

The New York Times takes on OpenAI, Microsoft, and their affiliates, demanding billions and a complete overhaul of AI content generation



The New York Times is taking OpenAI and its partner Microsoft to court over copyright infringement. The paper is seeking damages that could run into the billions, and it wants OpenAI to destroy ChatGPT, GPT-4, and any model trained on NYT articles.

The Times' lawsuit may stand apart from earlier AI copyright cases. The paper says OpenAI's tools, surfaced through Microsoft Bing, can cough up detailed passages of NYT articles, hurting its relationship with readers and costing it money. But given how courts have handled similar cases so far, it's a long shot: the Times needs to prove the AI holds actual copies of its work, which hasn't been easy to show in the past.

Microsoft’s Copilot app is now available on iOS

Microsoft just launched its Copilot app for iPhone and iPad users, right on the heels of its Android release. The app puts Microsoft's revamped Bing Chat, now called Copilot, in your pocket: it can answer questions, help you write emails, or summarize long texts. Plus, it has a trick up its sleeve – it can whip up pictures using DALL·E 3, OpenAI's image-generation model.

Copilot is powered by the latest and greatest AI brain, GPT-4, and you don't even need to shell out for a subscription to use it. Microsoft's making moves, shifting Bing Chat to this new Copilot identity and rolling it out across Android and Apple gadgets, plus a dedicated web version that stands alone, separate from Bing.

GitHub makes Copilot Chat generally available, letting devs ask questions about code

GitHub's Copilot Chat, a programming chatbot similar to ChatGPT, is now generally available. It was first offered to businesses using Copilot, then to individual users for $10 a month. Now it lives in the sidebar of Microsoft's Visual Studio Code and Visual Studio, and it's free for certain users, such as verified students and teachers.

Copilot Chat uses OpenAI's GPT-4, fine-tuned for coding help. Developers can ask it to explain code, find issues, or write tests. However, there's controversy over its use of public code for training, with lawsuits alleging copyright violations. GitHub hasn't offered a way to opt out of having code used as training data.

Microsoft’s CEO said Copilot has a million users and thousands of business clients. Despite its popularity, it's costly to run, losing about $20 per user monthly. GitHub faces competition from Amazon's CodeWhisperer and others like Magic, Tabnine, and Meta’s Code Llama.

Defense startup Shield AI raises $300M at $2.8B valuation

Shield AI, a major player in military tech, just banked a cool $300 million, bumping its valuation to $2.8 billion. The company makes Hivemind, flight software that lets drones and planes fly solo with no human pilot or GPS, which comes in handy where signals are jammed. It has even had a jet fly itself for 17 hours. There's also the V-Bat drone for surveillance and cargo, which packs a powerful Nvidia chip.

Recently, they launched V-Bat Teams for coordinating drone swarms. Next, they're testing their tech on the XQ-58 Valkyrie, a fast, 27-foot wingspan drone. Meanwhile, other defense startups are also raking in big bucks.

AI Detects Unusual Signal Hidden in a Famous Raphael Masterpiece

AI has been playing detective on a famous Raphael painting, the "Madonna della Rosa." It turns out AI has a sharper eye than we do, spotting that St. Joseph's face wasn't painted by Raphael himself. That's not a total shock: art historians have been puzzling over this piece for a while.

Researchers from the UK and US built a special AI algorithm, trained on authenticated Raphael works. It can pick up on Raphael's signature moves: brushstrokes, color palette, shading, you name it.

They trained the AI using a combination of Microsoft's ResNet50 architecture and traditional machine learning, and it's accurate about 98% of the time. When they analyzed the "Madonna della Rosa," everything matched Raphael's style except St. Joseph's face. It looks like one of Raphael's students, possibly Giulio Romano, stepped in. The painting, dated to 1518-1520, has been questioned since the 1800s; now AI is largely confirming those doubts.
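The researchers' code isn't published, but the described pipeline (ResNet50 features fed to a traditional classifier) can be caricatured in a few lines. This is a toy sketch under loud assumptions: the feature vectors, the threshold, and the nearest-centroid rule are all made up for illustration and are not the study's actual method.

```python
import math

def centroid(vectors):
    # Average the feature vectors of authenticated works.
    dims = len(vectors[0])
    return [sum(v[d] for v in vectors) / len(vectors) for d in range(dims)]

def distance(a, b):
    # Euclidean distance between two feature vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def is_consistent(features, reference, threshold):
    # Call a patch "in the artist's style" if its features sit close
    # to the centroid of authenticated examples.
    return distance(features, reference) <= threshold

# Made-up "brushstroke/colour" features standing in for ResNet50 embeddings.
raphael_patches = [[0.90, 0.10, 0.40], [0.85, 0.15, 0.42], [0.92, 0.08, 0.38]]
ref = centroid(raphael_patches)

print(is_consistent([0.88, 0.12, 0.41], ref, threshold=0.1))  # True: matches the style
print(is_consistent([0.30, 0.70, 0.90], ref, threshold=0.1))  # False: a stylistic outlier
```

In the real study, the "features" would be deep embeddings extracted by ResNet50 from image patches, and the decision rule would be a trained classifier rather than a hand-set distance threshold.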

AI’s big test: Making sense of $4 trillion in medical expenses

In the U.S., hospitals and insurers are in a race to develop AI tools for better handling the whopping $4 trillion in annual medical bills. This tech race is part of a significant shift in healthcare economics, impacting providers, insurers, and the government. Healthcare providers dream of AI that can swiftly and accurately handle billing, while insurers and government agencies seek similar tech to check these bills for accuracy and fraud.

The potential benefits are huge: reduced admin costs, faster processing, and possibly smaller workforces. Congress and the Biden administration are just starting to address AI's role in healthcare, focusing on issues like fraud, Medicare cuts, and patient care. The stakes are high, with AI's influence potentially reshaping the future of Medicare, Medicaid, and private insurance, along with the financial health of hospitals and clinics.

Former Trump lawyer Michael Cohen accidentally cited fake court cases generated by AI

Michael Cohen, Donald Trump's ex-lawyer, accidentally cited fake, AI-generated court cases in a legal filing that's now before a judge. He used Google's Bard, thinking it was a supercharged search engine rather than an AI chatbot, while trying to cut short the three-year probation from his tax evasion guilty plea. The judge caught on: none of the cases were real.

Cohen said he didn't mean to mislead the court and didn't know Bard could fabricate cases. His lawyer, who added the citations without checking them, may face penalties. This isn't a first: earlier, two other lawyers were fined for citing fake AI-generated cases in court.

Company executives can ensure generative AI is ethical with these steps

Marc Warner, a prominent voice in AI, says generative AI is a game-changer for businesses that could generate trillions in value. But it isn't plug-and-play: companies have to be smart about it. Over 80% of them will be using generative AI soon, yet only 17% are addressing risks like bias, privacy issues, and copyright violations.

The U.S., U.K., and EU are all setting up rules to keep AI in check. Warner's advice? Focus on people-first decisions, keep a tight leash on AI-generated content, and coordinate AI efforts across the company. Do it right, and you're setting your business up for success and some serious cash flow.

Here’s how major media companies are handling OpenAI

Big media is coming to the table with OpenAI in the wake of The New York Times' copyright lawsuit. The Times itself reports that giants like Gannett, News Corp (which runs The Wall Street Journal), and IAC (which owns The Daily Beast) are in talks with OpenAI to work out fair compensation for the use of their content.

There's also the News/Media Alliance, representing a host of newsrooms in the US and Canada, trying to hammer out a group deal. Axel Springer and The Associated Press have already cut their own deals with OpenAI.

4 Things Experts Say Could Happen With AI In 2024 — And Why It Could Be Bad News For OpenAI

In 2024, AI's expected to become a huge part of our lives, and OpenAI might face tough challenges. Here's the lowdown:

  1. AI Everywhere: Tech giants like Google and OpenAI are pushing AI big time. Google’s new AI model, Gemini, competes with OpenAI’s GPT-4. AI tools will likely be in our daily tech, making life easier but also more AI-dependent.

  2. OpenAI's Rough Patch: OpenAI, famous for ChatGPT, is hitting some bumps. Users report ChatGPT acting up and there’s been some company drama. Plus, competitors like Google’s Gemini and others are stepping up their game.

  3. Copyright Conflicts: Big legal questions hover around AI, especially about using copyrighted material to train AI models. This could be a game changer for the AI industry, with possible huge costs if AI companies lose legal battles.

  4. Urgent Need for Regulation: AI's moving fast and laws can't keep up. The EU’s setting some AI rules, but the US is still playing catch-up. Regulating AI is tricky because it’s evolving quickly, and we need to figure out how to do it right.

So, 2024's looking like a year where AI becomes more common, OpenAI may struggle, legal battles loom, and the need for AI regulation becomes crucial.

What'd you think of today's edition?


What are you MOST interested in learning about AI?

What stories or resources will be most interesting for you to hear about?
