NATURAL 20

Sam Altman and Bill Gates Talk AI | Is GPT-5 close?

PLUS: OpenAI Allows Military Use, Secret US-China AI Safety Talks and more.

Today:

Sam Altman and Bill Gates Talk AI | Is GPT-5 close?

Bill Gates chats with Sam Altman about AI's future and its impact on society. Gates is intrigued yet philosophical about AI taking on big human tasks, like eradicating malaria. He ponders how future generations, raised without scarcity, will shape society.

Altman shares his excitement about AI advancements in productivity, especially in programming, healthcare, and education. He also talks about the importance of developing AI with safety and ethical considerations in mind. 

The conversation shifts to AI's role in startups, advising that companies should gear up for GPT-5 and AGI, emphasizing the need for contextual over behavioral optimization. 

OpenAI changes policy to allow military applications

OpenAI recently tweaked their policy, now giving a thumbs-up to military uses of their tech. This change wasn't broadcast, but The Intercept caught it. It's not just a small tweak - it's a big deal. OpenAI used to say "no" to anything military, but not anymore.

They still say "no weapons," but they're open to other military stuff, like helping army engineers with research. It's a head-scratcher for many companies - how to juggle working with the military without crossing lines. OpenAI's move shows they're ready to work with military clients, even if they're keeping quiet about it.

US companies and Chinese experts engaged in secret diplomacy on AI safety

AI companies from the US, like OpenAI, Anthropic, and Cohere, secretly met with Chinese AI experts. They're worried about AI spreading fake news and messing with society. These secret meetings happened in Geneva last year, with folks from top-notch American and Chinese institutions. 

They chatted about the dangers of AI and how to make it safer. It's a big deal because the US and China are usually butting heads over tech stuff. The White House knew about it, but they're keeping mum. The meetings were set up by the Shaikh Group, who usually deal with conflicts in the Middle East. Chinese big shots like ByteDance and Tencent weren't there, but Google DeepMind got the lowdown. 

They're planning more talks to make sure AI plays nice with laws and cultural values. Some Chinese and Western AI brains are already pushing for tighter AI controls, like having kill switches and spending more dough on safety. OpenAI was there, but Anthropic and Cohere are zipped up about it.

Microsoft wants to automatically launch its Copilot AI on some Windows 11 devices

Microsoft's testing a new trick with Windows 11 where their AI buddy, Copilot, pops up automatically on wide screens when you start your computer. They're still figuring out what counts as "wide" though. This is in the early test phase, with some folks giving it a whirl and sharing their two cents.

Just recently, Microsoft also showed off a special button on some new laptops that starts Copilot. Brands like Dell and Lenovo are already on board with this. Plus, there's chatter about more personalization coming to Copilot, maybe even letting other companies plug their own chatbots into it.

🧠 RESEARCH

DeepSeekMoE, a new approach to building language models, is a smarter way to manage computing power when training big models. The idea is to break down the work among smaller, more specialized parts (experts), making the whole system more efficient. This method can match or even beat older models while using less computing juice. The team tested it up to 145 billion parameters, showing great results with far less compute than usual.
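The blurb above stays high-level; the core mixture-of-experts trick is routing each input to only a few small "expert" networks instead of one big one. Here's a minimal, illustrative sketch of top-k routing in NumPy. The shapes, expert count, and gating here are invented for illustration, not DeepSeekMoE's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def moe_layer(x, experts, gate, k=2):
    """Route input x to the top-k experts by gate score and combine
    their outputs, weighted by renormalized gate scores."""
    scores = softmax(gate @ x)                 # one score per expert
    top = np.argsort(scores)[-k:]              # the k highest-scoring experts
    weights = scores[top] / scores[top].sum()  # renormalize over chosen experts
    # Only the chosen experts actually run, so compute scales with k,
    # not with the total number of experts.
    return sum(w * (experts[i] @ x) for w, i in zip(weights, top))

d, n_experts = 8, 16
experts = [rng.normal(size=(d, d)) for _ in range(n_experts)]  # tiny linear "experts"
gate = rng.normal(size=(n_experts, d))                         # router (learned in practice)
y = moe_layer(rng.normal(size=d), experts, gate, k=2)
print(y.shape)  # (8,)
```

Because only k experts run per token, compute grows with k rather than with the total expert count, which is the efficiency win the blurb describes.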

PALP, a new method for making personalized images with text-to-image models, is all about getting images that really match what people ask for, including specific styles and settings. Current methods often can't handle complex requests without losing personal touch. PALP fixes this by focusing on one prompt at a time, nailing the text's details and letting creators blend in various elements or draw inspiration from other images. The team shows how PALP outperforms other ways of doing this, both in how well it works and how the images look.

TrustLLM is a deep dive into making large language models (like ChatGPT) more reliable. The team sets out rules for trustworthiness in these models, covering everything from truthfulness to privacy. They tested 16 big-name language models and found that, usually, the more trustworthy a model is, the better it works. Surprisingly, models from big companies are often more reliable than open-source ones, but some open-source models are almost as good. There's a catch, though: some models are so focused on being safe that they overdo it and ignore harmless requests.

Parrot is a new way to make better images from text descriptions. It uses reinforcement learning, a type of AI training, to balance different quality aspects of the images. The trick is to get the right mix of qualities without messing up others, which isn't easy. Parrot smartly finds the best balance of these qualities during training. It also tweaks the text prompts for better results and keeps the original prompt in mind to stay true to what the user asked for. Tests and a user study show Parrot beats other methods in making images that look good, match the text well, and appeal to people.
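One standard way to balance several rewards without hard-coding a single trade-off, and the kind of idea Parrot's "best balance" language points at, is to keep only candidates that are Pareto-optimal: not beaten on every quality axis at once. A tiny generic sketch (the reward names and scores are made up, and this is not Parrot's actual training code):

```python
def pareto_front(score_vectors):
    """Return indices of candidates that are not dominated.
    A candidate is dominated if some other candidate scores >= on
    every reward and strictly > on at least one."""
    front = []
    for i, a in enumerate(score_vectors):
        dominated = any(
            all(b[k] >= a[k] for k in range(len(a))) and
            any(b[k] > a[k] for k in range(len(a)))
            for j, b in enumerate(score_vectors) if j != i
        )
        if not dominated:
            front.append(i)
    return front

# Each tuple: (aesthetics, text alignment, human preference) for one sample.
scores = [(0.9, 0.2, 0.5), (0.8, 0.8, 0.7), (0.7, 0.9, 0.6), (0.6, 0.1, 0.1)]
print(pareto_front(scores))  # → [0, 1, 2]; the last candidate is dominated
```

Training on only the non-dominated samples is one way to improve some qualities without quietly sacrificing others.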

šŸ› ļøTOP TOOLS

Airplane - developer platform for building internal tools. It allows engineers to rapidly create UIs and workload automation for various teams. The platform enables building of custom tools, internal workflows, and dashboards tailored to specific needs. 

Frase - AI-powered SEO and content creation tool that streamlines the process of producing content that ranks well on Google. It offers features like SERP analysis, AI-generated content briefs and outlines, and an AI writer for SEO-optimized copy. 

Neural Hub - platform designed for AI enthusiasts, researchers, and engineers to experiment and innovate in AI, specifically with neural networks. It offers a system for building, tuning, and running neural networks, providing tools and a library for creation and experimentation.

My AskAI - AI customer support assistant to reduce time on customer support, enhancing efficiency without changing existing tools.

Plus AI - AI presentation maker for Google Slides, enabling users to generate and edit AI-driven presentations efficiently. It offers features like custom presentation generation, AI editing for slides, and seamless integration with Google Slides. 

šŸ—žļøMORE NEWS

OpenAI increased the GPT-4 Turbo with Vision rate limits 

OpenAI has raised the GPT-4 Turbo with Vision rate limits for its highest-usage clients fivefold, to 3,000 requests per minute and 50,000 requests per day. Rate limits are set by OpenAI's API to control how often users can access services, protecting against misuse and ensuring fair use. They're measured in requests and tokens per minute or day, and vary by model and user tier, from free to Tier 5; higher tiers allow more usage, and current limits are shown in HTTP response headers. To avoid exceeding limits, OpenAI suggests careful programmatic access and provides a Python script for managing requests. If limits are hit, retrying with exponential backoff is recommended. OPENAI
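The exponential-backoff advice above is easy to sketch. This is a generic retry helper, not OpenAI's own script; `RateLimitError` here is a stand-in for whatever rate-limit exception your API client actually raises:

```python
import random
import time

class RateLimitError(Exception):
    """Stand-in for an HTTP 429 (rate limited) error from an API client."""

def with_backoff(call, max_retries=5, base_delay=1.0):
    """Retry `call` on rate-limit errors, doubling the wait each attempt
    and adding jitter so many clients don't retry in lockstep."""
    for attempt in range(max_retries):
        try:
            return call()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise  # out of retries; let the caller handle it
            time.sleep(base_delay * (2 ** attempt) * (1 + random.random()))
```

The jitter factor matters in practice: without it, every client that got throttled at the same moment retries at the same moment, and the spike repeats.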

UK government to publish 'tests' on whether to pass new AI laws

The UK's eyeing new AI laws, setting up "tests" to decide when to act. They're leaning on a light-touch approach, fearing strict rules might stunt growth. Key triggers for action include AI risks missed by their new AI Safety Institute or companies ditching voluntary safety promises. Globally, the US and EU are stricter, while China focuses on content control. The UK's waiting for evidence that new laws would balance risk without killing innovation. Meanwhile, experts doubt if voluntary commitments alone can ensure safety. FINANCIAL TIMES

AI girlfriend bots are already flooding OpenAI's GPT store

Just two days into OpenAI's GPT store launch, AI "girlfriend" bots are swarming in, bending the rules. Despite OpenAI's policy against bots for romance or regulated activities, searches like "girlfriend" yield multiple options, like "Korean Girlfriend" or "Virtual Sweetheart," offering users prompts for personal fantasies. This trend bucks OpenAI's rules, updated with the store's debut, aiming to prevent misuse. The surge in relationship bots could reflect America's loneliness crisis, with half of adults feeling isolated. OpenAI's trying to police this through a mix of tech and human oversight, but it's proving a wild west scenario, challenging the regulation of such AI tools. This rush to market reflects tech firms' urgency to lead in AI, despite potential missteps. QUARTZ

Microsoft Tops Apple to Become Most Valuable Public Company

Microsoft just outranked Apple as the most valuable public company, marking a shift in tech leadership. This change, driven by the rise of generative AI, sees Microsoft's market value hitting $2.89 trillion, nudging past Apple's $2.87 trillion. This shift reflects the growing significance of AI in tech and finance. Microsoft's surge is thanks to its big bets on AI, like investing in OpenAI and integrating AI into products and cloud services. Apple, once the unchallenged market leader since overtaking Exxon Mobil in 2011, now faces questions about its AI strategy and future innovations. Microsoft's success showcases the changing priorities in tech, prioritizing AI's transformative potential over traditional hardware-focused approaches. THE NEW YORK TIMES

Is OpenAI extending an olive branch to creators and writers?

OpenAI seems to be reaching out to creators and writers, hinting at a strategy to build better relations with these communities. They're hiring for roles like "writing community specialist" and "creator community specialist," focused on engaging with authors, content creators, and influencers. This move follows several copyright lawsuits against OpenAI, suggesting a push to improve ties with those whose works train AI models. These specialists will lead workshops, gather feedback, and collaborate in marketing, possibly to generate goodwill and address copyright concerns. Additionally, OpenAI's willingness to invest in high-quality, licensed data, like its deal with Axel Springer, reflects the importance of copyrighted content in training effective AI. QUARTZ

Google AI has better bedside manner than human doctors - and makes better diagnoses

Google's AI, trained for medical interviews, is showing potential to outdo human doctors in diagnosing certain health issues and demonstrating empathy. The AI, named AMIE, excelled in diagnosing respiratory and cardiovascular conditions in tests with actors simulating patients. Developed by Google Health, this large language model-based chatbot matched or surpassed doctors in obtaining medical history and ranked higher in empathy. While still experimental and not tested on real patients, AMIE signals a significant advance in AI healthcare applications. Its creation involved training on medical dialogues and self-critiquing to improve interactions. In tests, AMIE beat physicians in diagnostic accuracy across six medical specialties and excelled in conversation quality. NATURE

AI-Optimized Catheter Design Could Prevent Urinary Tract Infections without Drugs

Researchers have used artificial intelligence to create a new catheter design that could reduce urinary tract infections without antibiotics. Traditional catheters provide a smooth path for bacteria to cause infections. This new catheter has 3D-printed geometric shapes inside that act as an obstacle course for bacteria, preventing them from colonizing the inner surface. The design significantly reduces bacterial colonies and the need for antibiotics. While currently optimized for E. coli, further research is needed to make it resistant to other bacteria. AI modeling shows potential for designing other innovations, such as drugs and airplane propellers. SCIENTIFIC AMERICAN

FireCat to debut AI-powered sensory shirt that detects vital signs, location of responders

FireCat will debut an AI-powered sensory shirt for first responders called the 6th Sense System at SHOT Show 2024 in Las Vegas. This innovative shirt is equipped with sensors to detect vital signs and the wearer's location. It connects to an app that enables real-time monitoring by team members. Additionally, a hardware box, resistant to water and temperature changes, is worn on the responder's waist belt and includes a panic button for emergencies. Pressing the panic button triggers an alarm to summon assistance. The hardware has a lifespan of 7-10 years and provides up to 12 hours of battery life, with an option for a 20-hour battery life upgrade. The washable shirt can be worn under a uniform and vest, making it a versatile and valuable tool for first responders. POLICE1

Marc Newson designs Swarovski's world-first AI binoculars that identify species on their own

Unveiled at CES 2024, Swarovski Optik and designer Marc Newson have teamed up to create AX VISIO, the world's first AI-supported binoculars. These binoculars combine high-performance analog long-range optics with digital intelligence to detect and identify over 9,000 bird and wildlife species at the touch of a button. They feature Swarovski's Swarovision analog optics and a real-time identification system that autonomously names birds and other wildlife species. DESIGNBOOM
