GPT 4.5 TURBO GOES LIVE!

Discover what makes GPT 4.5 Turbo the game-changer in AI technology


Users on various platforms, mobile and desktop alike, report getting different answers when they ask the AI which version it is running, with some responses identifying it as GPT 4.5 Turbo. The rumored update appears to offer enhanced capabilities, such as faster and more accurate web browsing and improved reasoning and writing.

Speculation has been fueled by a leaked screenshot and a cryptic reply from Sam Altman to a Twitter question about the leak. Users are left wondering whether the update is a deliberate rollout or just a temporary glitch.

Prompt engineering

OpenAI offers tips for better results with GPT models like GPT-4. It suggests:

  1. Clear Instructions: Be specific in your asks. Shorter replies, expert-level detail, or a preferred format—let the model know exactly what you want.

  2. Include Details: Add relevant details for more accurate answers.

  3. Adopt a Persona: Guide the model to respond in a certain style or character.

  4. Use Delimiters: Clearly mark different parts of your input.

  5. Outline Steps: Break down tasks into steps for easier follow-through.

  6. Provide Examples: Show examples for the model to imitate.

  7. Specify Length: Ask for responses of a certain length.

  8. Reference Texts: Give the model text to work off to avoid made-up answers.

  9. Simple Subtasks: Break complex tasks into smaller, manageable parts.

  10. Model's Thought Process: Ask the model to explain its reasoning.

  11. Use External Tools: Enhance the model's abilities with other tools.

  12. Test Systematically: Regularly check and tweak your methods for best results.

Basically, be super clear and detailed in what you want from the model. Break big stuff into smaller chunks. And use all the tools you've got to get the best answers.
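To make a few of those tips concrete, here's a minimal sketch that bundles several of them into one request: a persona via the system message, triple-quote delimiters around a reference text, and an explicit length limit. It assumes the openai v1 Python client and an OPENAI_API_KEY in your environment; swap in whichever model you actually have access to.

```python
# Minimal sketch combining several prompting tactics in one request.
# Assumes the openai v1 Python client and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

reference_text = (
    "Phi-2 is a 2.7 billion-parameter Transformer model trained on 1.4 trillion "
    "tokens. It matches or beats some models many times its size on benchmarks."
)

response = client.chat.completions.create(
    model="gpt-4",  # example choice; use any chat model you have access to
    messages=[
        # Tip 3: adopt a persona
        {"role": "system", "content": "You are a patient ML tutor who answers in plain English."},
        # Tips 1, 4, 7, 8: clear ask, delimiters, length limit, reference text
        {
            "role": "user",
            "content": (
                "Using ONLY the text between triple quotes, explain why small "
                "language models are interesting. Answer in at most three sentences.\n"
                f'"""{reference_text}"""'
            ),
        },
    ],
)

print(response.choices[0].message.content)
```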

Microsoft says its newest compact “small language model,” Phi-2, is bigger and better

Microsoft Research's Machine Learning team introduced Phi-2, a nifty 2.7 billion-parameter language model. It's a step up from their earlier small language models (SLMs), the 1.3 billion-parameter Phi-1 and Phi-1.5. Phi-2 is a real whiz at reasoning and language, showing off skills that usually need models up to 25 times its size. It's also a great fit for research, offering a sandbox for digging into topics like interpretability and safety.

The same Microsoft Research update also touched on Holoportation, a telepresence technology that lets doctors and patients meet virtually. On Phi-2 itself, the team focused on top-notch training data and clever scaling techniques, which is why the model punches above its weight, holding its own in benchmarks and even beating some giants.

Training-wise, it's a Transformer model, chewing through 1.4 trillion tokens in about two weeks. And it's cleaner in terms of bias and toxicity, even without special fine-tuning. In tests, Phi-2 stood toe-to-toe with some big names, excelling in tasks like coding and math. 
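If you want to poke at Phi-2 yourself, here's a minimal sketch using Hugging Face transformers. It assumes the weights are published under the microsoft/phi-2 repo and that your machine has enough memory for a 2.7 billion-parameter model; the Instruct/Output prompt style mirrors the QA format shown on the model card.

```python
# Minimal sketch: running Phi-2 locally via Hugging Face transformers.
# Assumes the weights live at "microsoft/phi-2"; trust_remote_code=True covers
# transformers versions that lack native support for the Phi architecture.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/phi-2"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

prompt = "Instruct: Why are small language models useful for research?\nOutput:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=80)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```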

This AI code that detects when guns, threats appear on school cameras is available for free

Iterate.ai, a tech company known for AI app development, is now giving away a game-changing AI system to help schools spot guns and beef up security. The tool, a departure from their usual line of work, was cooked up by their CTO, Brian Sathianathan, after he realized schools needed a solid plan against gun violence. It took a heap of work: sifting through 20 years of attack data and a mountain of weapon data, with a little help from ex-cops.

The system, which hooks up to existing cameras, is smart enough to spot guns, big knives, and even bulletproof vests. If it sees something fishy, it shoots out an alert pronto. This move is all about tackling the scary spike in school shootings and getting schools to join forces in making things safer.
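As a rough illustration of that camera-to-detector-to-alert loop (and emphatically not Iterate.ai's actual code), here's a sketch using OpenCV plus a generic pretrained detector from the ultralytics package; the watched classes and the send_alert function are stand-ins for a weapon-trained model and a real notification pipeline.

```python
# Rough illustration of the camera -> detector -> alert loop described above.
# NOT Iterate.ai's system: it uses a generic pretrained YOLO model, and
# send_alert is a hypothetical placeholder for a real notification pipeline.
import cv2
from ultralytics import YOLO

model = YOLO("yolov8n.pt")      # generic COCO-trained detector; a real deployment
                                # would use a model trained on weapon imagery
WATCH_CLASSES = {"knife"}       # COCO class used here purely as a stand-in

def send_alert(label: str, confidence: float) -> None:
    # Placeholder: a real system would notify school staff or law enforcement.
    print(f"ALERT: detected {label} ({confidence:.0%} confidence)")

cap = cv2.VideoCapture(0)       # 0 = default webcam; swap in an RTSP camera URL
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    for result in model(frame, verbose=False):
        for box in result.boxes:
            label = model.names[int(box.cls)]
            conf = float(box.conf)
            if label in WATCH_CLASSES and conf > 0.5:
                send_alert(label, conf)
cap.release()
```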

Japan scientists create world's first mental images with AI tech

Japanese scientists made a big leap by using AI to create images straight from people's minds. They're from big-deal places like the National Institutes for Quantum Science and Technology and Osaka University. They whipped up pictures of stuff like a leopard and an airplane just by looking at brain activity. This cool technique, called "brain decoding," might help out in healthcare and other helpful ways. They published their work in a fancy science journal, "Neural Networks."

Before, researchers could only pull simple images, like letters, out of brain scans (fMRI). Now, with AI in the loop, they can reconstruct more complex pictures. They tested this by showing people 1,200 images and recording the corresponding brain activity, and the AI learned how the two matched up. This breakthrough could lead to new ways to communicate and to study dreams and hallucinations.
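As a toy illustration of that general recipe (paired images and scans, then a learned mapping), and emphatically not the researchers' actual pipeline, here's a sketch that fits a linear decoder from synthetic "brain activity" vectors to image-feature vectors. A real system would map fMRI voxel responses into the latent space of a generative image model and render pictures from the predicted features.

```python
# Toy illustration of the brain-decoding recipe described above, on synthetic data.
# Not the researchers' pipeline: real work maps fMRI voxel responses into the
# latent space of a generative image model. Here both sides are random stand-ins.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_images, n_voxels, n_features = 1200, 2000, 256   # 1,200 viewed pictures, as in the study

true_map = rng.normal(size=(n_voxels, n_features)) / np.sqrt(n_voxels)
brain_activity = rng.normal(size=(n_images, n_voxels))             # stand-in for fMRI scans
image_features = brain_activity @ true_map + rng.normal(scale=0.5, size=(n_images, n_features))

X_train, X_test, y_train, y_test = train_test_split(
    brain_activity, image_features, test_size=0.2, random_state=0
)

decoder = Ridge(alpha=10.0).fit(X_train, y_train)   # learn the scan -> image-feature mapping
predicted = decoder.predict(X_test)                 # would condition an image generator
print("held-out decoding R^2:", round(decoder.score(X_test, y_test), 3))
```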

Image recognition accuracy: An unseen challenge confounding today’s AI

The MIT CSAIL team found a blind spot in AI image recognition: how hard some pictures actually are for humans and machines to identify. They developed a new metric, "minimum viewing time" (MVT), which measures how long a person needs to see an image before recognizing it. They discovered that most image benchmarks used in AI are dominated by easy shots, which skews performance metrics. Larger AI models do better on simple images but struggle with tougher ones.

This study is a game-changer. It shows we need to test AI on harder images to truly gauge its chops against human sight. The team's work is huge for fields like healthcare, where understanding complex visuals is key. They're now probing the brain's role in recognizing tough images, aiming to boost AI's ability to judge image difficulty. This study's a big step toward making AI truly match human-level object recognition.
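To make the evaluation idea concrete, here's a toy sketch (synthetic data, not the CSAIL team's code) that buckets benchmark images by an MVT-style difficulty score and reports a model's accuracy per bucket. The point is simply that one headline accuracy number can hide a steep drop-off on the hard images.

```python
# Toy sketch: stratifying model accuracy by an MVT-style difficulty score.
# Synthetic data, not the CSAIL code. mvt_ms stands in for the minimum viewing
# time humans needed per image; "correct" for whether a model labeled it right.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
mvt_ms = rng.lognormal(mean=4.0, sigma=0.8, size=n)        # per-image human viewing time (ms)

# Simulate a model whose accuracy degrades on harder (longer-MVT) images.
p_correct = np.clip(1.3 - 0.18 * np.log(mvt_ms), 0.2, 0.98)
correct = rng.random(n) < p_correct

print(f"headline accuracy: {correct.mean():.1%}")
for lo, hi in [(0, 50), (50, 150), (150, 500), (500, np.inf)]:
    mask = (mvt_ms >= lo) & (mvt_ms < hi)
    if mask.any():
        print(f"MVT {lo:>4}-{hi:<6} ms: accuracy {correct[mask].mean():.1%} on {mask.sum()} images")
```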

Medical Text Written By Artificial Intelligence Outperforms Doctors

Artificial intelligence (AI) might soon be answering your health questions, easing the workload for docs. A study in "Nature" by Dr. Cheng Peng's team at the University of Florida tested their AI, GatorTronGPT, against real doctors. 

This AI, built on ChatGPT's framework but trained on heaps of clinical text, matched doctors in making sense and being relevant medically. They checked its readability and relevance with special tests, including a doc's version of the Turing test, where it fooled docs more than half the time into thinking its writing was human. 
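For a feel of what a readability check even looks like, here's a toy comparison using the textstat package and two made-up note snippets; it is not the study's methodology, just one simple, off-the-shelf readability score.

```python
# Toy readability comparison with the textstat package, on made-up snippets.
# Not the study's actual metrics: just one simple, off-the-shelf score.
import textstat

human_note = ("Patient presents with a three-day history of productive cough, "
              "low-grade fever, and mild dyspnea on exertion.")
ai_note = ("The patient reports a cough producing sputum for three days, a slight "
           "fever, and mild shortness of breath during activity.")

for label, text in [("human-written", human_note), ("AI-generated", ai_note)]:
    score = textstat.flesch_reading_ease(text)   # higher score = easier to read
    print(f"{label:>13}: Flesch reading ease {score:.1f}")
```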

Can GenAI Do Strategy?

In the biz world, everyone's got a shot at cooking up killer strategies, not just the bigwigs. Now, we've got AI like ChatGPT in the mix, and it's changing the game. Initially, AI was just for stats and Googling stuff, but now it's spitting out fresh ideas. Like when Wolfram hooked up ChatGPT to its math software, the AI started solving tough math problems.

Something similar happened when they plugged ChatGPT into a Blue Ocean Strategy tool: suddenly the AI was dishing out unique strategies like it was no big deal. They tested it by tackling a plan for a bagel bakery in Paris. The AI suggested things like truffle cream cheese and bagel joints near clubbing hotspots, even pinpointing specific customer types and marketing tactics. Compared to a group of MBA students working the old-school way for a week, the AI churned out a strategy in just an hour, and it was pretty much on par.
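The "plug the model into an external tool" pattern is easy to sketch with the openai v1 client's function-calling interface. The bakery-cost tool below is a made-up stand-in, not the Wolfram or Blue Ocean integration from the article; the model decides when to call the tool, the app runs it, and the result is fed back in before the final answer.

```python
# Sketch of the "LLM + external tool" pattern: the model picks a tool call, the
# app executes it, and the result goes back to the model for the final answer.
# The tool is a made-up stand-in. Assumes the openai v1 client and OPENAI_API_KEY.
import json
from openai import OpenAI

client = OpenAI()

def estimate_bakery_costs(city: str, seats: int) -> dict:
    # Hypothetical placeholder; a real tool might query rent and labor data.
    return {"city": city, "seats": seats, "monthly_cost_estimate_eur": 9000 + 120 * seats}

tools = [{
    "type": "function",
    "function": {
        "name": "estimate_bakery_costs",
        "description": "Estimate monthly operating costs for a bagel shop.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}, "seats": {"type": "integer"}},
            "required": ["city", "seats"],
        },
    },
}]

messages = [{"role": "user", "content": "Draft a launch strategy for a 20-seat bagel bakery in Paris."}]
response = client.chat.completions.create(model="gpt-4", messages=messages, tools=tools)
msg = response.choices[0].message

if msg.tool_calls:
    call = msg.tool_calls[0]
    result = estimate_bakery_costs(**json.loads(call.function.arguments))
    messages += [msg, {"role": "tool", "tool_call_id": call.id, "content": json.dumps(result)}]
    final = client.chat.completions.create(model="gpt-4", messages=messages, tools=tools)
    print(final.choices[0].message.content)
else:
    print(msg.content)
```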

OpenAI, Meta, Microsoft Chase Wearable AI

The pursuit of wearable AI technology by major tech giants like OpenAI, Meta (formerly Facebook), Microsoft, and others is gaining momentum. This trend is highlighted by their focus on smart glasses and other wearable devices, leveraging advancements in multimodal AI. This AI technology is capable of understanding not only text and audio but also images, hand gestures, and other visual inputs, broadening the scope of interaction and functionality in wearable tech.

The consumer electronics segment, particularly in the context of wearable AI, is seeing significant growth. This is driven by increasing consumer demand for smart wearables that track health vitals. Companies like Noise are introducing smartwatches with features such as AI voice assistance and health monitoring, reflecting a broader industry trend toward integrating advanced technology into everyday wearables.

The future of wearable AI is shaped by the convergence of consumer electronics, healthcare, and advanced AI technologies. Companies like Microsoft and Meta are leading the way with significant investments, partnerships, and open sharing of AI resources. This trend indicates a rapidly evolving landscape where AI-powered wearables are set to become an integral part of our daily lives, offering new capabilities and insights.

AI scientist Fei-Fei Li: ‘Maths is pretty clean. Humans are messy’

Fei-Fei Li, a Stanford professor and AI trailblazer, believes AI needs a human-centered approach. She's renowned for ImageNet, which revolutionized computer vision. Nowadays, big tech firms dominate AI, yet Li stresses universities' unique role in diverse, interdisciplinary research. She co-founded Stanford's Human-Centered AI Institute, emphasizing AI's societal impact over mere tech advancements. 

Li's work counters the tech industry's profit-driven focus, advocating for public sector involvement in AI, especially in academia. Despite resource challenges, she emphasizes universities' role in ethical AI development. Li's autobiography reflects her immigrant journey and commitment to diversifying AI, challenging male-dominated narratives and advocating for varied perspectives in science.

Sacramento State launches National Institute for Artificial Intelligence in Education

Sacramento State's stepping up big time in the AI game, launching the National Institute for Artificial Intelligence in Education. They're one of the first U.S. colleges diving into using AI in learning, but they're not just about the tech – they're big on doing it right and ethically. 

President Luke Wood's all about avoiding sci-fi horror scenarios, like "Terminator" gone wrong. They've got Alexander M. "Sasha" Sidorkin, a real brainiac in AI and education, leading the charge as the new AI boss. He's writing a book on using chatbots in colleges and is big on closing the gap in student success with AI. Plus, he's setting up programs to teach students the right way to use AI. Sac State isn't stopping there. 

They're hiring seven new computer whizzes focused on AI and quantum computing to create cool tools and figure out how to use this tech to help students learn better. In short, Sac State's making big moves to be at the forefront of AI in education, making sure it's used smartly and for the good of all.

The 3 Most Important AI Policy Milestones of 2023

This year saw major moves in AI policy. First up, in October, President Biden dropped a hefty executive order on AI. It's all about handling AI risks, job impacts, and privacy stuff. It's a big deal, but it's still a wait-and-see game on how it pans out.

Then, two days later, the UK hosted the first AI Safety Summit at Bletchley Park. Big names in AI and officials from 28 countries showed up. They all agreed AI's gotta be safe, but some folks weren't thrilled about who was and wasn’t at the table.

Finally, the EU's been cooking up its own AI Act. After a lot of back-and-forth, they decided to tighten the reins on big AI systems, banning some sketchy uses and slapping hefty fines for rule breakers.

Looking ahead, the U.S. Senate's eyeing its own AI laws, and the U.N.'s got plans for an AI advisory body. Big things are brewing in AI land, and 2024's gonna be a key year to watch.
