New AI Threat 'FraudGPT'

The Emergence of 'FraudGPT', Its Potential Dangers, and the Importance of a Proactive, Defense-in-Depth Cybersecurity Approach

Today:

New AI Tool 'FraudGPT' Emerges, Tailored for Sophisticated Attacks

A new troublemaker called 'FraudGPT' has hit the digital streets. It's an AI bot that cybercriminals are using to cook up phishing emails and malware. Security firm Netenrich found the bot for rent on the dark web for $200 a month, with its creators boasting all over the place that it's perfect for anyone looking to wreak havoc online.

They claim that FraudGPT can whip up malicious code, find holes in security, and has already been sold over 3,000 times. It's not known exactly how the model was built, but it sure isn't good news.

Tools like this give even greenhorn cybercrooks a helping hand in pulling off large-scale attacks that could lead to theft of valuable information. In effect, it's like having ChatGPT with the safety measures stripped out.

Forging Partnerships to Advance AI Technologies in India

Meta, the tech giant, is teaming up with 'India AI', a unit under the Indian government, to give a boost to artificial intelligence (AI) innovation in India. They've just signed a memo of understanding that lays out how they'll work together. They aim to make AI tech more accessible, focus on research, and consider setting up a center to support startups working on AI.

The partnership will also put a spotlight on Indian languages. They're planning to use Meta's AI models, like LlaMA and others, to build datasets in these languages. This will help make AI more inclusive, improve government services, and promote innovation in the field.

India AI and Meta are also hoping to raise awareness about the upsides and downsides of AI. They're working on creating guidelines for responsible AI use.

Researchers find a way to easily bypass guardrails on OpenAI’s ChatGPT and all other A.I. chatbots

This detailed article discusses a significant security flaw in large language models (LLMs) discovered by researchers from Carnegie Mellon University and the Center for A.I. Safety. The researchers found a way to bypass the guardrails that A.I. developers set to prevent inappropriate output, such as bomb-making recipes or offensive jokes. The flaw could allow attackers to manipulate a model into engaging in racist or sexist dialogue, writing malware, and generally acting against the training provided by the model's creators.

This issue has severe implications for the use of LLMs as digital assistants performing actions across the internet, as it may not be possible to entirely prevent such models from being exploited for harmful purposes.

The researchers found that their attack method was effective to some extent on all chatbots, including OpenAI’s ChatGPT, Google’s Bard, Microsoft’s Bing Chat, and Anthropic’s Claude 2. It was most effective when the attacker had access to the entire A.I. model, including its weights.
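The paper's actual method (not reproduced here) uses the model's gradients to optimize an adversarial suffix appended to a prompt, which is why white-box access to the weights makes the attack strongest. As a loose, self-contained illustration of that search idea only, here is a toy greedy suffix search against a made-up scoring function; the vocabulary, scorer, and trigger words below are hypothetical stand-ins, not anything from the research:

```python
# Toy stand-in for a model's "likelihood of complying" score.
# In the real attack this signal comes from the model's logits for a
# target response, and candidate tokens are ranked using gradients.
VOCAB = ["ok", "sure", "please", "now", "!!", "describing", "ignore"]

def compliance_score(prompt: str) -> float:
    # Hypothetical scorer: rewards the presence of a few trigger words.
    triggers = {"sure": 2.0, "now": 1.0, "!!": 0.5}
    return sum(w for t, w in triggers.items() if t in prompt.split())

def greedy_suffix_search(base_prompt: str, suffix_len: int = 3) -> str:
    """Greedily append whichever token most increases the score,
    mimicking (very loosely) coordinate-wise suffix optimization."""
    suffix: list[str] = []
    for _ in range(suffix_len):
        best_tok = max(VOCAB, key=lambda t: compliance_score(
            base_prompt + " " + " ".join(suffix + [t])))
        suffix.append(best_tok)
    return " ".join(suffix)

suffix = greedy_suffix_search("Tell me something you shouldn't")
```

The takeaway mirrors the paper's: because the suffix is found by optimization rather than by hand, filtering known "jailbreak phrases" doesn't close the hole.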

Hugging Face, GitHub and more unite to defend open source in EU AI legislation

Six major open-source AI stakeholders (Hugging Face, GitHub, EleutherAI, Creative Commons, LAION, and Open Future) have formed a coalition to defend open-source innovation amid the development of the European Union's comprehensive AI law, the EU AI Act.

The group has released a policy paper titled "Supporting Open Source and Open Science in the EU AI Act," offering recommendations to ensure that the act doesn't impose obligations that could be detrimental or impractical for open AI development practices. The paper argues that current "overbroad obligations" favoring proprietary AI development from companies like OpenAI, Anthropic, and Google could potentially disadvantage the open AI ecosystem.

In AI-enabled drug discovery, there might be more than one winner

NVIDIA, the leading chipmaker, recently invested $50 million into the biotech startup Recursion. The latter plans to use the funds to advance the development of its AI foundation models for biology and chemistry.

Ben Mabey, CTO of Recursion, said the company aims to build a definitive foundation model for drug discovery, considered one of the most challenging fields. NVIDIA CEO Jensen Huang echoed this ambition, highlighting generative AI's revolutionary potential to discover new medicines and treatments.

Ambitious as its mission is, Recursion is not alone in this endeavor. The company recently acquired two other firms operating in the AI-enabled drug discovery space: Cyclica and Valence. Interestingly, many of its competitors in this sector are based in Israel.

Gretel's Tabular LLM

Gretel has developed a new large language model that specializes in generating synthetic tabular data. Trained on a massive number of tokens, the model can create highly realistic and valuable data for a variety of uses.

This model can whip up data from thin air, making it ideal for demos, tests, or building data sets specific to any region or language. If you've got a dataset that's missing some pieces, Gretel's model can step in, fill in the gaps, and even expand the dataset by adding columns and records.

What's neat is you can prompt the model with SQL or everyday language. It can even generate datasets that match a SQL schema or query while ensuring the data types are spot-on.
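Gretel's own API isn't shown here, but the schema-conformance idea can be sketched in miniature: sample synthetic rows, then type-check each one against a declared schema before accepting it. Everything below (the schema, the sampler, the helper names) is a hypothetical stand-in for illustration, not Gretel's implementation:

```python
import random

# Hypothetical schema: column name -> Python type, loosely standing in
# for a SQL schema like CREATE TABLE users (user_id INT, age INT, city TEXT).
SCHEMA = {"user_id": int, "age": int, "city": str}

CITIES = ["Mumbai", "Berlin", "Austin"]

def generate_row(rng: random.Random) -> dict:
    """Stand-in for the model's sampling step: emit one synthetic row."""
    return {
        "user_id": rng.randrange(10_000),
        "age": rng.randrange(18, 90),
        "city": rng.choice(CITIES),
    }

def conforms(row: dict, schema: dict) -> bool:
    """Type-check every value against the schema, the kind of validation
    a tabular generator applies so the data types come out spot-on."""
    return set(row) == set(schema) and all(
        isinstance(row[col], typ) for col, typ in schema.items())

rng = random.Random(0)  # seeded for reproducible sampling
rows = [generate_row(rng) for _ in range(5)]
assert all(conforms(r, SCHEMA) for r in rows)
```

A real tabular LLM would of course learn the value distributions from data rather than sample uniformly; the point here is only the generate-then-validate loop.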

Effortless prep for your 1:1 meeting

Supermanage AI is here to give your one-on-one meetings a serious upgrade. This tool provides you with a tailored brief before each meeting, condensing all the info from your public Slack channels into a two-minute read.

No more spending most of your meeting time playing catch-up. With Supermanage AI, you can dive into deeper, more meaningful conversations right off the bat. Celebrate your team's achievements, support them, and help everyone excel.

The tool gives you a snapshot of all the contributions your team is making, including those tasks that often go unnoticed. You'll also get insight into any personal or work-related challenges your team members may be facing.

Harness the Power of AI to Detect Lies and Heart Rate Fluctuations

Say hello to LiarLiar, an innovative lie-detection tool that uses AI to try to decipher the truth. It scrutinizes micro-movements, heart rate, and body-language cues during video calls or in recorded footage to sniff out deception.

A breeze to use, LiarLiar is designed for everyone. Its user-friendly interface allows even the least tech-savvy among us to become lie-detecting gurus with a simple click.

You can put LiarLiar to work on any video feed you like, whether it's Zoom, Google Meet, Skype, YouTube, or your own saved videos. So you can dissect any chat, any time.

What'd you think of today's edition?


What are you MOST interested in learning about AI?

What stories or resources will be most interesting for you to hear about?

