Dark Forest: AI vs Humanity

PLUS: Scientists Unite Against A.I. Bioweapons, Cultural Bias in LLMs and more.


The Internet Goes EXTINCT as Gen AI Takes Over | The Dark Forest Internet & Proving Your "Humanness"

The Dark Forest hypothesis suggests that the reason we haven't discovered extraterrestrial life is that advanced civilizations stay silent to avoid detection by potentially hostile entities. Online, it parallels how genuine interaction is buried under bots, ads, and fake content. As AI evolves, distinguishing real from generated content becomes harder, transforming the web into a 'dark forest' where authentic voices are rare.

This concern extends to fears of AI overpowering human uniqueness. Strategies to preserve human identity in this digital wilderness include leveraging our comprehensive sensory and cultural experiences, beyond what AI can mimic. Essentially, it's about ensuring the web remains a space where humanity, not just algorithms, thrives. 

Dozens of Top Scientists Sign Effort to Prevent A.I. Bioweapons

More than 90 scientists, including Nobel laureate Frances Arnold, have joined forces to limit the dangers of A.I. in creating bioweapons while boosting its benefits in biology, like making new meds and vaccines. Last year, Anthropic's CEO warned Congress that A.I. could let the wrong hands whip up massive bioattacks, stirring a mix of concern and debate. 

These experts are now focusing on keeping DNA-making gear—the key to turning A.I. protein designs into real-world threats—under tight watch. They're not looking to halt A.I.'s growth but to guide its safe use, ensuring the tech's positive impact outweighs the bad, without putting its potential in a stranglehold.

Insilico Medicine unveils first AI-generated and AI-discovered drug in new paper

Insilico Medicine, a biotech startup, has unveiled what it claims is the first AI-discovered drug, now in Phase II trials. The drug, INS018_055, targets idiopathic pulmonary fibrosis, a lung disease. It's a breakthrough in AI-driven drug discovery, using biology AI to find the target and generative chemistry AI to create the molecule.

This progress, achieved in just two and a half years at a fraction of traditional costs, showcases the potential of AI to revolutionize drug development. Insilico's success, backed by investors like Sinovation Ventures, signifies a significant milestone in the field, with implications for faster, cheaper drug development.

Microsoft begins blocking some terms that caused its AI tool to create violent, sexual images

Microsoft has begun adjusting its Copilot AI tool after concerns were raised about its image-generation capabilities. Certain prompts, including terms related to controversial topics like abortion and drugs, are now blocked. Additionally, there's a warning about policy violations leading to tool suspension. Microsoft aims to strengthen safety filters to prevent misuse. 

Despite these changes, issues like generating violent or sexualized images persist, prompting further scrutiny from regulators and stakeholders. Microsoft's response underscores the ongoing challenge of ensuring responsible AI development and usage in the face of evolving capabilities and potential risks.

LLMs exhibit significant Western cultural bias, study finds

A study by researchers at Georgia Tech reveals significant cultural bias in large language models (LLMs), even when prompted in Arabic or trained on Arabic data. LLMs tend to favor Western concepts and entities, impacting users from non-Western cultures.

The study introduces CAMeL, a benchmark dataset for assessing cultural biases, and suggests hiring diverse data labelers and exploring technical solutions to mitigate bias. Addressing these challenges requires collaboration among researchers, AI developers, and policymakers to ensure culturally aware AI systems promote inclusivity and understanding globally.


PixArt-Σ, a cutting-edge model, generates 4K images directly, outdoing its predecessor with sharper, better text-aligned images. It improves through "weak-to-strong training," using higher-quality data and an efficient attention module. Despite being smaller in size, it tops bigger models in quality, making it a fit for industries like film and gaming.

Teaching Large Language Models (LLMs) to reason using Reinforcement Learning (RL) methods, particularly Reinforcement Learning from Human Feedback (RLHF), proves effective in aligning LLM outputs with human preferences. Various algorithms, including Expert Iteration, PPO, and Return-Conditioned RL, enhance LLM reasoning capabilities with both sparse and dense rewards, yielding broadly comparable performance. Expert Iteration performs best overall, with sample complexity similar to PPO, indicating potential for RL in LLM fine-tuning.
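The core loop of Expert Iteration with a sparse reward is simple: sample candidate solutions from the current policy, keep only the ones the reward judges correct, and fine-tune the policy on those "expert" samples. As a rough illustration only (not the paper's setup), here is a toy sketch where the "policy" is just a categorical distribution over candidate answers and fine-tuning is an interpolation toward the kept samples:

```python
import random

random.seed(0)

# Toy stand-in for an LLM policy: a categorical distribution over answers.
# The reward is sparse: 1 if the sampled answer is correct, else 0.
CORRECT = "42"
candidates = ["41", "42", "43"]

def sample(policy):
    """Draw one answer from the current policy (a dict of probabilities)."""
    r, acc = random.random(), 0.0
    for answer, p in policy.items():
        acc += p
        if r < acc:
            return answer
    return answer  # numerical fallback

def expert_iteration(policy, rounds=5, samples_per_round=200, lr=0.5):
    for _ in range(rounds):
        # 1. Sample candidate solutions from the current policy.
        draws = [sample(policy) for _ in range(samples_per_round)]
        # 2. Filter with the sparse reward: keep only correct samples.
        experts = [a for a in draws if a == CORRECT]
        if not experts:
            continue
        # 3. "Fine-tune": interpolate the policy toward the expert
        #    sample distribution (a stand-in for a gradient step).
        target = {a: experts.count(a) / len(experts) for a in policy}
        policy = {a: (1 - lr) * p + lr * target[a]
                  for a, p in policy.items()}
    return policy

uniform = {a: 1 / len(candidates) for a in candidates}
trained = expert_iteration(uniform)
print(round(trained[CORRECT], 3))  # mass concentrates on the correct answer
```

In a real setup, step 3 would be supervised fine-tuning of the LLM on its own correct reasoning traces; the toy interpolation just makes the sample-filter-train cycle visible.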

The Yi model family, developed by 01.AI, encompasses language and multimodal models, including chat and vision-language models. Leveraging base models pretrained on large corpora, Yi models achieve impressive performance on benchmarks like MMLU. Finetuned chat models excel in human preference rates on platforms like AlpacaEval and Chatbot Arena, attributing success to high-quality data engineering efforts. With scalable infrastructure and transformer architecture, Yi models benefit from extensive pretraining on 3.1 trillion tokens of English and Chinese corpora, and meticulous finetuning on small-scale instruction datasets. 

Despite recent advancements by Vision-Language Models (VLMs) such as GPT-4V across a range of vision-language tasks, this exploration into visual deductive reasoning uncovers significant limitations. Assessing VLMs on Raven's Progressive Matrices (RPMs) reveals blind spots in their ability to perform multi-hop relational and deductive reasoning based solely on visual cues. Comprehensive evaluations on diverse datasets indicate that while VLMs excel at text-based reasoning, they struggle with visual deductive reasoning due to difficulties in perceiving and comprehending abstract patterns.

The study explores tool-augmented Large Language Models (LLMs), focusing on their ability to learn and utilize tools accurately. Existing models, including GPT-4, exhibit only 30% to 60% correctness in tool usage. A biologically inspired approach, simulated trial and error (STE), leveraging imagination and memory, significantly improves tool learning, boosting Mistral-Instruct-7B by 46.7% and surpassing GPT-4 performance.
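The STE idea is that an agent improves at a tool by probing it, recording each trial's outcome in memory, and reusing what worked. The sketch below is a heavily simplified illustration of that explore-remember-exploit cycle, with a made-up `unit_convert` function standing in for an unfamiliar API (none of these names come from the paper):

```python
# Hypothetical "tool" whose exact interface the agent must learn by trial.
def unit_convert(value, unit):
    if unit not in ("km", "mi"):
        raise ValueError("unsupported unit")
    return value * 1.60934 if unit == "mi" else value / 1.60934

memory = []  # long-term memory of past trials and their outcomes

def trial(value, unit):
    """Exploration phase: attempt a call and remember whether it worked."""
    try:
        result = unit_convert(value, unit)
        memory.append({"unit": unit, "ok": True})
        return result
    except ValueError:
        memory.append({"unit": unit, "ok": False})
        return None

# Exploration: imagined queries probe different argument guesses.
for guess in ("miles", "mi", "kilometers", "km"):
    trial(10, guess)

# Exploitation: later calls consult memory for known-good usage.
known_good = {m["unit"] for m in memory if m["ok"]}
print(sorted(known_good))  # ['km', 'mi']
```

In the actual method, the exploration queries are imagined by the LLM itself and the accumulated trials become fine-tuning or in-context examples; the sketch only shows the memory-driven loop.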


Customers.ai - improves full-funnel marketing performance with visitor identification, B2C & B2B prospecting data and AI-powered remarketing

Chatfuel - messaging platform designed to automate business communication on Facebook, WhatsApp, Instagram, and websites.

DeepL - online translation service renowned for its accuracy and fluency in translating between numerous languages.

Seventh Sense - AI software designed to drive maximum performance and engagement with your existing email marketing program.

Ava - a digital sales representative designed to automate the end-to-end cold email sales process. 

NotesGPT - AI-powered tool that transforms voice notes into structured summaries and actionable items. 

Mindy - AI-driven "Chief of Staff" accessible via email, designed to tackle a wide range of tasks from research and shopping to scheduling meetings.


Researchers enhance peripheral vision in AI models

MIT researchers enhance AI models' peripheral vision to mimic human abilities, aiding driver safety and understanding human behavior. They develop a dataset simulating peripheral vision, improving AI object detection. Results show AI still lags behind humans, prompting further exploration to bridge this gap. The study offers insights for AI safety and human-machine interactions. MIT

AI passes computer science course with 100% accuracy

An AI system excels in BYU's Computer Science 110, sparking debates on AI's role in education. Concerns over academic integrity arise amidst widespread AI use. BYU professor Porter Jenkins emphasizes balancing technological advancements with integrity, advocating for embracing AI while upholding academic standards. The incident underscores the evolving landscape of education. THE DAILY UNIVERSE

AI-generated cloud looms over Hollywood’s ‘Barbenheimer’ Oscars 

Hollywood's Oscars face AI disruption as the 'Barbenheimer' cloud emerges. AI-generated films challenge traditional cinema, impacting storytelling and actors' roles. The industry grapples with technological advances, blurring lines between authenticity and artificiality. Hollywood's future hinges on navigating this digital revolution and preserving its essence amid AI's encroachment. FINANCIAL TIMES

Adobe Finds AI Hype Is a Two-Edged Sword

Adobe's rapid adoption of AI boosted its stock, but recent disappointment in AI contributions led to a 10% drop in shares. Concerns spiked further with OpenAI's Sora tool announcement. Despite exaggerated worries, Adobe's competitive threat remains manageable. However, its inflated valuation renders its stock vulnerable. THE WALL STREET JOURNAL

AI is Driving Record Sales at Multibillion-Dollar Databricks. An IPO Can Wait …

Databricks, a data powerhouse, flaunts a $1.6 billion revenue, eyeing an IPO amid a record sales surge. CEO Ali Ghodsi touts readiness but waits for the perfect IPO window. With AI driving growth, Databricks invests in ventures like Glean. Market timing, amidst IPO uncertainty, dictates the waiting game. THE WALL STREET JOURNAL

The Nvidia Chips Inside Powerful AI Supercomputers

Nvidia's graphics processing units (GPUs), initially designed for gaming, now power AI systems like ChatGPT. Unlike traditional chips, GPUs handle multiple computations simultaneously through parallel processing, making them crucial for AI tasks since around 2012. THE WALL STREET JOURNAL

AI Talent Is in Demand as Other Tech Job Listings Decline

Amidst declining tech job listings, demand for AI talent is soaring in the US. Companies are actively recruiting professionals in artificial intelligence, offering higher pay. While the pandemic prompted layoffs across industries, AI-related roles have remained resilient, according to job listings data. THE WALL STREET JOURNAL

More Workers Need Help To Become AI-Literate, LinkedIn Study Shows

As AI reshapes work, AI literacy is crucial. Leaders must grasp AI's impact and promote continuous learning. A LinkedIn study reveals a lack of AI preparation in the UK and Europe. With AI skills becoming pivotal, fostering a learning culture is vital for talent retention and organizational success. Leaders should lead by example, integrate AI into strategies, encourage collaboration, and address ethical concerns to thrive in the AI-driven future. FORBES

Google Gemini Tutorial: How to Use Gemini AI (With Images)

Google Gemini, Google's AI chatbot, competes with ChatGPT, offering free usage and advanced paid plans. Users need a Google account to access it. Gemini focuses on prompts for actions, such as generating images. Management of data usage is possible, and ethical concerns exist. Gemini Advanced offers enhanced features for a fee. TECH.CO

What'd you think of today's edition?

Login or Subscribe to participate in polls.

What are you MOST interested in learning about AI?

What stories or resources will be most interesting for you to hear about?
