
Tech Leaders Meet to Discuss Responsible AI Development Amidst Calls for Pause

Top Executives from OpenAI, Microsoft, Google, Apple, Nvidia, and Others Join Forces to Set Standards for Safe AI Use and Address Growing Concerns

Today:

Top Tech Executives to Hold Council on AI Guardrails Amid Calls for Development Pause

Top tech executives and staffers from OpenAI, Microsoft, Google, Apple, Nvidia, Stability AI, Hugging Face, and Anthropic will meet to discuss setting standards for responsible AI development. The meeting, convened by venture capital firm SV Angel, comes after prominent tech innovators published an open letter calling for a temporary pause in AI development while safety protocols are established. Meanwhile, U.S. lawmakers have commented on the need for AI regulation, but no major legislation has been passed. Sen. Lindsey Graham, R-SC, has called for more regulation of both the use and development of AI in the US.

OpenAI: Our Approach to AI Safety

OpenAI has stated that the best approach to AI safety is to dedicate more time and resources to researching effective mitigations and alignment techniques and to testing them against real-world abuse, and that improving AI safety and capabilities should go hand in hand. The company says it will be increasingly cautious in creating and deploying more capable models and will continue to strengthen safety precautions as its systems evolve, cautioning that this work may take longer than the six-month pause proposed in the open letter. OpenAI also argues that policymakers and AI providers must ensure AI development and deployment is governed effectively on a global scale, and that addressing safety issues requires extensive debate, experimentation, and engagement, including on the bounds of AI system behavior. To that end, the company says it fosters collaboration and open dialogue among stakeholders to create a safe AI ecosystem.

Users Who Spot Bugs In ChatGPT Can Now Make Up To $20,000

OpenAI is now offering rewards of between $200 and $20,000 to users who find bugs in its ChatGPT plugins, API, and other related services. The program comes after a recent data breach exposed private data of ChatGPT Plus users, and amid growing concerns over privacy risks. The bug bounty program will be managed by Bugcrowd, with reward amounts depending on the severity of the bug found; there are also rules and guidelines specifying what will not be rewarded. Four vulnerabilities had already been rewarded at the time of publication. Let the bug hunting begin!

AI: China Tech Giant Alibaba To Roll Out ChatGPT Rival

Alibaba, the Chinese tech giant, plans to introduce its own generative AI chatbot, Tongyi Qianwen, to be rolled out across its products, including its cloud computing unit, the DingTalk workplace messaging app, and Tmall Genie, a voice assistant similar to Amazon's Alexa. Generative AI chatbots have surged in popularity since the release of ChatGPT by Microsoft-backed OpenAI; Microsoft itself has announced that it will integrate a version of ChatGPT into its Office apps, and Google and Baidu have recently released similar AI models and chatbots. Chinese authorities have proposed draft measures to manage generative AI that would hold companies responsible for the legitimacy of the data used to train the technology. Meanwhile, industry figures have raised concerns about the potential risks of developing powerful AI systems.

As Critics Circle, Sam Altman Hits The Road To Hype OpenAI | The AI Beat

OpenAI CEO Sam Altman is embarking on a global spring tour to promote OpenAI, which includes stops in 17 cities worldwide. Altman has already met with Japan's prime minister to discuss opening an OpenAI office and expanding its services in the country. The tour comes at a time when OpenAI is facing criticism, including a contentious open letter calling for an AI 'pause', Italy's ban on OpenAI's ChatGPT due to data privacy concerns, and a complaint that GPT-4 violates FTC rules. As regulators start circling and critics get louder, a "round-the-world goodwill tour" may be just what OpenAI needs.

Attackers Hide RedLine Stealer Behind ChatGPT, Google Bard Facebook Ads

Cybercriminals are taking advantage of Facebook ads to distribute the info-stealing malware RedLine Stealer by posing as AI chatbots, including ChatGPT and Google Bard. Veriti researchers detected the campaign in January and tracked it through its peak in March. RedLine Stealer is a malware-as-a-service platform that targets browsers to collect user data, including passwords and payment card details; it can also perform other malicious functions, such as uploading and downloading files and executing commands. At $100 to $150 on the dark web, the malware is an inexpensive choice that gives attackers a significant return on investment. Veriti researchers recommend a comprehensive approach to cybersecurity to combat this type of attack.

Two Google Employees Tried to Stop the Company from Launching Bard, Citing Concerns About Inaccurate and Dangerous Responses

Two Google employees raised concerns about the company's AI development, specifically the release of its chatbot Bard, which they felt generated dangerous or false statements. The employees recommended blocking Bard's release, but the director of Google's Responsible Innovation group altered their risk evaluation document to downplay the risks and move forward with a limited experimental launch. The incident comes amid a rush to deploy generative AI products, which has prompted several AI heavyweights to call for a six-month pause on advanced AI development. AI development has accelerated at an unprecedented pace, and experts warn that we are not ready for the impact this technology might have.

What'd you think of today's edition?


What are you MOST interested in learning about AI?

What stories or resources will be most interesting for you to hear about?
