OpenAI Forms Expert Team To Study ‘Catastrophic’ AI Risks, Including Nuclear Threats

OpenAI today announced that it’s created a new team to assess, evaluate and probe AI models to protect against what it describes as “catastrophic risks.”

Aleksander Madry, the head of MIT’s Center for Deployable Machine Learning, will serve as the team’s leader. The group is named Preparedness. (Madry joined OpenAI in May as “head of Preparedness,” per LinkedIn.) Preparedness’s main duties will include monitoring, predicting, and guarding against the threats posed by future AI systems, which range from their capacity to generate malicious code to their capacity to deceive and manipulate people (as in phishing attacks).

Some of the risk categories Preparedness is tasked with studying seem more… far-fetched than others. For instance, in a blog post, OpenAI cites “chemical, biological, radiological, and nuclear” threats as top areas of concern with regard to AI models.

Sam Altman, the CEO of OpenAI, is a well-known AI doomsayer who frequently voices fears, whether out of personal conviction or for optics, that AI “may lead to human extinction.” But signaling that OpenAI would actually dedicate resources to studying scenarios straight out of dystopian science fiction goes further than this writer expected.

The company says it is also open to studying “less obvious,” more grounded areas of AI risk. To coincide with the launch of the Preparedness team, OpenAI is inviting community members to submit ideas for risk studies. The top ten entries will receive a $25,000 prize and an opportunity to work at Preparedness.

One of the questions on the contest entry reads: “Imagine we gave you unrestricted access to OpenAI’s Whisper (transcription), Voice (text-to-speech), GPT-4V, and DALL·E 3 models, and you were a malicious actor. Consider the most unique, while still being probable, potentially catastrophic misuse of the model.”

According to OpenAI, the Preparedness team will also be tasked with creating a “risk-informed development policy,” which will outline the company’s approach to building AI model evaluations and monitoring tools, its risk-mitigation strategies, and its governance structure for oversight throughout the model development process. The company says this is intended to complement its existing work on AI safety, with a focus on both the pre- and post-deployment phases of a model.
