
Amid concerns about the safety of advanced AI systems, OpenAI CEO Sam Altman announced that the company’s next major generative AI model will undergo safety checks by the U.S. government. Altman said that OpenAI has been collaborating with the U.S. AI Safety Institute to ensure the safety of its upcoming model and to advance the science of AI evaluation.

OpenAI has been a prominent name in the AI industry, known for products like ChatGPT and other foundation models. However, the company has faced criticism for its fast-paced approach, with some former employees raising concerns about an insufficient focus on safety in advanced AI research. In response, U.S. senators wrote to Altman questioning OpenAI’s commitment to safety and asking about reports of retribution against employees who raised concerns publicly.

In a letter responding to the senators, OpenAI’s chief strategy officer reaffirmed the company’s dedication to developing AI for the benefit of humanity and to implementing rigorous safety protocols. OpenAI has also taken steps such as allocating 20% of its computing resources to safety research, eliminating non-disparagement clauses from employment agreements, and partnering with the AI Safety Institute on safe model releases.

The U.S. government, through the AI Safety Institute, is working with OpenAI and other tech companies to address risks associated with advanced AI. OpenAI has similar agreements with the U.K. government for safety screening of its models. Safety concerns for OpenAI intensified in May when key members of the superalignment team resigned, citing a lack of focus on safety culture and processes.

Despite these challenges, OpenAI has continued to release new products and has formed a safety and security committee to review its processes and safeguards. The committee is led by industry experts and aims to enhance OpenAI’s safety measures and ensure responsible AI development.

Overall, OpenAI’s collaboration with government bodies and its commitment to safety research demonstrate its efforts to address concerns about the safety of advanced AI systems. By working with experts and regulatory bodies, OpenAI aims to prioritize safety in its AI development efforts and contribute to the responsible advancement of artificial intelligence.