OpenAI Forms New Committee for Safety and Security
OpenAI has announced the establishment of a new Safety and Security Committee. The move is aimed at strengthening how the organization makes critical safety and security decisions across its projects and operations.
The committee will recommend safety and security procedures to the full board and help put effective processes in place across OpenAI's development work, particularly as the company begins training its next frontier model.
OpenAI Introduces Safety and Security Oversight
The new committee is led by Bret Taylor, with members including OpenAI CEO Sam Altman, Adam D'Angelo, and Nicole Seligman. Its first task is to evaluate and improve OpenAI's safety and security practices.
The committee is expected to deliver its first recommendations within 90 days, which will shape the safety measures applied to OpenAI's projects. Its formation signals that OpenAI intends to keep safety a priority as it pursues more capable artificial intelligence technologies.
OpenAI Board forms Safety and Security Committee, responsible for making recommendations on critical safety and security decisions for all OpenAI projects. https://t.co/tsTybFIl7o
— OpenAI (@OpenAI) May 28, 2024
The announcement follows OpenAI's recent start of training on its next AI model, intended to succeed the GPT-4 system that currently powers its ChatGPT chatbot. The organization has stated its commitment to leading not only in capability but also in safety, acknowledging the potential risks of advanced AI development.
What Led to This Move?
The formation of the Safety and Security Committee comes at a moment when AI safety has become a major topic of debate across the technology industry.
Some observers have interpreted OpenAI's decision to formalize this committee as a response to ongoing controversy over its AI safety standards, particularly after several employees resigned or publicly criticized the organization.
Jan Leike, a former OpenAI employee, has previously voiced concerns that the company appears to value product development over safety measures.
The new committee is one of the steps OpenAI is taking to preserve its innovative character while keeping safety among its top priorities in the development process.