
The safety of all OpenAI AI models will be overseen by an "independent committee"

Sam Altman, head of the American company OpenAI, has decided to step down from the internal commission established in May 2024 to oversee the safety of the artificial intelligence models the company develops. The new committee will be reconstituted from current members of the OpenAI board of directors and will have the authority to delay a product's release if it identifies a safety risk.

According to journalists, a crisis has been building at OpenAI over the past several months. Against this backdrop, roughly half of the employees working on the long-term risks of new OpenAI products have left the company. Several former senior staff accused Sam Altman of opposing meaningful AI regulation when it conflicted with corporate goals. The critics' claims were partly borne out by a sharp increase in OpenAI's lobbying spending: over the first six months of this year, the budget grew from $260,000 to $800,000.

The new safety committee at OpenAI is headed by Carnegie Mellon University professor Zico Kolter.

The new independent committee's main responsibilities will be to receive and study regular reports containing technical safety assessments of existing AI models and those in development. The committee will also review post-release monitoring reports on shipped products. It is already known that the independent commission reviewed the safety of the company's latest model, o1.

At the same time, TechCrunch journalists are skeptical about the new committee's independence: even with Sam Altman off the commission, there is little indication that its members will make decisions unfavorable to the company.

OpenAI's official website states that information security is one of the most important components of AI safety. The company therefore intends to continue applying a risk-based approach to its security measures and to evolve that approach as the threat models and risk profiles associated with AI systems change.
