
OpenAI begins training its next AI model and creates a Safety Committee to guide critical decisions


May 28. (Portaltic/EP) –

OpenAI has started training its next Artificial Intelligence (AI) model, with which it anticipates reaching the “next level of capabilities” on the path towards Artificial General Intelligence (AGI). Alongside this advance, it has announced the creation of a new Safety Committee, which will make recommendations on critical safety and security decisions for the company’s projects and operations.

Earlier this May, the technology company launched its latest AI model, GPT-4o, which is faster and more capable than previous versions: it delivers responses in an average of 320 milliseconds (similar to human response time in conversation) and accepts any combination of text, audio and images.

However, the company led by Sam Altman continues to advance the development of its AI models and, although its intention is to keep launching models that lead the industry in both capabilities and safety, it has also said it wants to encourage “a robust debate at this important moment” about the company’s latest advances.

Against this backdrop, OpenAI has shared that it “recently” began training its next AI model, which it describes as its “next frontier model” and which is expected to be the successor to GPT-4. The company says this model will give it the technology to reach the “next level of capabilities” on the path to AGI.

In light of this development, the technology company has announced the creation of a new Safety Committee, which will be responsible for evaluating and developing the company’s processes and safeguards and then making recommendations to the Board on “critical safety and security decisions” related to OpenAI’s projects and operations.

OpenAI shared this in a statement on its website, in which it detailed that the Safety Committee is led by Bret Taylor, Adam D’Angelo and Nicole Seligman, together with the company’s CEO, Sam Altman.

Specifically, over the next 90 days the Safety Committee will evaluate and further develop the processes and safeguards that OpenAI currently has in place. After this period, it will present its recommendations to the OpenAI Board and will then publicly share an update on the recommendations that are adopted.

All of this falls under OpenAI’s goal of making the company’s safety measures for new technologies more consistent. The company has also stressed that the Committee will be supported by technical and security experts, including former cybersecurity officials Rob Joyce and John Carlin.
