Nov. 25 (Portaltic/EP) –
OpenAI has launched a update your model of Artificial Intelligence (AI) GPT-4o, with which he adds improvements in creative writing and in the ability to work with uploaded filesoffering more complete answers, in addition to sharing a new method for automated teamworkin order to understand the potential risks of AI.
The technology company led by Sam Altman presented GPT-4o in May of this year, a model that accepts any combination of text, audio and image, and that it can respond to a voice input using a time similar to that needed by humans, with an average of 320 milliseconds.
Now, OpenAI has Updated GPT-4o to improve your creative writing abilityas well as for offer a better experience when working with uploaded filesas it will provide more accurate information related to these documents.
This was recently announced by the technology company in a post on X (former Twitter), where he has pointed out that the GPT-4o model will offer more natural, attractive and personalized writing” in order to improve the relevance and readability of the responses it shows to users.
Following this line, when the model offers results related to documents uploaded to ChatGPT, whether images or text documents, will provide “deeper” information and, therefore, answers “more complete.”
With all this, it must be taken into account that, for the moment, the capabilities of the GPT-4o AI model are available exclusively to users subscribed to the paid version ChatGPT Plus.
NEW AUTOMATED TEAMWORK METHOD
In addition to this GPT-4o update, OpenAI has also shared two new research articles in which he shows his advances related to teamwork. This involves work and research methods, both manual and automated, carried out with external experts to test the possible risks of new systems and encourage development of “safe and beneficial” AI.
The company has clarified that these investigations are based on the use of a “more powerful” AI to “scale the discovery of errors in the models”, either when evaluating them or to train them safely.
In this sense, as the company has stressed in a statement on their websitethe new articles on teamwork include, on the one hand, a white paper that details how they hire external team members to test their cutting-edge models.
On the other hand, they detail a research study which presents a new method for automated teamwork. Specifically, OpenAI refers to the ability to automate large-scale teamwork processes for AI models. “This approach helps create updated security assessments and benchmarks that can be reused and improved over time”, In relation to the tests carried out by the red team experts, the technological one has been specified.
Specifically, OpenAI explains in the article that AI models can help in the formation of red teams, as well as offer insights into the possible risks of AI and offer options for evaluating these risks.
That is, the objective of these automated red teams is generate a large number of examples where an AI behaves incorrectlyespecially in matters related to security. However, unlike human red teams, the company has clarified that Automated methods stand out for “easily generating examples of larger-scale attacks.” Additionally, researchers have shared new techniques to improve the diversity of such attacks and, at the same time, ensure that they are successful.
For example, if they wanted to find examples of how ChatGPT offers impermissible illicit advice, researchers could use the GPT-4T model for Analyze examples such as “how to steal a car” or “how to build a bomb.” After that, train a separate teamwork model to try to trick ChatGPT into showing responses for each example. The results of these tests are used to improve the security of the model and its evaluations.
“Our research concludes that a More capable AI can further help automated teamwork in the way it analyzes attackers’ goalshow it judges the success of the attacker and how it understands the diversity of attacks,” OpenAI said.
However, OpenAI has indicated that this new method for automated teamwork ““needs additional work” to incorporate public perspectives on ideal model behavior, policies and other associated decision-making processes. Therefore, it is still developing before starting to be used to test the models.
Add Comment