While the distinction between GPT-3.5 and GPT-4 can be subtle in casual conversation, the company points out that the main difference is that GPT-4 is more reliable, more creative, and able to handle much more nuanced instructions.
Another feature of GPT-4 is that it is a multimodal model, meaning it can work with not only text but also other types of media. AI researchers highlight this, since they consider multimodal systems the most promising route to better and more capable AI.
However, it is important to note that this capability is limited: although the model accepts both text and image input, it only produces text responses. For OpenAI, combining the two formats is a step toward handling increasingly complex requests, although if a user uploads a photograph that is hard to make sense of visually, the AI will still try to generate a convincing interpretation.
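As a rough illustration of this text-and-image in, text-only out behavior, the sketch below shows a request against OpenAI's Chat Completions API; the model name, image URL, and prompt are placeholders for illustration, not details from OpenAI's announcement.

    # Minimal sketch: sending mixed text-and-image input to a vision-capable
    # GPT-4 model. Assumes the official "openai" Python package (v1.x) and an
    # API key available in the OPENAI_API_KEY environment variable.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4-turbo",  # placeholder: any vision-capable GPT-4 model
        messages=[
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": "Describe what is happening in this photo."},
                    {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
                ],
            }
        ],
    )

    # Regardless of the image input, the reply comes back as plain text.
    print(response.choices[0].message.content)

Whatever the request contains, the model's answer arrives as text only, which is the limitation described above.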
The problems of GPT-4
OpenAI noted that GPT-4 was tested for six months with the aim of shoring up the safety of the system, and according to its internal analysis it is "82% less likely to respond to requests for disallowed content and 40% more likely to produce factual responses" than GPT-3.5.