
OpenAI Introduces the GPT-4 Model: It Can Understand Not Only Text but Also Images

OpenAI has introduced GPT-4, a new language model that outperforms the current GPT-3.5 (the model behind ChatGPT) in a number of ways. Its headline capability is understanding and answering more complex questions, made possible by the system's broader knowledge.

Unlike GPT-3.5, GPT-4 can understand not only text but also images. For now, this capability is being tested by a single partner, Be My Eyes, a service for blind and visually impaired people. The app has gained a "Virtual Volunteer" feature that can answer questions about images sent to it: for example, describing the contents of a refrigerator from a photo and suggesting recipes based on the available products.
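To illustrate what "answering a question about an image" might look like in practice, here is a minimal sketch of a multimodal request using the OpenAI Python SDK's chat completions API. Image input was not publicly available at the time of the GPT-4 announcement, so treat this as an assumption: the model name and image URL below are placeholders, not values from the article.

    # Minimal sketch: ask a vision-capable model about a photo of a refrigerator.
    # Assumes the openai Python package (v1+) and OPENAI_API_KEY in the environment.
    from openai import OpenAI

    client = OpenAI()  # API key is read from the environment

    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder: any vision-capable chat model
        messages=[
            {
                "role": "user",
                "content": [
                    {"type": "text",
                     "text": "What is in this refrigerator, and what could I cook with it?"},
                    {"type": "image_url",
                     "image_url": {"url": "https://example.com/fridge.jpg"}},  # placeholder image URL
                ],
            }
        ],
    )

    # The model's answer (ingredients it recognizes, recipe suggestions) is plain text.
    print(response.choices[0].message.content)

The request combines a text part and an image part in a single user message; the model's reply is ordinary text, which is what lets an app like Be My Eyes read the answer back to the user.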

Other changes include improved stability and general optimization of the system. Access has not yet been opened to everyone: the GPT-4 language model is currently available to ChatGPT Plus subscribers. Going forward, it will also be added to products from Duolingo, Stripe, and Khan Academy.
