OpenAI has rolled out new beta features for ChatGPT Plus subscribers. Subscribers can now upload files and work with them, in addition to receiving multimodal support. The chatbot will no longer require users to manually select modes like Browse with Bing; instead, it will predict the mode based on context. These features bring some of the workplace capabilities of the ChatGPT Enterprise plan to individual chatbot subscribers.
While the multimodal update hasn't reached all Plus plan users yet, some have already tested the Advanced Data Analysis feature. Once a file is uploaded to ChatGPT, it takes a short time to process before the chatbot is ready to work with it. The chatbot can then perform tasks such as summarizing data, answering questions, or generating data visualizations based on prompts.
Moreover, the chatbot isn't limited to text files alone. An example posted on Threads showed a user uploading a picture of a capybara and asking ChatGPT, using DALL-E 3, to create a Pixar-style image based on it. The user then uploaded another image, a wiggly skateboard, to further modify the concept. Interestingly, the chatbot also added a hat to the image.
OpenAI continues to enhance its chatbot, providing users with new and intuitive features. These updates let users work with different file types and take advantage of multimodal support, expanding the capabilities of ChatGPT Plus.
- Source Article: Wes Davis, The Verge