Computing, in the end, comes down to 1s and 0s. For instance: if you tack an extra 0 onto the $20 price tag for ChatGPT Plus, you get the price of ChatGPT Pro, the newest premium tier of OpenAI's chatbot platform, which provides access to the company's new "reasoning" model.
The new subscription option, marketed to engineers and researchers, will cost $200 per month and offers unlimited use of the company's GPT-4o and o1 models, as well as full access to the o1 model's "pro mode," which is designed to mimic human reasoning to answer more complex questions. The announcement was made as part of OpenAI's "12 Days of Shipmas," during which the company plans to show off 12 new products in the lead-up to the holiday season.
OpenAI previewed o1 earlier this year with limited access to its o1-preview and o1-mini models, previously known by the codename Strawberry. The next-generation large language models showed off the company's new approach to complex computations through chain-of-thought reasoning. Basically, it's a chatbot capable of "thinking" before it responds to questions. Whereas chatbots running on models like GPT-4o or GPT-4 might require refining prompts and questions in order to deliver a meaningful answer, o1 is designed to do all that work behind the scenes before responding.
The results, according to the company's benchmarking tests, are impressive. OpenAI claims the o1 model scored in the 89th percentile in programming competitions held by Codeforces and was able to correctly answer 83 percent of questions from an International Mathematics Olympiad qualifying test. By comparison, GPT-4o only managed to get 14 percent right.
But the model has its fair share of shortcomings, too. It's both slower and more expensive than GPT-4o and other models. In the preview version of o1, input tokens (essentially the units of text the model uses to parse a prompt) cost about three times as much as tokens for GPT-4o. An analysis of o1 conducted by the AI developer platform Vellum found the reasoning model is 30 times slower than its predecessor.
Reviews of the model also found that while it does seem more capable of tackling complex math problems and coding tasks, it's no better, and in some cases worse, at answering simple questions. In fact, OpenAI's own help pages admit, "GPT-4o is still the best option for most prompts."
Of course, the new ChatGPT Pro subscription gives users access to both GPT-4o and the new o1 reasoning model, so you can always switch between options depending on your needs. Or, alternatively, you could spend $200 on just about anything else.