OpenAI has introduced GPT-5.5, expanding its capabilities across ChatGPT and Codex while signaling a broader push toward AI-driven productivity. The initial rollout covers the Plus, Pro, Business, and Enterprise tiers in ChatGPT and Codex, with API access expected soon. Alongside it, the company has introduced GPT-5.5 Pro for higher-tier ChatGPT users, further strengthening enterprise-grade AI usage.

Moreover, Codex users across multiple plans, including Plus, Pro, Business, Enterprise, Edu, and Go, can now access GPT-5.5, which offers a 400,000-token context window. This expanded capacity allows the model to process larger inputs and handle more complex instructions efficiently.
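As a rough sketch of what that capacity means in practice, the budget check below (a hypothetical helper, not part of any OpenAI SDK) verifies that a prompt plus a reserved response budget fits inside a 400,000-token window. Real token counts would come from a tokenizer; the budget arithmetic is the illustrative part.

```python
CONTEXT_WINDOW = 400_000  # GPT-5.5's stated context window, in tokens

def fits_in_context(prompt_tokens: int, reserved_output_tokens: int = 8_000) -> bool:
    """Return True if the prompt leaves room for the reserved response
    budget within the 400k-token window. Hypothetical helper for
    illustration only."""
    return prompt_tokens + reserved_output_tokens <= CONTEXT_WINDOW
```

For example, a 390,000-token prompt still fits with an 8,000-token response budget, while a 395,000-token prompt does not.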

According to OpenAI, GPT-5.5 is specifically designed to manage broader, multi-step workflows with minimal prompting. As a result, it demonstrates strong performance in coding, online research, data analysis, document generation, spreadsheet management, and software operations. Importantly, the company highlights that GPT-5.5 maintains the same per-token latency as GPT-5.4 in real-world scenarios, so the performance gains do not come at the cost of speed.

In addition, OpenAI emphasized that GPT-5.5 consumes fewer tokens than GPT-5.4 on similar Codex tasks. This efficiency translates into lower costs and improved outcomes for developers and knowledge workers alike.

Stronger Focus on Coding Performance

A significant portion of this update centers on software engineering advancements. OpenAI reported that GPT-5.5 achieved 82.7% on Terminal-Bench 2.0 and 58.6% on SWE-Bench Pro. Furthermore, it outperformed GPT-5.4 in internal Expert-SWE evaluations, particularly for long-horizon coding tasks.

These benchmarks measure how effectively AI models handle real-world development scenarios such as command-line workflows, GitHub issue resolution, and extended coding operations. Not only did GPT-5.5 outperform its predecessor, but it also achieved these improvements while using fewer tokens.

Additionally, OpenAI stated that these enhancements are clearly visible in Codex. The model can now efficiently manage implementation, debugging, refactoring, testing, and validation tasks. Early testing also suggests improved context retention across large systems and better reasoning when addressing ambiguous failures.

Expanding into Workplace Productivity

Beyond coding, GPT-5.5 is positioned as a powerful tool for workplace productivity. OpenAI explained that its improvements support research, information gathering, validation processes, and content creation across formats such as documents, spreadsheets, and presentations.

Within the company, more than 85% of employees reportedly use Codex weekly across departments like engineering, finance, marketing, communications, data science, and product management. For example, teams have used the model to analyze six months of communication data, review 24,771 K-1 tax forms totaling 71,637 pages, and automate weekly reports—saving up to 10 hours per week for some employees.

Furthermore, in benchmarked knowledge work, GPT-5.5 achieved 84.9% on GDPval, 78.7% on OSWorld-Verified, and 98.0% on Tau2-bench Telecom without prompt tuning. It also recorded strong results in finance and enterprise-related tasks.

Advancements in Research Capabilities

OpenAI also highlighted improvements in scientific and technical research workflows. GPT-5.5 demonstrated better performance in fields such as genetics, quantitative biology, and bioinformatics. The company noted gains over GPT-5.4 on GeneBench and leading performance on BixBench among models with published scores.

Interestingly, OpenAI revealed that an internal version of GPT-5.5 contributed to discovering a new proof related to off-diagonal Ramsey numbers in combinatorics. The company later verified this result using Lean, reinforcing the model’s growing role in advanced research.
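The Ramsey-number proof itself has not been published, but the verification step mentioned here is mechanical proof-checking: Lean accepts a theorem only if its kernel can check the proof term. As a toy illustration of that process (not the actual result), the trivial Lean 4 theorems below are verified the same way a formalized combinatorics proof would be.

```lean
-- The Lean kernel checks each proof mechanically; `rfl` succeeds only
-- when both sides reduce to the same value.
theorem two_add_two : 2 + 2 = 4 := rfl

-- General statements are checked the same way.
theorem add_zero' (n : Nat) : n + 0 = n := rfl
```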

Infrastructure and Performance Enhancements

To support these advancements, OpenAI redesigned parts of its inference system. The company co-designed, trained, and deployed GPT-5.5 on NVIDIA GB200 and GB300 NVL72 systems. In addition, Codex analyzed production traffic patterns and developed custom heuristic algorithms for load balancing and partitioning. As a result, token generation speeds increased by more than 20%.
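OpenAI has not disclosed the heuristics themselves, so the following is only a generic sketch of one common load-balancing heuristic of the kind such a system might start from: greedy least-loaded assignment, which routes each request to whichever server currently carries the smallest estimated load. The function and its inputs are hypothetical.

```python
import heapq

def assign_requests(requests, num_servers):
    """Greedy least-loaded heuristic: route each (request_id, estimated
    token cost) pair to the server with the smallest current load.
    Illustrative sketch only, not OpenAI's production algorithm."""
    # Min-heap of (current_load, server_id)
    heap = [(0, server_id) for server_id in range(num_servers)]
    heapq.heapify(heap)
    assignment = {}
    for req_id, cost in requests:
        load, server_id = heapq.heappop(heap)
        assignment[req_id] = server_id
        heapq.heappush(heap, (load + cost, server_id))
    return assignment
```

With two servers and requests costing 100, 50, and 70 tokens, the first request goes to server 0 and the next two to server 1, keeping the loads at 100 and 120.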

Enhanced Safety Measures

OpenAI stated that GPT-5.5 launches with its strongest safeguards to date. The company conducted extensive testing under its safety and preparedness frameworks, including evaluations for cybersecurity and biological risks. It collaborated with internal and external red teams and collected feedback from nearly 200 trusted early-access partners.

According to OpenAI, GPT-5.5 is rated High for biological, chemical, and cybersecurity risks under its Preparedness Framework. However, the company clarified that the model does not reach its Critical cybersecurity capability level. Still, it represents a step forward compared to GPT-5.4 and introduces stricter controls for high-risk activities and misuse prevention.

Pricing and API Access

For developers, OpenAI will offer GPT-5.5 through the Responses and Chat Completions APIs at $5 per one million input tokens and $30 per one million output tokens, with a one million-token context window. Additionally, GPT-5.5 Pro will be available at higher pricing tiers.
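At those published rates, per-request cost is simple arithmetic. The helper below (a hypothetical utility, not part of the OpenAI SDK) estimates the dollar cost of a call from its input and output token counts.

```python
def estimate_cost(input_tokens, output_tokens,
                  input_rate=5.00, output_rate=30.00):
    """Estimate a request's cost in USD at the stated GPT-5.5 API rates:
    $5 per 1M input tokens and $30 per 1M output tokens."""
    return (input_tokens / 1_000_000) * input_rate \
         + (output_tokens / 1_000_000) * output_rate
```

For example, a call with 400,000 input tokens and 20,000 output tokens would cost about $2.60.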

Although GPT-5.5 is priced above GPT-5.4, OpenAI emphasized that improved efficiency and reduced token usage ultimately deliver better value for users.
