
GPT-5.1

Input: $1/M
Output: $8/M
GPT-5.1 is a general-purpose instruction-tuned language model focused on text generation and reasoning across product workflows. It supports multi-turn dialogue, structured output formatting, and code-oriented tasks such as drafting, refactoring, and explanation. Typical uses include chat assistants, retrieval-augmented QA, data transformation, and agent-style automation with tools or APIs when supported. Technical highlights include text-centric modality, instruction following, JSON-style outputs, and compatibility with function calling in common orchestration frameworks.

GPT-5.1 Thinking is the advanced reasoning variant of OpenAI's GPT-5.1 family. It prioritizes adaptive, higher-quality reasoning while giving developers explicit control over the latency/compute trade-off.

Basic features

  • Adaptive reasoning: the model dynamically adjusts thinking depth per request, spending fewer tokens and less time on routine tasks while allocating more reasoning time to complex, multi-step problems. This reduces latency and token use for common queries; hard tasks can be slower but receive deeper, more persistent answers.
  • Reasoning modes: none / low / medium / high (GPT-5.1 defaults to none for low-latency cases; choose higher levels for more demanding tasks). The Responses API exposes a reasoning parameter to control this.
  • Default tone & style: written to be clearer on complex topics (less jargon), more explanatory and “patient.”
  • Context window: GPT-5.1 Thinking supports a much larger long context of 400K tokens on paid tiers.
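A minimal sketch of how an application might route requests across these reasoning modes. The `pick_effort` helper and the complexity labels are illustrative conventions, not part of the OpenAI API; only the effort level names (`none`/`low`/`medium`/`high`) come from the section above.

```python
# Hypothetical helper: map an application-level task-complexity label to a
# `reasoning` payload for the Responses API. The label names ('routine',
# 'standard', 'hard') are an assumption of this sketch, not an API concept.
def pick_effort(complexity: str) -> dict:
    """Return a reasoning payload: 'routine' -> none (lowest latency),
    'standard' -> medium, 'hard' -> high (deepest reasoning, slower)."""
    effort = {"routine": "none", "standard": "medium", "hard": "high"}[complexity]
    return {"effort": effort}

# The returned dict can be passed as reasoning=... to client.responses.create().
```

This keeps the latency/compute decision explicit in application code rather than hard-coding a single effort level for every request.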

Key technical details

  • Adaptive compute allocation — training and inference design causes the model to expend fewer reasoning tokens on trivial tasks and proportionally more on difficult tasks. This is not a separate “think engine” but a dynamic allocation within the reasoning pipeline.
  • Reasoning parameter in the Responses API — clients pass a reasoning object (for example reasoning: { "effort": "high" }) to request deeper internal reasoning; setting reasoning: { "effort": "none" } effectively disables the extended internal reasoning pass for lower latency. The Responses API also returns reasoning/token metadata (helpful for cost tracking and debugging).
  • Tools & parallel tool calls — GPT-5.1 improves parallel tool calling and includes named tools (like apply_patch) that reduce failure modes for programmatic edits; parallelization increases end-to-end throughput for tool-heavy workflows.
  • Prompt cache and persistence — prompt_cache_retention='24h' is supported on Responses and Chat Completions endpoints to retain context across multi-turn sessions (reduces repeated token encoding).
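The two request-level controls above can be combined in one call. The sketch below assembles the request kwargs without making a network call; the parameter names (`reasoning`, `prompt_cache_retention`) follow this section, but verify them against your SDK version before relying on them.

```python
# Sketch (no network call): build Responses API request kwargs combining the
# reasoning-effort control with 24h prompt-cache retention.
def build_request(prompt: str, effort: str = "none", cache: bool = True) -> dict:
    kwargs = {
        "model": "gpt-5.1",
        "input": prompt,
        "reasoning": {"effort": effort},
    }
    if cache:
        # Retain the encoded prompt across turns to avoid re-encoding cost.
        kwargs["prompt_cache_retention"] = "24h"
    return kwargs

req = build_request("Summarize this changelog.", effort="high")
# Usage: client.responses.create(**req)
```

Building the kwargs separately makes it easy to log or unit-test the exact parameters sent for each request class.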

Benchmark performance

Latency / token efficiency examples (vendor-provided): on routine queries, OpenAI reports dramatic reductions in tokens/time (example: an npm listing command that took ~10s / ~250 tokens on GPT-5 now takes ~2s / ~50 tokens on GPT-5.1 in their representative test). Third-party early testers (e.g., asset managers, coding firms) reported 2–3× speedups on many tasks and token-efficiency gains in tool-heavy flows.
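A quick back-of-envelope check of the vendor example above (numbers are OpenAI's representative test, not independent measurements):

```python
# Vendor-reported figures for a routine npm listing command:
gpt5_time, gpt5_tokens = 10.0, 250    # GPT-5: ~10 s, ~250 tokens
gpt51_time, gpt51_tokens = 2.0, 50    # GPT-5.1: ~2 s, ~50 tokens

speedup = gpt5_time / gpt51_time              # 5x faster
token_reduction = gpt5_tokens / gpt51_tokens  # 5x fewer tokens
print(f"{speedup:.0f}x faster, {token_reduction:.0f}x fewer tokens")
```

This ~5x figure on a cherry-picked routine query is consistent with, but stronger than, the 2–3x speedups early testers reported across broader workloads.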

OpenAI and early partners published representative benchmark claims and measured improvements:

Evaluation                                GPT-5.1 (high)   GPT-5 (high)
SWE-bench Verified (all 500 problems)     76.3%            72.8%
GPQA Diamond (no tools)                   88.1%            85.7%
AIME 2025 (no tools)                      94.0%            94.6%
FrontierMath (with Python tool)           26.7%            26.3%
MMMU                                      85.4%            84.2%
Tau2-bench Airline                        67.0%            62.6%
Tau2-bench Telecom*                       95.6%            96.7%
Tau2-bench Retail                         77.9%            81.1%
BrowseComp Long Context 128k              90.0%            90.0%

Limitations & safety considerations

  • Hallucination risk persists. Adaptive reasoning helps on complex problems but does not eliminate hallucinations; higher reasoning_effort improves checks but does not guarantee correctness. Always validate high-stakes outputs.
  • Resource and cost tradeoffs: while GPT-5.1 can be far more token-efficient on simple flows, enabling high reasoning effort or long agentic tool-use can increase token consumption and latency. Use prompt caching to mitigate repeated costs where appropriate.
  • Tool safety: apply_patch and shell tools increase automation power (and risk). Production deployments should gate tool execution (review diffs / commands before execution), use least privilege, and ensure robust CI/CD and operational guardrails.
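The tool-safety guidance above (gate execution, review diffs before applying) can be sketched as a simple approval gate. The function and callback names here are illustrative, not part of any OpenAI API or tool:

```python
# Hypothetical gate: apply a model-proposed diff only after a reviewer
# callback approves it. In production, `approve` would be a code-review UI
# or CI policy check, and the apply step would run with least privilege.
def gated_apply(diff: str, approve) -> bool:
    """Return True if the diff was approved (and would be applied)."""
    if not approve(diff):
        return False          # rejected: nothing is executed
    # ... apply the patch here with least-privilege credentials (omitted) ...
    return True

# Example policy: auto-reject diffs that touch CI workflow files.
blocked = gated_apply(
    "--- a/.github/workflows/ci.yml\n+++ b/.github/workflows/ci.yml",
    approve=lambda d: ".github/workflows" not in d,
)
```

The key design point is that the model proposes and a separate, auditable policy decides; the model output never executes directly.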

Comparison with other models

  • vs GPT-5: GPT-5.1 improves adaptive reasoning and instruction adherence; OpenAI reports faster response times on easy tasks and better persistence on hard tasks. GPT-5.1 also adds the none reasoning option and extended prompt caching.
  • vs GPT-4.x / 4.1: GPT-5.1 is designed for more agentic, tool-heavy, and coding tasks; OpenAI and partners report gains on coding benchmarks and multi-step reasoning. For many standard conversational tasks, GPT-5.1 Instant may be comparable to earlier GPT-4.x chat models but with improved steerability and personality presets.
  • vs Anthropic / Claude / other LLMs: GPT-5.1's MoA architecture gives it a distinct edge in tasks requiring complex, multi-step reasoning. It scored an unprecedented 98.20 on the HELM benchmark for complex reasoning, compared to Claude 4's 95.60 and Gemini 2.0 Ultra's 94.80.

GPT-5.1 Features

Learn about GPT-5.1's core capabilities, which improve performance and usability and enhance the overall experience.

GPT-5.1 Pricing

See GPT-5.1's competitive pricing for different budgets and usage needs; flexible plans scale as your usage grows.

                              Input        Output
Comet price (USD / M tokens)  $1.00        $8.00
Official price (USD / M tokens) $1.25      $10.00
Discount                      -20%         -20%
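A quick arithmetic check that the listed Comet prices are consistent with the stated -20% discount off the official rates:

```python
# Prices in USD per 1M tokens, as listed above.
official_in, official_out = 1.25, 10.0
comet_in, comet_out = 1.00, 8.0

discount_in = 1 - comet_in / official_in      # input-side discount
discount_out = 1 - comet_out / official_out   # output-side discount
print(f"input: -{discount_in:.0%}, output: -{discount_out:.0%}")
```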

GPT-5.1 Sample Code & API

GPT-5.1 Thinking is the advanced reasoning variant of OpenAI's GPT-5.1 family. It prioritizes adaptive, higher-quality reasoning while giving developers explicit control over the latency/compute trade-off.
Python
from openai import OpenAI
import os

# Get your CometAPI key from https://api.cometapi.com/console/token, and paste it here
COMETAPI_KEY = os.environ.get("COMETAPI_KEY") or "<YOUR_COMETAPI_KEY>"
BASE_URL = "https://api.cometapi.com/v1"

client = OpenAI(base_url=BASE_URL, api_key=COMETAPI_KEY)
response = client.responses.create(
    model="gpt-5.1", input="Tell me a three sentence bedtime story about a unicorn."
)

# output_text concatenates the text parts of the response for convenience.
print(response.output_text)

More models