
GPT-5.2 Pro

Input: $16.8 / M tokens
Output: $134.4 / M tokens
Context: 400,000 tokens
Max output: 128,000 tokens
gpt-5.2-pro is the highest-capability, production-oriented member of OpenAI’s GPT-5.2 family, exposed through the Responses API for workloads that demand maximal fidelity, multi-step reasoning, extensive tool use and the largest context/throughput budgets OpenAI offers.

What is GPT-5.2-Pro?

GPT-5.2-Pro is the “Pro” tier of OpenAI’s GPT-5.2 family intended for the hardest problems — multi-step reasoning, complex code, large document synthesis, and professional knowledge work. It’s made available in the Responses API to enable multi-turn interactions and advanced API features (tooling, reasoning modes, compaction, etc.). The Pro variant trades throughput and cost for maximum answer quality and stronger safety/consistency in hard domains.

Main features (what gpt-5.2-pro brings to applications)

  • Highest-fidelity reasoning: Pro supports OpenAI’s top reasoning settings (including xhigh) to trade latency and compute for deeper internal reasoning passes and improved chain-of-thought-style solution refinement.
  • Large-context, long-document proficiency: engineered to maintain accuracy across very long contexts (OpenAI benchmarked family variants at up to 256k+ tokens), making the tier suitable for legal/technical document review, enterprise knowledge bases, and long-running agent states.
  • Stronger tool & agent execution: designed to call toolsets reliably (allowed-tools lists, auditing hooks, and richer tool integrations) and to act as a “mega-agent” that can orchestrate multiple subtools and multi-step workflows.
  • Improved factuality & safety mitigations: OpenAI reports notable reductions in hallucination and undesirable responses on internal safety metrics for GPT-5.2 vs prior models, supported by updates in the system card and targeted safety training.

Technical capabilities & specifications (developer-oriented)

  • API endpoint & availability: the Responses API is the recommended integration for Pro-level workflows; developers set reasoning.effort to tune how much internal compute is devoted to reasoning, and Pro exposes the highest-fidelity xhigh setting.
  • Reasoning effort levels: none | medium | high | xhigh (Pro and Thinking support xhigh for quality-prioritized runs). This parameter lets you trade cost and latency for quality.
  • Compaction & context management: New compaction features allow the API to manage what the model “remembers” and reduce token usage while preserving relevant context—helpful for long conversations and document workflows.
  • Tooling & custom tools: Models can call custom tools (send raw text to tools while constraining model outputs); stronger tool-calling and agentic patterns in 5.2 reduce the need for elaborate system prompts.
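As a minimal sketch, a quality-prioritized request can set the effort level when building a Responses API payload. The model name and the xhigh level come from this page; the helper function, its validation, and the exact field names are illustrative, so verify them against the current API reference.

```python
def build_pro_request(prompt: str, effort: str = "xhigh") -> dict:
    """Build a Responses-style payload with a tunable reasoning effort.

    `effort` should be one of: "none", "medium", "high", "xhigh".
    """
    allowed = {"none", "medium", "high", "xhigh"}
    if effort not in allowed:
        raise ValueError(f"effort must be one of {sorted(allowed)}")
    return {
        "model": "gpt-5.2-pro",
        "input": prompt,
        # Higher effort spends more internal compute on reasoning,
        # trading latency and cost for answer quality.
        "reasoning": {"effort": effort},
    }

payload = build_pro_request("Summarize the attached 10-K filing.")
```

The payload would then be POSTed to the v1/responses endpoint with your API key in the Authorization header.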

Benchmark performance

Below are the most relevant, reproducible headline numbers for GPT-5.2 Pro (OpenAI’s verified/internal results):

  • GDPval (professional work benchmark): GPT-5.2 Pro — 74.1% (wins/ties) on the GDPval suite — a marked improvement over GPT-5.1. This metric is designed to approximate value in real economic tasks across many occupations.
  • ARC-AGI-1 (general reasoning): GPT-5.2 Pro — 90.5% (Verified); Pro was reported as the first model to cross 90% on this benchmark.
  • Coding & software engineering (SWE-Bench): strong gains in multi-step code reasoning; e.g., SWE-Bench Pro public and SWE-Lancer (IC Diamond) show material improvements over GPT-5.1 — representative family numbers: SWE-Bench Pro public ~55.6% (Thinking; Pro results reported higher on internal runs).
  • Long-context factuality (MRCRv2): GPT-5.2 family shows high retrieval and needle-finding scores across 4k–256k ranges (examples: MRCRv2 8 needles at 16k–32k: 95.3% for GPT-5.2 Thinking; Pro maintained high accuracy at larger windows). These show the family’s resilience to long-context tasks, a Pro selling point.

How gpt-5.2-pro compares with peers and other GPT-5.2 tiers

  • vs GPT-5.2 Thinking / Instant: gpt-5.2-pro prioritizes fidelity and maximal reasoning (xhigh) over latency and cost; gpt-5.2 (Thinking) sits in the middle for deeper work, and gpt-5.2-chat-latest (Instant) is tuned for low-latency chat. Choose Pro for the highest-value, compute-intensive tasks.
  • Versus Google Gemini 3 and other frontier models: the GPT-5.2 family is OpenAI's competitive response to Gemini 3. Leaderboards show task-dependent winners; on some graduate-level science and professional benchmarks GPT-5.2 Pro and Gemini 3 are close, while outcomes in narrow coding or specialized domains can vary.
  • Versus GPT-5.1 / GPT-5: Pro shows material gains in GDPval, ARC-AGI, coding benchmarks and long-context metrics vs GPT-5.1, and adds new API controls (xhigh reasoning, compaction). OpenAI will keep earlier variants available during transition.

Practical use cases and recommended patterns

High-value use cases where Pro makes sense

  • Complex financial modeling, large spreadsheet synthesis and analysis where accuracy and multi-step reasoning matter (OpenAI reported improved investment banking spreadsheet task scores).
  • Long-document legal or scientific synthesis where the 400k token context preserves entire reports, appendices, and citation chains.
  • High-quality code generation and multi-file refactoring for enterprise codebases (Pro’s higher xhigh reasoning helps with multi-step program transformations).
  • Strategic planning, multi-stage project orchestration, and agentic workflows that use custom tools and require robust tool calling.
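As a hedged sketch of the agentic pattern above, a Responses-style payload can register a function tool that the model may call during a workflow. The search_codebase tool, its schema, and the helper below are hypothetical examples for illustration, not part of any official API; check the tool-definition format in the API documentation.

```python
def build_agent_request(task: str) -> dict:
    """Build a Responses-style payload exposing one function tool.

    The `search_codebase` tool below is a hypothetical example; real
    deployments would register their own tool schemas.
    """
    return {
        "model": "gpt-5.2-pro",
        "input": task,
        "tools": [
            {
                "type": "function",
                "name": "search_codebase",
                "description": "Search the repository for a symbol or string.",
                "parameters": {
                    "type": "object",
                    "properties": {"query": {"type": "string"}},
                    "required": ["query"],
                },
            }
        ],
    }
```

At runtime the model would emit a tool call naming search_codebase, your code would execute it, and the result would be fed back to the model in a follow-up request.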

When to choose Thinking or Instant instead

  • Choose Instant for fast, lower-cost conversational tasks and editor integrations.
  • Choose Thinking for deeper work when cost or latency is constrained but quality still matters.

How to access and use the GPT-5.2 Pro API

Step 1: Sign Up for API Key

Log in to cometapi.com (register first if you are not a user yet) and open your CometAPI console. In the API token section of the personal center, click "Add Token" to generate an access credential, then copy the token key (sk-xxxxx).

Step 2: Send Requests to the GPT-5.2 Pro API

Select the "gpt-5.2-pro" model and set the request body; the request method and body schema are documented in our website's API doc, and an Apifox test collection is provided for convenience. Replace <YOUR_API_KEY> with your actual CometAPI key from your account. Call the model through Responses-style APIs.

Insert your question or request into the content field; this is what the model will respond to.
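A minimal Python sketch of this step, assuming CometAPI exposes an OpenAI-compatible /v1/responses endpoint; the base URL below is an assumption, so take the real one from the API doc.

```python
import json
import urllib.request

URL = "https://api.cometapi.com/v1/responses"  # assumed base URL; check the API doc

def build_request(api_key: str, prompt: str) -> urllib.request.Request:
    """Assemble the HTTP request for a gpt-5.2-pro call."""
    body = json.dumps({"model": "gpt-5.2-pro", "input": prompt}).encode("utf-8")
    return urllib.request.Request(
        URL,
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

def ask_gpt52_pro(api_key: str, prompt: str) -> dict:
    """Send the request and return the parsed JSON response."""
    with urllib.request.urlopen(build_request(api_key, prompt)) as resp:
        return json.load(resp)
```

build_request assembles the call offline, so the payload can be inspected before ask_gpt52_pro actually sends it over the network.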

Step 3: Retrieve and Verify Results

Parse the API response to extract the generated answer; the reply carries the task status together with the output data.
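A sketch of this step, assuming a Responses-style JSON shape in which an output list holds message items whose content parts carry the text; adjust the field names if the real payload differs.

```python
def extract_output_text(response: dict) -> str:
    """Collect the text parts from a Responses-style reply.

    Assumed shape: {"output": [{"type": "message",
    "content": [{"type": "output_text", "text": ...}, ...]}, ...]}.
    """
    parts = []
    for item in response.get("output", []):
        if item.get("type") == "message":
            for part in item.get("content", []):
                if part.get("type") == "output_text":
                    parts.append(part.get("text", ""))
    return "".join(parts)

reply = {"output": [{"type": "message",
                     "content": [{"type": "output_text", "text": "Done."}]}]}
print(extract_output_text(reply))  # → Done.
```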

See also Gemini 3 Pro Preview API

Frequently asked questions

Q: Calling gpt-5.2-pro via v1/chat/completions returns:

{ "error": { "message": "This is not a chat model and thus not supported in the v1/chat/completions endpoint. Did you mean to use v1/completions?", "type": "invalid_request_error", "param": "model", "code": null } }

A: gpt-5.2-pro is not served through Chat Completions; please use the "v1/responses" endpoint instead.


GPT-5.2 Pro pricing

Explore GPT-5.2 Pro's competitive pricing, designed for different budgets and usage needs. With flexible pay-as-you-go plans you only pay for what you use, so it is easy to scale as your requirements grow.

CometAPI price (USD / M tokens): Input $16.8 / Output $134.4
Official price (USD / M tokens): Input $21 / Output $168
Discount: -20%

