Home/Models/OpenAI/o4-mini-deep-research

o4-mini-deep-research

Input:$1.6/M
Output:$6.4/M
Context:200K
Max Output:100K
O4-Mini-Deep-Research is OpenAI’s latest agentic reasoning model, combining the lightweight o4-mini backbone with the advanced Deep Research framework. Designed to deliver fast, cost-efficient deep information synthesis, it enables developers and researchers to perform automated web searches, data analysis, and chain-of-thought reasoning within a single API call.
New
Commercial Use

OpenAI’s O4-Mini-Deep-Research represents the convergence of two pivotal innovations: the compact yet powerful o4-mini reasoning model and the agentic Deep Research framework. Launched in June 2025, this hybrid system delivers autonomous, high-fidelity research capabilities at a fraction of the cost and latency of its full-sized counterparts. By leveraging the streamlined architecture of o4-mini within the Deep Research agent, developers and researchers can now execute extended web browsing, data synthesis, and complex analysis workflows in minutes rather than days.

Features

  • Lightweight Architecture: Utilizes the compact o4-mini variant for reduced latency and inference cost.
  • Integrated Web Search: Capable of invoking search tools within its reasoning pipeline, yielding richer, up-to-date context.
  • Python Interpreter Access: Supports on-the-fly code execution for mathematical proofs, data processing, and interactive querying.
  • Modular Agent Design: Pluggable tool interfaces allow seamless integration with custom retrieval or external APIs, enhancing flexibility.

Technical Details

O4-Mini-Deep-Research builds on the transformer-based o4-mini model, fine-tuned under an agentic framework that orchestrates:

  1. Query Decomposition: Breaks down complex prompts into sub-tasks.
  2. Search-Augmented Reasoning: Embeds retrieval steps into its chain-of-thought, enabling real-time fact grounding.
  3. Self-Validation Loops: Implements self-check routines to reduce hallucination, though some inaccuracies persist.
  4. Interpreter Invocation: Dynamically spins up a sandboxed Python runtime for computations, raising its performance on benchmarks like AIME.
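The four stages above can be sketched as a minimal tool loop. Everything in this sketch is illustrative (a toy decomposition rule, a stubbed search tool, trivial synthesis), not OpenAI's actual implementation:

```python
# Illustrative sketch of the agentic loop described above (hypothetical,
# not OpenAI's real pipeline): decompose a prompt, ground each sub-task
# with a retrieval step, then self-check the draft answer.

def decompose(prompt: str) -> list[str]:
    # Toy query decomposition: split a compound question on " and ".
    return [part.strip() for part in prompt.split(" and ")]

def search(sub_task: str) -> str:
    # Stub standing in for the integrated web-search tool.
    return f"evidence for: {sub_task}"

def self_check(draft: str, evidence: list[str]) -> bool:
    # Toy self-validation loop: every piece of evidence must appear
    # in the synthesized draft before we return it.
    return all(e in draft for e in evidence)

def research(prompt: str) -> str:
    sub_tasks = decompose(prompt)                 # 1. query decomposition
    evidence = [search(t) for t in sub_tasks]     # 2. search-augmented reasoning
    draft = " | ".join(evidence)                  # synthesis (trivial here)
    assert self_check(draft, evidence)            # 3. self-validation
    return draft

print(research("summarize GPU prices and list vendors"))
```

A real agent would replace `search` with live retrieval and add the sandboxed interpreter as a fourth tool; the control flow stays the same.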

Benchmark Performance

  • AIME 2025: o4-mini achieved 92.7% accuracy on the American Invitational Mathematics Examination, outperforming o3 on math reasoning tasks.
  • GPQA Diamond: Scored 81.4% on Ph.D.-level science questions, demonstrating robust performance in scientific domains.
  • BrowseComp Agentic Browsing: Delivered 45.6% accuracy on the agentic browsing benchmark, compared with 51.5% for the full o3-based deep research model, trading some depth for speed.

Model Versioning

OpenAI publishes date-stamped model identifiers to ensure reproducibility and version control:

  • o4-mini-deep-research-2025-06-26
  • Future updates will follow the <model>-<YYYY-MM-DD> convention, allowing developers to pin specific snapshots in production.
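Pinning a snapshot is simply a matter of passing the full dated identifier in the request body. A minimal sketch, assuming an OpenAI-compatible chat payload (confirm the exact shape against the API doc):

```python
# Sketch: build a request body pinned to a dated snapshot, so production
# behavior doesn't shift when the undated alias is updated.

PINNED_MODEL = "o4-mini-deep-research-2025-06-26"  # dated snapshot
ALIAS_MODEL = "o4-mini-deep-research"              # floating alias

def build_request(prompt: str, model: str = PINNED_MODEL) -> dict:
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

body = build_request("Survey recent LLM pricing trends.")
print(body["model"])  # o4-mini-deep-research-2025-06-26
```

Using the dated name in production and the alias in experiments is the usual split: the alias tracks improvements, the snapshot guarantees reproducibility.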

Limitations

  • Time-out Constraints: Queries exceeding 600 seconds will error out and refund compute credits, encouraging shorter, iterative research cycles.
  • Depth vs. Speed Trade-off: While optimized for throughput, o4-mini-deep-research may yield less exhaustive syntheses on ultra-complex queries compared to its o3 counterpart.
  • Reliance on Retrieval: Quality depends on upstream search results; missing or paywalled sources can impact completeness.
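Client-side, one way to stay inside the 600-second cap is to run each call under its own deadline and retry with a narrower query on timeout. A generic standard-library sketch (the deep-research call itself is stubbed out here):

```python
import concurrent.futures

def with_deadline(fn, *args, deadline_s: float = 600.0):
    """Run fn(*args), giving up after deadline_s seconds.

    Returns (result, True) on success or (None, False) on timeout, so
    the caller can retry with a narrower, faster query instead of
    burning the full 600-second budget again.
    """
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)
    future = pool.submit(fn, *args)
    try:
        return future.result(timeout=deadline_s), True
    except concurrent.futures.TimeoutError:
        return None, False
    finally:
        pool.shutdown(wait=False)  # don't block on the stray worker

# Stub in place of a real deep-research API call:
result, ok = with_deadline(lambda q: f"answer to {q}", "short query")
print(ok, result)
```

Note that the worker thread itself keeps running after a timeout; for a real HTTP call you would also pass a transport-level timeout so the socket is torn down.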

How to Access the o4-mini-deep-research API

Step 1: Sign Up for API Key

Log in to cometapi.com; if you are not a user yet, please register first. In your CometAPI console, go to the API tokens page in the personal center, click “Add Token,” and copy the generated key (it begins with sk-xxxxx).


Step 2: Send Requests to o4-mini-deep-research API

Select the “o4-mini-deep-research” endpoint, set the request body, and send the API request. The request method and request body format are documented in our website's API doc, which also provides an Apifox test page for your convenience. Replace <YOUR_API_KEY> with your actual CometAPI key from your account.

Insert your question or request into the content field; this is what the model will respond to.
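A minimal Python sketch of Step 2 using only the standard library. The endpoint URL below is an assumption based on CometAPI's OpenAI-compatible interface; confirm the exact base URL in the API doc before use:

```python
import json
import urllib.request

API_KEY = "<YOUR_API_KEY>"  # your CometAPI token (sk-...)
# Assumed OpenAI-compatible endpoint; verify against the API doc.
URL = "https://api.cometapi.com/v1/chat/completions"

def make_request(prompt: str) -> urllib.request.Request:
    # Build the POST request: the model name and a single user message
    # whose content field carries your question.
    payload = {
        "model": "o4-mini-deep-research",
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
    )

req = make_request("Compare three open-source vector databases.")
# urllib.request.urlopen(req) would actually send it; omitted here so
# the sketch runs without a live key.
print(req.get_method(), req.full_url)
```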

Step 3: Retrieve and Verify Results

Process the API response to extract the generated answer; alongside the output data, the response also reports the task status.
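Assuming the response follows the OpenAI chat-completion shape (confirm the exact field names against the CometAPI doc), extracting the answer and status looks like this:

```python
# Sketch of Step 3: pull the generated answer and finish status out of a
# chat-completion-style response dict (field names assumed, per the
# OpenAI-compatible format).

def extract_answer(response: dict) -> tuple[str, str]:
    choice = response["choices"][0]
    return choice["message"]["content"], choice.get("finish_reason", "unknown")

sample = {  # abbreviated example payload for illustration
    "choices": [{
        "message": {"role": "assistant", "content": "Synthesized findings..."},
        "finish_reason": "stop",
    }],
}

answer, status = extract_answer(sample)
print(status, "->", answer)
```

A `finish_reason` other than `"stop"` (e.g. a length cutoff) is the signal to narrow the query or raise the output budget and retry.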

Features for o4-mini-deep-research

Explore the key features of o4-mini-deep-research, designed to enhance performance and usability. Discover how these capabilities can benefit your projects and improve user experience.

Pricing for o4-mini-deep-research

Explore competitive pricing for o4-mini-deep-research, designed to fit various budgets and usage needs. Our flexible plans ensure you only pay for what you use, making it easy to scale as your requirements grow. Discover how o4-mini-deep-research can enhance your projects while keeping costs manageable.
|        | CometAPI Price (USD / M Tokens) | Official Price (USD / M Tokens) | Discount |
|--------|--------------------------------|--------------------------------|----------|
| Input  | $1.6                           | $2                             | -20%     |
| Output | $6.4                           | $8                             | -20%     |
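At the CometAPI rates quoted above ($1.6/M input, $6.4/M output tokens), a quick back-of-envelope cost estimate:

```python
# Per-token rates from the pricing table above.
INPUT_RATE = 1.6 / 1_000_000   # USD per input token
OUTPUT_RATE = 6.4 / 1_000_000  # USD per output token

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# A deep-research call that fills a 200K-token context and emits 20K tokens:
print(f"${estimate_cost(200_000, 20_000):.3f}")  # $0.448
```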

Sample code and API for o4-mini-deep-research

Access comprehensive sample code and API resources for o4-mini-deep-research to streamline your integration process. Our detailed documentation provides step-by-step guidance, helping you leverage the full potential of o4-mini-deep-research in your projects.

Versions of o4-mini-deep-research

o4-mini-deep-research has multiple snapshots for several reasons: updates can change model output, so older snapshots remain available for consistency; snapshots give developers a transition period for adaptation and migration; and different snapshots may correspond to global or regional endpoints to optimize user experience. For detailed differences between versions, please refer to the official documentation.
Version
  • o4-mini-deep-research (alias pointing to the latest snapshot)
  • o4-mini-deep-research-2025-06-26 (dated snapshot)
