Adopted by NVIDIA's Nemotron family of models!

πŸ€— HuggingFace | Slack | WeChat

OpenResearcher-30B-A3B Overview

OpenResearcher-30B-A3B is an agentic large language model designed for long-horizon deep research. It is fine-tuned from NVIDIA-Nemotron-3-Nano-30B-A3B-Base-BF16 on the OpenResearcher dataset of 96K trajectories, each spanning 100+ turns. The dataset is derived by distilling GPT-OSS-120B with native browser tools; more information is available on the OpenResearcher dataset card.
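
For a quick look at the training data, a minimal loading sketch with 🤗 Datasets is shown below. The repository id and record schema are assumptions here, not confirmed by this card; please refer to the OpenResearcher dataset card for the actual ones.

```python
# Sketch of loading the OpenResearcher trajectories.
# The repo id "OpenResearcher/OpenResearcher" and the field names are
# assumptions; check the dataset card for the real schema.
from datasets import load_dataset

ds = load_dataset("OpenResearcher/OpenResearcher", split="train")  # hypothetical repo id

# Each record is a long-horizon trajectory distilled from GPT-OSS-120B,
# typically 100+ tool-use turns per the model card.
example = ds[0]
print(len(ds), list(example.keys()))
```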

The model achieves 54.8% accuracy on BrowseComp-Plus, surpassing GPT-4.1, Claude-Opus-4, Gemini-2.5-Pro, DeepSeek-R1, and Tongyi-DeepResearch.

OpenResearcher Teaser

Deep Research Benchmark Results

Evaluate OpenResearcher-30B-A3B

We evaluate OpenResearcher-30B-A3B across a range of deep research benchmarks, including BrowseComp-Plus, BrowseComp, GAIA, and xbench-DeepSearch. Please find more details on GitHub.
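
For intuition, the headline numbers are accuracies over benchmark questions. A toy exact-match scorer is sketched below; the actual harness in the GitHub repository may well use normalized or LLM-judged matching instead.

```python
# Toy accuracy computation over (prediction, reference) pairs.
# Illustrative only; the real evaluation harness on GitHub may differ.
def exact_match_accuracy(predictions, references):
    normalize = lambda s: " ".join(s.lower().split())
    correct = sum(
        normalize(p) == normalize(r) for p, r in zip(predictions, references)
    )
    return correct / len(references)

print(exact_match_accuracy(["Paris"], ["paris"]))  # 1.0
```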

Quick Start

We provide a quick-start guide on GitHub that demonstrates how to use OpenResearcher-30B-A3B for deep research.
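
Pending the GitHub quick-start, a minimal local-inference sketch with 🤗 Transformers might look like the following. The chat-template usage, sampling length, and `trust_remote_code` flag are assumptions based on the card's conversational and custom_code tags, not confirmed usage.

```python
# Minimal inference sketch (repo id, chat template, and trust_remote_code
# are assumptions inferred from the model card tags).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "OpenResearcher/OpenResearcher-30B-A3B"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # BF16 weights per the model card
    device_map="auto",
    trust_remote_code=True,
)

messages = [
    {"role": "user", "content": "Summarize recent work on long-horizon deep research agents."}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```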

Citation

@article{li2026openresearcher,
  title={{OpenResearcher: A Fully Open Pipeline for Long-Horizon Deep Research Trajectory Synthesis}},
  author={Li, Zhuofeng and Jiang, Dongfu and Ma, Xueguang and Zhang, Haoxiang and Nie, Ping and Zhang, Yuyu and Zou, Kai and Xie, Jianwen and Zhang, Yu and Chen, Wenhu},
  journal={arXiv preprint arXiv:2603.20278},
  year={2026}
}
