# Qwen3-Reranker-0.6B (ONNX)
ONNX conversion of `Qwen/Qwen3-Reranker-0.6B` for use with Transformers.js v4.

The model is exported with ORT graph optimization (level 2), which fuses Qwen3's grouped-query attention into `com.microsoft.GroupQueryAttention` ops, the contrib op Transformers.js v4 uses for accelerated inference.
## Available ONNX Variants

| File | Format | Notes |
|---|---|---|
| `onnx/model_quantized.onnx` | int8 | Dynamic int8 (MatMul/Gemm only) |
| `onnx/model_q4.onnx` | 4-bit | `com.microsoft.MatMulNBits`, block_size=32 |
Note: fp32 and fp16 variants are not provided because this model's weights exceed the ONNX single-file size limit and require external data files (`model.onnx_data`), which are not supported by ONNX Runtime Web (WASM/WebGPU).
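The `dtype` option in Transformers.js selects which of these files is loaded. A minimal loading sketch, assuming the standard Transformers.js dtype-to-filename convention:

```js
import { AutoModelForCausalLM } from "@huggingface/transformers";

// "q8" resolves to onnx/model_quantized.onnx; "q4" to onnx/model_q4.onnx
// (standard Transformers.js dtype-to-suffix mapping).
const model = await AutoModelForCausalLM.from_pretrained(
  "huggingworld/Qwen3-Reranker-0.6B-ONNX",
  { dtype: "q8" }
);
```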
## How the Reranker Works

Qwen3-Reranker is a CausalLM-based reranker, not a classifier. It scores relevance by:

- formatting the query and document into a structured chat prompt,
- running the model and reading the logits for the `"yes"`/`"no"` tokens at the last token position, and
- computing `score = softmax([yes_logit, no_logit])[0]`.
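As a quick numeric sanity check of that scoring rule (plain arithmetic, no model call; the logit values are made up for illustration):

```js
// A "yes" logit of 2.0 and a "no" logit of -1.0 give a relevance score of ~0.95.
const yesLogit = 2.0, noLogit = -1.0; // illustrative values
const score = Math.exp(yesLogit) / (Math.exp(yesLogit) + Math.exp(noLogit));
console.log(score.toFixed(3)); // "0.953"
```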
## Usage (Transformers.js v4)

```js
import { AutoTokenizer, AutoModelForCausalLM } from "@huggingface/transformers";

const MODEL_ID = "huggingworld/Qwen3-Reranker-0.6B-ONNX";

const tokenizer = await AutoTokenizer.from_pretrained(MODEL_ID);
const model = await AutoModelForCausalLM.from_pretrained(MODEL_ID, {
  dtype: "q4",      // "q8" | "q4"
  device: "webgpu", // or "wasm" / "cpu"
});

// Token IDs for binary scoring
const TOKEN_YES = tokenizer.convert_tokens_to_ids("yes");
const TOKEN_NO = tokenizer.convert_tokens_to_ids("no");

const SYSTEM_PROMPT =
  'Judge whether the Document meets the requirements based on the Query and the Instruct provided. ' +
  'Note that the answer can only be "yes" or "no".';

function buildPrompt(query, doc, instruction = "Given a web search query, retrieve relevant passages that answer the query") {
  return (
    `<|im_start|>system\n${SYSTEM_PROMPT}<|im_end|>\n` +
    `<|im_start|>user\n<Instruct>: ${instruction}\n\n<Query>: ${query}\n\n<Document>: ${doc}<|im_end|>\n` +
    // Empty <think> block, matching the suffix used by the original model card
    `<|im_start|>assistant\n<think>\n\n</think>\n\n`
  );
}

async function scoreDocument(query, doc) {
  const prompt = buildPrompt(query, doc);
  const inputs = tokenizer(prompt, { truncation: true, max_length: 8192 });
  const output = await model(inputs);

  // Extract the logits at the last token position ([1, seq_len, vocab_size])
  const seqLen = output.logits.dims[1];
  const vocabSize = output.logits.dims[2];
  const lastLogits = output.logits.data.slice(
    (seqLen - 1) * vocabSize,
    seqLen * vocabSize
  );

  // Two-way softmax over the "yes"/"no" logits
  const yesScore = Math.exp(lastLogits[TOKEN_YES]);
  const noScore = Math.exp(lastLogits[TOKEN_NO]);
  return yesScore / (yesScore + noScore); // normalized probability
}

async function rerank(query, documents) {
  const scores = await Promise.all(documents.map((doc) => scoreDocument(query, doc)));
  return documents
    .map((doc, i) => ({ doc, score: scores[i] }))
    .sort((a, b) => b.score - a.score);
}

// Example
const results = await rerank(
  "What is the capital of France?",
  [
    "Berlin is the capital of Germany.",
    "Paris is the capital and largest city of France.",
    "France is a country in Western Europe.",
  ]
);
console.log(results);
// [
//   { doc: "Paris is the capital...", score: 0.982 },
//   { doc: "France is a country...", score: 0.341 },
//   { doc: "Berlin is the capital...", score: 0.018 },
// ]

await model.dispose();
```
## Notes

- Padding: left-padded (configured in `tokenizer_config.json`)
- Context window: 32K tokens; `max_length: 8192` is recommended for practical use
- Custom instructions: a short task description written in English typically improves accuracy by 1–5%
- WebGPU: recommended for best performance in browsers with Transformers.js v4
- q4 format: uses `com.microsoft.MatMulNBits` (an ORT contrib op), natively supported in v4
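Since the tokenizer left-pads (first note above), several documents can also be scored in one forward pass: the final real token of every row sits at the last position. A minimal sketch under that assumption; the `scoreBatch` helper is hypothetical and reuses `tokenizer`, `model`, `buildPrompt`, `TOKEN_YES`, and `TOKEN_NO` from the usage example:

```js
// Score several documents in one batched forward pass.
async function scoreBatch(query, docs) {
  const prompts = docs.map((doc) => buildPrompt(query, doc));
  const inputs = tokenizer(prompts, { padding: true, truncation: true, max_length: 8192 });
  const { logits } = await model(inputs); // dims: [batch, seq_len, vocab_size]
  const [batch, seqLen, vocabSize] = logits.dims;
  const scores = [];
  for (let b = 0; b < batch; ++b) {
    // Left padding puts the last real token at the final position of each row.
    const offset = ((b + 1) * seqLen - 1) * vocabSize;
    const yes = Math.exp(logits.data[offset + TOKEN_YES]);
    const no = Math.exp(logits.data[offset + TOKEN_NO]);
    scores.push(yes / (yes + no));
  }
  return scores;
}
```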
## Original Model Card
### Highlights
The Qwen3 Embedding model series is the latest proprietary model of the Qwen family, specifically designed for text embedding and ranking tasks. Building upon the dense foundational models of the Qwen3 series, it provides a comprehensive range of text embeddings and reranking models in various sizes (0.6B, 4B, and 8B). This series inherits the exceptional multilingual capabilities, long-text understanding, and reasoning skills of its foundational model. The Qwen3 Embedding series represents significant advancements in multiple text embedding and ranking tasks, including text retrieval, code retrieval, text classification, text clustering, and bitext mining.
- **Exceptional Versatility**: The embedding model has achieved state-of-the-art performance across a wide range of downstream application evaluations. The 8B size embedding model ranks No.1 in the MTEB multilingual leaderboard (as of June 5, 2025, score 70.58), while the reranking model excels in various text retrieval scenarios.
- **Comprehensive Flexibility**: The Qwen3 Embedding series offers a full spectrum of sizes (from 0.6B to 8B) for both embedding and reranking models, catering to diverse use cases that prioritize efficiency and effectiveness. Developers can seamlessly combine these two modules. Additionally, the embedding model allows for flexible vector definitions across all dimensions, and both embedding and reranking models support user-defined instructions to enhance performance for specific tasks, languages, or scenarios.
- **Multilingual Capability**: The Qwen3 Embedding series offers support for over 100 languages, thanks to the multilingual capabilities of Qwen3 models. This includes various programming languages, and provides robust multilingual, cross-lingual, and code retrieval capabilities.
### Model Overview
Qwen3-Reranker-0.6B has the following features:
- Model Type: Text Reranking
- Supported Languages: 100+ Languages
- Number of Parameters: 0.6B
- Context Length: 32K

For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our blog and GitHub repository.
### Qwen3 Embedding Series Model List
| Model Type | Models | Size | Layers | Sequence Length | Embedding Dimension | MRL Support | Instruction Aware |
|---|---|---|---|---|---|---|---|
| Text Embedding | Qwen3-Embedding-0.6B | 0.6B | 28 | 32K | 1024 | Yes | Yes |
| Text Embedding | Qwen3-Embedding-4B | 4B | 36 | 32K | 2560 | Yes | Yes |
| Text Embedding | Qwen3-Embedding-8B | 8B | 36 | 32K | 4096 | Yes | Yes |
| Text Reranking | Qwen3-Reranker-0.6B | 0.6B | 28 | 32K | - | - | Yes |
| Text Reranking | Qwen3-Reranker-4B | 4B | 36 | 32K | - | - | Yes |
| Text Reranking | Qwen3-Reranker-8B | 8B | 36 | 32K | - | - | Yes |
Note:
- `MRL Support` indicates whether the embedding model supports custom dimensions for the final embedding.
- `Instruction Aware` notes whether the embedding or reranking model supports customizing the input instruction according to different tasks.
- Our evaluation indicates that, for most downstream tasks, using instructions (instruct) typically yields an improvement of 1% to 5% compared to not using them. Therefore, we recommend that developers create tailored instructions specific to their tasks and scenarios. In multilingual contexts, we also advise users to write their instructions in English, as most instructions utilized during the model training process were originally written in English.
### Usage
With Transformers versions earlier than 4.51.0, you may encounter the following error:

`KeyError: 'qwen3'`
#### Transformers Usage

```python
# Requires transformers>=4.51.0
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM


def format_instruction(instruction, query, doc):
    if instruction is None:
        instruction = 'Given a web search query, retrieve relevant passages that answer the query'
    output = "<Instruct>: {instruction}\n<Query>: {query}\n<Document>: {doc}".format(
        instruction=instruction, query=query, doc=doc
    )
    return output


def process_inputs(pairs):
    inputs = tokenizer(
        pairs, padding=False, truncation='longest_first',
        return_attention_mask=False, max_length=max_length - len(prefix_tokens) - len(suffix_tokens)
    )
    # Wrap each tokenized pair with the chat-template prefix/suffix token IDs
    for i, ele in enumerate(inputs['input_ids']):
        inputs['input_ids'][i] = prefix_tokens + ele + suffix_tokens
    inputs = tokenizer.pad(inputs, padding=True, return_tensors="pt", max_length=max_length)
    for key in inputs:
        inputs[key] = inputs[key].to(model.device)
    return inputs


@torch.no_grad()
def compute_logits(inputs, **kwargs):
    batch_scores = model(**inputs).logits[:, -1, :]
    true_vector = batch_scores[:, token_true_id]
    false_vector = batch_scores[:, token_false_id]
    # Softmax over the [no, yes] logits; index 1 is P("yes"), the relevance score
    batch_scores = torch.stack([false_vector, true_vector], dim=1)
    batch_scores = torch.nn.functional.log_softmax(batch_scores, dim=1)
    scores = batch_scores[:, 1].exp().tolist()
    return scores


tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-Reranker-0.6B", padding_side='left')
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-Reranker-0.6B").eval()
# We recommend enabling flash_attention_2 for better acceleration and memory saving.
# model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-Reranker-0.6B", torch_dtype=torch.float16, attn_implementation="flash_attention_2").cuda().eval()

token_false_id = tokenizer.convert_tokens_to_ids("no")
token_true_id = tokenizer.convert_tokens_to_ids("yes")
max_length = 8192

prefix = "<|im_start|>system\nJudge whether the Document meets the requirements based on the Query and the Instruct provided. Note that the answer can only be \"yes\" or \"no\".<|im_end|>\n<|im_start|>user\n"
suffix = "<|im_end|>\n<|im_start|>assistant\n<think>\n\n</think>\n\n"
prefix_tokens = tokenizer.encode(prefix, add_special_tokens=False)
suffix_tokens = tokenizer.encode(suffix, add_special_tokens=False)

task = 'Given a web search query, retrieve relevant passages that answer the query'
queries = [
    "What is the capital of China?",
    "Explain gravity",
]
documents = [
    "The capital of China is Beijing.",
    "Gravity is a force that attracts two bodies towards each other. It gives weight to physical objects and is responsible for the movement of planets around the sun.",
]
pairs = [format_instruction(task, query, doc) for query, doc in zip(queries, documents)]

# Tokenize the input texts
inputs = process_inputs(pairs)
scores = compute_logits(inputs)
print("scores: ", scores)
```
#### vLLM Usage

```python
# Requires vllm>=0.8.5
import math

import torch
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams
from vllm.distributed.parallel_state import destroy_model_parallel
from vllm.inputs.data import TokensPrompt


def format_instruction(instruction, query, doc):
    text = [
        {"role": "system", "content": "Judge whether the Document meets the requirements based on the Query and the Instruct provided. Note that the answer can only be \"yes\" or \"no\"."},
        {"role": "user", "content": f"<Instruct>: {instruction}\n\n<Query>: {query}\n\n<Document>: {doc}"}
    ]
    return text


def process_inputs(pairs, instruction, max_length, suffix_tokens):
    messages = [format_instruction(instruction, query, doc) for query, doc in pairs]
    messages = tokenizer.apply_chat_template(
        messages, tokenize=True, add_generation_prompt=False, enable_thinking=False
    )
    # Truncate each prompt, then append the assistant/<think> suffix tokens
    messages = [ele[:max_length] + suffix_tokens for ele in messages]
    messages = [TokensPrompt(prompt_token_ids=ele) for ele in messages]
    return messages


def compute_logits(model, messages, sampling_params, true_token, false_token):
    outputs = model.generate(messages, sampling_params, use_tqdm=False)
    scores = []
    for i in range(len(outputs)):
        final_logits = outputs[i].outputs[0].logprobs[-1]
        # Fall back to a very low logprob if a token is missing from the top-k logprobs
        if true_token not in final_logits:
            true_logit = -10
        else:
            true_logit = final_logits[true_token].logprob
        if false_token not in final_logits:
            false_logit = -10
        else:
            false_logit = final_logits[false_token].logprob
        true_score = math.exp(true_logit)
        false_score = math.exp(false_logit)
        score = true_score / (true_score + false_score)
        scores.append(score)
    return scores


number_of_gpu = torch.cuda.device_count()
tokenizer = AutoTokenizer.from_pretrained('Qwen/Qwen3-Reranker-0.6B')
model = LLM(model='Qwen/Qwen3-Reranker-0.6B', tensor_parallel_size=number_of_gpu,
            max_model_len=10000, enable_prefix_caching=True, gpu_memory_utilization=0.8)
tokenizer.padding_side = "left"
tokenizer.pad_token = tokenizer.eos_token

suffix = "<|im_end|>\n<|im_start|>assistant\n<think>\n\n</think>\n\n"
max_length = 8192
suffix_tokens = tokenizer.encode(suffix, add_special_tokens=False)
true_token = tokenizer("yes", add_special_tokens=False).input_ids[0]
false_token = tokenizer("no", add_special_tokens=False).input_ids[0]
sampling_params = SamplingParams(
    temperature=0,
    max_tokens=1,
    logprobs=20,
    allowed_token_ids=[true_token, false_token],
)

task = 'Given a web search query, retrieve relevant passages that answer the query'
queries = [
    "What is the capital of China?",
    "Explain gravity",
]
documents = [
    "The capital of China is Beijing.",
    "Gravity is a force that attracts two bodies towards each other. It gives weight to physical objects and is responsible for the movement of planets around the sun.",
]
pairs = list(zip(queries, documents))

inputs = process_inputs(pairs, task, max_length - len(suffix_tokens), suffix_tokens)
scores = compute_logits(model, inputs, sampling_params, true_token, false_token)
print('scores', scores)

destroy_model_parallel()
```
📌 Tip: We recommend that developers customize the instruct according to their specific scenarios, tasks, and languages. Our tests have shown that in most retrieval scenarios, not using an instruct on the query side can lead to a drop in retrieval performance by approximately 1% to 5%.
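For the Transformers.js example earlier in this README, customizing the instruct amounts to passing a task-specific string as the third argument of `buildPrompt`. The instruction and document strings below are illustrative placeholders, not from the model card:

```js
// Hypothetical task-specific instruction (example only).
const INSTRUCT_FAQ = "Given a customer question, retrieve FAQ entries that resolve it";
const faqEntry = "API keys can be rotated from the dashboard under Settings.";

// Reuses buildPrompt() from the Transformers.js usage example above.
const prompt = buildPrompt("How do I rotate an API key?", faqEntry, INSTRUCT_FAQ);
```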
### Evaluation
| Model | Param | MTEB-R | CMTEB-R | MMTEB-R | MLDR | MTEB-Code | FollowIR |
|---|---|---|---|---|---|---|---|
| Qwen3-Embedding-0.6B | 0.6B | 61.82 | 71.02 | 64.64 | 50.26 | 75.41 | 5.09 |
| Jina-multilingual-reranker-v2-base | 0.3B | 58.22 | 63.37 | 63.73 | 39.66 | 58.98 | -0.68 |
| gte-multilingual-reranker-base | 0.3B | 59.51 | 74.08 | 59.44 | 66.33 | 54.18 | -1.64 |
| BGE-reranker-v2-m3 | 0.6B | 57.03 | 72.16 | 58.36 | 59.51 | 41.38 | -0.01 |
| Qwen3-Reranker-0.6B | 0.6B | 65.80 | 71.31 | 66.36 | 67.28 | 73.42 | 5.41 |
| Qwen3-Reranker-4B | 4B | 69.76 | 75.94 | 72.74 | 69.97 | 81.20 | 14.84 |
| Qwen3-Reranker-8B | 8B | 69.02 | 77.45 | 72.94 | 70.19 | 81.22 | 8.05 |
Note:
- Evaluation results for reranking models. We use the retrieval subsets of MTEB (eng, v2), MTEB (cmn, v1), MMTEB, and MTEB (Code), denoted MTEB-R, CMTEB-R, MMTEB-R, and MTEB-Code respectively.
- All scores are from our runs based on the top-100 candidates retrieved by the dense embedding model Qwen3-Embedding-0.6B.
### Citation

If you find our work helpful, feel free to give us a cite.

```bibtex
@article{qwen3embedding,
  title={Qwen3 Embedding: Advancing Text Embedding and Reranking Through Foundation Models},
  author={Zhang, Yanzhao and Li, Mingxin and Long, Dingkun and Zhang, Xin and Lin, Huan and Yang, Baosong and Xie, Pengjun and Yang, An and Liu, Dayiheng and Lin, Junyang and Huang, Fei and Zhou, Jingren},
  journal={arXiv preprint arXiv:2506.05176},
  year={2025}
}
```