Fusion

Beta

Multi-model deliberation as a server tool

Server tools are currently in beta. The API and behavior may change.

The openrouter:fusion server tool exposes the Fusion pipeline as a callable tool. When the calling model decides a prompt needs particular thoughtfulness — research, expert critique, or multiple perspectives — it can invoke openrouter:fusion, receive structured analysis JSON from a panel of expert models, and use it to write the final answer.

The tool is a strict superset of the fusion plugin: the plugin is sugar that automatically attaches this tool to a request.

Quick start

const response = await fetch('https://openrouter.ai/api/v1/chat/completions', {
  method: 'POST',
  headers: {
    Authorization: 'Bearer {{API_KEY_REF}}',
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    model: '{{MODEL}}',
    messages: [
      {
        role: 'user',
        content: 'Survey the strongest arguments for and against a carbon tax. Where do experts disagree?',
      },
    ],
    tools: [
      { type: 'openrouter:fusion' },
    ],
  }),
});

const data = await response.json();
console.log(data.choices[0].message.content);

When the model invokes the tool

The tool description tells the calling model to invoke it only when the task genuinely needs deliberation. Short, tactical prompts will not trigger fusion. Common triggers include long-form research, multi-domain critique, “compare and contrast” prompts, and anything where being wrong is expensive.

If you want to force fusion on every request, use the openrouter/fusion model alias or set tool_choice to require the tool.
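As a sketch, a request body that forces a tool call on every turn, assuming the OpenAI-style tool_choice value 'required' is honored for server tools (an assumption, not confirmed above):

```javascript
// Sketch: force the model to call a tool on this request.
// Assumes OpenRouter honors the OpenAI-style `tool_choice: 'required'`
// value for server tools; '{{MODEL}}' is a placeholder slug.
const body = {
  model: '{{MODEL}}',
  messages: [
    { role: 'user', content: 'Compare the main arguments for and against rent control.' },
  ],
  tools: [{ type: 'openrouter:fusion' }],
  tool_choice: 'required', // never answer without invoking a tool
};

console.log(JSON.stringify(body.tool_choice));
```

Using the openrouter/fusion model alias instead avoids relying on tool_choice semantics entirely.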

Parameters

The tool accepts an optional parameters object on the tool entry:

{
  "tools": [
    {
      "type": "openrouter:fusion",
      "parameters": {
        "analysis_models": [
          "~google/gemini-flash-latest",
          "deepseek/deepseek-v3.2-20251201",
          "~moonshotai/kimi-latest"
        ],
        "model": "~anthropic/claude-opus-latest"
      }
    }
  ]
}
Field | Default | Description
analysis_models | Quality preset (~anthropic/claude-opus-latest, ~openai/gpt-latest) | Slugs to run in parallel as the analysis panel. Each call has openrouter:web_search and openrouter:web_fetch enabled.
model | The outer request’s model | Slug of the judge model that produces the structured analysis JSON. Defaults to the same model that is invoking the tool, so the tool acts as a “second opinion” loop.

Tool result schema

The tool returns JSON with the following shape:

{
  "status": "ok",
  "analysis": {
    "consensus": ["..."],
    "contradictions": [
      { "topic": "...", "stances": [{ "model": "...", "stance": "..." }] }
    ],
    "partial_coverage": [
      { "models": ["..."], "point": "..." }
    ],
    "unique_insights": [
      { "model": "...", "insight": "..." }
    ],
    "blind_spots": ["..."]
  },
  "responses": [
    { "model": "...", "content": "..." }
  ]
}

When something fails (e.g. all analysis models error), the tool returns { "status": "error", "error": "..." } and the calling model can fall back to writing the answer without the analysis.
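A minimal client-side sketch of that branch; summarizeFusionResult is a hypothetical helper (not part of the API) and the error payload is illustrative:

```javascript
// Sketch: branch on the `status` field of a fusion tool result.
// `summarizeFusionResult` is a hypothetical helper, not part of the API.
function summarizeFusionResult(toolResultJson) {
  const result = JSON.parse(toolResultJson);
  if (result.status !== 'ok') {
    // e.g. every analysis model errored: answer without the analysis
    return { usable: false, reason: result.error };
  }
  return {
    usable: true,
    consensusPoints: result.analysis.consensus.length,
    panelSize: result.responses.length,
  };
}

const fallback = summarizeFusionResult(
  '{"status":"error","error":"all analysis models failed"}'
);
console.log(fallback.usable, fallback.reason);
```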

Web search and fetch

openrouter:web_search and openrouter:web_fetch are enabled on the analysis and judge calls — never on the outer synthesis. By the time the calling model writes the final answer it already has fresh, structured analysis to ground its response.

Recursion protection

Inner fusion calls carry an x-openrouter-fusion-depth header. Analysis or judge models cannot recursively invoke openrouter:fusion or openrouter/fusion — the plugin refuses to inject the tool a second time so the deliberation stays bounded.
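The guard can be pictured as a short sketch. The header name comes from above, but maybeAttachFusion and the exact refusal logic are illustrative, not the plugin's actual implementation:

```javascript
// Sketch of the bounded-deliberation guard. The header name is from
// the docs; this function and its logic are illustrative only.
function maybeAttachFusion(tools, headers) {
  const depth = Number(headers['x-openrouter-fusion-depth'] ?? 0);
  if (depth > 0) {
    return tools; // inner analysis/judge call: refuse to inject again
  }
  return [...tools, { type: 'openrouter:fusion' }];
}

const outer = maybeAttachFusion([], {});
const inner = maybeAttachFusion([], { 'x-openrouter-fusion-depth': '1' });
console.log(outer.length, inner.length);
```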