Fusion
Multi-model deliberation as a server tool
Server tools are currently in beta. The API and behavior may change.
The openrouter:fusion server tool exposes the Fusion pipeline as a callable tool. When the calling model decides a prompt needs particular thoughtfulness — research, expert critique, or multiple perspectives — it can invoke openrouter:fusion, receive structured analysis JSON from a panel of expert models, and use it to write the final answer.
The tool is a strict superset of the fusion plugin: the plugin is sugar that automatically attaches this tool to a request.
Quick start
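A minimal sketch of attaching the tool to a standard chat completions request body. The tool-entry shape (a bare "type" key naming the server tool) and the model slug are assumptions here, not confirmed schema:

```python
import json

# Sketch only: attach the openrouter:fusion server tool to a chat
# completions request. The tool-entry shape ({"type": ...}) is an
# assumption; server tools are executed by OpenRouter, so the caller
# supplies no function schema.
payload = {
    "model": "anthropic/claude-sonnet-4",  # any tool-capable model
    "messages": [
        {
            "role": "user",
            "content": "Compare Raft and Paxos for a new consensus layer.",
        }
    ],
    "tools": [{"type": "openrouter:fusion"}],
}

body = json.dumps(payload)  # POST this to the chat completions endpoint
```

From here the calling model decides on its own whether the prompt warrants invoking the tool.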
When the model invokes the tool
The tool description tells the calling model to invoke it only when the task genuinely needs deliberation, so short tactical prompts will not trigger fusion. Common triggers include long-form research, multi-domain critique, “compare and contrast” prompts, and anything where being wrong is expensive.
If you want to force fusion on every request, use the openrouter/fusion model alias or set tool_choice to require the tool.
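For the tool_choice route, a sketch of forcing the tool on a request; the named-tool shape below mirrors the OpenAI-style tool_choice parameter and is an assumption for server tools:

```python
payload = {
    "model": "anthropic/claude-sonnet-4",
    "messages": [
        {"role": "user", "content": "Audit this architecture proposal."}
    ],
    "tools": [{"type": "openrouter:fusion"}],
    # Require the fusion tool on this request. The named-tool shape is
    # assumed; a plain "required" value would force *some* tool call
    # rather than this one specifically.
    "tool_choice": {"type": "openrouter:fusion"},
}
```

The alternative, using the openrouter/fusion model alias, needs no tools array at all.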
Parameters
The tool accepts an optional parameters object on the tool entry:
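As a shape-only illustration of where the object rides, assuming the parameters live directly on the tool entry; the key shown is hypothetical, not a real parameter:

```python
tool_entry = {
    "type": "openrouter:fusion",  # entry shape assumed
    # The optional "parameters" object sits on the tool entry itself.
    # "example_option" is a hypothetical key, purely to show placement.
    "parameters": {"example_option": True},
}
```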
Tool result schema
The tool returns JSON with the following shape:
When something fails (e.g. all analysis models error), the tool returns { "status": "error", "error": "..." } and the calling model can fall back to writing the answer without the analysis.
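A defensive way to consume the result, assuming only the documented error shape; on success the pipeline returns the structured analysis JSON, whose exact fields are given by the schema above:

```python
import json

def fusion_analysis(raw: str):
    """Parse a fusion tool result. Returns None when the pipeline
    errored, so the caller can write the answer without the analysis.
    Sketch based on the documented error shape only."""
    result = json.loads(raw)
    if result.get("status") == "error":
        return None
    return result
```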
Web search and fetch
openrouter:web_search and openrouter:web_fetch are enabled on the analysis and judge calls — never on the outer synthesis. By the time the calling model writes the final answer it already has fresh, structured analysis to ground its response.
Recursion protection
Inner fusion calls carry an x-openrouter-fusion-depth header. Analysis or judge models cannot recursively invoke openrouter:fusion or openrouter/fusion — the plugin refuses to inject the tool a second time so the deliberation stays bounded.
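The guard can be sketched as a depth check on that header; this is an illustration of the documented behavior, not the actual implementation:

```python
DEPTH_HEADER = "x-openrouter-fusion-depth"

def can_inject_fusion(headers: dict) -> bool:
    """Refuse to attach openrouter:fusion when the request already
    carries a fusion depth header, keeping the deliberation bounded.
    Sketch only; the real plugin's logic may differ."""
    return int(headers.get(DEPTH_HEADER, "0")) == 0
```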
Related
- Fusion plugin
- Web Search server tool
- Web Fetch server tool
- /labs/fusion — interactive playground for the same pipeline