Unlock Any LLM Model With OpenRouter Integration
Seamlessly integrate with any Large Language Model (LLM) available through OpenRouter. This n8n workflow allows you to dynamically choose and utilize a wide array of AI models for your automated tasks, offering unparalleled flexibility.
About This Workflow
Empower your n8n workflows with the vast capabilities of Large Language Models through a single, unified integration. This template leverages OpenRouter's extensive model catalog, enabling you to switch between cutting-edge AI models like those from OpenAI, Google, DeepSeek, Mistral, and Qwen with ease. By configuring a simple 'Settings' node, you define the LLM you wish to use and provide your prompt. The 'AI Agent' node then processes your request, utilizing chat memory to maintain context and deliver intelligent responses. This provides a highly adaptable solution for anyone looking to harness the power of diverse LLMs within their automation pipelines.
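To ground this, here is a minimal sketch of what the 'Settings' node might contain: a standard n8n Set node assigning a model identifier and a prompt. The field names ('model', 'prompt') and the exact parameter schema are assumptions for illustration, not values taken from the template, and may differ by n8n version.

{
  "name": "Settings",
  "type": "n8n-nodes-base.set",
  "parameters": {
    "assignments": {
      "assignments": [
        { "name": "model", "type": "string", "value": "deepseek/deepseek-r1-distill-llama-8b" },
        { "name": "prompt", "type": "string", "value": "={{ $json.chatInput }}" }
      ]
    }
  }
}

The leading '=' marks the value as an n8n expression, so the prompt is filled in from the incoming chat message at runtime while the model name stays a plain string you can edit.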
Key Features
- Universal LLM Access: Connect to any LLM model supported by OpenRouter without complex API configurations.
- Dynamic Model Selection: Easily switch between different AI models by updating a single parameter (see the example model identifiers after this list).
- Configurable Prompts: Define your interaction logic directly within the workflow.
- Contextual Conversations: Utilizes chat memory to maintain coherent dialogue across interactions.
- Extensible Automation: Integrate diverse AI capabilities into your existing n8n workflows.
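To illustrate the 'Dynamic Model Selection' point above: switching providers only means changing the model string in the 'Settings' node. The identifiers below are example OpenRouter model names for the providers mentioned earlier; they may change over time, so check https://openrouter.ai/models for the current list.

[
  { "model": "deepseek/deepseek-r1-distill-llama-8b" },
  { "model": "openai/gpt-4o-mini" },
  { "model": "google/gemini-2.0-flash-001" },
  { "model": "mistralai/mistral-7b-instruct" },
  { "model": "qwen/qwen-2.5-72b-instruct" }
]

Because the model identifier is defined once in the 'Settings' node, no other node needs to change when you swap providers.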
How To Use
- Trigger Setup: Configure the 'When chat message received' node to initiate your workflow. This node acts as the entry point for your AI interaction.
- Model and Prompt Configuration: In the 'Settings' node, specify the desired LLM model name (e.g., deepseek/deepseek-r1-distill-llama-8b) and define your prompt using the {{ $json.chatInput }} variable.
- AI Model Integration: The 'LLM Model' node is pre-configured to use OpenRouter credentials. Ensure your OpenRouter API key is set up correctly in n8n.
- Chat Memory: The 'Chat Memory' node is configured to use a session ID, allowing for stateful conversations. The {{ $json.sessionId }} variable should be passed from your trigger.
- AI Agent Execution: Connect the 'AI Agent' node to receive the prompt and model information from the 'Settings' node and the LLM/memory configurations from the respective nodes. This node orchestrates the AI interaction (see the sketch after this list).
- Explore Model Options: Refer to the 'Model examples' sticky note and the OpenRouter website (https://openrouter.ai/models) to discover and select from a wide range of available LLMs.
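To make the wiring concrete, the sketch below shows how those expressions might appear inside the relevant nodes. The node type names, parameter keys, and credential name shown here are assumptions based on typical n8n LangChain nodes, not values copied from the template, and may differ by n8n version.

[
  {
    "name": "LLM Model",
    "type": "@n8n/n8n-nodes-langchain.lmChatOpenRouter",
    "parameters": { "model": "={{ $json.model }}" },
    "credentials": { "openRouterApi": { "name": "OpenRouter account" } }
  },
  {
    "name": "Chat Memory",
    "type": "@n8n/n8n-nodes-langchain.memoryBufferWindow",
    "parameters": { "sessionIdType": "customKey", "sessionKey": "={{ $json.sessionId }}" }
  },
  {
    "name": "AI Agent",
    "type": "@n8n/n8n-nodes-langchain.agent",
    "parameters": { "promptType": "define", "text": "={{ $json.prompt }}" }
  }
]

The LLM and memory nodes attach to the AI Agent through its sub-node connections rather than through regular item data, which is why they are configured separately and then linked to the agent.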
Workflow JSON
{
"id": "a5f57ac5-2a0e-4554-ad0a-faa6b9906206",
"name": "Unlock Any LLM Model With OpenRouter Integration",
"nodes": 8,
"category": "DevOps",
"status": "active",
"version": "1.0.0"
}
Note: This is a sample preview. The full workflow JSON contains node configurations, credentials placeholders, and execution logic.
Get This Workflow
ID: a5f57ac5-2a0e...
About the Author
N8N_Community_Pick
Curator
Hand-picked high quality workflows from the global community.
Related Workflows
Discover more workflows you might like
Build a Custom OpenAI-Compatible LLM Proxy with n8n
This workflow transforms n8n into a powerful OpenAI-compatible API proxy, allowing you to centralize and customize how your applications interact with various Large Language Models. It enables a unified interface for diverse AI capabilities, including multimodal input handling and dynamic model routing.
Effortless Bug Reporting: Slack Slash Command to Linear Issue
Streamline your bug reporting process by instantly creating Linear issues directly from Slack using a simple slash command. This workflow enhances team collaboration by providing immediate feedback and a structured approach to logging defects, saving valuable time for development and QA teams.
Automate Qualys Report Generation and Retrieval
Streamline your Qualys security reporting by automating the generation and retrieval of reports. This workflow ensures timely access to crucial security data without manual intervention.