Unlock the Power of Any LLM with OpenRouter Integration
Seamlessly integrate with any Large Language Model (LLM) through the OpenRouter API using this n8n workflow. This template offers unparalleled flexibility in choosing and utilizing diverse AI models for your automation needs.
About This Workflow
This n8n workflow template empowers you to harness the capabilities of virtually any LLM available through the OpenRouter platform. By leveraging OpenRouter's unified API, you can dynamically select and switch between a vast array of cutting-edge AI models without complex individual integrations. The workflow captures chat messages, sets desired model parameters, and interacts with the chosen LLM, while also maintaining conversation history through chat memory. This provides a robust and adaptable solution for AI-powered chatbots, content generation, data analysis, and more, all within the flexible environment of n8n.
Key Features
- Universal LLM Access: Connect to and utilize any LLM supported by OpenRouter.
- Dynamic Model Selection: Easily switch between different AI models by changing a single parameter.
- Conversation Memory: Maintains context and continuity in chat interactions.
- Flexible Input Handling: Processes incoming chat messages to feed into the LLM.
- n8n Integration: Seamlessly fits into your existing n8n automation workflows.
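Because the request shape is shared across models, dynamic model selection reduces to swapping one string. A minimal sketch (hypothetical helper; in the workflow this is just editing the Settings node's model parameter):

```python
def retarget(payload: dict, model: str) -> dict:
    """Return a copy of a chat-completions payload aimed at a different model."""
    return {**payload, "model": model}

base = {
    "model": "openai/gpt-4o-mini",
    "messages": [{"role": "user", "content": "Summarize this ticket."}],
}
# Same messages, same endpoint; only the "model" field differs.
claude = retarget(base, "anthropic/claude-3.5-sonnet")
```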
How To Use
- Trigger Setup: Configure the 'When chat message received' node to initiate the workflow based on your preferred chat platform or event.
- Model & Prompt Configuration: In the 'Settings' node, define the `model` you wish to use from OpenRouter (e.g., 'deepseek/deepseek-r1-distill-llama-8b') and set the `prompt` to dynamically pull from the incoming message (`={{ $json.chatInput }}`). Optionally, configure a `sessionId` for chat memory.
- LLM Integration: The 'LLM Model' node is pre-configured to use OpenRouter credentials. Ensure your OpenRouter API key is correctly set up in n8n.
- AI Agent Configuration: The 'AI Agent' node takes the configured `prompt` and `model` to process the request.
- Chat Memory: The 'Chat Memory' node uses the `sessionId` to store and retrieve conversation history, ensuring context is maintained.
- Execution: Activate the workflow and test it by sending messages through your configured trigger.
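The Chat Memory node keeps a per-session message history so each reply sees earlier turns. A minimal sketch of that behavior (a hypothetical in-memory class; the real node is configured in n8n, not written in code, and typically caps history so prompts stay within the model's context window):

```python
from collections import defaultdict

class ChatMemory:
    """In-memory conversation store keyed by sessionId, capped at max_turns messages."""

    def __init__(self, max_turns: int = 20):
        self.max_turns = max_turns
        self._store = defaultdict(list)

    def append(self, session_id: str, role: str, content: str) -> None:
        """Record one message and drop the oldest beyond the cap."""
        self._store[session_id].append({"role": role, "content": content})
        self._store[session_id] = self._store[session_id][-self.max_turns:]

    def history(self, session_id: str) -> list:
        """Return the retained messages for this session (empty if unseen)."""
        return list(self._store[session_id])
```

Each incoming chat message would be appended under its `sessionId`, and the retained history is prepended to the prompt on the next turn.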
Workflow JSON
```json
{
  "id": "b4b9c1ad-fcf5-4262-b854-7e963e14aa43",
  "name": "Unlock the Power of Any LLM with OpenRouter Integration",
  "nodes": 23,
  "category": "DevOps",
  "status": "active",
  "version": "1.0.0"
}
```

Note: This is a sample preview. The full workflow JSON contains node configurations, credential placeholders, and execution logic.
About the Author
AI_Workflow_Bot
LLM Specialist
Building complex chains with OpenAI, Claude, and LangChain.
Related Workflows
Effortless Bug Reporting: Slack Slash Command to Linear Issue
Streamline your bug reporting process by instantly creating Linear issues directly from Slack using a simple slash command. This workflow enhances team collaboration by providing immediate feedback and a structured approach to logging defects, saving valuable time for development and QA teams.
Build a Custom OpenAI-Compatible LLM Proxy with n8n
This workflow transforms n8n into a powerful OpenAI-compatible API proxy, allowing you to centralize and customize how your applications interact with various Large Language Models. It enables a unified interface for diverse AI capabilities, including multimodal input handling and dynamic model routing.
Automate Qualys Report Generation and Retrieval
Streamline your Qualys security reporting by automating the generation and retrieval of reports. This workflow ensures timely access to crucial security data without manual intervention.