Unlock Local AI Power: Chat with Ollama via n8n
Seamlessly integrate and chat with your self-hosted Large Language Models (LLMs) using n8n and Ollama. This powerful combination allows you to send prompts and receive AI-generated responses directly within your n8n workflows.
About This Workflow
This n8n workflow puts the capabilities of local Large Language Models (LLMs) to work inside your automations. By integrating with Ollama, a popular tool for managing and running LLMs on your own hardware, you can build conversational AI experiences directly within your automation processes. The workflow captures incoming chat messages, routes them to your Ollama-hosted LLM for processing, and delivers the model's responses back to you. Because everything runs locally, you get private, secure, and customizable AI interactions without relying on external cloud services.
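Under the hood the exchange is a simple HTTP round trip: the prompt goes to Ollama's chat endpoint and the model's reply comes back as JSON. The sketch below reproduces that round trip outside n8n, assuming a local Ollama instance on its default port and a model named llama3 already pulled; the model name and the prompt are placeholders.

import requests

OLLAMA_URL = "http://localhost:11434"  # default Ollama API endpoint
MODEL = "llama3"                       # any model you have pulled with `ollama pull`

def chat(prompt: str) -> str:
    """Send one chat message to the local Ollama instance and return the reply text."""
    response = requests.post(
        f"{OLLAMA_URL}/api/chat",
        json={
            "model": MODEL,
            "messages": [{"role": "user", "content": prompt}],
            "stream": False,  # ask for a single JSON object instead of a token stream
        },
        timeout=120,
    )
    response.raise_for_status()
    return response.json()["message"]["content"]

if __name__ == "__main__":
    print(chat("Summarize what n8n does in one sentence."))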
Key Features
- Local LLM Integration: Connect directly to your Ollama instance to utilize any of your downloaded LLMs.
- Real-time Chat Interaction: Send prompts and receive AI-generated responses within your n8n workflow.
- Private AI Conversations: Keep your data and AI interactions secure by running everything locally.
- Simplified Workflow Setup: Easy-to-configure nodes for quick integration into your automation.
How To Use
- Install and Run Ollama: Ensure Ollama is installed and a model is downloaded and running on your machine.
- Configure Ollama Credentials in n8n: Set up the 'Local Ollama' credentials in n8n, pointing to your Ollama API endpoint (usually http://localhost:11434). If n8n runs in Docker, ensure the container has network connectivity to the host (for example, --net=host); the check sketched after this list confirms the endpoint is reachable and lists your models.
- Set up the Chat Trigger: Configure the 'When chat message received' node to capture incoming chat inputs.
- Connect the LLM Chain: Link the 'Chat LLM Chain' node to the trigger.
- Configure Ollama Chat Model: In the 'Ollama Chat Model' node, select your desired LLM and ensure it's connected to the 'Chat LLM Chain' node.
- Execute the Workflow: Activate the workflow and start chatting with your local LLM.
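Before configuring the credentials, it helps to confirm that the Ollama endpoint is reachable and that at least one model is downloaded. A minimal check, assuming the default endpoint (swap in the Docker host's address if n8n runs in a container), queries Ollama's /api/tags endpoint, which lists locally available models:

import requests

OLLAMA_URL = "http://localhost:11434"  # use the host's address instead if n8n runs in Docker

def list_local_models() -> list[str]:
    """Return the names of the models Ollama has downloaded locally."""
    response = requests.get(f"{OLLAMA_URL}/api/tags", timeout=10)
    response.raise_for_status()
    return [model["name"] for model in response.json().get("models", [])]

if __name__ == "__main__":
    models = list_local_models()
    if models:
        print("Ollama is reachable; available models:", ", ".join(models))
    else:
        print("Ollama is reachable, but no models are downloaded; run `ollama pull <model>` first.")

Any model name printed here is what you select in the 'Ollama Chat Model' node.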
Workflow JSON
{
"id": "6a7b2eb1-8921-4d01-b360-ff0fa75039e5",
"name": "Unlock Local AI Power: Chat with Ollama via n8n",
"nodes": 18,
"category": "DevOps",
"status": "active",
"version": "1.0.0"
}
Note: This is a sample preview. The full workflow JSON contains node configurations, credentials placeholders, and execution logic.
About the Author
N8N_Community_Pick
Curator
Hand-picked, high-quality workflows from the global community.
Related Workflows
Discover more workflows you might like
Automated PR Merged QA Notifications
Streamline your QA process with this automated workflow that notifies your team upon successful Pull Request merges. Leverage AI and vector stores to enrich notifications and ensure seamless integration into your development pipeline.
Visualize Your n8n Workflows: Interactive Dashboard with Mermaid.js
Gain unparalleled visibility into your n8n automation landscape. This workflow transforms your n8n instance into a dynamic, interactive dashboard, leveraging Mermaid.js to visualize all your workflows in one accessible place.
Build a Custom OpenAI-Compatible LLM Proxy with n8n
This workflow transforms n8n into a powerful OpenAI-compatible API proxy, allowing you to centralize and customize how your applications interact with various Large Language Models. It enables a unified interface for diverse AI capabilities, including multimodal input handling and dynamic model routing.