Unlock Local AI Power: Chat with LLMs via n8n and Ollama
Seamlessly integrate your self-hosted Large Language Models (LLMs) with n8n using Ollama. This workflow enables direct chat interactions with local AI models, bringing the power of AI directly into your automation processes.
About This Workflow
Empower your n8n automations with the intelligence of your own locally hosted Large Language Models. This workflow leverages the combined strengths of n8n and Ollama to create a robust chat interface for interacting with your preferred LLMs. By connecting to an Ollama instance, you can send prompts, receive sophisticated AI-generated responses, and integrate this powerful AI capability into your existing n8n workflows. Ideal for developers and power users looking to harness private AI without relying on external cloud services, this solution offers both flexibility and control over your AI-powered automation projects.
Key Features
- Local LLM Integration: Connect directly to Ollama for seamless interaction with your self-hosted AI models.
- Real-time Chat Interface: Send prompts and receive AI responses within your n8n environment.
- Private AI Power: Utilize the power of LLMs without sending sensitive data to external services.
- Customizable AI Chains: Build sophisticated AI logic by chaining LLM interactions within n8n.
- Ollama Configuration Flexibility: Easily adjust Ollama connection details for different setups.
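Under the hood, the Ollama Chat Model node talks to Ollama's local HTTP API. A minimal sketch of the same kind of chat request using curl (the model name `llama3` is an assumption; substitute whichever model you have pulled):

```shell
# Send a single chat message to a locally running Ollama instance
# and get a complete (non-streamed) response back as JSON.
curl http://localhost:11434/api/chat -d '{
  "model": "llama3",
  "messages": [
    {"role": "user", "content": "Summarize what n8n does in one sentence."}
  ],
  "stream": false
}'
```

This assumes Ollama is already serving on its default port 11434; the response JSON includes a `message` object with the model's reply.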
How To Use
- Install and Run Ollama: Ensure Ollama is installed and your desired LLM is downloaded and running on your machine.
- Configure n8n Credentials: In n8n, set up credentials for your Ollama API, usually pointing to `http://localhost:11434`.
- Set Up Chat Trigger: Configure the `When chat message received` node to capture incoming chat messages.
- Connect to Ollama: In the `Ollama Chat Model` node, link it to your Ollama credentials and select your preferred LLM.
- Build Chat Chain: Use the `Chat LLM Chain` node to process the incoming message and send it to the Ollama model.
- Define Workflow Logic: Connect the output of the `Chat LLM Chain` node to further nodes in your workflow to deliver the AI's response or take subsequent actions.
- Docker Considerations: If running n8n in Docker, ensure `--net=host` is used to allow network access to the local Ollama instance.
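The setup steps above can be sketched as shell commands. The model name (`llama3`) and the use of the `n8nio/n8n` image are assumptions for illustration, not requirements of this workflow:

```shell
# Install Ollama (Linux; see ollama.com for other platforms) and pull a model.
curl -fsSL https://ollama.com/install.sh | sh
ollama pull llama3            # model name is an assumption; use any model you prefer

# Ollama serves its API on port 11434 by default; verify it is reachable.
curl http://localhost:11434/api/tags

# Run n8n in Docker on the host network so it can reach the local Ollama instance.
docker run -it --rm --net=host \
  -v n8n_data:/home/node/.n8n \
  n8nio/n8n
```

With host networking, the credential URL `http://localhost:11434` resolves to the Ollama instance on the host machine from inside the container.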
Apps Used
Workflow JSON
{
  "id": "72a39c01-54a5-4418-9db7-6c6ec22b150f",
  "name": "Unlock Local AI Power: Chat with LLMs via n8n and Ollama",
  "nodes": 22,
  "category": "DevOps",
  "status": "active",
  "version": "1.0.0"
}

Note: This is a sample preview. The full workflow JSON contains node configurations, credential placeholders, and execution logic.
Get This Workflow
ID: 72a39c01-54a5...
About the Author
SaaS_Connector
Integration Guru
Connecting CRM, Notion, and Slack to automate your life.
Related Workflows
Discover more workflows you might like
Effortless Bug Reporting: Slack Slash Command to Linear Issue
Streamline your bug reporting process by instantly creating Linear issues directly from Slack using a simple slash command. This workflow enhances team collaboration by providing immediate feedback and a structured approach to logging defects, saving valuable time for development and QA teams.
Automated PR Merged QA Notifications
Streamline your QA process with this automated workflow that notifies your team upon successful Pull Request merges. Leverage AI and vector stores to enrich notifications and ensure seamless integration into your development pipeline.
Visualize Your n8n Workflows: Interactive Dashboard with Mermaid.js
Gain unparalleled visibility into your n8n automation landscape. This workflow transforms your n8n instance into a dynamic, interactive dashboard, leveraging Mermaid.js to visualize all your workflows in one accessible place.