Build a Custom RAG Chatbot with Local LLMs (Ollama & Qdrant)
This workflow empowers you to create a personalized Retrieval Augmented Generation (RAG) chatbot using your own documents. Ingest PDF files to build a custom knowledge base with Qdrant, then chat with an AI agent powered by local Ollama models for secure and private interactions.
About This Workflow
Unlock the power of your data by creating a custom RAG chatbot. This comprehensive n8n workflow streamlines the process of ingesting PDF documents via a web form, automatically splitting them into manageable chunks, and generating embeddings using a local Ollama service (mxbai-embed-large:latest). These embeddings are then stored in your local Qdrant vector database. A separate chatbot interface allows users to ask questions, with an AI agent intelligently leveraging the stored data as a retrieval tool, backed by a local Ollama chat model and conversational memory. Ideal for privacy-conscious users and custom knowledge base applications.
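For readers who want to see the moving parts outside n8n, here is a minimal Python sketch of the ingestion path the workflow automates: split a document into chunks, embed each chunk with the local Ollama service, and store the vectors in Qdrant. It assumes Ollama at http://localhost:11434, Qdrant at http://localhost:6333, a 1024-dimension vector size for mxbai-embed-large, and a simple sliding-window splitter standing in for the Recursive Character Text Splitter node; adjust all of these to match your setup.

```python
# Hedged sketch of the ingestion path (split -> embed -> store), assuming
# Ollama on localhost:11434 and Qdrant on localhost:6333. The chunk sizes,
# collection name, and 1024-dim vector size are assumptions; verify them
# against your own instances and the workflow's node settings.
import uuid
import requests
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, VectorParams, PointStruct

OLLAMA_URL = "http://localhost:11434"
EMBED_MODEL = "mxbai-embed-large:latest"
COLLECTION = "rag_collection"

def embed(text: str) -> list[float]:
    """Request an embedding vector from the local Ollama service."""
    resp = requests.post(f"{OLLAMA_URL}/api/embeddings",
                         json={"model": EMBED_MODEL, "prompt": text})
    resp.raise_for_status()
    return resp.json()["embedding"]

def split_text(text: str, chunk_size: int = 1000, overlap: int = 200) -> list[str]:
    """Simple sliding-window splitter standing in for the Recursive Character Text Splitter."""
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks

client = QdrantClient(url="http://localhost:6333")

# Create (or reset) the collection; this drops any existing data in it.
client.recreate_collection(
    collection_name=COLLECTION,
    vectors_config=VectorParams(size=1024, distance=Distance.COSINE),
)

document_text = "...text extracted from an uploaded PDF..."  # placeholder
points = [
    PointStruct(id=str(uuid.uuid4()), vector=embed(chunk), payload={"text": chunk})
    for chunk in split_text(document_text)
]
client.upsert(collection_name=COLLECTION, points=points)
```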
Key Features
- Local-First AI: Leverages Ollama for both embeddings and chat models, ensuring data privacy and reducing reliance on external APIs.
- Custom Knowledge Base: Easily ingest PDF documents through a user-friendly web form to build your proprietary data repository.
- Advanced Data Processing: Automatically splits documents using a Recursive Character Text Splitter for optimal embedding and retrieval.
- Robust Vector Database: Integrates with Qdrant for efficient storage and semantic search of your embedded data.
- Intelligent AI Agent: An n8n AI Agent orchestrates responses, using the Qdrant database as a retrieval tool to provide context-aware answers from your uploaded documents (a simplified version of this retrieve-then-answer loop is sketched after this list).
- Conversational Memory: The chatbot maintains context across interactions, offering a more natural and helpful user experience.
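The retrieval side can be sketched the same way. This is a hedged approximation of what the AI Agent, the Qdrant retrieval tool, and the memory node do together: embed the question, fetch the nearest chunks from rag_collection, and pass them plus the running conversation history to a local Ollama chat model. The llama2 model name and the "text" payload key are assumptions carried over from the ingestion sketch above, not values taken from the workflow itself.

```python
# Hedged sketch of the retrieve-then-answer loop, assuming the same local
# Ollama and Qdrant endpoints as the ingestion sketch. The `history` list
# plays the role of the workflow's conversational memory node.
import requests
from qdrant_client import QdrantClient

OLLAMA_URL = "http://localhost:11434"
EMBED_MODEL = "mxbai-embed-large:latest"
CHAT_MODEL = "llama2"          # example only; use whichever chat model you pulled
COLLECTION = "rag_collection"

client = QdrantClient(url="http://localhost:6333")
history: list[dict] = []       # stands in for the memory node

def embed(text: str) -> list[float]:
    r = requests.post(f"{OLLAMA_URL}/api/embeddings",
                      json={"model": EMBED_MODEL, "prompt": text})
    r.raise_for_status()
    return r.json()["embedding"]

def answer(question: str, top_k: int = 4) -> str:
    # Retrieve the most similar chunks from the vector store.
    hits = client.search(collection_name=COLLECTION,
                         query_vector=embed(question), limit=top_k)
    context = "\n\n".join(h.payload["text"] for h in hits)

    # Ask the chat model, keeping prior turns for conversational memory.
    history.append({"role": "user",
                    "content": f"Context:\n{context}\n\nQuestion: {question}"})
    r = requests.post(f"{OLLAMA_URL}/api/chat",
                      json={"model": CHAT_MODEL, "messages": history, "stream": False})
    r.raise_for_status()
    reply = r.json()["message"]["content"]
    history.append({"role": "assistant", "content": reply})
    return reply

print(answer("What does the uploaded document say about pricing?"))
```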
How To Use
- Set up Local Services: Ensure you have a running Ollama service configured with mxbai-embed-large:latest and your preferred chat model (e.g., Llama2, Mistral), plus a local Qdrant instance.
- Configure Credentials: Update the 'Local Ollama service' and 'Local QdrantApi database' credentials within n8n to connect to your instances.
- Deploy Ingestion Form: Activate the 'On form submission' node. Share the generated webhook URL to allow users to upload PDF files.
- Ingest Documents: Upload your PDF files via the form. The workflow will automatically process, embed, and store them in the 'rag_collection' in Qdrant (a quick sanity check is sketched after these steps).
- Start Chatting: Activate the 'When chat message received' node. Interact with the generated webhook URL (or embed in a chat interface) to start asking questions to your custom RAG chatbot. The AI Agent will retrieve relevant information from your uploaded documents to provide answers.
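Before chatting, it can help to confirm that ingestion actually populated the vector store. Below is a small sanity-check sketch, assuming the local Qdrant endpoint and the 'rag_collection' name used above; note that the exact payload layout depends on how the ingesting node stores chunks.

```python
# Post-ingestion sanity check against the local Qdrant instance.
from qdrant_client import QdrantClient

client = QdrantClient(url="http://localhost:6333")

# Count how many chunks the ingestion form stored.
count = client.count(collection_name="rag_collection", exact=True)
print(f"rag_collection holds {count.count} points")

# Peek at a few stored records; payload keys depend on the ingesting node.
records, _next_page = client.scroll(
    collection_name="rag_collection", limit=3, with_payload=True
)
for record in records:
    print(record.id, str(record.payload)[:80])
```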
Apps Used
- Ollama (local embedding and chat models)
- Qdrant (local vector database)
Workflow JSON
{
"id": "ceed2757-83cc-4c2a-ae30-98c63b7e6636",
"name": "Build a Custom RAG Chatbot with Local LLMs (Ollama & Qdrant)",
"nodes": 8,
"category": "Operations",
"status": "active",
"version": "1.0.0"
}

Note: This is a sample preview. The full workflow JSON contains node configurations, credential placeholders, and execution logic.
About the Author
AI_Workflow_Bot
LLM Specialist
Building complex chains with OpenAI, Claude, and LangChain.
Related Workflows
Discover more workflows you might like
Instant WooCommerce Order Notifications via Telegram
When a new order is placed on your WooCommerce store, instantly receive detailed notifications directly to your Telegram chat. Stay on top of your e-commerce operations with real-time alerts, including order specifics and a direct link to view the order.
On-Demand Microsoft SQL Query Execution
This workflow allows you to manually trigger and execute any SQL query against your Microsoft SQL Server database. Perfect for ad-hoc data lookups, administrative tasks, or quick tests, giving you direct control over your database operations.
Automate Getty Images Editorial Search & CMS Integration
This n8n workflow automates searching for editorial images on Getty Images, extracts key details and embed codes, and prepares them for seamless integration into your Content Management System (CMS), streamlining your content creation process.