Build a Local RAG AI Assistant with Ollama & Qdrant
Empower your AI agents with a private, document-specific knowledge base using this n8n workflow. It enables local PDF ingestion into Qdrant via Ollama embeddings and provides a server trigger for AI tools to perform semantic retrieval from your custom data.
About This Workflow
This robust n8n workflow delivers a complete solution for creating a local, Retrieval Augmented Generation (RAG) system. It's divided into two key components: a RAG Ingestion Pipeline and an AI Server. The ingestion pipeline allows you to easily upload PDF documents via a simple web form, which are then automatically processed, chunked, embedded using your local Ollama instance, and stored in a Qdrant vector database. The AI Server acts as a powerful backend, enabling your custom AI agents or applications (via an MCP Client) to trigger semantic searches against this personalized knowledge base, retrieving relevant information on demand. This ensures your AI can provide accurate, context-aware responses based purely on your private data.
Key Features
- Local-first RAG Implementation: Utilize self-hosted Ollama for embeddings and Qdrant for vector storage, ensuring data privacy and control.
- Effortless PDF Ingestion: Upload PDF documents through a user-friendly n8n form to build your knowledge base.
- Intelligent Text Splitting: Documents are automatically chunked using a Recursive Character Text Splitter for optimized retrieval accuracy.
- AI Agent Integration: Designed to integrate seamlessly with custom AI agents or applications via the MCP Server Trigger, making your data accessible as a powerful tool.
- Configurable Embeddings: Leverage the mxbai-embed-large model from Ollama for high-quality semantic understanding.
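To make the embedding step concrete, here is a minimal sketch of calling a local Ollama instance's embeddings endpoint with mxbai-embed-large. The host, port, and the embed helper are illustrative assumptions; the workflow's own "Embeddings Ollama" node performs an equivalent call for you.

```python
# Minimal sketch (assumption: Ollama runs locally on its default port 11434
# and `ollama pull mxbai-embed-large` has already been run).
import requests

def embed(text: str) -> list[float]:
    """Return an embedding vector for `text` from the local Ollama API."""
    resp = requests.post(
        "http://localhost:11434/api/embeddings",
        json={"model": "mxbai-embed-large", "prompt": text},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["embedding"]

vector = embed("What does this workflow do?")
print(len(vector))  # mxbai-embed-large typically yields 1024-dimensional vectors
```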
How To Use
To get this powerful workflow up and running, follow these steps:
- Set Up Credentials: Ensure you have configured your local Qdrant, Ollama, and MCP Client (STDIO) API credentials within n8n. These are essential for both document ingestion and AI server operation.
- Ingest Your Documents (RAG Ingestion Pipeline):
- Activate the workflow. The right-hand section, labeled "RAG Ingestion Pipeline," will handle document processing.
- Access the webhook URL provided by the "On form submission" node (labeled "Ingest PDF Files in semantic database").
- Use the web form to upload your PDF files. The workflow will automatically load, split, embed with Ollama, and store the document chunks in your Qdrant mcp_rag collection (see the Python sketch after this list for what that looks like under the hood).
- Run this section as many times as needed to build your complete knowledge base.
- Activate the AI Server (MCP Server):
- Ensure the entire workflow is active. The left-hand section, labeled "MCP Server," is now ready.
- The "MCP Server Trigger" node will listen for incoming requests from your MCP Client (or any system configured to interact with its webhook).
- When triggered, the server will utilize the "Qdrant Vector Store" (powered by "Embeddings Ollama") as a tool to perform semantic searches against your ingested documents, providing relevant data back to the requesting AI agent.
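To make the data flow concrete, here is a Python sketch of what the two halves of the workflow do under the hood: chunk and embed text into the Qdrant mcp_rag collection (the ingestion pipeline), then embed a question and run a semantic search against it (what the MCP Server's tool does when triggered). The hosts, ports, chunk sizes, placeholder text, and the naive character-based chunker are assumptions; the actual workflow uses a Recursive Character Text Splitter and its own node configuration.

```python
# Illustrative sketch only — the n8n workflow does all of this via its nodes.
# Assumptions: Qdrant at localhost:6333, Ollama at localhost:11434, and a
# naive fixed-size chunker standing in for the Recursive Character Text Splitter.
import uuid

import requests
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams

COLLECTION = "mcp_rag"

def embed(text: str) -> list[float]:
    resp = requests.post(
        "http://localhost:11434/api/embeddings",
        json={"model": "mxbai-embed-large", "prompt": text},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["embedding"]

def chunk(text: str, size: int = 1000, overlap: int = 200) -> list[str]:
    # Naive character chunking with overlap; the workflow's splitter respects
    # paragraph and sentence boundaries, which this does not.
    step = size - overlap
    return [text[i : i + size] for i in range(0, len(text), step)]

client = QdrantClient(url="http://localhost:6333")

# --- Ingestion: mirror of the "RAG Ingestion Pipeline" ---
if not client.collection_exists(COLLECTION):
    client.create_collection(
        collection_name=COLLECTION,
        vectors_config=VectorParams(size=1024, distance=Distance.COSINE),  # mxbai-embed-large dimension
    )

pdf_text = "...text extracted from an uploaded PDF..."  # placeholder document text
client.upsert(
    collection_name=COLLECTION,
    points=[
        PointStruct(id=str(uuid.uuid4()), vector=embed(c), payload={"text": c})
        for c in chunk(pdf_text)
    ],
)

# --- Retrieval: mirror of what the MCP Server tool does when triggered ---
hits = client.search(
    collection_name=COLLECTION,
    query_vector=embed("What does the document say about pricing?"),
    limit=4,
)
for hit in hits:
    print(f"{hit.score:.3f}  {hit.payload['text'][:80]}")
```

In practice the workflow handles all of this through its nodes; the sketch only shows how the pieces between Ollama and Qdrant fit together.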
Apps Used
- Ollama (local embeddings)
- Qdrant (vector storage)
- MCP Server Trigger / MCP Client (STDIO)
Workflow JSON
{
  "id": "ec4d6bdb-3b4c-4b8d-a63b-ea3fc9e1d614",
  "name": "Build a Local RAG AI Assistant with Ollama & Qdrant",
  "nodes": 5,
  "category": "Operations",
  "status": "active",
  "version": "1.0.0"
}
Note: This is a sample preview. The full workflow JSON contains node configurations, credential placeholders, and execution logic.
About the Author
Free n8n Workflows Official
System Admin
The official repository for verified enterprise-grade workflows.