RAG on Living Notion Data with OpenAI
This n8n workflow enables Retrieval Augmented Generation (RAG) on dynamic Notion data. It fetches Notion page blocks, splits them into chunks, embeds them using OpenAI, stores them in a vector database, and allows for question answering.
About This Workflow
Overview
This workflow automates the process of creating a searchable knowledge base from your Notion pages. It leverages Retrieval Augmented Generation (RAG) to allow users to ask questions and receive answers based on the content within your Notion documents. The process involves fetching Notion page blocks, splitting them into manageable chunks, generating embeddings for each chunk using OpenAI, and storing these embeddings in a vector database (Supabase is used in this example). When a question is received, the workflow retrieves relevant chunks from the vector store, passes them along with the question to an OpenAI language model, and generates a concise answer.
This workflow is particularly useful for teams that rely heavily on Notion for documentation, internal knowledge bases, or project management, as it makes the information more accessible and actionable through natural language queries.
Key Features
- Dynamic Data Ingestion: Fetches content directly from Notion pages.
- Text Chunking: Divides long documents into smaller, more manageable pieces for efficient processing.
- OpenAI Embeddings: Utilizes OpenAI's models to create vector representations of text chunks.
- Vector Database Storage: Stores embeddings in a vector database (Supabase example) for fast similarity search.
- Retrieval Augmented Generation (RAG): Enables question answering based on the vectorized Notion data.
- Flexible Triggering: Can be triggered manually or via chat messages (with potential for Notion triggers).
- Embedding Management: Includes a step to delete old embeddings before adding new ones.
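The retrieval step at the heart of RAG can be sketched in plain Python: each chunk is stored alongside its embedding vector, and a question is answered by first finding the stored chunks whose embeddings are closest to the question's embedding. The toy 3-dimensional vectors below stand in for real OpenAI embeddings (which have 1,536+ dimensions and would come from the embeddings API); the store is a plain list standing in for the Supabase vector table.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def retrieve(query_vec, store, top_k=2):
    """Return the texts of the top_k chunks closest to query_vec."""
    ranked = sorted(
        store,
        key=lambda item: cosine_similarity(query_vec, item["embedding"]),
        reverse=True,
    )
    return [item["text"] for item in ranked[:top_k]]

# Toy "embeddings"; in the real workflow these come from OpenAI.
store = [
    {"text": "Onboarding checklist for new hires", "embedding": [0.9, 0.1, 0.0]},
    {"text": "Quarterly OKR planning notes",       "embedding": [0.1, 0.9, 0.1]},
    {"text": "HR policies and benefits overview",  "embedding": [0.8, 0.2, 0.1]},
]

query = [1.0, 0.0, 0.0]  # pretend embedding of "How do I onboard a new hire?"
print(retrieve(query, store))
# → ['Onboarding checklist for new hires', 'HR policies and benefits overview']
```

The retrieved chunks would then be passed, together with the original question, to the OpenAI chat model as context for the final answer.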
How To Use
- Configure Notion Credentials: Set up your Notion API credentials in n8n.
- Configure OpenAI Credentials: Set up your OpenAI API credentials in n8n.
- Configure Supabase Credentials (or your chosen Vector DB): Set up your database credentials.
- Set Notion Page/Database ID: In the "Get page blocks" node, specify the ID of the Notion page or database you want to index.
- Set Input Reference: Ensure the "Input Reference" node correctly points to the data containing the Notion page ID and name.
- Adjust Text Splitter: Configure the "Token Splitter" node's chunkSize, and potentially add overlap for better retrieval accuracy.
- Define Metadata: In the "Default Data Loader" node, map any relevant metadata (such as page ID and name) you want to associate with the embeddings.
- Set Up Trigger: Choose your preferred trigger. The "When chat message received" node is shown for interactive querying, but a "Schedule Trigger" or "Notion Trigger" can be used for periodic updates.
- Test the Workflow: Trigger the workflow and test it by asking relevant questions.
Workflow JSON
{
"id": "7cdd6e55-17d5-412e-a5ab-d56d6c0dae47",
"name": "RAG on Living Notion Data with OpenAI",
"nodes": 0,
"category": "AI & Machine Learning",
"status": "active",
"version": "1.0.0"
}

Note: This is a sample preview. The full workflow JSON contains node configurations, credential placeholders, and execution logic.
About the Author
Crypto_Watcher
Web3 Developer
Automated trading bots and blockchain monitoring workflows.
Related Workflows
Discover more workflows you might like
Visa Requirement Checker
A workflow to check visa requirements based on user input, leveraging Langchain, Cohere embeddings, Weaviate vector store, and Anthropic LLM.
OpenAI Text-to-Speech Workflow
Generate audio from text using OpenAI's TTS API.
AI Assistant for Structured Metadata Generation
Automates the generation of structured metadata in English and Chinese using AI, leveraging communication platforms and various data sources.