Automate Podcast Transcription Publishing with RAG Agent
This workflow automates the process of transcribing podcasts and publishing them. It utilizes a Webhook Trigger, Text Splitter, Embeddings, and a RAG Agent to process and store podcast data, logging the outcome to a Google Sheet.
About This Workflow
Overview
This n8n workflow is designed to automate the entire lifecycle of processing podcast transcriptions and making them available. It begins with a Webhook Trigger to receive incoming podcast data. The data is then split into manageable chunks using the Text Splitter node. These chunks are converted into vector embeddings using the Embeddings node (Cohere). These embeddings are then inserted into a Pinecone vector database for efficient retrieval. A Pinecone Query node retrieves relevant information, which, along with context from Window Memory and a Chat Model, is fed into a RAG Agent. This agent processes the data, and its output (status) is logged to a Google Sheet via the Append Sheet node. Error handling is implemented with a Slack Alert for any failures.
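As an illustration, the incoming webhook payload might look like the following. The field names here are hypothetical — the actual shape depends on whatever podcast hosting service or API you point at the webhook:

```json
{
  "episode_title": "Episode 42: Vector Databases Explained",
  "transcript": "Welcome back to the show. Today we are talking about...",
  "published_at": "2024-05-01T09:00:00Z"
}
```

The RAG Agent would typically operate on the `transcript` field, while metadata such as the title and publish date can be carried through to the Google Sheets log.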
Key Features
- Trigger podcast processing via a webhook.
- Split long transcriptions into manageable text chunks.
- Generate vector embeddings for efficient semantic search.
- Store and retrieve data from a Pinecone vector database.
- Utilize a Retrieval-Augmented Generation (RAG) agent for intelligent processing.
- Log processing status to a Google Sheet.
- Send Slack alerts for errors.
How To Use
- Set up a Webhook Trigger to receive incoming podcast data (e.g., from a podcast hosting service or a custom API).
- Configure the Text Splitter node with appropriate `chunkSize` and `chunkOverlap` settings.
- Ensure your Cohere API key is set up in n8n credentials for the Embeddings node.
- Configure your Pinecone index for the Pinecone Insert and Pinecone Query nodes, ensuring the `pineconeIndex` parameter matches your index name.
- Set up your Anthropic API key for the Chat Model node.
- Configure the RAG Agent with a system message and prompt to define how it should process the incoming data.
- Set up your Google Sheets credentials and specify the `SHEET_ID` and `Log` sheet name for the Append Sheet node.
- Configure your Slack credentials and channel for the Slack Alert node.
- Connect the nodes according to the workflow logic and activate the workflow.
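As a rough sketch, the key parameters from the steps above might look like this in the node configurations. All values below are placeholders for illustration, not the template's actual settings:

```json
{
  "Text Splitter": {
    "chunkSize": 1000,
    "chunkOverlap": 200
  },
  "Pinecone Insert": {
    "pineconeIndex": "podcast-transcripts"
  },
  "Append Sheet": {
    "documentId": "YOUR_SHEET_ID",
    "sheetName": "Log"
  }
}
```

A `chunkOverlap` of roughly 10–20% of `chunkSize` is a common starting point, so that sentences cut at a chunk boundary still appear intact in the neighboring chunk.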
Apps Used
Workflow JSON
```json
{
  "id": "9b646ff3-5984-4636-a8b1-5d4c5a553839",
  "name": "Automate Podcast Transcription Publishing with RAG Agent",
  "nodes": 0,
  "category": "Misc",
  "status": "active",
  "version": "1.0.0"
}
```

Note: This is a sample preview. The full workflow JSON contains node configurations, credentials placeholders, and execution logic.
About the Author
Free n8n Workflows Official
System Admin
The official repository for verified enterprise-grade workflows.
Related Workflows
Automate CSV Attachment to Airtable with a RAG Agent
This n8n workflow automates the process of handling CSV attachments by using a Retrieval Augmented Generation (RAG) agent. It leverages a Webhook Trigger, Text Splitter, Embeddings, Pinecone, and a Chat Model to intelligently process and log data.
Automated Drink Water Reminder Workflow
This workflow uses n8n and Langchain to create an automated drink water reminder system. It leverages a Webhook Trigger, Text Splitter, Embeddings, and Supabase for RAG agent functionality, ultimately logging reminders to a Google Sheet.
Integrate Blog Comments with Discord via Webhook and AI
This workflow automates the process of receiving blog comments via a Webhook Trigger and processing them using Langchain AI. The processed comments are then stored in Supabase and logged to a Google Sheet, with error alerts sent to Slack.