AI-Powered Content Analysis and Fact-Checking with Ollama
This n8n workflow uses Ollama and Langchain to analyze text, split it into sentences, and then leverage an LLM to perform fact-checking. It identifies factual inaccuracies and provides a summary of the findings.
About This Workflow
Overview
This n8n workflow automates content analysis and fact-checking using Large Language Models (LLMs) through Ollama. It begins by receiving input text, which is split into individual sentences by a custom JavaScript function in a Code node; the splitting is designed to handle dates and list items so that sentences are not broken in the wrong places. The workflow then prepares the data for the LLM by pairing the relevant facts with each individual sentence. The Basic LLM Chain node, backed by an Ollama Chat Model, analyzes each sentence, determines whether it is chit-chat or a factual statement, and assesses its factual accuracy. A Filter step then isolates the statements the LLM marked as incorrect ('No'), and the results are aggregated into a comprehensive summary of the findings.
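The preview does not include the Code node's script, but a minimal sketch of what such date- and list-aware splitting could look like is shown below. The splitSentences helper, the regular expressions, the text field name, and the item shape are illustrative assumptions, not the workflow's actual code.

// Minimal sketch of a sentence-splitting Code node (assumed, not the workflow's script).
// Idea: protect periods that belong to dates (e.g. "5. March 2024") and to numbered
// list markers ("1. ", "2. ") so they do not trigger a split, then split on
// sentence-ending punctuation and restore the protected periods.
function splitSentences(text) {
  const PLACEHOLDER = '\u0000';

  const protectedText = text
    // "5. March" -> ordinal/date period is protected
    .replace(/\b(\d{1,2})\.(?=\s+\p{L})/gu, `$1${PLACEHOLDER}`)
    // "1. First point" at the start of a line -> list marker is protected
    .replace(/^(\s*\d+)\.(?=\s)/gm, `$1${PLACEHOLDER}`);

  return protectedText
    .split(/(?<=[.!?])\s+/)                        // split after ., ! or ? plus whitespace
    .map(s => s.replaceAll(PLACEHOLDER, '.').trim())
    .filter(s => s.length > 0);
}

// n8n Code node convention ("Run Once for All Items"): emit one item per sentence.
const results = [];
for (const item of $input.all()) {
  for (const sentence of splitSentences(item.json.text ?? '')) {
    results.push({ json: { sentence } });
  }
}
return results;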
Key Features
- Advanced text splitting into sentences, preserving dates and list items (illustrated with an example after this list).
- Integration with Ollama for local LLM deployment.
- Utilizes Langchain for structured LLM interaction.
- Identifies and categorizes factual statements versus chit-chat.
- Generates a summary of identified factual inaccuracies.
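To make the date and list handling concrete, this is how the splitSentences sketch above would be expected to behave on a small illustrative sample:

// Illustrative expectation only; assumes the splitSentences sketch shown earlier.
const sample = 'The report was published on 5. March 2024. Key points:\n1. Revenue grew 10%.\n2. Costs fell.';
console.log(splitSentences(sample));
// Expected output (roughly):
// [
//   'The report was published on 5. March 2024.',
//   'Key points:\n1. Revenue grew 10%.',
//   '2. Costs fell.'
// ]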
How To Use
- Input Data: Provide the text to be analyzed. This can be done via the Edit Fields node for testing, or through triggers like When clicking ‘Test workflow’ or When Executed by Another Workflow.
- Sentence Splitting: The Code node processes the input text, splitting it into individual sentences. The Split Out1 node then separates each sentence for individual processing.
- LLM Analysis: The Basic LLM Chain4 node, connected to the Ollama Chat Model, analyzes each sentence. The Merge1 node combines the extracted facts with the individual sentences for context (a rough sketch of such a prompt appears after this list).
- Fact-Checking: The Filter node identifies and isolates the statements that the LLM has flagged as factually incorrect.
- Aggregation: The Aggregate node compiles the results of the LLM analysis, and the Basic LLM Chain node (the second LLM node in the workflow) summarizes the findings, including the number of errors and a list of problematic statements.
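The actual prompt inside Basic LLM Chain4 is not part of this preview. The snippet below is a rough sketch of the kind of per-sentence fact-checking prompt such a chain might send to the Ollama Chat Model; the facts and sentence field names and the Yes/No/Chit-chat answer format are assumptions based on the description above, not the workflow's real template.

// Rough sketch of a per-sentence fact-checking prompt (assumed, not the workflow's template).
const buildFactCheckPrompt = ({ facts, sentence }) => `
You are a careful fact-checker. Use only the reference facts below.

Reference facts:
${facts}

Statement to check:
"${sentence}"

Answer with exactly one of:
- "Chit-chat" if the statement contains no checkable claim,
- "Yes" if it is consistent with the reference facts,
- "No" if it contradicts them, followed by a one-line explanation.
`.trim();

// Downstream, the Filter node would keep only items whose answer starts with "No",
// and the Aggregate node would collect them for the final summarizing LLM call.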
Apps Used
- Ollama (Chat Model)
- Langchain (n8n LLM Chain nodes)
Workflow JSON
{
"id": "8c290cea-41be-409f-a8ea-c65a90ed6277",
"name": "AI-Powered Content Analysis and Fact-Checking with Ollama",
"nodes": 0,
"category": "AI and LLMs",
"status": "active",
"version": "1.0.0"
}
Note: This is a sample preview. The full workflow JSON contains node configurations, credentials placeholders, and execution logic.
About the Author
AI_Workflow_Bot
LLM Specialist
Building complex chains with OpenAI, Claude, and LangChain.