Testing Multiple Local LLMs with LM Studio
This n8n workflow empowers you to efficiently test and compare multiple local Large Language Models (LLMs) served via LM Studio. Input chat messages, dynamically select models, and analyze their responses for performance, readability, and other key metrics.
About This Workflow
Unlock the full potential of your local LLM experiments with this comprehensive n8n workflow. Designed for developers and AI enthusiasts, it seamlessly integrates with LM Studio to provide a robust framework for comparing different language models. Send chat messages, dynamically route them to various local LLMs, and gain immediate insights into response times, content quality, and readability. Whether you're fine-tuning prompts or evaluating model capabilities, this workflow offers the tools to make informed decisions about your local AI deployments.
Key Features
- Dynamic LLM Selection: Automatically retrieve and switch between multiple local LLM models loaded in LM Studio.
- Performance Tracking: Measure and compare response times for each LLM, providing valuable performance insights.
- Comprehensive Text Analysis: Evaluate model outputs with readability scores (Flesch-Kincaid) and word/sentence metrics (see the sketch after this list).
- Customizable Prompts & Settings: Tailor prompts and LLM parameters (Temperature, Top P, Presence Penalty) to fine-tune testing criteria.
- Optional Google Sheet Logging: Log all test data, including prompts, responses, and analytical scores, for easy comparison and historical tracking.
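The metrics above can be reproduced in a few lines. Below is a minimal sketch of the analysis step as an n8n Code node (JavaScript); the field name $json.response, the naive vowel-group syllable counter, and the choice of the Flesch-Kincaid grade-level formula are illustrative assumptions, not necessarily the workflow's exact logic.

// Sketch of a text-analysis step for an n8n Code node.
// Assumes the model's reply arrives on the item as $json.response.
const text = $json.response || '';

const words = text.split(/\s+/).filter(Boolean);
const sentences = text.split(/[.!?]+/).filter(s => s.trim().length > 0);

// Rough English syllable estimate: count vowel groups per word.
const countSyllables = (word) => {
  const groups = word.toLowerCase().match(/[aeiouy]+/g);
  return groups ? groups.length : 1;
};

const wordCount = Math.max(words.length, 1);
const sentenceCount = Math.max(sentences.length, 1);
const syllables = words.reduce((sum, w) => sum + countSyllables(w), 0);

// Flesch-Kincaid grade level:
// 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
const readability = 0.39 * (wordCount / sentenceCount)
  + 11.8 * (syllables / wordCount) - 15.59;

return [{
  json: {
    ...$json,
    'Readability Score': Number(readability.toFixed(2)),
    'Word Count': wordCount,
    'Sentence Count': sentenceCount,
    'Average Word Length': Number((words.join('').length / wordCount).toFixed(2)),
    'Average Sentence Length': Number((wordCount / sentenceCount).toFixed(2)),
  },
}];

The returned keys deliberately mirror the Google Sheet headers listed under How To Use, so the optional Google Sheets node can map them one-to-one.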
How To Use
- Install LM Studio: Download and install LM Studio on your local machine. Load the LLM models you intend to test within LM Studio.
- Update Local IP: In the 'Get Models' HTTP Request node, update the Base URL (e.g., http://192.168.1.1:1234/v1/models) to match your LM Studio server's IP address.
- Configure LLM Settings: Adjust parameters like Temperature, Top P, and Presence Penalty in the 'Run Model with Dynamic Inputs' node to control model behavior during testing (a request sketch follows this list).
- Set System Prompt (Optional): Add a System Prompt in the LLM Chain (e.g., "Focus on ensuring that responses are concise, clear, and easily understandable by a 5th-grade reading level.") to guide model responses toward specific testing objectives.
- Create Google Sheet (Optional): For detailed logging, create a Google Sheet with the headers: Prompt, Time Sent, Time Received, Total Time Spent, Model, Response, Readability Score, Average Word Length, Word Count, Sentence Count, Average Sentence Length. Then configure the Google Sheets node in the workflow to map these fields accordingly.
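Under the hood, the IP and settings steps talk to LM Studio's OpenAI-compatible REST API. The sketch below (plain Node.js 18+, run as an ES module so top-level await works) approximates what the 'Get Models' and 'Run Model with Dynamic Inputs' nodes do, including the response-time measurement; the IP address, prompt, and sampling values are placeholders rather than the workflow's exact configuration.

// List the models loaded in LM Studio, then send the same prompt to each,
// timing the round trip. Replace BASE with your LM Studio server address.
const BASE = 'http://192.168.1.1:1234/v1';

const models = (await (await fetch(`${BASE}/models`)).json()).data;

for (const model of models) {
  const sent = Date.now();

  const res = await fetch(`${BASE}/chat/completions`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      model: model.id,
      temperature: 0.7,     // example values: tune per test run
      top_p: 0.9,
      presence_penalty: 0,
      messages: [
        { role: 'system', content: 'Respond at a 5th-grade reading level.' },
        { role: 'user', content: 'Explain what an LLM is.' },
      ],
    }),
  });

  const reply = (await res.json()).choices[0].message.content;
  console.log(model.id, `${Date.now() - sent} ms`, reply.slice(0, 80));
}

Iterating over the full model list this way is what lets the workflow compare every loaded model against the same prompt in a single run.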
Apps Used
LM Studio, Google Sheets
Workflow JSON
{
"id": "c342f853-e4d8-4ff0-9a62-cf18999078dc",
"name": "Testing Multiple Local LLMs with LM Studio",
"nodes": 5,
"category": "DevOps",
"status": "active",
"version": "1.0.0"
}

Note: This is a sample preview. The full workflow JSON contains node configurations, credentials placeholders, and execution logic.
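For orientation, each entry in the full JSON's nodes array follows n8n's standard node schema. A hypothetical fragment for the 'Get Models' node might look like the following (field values are illustrative, not copied from this workflow):

{
  "parameters": {
    "url": "http://192.168.1.1:1234/v1/models",
    "options": {}
  },
  "name": "Get Models",
  "type": "n8n-nodes-base.httpRequest",
  "typeVersion": 4,
  "position": [460, 300]
}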
About the Author
Crypto_Watcher
Web3 Developer
Automated trading bots and blockchain monitoring workflows.
Related Workflows
Discover more workflows you might like
Build a Custom OpenAI-Compatible LLM Proxy with n8n
This workflow transforms n8n into a powerful OpenAI-compatible API proxy, allowing you to centralize and customize how your applications interact with various Large Language Models. It enables a unified interface for diverse AI capabilities, including multimodal input handling and dynamic model routing.
Effortless Bug Reporting: Slack Slash Command to Linear Issue
Streamline your bug reporting process by instantly creating Linear issues directly from Slack using a simple slash command. This workflow enhances team collaboration by providing immediate feedback and a structured approach to logging defects, saving valuable time for development and QA teams.
Automate Qualys Report Generation and Retrieval
Streamline your Qualys security reporting by automating the generation and retrieval of reports. This workflow ensures timely access to crucial security data without manual intervention.