Unlock the Power of Local LLMs: Test Multiple Models with Ease
Streamline your local Large Language Model (LLM) testing by automating it with n8n and LM Studio. This workflow lets you query and compare multiple local LLMs efficiently, accelerating your AI development and evaluation process.
About This Workflow
This n8n workflow empowers you to rigorously test and compare multiple local Large Language Models (LLMs) hosted via LM Studio. By automating the process of fetching available models, configuring LLM parameters like temperature and top-p, and even optionally logging results to a Google Sheet, you can gain valuable insights into model performance. The workflow guides you through setting up LM Studio, updating connection details, and defining system prompts for tailored testing. Analyze response quality, readability, and timing to make informed decisions about which LLM best suits your needs.
Key Benefits:
- Efficient Testing: Automate the repetitive tasks of LLM evaluation.
- Informed Decisions: Compare models based on objective metrics and qualitative feedback.
- Customizable Workflows: Adapt the workflow to your specific testing criteria.
- Local Control: Leverage the power of LLMs without relying on external APIs.
Key Features
- Dynamic Model Discovery: Automatically fetches a list of all loaded LLMs from your LM Studio server.
- Configurable LLM Parameters: Easily adjust temperature, top-p, and presence penalty for fine-tuned model behavior.
- Optional Google Sheets Integration: Log detailed test results, including prompts, responses, and performance metrics.
- Readability & Performance Analysis: Built-in tools to assess response readability and calculate response times.
- Guided Setup: Clear instructions and sticky notes within the workflow to simplify configuration.
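LM Studio's server exposes an OpenAI-compatible REST API, so the model-discovery step boils down to a GET request against `/v1/models` and extracting the ids from the response. A minimal Python sketch of that step, using a hardcoded sample response in place of a live server (the field names follow the OpenAI-compatible schema; the model ids shown are placeholders):

```python
import json

# Sample payload in the shape of an OpenAI-compatible
# GET /v1/models response; the model ids are placeholders.
sample_response = json.loads("""
{
  "object": "list",
  "data": [
    {"id": "llama-3.1-8b-instruct", "object": "model"},
    {"id": "mistral-7b-instruct", "object": "model"}
  ]
}
""")

def extract_model_ids(response: dict) -> list[str]:
    """Pull the model ids out of a /v1/models response body."""
    return [entry["id"] for entry in response.get("data", [])]

model_ids = extract_model_ids(sample_response)
print(model_ids)  # one entry per model currently loaded in LM Studio
```

In the workflow itself, the 'Get Models' HTTP Request node performs the GET and downstream nodes iterate over the returned ids.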
How To Use
- Install and Setup LM Studio: Download, install, and configure LM Studio. Load your desired LLM models into the LM Studio server.
- Import the n8n Workflow: Import the provided n8n workflow JSON snippet into your n8n instance.
- Update LM Studio IP Address: In the 'Get Models' node (HTTP Request), update the `url` parameter to `http://YOUR_LM_STUDIO_IP:1234/v1/models` to match your LM Studio server's IP address.
- Configure LLM Settings: Adjust the 'Run Model with Dynamic Inputs' node's parameters (Temperature, Top P, Presence Penalty) as needed for your testing.
- Define System Prompt: Modify the content of the 'Sticky Note6' node to craft the system prompt that guides LLM responses.
- Optional: Google Sheets Setup: If you wish to log results, create a Google Sheet with the specified headers, then configure the Google Sheets node with your sheet's details and map the data fields accordingly.
- Run the Workflow: Trigger the workflow, then enter a prompt in the chat interface opened by the 'When chat message received' node.
- Analyze Results: Review the output in n8n, or check your Google Sheet for logged data, including response times and readability scores.
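Under the hood, the 'Run Model with Dynamic Inputs' step amounts to POSTing an OpenAI-style chat payload for each model and timing the round trip. A sketch of that per-model loop, with the HTTP call replaced by a stub so it runs offline; the parameter names (temperature, top_p, presence_penalty) match the OpenAI-compatible chat completions API, while `fake_completion` and the word-length readability heuristic are illustrative stand-ins, not the workflow's exact scoring logic:

```python
import time

def build_payload(model_id, system_prompt, user_prompt,
                  temperature=0.7, top_p=0.9, presence_penalty=0.0):
    """Chat-completions request body for one model under test."""
    return {
        "model": model_id,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
        "temperature": temperature,
        "top_p": top_p,
        "presence_penalty": presence_penalty,
    }

def avg_word_length(text):
    """Crude readability proxy: mean word length in characters."""
    words = text.split()
    return sum(len(w) for w in words) / len(words) if words else 0.0

def fake_completion(payload):
    # Stand-in for POST http://YOUR_LM_STUDIO_IP:1234/v1/chat/completions
    return "Local models keep your prompts on your own machine."

start = time.perf_counter()
payload = build_payload("llama-3.1-8b-instruct", "Be concise.",
                        "Why use local LLMs?")
reply = fake_completion(payload)
elapsed = time.perf_counter() - start  # response time, one of the logged metrics

print(payload["model"], round(avg_word_length(reply), 2), elapsed >= 0)
```

Each (prompt, response, timing, readability) tuple is what the optional Google Sheets node would log as one row per model.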
Workflow JSON
{
"id": "3bb855af-be9f-4780-8cfe-d21a281c65c0",
"name": "Unlock the Power of Local LLMs: Test Multiple Models with Ease",
"nodes": 24,
"category": "DevOps",
"status": "active",
"version": "1.0.0"
}

Note: This is a sample preview. The full workflow JSON contains node configurations, credentials placeholders, and execution logic.
About the Author
DevOps_Master_X
Infrastructure Expert
Specializing in CI/CD pipelines, Docker, and Kubernetes automations.