Testing Multiple Local LLMs with LM Studio
This workflow allows you to test multiple local Large Language Models (LLMs) hosted via LM Studio. It uses an HTTP Request node to get available models and an LM Chat OpenAI node to run queries against them, enabling comparative testing.
About This Workflow
Overview
This n8n workflow provides a structured approach to testing and comparing multiple local Large Language Models (LLMs) that are hosted and accessible through LM Studio. The core logic involves first identifying the available LLM models by querying the LM Studio server using an httpRequest node. Once the models are listed, you can then dynamically select and run prompts against them using the lmChatOpenAi node (configured to use your local endpoint). This setup is particularly useful for evaluating different models for specific tasks, comparing their performance, or assessing their output quality based on defined parameters like temperature and top-p.
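Because LM Studio's local server exposes an OpenAI-compatible REST API, the "Get Models" step the httpRequest node performs can be sketched outside n8n as a plain GET against the `/v1/models` endpoint. A minimal Python sketch, assuming the default base URL (`list_models` requires a running LM Studio server; `parse_model_ids` is a hypothetical helper added here for illustration):

```python
import json
import urllib.request

BASE_URL = "http://localhost:1234/v1"  # default LM Studio server address; change to match yours

def parse_model_ids(payload: dict) -> list[str]:
    """Extract model ids from an OpenAI-style model-list response:
    {"object": "list", "data": [{"id": "..."}, ...]}."""
    return [model["id"] for model in payload.get("data", [])]

def list_models(base_url: str = BASE_URL) -> list[str]:
    """Ask the LM Studio server which models are loaded (GET /v1/models)."""
    with urllib.request.urlopen(f"{base_url}/models") as resp:
        return parse_model_ids(json.load(resp))
```

The list of ids returned here is what the workflow iterates over when running prompts against each model in turn.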
Key Features
- Dynamically fetches a list of available local LLMs from LM Studio.
- Allows testing of prompts against multiple LLM models sequentially or in parallel.
- Supports configuration of LLM parameters like temperature, top-p, and presence penalty for fine-tuned testing.
- Includes sticky notes for detailed setup instructions and explanations of concepts like readability scores and prompt engineering.
How To Use
- Set up LM Studio: Download and install LM Studio, then load the LLM models you wish to test.
- Start the LM Studio Server: Launch the server within LM Studio and note the local address and port (the default is http://localhost:1234/v1).
- Update Workflow IP Address: In the Get Models (httpRequest) node and the Run Model with Dynamic Inputs (lmChatOpenAi) node, update the 'Base URL' to match your LM Studio server's address (e.g., http://192.168.1.179:1234/v1).
- Configure LLM Parameters: Adjust settings like 'Temperature', 'Top P', and 'Presence Penalty' in the Run Model with Dynamic Inputs node to fine-tune model behavior.
- Define Prompts: Use the When chat message received node or manually input prompts to send to the LLMs.
- Run the Workflow: Execute the workflow to test your chosen LLMs with your prompts.
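The steps above can be sketched end to end in Python: build an OpenAI-style chat-completion payload with the tunable parameters (temperature, top-p, presence penalty) and POST it to the server's `/v1/chat/completions` endpoint for each model. A minimal sketch, assuming the example server address from the setup steps (`build_chat_request` and `run_prompt` are hypothetical helper names; `run_prompt` needs a live LM Studio server):

```python
import json
import urllib.request

BASE_URL = "http://192.168.1.179:1234/v1"  # match your LM Studio server address

def build_chat_request(model: str, prompt: str,
                       temperature: float = 0.7,
                       top_p: float = 0.9,
                       presence_penalty: float = 0.0) -> dict:
    """Build an OpenAI-style chat-completion payload for one model,
    carrying the same parameters the lmChatOpenAi node exposes."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
        "top_p": top_p,
        "presence_penalty": presence_penalty,
    }

def run_prompt(model: str, prompt: str, base_url: str = BASE_URL) -> str:
    """POST the prompt to /v1/chat/completions and return the reply text."""
    body = json.dumps(build_chat_request(model, prompt)).encode()
    req = urllib.request.Request(
        f"{base_url}/chat/completions", data=body,
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return data["choices"][0]["message"]["content"]
```

Looping `run_prompt` over the model ids fetched earlier, with the same prompt and parameters, gives the side-by-side outputs used for comparative testing.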
Workflow JSON
{
"id": "770df086-fc05-4b79-970b-ee82fd2e1f85",
"name": "Testing Multiple Local LLMs with LM Studio",
"nodes": 0,
"category": "OpenAI and LLMs",
"status": "active",
"version": "1.0.0"
}

Note: This is a sample preview. The full workflow JSON contains node configurations, credential placeholders, and execution logic.