Build a Custom OpenAI-Compatible LLM Proxy with n8n
This workflow transforms n8n into a powerful OpenAI-compatible API proxy, allowing you to centralize and customize how your applications interact with various Large Language Models. It enables a unified interface for diverse AI capabilities, including multimodal input handling and dynamic model routing.
About This Workflow
Unlock advanced AI integration by making n8n serve as your custom OpenAI-compatible API proxy. This sophisticated workflow allows you to route requests from your applications through n8n's flexible automation, presenting a unified v1/models and v1/chat/completions endpoint. Behind this facade, n8n orchestrates interactions with actual OpenAI models, handles complex multimodal inputs (text, images, files), and can even integrate Langchain AI Agents for more intelligent, tool-augmented responses. It's the perfect solution for centralizing LLM access, adding custom logic, or abstracting different AI providers under a single, familiar API surface.
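To act as a drop-in replacement, the emulated `v1/models` endpoint must return the same JSON envelope the real OpenAI API uses. A minimal Python sketch of that response shape (the helper name and model IDs are placeholders for illustration, not the workflow's actual node logic):

```python
import json
import time

def build_models_response(model_ids):
    """Build an OpenAI-compatible /v1/models response body.

    The {"object": "list", "data": [...]} envelope mirrors the shape the
    real API returns, so OpenAI client libraries parse it unchanged.
    """
    return {
        "object": "list",
        "data": [
            {
                "id": model_id,
                "object": "model",
                "created": int(time.time()),
                "owned_by": "n8n-proxy",  # placeholder owner string
            }
            for model_id in model_ids
        ],
    }

# Example: what the n8n webhook could serve for two upstream models
print(json.dumps(build_models_response(["gpt-4o", "gpt-4o-mini"]), indent=2))
```

In the workflow itself, this shaping would live in a Code or Set node between the upstream model fetch and the webhook response.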
Key Features
- OpenAI API Emulation: n8n webhooks act as custom `v1/models` and `v1/chat/completions` endpoints, allowing seamless integration with applications expecting the OpenAI API structure.
- Multimodal Input Handling: Automatically remaps chat messages containing text, image URLs, and file URLs into a unified schema for advanced LLM interactions.
- Langchain AI Agent Integration: Leverage the power of Langchain agents within your custom API, enabling complex decision-making and tool use.
- Dynamic Model Listing: The emulated `/models` endpoint dynamically fetches and exposes available OpenAI models.
- Customizable LLM Routing: Implement conditional logic to direct requests based on parameters like streaming, enabling tailored responses for diverse scenarios.
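The multimodal remapping listed above can be sketched in Python. The unified schema below (a flat list of `{"type", "value"}` parts) is an assumed target format for illustration only; the workflow's internal schema may differ:

```python
def remap_message_content(content):
    """Flatten OpenAI-style chat message content into a unified part list.

    OpenAI messages carry either a plain string or a list of typed parts
    (text, image_url, file); this normalizes both into one schema.
    """
    if isinstance(content, str):  # plain text message
        return [{"type": "text", "value": content}]
    parts = []
    for part in content:
        kind = part.get("type")
        if kind == "text":
            parts.append({"type": "text", "value": part["text"]})
        elif kind == "image_url":
            parts.append({"type": "image", "value": part["image_url"]["url"]})
        elif kind == "file":
            # hypothetical file part carrying a URL
            parts.append({"type": "file", "value": part["file"]["url"]})
    return parts

# Both a bare string and a typed-part list collapse to the same shape:
print(remap_message_content("Hello"))
print(remap_message_content([
    {"type": "text", "text": "Describe this"},
    {"type": "image_url", "image_url": {"url": "https://example.com/cat.png"}},
]))
```

In n8n this would typically be a Code node running once per incoming message array.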
How To Use
- Create New OpenAI Credentials: In n8n, create two new OpenAI credentials. For the first, set the Base URL to `https://<your_n8n_url>/webhook/n8n-responses-api`. For the second, ensure it points to the standard OpenAI API base URL (`https://api.openai.com/v1`).
- Configure LLM Nodes: Ensure your Langchain AI Agent and other LLM nodes are configured to use the appropriate custom OpenAI credentials, directing their requests to your n8n-proxied API.
- Activate Workflow: Activate this n8n workflow. This will expose the custom `/n8n-responses-api/models` and `/n8n-responses-api/chat/completions` webhooks.
- Send Requests: Your applications can now send requests to these n8n webhook URLs, mimicking OpenAI API calls, and n8n will handle the routing and processing.
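Once the workflow is active, a client can address the webhook exactly as it would the OpenAI API. A minimal Python sketch using only the standard library (the base URL is a placeholder, and the request is only constructed here, not sent):

```python
import json
from urllib import request

# Placeholder: substitute your real n8n instance URL
BASE_URL = "https://<your_n8n_url>/webhook/n8n-responses-api"

def build_chat_request(messages, model="gpt-4o", stream=False):
    """Construct an OpenAI-style POST to the proxy's chat/completions webhook."""
    body = json.dumps({"model": model, "messages": messages, "stream": stream})
    return request.Request(
        f"{BASE_URL}/chat/completions",
        data=body.encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request([{"role": "user", "content": "Hello"}])
# Sending it would be: request.urlopen(req) -- requires the workflow to be live.
```

Because the payload matches the OpenAI chat-completions schema, an existing OpenAI SDK pointed at `BASE_URL` would work the same way without custom request code.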
Workflow JSON
{
"id": "262a20f5-fb12-4b0c-bd79-fbfe8300e3b5",
"name": "Build a Custom OpenAI-Compatible LLM Proxy with n8n",
"nodes": 29,
"category": "DevOps",
"status": "active",
"version": "1.0.0"
}

Note: This is a sample preview. The full workflow JSON contains node configurations, credential placeholders, and execution logic.
About the Author
DevOps_Master_X
Infrastructure Expert
Specializing in CI/CD pipelines, Docker, and Kubernetes automations.