Proxy GitHub Models As OpenAI API With n8n
This n8n workflow transforms the GitHub Models API into an OpenAI-compatible endpoint, allowing seamless integration with tools and applications designed for OpenAI. It provides custom API routes for listing models and handling chat completions, all powered by n8n's robust automation capabilities.
About This Workflow
Unlock the power of GitHub Models by making them instantly compatible with any application or system built for OpenAI's API. This n8n workflow acts as a clever proxy, exposing custom webhook endpoints that mimic OpenAI's /v1/models and /v1/chat/completions routes. It fetches available models and processes chat completion requests through GitHub's AI services, re-mapping responses to fit the OpenAI standard. This means you can leverage advanced GitHub AI directly within your existing OpenAI-integrated tools, without writing a single line of adapter code. Simplify your AI infrastructure and gain flexibility by routing various LLM providers through a unified n8n interface.
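The response re-mapping described above can be sketched in a few lines: each entry from the GitHub Models catalog is reshaped into the object OpenAI's GET /v1/models returns. This is a minimal illustration, not the workflow's actual node code, and the GitHub-side field names ("name", "publisher") are assumptions about the catalog payload.

```python
def to_openai_model(entry: dict) -> dict:
    """Convert one GitHub Models catalog entry into an OpenAI model object."""
    return {
        "id": entry.get("id") or entry.get("name", ""),
        "object": "model",
        "owned_by": entry.get("publisher", "github"),
    }

def to_openai_model_list(entries: list) -> dict:
    """Wrap converted entries in OpenAI's standard list envelope."""
    return {"object": "list", "data": [to_openai_model(e) for e in entries]}
```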
Key Features
- OpenAI API Compatibility: Emulates OpenAI's API structure for models and chat completions.
- GitHub Models Integration: Directly communicates with the models.github.ai catalog and inference endpoints.
- Custom Webhook Endpoints: Provides dedicated /github-models/models and /github-models/chat/completions routes.
- Response Re-mapping: Automatically transforms GitHub's API responses into an OpenAI-like format.
- Seamless LLM Tooling: Allows existing OpenAI-configured tools and n8n nodes to utilize GitHub Models effortlessly.
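The chat-completions leg of the proxy can be pictured as a simple request rewrite: take an OpenAI-style body from the webhook and forward only the fields GitHub's inference endpoint expects. The sketch below is illustrative, not the workflow's actual node code; the endpoint URL and the set of accepted fields are assumptions, so check GitHub's Models documentation for the current API.

```python
# Assumed GitHub Models inference endpoint (verify against GitHub's docs).
GITHUB_INFERENCE_URL = "https://models.github.ai/inference/chat/completions"

def build_forwarded_request(openai_body: dict, github_token: str) -> dict:
    """Return the upstream request the proxy would send (nothing is sent here)."""
    # Pass through only the common OpenAI chat-completion fields.
    allowed = ("model", "messages", "temperature", "max_tokens", "stream")
    return {
        "method": "POST",
        "url": GITHUB_INFERENCE_URL,
        "headers": {
            # The proxy authenticates upstream with its own GitHub token,
            # not with whatever key the OpenAI-side client sent.
            "Authorization": f"Bearer {github_token}",
            "Content-Type": "application/json",
        },
        "body": {k: v for k, v in openai_body.items() if k in allowed},
    }
```

Because unknown fields are dropped rather than forwarded, a client-supplied OpenAI key or vendor-specific parameter never reaches GitHub's API.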
How To Use
- Activate Your Workflow: Ensure this n8n workflow is active to enable its webhook endpoints.
- Create a Custom OpenAI Credential: Navigate to your n8n Credentials section and create a new OpenAI API credential.
- Configure Credential Details:
  - Credential Name: Set this to something descriptive, e.g., "GitHub Models Proxy".
  - API Key: Enter any placeholder value (e.g., 12345), as the API key itself won't be used by the proxy.
  - Base URL: Crucially, set this to your n8n instance's webhook URL for this workflow: https://<your_n8n_url>/webhook/github-models (replace <your_n8n_url> with your actual n8n host).
- Use Your Custom Credential: You can now use this newly created "GitHub Models Proxy" credential in any n8n LLM node or external application that allows configuring a custom OpenAI
Base URL. Requests will be seamlessly routed to GitHub Models via your n8n workflow.
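From the client's side, an OpenAI-compatible SDK simply appends the standard route suffixes to whatever Base URL you configure, so the steps above are all that is needed. A quick sketch of what that resolution looks like (here "example.n8n.host" is a placeholder for your actual n8n host):

```python
# With the credential above, OpenAI-style requests land on this
# workflow's webhook routes rather than api.openai.com.
base_url = "https://example.n8n.host/webhook/github-models"

models_endpoint = f"{base_url}/models"            # list-models route
chat_endpoint = f"{base_url}/chat/completions"    # chat-completions route

# The placeholder API key ("12345") still travels in the header, but the
# proxy ignores it and authenticates upstream with its own GitHub credential.
headers = {"Authorization": "Bearer 12345", "Content-Type": "application/json"}
```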
Apps Used
Workflow JSON
```json
{
  "id": "150f6005-0de0-40a3-a26b-4542772de040",
  "name": "Proxy GitHub Models As OpenAI API With n8n",
  "nodes": 25,
  "category": "DevOps",
  "status": "active",
  "version": "1.0.0"
}
```

Note: This is a sample preview. The full workflow JSON contains node configurations, credentials placeholders, and execution logic.
About the Author
SaaS_Connector
Integration Guru
Connecting CRM, Notion, and Slack to automate your life.
Related Workflows
Build a Custom OpenAI-Compatible LLM Proxy with n8n
This workflow transforms n8n into a powerful OpenAI-compatible API proxy, allowing you to centralize and customize how your applications interact with various Large Language Models. It enables a unified interface for diverse AI capabilities, including multimodal input handling and dynamic model routing.
Effortless Bug Reporting: Slack Slash Command to Linear Issue
Streamline your bug reporting process by instantly creating Linear issues directly from Slack using a simple slash command. This workflow enhances team collaboration by providing immediate feedback and a structured approach to logging defects, saving valuable time for development and QA teams.
Automated PR Merged QA Notifications
Streamline your QA process with this automated workflow that notifies your team upon successful Pull Request merges. Leverage AI and vector stores to enrich notifications and ensure seamless integration into your development pipeline.