This project is proudly developed and maintained by Wistron DXLab.
⚡Update: This is the new n8n-nodes-ai-agent-langfuse project, an upgraded version with Agent integration and enhanced structured tracing support.
npm package: https://www.npmjs.com/package/n8n-nodes-openai-langfuse
- Support for OpenAI-compatible chat models (e.g., `gpt-4.1-mini`, `gpt-4o`)
- Automatic Langfuse tracing for every request and response
- Custom metadata injection: `sessionId`, `userId`, and structured JSON
n8n is a fair-code licensed workflow automation platform.
- Installation
- Credentials
- Operations
- Compatibility
- Usage
- Resources
- Version history
Follow the installation guide in the official n8n documentation for community nodes.
For n8n v0.187+, install directly from the UI:
- Go to Settings → Community Nodes
- Click Install
- Enter `n8n-nodes-openai-langfuse` in the "Enter npm package name" field
- Agree to the risks of using community nodes
- Select Install
A preconfigured Docker setup is available in the `docker/` directory:

- Clone the repository and navigate to the `docker/` directory

  ```bash
  git clone https://github.com/rorubyy/n8n-nodes-openai-langfuse.git
  cd n8n-nodes-openai-langfuse/docker
  ```

- Build the Docker image

  ```bash
  docker build -t n8n-openai-langfuse .
  ```

- Run the container

  ```bash
  docker run -it -p 5678:5678 n8n-openai-langfuse
  ```

You can now access n8n at http://localhost:5678.
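If you want workflows and credentials to persist across container restarts, you can mount a local folder as the n8n data directory. A minimal sketch, assuming the image keeps n8n data under `/home/node/.n8n` like the official n8n image (adjust the path if the bundled Dockerfile differs):

```bash
# Persist workflows and credentials across container restarts
# (assumes the standard n8n data directory /home/node/.n8n inside the container)
docker run -it -p 5678:5678 \
  -v ~/.n8n:/home/node/.n8n \
  n8n-openai-langfuse
```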
For a standard installation without Docker:

```bash
# Go to your n8n installation directory
cd ~/.n8n

# Install the node
npm install n8n-nodes-openai-langfuse

# Restart n8n to apply the node
n8n start
```
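As an optional sanity check before restarting, you can confirm the package is present in the install directory (this is plain `npm`, nothing specific to this node):

```bash
# Optional: verify the package was installed into the current directory
npm ls n8n-nodes-openai-langfuse
```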
This credential is used to:
- Authenticate your OpenAI-compatible LLM endpoint
- Enable Langfuse tracing by sending structured request/response logs to your Langfuse instance
| Field Name | Description | Example |
|---|---|---|
| OpenAI API Key | Your API key for accessing the OpenAI-compatible endpoint | `sk-abc123...` |
| OpenAI Organization ID | (Optional) Your OpenAI organization ID, if required | `org-xyz789` |
| OpenAI Base URL | Full URL to your OpenAI-compatible endpoint | `https://api.openai.com/v1` (default) |
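If the node reports authentication errors, a quick way to sanity-check the key and base URL outside of n8n is to list the available models. This sketch targets the official endpoint; substitute your own base URL for other OpenAI-compatible backends:

```bash
# Verify the API key and base URL by listing models
# (replace the host with your own OpenAI-compatible base URL if needed)
curl -s https://api.openai.com/v1/models \
  -H "Authorization: Bearer $OPENAI_API_KEY"
```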
| Field Name | Description | Example |
|---|---|---|
| Langfuse Base URL | The base URL of your Langfuse instance | `https://cloud.langfuse.com` or your self-hosted URL |
| Langfuse Public Key * | Langfuse public key used for tracing authentication | `pk-xxx` |
| Langfuse Secret Key * | Langfuse secret key used for tracing authentication | `sk-xxx` |
🔑 How to find your Langfuse keys:
Log in to your Langfuse dashboard, then go to Settings → Projects → [Your Project] to retrieve your `publicKey` and `secretKey`.
Once filled out, your credential should look like this:
✅ After saving the credential, you're ready to use the node and see traces in your Langfuse dashboard.
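If traces do not appear after a test run, it is worth confirming that the Langfuse base URL is reachable from the machine running n8n. A minimal sketch using Langfuse's public health endpoint (swap in your self-hosted URL if you are not on Langfuse Cloud):

```bash
# Confirm the Langfuse instance is reachable
curl -s https://cloud.langfuse.com/api/public/health
```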
This node lets you inject Langfuse-compatible metadata into your OpenAI requests. You can trace every run with context such as `sessionId`, `userId`, and any custom metadata.
| Field | Type | Description |
|---|---|---|
| `sessionId` | string | Logical session ID to group related runs |
| `userId` | string | ID representing the end user making the request |
| `metadata` | object | Custom JSON object with additional context (e.g., `workflowId`, `env`) |
| Input Field | Example Value |
|---|---|
| Session ID | `{{$json.sessionId}}` |
| User ID | `test` |
| Custom Metadata (JSON) | `{ "project": "test-project", "env": "dev", "workflow": "main-flow" }` |
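After a test run, you can confirm the injected context actually reached Langfuse, either in the dashboard or via the public API. A rough sketch assuming Langfuse's trace-listing endpoint with Basic authentication (public key as username, secret key as password) and the example `test` user ID from the table above:

```bash
# List recent traces for the example user and inspect sessionId/userId/metadata
curl -s -u "pk-xxx:sk-xxx" \
  "https://cloud.langfuse.com/api/public/traces?userId=test&limit=5"
```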
- Node Configuration UI: How the Langfuse Chat Node is configured inside n8n.
- Workflow Setup: A typical workflow using this node.
- Langfuse Trace Output: How traces appear inside the Langfuse dashboard.
- Requires n8n version 1.0.0 or later
- Compatible with:
- OpenAI official API (https://api.openai.com)
- Any OpenAI-compatible LLM (e.g., via LiteLLM, LocalAI, Azure OpenAI; see the LiteLLM sketch below)
- Langfuse Cloud and self-hosted instances
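As an illustration of pointing the node at a non-OpenAI backend, here is a rough sketch using a local LiteLLM proxy; the CLI flags and default port are LiteLLM's own and may change, so treat this as an assumption to verify against the LiteLLM docs:

```bash
# Start a local OpenAI-compatible proxy (LiteLLM listens on port 4000 by default)
litellm --model gpt-4o

# Then set the credential's "OpenAI Base URL" to http://localhost:4000
# and supply whatever API key your proxy is configured to accept.
```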
- v1.0 – Initial release with OpenAI + Langfuse integration
MIT © 2025 Wistron DXLab