Segment for LLM Traces? Seeking Feedback on an Open Source LLM Log Router

Hey everyone, I’m considering starting a new open source project and wanted to see if anyone else thinks the idea could be useful. The concept is simple: an open source LLM log router that works like Segment, but specifically for LLM logs. It would let you route logs to different analytics, eval, monitoring, and data warehouse platforms (think LangFuse, Gentrace, Latitude, etc.) so you can leverage the strengths of each without maintaining separate integrations. I’ve run into a few recurring challenges when integrating multiple eval and monitoring tools:

Conflicting Integrations: Many tools use their own forks of popular packages (like the OpenAI SDK or LangChain), which often conflict—making it nearly impossible to use them together.

Inconsistent Prompt Templating: Different tools often require different prompt formats, which complicates switching between them or using several at once.

Data Migration Challenges: Moving logs between systems is a hassle. Testing a new tool often means generating new data or deploying changes in production, making it hard to evaluate whether switching is worthwhile.

While some solutions exist (such as LangChain and LiteLLM integrations with various eval platforms), they don’t fully address these issues—especially data migration and integrating with multiple tools at once. To solve this, I’m toying with the idea of a new open source project—a lightweight, self-hosted server that acts as a “Segment for LLM traces and logs”. Here are some of the features I envision.

Self-hosted logging server: Send LLM traces to a single endpoint without impacting your app’s performance.

Centralized aggregation and routing: Gather traces and logs on a single server, then forward them to any destination you choose (evals, analytics, monitoring, alerting, data warehouses, etc).
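To make the routing idea concrete, here's a minimal sketch of what the fan-out core might look like. Everything here (function names, the event shape, the sink names) is hypothetical, not a real API; the point is just that one failing destination shouldn't block delivery to the others:

```python
# Hypothetical sketch of the routing core: each incoming trace is
# fanned out to every configured destination ("sink"), and a failure
# in one destination is recorded without affecting the others.
def route(event: dict, sinks: dict) -> dict:
    """Send `event` to every sink; return per-sink delivery status."""
    results = {}
    for name, send in sinks.items():
        try:
            send(event)
            results[name] = "ok"
        except Exception as exc:  # isolate destination failures
            results[name] = f"error: {exc}"
    return results
```

In a real server the sinks would be HTTP clients for each platform, likely with retries and buffering, but the isolation property is the core of the idea.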

Lightweight framework integrations: Support for popular frameworks and SDKs (LangChain, LiteLLM, OpenAI SDK, LlamaIndex, etc.) with integrations that never block your event loop, so even if the logging server or a destination platform goes down, your app continues to function as expected.
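As a sketch of what "never block your event loop" could mean in practice, here's a hypothetical fire-and-forget client that queues events and ships them from a background daemon thread. The endpoint, queue size, and drop-on-overflow behavior are all assumptions for illustration, not a spec:

```python
import json
import queue
import threading
import urllib.request

class TraceClient:
    """Hypothetical non-blocking trace client: log() only enqueues;
    a daemon thread does the network I/O in the background."""

    def __init__(self, endpoint="http://localhost:8318/v1/traces", maxsize=10_000):
        self.endpoint = endpoint
        self.q = queue.Queue(maxsize=maxsize)
        threading.Thread(target=self._drain, daemon=True).start()

    def log(self, event: dict) -> None:
        try:
            self.q.put_nowait(event)  # drop on overflow rather than block
        except queue.Full:
            pass

    def _drain(self) -> None:
        while True:
            event = self.q.get()
            try:
                req = urllib.request.Request(
                    self.endpoint,
                    data=json.dumps(event).encode(),
                    headers={"Content-Type": "application/json"},
                )
                urllib.request.urlopen(req, timeout=2)
            except OSError:
                pass  # router unreachable: discard; the app keeps running
```

Dropping events under backpressure is a deliberate trade-off here: for observability data, losing a trace is usually better than stalling the request path.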

Easy configuration: A simple interface for managing data sources and destinations without making code changes or redeploying your application.
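One way "no code changes or redeploys" could work, purely as a sketch: destinations live in a small JSON file that the server re-reads at runtime. The file name, the `destinations` key, and the entry layout below are all made up for illustration:

```python
import json
import pathlib

# Hypothetical sketch: the server re-reads this config whenever it
# changes, so adding or removing a destination needs no redeploy.
def load_destinations(path: str) -> list:
    """Return the list of destination configs from a JSON file."""
    cfg = json.loads(pathlib.Path(path).read_text())
    return cfg.get("destinations", [])
```

A `router.json` might look like `{"destinations": [{"type": "webhook", "url": "https://example.com/hook"}]}`; the server could poll the file's mtime (or expose a small admin UI that writes it) and reload on change.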

Data portability: Store logs in your own database, with the option to re-export them to new tools later, so you avoid vendor lock-in.

Custom integrations: Webhooks to easily set up your own custom destinations.
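For custom webhook destinations, the router would presumably sign payloads so receivers can verify where they came from. Here's a sketch using a standard HMAC-SHA256 scheme; the `sha256=` header format mirrors common webhook conventions (e.g. GitHub's), but nothing here is a committed design:

```python
import hashlib
import hmac
import json

# Hypothetical payload signing for custom webhook destinations.
def sign_payload(secret: bytes, payload: dict):
    """Return (body, signature_header) for a webhook delivery."""
    body = json.dumps(payload, separators=(",", ":")).encode()
    sig = hmac.new(secret, body, hashlib.sha256).hexdigest()
    return body, f"sha256={sig}"
```

A receiving endpoint would recompute the HMAC over the raw body with the shared secret and compare it (with `hmac.compare_digest`) against the header before trusting the event.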

I’d love to hear if you’ve experienced any similar issues integrating with multiple LLM monitoring, eval, and analytics platforms. If so, how did you address them? Do you see value in an open source data router like this? Please share your thoughts in the comments, and if you’re interested in contributing or using a project like this in the future, it’d be super helpful if you could fill out this survey. https://yk1m5yevl9j.typeform.com/to/cQdxF6bN

Thanks in advance for your feedback!

10 points | by patethegreat 18 hours ago

2 comments

  • jtchang 12 hours ago
    Funny how this popped up as I was just talking to a friend about some of the challenges I've had with logging. Would definitely be interested in contributing to a project like this. Hit me up (email in profile).