35 comments

  • al_borland 10 hours ago
    I like the idea of a single chat with many models. Pre-AI-everything, I was already a Kagi user, so already paying for that. I've started using the Kagi Assistant[0] to solve this for myself. I pay $10/month, as I always did, and that's my credit limit to the various LLMs. They added an AI cost tracker to create transparency to those costs. So far this month I've used $0.88 of my $10.00 allotment. So I don't feel like I'm in any danger of going over. If I wasn't already paying for this, I'd be pretty interested in an option that was pay-as-you-go.

    Looking at your pricing, I find the credit model a bit confusing. It feels like credit card points, and I don't really have a concept of what that will get me. Tokens are a bit abstract, but that's the currency of AI, so it is what it is. Adding credits as an intermediary between tokens and dollars may have been done with the goal of simplifying things, but in my head it makes it harder to understand and leaves more places for hidden fees to hide.

    Giving some idea of how much usage someone could expect to get out of 1,000 tokens, or 100 credits, or $1 would be useful. I can do the math and see I can do 20 web searches for $1, but does that include follow-up questions? Is every question a web search? Kagi shows that I've used 15 searches so far today, and it's cost me less than 2¢ for almost 19k tokens. So I'm a bit confused.

    More generally, on the chat-only umbrella tools: I do miss some of the nice-to-have options of going directly with the big players (interactive code editors, better image generation, etc.), but not enough to be paying $20+/month/service.

    [0] https://kagi.com/assistant

    • metrix 10 hours ago
      I've been using openrouter.ai to use "all LLMs". No subscription, and it can be tied to your editor of choice.
      • pyman 9 hours ago
        For free? How's that possible when one AI prompt uses 10x more energy than a Google search [1]?

        [1] Source: https://kanoppi.co/search-engines-vs-ai-energy-consumption-c...

        • symboltoproc 9 hours ago
          10 Google searches are also free
          • pyman 8 hours ago
            You didn't click on the link I shared. I'm talking about the cost to produce the response, not the request. One AI prompt uses around 10 times more CPU and energy than a Google search.

            If ChatGPT handles 1 billion queries a day, that's like the energy cost of 10 billion Google searches every single day.

            Someone has to pay the electricity bill. We all know it's not free like you claim.

            • 112233 4 hours ago
              you also didn't click on the link the poster you replied to shared...

              Seconding OpenRouter and fal. Having to muck around with the idiosyncrasies of each vendor just to try their "bestest model", only to find out it does not satisfy your requirements, is a chore.

              • pyman 2 hours ago
                I'd stick with Google Search until Microsoft figures out how to handle a billion OpenAI requests a day without draining the water supply of entire cities. Because in Chile, for example, people are struggling.
  • mritchie712 10 hours ago
    > Couldn't find one, so I built one.

    there are no less than 100 of these.

    • exe34 10 hours ago
      They couldn't find one of their own. All those 100 others were built by others!
  • kwamenum86 10 hours ago
    LibreChat seems perfect for your use case. It's open source as well, and it's used by many of the big tech companies to solve the problem you're describing, so it's battle-tested: https://www.librechat.ai
  • fredwu 9 hours ago
    > Couldn't find one, so I built one.

    > What do you think?

    You were lost among all the AI stuff... but have you not tried to simply use Google to find a bunch of similar services?

  • woodylondon 10 hours ago
    Good idea. I also explored this a year ago and started building one, recognising a gap in the market for a solution that supports multiple LLMs but also provides small businesses with a centralised, managed AI client: billing, monitoring, logging, company prompts, etc.

    Ultimately, I discovered https://www.typingmind.com, which offers all of these features. I am sure there are others; I was amazed that more of these didn't come out. Might be worth seeing what they have built. The more of these that come out the better - it's a whole new market.

  • k9294 2 hours ago
    I'm a tech co-founder at loqus.ai. My team and I thought it was a great idea to make one app "to rule them all", but it turned out to be a bad idea.

    We decided to stop active development when we realized that most people don't want different models, custom instructions, tools, etc.

    People want their work done, and no one cares how exactly it gets done - the less they have to think about the setup, the better (and ChatGPT just works in most cases).

  • vinhnx 9 hours ago
    I also built VT (https://vtchat.io.vn), a secure, privacy-first AI chat platform focused on data sovereignty. It supports BYOK (Bring Your Own Key) models, OpenRouter integration, and local models through LM Studio and Ollama (currently in beta).
  • delduca 10 hours ago
    There is an open source alternative

    https://github.com/open-webui/open-webui

    • aae42 9 hours ago
      Anyone could run this themselves with Open WebUI and LiteLLM; in fact, that's the stack I've been using.
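
      A minimal sketch of the glue, assuming LiteLLM's Python SDK (the LiteLLM proxy exposes the same normalization behind an OpenAI-compatible endpoint that Open WebUI then points at; the model IDs here are illustrative):

        # Sketch only: LiteLLM maps one call shape onto many providers.
        # Provider keys come from the usual env vars (OPENAI_API_KEY, ANTHROPIC_API_KEY, ...).
        import litellm

        def ask(model: str, prompt: str) -> str:
            # litellm.completion mirrors the OpenAI chat-completions format for every backend
            response = litellm.completion(
                model=model,
                messages=[{"role": "user", "content": prompt}],
            )
            return response.choices[0].message.content

        print(ask("gpt-4o", "Summarize LiteLLM in one sentence."))
        print(ask("anthropic/claude-3-5-sonnet-20241022", "Same question, different backend."))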
  • bearjaws 9 hours ago
    Should have used ChatGPT deep research to prevent you from wasting time building the 41st version of this same product.

    Here's all the main competitors:

    1. You.com

    2. Poe

    3. Mammouth

    4. Magai

    5. TeamAI

    6. TypingHive

    7. WritingMate

    8. ChatHub

    9. Monica

  • Alifatisk 10 hours ago
    Isn't this what openrouter is for?
    • bob_theslob646 10 hours ago
      What's openrouter?
      • v5v3 10 hours ago
        https://openrouter.ai/

        edited to remove statement saying API only, as per comments.

        • metrix 10 hours ago
          They have a chat feature allowing you to talk with multiple models
        • boomskats 10 hours ago
          Not sure if you noticed, but the first CTA on the link you posted is a prompt to "Start a message", which then opens a multi-model capable chat interface.
        • kosolam 10 hours ago
          OpenRouter has a chat built in as well. Also, since LibreChat was mentioned, the self-hosted option I currently prefer is Open WebUI, connected to OpenRouter and Gemini here.
  • yellow_lead 11 hours ago
    There's also https://t3.chat/
  • covercash 10 hours ago
    Tangentially related, but is there any sort of chat bot that will look at the query and suggest which LLM might be the best for that particular task?
  • lemming 9 hours ago
    I'll bite: what is the web search story like? This is the killer feature of e.g. ChatGPT that none of the alternative or OSS options offer. Having the search be fast (i.e. not round-tripping to the client on every search) and integrated into the thinking process is unbeatable; I use it constantly. The big API providers all offer search options in their APIs now, but they're very quirky: OpenAI doesn't allow search via their API with thinking models, Gemini doesn't let you use search with any other tools enabled, and Claude's just doesn't seem to work that well, even in their own web UI. I even paid for TypingMind, which is nice, but I never use it and always just end up paying for ChatGPT again because of this.
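
    For reference, provider-side search through the API looks roughly like this, a sketch assuming OpenAI's Responses API and its built-in web search tool (tool names and which models accept them change often, which is exactly the quirkiness above):

      # Sketch only: server-side web search via OpenAI's Responses API.
      # Assumes OPENAI_API_KEY is set; tool and model availability varies.
      from openai import OpenAI

      client = OpenAI()

      response = client.responses.create(
          model="gpt-4o",                          # search isn't offered with every model
          tools=[{"type": "web_search_preview"}],  # the provider runs the search server-side
          input="What did the major LLM vendors announce this week?",
      )

      # output_text collapses the response items into the final answer text
      print(response.output_text)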
  • intellectronica 9 hours ago
    Cool and hats off to you for building an app ... but how come you couldn't find one?! There are so many. I use Raycast, which has access to all the models, there's Poe (from Quora) which is popular, Perplexity offers access to many models, and there are many more...
  • edmundsauto 4 hours ago
    AnythingLLM is great for this. It even lets you set up RAG from your own set of docs, which it then retrieves via embeddings and adds as context (roughly the flow sketched below).

    I use that + OpenRouter which gives me API access to more models as well. Huge fan of this approach.
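
    The retrieval step, very roughly (not AnythingLLM's actual implementation, just the embed, rank, and prepend pattern; the embeddings endpoint and model names are stand-ins):

      # Sketch only: embed docs, rank by cosine similarity, prepend the best hits as context.
      import numpy as np
      from openai import OpenAI

      client = OpenAI()  # could just as well point at OpenRouter or a local server

      docs = [
          "Invoices are due 30 days after receipt.",
          "The staging cluster lives in eu-west-1.",
          "On-call handover happens every Monday at 10:00.",
      ]

      def embed(texts):
          out = client.embeddings.create(model="text-embedding-3-small", input=texts)
          return np.array([item.embedding for item in out.data])

      doc_vecs = embed(docs)

      def retrieve(question, k=2):
          q = embed([question])[0]
          scores = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q))
          return [docs[i] for i in np.argsort(scores)[::-1][:k]]

      question = "Where does the staging environment run?"
      context = "\n".join(retrieve(question))
      answer = client.chat.completions.create(
          model="gpt-4o-mini",
          messages=[{"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"}],
      )
      print(answer.choices[0].message.content)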

  • swores 9 hours ago
    Question to people talking about the various alternatives that already exist for this: does anyone know if there's something like OpenRouter that's open source and that, either as the only interface or preferably as an optional alternative to a web interface, lets you use a standard non-AI chat app (ideally Signal, more likely WhatsApp) as the interface?

    Edit: I'd still be grateful for a reply with any recommendations or other options, but ChatGPT has given me a few things to look into when I'm at my PC - https://chatgpt.com/share/6873a9b5-ea8c-800c-b111-96b5f27a09...

  • lvl155 10 hours ago
    I think in theory this type of approach is good, but you know what's going to happen eventually. Companies like OpenAI are already gatekeeping their best models. You need to pay and tier up on their platform to even use them. There's no free lunch in the AI landscape.
  • gcanyon 9 hours ago
    "If you're on the free plan, there's an 19.8% platform fee"

    I think you mean the "pay as you go" plan? If not, that's pretty confusing, and 19.8% of "free" should still be "free" :-)

  • shinycode 10 hours ago
    I think this does the same as well https://mammouth.ai/
  • gtech1 8 hours ago
    Off-topic: is anyone aware of a service where I could plug in my API keys for OpenAI/Gemini/Claude, and it asks the same question to all three and refines the answer by using one of them as the final arbiter?
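
    If nothing turns up, the fan-out-plus-arbiter part looks small enough to script; a rough sketch of what I mean, using OpenRouter's OpenAI-compatible endpoint so one key covers all three vendors (the model slugs are illustrative):

      # Sketch only: send one question to three models, then let one model arbitrate.
      import os
      from openai import OpenAI

      client = OpenAI(
          base_url="https://openrouter.ai/api/v1",
          api_key=os.environ["OPENROUTER_API_KEY"],
      )

      CANDIDATES = ["openai/gpt-4o", "google/gemini-2.0-flash-001", "anthropic/claude-3.5-sonnet"]
      ARBITER = "anthropic/claude-3.5-sonnet"

      def ask(model, prompt):
          r = client.chat.completions.create(
              model=model,
              messages=[{"role": "user", "content": prompt}],
          )
          return r.choices[0].message.content

      def ask_all_and_refine(question):
          drafts = {m: ask(m, question) for m in CANDIDATES}
          merged = "\n\n".join(f"[{m}]\n{a}" for m, a in drafts.items())
          return ask(
              ARBITER,
              "Three models answered the same question. Reconcile their answers into one "
              f"best answer, noting any disagreements.\n\nQuestion: {question}\n\n{merged}",
          )

      print(ask_all_and_refine("Is a tomato a fruit or a vegetable?"))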
  • manx 10 hours ago
    I also created my own. Frontend-only, no signup, based on OpenRouter, written in Rust + Leptos (WASM):

    https://github.com/fdietze/tomatic

    https://tomatic.app

  • supriyo-biswas 10 hours ago
    I believe there are a couple of similar apps, like https://msty.app and https://jan.ai, that do the same and allow you to plug in your own API keys.
  • KMnO4 9 hours ago
    The pricing doesn’t render properly on mobile. The responsive design puts what is presumably a table into a vertical stack, which means I have no idea which price corresponds to which feature.
  • parsimo2010 10 hours ago
    If you truly couldn’t find anything then you clearly don’t know how to use a web search. Poe.com brings many models under one subscription, and they have you completely beat on features and the number of models they offer. They are owned by Quora, so have a decently large group backing them. You.com lets you use multiple models, and also offers more features and models than you. Perplexity lets you use multiple models to chat. Merlin.ai lets you use multiple models. The list goes on, but there are a variety of established players in this space.

    It looks like the only thing you offer over them is a "pay-as-you-go" option, where they are only subscription-based. You kind of cheapen your differentiating factor by also offering a subscription. You need to show how you're different from the competitors in this space, otherwise your growth will be very slow. You're competing against OpenAI and Anthropic, who are trying to sell their chat interfaces, along with the other aggregator websites, who have been around longer and have been developing features and integrating models from various providers this whole time. Do you think your pricing model will be enough, or do you have some killer features planned?

    • stoken 9 hours ago
      > If you truly couldn’t find anything then...

      ...you are clearly an engineer who has already decided to write their own, so wasn't looking too hard, or did find something but went "I could do this better". Pretty much how many projects start.

      • parsimo2010 9 hours ago
        OP said, in their post, they “Couldn't find one”

        It’s totally fine to build your own implementation of something, especially if it’s for personal use or you’re not charging. OP is pitching a paid product. It’s not okay to ignore the sea of competitors and pitch your product on HN with a marketing blurb that isn’t true.

        Anybody can slap together a chat UI and integrate a few LLM APIs. We need more than that if you’re charging money.

  • stavros 9 hours ago
    I use Librechat and OpenWebUI for that, but lately I've gotten a Claude subscription because A) deep research is amazing, B) their "canvas" UI is great, and C) Claude Code.
  • deadbabe 9 hours ago
    In the age of LLMs, why use something someone else built when an LLM could just build you your own!
  • warthog 10 hours ago
    Just tried it but the messages keep disappearing after submission and streaming.
  • iLoveOncall 10 hours ago
    You should integrate with AWS Bedrock; you'd be able to offer 60 models instantly without having to spend any time integrating with separate APIs: https://docs.aws.amazon.com/bedrock/latest/userguide/models-...
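
    For what it's worth, Bedrock's Converse API already normalizes the request shape across those models, so the per-model work mostly disappears. A rough sketch with boto3 (the model IDs and region are illustrative, and each model has to be enabled for the account first):

      # Sketch only: one request shape for every Bedrock-hosted model via the Converse API.
      # Assumes AWS credentials are configured and model access is enabled in the account.
      import boto3

      bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

      def ask(model_id: str, prompt: str) -> str:
          response = bedrock.converse(
              modelId=model_id,
              messages=[{"role": "user", "content": [{"text": prompt}]}],
              inferenceConfig={"maxTokens": 512},
          )
          return response["output"]["message"]["content"][0]["text"]

      # Same call, different vendors; only the model ID changes.
      print(ask("anthropic.claude-3-5-sonnet-20240620-v1:0", "One sentence on Bedrock."))
      print(ask("meta.llama3-1-70b-instruct-v1:0", "Same question, different model."))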
  • fl0id 10 hours ago
    This already exists with things like Poe - I mean for normal users, with a subscription for multiple models. The same goes for frontends for API keys.
  • PetrBrzyBrzek 9 hours ago
    > Couldn't find one, so I built one.

    I’m sorry, but I find it hard to believe that you didn’t find any. I personally know at least 5 services that offer this.

  • Simulacra 11 hours ago
    One improvement that would be interesting, if it's not in there already, is to compare the answers from the different LLMs and then combine them into the highest-probability statement.
  • john_the_writer 10 hours ago
    Reminds me of the xkcd https://xkcd.com/927/