Ask HN: What's a standard way for apps to request text completion as a service?

If I'm writing a new lightweight application that requires LLM-based text completion to power a feature, is there a standard way to ask the user's operating system to provide a completion?

For instance, imagine I'm writing a small TUI for browsing JSONL files and want to add a natural-language query feature. Is there an emerging standard for an implementation-agnostic "Translate this natural query to jq {natlang-query}: response here: "?
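
To make it concrete, here's a sketch of what I'm imagining, where `oscomplete` is a made-up name for whatever completion binary the OS would provide:

    # entirely hypothetical: `oscomplete` stands in for an OS-provided
    # completion service that reads a prompt on stdin and prints a completion
    QUERY="all users older than 30"
    JQ_PROGRAM=$(printf 'Translate this natural query to jq %s: response here: ' "$QUERY" | oscomplete)
    jq "$JQ_PROGRAM" data.jsonl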

If we don't have this yet, what would it take to get this built and broadly available?

47 points | by nvader 5 days ago


  • lcian 13 hours ago
    When I'm writing a script that requires some kind of call to an LLM, I use this: https://github.com/simonw/llm.

This is of course cross-platform and works both with models accessible through an API and with local ones.

I'm afraid this might not solve your problem, though, as it's not an out-of-the-box solution: it requires the user either to provide their own API key or to install Ollama and wire it up on their own.
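
For the jq example above, the shape of the call would be something like this (the prompt wording and file names are just illustrative; the model is whatever the user has configured as their default):

      # ask the user's configured default model for a jq program, then run it
      QUERY="all users older than 30"
      JQ_PROGRAM=$(llm "Translate this natural query to a jq program, reply with only the program: $QUERY")
      jq "$JQ_PROGRAM" users.jsonl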

    • kristopolous 11 hours ago
      I've been working on a more unixy version of his tool I call llcat. Composable, stateless, agnostic, and generic:

      https://github.com/day50-dev/llcat

      It might help things get closer.

      It's under two days old and it's already fundamentally changing how I do things.

      Also, for running at the edge, look into the LFM 2.5 class of models: https://huggingface.co/LiquidAI/LFM2.5-1.2B-Instruct

      • mirror_neuron 10 hours ago
        I love this concept. Looks great, I will definitely check it out.
        • kristopolous 1 hour ago
          Please use it and give me feedback. I'm going to give a lightning talk on it tonight at sfvlug.
    • nvader 11 hours ago
      I think this is definitely a step in the right direction, and is exactly the kind of answer I was looking for. Thank you!

      `llm` gives my tool a standard bin to call for completions, while configuring and managing it remains the user's responsibility.

      If more tools started expecting something like this, it could become a de facto standard. Then maybe the OS would begin to provide it.
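
      In the meantime a tool can probe for the bin and degrade gracefully; a rough sketch of the convention I have in mind ($QUERY stands for the user's natural-language request):

        # sketch: treat an `llm` binary on $PATH as the completion provider
        if command -v llm >/dev/null 2>&1; then
            llm "Translate this natural query to jq: $QUERY"
        else
            echo "natural language queries unavailable: no 'llm' on PATH" >&2
        fi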

  • WilcoKruijer 13 hours ago
    MCP has a feature called sampling which does this, but it might not be too useful for your context. [0]

    In a project I’m working on, I simply present some data and a prompt; the user can then pipe this into an LLM CLI such as Claude Code.
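
    Roughly like this (`mytool export-context` is illustrative; `claude -p` runs Claude Code non-interactively on whatever arrives on stdin):

      # illustrative: emit the data plus a prompt, let the user pick the CLI
      mytool export-context data.jsonl | claude -p "turn my question into a jq program"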

    [0] https://modelcontextprotocol.io/specification/2025-06-18/cli...

    • brumar 12 hours ago
      Sampling seemed so promising, but do we know of any MCP servers that have managed to leverage this feature successfully?
  • netsharc 11 hours ago
    That's interesting. On Linux there's the $EDITOR variable for the terminal text editor (a quick check of three distros, Arch, Ubuntu, and Fedora, shows they respect it).

    Maybe you can trailblaze and tell users your application will support an $LLM or $LLM_AUTOCOMPLETE variable (convene a naming committee for better names).
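
    A minimal sketch of how a tool could honour it, assuming the variable names a command that reads a prompt on stdin and writes a completion to stdout:

      # hypothetical $LLM convention, analogous to $EDITOR;
      # falls back to the `llm` CLI when the variable is unset
      printf '%s' "Translate this natural query to jq: $QUERY" | "${LLM:-llm}"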

  • billylo 4 days ago
    Windows and macOS do come with small models for generating text completions. You can write a wrapper for your own TUI to access them in a platform-agnostic way.

    For consistent LLM behaviour, you can use the Ollama API with your model of choice: https://docs.ollama.com/api/generate
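
    A minimal call against that endpoint looks like this (assuming Ollama is running locally and the model has already been pulled):

      # Ollama's generate endpoint; the model name is whatever you've pulled
      curl -s http://localhost:11434/api/generate -d '{
        "model": "llama3.2",
        "prompt": "Translate this natural query to jq: all users older than 30",
        "stream": false
      }'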

    Chrome has a built-in Gemini Nano too, but there isn't an official way to use it outside Chrome yet.

  • jiehong 9 hours ago
    This might work through an LSP server?

    It’s not exactly the intended use case, but it could be coerced to do that.

    I’ve seen something else like that, though: voice transcription software that has access to the context the text is in, and can interact with it and modify it.

    Like how some people use Superwhisper modes [0] to perform actions with their voice in any app.

    It works because you can say "rewrite this text, and answer the questions it asks"; the dictation app first transcribes this to text, extracts the whole text from the focused app, sends both to an AI model, gets an answer back, and pastes the output.
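
    In shell terms the flow is roughly this (the helper commands are stand-ins, not real tools):

      # hypothetical pipeline mirroring the dictation flow described above
      INSTRUCTION=$(transcribe-audio recording.wav)  # the spoken request
      CONTEXT=$(read-focused-app-text)               # text from the focused app
      printf 'Context:\n%s\n\nInstruction: %s\n' "$CONTEXT" "$INSTRUCTION" \
        | llm | paste-into-focused-app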

    [0]: https://superwhisper.com/docs/common-issues/context

  • cjonas 11 hours ago
    I asked a similar question a while back and didn't get any response. Some type of service is needed for applications that want to be AI-enabled but don't want to deal with the usage-based pricing that comes with it. Right now the only option is for the user to provide a token/endpoint from one of the services. This is fine for local apps, but less ideal for web apps.
  • Sevii 10 hours ago
    Small models are getting good, but I don't think they are quite there yet for this use case. For OK results we are looking at 12-14 GB of VRAM committed to models to make this happen. My MacBook with 24 GB of total RAM runs fine with a 14B model loaded, but I don't think most people have quite enough RAM yet. Still, I think it's something we are going to need.

    We are also going to want the opposite. A way for an LLM to request tool calls so that it can drive an arbitrary application. MCP exists, but it expects you to preregister all your MCP servers. I am not sure how well preregistering would work at the scale of every application on your PC.

  • joshribakoff 11 hours ago
    I have been using an open source program, “Handy”: a cross-platform Rust/Tauri app that does speech recognition and handles inputting text into programs. It works by piggybacking off the OS's text-input or copy-and-paste features.

    You could fork this and shell out to an LLM before finally pasting the response.

  • tpae 9 hours ago
    You can check out my project here: https://github.com/dinoki-ai/osaurus

    I'm focused on building it for the macOS ecosystem.

  • TZubiri 10 hours ago
    Not at all natural language, but Linux has readline for exact character matches; it's what powers tab completion on the command line.

    Maybe it can be repurposed for natural language in a specific implementation.