This post shows “concept algebra” on language models: inject, suppress, and compose human-understandable concepts at inference time (no retraining, no prompt engineering).
There’s an interactive demo on the post.
Would love feedback on:
(1) what steering tasks you’d benchmark,
(2) failure cases you’d want to see,
(3) whether this kind of compositional control is useful in real products.
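For readers unfamiliar with the idea: one common way this kind of concept algebra is realized is additive activation steering, where a concept corresponds to a direction in the model's hidden-state space. The sketch below is a toy NumPy illustration of that general technique, not the implementation behind this post; the concept directions and the `strength` parameter are made up for the example.

```python
import numpy as np

def steer(hidden, inject=(), suppress=(), strength=4.0):
    """Toy concept algebra on a single hidden-state vector:
    add each injected concept direction, and project out the
    component along each suppressed direction."""
    h = hidden.astype(float).copy()
    for v in inject:
        v = v / np.linalg.norm(v)
        h = h + strength * v            # push toward the concept
    for v in suppress:
        v = v / np.linalg.norm(v)
        h = h - (h @ v) * v             # remove the concept's component
    return h

# Toy 2-D example with axis-aligned "concept" directions.
formal = np.array([1.0, 0.0])
pirate = np.array([0.0, 1.0])
h = np.array([0.5, 2.0])
h2 = steer(h, inject=[formal], suppress=[pirate])
print(h2 @ pirate)  # 0.0 -- the suppressed component is gone
```

In a real model the same arithmetic would be applied to residual-stream activations at selected layers via forward hooks, with concept directions learned from data rather than hand-picked.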
Hi! Have you published the concept dictionary yet? I’m looking into using Steerling to investigate how different moral scenarios elicit various responses in LLMs (using Haidt MFT concepts mostly), and my first few inference runs have been hamstrung by not having a canonical mapping of concepts to IDs. Thanks!
I would personally like some quantification of how good this is compared to just replacing the system prompt of an off-the-shelf 8B-parameter language model.
The suppression bit is very powerful. I would like to see a quantification of how often a prompt-steered 'normal' language model mentions things you asked it to suppress vs. how often this one does.
We will share a technical write-up soon that addresses both of your questions: (1) steering vs. prompt engineering, and (2) how effectively our steering suppresses undesired generations.
If you have joined our waitlist, we will notify you as soon as it is available.
We haven’t benchmarked our steering for function calling in an agent loop yet (and the model we are using is just a base model), so I can’t make a quantitative claim. But concept-based steering should be a good fit for keeping an agent on task and enforcing behavioral guardrails around tool use.
In practice, you can treat concepts as soft/hard constraints to bias the agent toward: (1) calling tools only when needed, (2) selecting the right tool/function, or (3) using the correct argument schema.
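To make the idea above concrete, here is a hypothetical sketch of what declaring concepts as weighted constraints for an agent loop could look like. Steerling's actual API is not public, so every name here (`Concept`, the concept identifiers, the weight convention) is illustrative only.

```python
from dataclasses import dataclass

@dataclass
class Concept:
    """A steering constraint: positive weight injects the concept,
    negative weight suppresses it; a 'hard' constraint would just be
    a large magnitude. All names are hypothetical."""
    name: str
    weight: float

def agent_constraints(task: str) -> list:
    # Bias the agent toward correct tool use for the given task.
    return [
        Concept("call_tools_only_when_needed", weight=2.0),
        Concept("prefer_declared_argument_schema", weight=1.5),
        Concept("off_topic_digression", weight=-3.0),  # suppressed
    ]

cons = agent_constraints("book a flight")
print([c.name for c in cons if c.weight < 0])  # ['off_topic_digression']
```

The appeal over prompt-based guardrails is that these constraints would apply at every decoding step rather than competing for attention with the rest of the context.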
Related: https://news.ycombinator.com/item?id=47131225
We haven’t published the concept dictionary yet.
We plan to release it soon, along with other important artifacts.
If you have joined our waitlist, we will notify you as soon as it is available.