A database of rewrite rules alone doesn't make much sense, because the semantic part is missing (what does a rule mean, and why is it correct?), so you will run into inconsistencies quickly as the database grows.
But if you take the idea of a universal system for rewrite rules seriously, and add just enough semantics to make it work as a logic, you might arrive at abstraction logic [1]. That is actually a great way of explaining abstraction logic to computer scientists, come to think of it!

[1] http://abstractionlogic.com
I don't always care about consistency between rule sets. Depends what I'm trying to do.
The question at hand is finding motivation and benchmarks for different approaches and engines for equational reasoning / rewriting. I sometimes lose sight of motivations and get discouraged. Doing associativity, commutativity, and constant fusion for the nth time, I start to worry it's all useless.
But to be a bit more specific, I've been involved in the egraphs community https://github.com/philzook58/awesome-egraphs and we don't currently have a shared database of rewrite rules for benchmarking, nor do we collect the rules from different projects. Seeing the actual files is helpful.
On a slightly different front, I'm also trying to collect rules and interesting theories in my Python interactive theorem prover Knuckledragger https://github.com/philzook58/knuckledragger . Rewrite-rule or equational-theory-looking things are easier to work with than things with deeply nested quantifiers or tons of typeclass / mathematical abstraction, like what you find when you try to translate out of Coq, Lean, or Isabelle.
There are also different approaches to declarative rewrite rules in major compilers. GCC, LLVM, and Cranelift each have their own declarative rewrite-rule systems for describing instruction selection and peephole optimizations, alongside, of course, lots of code that does this programmatically. I also want to collate these sorts of things. Working on fun, clean systems while not confronting the ugly realities of the real and useful world ultimately feels empty. Science is about observing and systematizing. Computer science ought to include occasionally observing what people actually do or need.
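To make the "rules as data" idea concrete, here is a minimal sketch in plain Python of what these declarative systems roughly boil down to: rules are (pattern, replacement) pairs over terms, and a small matcher applies them. The term encoding, rule set, and function names are mine for illustration; this is not the syntax of egg, egglog, GCC's match.pd, or Cranelift's ISLE.

```python
# Minimal illustrative rewriter: terms are nested tuples like ("+", ("num", 1), "a"),
# pattern variables are strings starting with "?", rules are (lhs, rhs) pairs.

RULES = [
    (("+", ("+", "?x", "?y"), "?z"), ("+", "?x", ("+", "?y", "?z"))),   # associativity
    (("+", "?x", "?y"), ("+", "?y", "?x")),                             # commutativity
]

def match(pat, term, subst):
    """Extend subst so that pat matches term, or return None."""
    if isinstance(pat, str) and pat.startswith("?"):
        if pat in subst:
            return subst if subst[pat] == term else None
        return {**subst, pat: term}
    if isinstance(pat, tuple) and isinstance(term, tuple) and len(pat) == len(term):
        for p, t in zip(pat, term):
            subst = match(p, t, subst)
            if subst is None:
                return None
        return subst
    return subst if pat == term else None

def instantiate(pat, subst):
    """Build the replacement term by plugging bound variables into the rhs."""
    if isinstance(pat, str) and pat.startswith("?"):
        return subst[pat]
    if isinstance(pat, tuple):
        return tuple(instantiate(p, subst) for p in pat)
    return pat

def rewrite_root(term):
    """Apply constant fusion or the first matching rule at the root of the term."""
    if (isinstance(term, tuple) and len(term) == 3 and term[0] == "+"
            and all(isinstance(a, tuple) and a[0] == "num" for a in term[1:])):
        return ("num", term[1][1] + term[2][1])                          # constant fusion
    for lhs, rhs in RULES:
        s = match(lhs, term, {})
        if s is not None:
            return instantiate(rhs, s)
    return term

print(rewrite_root(("+", ("num", 1), ("num", 2))))   # ("num", 3)
print(rewrite_root(("+", ("+", "a", "b"), "c")))     # ("+", "a", ("+", "b", "c"))
print(rewrite_root(("+", "a", "b")))                  # ("+", "b", "a")
```

Real systems differ mostly in how rules are scheduled (greedy destructive rewriting vs. e-graph saturation) and in what side conditions and cost models they allow, but the rule data itself looks much like this.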
Me too. I went DEEP into mod_rewrite, mod_proxy and friends in the early 2000s. It's a dangerous place to put too much logic, but also incredibly powerful. On multiple occasions, I used it to solve problems in a couple of hours that would've taken the engineering team weeks or months to address.
I'm curious whether there is a useful connection between Apache's rules and other term rewriting. Some sort of static analysis? If there is an interesting database of them, I'd add it.
In general, for any invertible function f (or partially invertible function, if we know the input is in the right subset), we need to recognize a rule f⁻¹(f(x)) === x; note that this usually does not apply when floats are involved.
One interesting case for `f` is the Gray code conversion, whose inverse requires a (finite) loop. This can be generalized to shifts other than 1; this is common in components of RNGs.
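To make that concrete, here is a small sketch (plain Python; the names are mine) of the k = 1 case and its generalization: the forward map is a single shift-and-XOR, and the inverse is the finite loop mentioned above, in which the shifted copies telescope away.

```python
# Illustrative sketch (my names): x ^ (x >> k) is invertible on non-negative ints,
# but the inverse needs a (finite) loop. k = 1 is the binary -> Gray code map.

def to_gray(x: int, k: int = 1) -> int:
    """Forward map: a single shift and XOR (Gray code when k == 1)."""
    return x ^ (x >> k)

def from_gray(y: int, k: int = 1) -> int:
    """Inverse: XOR together y >> 0, y >> k, y >> 2k, ...; the shifted terms
    telescope back to x. Terminates because y loses k bits per iteration."""
    x = 0
    while y:
        x ^= y
        y >>= k
    return x

# The f_inverse(f(x)) === x rule holds for every non-negative x and k >= 1:
for k in (1, 3, 7):
    assert all(from_gray(to_gray(x, k), k) == x for x in range(10_000))
```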
I have been thinking about implementation-oriented rewrite rules that, for example, express that you can add two numbers smaller than 2^(2n) with three addition operations (and some other operations) on numbers smaller than 2^n, or with two addition operations that each take an additional input smaller than 2 (usually called a carry in hardware implementations).
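A rough sketch of that rule in Python (names and interfaces are mine, just to pin down the arithmetic): the wide addition is rewritten either into three narrow additions plus shifts and masks, or into two add-with-carry steps.

```python
# Sketch of the rule in the comment (names are mine): one addition of numbers
# below 2**(2*n) rewritten into narrower operations on numbers below 2**n.

def add_wide_3adds(a: int, b: int, n: int) -> int:
    """Three narrow additions (low halves, high halves, carried-out bit) plus shifts/masks."""
    mask = (1 << n) - 1
    al, ah = a & mask, a >> n
    bl, bh = b & mask, b >> n
    lo = al + bl                    # addition 1 (may spill one bit past position n-1)
    hi = (ah + bh) + (lo >> n)      # additions 2 and 3 (fold the carried-out bit back in)
    return (hi << n) | (lo & mask)

def adc(x: int, y: int, carry_in: int, n: int):
    """n-bit add-with-carry: returns (n-bit sum, carry-out); carry_in is 0 or 1."""
    s = x + y + carry_in
    return s & ((1 << n) - 1), s >> n

def add_wide_2adcs(a: int, b: int, n: int) -> int:
    """Two add-with-carry steps, the way chained hardware adders do it."""
    mask = (1 << n) - 1
    lo, c = adc(a & mask, b & mask, 0, n)
    hi, c2 = adc(a >> n, b >> n, c, n)
    return (c2 << (2 * n)) | (hi << n) | lo

n = 8
for a in (0, 1, 200, 65535):
    for b in (0, 7, 255, 65535):
        assert add_wide_3adds(a, b, n) == add_wide_2adcs(a, b, n) == a + b
```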
this isn't deep - there's a reason why ZFC stands out amongst all possible sets of "foundations" - it's because in lieu of an analytic cost function (which isn't possible) we have consensus.
i don't know what you're trying to say. i'm saying that "all" the rewrite rules don't matter; only the "interesting" ones matter, and interesting is a matter of opinion/taste/cost.
1. Joseph Goguen's OBJ and Maude: algebras on a rewrite engine
https://cseweb.ucsd.edu/~goguen/sys/obj.html
2. SPECWARE: a Category Theory approach from SRI Kestrel Institute
https://www.specware.org/research/specware/
https://github.com/KestrelInstitute/Specware/blob/master/Lib...
https://fosdem.org/2025/schedule/event/fosdem-2025-6839-ligh...
I can point you to bits of config (it's a bit scattered in my blogs at the moment) like this:
https://www.earth.org.uk/note-on-site-technicals-83.html#202...
Also see the main working page:
https://www.earth.org.uk/RSS-efficiency.html
https://en.wikipedia.org/wiki/Inverse_function
https://en.wikipedia.org/wiki/Gray_code
1: use rust
x = x + x - x
x = x + x - x + x - x
...
you get the point
For a terminating TRS it's decidable whether it is confluent (check that all critical pairs are joinable), i.e. whether every term has exactly one normal form. Or what do you mean?
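As a toy illustration (string rewriting rather than term rewriting, and not a decision procedure): when overlapping left-hand sides are not rejoined, the same term can reach two different normal forms, which is exactly what confluence rules out. The helper below and its rule set are made up for the example.

```python
# Toy illustration: a non-confluent system sends one term to two normal forms.

def normal_form(s: str, rules, order):
    """Apply rules (leftmost occurrence, in the given preference order) until none applies."""
    changed = True
    while changed:
        changed = False
        for lhs, rhs in (rules[i] for i in order):
            if lhs in s:
                s = s.replace(lhs, rhs, 1)
                changed = True
                break
    return s

rules = [("ab", "a"), ("ab", "b")]              # overlapping left-hand sides: a critical pair
print(normal_form("ab", rules, order=[0, 1]))   # -> "a"
print(normal_form("ab", rules, order=[1, 0]))   # -> "b"
# Two distinct normal forms from the same start term, so this system is not confluent.
# Knuth-Bendix-style checks decide confluence for terminating systems by testing
# whether all such critical pairs rejoin.
```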
What you're saying is akin to "not all functions are interesting". Well, yes...
But that's true for any model of computation.