It's nice that people are taking this up, and one of the main benefits of open source in the first place. I have my doubts that this will succeed if it's just one guy, but maybe it takes on new life this way and I would never discourage people from trying to add value to this world.
That said, I have an increasingly strong distaste for these AI-generated articles. They are long and tedious to read, and they make me doubt whether anything written there is actually true. I much prefer a worse-written but to-the-point article.
I agree completely. I know everyone is tired of AI accusations but this article has all of the telltale signs of LLM writing over and over again.
It’s not encouraging for the future of a project when the maintainer can’t even announce it without having AI do the work.
It would be great if this turns into a high effort, carefully maintained fork. At the moment I’m highly skeptical of new forks from maintainers who are keen on using a lot of AI.
I just get my agent to read them for me and present a few options for comments as derived from the vibes of any existing comments. If I time out, it posts a random option, then at the end of the week I get it to summarise all the content I (royal) read and distill it into a takeaways note in my (royal) journal. It's been a huge productivity boost. Whenever I think I might want to think about something, I just ask the agent to find a topic I (royal) read within some timeframe and have it synthesise a few new dot points in my (royal) journal. I'm hoping to reach 10,000 salient points by the end of the year.
I have nothing against a skilled maintainer with attention to detail using AI tools for assistance.
The important part is the human who will do more than just try to get the LLM to do the hard work for them, though. Once software matures the bugs and edge cases become more obscure and require more thoughtful input. AI is great at getting things to some high percentage of completeness, but it takes a skilled human to keep it all moving in the right direction.
I would cite this blog post as an example of lazy LLM use: It's over-dramatic, long, retains all of the poor LLM output styling that most human editors remove, and suggests that the maintainer isn't afraid to outsource everything to the LLM.
> it really makes me doubt that what is written there is actually true at all
Indeed, the whole "Ironically, switching from Apache 2.0 to AGPL irrevocably makes the project forkable" section seems misguided. Apache 2.0-licensed software is just as forkable.
I'll plug that Chainguard has been maintaining a fork for a while and seems to have a history of supporting forks like this: https://github.com/chainguard-forks/minio
I switched to rustfs this week though and am not looking back. I'd recommend it to others as well for small-scale usage. It's maturing rapidly and seems promising.
> MinIO as an S3-compatible object store is already feature-complete. It’s finished software.
I don't see how these two lines can be written together.
The goal is either to remain S3-compatible or to freeze the current interface of the service forever.
As it stands this fork's compatibility with S3, and with the official MinIO itself, will break as soon as one of them pushes an API update. Which works fine for existing users, maybe, but over time as the projects drift further apart no new ones will be able to onboard.
The S3 API is quite stable and most new features are opt-in (e.g. ApplyIfModified) or auxiliary (e.g. S3Tables). It’s highly unlikely that S3 proper will break backwards compatibility for clients with any future API change. So if all you need is basic object storage that works with existing S3 clients, then MinIO is enough. The fork just needs to keep CVEs patched and maintain community hygiene (accept new PRs for small bug fixes, etc.). And as the author points out, this is much easier in the age of AI than it might have been previously.
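To make the "works with existing S3 clients" point concrete: compatibility mostly comes down to serving the same REST calls and accepting standard Signature Version 4 request signing, which has been stable and documented for years. Below is a stdlib-only sketch of the SigV4 signing-key derivation that every S3 client performs identically, whether it talks to AWS or to an S3-compatible server; the credential, date, region, and service values are AWS's published example inputs, not anything MinIO-specific.

```python
import hashlib
import hmac

def hmac_sha256(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

def sigv4_signing_key(secret: str, date: str, region: str, service: str) -> bytes:
    """Derive the AWS Signature Version 4 signing key.

    The same chained-HMAC derivation is performed by every S3 client,
    regardless of whether the server is AWS proper or an S3-compatible
    implementation; this is a large part of why the API is so sticky.
    """
    k_date = hmac_sha256(("AWS4" + secret).encode("utf-8"), date)
    k_region = hmac_sha256(k_date, region)
    k_service = hmac_sha256(k_region, service)
    return hmac_sha256(k_service, "aws4_request")

# Example inputs taken from AWS's SigV4 documentation.
key = sigv4_signing_key(
    "wJalrXUtnFEMI/K7MDENG+bPxRfiCYEXAMPLEKEY", "20120215", "us-east-1", "iam"
)
print(key.hex())
```

A server that keeps accepting keys derived this way, and keeps answering the same REST calls, stays compatible with existing clients even if it never ships another feature.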
From what I can tell, "S3 compatibility" usually means compatibility with some subset of the actual S3 API, and which subset that is varies a fair amount between projects.
I can't see how Amazon is incentivized to avoid making changes that break compatibility for their imitators, so long as their first-party SDKs continue working. "Standardized" feels like it should be suffixed with "as long as Amazon doesn't ever feel like evolving the product further".
If Amazon changes the API, they've angered their entire customer base that relies on it. Sure, some will stick around if they're fully entrenched in the ecosystem, but others will be able to leave, and they will, because hey, S3 is a standard-ish API.
• This is translated from my original Chinese post. I used Claude to polish the English — not a native speaker. Fair criticism on the LLM-ese; I'll tighten it.
• This fork exists because MinIO is a production dep in my PG distribution (Pigsty) and I needed working binaries + CVE patches. It's primarily for my own use; sharing it because others may have the same problem.
• We're deliberately conservative — no new features, just a drop-in replacement that behaves like the last OSS release with the console restored. Early commits will look thin. That's by design.
Moved to Garage, it's actually pretty easy to run and use.
Would be even nicer if the official Docker image would support initializing a default bucket and access key from env variables instead of having to exec into the container and follow https://garagehq.deuxfleurs.fr/documentation/quick-start/ but that's not a dealbreaker.
Note: I only needed the single-node install, it was either this or SeaweedFS. Also used MinIO and Zenko in the past, but even the latter seems pretty much dead.
I never understood why one would use MinIO over Ceph for serious (multi-node) use. Sure, it might be easier to set up initially, but Ceph would be more likely to work.
For the single node use-case, I'm working on https://github.com/uroni/hs5 . The S3 API surface is large, but at this point it covers the basics. By limiting the goals I hope it will be maintainable.
Seems like a very balanced take on forking MinIO. I don't have high hopes for the future of MinIO, but as mentioned it is more or less feature-complete, good enough for most use cases.
I was searching for a fairly simple replacement for s3 for testing. I'd been using Minio for a while now, and simply ended up implementing my own on top of Postgres. Fun intersection given the post. (Note, I know it isn't optimal, but as I always have Postgres available it fits well, and I don't have high storage needs, just the api compatibility)
I considered it a while ago, but I wasn't totally clear on Read-After-Write. Which was the primary reason why I choose to just implement my own for testing.
I'll probably give GarageHQ a more serious look again.
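For anyone curious what a bare-bones test stub like the Postgres-backed one mentioned above involves, here is a self-contained sketch of the idea using only Python's stdlib. An in-memory dict stands in where the Postgres table would go, and it handles only path-style PUT/GET of objects with no auth or listing. This is purely illustrative, not the commenter's actual implementation.

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# A dict keyed by (bucket, key) stands in for the Postgres table;
# a real version would use INSERT ... ON CONFLICT / SELECT instead.
STORE: dict[tuple[str, str], bytes] = {}

class TinyS3(BaseHTTPRequestHandler):
    def _parse(self):
        # Path-style addressing only: /<bucket>/<key>
        _, bucket, key = self.path.split("/", 2)
        return bucket, key

    def do_PUT(self):
        bucket, key = self._parse()
        length = int(self.headers.get("Content-Length", 0))
        STORE[(bucket, key)] = self.rfile.read(length)
        self.send_response(200)
        self.end_headers()

    def do_GET(self):
        bucket, key = self._parse()
        body = STORE.get((bucket, key))
        if body is None:
            self.send_response(404)
            self.end_headers()
            return
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *_):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), TinyS3)
threading.Thread(target=server.serve_forever, daemon=True).start()
base = f"http://127.0.0.1:{server.server_port}"

# Round-trip an object the way an S3 client would (minus auth/signing).
urllib.request.urlopen(
    urllib.request.Request(f"{base}/demo/hello.txt", data=b"hello", method="PUT")
)
roundtrip = urllib.request.urlopen(f"{base}/demo/hello.txt").read()
print(roundtrip)  # b'hello'
server.shutdown()
```

Since everything runs in one process against one store, read-after-write is trivially consistent here, which is exactly the property that makes a stub like this convenient for tests.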
There are three new commits, and the only actual fixes are a Go update and a revert to an earlier version of the console.
But there are a bunch of changes to docs, CI workflows, and issue templates. That's the easy part of managing a fork, and I've seen plenty of forks that ended up only ever updating READMEs, CI, etc.
I'll have more faith in the fork when the maintainers do actual fixes.
Although, to be fair, getting too aggressive off the bat would be concerning. A clean fork that is bit for bit compatible with the last open source version is definitely an attractive proposition from a software supply chain perspective.
I wish the effort well. I had plans to self-host S3 with MinIO that took some time to actually get around to, and by the time I did, they had done the enterprise rug pull. I do think one maintainer may be able to pull it off with AI assistance if the scope is limited to security bug fixes. MinIO is one of the nastiest rug pulls I can think of.
> A company that raised $126M at a billion-dollar valuation spent five years methodically dismantling the open-source ecosystem it built.
Sounds like Puppet's story. $180M raised, ~$1B valuation ca. 2019, sold to Perforce in 2022, public repo taken private and builds commercialized by Perforce in 2024, community fork shipped early 2025.
I am wondering if MinIO Inc. has rewritten the software in a clean room. Otherwise, wouldn't they need to publish the source anyway? Since it's AGPL, anyone interacting with the software over a network would be entitled to it. Do they publish it?
There are two kinds of code in the repo:
- Code written by the MinIO team, which they have full ownership of and can relicense as they wish
- Code written by third-party contributors, where MinIO required contributors to grant MinIO a BSD license to the contributions while only publishing them to everyone else under the AGPL
So the AGPL doesn't bind MinIO themselves, because of their licensing policy. (Which is why, while pure AGPL might be the open-source maximalist license, AGPL + CLA is almost at the opposite end of the scale.)
Question: can MinIO the company assert AGPL copyright against the fork? I see the writeup mentions trademarks as far as the fork is concerned.
What's the situation for an AGPL fork? Were one to use it, could the company assert rights like they did against Nutanix?
As long as the fork complies with the terms of the AGPL, MinIO can't stop them from using the code. As the article acknowledges, they could potentially rely on trademarks to make them rename it.
There are several companies I've seen that use a CLA primarily to sell AGPL exceptions so they can actually fund development, Element for example [1]. Some even word the CLA to require them to keep contributions available under an OSI-approved license.
I'm a fan of that model. It allows for a path to funding, provides a legal framework to keep contributed code open, and gives them the license agility to move to more permissive licenses as needed. I've started using that for my own larger projects too.
The FSF's position is that the GPL is far harder to enforce without a single copyright holder, which is why almost all GNU projects require copyright assignment, and why many Canonical/Red Hat/etc. projects have a CLA or something functionally similar.
Sometimes that’s far more work than it’ll ever be worth.
If I get my patches upstream, then I don’t have to waste time reintegrating patches and rebuilding packages when I could instead be doing productive things.
It’s nice to see people taking this on, but for a project like this I’d prefer to wait and see if the maintenance continues.
This blog post is extremely heavy on LLM-written content, which isn't a promising early sign.
> Normally this is where the story ends — a collective sigh, and everyone moves on.
> But I want to tell a different story. Not an obituary — a resurrection.
I’ve seen several announcements of forked open source projects from people who thought that maintaining a fork is easy now that they can have an LLM do all the work. Then their interest trails off when they encounter problems the AI can’t handle for them or the community tires of doing all of the testing and code review for a maintainer who just wants to prompt the LLM and put their name on the project. When someone can’t even write their own announcement without an LLM it’s not an encouraging sign.
Still, I would probably abandon the name for trademark-enforcement reasons. It's low-hanging fruit for them if they want to kill you.
(This is also why the Pentium was called the Pentium instead of the numbers processors used to be called, and why the Nintendo logo was embedded into Game Boy ROMs.)
I very much appreciate the sentiment, and wish him well. However, one guy maintaining a fork as a side project from his core work is not very promising.
He seems to believe AI will help lessen the burden. I hope he's able to find other maintainers.
This had a ton of LLM-ese in it, so here's an LLM explaining it. I read it, agreed, then read it again for LLM-ese, then shared it. I recommend this pattern when using LLMs. Especially when claiming you'll replicate the role of a nine-figure company with an LLM.
LLM generated TL;DR: The factual sections read like a real person who knows what they're doing. The rhetorical flourishes read like someone pasted their draft into Claude and said "make it more compelling." The work deserves better than the prose it got.
LLM output given "<DOC>X</DOC> Identify parts written by an LLM"
Here are the passages that read as LLM-generated rather than naturally written:
*Overwrought dramatic pivots (LLMs love the "Not X — Y" antithesis):*
- "Not an obituary — a resurrection."
- "Not 'unmaintained' — officially, irreversibly, done."
- "That demand doesn't disappear — it just finds its way out."
*Explicitly labeling rhetoric that should speak for itself:*
- "The ironic part:" — just show the irony, don't announce it.
- "The consensus in the international community is clear:" — "international community" is overbearing. "is clear" is LLM throat-clearing.
- "That's the beauty of open-source licensing by design" — "That's the beauty of" is a hallmark LLM filler phrase.
*Grandiose one-liners that try too hard:*
- "git clone is the most powerful spell in open source."
- "a digital tombstone"
- "If December was the clinical death, this February commit was the death certificate." — the metaphor was already established in the heading; extending it here is overworked.
*LLM vagueness / filler:*
- "Things are different now." — says nothing.
- "Consider:" as a standalone transition into the Elon/Twitter example.
- "I believe the maintenance workload is manageable." — the hedging "I believe" adds nothing; just say it's manageable.
*Cliché deployment:*
- "the dragon-slayer has become the dragon" (in the related-article blurb)
- "Eating your own dog food is the best QA." — explaining the idiom ("dogfooding") one sentence before, then restating it as a maxim, is the LLM pattern of using a phrase and then making sure you understood it.
*The AI-hype paragraph is the worst offender:*
> "With tools like Claude Code, the cost of locating and fixing bugs in a complex Go project has dropped by *more than an order of magnitude*. What used to require a dedicated team to maintain a complex infrastructure project can now be handled by *one experienced engineer with an AI copilot*."
This reads like an LLM writing about itself — vague quantification ("order of magnitude"), the buzzword "copilot," and the utopian framing are all telltale. The Elon/Twitter analogy that follows ("Consider:") makes it worse, not better.
*Overall pattern:* The technical/factual sections (the timeline table, the build instructions, the console revert explanation) read like a real person. The editorializing and rhetorical flourishes — especially the intro, the "But Open Source Endures" section, and the "AI Changed the Game" section — are where the LLM voice creeps in most heavily.
AGPL is "a plague" by design (viral). It has the explicit goal that any improvements flow back to the community project and the virality is a necessary building block for this. It is an elegant solution to a tragedy of the commons problem.
Companies like MinIO extending the virality beyond the single software/work, even though not intended by license, gives it a bad reputation. They have fixed https://min.io/compliance now, but I guess it does not matter anymore.
Intentions of the license aside, how far the license extends is an unsettled matter in the US court system. That means when a project asserts that any software which talks to a MinIO instance over S3 is covered by the license, it's on you to decide whether you want to go the distance defending yourself. And even then, they can drop the suit at any point in that long process and continue the status-quo world of ambiguity.
I mean, I'm more worried about the AI writing itself than people calling it out.
The AI articles on HN are an absolute disease. Just write your own damn articles if you're asking the rest of us to read them.
For a web GUI, I had been using this project: https://github.com/huncrys/minio-console
In fact, if you run software in production, assume security is compromised.
Edit:
https://hub.docker.com/r/pgsty/minio
From the OP's link
With so many things offering S3 compatibility, I’d say it’s de-facto standardized.
E.g. the last implementation I saw was by DuckDB https://github.com/duckdb/duckdb-httpfs/blob/main/src/s3fs.c...
https://www.chainguard.dev/unchained/secure-and-free-minio-c...
You wouldn't get the other changes in this post (e.g., restoring the admin console) but that's a bit orthogonal.
There are a lot of other options when it comes to locally hosted S3; MinIO has not been the best option for a long while.
It was perhaps used the most in introductory articles and examples, but there were better options.
* Yes you can absolutely spin up a DIY S3 server
* When you run your server against a credible bench suite, it throws up a bunch of issues (Ceph's s3-tests suite is disheartening: 5 pass out of 800)
* Vibe coding can address the core issues and make significant progress on the 800; most of them don't actually matter
* Low trust in the resulting outcome, but I do plan on running some personal infra (shopping list, etc.) off the DIY S3
Same goes for AWS markup on rented hardware. ;)
Man I sometimes miss having physical servers.
Um what? OpenTofu was forked from the last MPL version of Terraform, not a BUSL-licensed version. This seems like an AI hallucination.
Could you not have a CLA that only allows the project to use a specific license?
If Minio just wanted to use the changes under AGPL, the contributor could just license them under AGPL, no CLA needed.
https://element.io/blog/synapse-now-lives-at-github-com-elem...
Best of luck!
The most famous one I can think of right now is xz.
But we have to rally around something.
And I say this because MinIO started to actively lean on the ugly parts of the license.