> The central promise—that distributed digital fabrication would bring manufacturing back to America, that every city would have micro-factories, that 3D printing would decentralize production—simply didn’t materialize.
I never heard that. It didn’t seem like 3D-printing ever showed signs of displacing existing ways of manufacturing at scale, did it? Units per hour and dollars per unit were never its strengths. It was always going to be small things (and if anything big grew out of it, it would naturally transition to more efficient manufacturing at scale).
Vibe coding, on the other hand, is competing against hand coding, and for many use cases is considerably more efficient. It’s clearly replacing a lot of hand coding.
BTW, I think a lot of people were/are greatly overestimating the value of coding to business success. It’s fungible from a macro perspective, so isn’t a moat by itself. There’s certainly a cost, but hardly the only one if you’re trying to be the next big startup (for that, the high cost of coding was useful — something to deter potential competitors; you’ll have to make up the difference in some other way now).
Also, software is something that already scaled really well in the way businesses need it to — code written once, whether by human or LLM, can be executed billions of times for almost nothing. Companies will be happy to have a way to press down the budget of a cost center, but the delta won’t make or break that many businesses.
As always, the people selling pick-axes during the gold rush will probably do the best.
> BTW, I think a lot of people were/are greatly overestimating the value of coding to business success.
Fully agree - we already saw dev prices drop significantly when offshore dev shops spun up. I've had great, and also horrible, experiences working with devs who could produce lines of code at a fraction of the price of any senior-type dev.
The higher-paid engineers I've worked with are always worth their salary/hourly rate because of the way they approach problems and the solutions they come up with.
Agents are great at building out features; I'm not so sure about complex software that grows over time. Unless you know the right questions to ask, the agent misses a lot. 80/20 doesn't work for systems that need 100% reliability.
I think it's really dependent on the software. And frankly, with the current rate of development, I feel like this continues to shift.
No, a non-engineer can't just spin up the next great app. Even with the newest models and a great prompting/testing system, I don't think you can just spit out high quality, maintainable, reliable code. But as a generalist - I'm absolutely able to ship software and tools that solve our business problems.
Right now, my company identified an expensive software platform that was set to cost us around $250k/year. People in the industry are raving about it.
I've spent 1-2 weeks recreating the core functionality (with a significantly enhanced integration into our CRM and internal analytics) in both a web app and mobile application. And it's gone far smoother than I expected. It's not done - and maybe we'll run into some blocker. But this would have taken me 6 months, at least, to build half as well.
I was an AI skeptic for most of last year. It provided value, sure, but it felt like we were plateauing. Slowing down.
I'd hoped we might be slowing down to some sort of invisible ceiling. I was faster than ever - but it very much required a level of experience that felt reasonable and fair.
It feels different now.
I'd say ~70% of my Claude Opus results just work. I tweak the UI and refactor when possible. And it runs into issues I have to solve occasionally. But otherwise? If I'm specific, if I have it brainstorm, then plan, and then implement - then it usually just works.
> No, a non-engineer can't just spin up the next great app. Even with the newest models and a great prompting/testing system, I don't think you can just spit out high quality, maintainable, reliable code
I think most engineers vastly overestimate how important high quality, maintainable, reliable code is to product success. Yes, you need an experienced engineer to steer Claude into making good high-quality code. But your customer doesn't see your code, they don't see how many servers you need or how often an on-call engineer is woken up. They just see how well the app meets their needs
I predict we will see a lot of domain experts without engineering background spin up incredibly successful apps. Just like the Tea app, many of them will crash and burn from poor engineering. But there will also be enough people who've grown wise to this and, after reaching some success with their app, spend the resources to have others mitigate all the unknown-to-them issues
> I think most engineers vastly overestimate how important high quality, maintainable, reliable code is to product success.
Rather: Many software developers overestimate how important high quality, maintainable, reliable code is to initial product success.
Once the product is highly successful, high-quality, maintainable, reliable code pays huge dividends - and I have a strong feeling that most business people vastly underestimate this dividend.
Vibecoding to production and $1mil ARR (random number) now proves out the application basics and market value which pays for it to be redone correctly :)
There won't be a re-do, there will be a feature request pipeline. Correct is a term of art and unlikely to come into it. If you start losing customers because of reliability, they'll ask Claude to fix it. If that doesn't work you're gonna be in trouble, because you won't have people.
> I think most engineers vastly overestimate how important high quality, maintainable, reliable code is to product success.
I agree; the only thing I can’t get past is the black-box approach. The majority of business stakeholders can’t or don’t want to read the code that Opus, or any other agent, produces. It will most likely work, but if it doesn’t, they have to rely on the agent to find and patch the problem.
I’m with you though, it’s getting incredibly good at doing that, but that concept of “It works but I don’t know why” seems very dangerous at scale.
That last mile for apps isn’t trivial imo; to take them from “it’s cool and does exactly what I want”, to a scenario where all employees at our company can use it.
But who knows I might just be a naive dev lol, this stuff is changing too quickly.
> I predict we will see a lot of domain experts without engineering background spin up incredibly successful apps. Just like the Tea app many of them will crash and burn from poor engineering.
This rollercoaster is going to be wild to ride over the next decade. I've done a few experiments where I've intentionally "vibe coded" either a few features for an existing project of mine or for a few completely clean-sheet ideas.
I completely agree with you. There are going to be a lot of domain experts who do successfully spin stuff up.
> But there will also be enough people who've grown wise to this and after reaching some success with their app spend the resources to have others mitigate all the unknown-to-them issues
Here's the part that shocked me when I tried this... it did not take long for the no-engineering-guidance codebases to turn into complete disasters. Like... in an afternoon I had a pretty functional application that filled a gap for me. It was also... I don't think it'd remain even remotely maintainable for more than a week based on the direction it was going.
> I think most engineers vastly overestimate how important high quality, maintainable, reliable code is to product success. Yes, you need an experienced engineer to steer Claude into making good high-quality code. But your customer doesn't see your code, they don't see how many servers you need or how often an on-call engineer is woken up. They just see how well the app meets their needs
Your customers definitely see the quality of code, just by proxy. When features take forever to ship, and things fall over all the time, those are code quality and design problems.
Honestly, code quality is somewhat more important right now, because using common and clear patterns helps AI make better changes, and a more resilient architecture lets you hand more off without worrying that things will fall over.
> The higher-paid engineers I've worked with are always worth their salary/hourly rate because of the way they approach problems and the solutions they come up with.
I'm honestly just happy at the moment, because our two junior admins/platform engineers have made some really good points to me in preparation for their annual reviews.
One now completed his own bigger terraform project, with the great praise of "That looks super easy to maintain and use" from the other more experienced engineers. He figured: "It's weird, you actually end up thinking and poking at a problem for a week or two, and then it actually folds into a very small amount of code. And sure, Copilot helped a bit with some boilerplate, but that was only after figuring out how to structure and hold it".
The other is working on getting a grip on running the big temperamental beast called PostgreSQL. She was recently a bit frustrated. "How can it be so hard to configure a simple number! It's so easy to set it in ansible and roll it out, but to find the right value, you gotta search the entire universe from top to bottom and then the answer is <maybe>. AAaah I gotta yell at a team". She's well on her way to becoming a great DBA.
> Agents are great at building out features; I'm not so sure about complex software that grows over time. Unless you know the right questions to ask, the agent misses a lot. 80/20 doesn't work for systems that need 100% reliability.
Or if it's very structured and testable. For example, we're seeing great value in rebuilding a Grafana instance, moving from manually managed to scripted dashboards. After a bit of scaffolding, some style instructions, and a few example systems, you can just hand it a description and a few queries; it goes off and does the work, needing only a little tweaking afterwards.
Similarly, we're now converting a few remnants of our old config management to the new one using AI agents. Set up a good test suite first, then throw the old code and examples of how the new config management does it into the context, and modern models handle that well. At that point, just rebuilding the system once is better than year-long deprecation plans with undecided stakeholders as mobile as a pet ferret that doesn't want to move.
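The scripted-dashboards idea can be sketched as plain JSON generation. A minimal sketch, assuming a trimmed-down version of Grafana's dashboard JSON model; the `dashboard` helper and the specific queries are invented for illustration:

```python
import json

def dashboard(title, panels):
    # Hypothetical helper: emits a trimmed-down Grafana dashboard body.
    # Real dashboards carry many more fields; only the essentials are shown.
    return {
        "dashboard": {
            "title": title,
            "panels": [
                {
                    "id": i + 1,
                    "type": "timeseries",
                    "title": p["title"],
                    "targets": [{"refId": "A", "expr": p["query"]}],
                }
                for i, p in enumerate(panels)
            ],
        },
        "overwrite": True,  # re-running the script replaces the previous version
    }

body = dashboard("Node overview", [
    {"title": "CPU usage", "query": 'rate(node_cpu_seconds_total{mode!="idle"}[5m])'},
    {"title": "Available memory", "query": "node_memory_MemAvailable_bytes"},
])
print(json.dumps(body, indent=2))
```

With scaffolding like this in place, an agent handed a description and a few queries only has to fill in the panels list - which is small, structured, and easy to review and diff.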
It's really not the code holding the platform together, it's the team and the experiences and behaviors of people.
It makes sense for junior admins and junior platform engineers to leverage LLMs, but I'd be highly skeptical about the future skillset of any junior software engineer who leverages LLMs right off the bat, unless we have already moved that goalpost.
Depends how they use them, as was arguably the case with Stack Overflow or other resources in the past. E.g. an LLM can be a valid and useful way to start discovering or understanding code and architecture. You can ask for a summary, distill important concepts, and then read in more detail about them.
You’re not wrong. You’re not the only one saying this either. Though, I’m currently of the mind that the concern is overblown. I’m finding Opus 4.6 is only really capable of solving a problem when the prompt explains the fix in such concrete detail that coding is incredibly straightforward. For example, if the prompt has enough detail that any decent human programmer would read it and end up writing basically the same code then Claude can probably manage it too.
While I haven’t used other models like Codex and Gemini all that much recently, Anthropic’s is one of the top-tier models, and so I believe the others are probably the same in this way.
A junior’s mind will not rot because the prompt basically has to contain detailed pseudocode in order to get anywhere.
Also, I have been called a bit of a hard-ass for this, but if the junior author of some piece of code is not able to explain to me why it is written that way or how they would extend it in a few reasonable cases, I consider that a problem.
This is orthogonal to both if it is well thought-out/naive/really strange code, or LLM generated/LLM assisted/hand written code. If there is a good understanding of the task and the goals behind it, the tools become secondary. If skills are lacking, it will end up a mess no matter the tools and it needs teaching.
Most of us could run stable servers with just ssh and vi. Would suck a lot though.
> He figured: "It's weird, you actually end up thinking and poking at a problem for a week or two, and then it actually folds into a very small amount of code. And sure, Copilot helped a bit with some boilerplate, but that was only after figuring out how to structure and hold it".
Let me just get you that Fred Brooks quote, now where was it...? Ah, yes, here's one:
We didn't even have to offshore for lots of bad code to be written.
Looks at the scores of Ycombinator startups that wrote a shitload of awful code and failed. Good ideas, pretty websites, but not a lot of substance under the hood. The VC gathering aspect and online kudos was way more important to them than actually producing good code and a reliable product that would stand the test of time.
Pretty much the most detestable section of the HN community, IMNSHO. I notice they're much quieter than usual since the whole vibe coding thing kicked off.
> Looks at the scores of Ycombinator startups that wrote a shitload of awful code and failed.
This can also be restated as, look at all the startups that wrote a shitload of awful code and succeeded.
That’s an indicator code quality doesn’t matter at macro scales. We already knew this though even if we didn’t explicitly say it. It’s more about organization, coordination, and execution than code.
This seems like it's reading too much into things. I'm sure driving an ambulance slower vs faster doesn't make a difference to survival in most cases, but on the margins it absolutely does.
Startups are also quite different from ambulances; surviving and minimising patient harm isn't the most important thing for a startup. Instead, it's building a profitable and valuable business. You're not just worrying about the margins, you're also hoping to squeeze out every bit of growth you can.
> That’s an indicator code quality doesn’t matter at macro scales.
I think it can though. It just depends. Having high quality code and making good technical choices can matter in many ways. From improving performance (massively) and correctness, to attracting great talent. Jane Street and WhatsApp come to mind, maybe Discord too. Just like great design will attract great designers.
I also think it might matter even more in the age of AI agents. Most of my time now is spent reviewing code instead of writing code, and that makes me a huge bottleneck. So the best way to optimize is to make the code more readable and to have good automated checks that reduce the amount of work I need to do: static types, no nulls, compilation, automated tests, secondary agent reviews, etc.
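As a small illustration of how static checks shift work off the human reviewer - all names here are invented for the example:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class User:
    # Required, non-nullable fields: a type checker run in CI (e.g. mypy or
    # pyright) rejects a missing name or a None passed where a User is
    # expected, before a human reviewer ever sees the diff.
    name: str
    email: str

def greeting(user: User) -> str:
    return f"Hello, {user.name}!"

print(greeting(User(name="Ada", email="ada@example.com")))  # Hello, Ada!
```

The point is not this particular snippet but the division of labor: the machine-checkable properties (types, nullability, tests) get verified automatically, so review time goes to the judgment calls only a human can make.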
I mean, look at all the startups that succeeded despite being complete shitshows behind the scenes... the baseline for leadership, organization, coordination or, hell, execution for a startup to succeed isn't exactly high either.
Bad code is one possible axis - not a likely one at that stage though. Bad code gets you a few years down the line when competitors are moving faster than you to the point where they are cheaper and better.
One thought experiment I keep having when I see LLM hype: imagine if our outsourcing companies could be as blasé about copyright as OpenAI, and how profitable they could be.
I mean, rename some dudes over there to ‘transformer’, and let them copy & paste from GitHub with abandon… I know we could get a whole browser for less than a few grand.
We wouldn’t, because it’d be copyright-insane. But if we just got it indirect enough, maybe fed the info to the copiers through a ‘transforming’ browser to mirror the copyright argument, I bet we could outperform OpenAI in key metrics.
Coding is formalizing for the compiler. The other 99% of the job is softly getting the PHB not to fuck the entire company and being unique in not doing dumb shit everyone thinks is popular now but will regret soon. It’s all like IT tribal tattoos. Barely cool for a couple of years, and then a lifelong source of shielded regret.
Here's the thing though, and with all due respect I say this as someone who has worked with offshore teams.
They were only as good as the input they were given. They rarely went above and beyond, and most of the time getting something "good enough" was challenging. Yes, time zones, cultural differences/attitudes, and their exposure/opportunities play a big role.
What I'm saying is that teams who had bad onshore employees got horrible results. Teams that had actual systems engineers and people who could architect systems usually got great results.
For example, we were building a bleeding-edge (at the time) e-commerce site for one of the largest companies in the entertainment space. I made sure to work with the best people I knew at the company to design the system from the ground up. Then we made sure the actual "functional" pieces were digestible and written plainly enough that we didn't need to clarify terms. Nor did we write a fucking 300-page technical document. We kept things simple and effective, and all the work was broken down into pieces as atomic as possible.
The end result was that we used a team distributed between Ukraine and India to build this in about 4 months. We'd do weekly sprints, and the team had great spirits too because we actually gave a fuck about them and ensuring their success. I'm sure they're used to being scapegoats because of some lazy fucks onshore.
Now I use agents daily and have great success. However, the whole "write a sentence and AI will do it for you" thing is obviously bullshit. I even asked HN why I got wrong results, to test how people would respond (sorry for playing you), and as I predicted they blamed me, thus proving that this broader sentiment, so prominent among "thought leaders", is stupid as fuck. So, that's where we are.
People who can actually build great systems know that it requires careful planning, deep understanding, and ability to fill in the gaps.
I did, a lot, maybe fifteen years ago. There was a lot of talk about a "3D printing revolution" and being years away from being able to make whatever you want at home. For a while, the "maker" moniker was strongly associated with home manufacturing maximalists.
I still don't get the point the article is making, though. That 3D printer thinking was obviously naive because it underestimated the difficulty of mechanical design and the importance of the economies of scale. Using AI to "write" or "code" is a lot easier than turning a vague idea for a household good into a durable and aesthetic 3D print, so it's apples to oranges.
There are other things that the vibecoding movement is underestimating - when you pay a SaaS vendor, you're usually not paying for code as much as for having a turnkey solution where functionality, security, infrastructure, and user support are someone else's problem. But I think that's pretty much where the parallels end.
Also hiring. It's easier to find people with JIRA experience than people experienced with your vibe-coded ticket manager, even if it is technically superior for your application.
If there is any commonality between the 3D printing craze and vibe-coding, they're both renditions of "just because you can, doesn't mean you should".
I was a kid at the time, but adults, magazines, and other children convinced me that 3D printing at home would likely replace a huge number of products. This included extremely optimistic speculation, like printers producing smartphones or houses. Then I dated a boy who used his 3D printer as a substitute for The Container Store, at higher cost, with greater effort, and at lower quality, and that soured me on the concept.
I think we'll see this slowly march along. I just made some custom-designed speaker tilt mount things for my desk. Sure, it's a trivially simple example, but a lot of things are. I was able to get the exact angle I wanted, bigger than most and in a design I liked, crafted by AI in 5 minutes, and on my desk by the next morning and for a fraction of the price of a Chinese made Amazon version.
It's no replicator, but give it 5 years and it might be surprising how useful it is.
> Then it was a lot of “self replicating printers” for quite a while, which never has been a real thing.
3D-printed 3D printers got quite far; the reason this topic fell off the radar of people who are not 3D-printing nerds is rather that, for mass production of 3D printers, much better processes exist.
What people realized is that, up to a certain share of the parts, printing them on a 3D printer works really well. You can find a lot of such 3D printer designs on the internet.
Concerning progress here, also note that over the last few years home 3D printers have gotten a lot better at handling "engineering materials". These materials are very useful if you want to (partly) 3D-print a 3D printer, but this development is often not associated with "3D-printing 3D printers". :-)
Then you get to parts which can be printed on a 3D printer, but these parts will not be of the same quality as parts that can easily be bought, such as belts etc. The Mulbot is a design that takes this approach very far:
And then you get to parts that are nearly impossible to print on a 3D printer ...
So, after a consensus formed about where the boundaries lie (how much of a 3D printer can sensibly be 3D-printed), people started looking at other manufacturing techniques that exist for producing parts of 3D printers, and started considering:
1. could and how far could a machine for this process be 3D-printed (or produced on a 3D-printed machine)?
2. could we bring such a machine to home manufacturing, too (so that people can easily build such a machine at home)?
Machines considered for this included, for example, CNC mills (3-, 4- and 5-axis), CNC lathes, pick-and-place machines (for producing PCBs), ...
There do exist partial implementations of such machines, just to give some examples:
- lots of designs of CNC mills that use 3D-printed parts. I won't give a list here, but I do want to mention that the "Voron Cascade" project wants to do for home 3-axis CNC milling what the Voron did for 3D printing. Rumors on the internet say that the Voron Cascade is well on the way, but has had quite a lot of delays relative to announced release dates.
Thus: I hope I've given some evidence that in recent years there have still been a lot of developments toward the distant goal of "self-replicating 3D printers", but these were quiet, impressive developments rather than loud, obtrusive marketing stunts.
Just like with vibe-coding complaints: have you tried the latest models (of 3D printers)? Specifically, Bambu's latest models make the printer a device you just use rather than the project itself. It's the Apple of 3D printing. Previously, you'd spend hours on calibrating and leveling nonsense. The latest models don't have this problem. Open the app on your phone, (doom)scroll until you find something, and hit print from your phone. You can make it more complicated as desired, but that's not necessary to get something out of your printer.
Which apes vibe coding. ChatGPT 3.5 was laughably bad compared to Codex 5.3, but if you're basing your opinion on 3.5's performance, your opinion is out of date.
If the point is to make a plastic switch cover, why is it important that the person CADs it themselves, rather than hitting print on a model they found online? Does it fail some sort of piety test if the end result that comes out the other end is a functional piece of plastic in a specific shape?
I recently wrote a blog post about exactly this, and I agree with your perspective. Vibe coding helps with showing other people your idea, getting them to understand it and try it, and, most importantly, helping you fail fast. But as the product matures, the gains from using LLMs and agentic engineering will go from a 10000% efficiency boost to something like a 30(?)% productivity gain. Which is still awesome, of course.
"The real test of Vibe coding is whether people will finally realize the cost of software development is in the maintenance, not in the creation."
It's not awesome, not for us. A 30% productivity gain would be enormous. Just imagine 30% of developers losing their jobs, on top of outsourcing and all the new graduates flooding out of colleges after CS has been hyped so much in recent years.
I really doubt that 30% productivity gain would result in 30% developers losing their jobs. Believing this would require an assumption that businesses and economies will never grow.
It also doesn't make mathematical sense. If you now have 130% developer capacity, then the fraction of developers you need to keep is `x`, defined by 130% × x = 100%, so x ≈ 76.9%, implying you'd lay off about 23.1% of developers.
Percentage increases are not the same as percentage losses.
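The arithmetic spelled out above, as a quick sanity check:

```python
gain = 0.30              # the assumed 30% productivity gain
capacity = 1 + gain      # each remaining developer now covers 130% of prior output
keep = 1 / capacity      # fraction of head count needed for the same total output
layoff = 1 - keep

print(f"keep {keep:.1%}, lay off {layoff:.1%}")  # keep 76.9%, lay off 23.1%
```

The asymmetry is general: a gain of `g` implies a cut of `g / (1 + g)`, which is always smaller than `g`.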
Good tooling, high-level languages, faster computers, and sane standards also enabled enormous productivity gains. I predict very few positions lost to LLMs; rather, as with any technical "revolution", we'll just set a new baseline for productivity, get rid of some bottlenecks, and end up needing even more engineers to maintain upkeep.
Most jobs "lost to AI" are just companies that want or need to lay people off; shareholders like "replaced 30% of our workforce with AI" more than any other conceivable reason.
I personally wouldn't trust the 3D-printing community. The pre-bambu lab days were pretty bleak.
Print quality is everything when it comes to 3D printing. The printing quality must keep increasing if 3D prints are to be used as finished products. People should stop printing STL artifacts into their prints. Layer lines must fade away into invisibility. Top surfaces must be impeccably smooth without any stepping. New coatings need to be developed for texturing 3d printed parts and the parts need to be ready for coating right from the print bed.
Layer lines are much less pronounced when you use a 0.25 mm nozzle with an appropriate layer height instead of a 0.4 mm nozzle (the achievable quality is even on the verge of satisfying people who use 3D printing to produce miniatures). The price you pay, of course, is print time.
> Top surfaces must be impeccably smooth without any stepping.
In recent years there has been a lot of progress on ironing features in slicers, which mitigate this issue.
Another very recent slicer addition that mitigates the perceived problem is "fuzzy skin", which hides the imperfections of the FDM printing process by making surfaces look rougher.
--
Another solution is to simply use resin printing instead of FDM printing for finished products if feasible.
Definitely a fantasy land ideal. Much like pitches from the Free Software Foundation of a world without copyright and IP. It's just never going to exist because reality just isn't that way.
> Much like pitches from the Free Software Foundation of a world without copyright and IP.
If there exists no copyright, you cannot force an entity to release the source code of their software.
A world without copyright and IP is for sure an interesting thought experiment, but very different from the FSF vision:
In such a world, there would be much more reverse-engineering and monkey-patching of existing (non-open) software that gets copied around very liberally.
On the other hand, because there is no enforceable copyright, companies would of course invest a lot of resources into developing hard-to-crack copy protection schemes. Similarly, freedom-loving hackers would invest serious resources into cracking such schemes.
> It didn’t seem like 3D-printing ever showed signs of displacing existing ways of manufacturing at scale, did it?
It didn’t and I’m not sure anyone who knew anything about at-scale manufacturing ever saw it that way. Injection molding is far cheaper per unit and more accurate.
But 3D printing has made a major impact on prototyping. Parts that would have taken serious machine shop work or outsourcing can be printed in a few hours. It really changed the game for mechanical engineers.
In terms of vibe coding, time to demo/prototype is greatly reduced. That definitely takes time and cost away from R&D. But I don’t know that it’s had much impact on transfer to manufacturing, which can easily be the hard final 20%.
> I never heard that. It didn’t seem like 3D-printing ever showed signs of displacing existing ways of manufacturing at scale, did it?
It absolutely was the "promise" the media spun.
I had the relatively unique experience of moving from being an outsider to this field to being an insider. While I was an outsider, my impressions, formed by the media, were exactly that—3d printing would be the next big revolution, in a few years there'd be a printer in every home, etc.
I then joined a company that allocated a lot of resources to 3d printing. It only took me a month or two to realize that the big media claims were absolutely ridiculous, and didn't make any sense as stated. They misunderstood the state of the technology, and misunderstood basic economics and how regular manufacturing works.
That's not to say there's no value in 3d printing or the maker movement. There's a ton of value that's been uncovered. But the specific media dream of "people will be printing their plates at home instead of buying them in the store" was never real.
(Btw, IMO "vibe coding" is absolutely real and revolutionary, likely the biggest revolution in the software industry since, idk, the invention of the computer itself. And AI more generally is, even beyond vibe coding aspect, a revolutionary technology that will change the world in many ways.)
>> BTW, I think a lot of people were/are greatly overestimating the value of coding to business success. It’s fungible from a macro perspective, so isn’t a moat by itself.
Broadly true if you have $10M to throw at it, and know exactly what you want, or if what you want isn't something involving a "secret sauce".
But between competing startups doing something novel, original software is a moat. No moat is permanent; you leverage it into market share while you have time.
And while no software is itself a secret, the business logic and real-world operations it distills and caters to may be. The software is the least obfuscated encoding of that operational logic, or even trade secrets, which are the DNA of a business and dictate the tools it goes into battle with.
Software being a moat (which it rarely is for long) is more of a question for the software industry. For other industries, software that amplifies best practices and crystalizes operational flow from the business logic can absolutely extend whatever moat the company already has.
In the small bore, if you have two midsized competing $100m companies in some arbitrary industry, the one that uses SaaS may be well behind the one that invested $1m in their own in-house software from the beginning, mostly because the one with SaaS must work their business logic around certain shortcomings, while the other can devise and deploy workflows for employees that may themselves create a new advantage the other company hasn't considered.
> if you have two midsized competing $100m companies in some arbitrary industry, the one that uses SaaS may be well behind the one that invested 1m in their own in-house software from the beginning, mostly because the one with SaaS must work their business logic around certain shortcomings, while the other can devise and deploy workflows for employees that may themselves create a new advantage the other company hasn't considered.
Counter anecdote: about a decade ago I was brought in by the company's new director to lead the modernization of their in-house Electronic Medical System software, which was built on FoxPro in 1999, ran on SQL Server 2000, and was maintained by two “developers” who had been there for a decade.
I led another project there first that was more pressing - in-house mobile software maintained by two other “developers”. It was built on top of a mobile framework by a local startup. It was used by home health care nurses for special needs kids.
After I got my head around the business and what they were trying to do - PE-owned, acquiring other companies whose systems they needed to integrate, with low margins (mostly Medicaid reimbursements) - I decided the best thing I could do was put myself out of a job.
I told the director we have no business trying to build up a software development department. We moved everything to various SaaS products and paid consulting companies to make all of the customizations. Meaning they sign a statement of work and come back with a finished product.
Software development was never going to be this company’s competitive moat. They got rid of the two developers maintaining the mobile app and contracted that out. The two other developers who had maintained the FoxPro app became “data analysts” and report writers.
>> The central promise—that distributed digital fabrication would bring manufacturing back to America, that every city would have micro-factories, that 3D printing would decentralize production—simply didn’t materialize.
Interestingly, I am not aware that this book was really popular or well-known in Germany (honestly, this is the first time I've heard of this specific book, though I am aware that some marketers (who in my opinion did not really understand the maker scene or 3D printing) made such claims).
Instead, at that time, in Germany nerds were getting excited about understanding how to build 3D printers (in particular partially self-replicating ones (RepRap)) and how 3D printing
- could be used to make yourself much more independent of the discretion of part manufacturers (i.e. some part is broken? Use a CAD system to re-design it and 3D-print your re-design),
- makes you capable of building stuff in small scale "that should exist", but no manufacturer is producing,
- enables part designs that are (nearly) impossible to manufacture using any other existing technology, and thus basically enables you to completely reimagine and improve how nearly every produced part that you see around you is designed,
- ...
I would say that the nerd visions of that time mentioned above have at least partially been implemented and/or are well on their way towards that goal. It's just that the practical implementations did not come with a spectacular change in the overarching mindset of society; rather, they are highly important, but not (necessarily) revolutionary, changes in the lives of the people who want these changes to be part of their life.
> BTW, I think a lot of people were/are greatly overestimating the value of coding to business success.
Personally, I don't believe the big changes will come from "coding costs less for businesses". I think it will come from "trying new businesses is now cheaper, both in time and money". Smaller and cheaper players will be entering a lot of spaces over the next 5 years IMO.
I think volume and cost were never really the issue. Even if 3D printing something cost 3x as much, it could justify itself just by the sheer amount of overhead it can otherwise remove. Ultimately what limits 3D printing is what you can make with it, and the fact that it doesn't remove assembly as a manufacturing step. If you could 3D print full products, then I think the promised revolution would have happened. (As it stands, 3D printing has already had a massive impact on manufacturing. More stuff than you would think is 3D printed now; it's just not complete consumer items.)
(Not to mention, it's only in the last few years where consumer-accessible 3D printers are more than hobbyist grade that required a huge amount of tinkering to actually work properly)
"One Print, Multiple Components: Pick & Place Tool
Some technical prints require additional components, such as magnets, threaded inserts, or bearings, to be placed during the build. Without automation, this typically means you have to pause the print and insert the part(s) by hand. Although PrusaSlicer made this process easier a while ago, The Pick & Place toolhead can do it for you, completely autonomously. This reduces manual intervention and improves placement accuracy.
We’ve co-developed the toolhead with the Zurich University of Applied Sciences (ZHAW) and it’s designed for models that combine 3D-printed models with off-the-shelf components. We’re currently targeting late 2026 with its implementation."
> Almost nobody talked about “getting manufacturing back to the US”.
I guess the President of the United States is an almost nobody. Obama's 2013 State of the Union hyped up 3-D printing explicitly as a tech that would be bringing manufacturing back to the U.S. The U.S. government made public-private partnerships with maker spaces and fab facilities in hollowed out Rust Belt cities, and Obama mentioned it by name in the most important and viewed policy speech the President gives each year.
> “A once-shuttered warehouse is now a state-of-the art lab where new workers are mastering the 3-D printing that has the potential to revolutionize the way we make almost everything,” Obama said. [...] Obama announced plans for three more manufacturing hubs where businesses will partner with the departments of Defense and Energy “to turn regions left behind by globalization into global centers of high-tech jobs.” (https://edition.cnn.com/2013/02/13/tech/innovation/obama-3d-...)
It was promised but it never materialised. Everyone was saying we'd all have a 3D printer at home and there'd be no market for niche products any more because we'd just print them on demand.
I heard the CEO of Autodesk giving a talk saying that. As a stockholder, I was disappointed. Just because his daughter could make dollhouse toys with a 3D printer didn't mean it was going to take over manufacturing.
> BTW, I think a lot of people were/are greatly overestimating the value of coding to business success.
I've frequently argued to my organization's leadership that the product could be open source on GitHub with a flashing neon sign above it and it wouldn't change anything about the business. A competitor stealing our codebase would probably be worse off than if they had done anything else. Conway's law and all that.
The problem wouldn’t be your competitors cribbing your ideas; it would be more like letting anyone with a bone to pick audit you for minor compliance violations, customers relying on internal implementation details or judging you unfairly for legacy horrors, or devs getting self-conscious about their sloppy 2am fix and prolonging an outage for rational public-image/ego reasons.
There was certainly a contingent who believed that 3d printing was going to replace all other forms of manufacturing. It was even going to make custom food for us on order.
If you balked at the idea, then you were the bad guy, or treated with pity for being so out of touch. Usually you got the Kübler-Ross stages thrown at you.
> There was certainly a contingent who believed that 3d printing was going to replace all other forms of manufacturing. It was even going to make custom food for us on order.
Yes. Met those guys in my TechShop days. They also insisted that 3D printers should be made with 3D printers, which resulted in a generation of flimsy, inaccurate machines.
The current generation of serious 3D printers is very impressive. Take a look at Space-X's Raptor engine. A rocket engine is mostly one piece of complicated metal with a lot of internal voids. That's something 3D printers are good at. Once 3D printing was able to print stainless steel and titanium, it could be used for hard jobs like that. PLA just isn't much of a structural material, even with 100% fill.
Serious 3D printers are found in machine shops, not homes and libraries.
> Yes. Met those guys in my TechShop days. They also insisted that 3D printers should be made with 3D printers, which resulted in a generation of flimsy, inaccurate machines.
I do believe that this vision is basically correct, but the implementation of these eager 3D printing enthusiasts was deeply flawed:
There exist lots of designs on the internet of really good 3D printers that are at least partly 3D-printed. So at least a relevant subset of the parts of a 3D printer can be 3D-printed. The reason commercial 3D printers are typically not 3D-printed is rather a matter of aesthetics and the fact that, for large-scale manufacturing, much cheaper production techniques typically exist.
As people have by now realized (and some of these points were put to those eager 3D printing enthusiasts from the beginning), the correct approach to get towards an exceptional "mostly 3D-printed 3D printer" is rather:
- Improve 3D printers so that even more parts of a 3D printer can be 3D-printed in high quality (e.g. by improving sensors and software to increase precision; make the 3D printer capable of handling engineering materials; ...)
- Use a 3D printer to produce parts for machines that can in turn produce parts for a 3D printer, such as a CNC mill, a CNC lathe, a pick-and-place machine (for populating the PCBs), etc.
Both of these aspects are hot topics that people work on.
In other words: accept for now that many, but not all, parts of a 3D printer can currently sensibly be 3D-printed, and invest serious effort in developing ways that 3D printing can enable high-quality production of those remaining parts.
Sure, and that's useful, but it's neither revolutionary nor exclusive to 3D printers. You can use a milling machine to mill a bunch of pieces for a milling machine. You can use a PCB printer to print the PCBs for a PCB printer. A 3D printer is much, much closer to this than it is to a self-replicating machine.
A classic manual Bridgeport mill, a foundry for making castings, a heat-treating furnace, a steel planer, a lathe, a drill press, a grinder, and a supply of steel is enough for a master machinist to reproduce all that. That's what was used to make machine tools in the first half of the 20th century.
>Companies will be happy to have a way to press down the budget of a cost center, but the delta won’t make or break that many businesses.
Software companies spend a huge amount of money on having software written. Why would significantly altering the cost structure not make or break companies?
> Vibe coding, on the other hand, is competing against hand coding, and for many use cases is considerably more efficient. It’s clearly replacing a lot of hand coding.
It seems like a lot of vibe coders are people who otherwise wouldn't be coding at all.
> Good example! 90 percent ( or even more) of code do not need NASA level code.
I mean, if you're saying that 90 percent of code only needs to be hobby level, I don't really agree that is the case.
I mean, take something like NPM and the JavaScript ecosystem. Every js project has mountains of dependencies which are included without a second thought or auditing of the code. Both in hobby projects and enterprise software alike. What happens when people vibe code those NPM modules? Is it a hobby? Maybe for them, but publishing it to an "official" source gives it implied credibility.
This is dangerous, because the line between production grade and hobby grade can get blurry real fast.
Way more than 90% of code doesn’t need to be up to NASA standards. But there are many levels of quality between vibe-coded and NASA-engineered. Most code that makes money is in the middle, but should probably be closer to the NASA end of the spectrum.
If 90% of code is for hobbies that don't cost anyone anything when it breaks, great - but launching rockets into space with million or billion dollar payloads is akin to software that makes millions or billions of dollars, and vibeslop is simply a liability at best in any real use case past a weekend hobby project.
The great thing about vibe coding is we're at the point where people like me have to come in to fix core problems for apps and platforms that non-domain experts are outputting as slop.
Those problems span from fundamental architecture flaws to mistakes that anyone who spent 5 minutes reading the docs would never make, like creating an entire app that slows to a crawl when more than one user uses it, because all parallel work gets serialized due to a complete misunderstanding of how concurrency, async/await, and threads work in the language they're "writing".
People with too much money build entire apps on foundations that crumble and significantly hold them back from doing simple things, and I love it.
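The serialization failure described above is easy to reproduce. Here is a minimal sketch in Python's asyncio (the handler names and timings are illustrative, not from any real app): a blocking call inside an async handler freezes the event loop, so five "concurrent" requests end up running one after another.

```python
import asyncio
import time

# The classic mistake: a synchronous sleep (stand-in for any blocking
# I/O or CPU work) inside an async handler. The event loop runs on one
# thread, so every "concurrent" request waits its turn behind the sleep.
async def blocked_handler(request_id: int) -> None:
    time.sleep(0.2)           # blocking call: freezes the whole event loop

async def fixed_handler(request_id: int) -> None:
    await asyncio.sleep(0.2)  # yields to the loop: requests overlap

async def main() -> None:
    t0 = time.monotonic()
    await asyncio.gather(*(blocked_handler(i) for i in range(5)))
    print(f"blocking version: {time.monotonic() - t0:.1f}s")  # ~1.0s, serialized

    t0 = time.monotonic()
    await asyncio.gather(*(fixed_handler(i) for i in range(5)))
    print(f"async version:    {time.monotonic() - t0:.1f}s")  # ~0.2s, concurrent

asyncio.run(main())
```

The blocking version takes roughly five times as long; the fix is simply awaiting a non-blocking call, or offloading genuinely blocking work with `asyncio.to_thread`.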
> It didn’t seem like 3D-printing ever showed signs of displacing existing ways of manufacturing at scale, did it?
There was a point in time where some people looked at 3D printers and said "Wow, imagine how great this technology will be in 20 years." There was some amount of anticipation for multi-material printers to come around and for home printers to begin replacing traditional consumer goods. Compared to crypto, VR, and AI it doesn't look like much, but 3D printing did go through a hype bubble.
People thought 3D printing would be democratized like the inkjet printer when it first came about. And that would have been powerful: so many trips to the store eliminated, so many lines of business displaced, so many things changed - everything from the plastic junk at Walmart to spare parts for your car, plus everything in between, appearing at the snap of your fingers in every person's home.
Seems like today they are still stuck in the same tracks they were in 2016. A couple of nerds own them personally. Maybe you'd find them in a maker space or a library or school. Not in your boomer parents' office, though.
> It didn’t seem like 3D-printing ever showed signs of displacing existing ways of manufacturing at scale, did it?
It's really hard to beat injection molding for scale.
However, what 3D printing did shift was building molds and prototypes. And that shifted small volume manufacturing--one offs and small volumes are now practical that didn't used to be. In addition, you can iterate more easily over multiple versions.
The limiting factor, however, has always been the brain power designing the thing. YouTube is littered with videos that someone wants to build a "thing" and then spends 10-20 iterations figuring out everything they didn't know going into the project. This is no different from "real" projects, but your experienced engineering staff probably only take 5 iterations instead of 20.
Once the predictions of a magical future turn out to be false, techies suddenly don't remember. Kind of like when the cult leader's prediction of doomsday doesn't come to pass: there's always another magical prediction of a new future coming. Here are just a few major mainstream sources:
2012, Cornell Prof and Lab Director, in CNN: "We really want to print a robot that will walk out of a printer. We have been able to print batteries and motors, but we haven’t been able to print the whole thing yet. I think in two or three years we’ll be able to do that." (https://www.cnn.com/2012/07/20/tech/3d-printing-manufacturin...)
2013, World Economic Forum: "the world can be altered further if home-based 3D printing becomes the norm. In this world, every home is equipped with a printer capable of making most of the products it needs. Supply chains that support the flow of products and parts to consumers will vanish, to be replaced by supply chains of raw material." (https://www.weforum.org/stories/2013/08/will-3d-printing-kil...)
2013, President of the United States of America Barack Obama hypes up 3-D printing in the State of the Union as a technology that will bring manufacturing back to the U.S.: “A once-shuttered warehouse is now a state-of-the art lab where new workers are mastering the 3-D printing that has the potential to revolutionize the way we make almost everything..." Obama announced plans for three more manufacturing hubs where businesses will partner with the departments of Defense and Energy “to turn regions left behind by globalization into global centers of high-tech jobs.” (https://edition.cnn.com/2013/02/13/tech/innovation/obama-3d-...)
2012, Cover story and special issue of The Economist predicting another Nth industrial revolution:
"THE first industrial revolution began in Britain in the late 18th century, with the mechanisation of the textile industry. Tasks previously done laboriously by hand in hundreds of weavers’ cottages were brought together in a single cotton mill, and the factory was born. The second industrial revolution came in the early 20th century, when Henry Ford mastered the moving assembly line and ushered in the age of mass production. The first two industrial revolutions made people richer and more urban. Now a third revolution is under way. Manufacturing is going digital. As this week’s special report argues, this could change not just business, but much else besides.
A number of remarkable technologies are converging: clever software, novel materials, more dexterous robots, new processes (notably three-dimensional printing) and a whole range of web-based services. The factory of the past was based on cranking out zillions of identical products: Ford famously said that car-buyers could have any colour they liked, as long as it was black. But the cost of producing much smaller batches of a wider variety, with each product tailored precisely to each customer’s whims, is falling. The factory of the future will focus on mass customisation—and may look more like those weavers’ cottages than Ford’s assembly line." (archive: https://communicateasia.wordpress.com/2012/04/20/manufacturi...)
At some scales, Obama was right... a lot of companies that do plastic extruded parts also do 3D printing for lower volume fulfillment. You can also do some types of parts that you couldn't make through extrusion.
It's especially funny because HN commenters are some of the most likely people to make wild, sweeping claims then once they don't come true, turn back around and say "well no one was actually saying that anyway."
Or, I just realized, if they are a 22-year-old college graduate, they were in elementary school when the 2012-2014 3-D printing hype cycle was at its peak.
> I never heard that. It didn’t seem like 3D-printing ever showed signs of displacing existing ways of manufacturing at scale, did it? Units per hour and dollars per unit was never its strength. It was always going to be small things (and if anything big grew out of it, those would naturally transition to the more efficient manufacturing at scale).
There were articles posted on HN hyping exactly that, with comments debating whether 3D-printing would eventually replace conventional manufacturing at scale, and how people would no longer shop at stores like Walmart for their cheap products.
Maybe it's replacing the simplest forms of backend web development and the least capable frontend devs. If your job was building with DaisyUI/Tailwind, you're probably replaceable by this tech. People building their first SaaS are amazed (it's literally heroin for non-technical idea guys). But the serious engineers I know, old heads, don't seem to be that impressed, and neither am I.
I don't see it competing with anyone doing anything serious, outside of ML engineers, and let's be honest, they always sucked at writing code and hated writing it, so it's not surprising how much they sing its praises.
I agree with you. To me the maker movement has always been about people wanting to tinker and create things for themselves. If anything "vibe coding" makes the maker movement more accessible because people who couldn't (or didn't want to) code can try to have AI code the thing they're building.
And there are plenty of people in the maker movement who enjoy writing code, and will write it whether other people are vibe coding or not.
I also don't think the "maker movement" disappeared, it's just that the bar for making stuff is so much lower now that anyone and their grandmother can do it.
In the past weeks I:
- 3D printed custom cups that fit onto a pet feeder to prevent ants from getting to our cat food
- 3D printed custom mounts to mount 3W WS2812 LEDs to illuminate Chinese New Year lanterns and connected them to an ESP32 WLED box connected to home assistant
- Connected a vision language model to a security camera so it can answer questions about how many times a cat has eaten, drunk water, or used the toilet, and inform us about anything in the room that looks abnormal
- Laser-cut a custom wall fitting for a portable heat pump's input and output condenser hoses and added a condensate pump to the contraption; it saves us $200/month in heating costs
- Custom designed a retrofit for a sliding door that accepts a Nuki smart lock that wasn't designed for this type of door.
- Laser-cut a custom Valentine's Day card in the Chinese paper-cutting style, generated with many rounds of back-and-forth prompting with Gemini, then converted to SVG and cut
- My wife and I thought IKEA SKADIS pegboards would look better if they were made out of bamboo plywood, so I shoved a sheet of bamboo into my laser cutter and had it cut out a pegboard that looked much nicer, sprayed it with lacquer, then attached it to the wall with 3D printed mounting hardware. The SVG for the pegboard was generated by a script written by Cursor and took a couple of minutes.
- Having an ESP32 feed a camera image to an LLM and then do something with the result is a piece of cake. A box that "sprays water to deter the cat if the cat jumps on the kitchen counter" is a 1-hour job after you order the components from Amazon, and an LLM will build that parts list for you, too.
- Reverse engineered the firmware of a UniFi Chime to upload more chime sounds than the UI limits you to, so that I can have UniFi Protect announce if there is an intruder somewhere late at night, and where. Cursor reverse-engineered the firmware .bin for me.
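As a sketch of what the pegboard-generating script mentioned above might look like (a hypothetical reconstruction; the pitch, slot size, and margins are illustrative guesses, not IKEA's actual SKADIS dimensions):

```python
# Hypothetical sketch of a SKADIS-style pegboard SVG generator for laser
# cutting: an outer rectangle plus a grid of rounded vertical slots.
# All dimensions (in mm) are illustrative assumptions.
def pegboard_svg(cols: int = 10, rows: int = 14, pitch: float = 40.0,
                 hole_w: float = 5.0, hole_h: float = 15.0,
                 margin: float = 20.0) -> str:
    width = margin * 2 + (cols - 1) * pitch
    height = margin * 2 + (rows - 1) * pitch
    parts = [f'<svg xmlns="http://www.w3.org/2000/svg" '
             f'width="{width}mm" height="{height}mm" '
             f'viewBox="0 0 {width} {height}">']
    # Board outline (the outer cut).
    parts.append(f'<rect x="0" y="0" width="{width}" height="{height}" '
                 'fill="none" stroke="black"/>')
    # One rounded slot per grid position, centered on the grid point.
    for r in range(rows):
        for c in range(cols):
            x = margin + c * pitch - hole_w / 2
            y = margin + r * pitch - hole_h / 2
            parts.append(f'<rect x="{x}" y="{y}" width="{hole_w}" '
                         f'height="{hole_h}" rx="{hole_w / 2}" '
                         'fill="none" stroke="black"/>')
    parts.append('</svg>')
    return "\n".join(parts)

if __name__ == "__main__":
    with open("pegboard.svg", "w") as f:
        f.write(pegboard_svg())
```

The resulting file can be loaded straight into most laser-cutter software; tweak `pitch` and `hole_h` to match whatever accessories the board needs to accept.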
A lot of this could have been worth sharing 10 years ago. Now all of this is just "normal life in 2026" so you don't hear about it much. I'm used to thinking of something and then physically having it <12 hours later. It's no longer an undertaking. It's not news anymore.
The bar for "news-worthiness" for makers these days? This guy built an entire city for his cats, with a full functional subway system and everything ...
3D printing is giving my company many benefits over injection molding. We have 6 variations of the case for our device and we're always coming up with improvements and new functionality, and new products. I only see us expanding our in-house resin print farm instead of building out injection molds. No, we aren't selling millions of units, but injection molding is just too expensive for anything but a 1-size-fits-all solution.
> BTW, I think a lot of people were/are greatly overestimating the value of coding to business success.
I think I have a conversation at least weekly where I have to explain to someone that using an LLM to convert COBOL to Java (or whatever) will not actually save much effort. I don’t know how many ways to explain that translating the literal instructions from one language to another is not actually that hard for someone fluent in both; the actual bottleneck is in understanding what sort of business logic the COBOL has embedded in it and all the foundational rearchitecting that will involve.
> Vibe coding, on the other hand, is competing against hand coding, and for many use cases is considerably more efficient. It’s clearly replacing a lot of hand coding.
Vibe coding, like 3D printing, is great for little small batch runs of boutique code. Small toy apps and throwaway projects.
Vibe coding is shit for doing actual maintenance on important projects that actually run the world. It is shit for creating anything that is of robust long lasting quality. It is shit for creating code you can trust. It is shit for creating code that won’t suddenly reveal flaws and inefficiencies at scale and require an entire proper rewrite just when your product is finally gaining traction. Vibe coding has not been around long enough to make these problems obvious yet, but the time is coming. A few high profile failures will hit the media and then suddenly everyone starts coming out of the woodwork with their own vibe coding horror stories and thus the AI bubble collapse begins.
What people will eventually realize is that if you’re building a serious business with software that must run reliably for years, being able to vibe code something in a week instead of carefully building something out over a few months really doesn’t give you any advantage. Being unable to vibe code your way out of non-trivial maintenance issues is a death sentence for your business; you will need people who know what they are doing eventually.
Relying on vibe coding causes you to have a talent debt, and though you won’t feel it when you’re first rolling out a business, eventually, the bill comes due…
One of the odd things people do with tech is take someone else's random projections at face value.
What does it mean to say "we were promised flying cars", or "every city would have micro-factories, that 3D printing would decentralize production"?
The people creating these narratives may a) have truly believed it and tried to make it a reality, but failed; b) never have believed it at all, but failed anyway; or c) be somewhere else on this quadrant of belief vs. actuality.
Why not just treat it as "a prediction that went wrong"? I suppose it's because a narrative of promise feels like a promise, and people don't like being lied to.
It's a strange narrative maneuver we keep doing with tech, which is more future-facing than most fields.
We do have flying cars, and we do have printers that print other printers, but both were some combination of really expensive and poor quality. Technically speaking, if you take it that most cities have 3D printers, then most cities do have micro-factories - but that says nothing about general feasibility...
Technology requires infrastructure and resources, and our infrastructure is strained and our resources are even more so... Until the costs become pocket change for the average person, technology will just remain generally unavailable.
> What does it mean to say "we were promised flying cars"...
I don't know about the other things you mentioned, but I think you have this one in the wrong category. "We were promised flying cars" is one half of a construction contrasting utopian promises/hype with dystopian (or at least underwhelming) outcomes. I think the most common version is:
> They promised us flying cars, instead we got 140 characters.
Translation: tech promised awesome things that would make our lives better, but what we actually got was stuff like the toxicity of social media.
IMHO, this insight is one of the reasons there's so much negativity around AI. People have been around the block enough to have good reason to question tech hype, and they're expecting the next thing to turn out as badly as social media did.
I've got to be honest: my complete skepticism that the maker movement is somehow past tense makes it extremely difficult for me to take this tenuous comparison to LLM coding particularly seriously.
The author talks about lowered barriers to prototyping as though they represent a failure state; that's absurd, and it has absolutely nothing to do with whether most people have membership-based maker spaces nearby.
Meanwhile, we're in a golden era of tool access. It's now possible for people to buy affordable CNCs, laser cutters and UV printers. I have a freaking pick and place in my home.
Also, you can have custom PCBs shipped to you in a week for about $10.
Having LLMs available at the same time as all of these tools are rapidly evolving means that anyone with an idea can prototype just about anything. In my worldview, anyone not excited about this either has no original ideas or a cynical agenda.
I'd say more but I have to get back to work on my maker projects.
The maker movement probably is a failure if you're an economist. Nothing could be worse for the economy than people buying fewer domestic products in favor of making their own stuff, and sending more of their paychecks to China for more cheap circuit boards, machines, and components.
And of course I'm not going to be setting up a "mini factory", I don't feel like it and I already got the one thing I made that I wanted, which almost certainly would never have been profitable for anyone to make at quantity in the first place. In the unlikely event someone does want one, they can just make their own following the same process as above.
I don't love that my career seems to be evaporating and perhaps no one will have a use for me soon, but, LLMs have made making even easier and more fun than ever. My sense of what I can take on has been amplified so much, it feels like a super power. Reverse engineering things used to be intimidating to take on, but now it feels like a couple afternoons of exploring with Claude. Understanding the scope of ideas is way more accessible, and often more constrained than it used to be.
I learn so much more than I used to, I get more done than I used to. I love it.
I am quite tired of skeptics and naysayers telling me that I'm only imagining learning, only imagining finishing projects, only imagining having more time for the fun parts.
I mean, I've expanded what I use my programming skills for dramatically in the last year. I've suddenly got several personal macOS apps I'm building and maintaining for myself, for my small business, or for fun. That never happened before (I've generally been full stack on the web). I've built far more useful and complex firmware and finally begun designing and building custom circuits. I rarely had the time for that before.
I'm able to take on way more interesting and challenging projects in my business because the logistics and legwork required for implementing them are greatly simplified by being able to actually implement the specs for the software I've had in my head for years.
This is a stark contrast to pre-2024 or so. I've always been an explorer, a fairly prolific software developer I guess, but now it's so much more than that. And it's leaking into hardware and other physical ventures. I'm typically limited by funds more than anything.
Are some of my projects lower quality than if they were done by someone more qualified? Yeah, totally. Though I think I still do a solid job. I don't care though; these things have opened my eyes and mind so much and made creating so much more inviting and exciting.
It still burns with the 'career careening into the dirt' vibes I get most days, but what the hell, it was good while it lasted. If I was smart enough to make the computer do the thing, maybe I'll be smart enough to do something else that's useful. And I've got some years left before it's truly end of the line, I think.
I sympathize with your feelings of existential dread, without further qualifiers. Big hug!
I personally think that as amazing as LLMs for coding are, LLMs for coding and electronics is like activating the powered robotic exoskeleton for your mind.
As for whether your projects are lower quality (the kids would say "mid"; catch up!) or not... they are higher quality than the ones you made last year, and I heard that the actual best way to learn is by doing. Ideally also asking a ton of questions at each step.
Some nights I lay in bed just relentlessly interrogating ChatGPT in audio mode about transformers and op-amps.
A huge number of YouTube channel creators I subscribe to have received Carvera CNC mills over the past few months. Anecdotal but striking. It seems like everyone who left LTT to go solo has a CNC now, even if they usually review graphics cards.
Opulo, the makers of the Lumen PnP, seemingly cannot achieve less than a 1-month lead time no matter how many people they hire or how much factory space they acquire. And that's a pretty niche device, relatively speaking.
No, you did a great job arguing my point: it's a market in the tens of billions of dollars and hundreds of thousands of units, so somewhere between saltwater aquaria and golf.
I think you’ve done a good job disproving your point that no one is buying them. Golf is a very popular hobby.
The point of this thread: the maker movement isn’t dead simply because most people don’t care about CNC machines. In reality, there are loads of makers & people who love tinkering and building things for themselves, it’s easier than ever to do so (and to build very non-trivial products for yourself), and more and more people are able to get into this.
If the maker movement was actually dead, we wouldn’t be seeing an explosion of powerful, easy-to-use manufacturing tools available at lower & lower price points.
I guess your point is that it’s not exactly mainstream, not that no one is buying them. Which is true, but who cares.
Also, there is a massive tropical aquarium store in a random strip mall a few blocks from my house. It takes up like three units.
I don't know why I let myself get triggered by some rando who has convinced himself that just because he doesn't do something it must not be popular, but here I am.
Of the people I know, about 80% of households have a 3D printer. Now, I'm sure I don't know that many people, but that's a lot more 3D printers than they had a decade ago.
I would put "you can make anything" -> "I will print guns!" strongly in the "no creative ideas" category.
Honestly, it's baffling that anyone would put real effort into printing guns when it seems as though some countries *cough* make it easy to pick one up at Walmart.
> Maybe the answer is a tentative yes, given news like the recent case about guns and 3D printing.
In my observation, this news has led maker nerds to "prepper-buy" quite a lot of such machines recently (get one before they become forbidden). :-)
I disagree with your framing of cynicism as an "agenda". For the record, I agree that the maker movement hasn't actually ended, and most of your points are correct; however, the idea of LLMs teaching Electronics worries me about as much as people using LLMs to learn Chemistry.
A little while ago I had to dissuade someone from learning Chemistry via an LLM, because the advice the LLM had given them would very literally have either blown up the glassware, throwing molten chemicals all over their clothing, or killed them when they tried to taste whatever they were trying to synthesize. There was no consideration of safety protocol, PPE, proper glassware, or correctly dealing with chemical reactions, and nary a mention of a fucking fume hood.

NileRed and a few other chemistry youtubers have utterly woeful approaches to laboratory safety (NileRed specifically I have a chip on my shoulder about; I've seen him practice bad lab work on a number of occasions and violate many of the common safety practices from e.g. Vogel's), but even then they do still take precautions! Let it not be forgotten that safety practices are born through bloodshed.

Now we have a whole new wave of people who are excited to learn, and that's great, but one stray hallucination will kill them. I'm sure the LLM will be more than happy to write an "Oh I'm sorry, it's my bad that I forgot to tell you to double glove when handling organic mercury!" but by then it is too late.
The idea of someone learning, say, House DIY from an LLM and then sawing through the joists or rewiring their electronics is utterly terrifying to me, quite frankly. Likewise, the idea of someone following an LLM's instructions and then blowing themselves up in a shower of capacitors or chemical glassware is also utterly terrifying to me.
Yes, you could do all these things before. But at least the most commonly available learning materials were trustworthy and written by experts!
I guess we have to agree to disagree, because I am not particularly interested in chemistry and ChatGPT has been extraordinarily helpful in demystifying electronics. Having 24/7 access to a patient person who can unpack the difference between TTL and CMOS logic or when you'd choose a buffer instead of a Schmitt trigger without belittling you for not already knowing what they know is awesome and not going to get anyone even slightly killed.
Reasonable question and hopefully an interesting answer...
The simple lack of reasons to use TTL logic in 2026 was exactly why I didn't know what the deal was. It'd never come up, but I'd see it referenced.
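What I learned, roughly: the families disagree on what voltage counts as a logic level. A toy sketch to show it (the threshold numbers are typical datasheet values for the named families; the code itself is my own illustration, not anything from the chat):

```python
# Toy illustration of why TTL and 5 V CMOS inputs read the same
# voltage differently. Thresholds are typical datasheet values.
THRESHOLDS = {
    # family: (V_IL max, V_IH min) at a 5 V supply
    "TTL (74LS)":  (0.8, 2.0),
    "CMOS (74HC)": (1.5, 3.5),  # roughly 0.3*VDD and 0.7*VDD
}

def classify(volts, family):
    """Classify an input voltage as a logic level for a given family."""
    vil, vih = THRESHOLDS[family]
    if volts <= vil:
        return "LOW"
    if volts >= vih:
        return "HIGH"
    return "UNDEFINED"

# 2.4 V is a valid HIGH for a TTL input but undefined for 74HC CMOS,
# which is the classic gotcha when mixing the two families.
print(classify(2.4, "TTL (74LS)"))   # HIGH
print(classify(2.4, "CMOS (74HC)"))  # UNDEFINED
```

That undefined middle band is why a TTL output driving a 74HC input often needs a pull-up or a level-shifting part in between.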
I'm self-taught and in defiance of the people who insist that LLMs turn our brains to passive mush, the more things I learn the more things I have to be curious about.
LLMs remove the gatekeeping around asking "simple" questions that tend to make EEs roll their eyes. I didn't know, so I asked and now I know!
I'm actually pretty thrilled that you asked, because I think that this chat is an extremely solid example of LLM usage in the EE domain, and I'm happy to share.
I definitely led some questions to try and squeeze new-to-me perspectives out of it; for example, there could be tricks that make the active high variant more useful in some scenarios.
I think it does a good job of surfacing adjacent questions you might not realize you were eager to ask, as well as showing how it's able to critically evaluate real-world part suitability. I do find that ChatGPT in particular does better with a screengrab of the most likely parts vs a URL to the search engine.
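The buffer-vs-Schmitt-trigger question happens to be easy to demo in software. A toy simulation (made-up thresholds and noise levels, nothing from the actual chat): a plain comparator chatters when a noisy signal crawls through its single threshold, while hysteresis gives one clean edge:

```python
import random

random.seed(1)

# A slowly rising input with noise riding on it
signal = [t / 100 + random.uniform(-0.05, 0.05) for t in range(100)]

def comparator(samples, threshold=0.5):
    # Single threshold: noise near 0.5 can flip the output repeatedly
    return [1 if v > threshold else 0 for v in samples]

def schmitt(samples, low=0.4, high=0.6):
    # Hysteresis: output only changes after crossing the *far* threshold
    out, state = [], 0
    for v in samples:
        if state == 0 and v > high:
            state = 1
        elif state == 1 and v < low:
            state = 0
        out.append(state)
    return out

def transitions(bits):
    return sum(1 for a, b in zip(bits, bits[1:]) if a != b)

print("comparator edges:", transitions(comparator(signal)))  # typically several
print("schmitt edges:", transitions(schmitt(signal)))        # one clean edge
```

Because the noise amplitude (±0.05) is smaller than the hysteresis band (0.4 to 0.6), the Schmitt version can't be knocked back once it trips, which is the whole point of the part.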
Electronics can kill too. IIRC capacitors in CRTs are particularly deadly. Though I suppose someone using LLMs only as a first step, much like Wikipedia, is probably at much less risk than someone using it as their only source.
Yeah, okay but... look, I concede that someone who shouldn't be doing anything except watching passive entertainment could absolutely take insane advice from an LLM (or a sociopathic human) and seriously hurt themselves.
But raw dogging capacitors in CRTs is such an overt straw man in this conversation. People who are cleaning bathrooms for the first time can hopefully be trusted not to drink the bleach, right?
If someone licks a running table saw because an LLM said it would be fine, we're talking about entirely different problems.
I'm glad that you brought that up, because I actually hovered on my response precisely because of those words. Specifically, I wondered if I could reliably count on someone showing up to say something patronizing and unnecessary.
This particular combination of snark, faux-concern and pedantry doesn't help the point you're trying to make about my loving AI wife.
It was not my intention to be patronising nor snarky, nor was I the least bit concerned for you (faux or otherwise). Though on a reread I do understand how my reply could be read as unkind. I regret that and apologise for it. It was not my intention, but it was my mistake. I should've made it shorter:
> It’s not a person, it’s a tool. There’s no reason to anthropomorphise it.
Without wanting to be argumentative, I would push back and say that I really did stop to consider my implied assignment of personhood before committing to it. I went with it because it reflects both the role it plays - you'll be relieved that I stopped short of deploying "mentor" - and the fact that English is highly adaptable and already the linguistic tug to use They feels very comfortable in relation to LLMs. Buckle up!
> you'll be relieved that I stopped short of deploying "mentor"
Funnily enough, I think that might’ve been better. I don’t think a mentor has to necessarily be human; one can learn from nature or pets. Or even a machine: Stockfish can teach you to play better chess and give context as to why you fumbled and how to do better next time.
I just don’t think LLMs are people and that we should avoid anthropomorphising them (for a whole plethora of reasons which are another discussion). I’m not even saying I think there could never be a robot which is a person. Just not what we have now.
> The idea of someone learning, say, House DIY from an LLM and then sawing through the joists or rewiring their electronics is utterly terrifying to me, quite frankly.
Can't wait for the load-bearing drywall recommendations coming from LLMs that were trained on years of Groverhaus content.
> When you spend two years making useless Arduino projects, you develop instincts about electronics, materials, and design that you can’t get from a tutorial. When vibe coding goes straight to production, you lose that developmental space. The tool is powerful enough to produce real output before the person using it has developed real judgment.
The crux of the problem. The only way to truly know is to get your hands dirty. There are no shortcuts, only future liabilities.
Then again, sophisticated manufactured electronics had long been cheap and available by the time somebody thought to create Arduino as a platform in the first place.
And even today, people hack on assembly and ancient mainframe languages and demoscene demos and Atari ROMs and the like (mainly for fun but sometimes with the explicit intention of developing that flavor of judgment).
I predict with high confidence that not even Claude will stop tinkerers from tinkering.
All of our technical wizardry will become anachronistic eventually. Here I stand, Ozymandias, king of motorcycle repair, 16-bit assembly, and radio antennae bent by hand…
There are corners of the industry where people still write ASM by hand when necessary, but for the vast, vast majority it's neither necessary (because compilers are great) nor worthwhile (because it's so time consuming).
Most code is written in high-level, interpreted languages with no particular attention paid to its performance characteristics. Despite the frustration of those of us who know better, businesses and users seem to choose velocity over quality pretty consistently.
LLM output is already good enough to produce working software that meets the stated requirements. The tooling used to work with them is improving rapidly. I think we're heading towards a world where actually inspecting and understanding the code is unusual (like looking at JVM/Python bytecode is today).
Future liabilities? Not any more than we're currently producing, but produced faster.
Compilers take a formal language and translate it to another formal language. In most cases there is no ambiguity, it’s deterministic, and most importantly it’s not chaotic.
That is, changing one word in the source code doesn’t tend to produce a vastly different output, or changes to completely unrelated code.
Because the LLM is working from informal language, it is by necessity making thousands of small (and not so small) decisions about how to translate the prompt into code. There are far more decisions here than can reasonably be fixed in tests/specs. So any change to the prompt/spec is likely to result in unintended changes to observable behavior that users will notice and be confused by.
You’re right that programmers regularly churn out unoptimized code. But that’s very different from churning out a bubbling morass where every little thing that isn’t bolted down is constantly changing.
The ambiguity in translation from prompt to code means that the code is still the spec and needs to be understood. Combine that with prompt instability and we’ll be stuck understanding code for the foreseeable future.
Humans are also non-deterministic, though. Why does replacing one non-deterministic actor with another matter here?
I'm not particularly swayed by arguments of consciousness, whether AI is currently capable of "thinking", etc. Those may matter right now... but how long will they continue to matter for the vast majority of use cases?
Generally speaking, my feeling is that most code doesn't need to be carefully-crafted. We have error budgets for a reason, and AI is just shifting how we allocate them. It's only in certain roles where small mistakes can end your company - think hedge funds, aerospace, etc. - where there's safety in the non-determinism argument. And I say this as someone who is not in one of those roles. I don't think my job is safe for more than a couple of years at this point.
It has nothing to do with whether small mistakes are allowable or not. It’s about customers needing a consistent product.
The in-code tests and the expectations/assumptions about the product that your users have are wildly different. If you allow agents to make changes restricted only by those tests, they’re going to constantly make changes that break customer workflows and cause noticeable jank.
Right now agents do this at a rate far higher than humans. This is empirically demonstrable by the fact that an agent requires tests to keep from spinning out of control when writing more than a few thousand lines, and a human does not. A human is capable of writing tens of thousands of lines with no tests, using only reason and judgement. An agent is not.
They clearly lack the full capability of human reason, judgment, taste, and agency.
My suspicion is that something close enough to AGI that it can essentially do all white collar jobs is required to solve this.
> Generally speaking, my feeling is that most code doesn't need to be carefully-crafted. We have error budgets for a reason, and AI is just shifting how we allocate them. It's only in certain roles where small mistakes can end your company - think hedge funds, aerospace, etc. - where there's safety in the non-determinism argument.
That's a bit shortsighted. There have been cries of software becoming needlessly bloated and inefficient for as long as computers have existed (Wirth, of course, but countless others too). Do you visit any gamer communities? They are constantly blaming careless waste of resources and lack of optimization for AAA games performing badly on even state-of-the-art hardware, or constantly requiring you to upgrade your gaming rig.
I don't think the only scenario is boring CRUD or line of business software, where indeed performance often doesn't matter, and most of it can now be written by an AI.
> adversarial AI reviewers, runtime tests (also by AI), or something else?
And spec management, change previews, feedback capture at runtime, skill libraries, project scaffolding, task scoping analysis, etc.
Right now this stuff is all rudimentary, DIY, or non-existent. As the more effective ways to use LLMs become clearer, I expect we'll see far more polished, tightly-integrated tooling built to use LLMs in those ways.
Agents require tests to keep from spinning out of control when writing more than a few thousand lines, but we know that tests are wildly insufficient to describe the state of the actual code.
You are essentially saying that we should develop other methods of capturing the state of the program to prevent unintended changes.
However, there’s no reason to believe that these other systems will be any easier to reason about than the code itself. If we had other methods of ensuring that observable behavior doesn’t change, and they were substantially easier than reasoning about the code directly, they would be very useful for human developers as well.
The fact that we’ve not developed something like this in 75 years of writing programs says it’s probably not as easy as you’re making it out to be.
"users seem to choose velocity over quality pretty consistently"
When do they have a real choice, without vendor lock-in or other pressure?
Windows 11 is 4 years old but until a few months ago barely managed to overtake Windows 10. Despite upgrades that were only "by choice" in the most user-hostile sense imaginable (those dark patterns were so misleading I know multiple people who didn't notice that they "agreed" to it, and as it pops up repeatedly, it only takes a single wrong click to mess up). It doesn't look like people are very excited about the "velocity".
In the gaming industry AAA titles being thrown on the market in an unfinished state tends to also not go over well with the users, but there they have more power to make a choice as the market is huge and games aren't necessary tools, and such games rarely recover after a failed launch.
If you didn't catch it, this is a joke calling out the comment above it for using a couple obvious LLM-isms. The comment above may have been a joke, too. It's hard to tell any more.
"You're in a desert, walking along in the sand when all of a sudden you look down and see a tortoise. It's crawling toward you. You reach down and flip the tortoise over on its back. The tortoise lays on its back, its belly baking in the hot sun, beating its legs trying to turn itself over. But it can't. Not without your help. But you're not helping. Why is that?"
Tortoises have been observed righting other tortoises that have become stuck.
https://www.youtube.com/shorts/DZ57D608fiM (two tortoises helping a third)
this has a terrible voiceover but you get the idea
But crucially they used "--" and not "—" which means they're safe. Unless it's learning. I may still be peeved that my beloved em dash has been tainted. :(
Published by Anthropic. It's a bit like a "study" by Coca-Cola "proving" that one can lose weight by just drinking their product rather than exercising. Sure, it's not impossible, but that's definitely not the normal usage.
The article addresses this by making the point that prototypes != production. Arduino is great for prototyping (authors opinion; I have limited experience) but not for production-level manufacturing.
LLMs are effectively (from this article's pov) the "Arduino of coding" but due to their nature, are being misunderstood/misrepresented as production-grade code printers when really they're just glorified MVP factories.
They don't have to be used this way (I use LLMs daily to generate a ton of code, but I do it as a guided, not autonomous process which yields wildly different results than a "vibed" approach), but they are because that's the extent of most people's ability (or desire) to understand them/their role/their future beyond the consensus and hype.
I think even calling them MVP factories is a bit much. They're demo factories. Minimum Viable Products shouldn't have glaring security vulnerabilities and blatant inefficiency; they just might be missing nice-to-have features.
I might be tilting at a strawman of your definition of vibe coding - apologies in advance if so.
But LLM-aided development is helping me get my hands dirty.
Last weekend, I encountered a bug in my Minecraft server. I run a small modded server for my kids and me to play on, and a contraption I was designing was doing something odd.
I pulled down the mod's codebase and the fabric-api codebase (one of the big modding APIs), and within an hour or so I had diagnosed the bug and fixed it. Claude was essential in making this possible. Could I have potentially found the bug myself and fixed it? Almost certainly. Would I have bothered? Of course not. I'd have stuck a hopper between the mod block and the chest, just hacked around it, and kept playing.
But, in the process of making this fix, and submitting the PR to fabric, I learned things that might make the next diagnosis or tweak that much easier.
Of course it took human judgment to find the bug, characterize it, test it in-game. And look! My first commit (basically fully written by Claude) took the wrong approach! [1]
Through the review process I learned that calling `toStack` wasn't the right approach, and that we should just add a `getMaxStackSize` to `ItemVariantImpl`. I got to read more of the codebase, I took the feedback on board, made a better commit (again, with Claude), and got the PR approved. [2]
They just merged the commit yesterday. Code that I wrote (or asked to have written, if we want to be picky) will end up on thousands of machines. Users will not encounter this issue. The Fabric team got a free bugfix. I learned things.
Now, again - is this a strawman of your point? Probably a little. It's not "vibe coding going straight to production." Review and discernment intervened to polish the commit, expertise of the Fabric devs was needed. Sending the original commit straight to "production" would have been less than ideal. (arguably better than leaving the bug unfixed, though!)
But having an LLM help doesn't have to mean that less understanding and instinct is built up. For this case, and for many other small things I've done, it just removed friction and schlep work that would otherwise have kept me from doing something useful.
The enormous difference between vibe-coding and 3D printing is that vibe-coding is improving exponentially, while 3D printing is improving slowly and linearly. Very little that we say about vibe-coding today is likely to be valid even six months from now, whereas a 3D printer sold 5 years from now will probably be very similar to one sold today.
Did the maker movement end? I don't think so; it's just as niche as it's always been. We have plenty of maker-type posts on here. I don't think “vibe” coding is going away, especially with so many open source models you can run on a simple Mac.
It didn't end, it just failed to commercialize, which IMO is a better outcome anyway. Many more communities today have something akin to a maker space than before the movement. It succeeded to a point that it became mundane.
The commercialization is still ongoing, though the market is small enough it's been a struggle for any company pushing towards proprietary solutions and ecosystems to capture the whole market.
Which as you say, is a good thing. I still fear what will happen if 3D printing commoditizes into a similar structure as 2D printing.
I think it stunted out. Outside of only the densest areas, maker spaces never really formed. In the majority of the country, the stuff remains accessible as a hobby only to the wealthy who can afford all these tools and machines. I'm a nearly 40-minute drive from the closest maker space, and I'm in one of the 10 most densely populated cities in the country. In the last city I lived in, the maker space became so popular that it raised its fees so high it's now inaccessible to most people.
I'm not trying to defend maker spaces, though they make more sense to me in a college setting. My college had (has?) one, and one of our professors really made sure to always use it, and to have students use it and learn. Immense value there; even if only a dozen or fewer use it every year, it's still an avenue for inspiration.
I'm a member of a local maker space that has been around a while and it has changed so much over the years in response to what people are asking for and what gets used.
I don't know if it's a local trend or what but the last 5-7 years the most in demand thing by far are sewing machines, knitting machines, and sergers. They ended up completely scrapping the woodworking area to fit a digital jacquard loom and that thing is booked around the clock, you have to plan 4-5 weeks in advance to get a session. Jeweler's bench is similarly busy.
In contrast the soldering and electronics workstations get regular use but I can usually just walk in and get a spot without scheduling or waiting much, which is almost never the case with the fabric stuff.
I saw that happen in a decent sized college town near where I live. They had a maker space spring up when 3D printing was the hottest thing. It didn't last very long though. I'm a bit surprised that 3D printer machines haven't become cheaper. Like solid machines sub-$100. 3D printer pens are the only thing that came close to doing that.
They're already very cheap, almost free when you buy used. I got one for $50 that makes pretty good prints. For $300 you can buy an Elegoo Centauri Carbon that is a really high end consumer printer. Don't forget that we're talking about CNC machine tools with precision movements here. An entry level manual milling machine from Precision Matthews in Taiwan will cost you $250 shipping alone. Even good linear rails by themselves are more than $300 on ebay. A lot of innovation has been happening in the 3D printer space to make all these machine components cheaper which has also benefited other applications like hobbyist milling.
Nowadays, we are so used to all the injection molded plastic crap, and also so much poorer, that we can't understand why precisely manufactured products made from solid metal or wood are so expensive.
You can get 3D printers from BestBuy(!) for $200 retail. At that point, the cost of the filament is going to quickly exceed the cost of the machine.
At the $200 price point, your Bill of Materials is roughly $65 (about 1/3 of the retail cost). I challenge you to buy the raw materials of a 3D printer for under $100 let alone $65.
To me the maker movement is alive as ever. Sure the arduino has died a death, but pico, esp32 and various other microcontrollers evolved the entire system, and with wifi too.
The author of this article gives a more balanced POV than mine. I think most (maybe overwhelming majority) of publicized vibe coding projects are complete technical virtue signaling.
With agentic loops, you specify what you want and it continues to do stuff until ‘it works’. Then publish. It takes less time and attention, so projects are less thought out and less tested as well.
In the end, I think it’s not about how a project was created. But how much passion and dedication went into it. It’s just that the bar got lowered.
Every market on some level can be analogized to a common and simple market.
One of the common examples in management books is the signage industry. You can have custom logos custom molded, extruded, embossed, carved, or at least printed onto a large, professional-looking billboard or marquee size sign. You can have a video billboard. You can have a vacuum formed plastic sign rotating on top of a pole. At the end of the day, though, your barrier to entry is a teenager with a piece of posterboard and some felt-tipped markers.
What has happened is that as the coding part has become easier, the barrier to entry has lowered. There are still parts of the market for the bespoke code running in as little memory and as few CPU cycles as possible, with the QA needed for life-critical reliability. There’s business-critical code. There’s code reliable enough for amusement. But the bottom of the market keeps moving lower. As that happens, people with less skill and less dedication can make something temporary or utilitarian, but it’s not going to compete where people have the budget to do it the higher-quality way.
How much an LLM or any other sort of agent helps at the higher ends of the market is the only open question. The bottom of the market will almost certainly be coded with very little skilled human input.
There's definitely a trend towards flashy projects prioritizing style over substance, which can overshadow more practical applications. It's easy to get caught up in the hype and overlook the real problems that need solving.
I think it's often genuine excitement to share a thing - without quite processing that anybody with the same idea can now build it (for simple- to mid-complexity projects).
The novelty of "new thing! That would have been incredibly hard a decade ago!" hasn't worn off yet.
This isn't the first time something like this has happened.
I would imagine that people had similar thoughts about the first photographs, when previously the only way to capture an image of something was via painting or woodcut.
When movies first came out they would film random stuff because it was cool to see a train moving directly at you. The novelty didn't wear off for years.
There was something someone said in a comment here, years and years ago (pre AI), which has stuck with me.
Paraphrased, "There's basically no business in the Western world that wouldn't come out ahead with a competent software engineer working for $15 an hour".
Once agents, or now claws I guess, get another year of development under them they will be everywhere. People will have the novelty of "make me a website. Make it look like this. Make it so the customer gets notifications based on X Y and Z. Use my security cam footage to track the customer's object to give them status updates." And so on.
AI may or may not push the frontier of knowledge, TBD, but what it will absolutely do is pull up the baseline floor for everybody to a higher level of technical implementation.
And the explosion in software produced with AI by lay-people will mean that those with offensive security skills, who can crack and exploit software systems, will have incredible power over others.
I believe the security vulnerability issues will be addressed by companies using cloud-based vibe-code platforms, or by an AI security auditor agent that runs through the code base and flags security issues.
Sure it is. AI software development is here. It's not good enough for everything, but it's good enough for a majority of the changes made by most software engineers.
That's now. Right now, the tooling exists so that for >80% of software devs, 80% of the code they produce could be created by AI rather than by hand.
You can always find some person saying that it'll destroy all jobs in a year, or make us all rich in a year, or whatever, but your cynicism blinds you to the actual advances being made. There is an endless supply of new goalpost positions, they will never all be met, and an endless supply of charlatans claiming unrealistic futures. Don't confuse that with "and therefore results do not exist".
No, it isn't. There is a gigantic chasm of difference between "80% of code they produce could be created by AI" and "80% of commits they produce could be created by AI".
Mixing the two up is how we get a massive company like Microsoft to continually produce such atrocious software updates that destroy hardware or cause BSODs for their flagship Operating System.
That's not replacing software development. That's dysfunction masquerading as capability.
And none of what I said is goalpost moving. They are the goalposts constantly made by the AI industry and their hype-men. The very premise of replacing a significant amount of human labor underlies the exorbitant valuation AI has been given in the market.
It appears that your understanding of AI code generation reflects the state of 1-2 years ago. In which case of course it seems like what people are describing as reality, feels 1-2 years away.
> There is a gigantic chasm of difference between "80% of code they produce could be created by AI" and "80% of commits they produce could be created by AI".
This is exactly the goalpost moving I am talking about. I said 80% of code could be AI-written, you agreed, and followed up with "oh but it doesn't matter because now we're measuring by % of commits".
> That's now. Right now, the tooling exists so that for >80% of software devs, 80% of the code they produce could be created by AI rather than by hand.
Technically 100% of the code they could produce could be created by a ton of very specific AI prompts. At that level of control it would be slower than typing the code out though.
Just throwing out random numbers like this is complete nonsense since there's about a million factors which determine the effectiveness of an LLM at generating code for a specific use case. And it also depends on what you consider producing by hand versus LLM output. Etc.
Today I fed to Opus 4.6 five screenshots with annotations from the client and told it to implement the changes. Then told it to generate real specs, which it did. I never even looked at the screenshots, I just checked and tested against the generated specs. Client was happy.
I have a similar feeling about people who upload their AI art to sites like danbooru. I guess I can understand making it for yourself, but why do you think others want to see it?
xkcd turned stick figure drawings into an art form. sometimes it is not about how something was created, but about the story being told.
some people build apps to solve a problem. why should they not share how they solved that problem?
i have written a blog post about a one line command that solves an interesting problem for me. for any experienced sysadmin that's just like a finger painting.
do we really need to argue if i should have written that post or not?
Even if status-signaling through this vector loses its lustre, AI slop (agentic or otherwise) will not, and some of that slop will take on the guise of "vibe-coding" projects.
What's new is this concept of the "maker movement" as a distinct counterculture. It's relatively easy to go buy parts and materials and make things. People 30 or 40 years ago who built stuff instead of buying it didn't really identify as anything because that was just what you did when you wanted something. Whereas nowadays you can buy pretty much anything on Amazon, even things that are fit for a very specific purpose.
For example, if you wanted a pretty dress with a specific fabric and cut, you would likely have had to sew it yourself or pay a tailor because your off-the-rack options would be limited, costly, or ill-fitting. But people just did that without fanfare and it wasn't a counterculture. Or if you wanted custom cabinets or resin-coated live-edge stair treads, etc. You'd just figure out how to make it if you wanted it. Or you could pay someone else to do it.
I think the severity of this is wildly overblown in an effort to make it fit the thesis.
Like… if the maker thing was less an insane cult that died out and more genuine excitement about things that actually did matter… well, the whole thing falls apart.
We’re just not required to accept the (false, I think) premise this depends on, even if we’re inclined to agree with what it says about vibecoding.
Yeah, I have no idea what this guy is talking about. I still get Make magazine full of people making projects every month. My youtube feed is similarly full of people making stuff and sharing it with the community.
Check out the Maker Project Lab weekly video showcasing awesome stuff from the maker community, it's inspiring and fun to see. https://www.youtube.com/@MakerProjectLab
The hype cycle of 3D printers has probably plateaued into productivity now. Certainly the Maker movement is alive and well but it's not the hot new thing like it was a decade or a dozen years ago. Makerspaces aren't sprouting like mushrooms like they were before (partly because critical mass was already reached, partly because the pandemic reduction of physicality I'd guess), you don't see gimmicky 3D-printing kiosks at the mall anymore.
For people who have been doing something for some time, it's kind of funny when their old thing becomes new. Old things suddenly become internet famous and start trending, so they become "new". Eventually, the newcomers who only came along as trend followers fall away. That leaves the OG people plus some of the newcomers who stick around. Eventually, a new generation will discover it and it becomes "new" in whatever circles they run in.
If you see it through a cynical capitalist lens, you could argue the maker movement is just an engineered market segment. How many people bought Raspberry Pis, Arduinos, and 3D printers and barely use them? Do they actually make things, or do they watch videos of influencers making things and selling them the dream (and tools)?
Making isn't dead, but the movement is. There is no longer a large pool of people who are gaining interest in it but haven't yet figured out how to get started. Now, everyone who wants to make is already doing it.
I feel like the "maker movement" was more a corporate effort to commoditize tools and supplies to sell to makers. Not to mention selling the lifestyle of "maker".
Plenty of people fall into both camps of DIY and vibe coding. Just last week I used Codex to write me a great scad file, so I now have a token generator for my multi-color 3D printer. Vibe coding can allow makers to go further, quicker.
I disagree with too much philosophizing around both Makers and vibe coding. The actual incentives are curiosity and a desire to build what one cannot buy (and using that for teaching initiative in kids) - not AGI or transforming society.
Physical making is hard: you run up against the limits of plastic or the difficulty of CNC planning for various materials, as well as the limited value for small projects; people rarely make entire projects, instead making parts. So there is an upper bound on the utility of making. (BTW, anyone have a laser welder or steel-capable CNCs they're tired of?)
Software making is what you make it, subject to the laws of complexity, and as valuable as its integration (computers, robotics). These in theory are limiting, but in practice there are effectively an infinite supply of valuable projects when the cost of production reduces. Deployments will be limited by access to customers, which is not a problem when people make software for themselves.
The maker movement evolved. It didn't disappear. Once the tools became accessible to a much wider audience, such as children, it became an integrated aspect of education. It also became a cultural tool. The author is focusing on a very narrow path to monetization and manufacturing. That wasn't the goal of the movement at all. That was how startup pitches tried to capture the movement and extract value. I see 3D printing machines that create structures out of adobe now. Huge ones. I see whole niche industries coming from laser cutters and CnC machines. People who started on Arduino boards now build music synthesizers and modular synth components. That movement continues and now offers a wide array of dividends.
Nah. The most universal rule of human nature is humans be lazy. Makers do extra effort for no real gain. Vibe coders do less effort for more gain. Vibe coding is what everyone wanted computers to be from the beginning. Tell it what to do, it does it.
Actually, the future isn't vibe coding, it's vibe agenting. GPT 5.3 is so advanced, you don't need to write a program to do something. You tell the agent what you want, and it does it for you by "using" desktop apps like a person. If it can't do it manually, it'll write a program to do it. That's where we're headed.
At the same time, the quality of all this is absolute dogshit poor, so the market for things that actually work properly is probably still there. Which CEO recently had OpenClaw delete all their mail?
It's really not bad quality. The code written by AI is pretty decent now and fixing it is easy too. There are people making poor decisions with the technology (like OpenClaw); that doesn't make the technology bad.
With AI you can build tools fast. You can then version and release those tools, and improve them, fast. Then the AI can use that version of that tool. This gives the AI a fixed set of deterministic functionality that works the same way every time.
The CSO that had all their mail deleted happened because the tools they have right now aren't very good. Whatever that mail tool was, could be easily modified to have a limiter added that stops attempts to mass-delete emails. Hell, your own email client already will prompt you to confirm if you really want to "delete all emails" - because humans are stupid, like AI, and make mistakes, like AI. They just have to build the guardrails in, rather than hoping and praying that the AI will "behave itself". If the AI is a monkey at a joystick, we still control all the machinery attached to the joystick.
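The limiter this comment describes can live in the tool layer, where it is deterministic, rather than in the model's behavior. A minimal sketch, assuming a hypothetical email-deletion tool exposed to an agent (the names, threshold, and `guarded_delete` wrapper are illustrative, not any real agent API):

```python
# Hypothetical guardrail: the agent can only reach the real delete
# function through this wrapper, so a runaway "delete everything"
# request is stopped deterministically, regardless of model behavior.

MASS_DELETE_THRESHOLD = 25  # illustrative limit


class MassDeleteBlocked(Exception):
    """Raised when a delete request looks like a mass delete."""


def guarded_delete(delete_fn, message_ids, confirmed=False):
    """Run delete_fn over message_ids unless it looks like a mass delete.

    `confirmed=True` represents an explicit human confirmation step,
    analogous to the "really delete all emails?" prompt in a mail client.
    """
    if len(message_ids) > MASS_DELETE_THRESHOLD and not confirmed:
        raise MassDeleteBlocked(
            f"Refusing to delete {len(message_ids)} emails "
            "without explicit human confirmation."
        )
    return delete_fn(message_ids)
```

The point of the design is that the joystick analogy holds: the check runs in ordinary code attached to the tool, so no amount of model error can bypass it.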
The author writes as if he didn't know 'aider' even existed. "Vibe coding skipped that phase entirely" is dead wrong. What may be different is that the cycle was incredibly short before Anthropic made it mainstream with Claude Code. Gemini CLI, definitely a Claude Code imitator, existed long before The New York Times knew what Claude Code was. Openclaw -- a decidedly different agentic AI application -- is part of another period where weirdos are playing with tools.
The failure mode split nobody's naming: Claude gets regexes right about 95% of the time, which is annoying but catchable. Gets auth logic or state management right 95% of the time and you've got silent data corruption showing up 3 months later on an edge case nobody tested.
Vibe coders treating those as the same category is what actually worries me. Even in regular software there's a feedback mechanism - unit tests go red, CI breaks. Vibe coding skips that too. You get working code that passes the happy path and nothing that tells you which 5% failure rate is the dangerous one. That judgment about problem category severity is the thing that's hard to develop without breaking things first.
I have a feeling the specific maker movement being talked about here was the one with meetups for showcasing things (fairs?) and local hackerspaces, in the age of the MakerBot as the "game changer" 3D printer. If so, that one was captured by corporations (for MakerBot, the Stratasys "takeover"). I guess AI/vibe coding was born from corporations, but with local models there is a promise of easier, more open access. I feel it's too soon to trace all the parallels. I also feel the maker movement cited existed in a better age for blogs, so lots of vibe coding may just be happening without an audience.
I don't understand this. I use agentic coding to do things more quickly. And it's not just toys. I end up with software that both works and is useful. Assuming AI models powerful enough to drive that process continue to be available, why would I stop doing it?
This isn't discussing individuals; it's discussing trends as a whole. There are still plenty of makers getting value out of 3D printers, but it's not everyone, the way we talk about everyone becoming a software developer with vibe coding.
Software is just a collection of functions. Some input returns some output.
That is what people do every day with an LLM. When they ask an LLM to do something, they are being software developers without even realizing it. What are they doing but building something with the LLM that takes digital input and returns digital output? It is software. "Summarize this email for me, GPT." That is a tool.
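The claim can be made concrete: a one-off prompt is a single call, but give it a name and it becomes a reusable function, i.e. software. A sketch under that framing, where `llm` is a stand-in for a real model call (purely a placeholder so the example is self-contained, not any real API):

```python
def llm(prompt):
    # Placeholder for a real model call. Here it just strips the
    # instruction line and truncates, to keep the sketch runnable.
    return prompt.split("\n", 1)[-1][:60]


def summarize_email(body):
    """Digital input in, digital output back: a tool, i.e. software."""
    return llm("Summarize this email in one line:\n" + body)
```

Swap the placeholder for an actual model call and `summarize_email` is exactly the kind of function-shaped software the comment describes.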
There is no getting around the fact that projects that used to take man-years are possible in an afternoon now.
And this is the worst this technology is ever going to be :)
Don't take my word for it, go try building something that you always wanted to build but did not have time for. If you do not have something like that at the back of your head, I doubt you have to be concerned about this topic.
It implements the (pretty large) subset of the tool I personally need.
But everyone needs a different subset, and maintaining a coherent codebase that supports everyone's need is difficult.
What could put Adobe and others out of business is everyone reaching for Claude code, cursor or codex the next time they need complex software tailored to their use case instead of the one size fits all software that is commercially available.
Ditto. I have done 6 projects over the last 12 months, and wrote up 3 of them on my web site, I also usually post a link either here, or hackaday, or the other maker sites, most of my work these days is repurposing broken commercial or consumer electronics by replacing the PCB's to give things a second life (eg <https://rodyne.com/?p=3380>). I've been making things since 1981, vibe coding just makes it easier for me to work with more complicated stuff.
To be clear I’m not sure what I’m doing is vibe coding because I write some of the code and read/understand what the LLM writes.
I think I’m learning less (about the code) but making more. Maybe that’s okay? There are other things to learn about. My code has users, it processes money. I user test, I iterate, I see what works and what they need.
Yeah, the term "vibe coding" is really overloaded these days. I, too, make detailed plans for the LLM, but that's just what works for me, I don't care enough to give it an exact name. I'm having fun, and that's what counts.
Usually people would try to become rich so they could pay others to make stuff for them, now you can just spend a moderate amount of money having an LLM make it for you.
Ok, i just generally disagree with the premise. Why does it have to be "100% vibe coded" or "0% vibe coded"? There is a very happy medium that is getting ignored here. As a coder with various language experiences, i can just get like a good kick and a template with Claude and continue in any language i want and have the LLM do the redundant parts. As someone with some soldering experience, i could have an LLM cook up and explain a circuit that might have taken me months trying to mangle myself. I think LLMs empower creativity more than ever, and creative people can have a wonderful time with LLMs softening the initial headbanging and tedious redundancies of any project.
Never has it been more exciting to be a builder (software)! So much momentum and so little getting blocked. I am learning faster than ever even with LLMs doing so much of the heavy lifting. It is so fast to iterate and just MAKE STUFF!!!
> The central promise—that distributed digital fabrication would bring manufacturing back to America, that every city would have micro-factories, that 3D printing would decentralize production—simply didn’t materialize. What happened instead follows a pattern that Joel Spolsky described years ago in his essay on commoditizing your complement: cheap 3D printers and Arduinos made prototyping nearly free, which was genuinely useful. But the deep, compounding knowledge of how to actually manufacture things at scale continued to accumulate in industrial bases like Shenzhen. Prototyping got democratized. The cheap tools commodified one layer of the stack and made the layer beneath it more valuable by comparison.
> You can watch something structurally similar happening with vibe coding right now. People are rapidly prototyping tools that threaten to displace entire SaaS business models. But the value generated by all that rapid iteration and prototyping flows upward. It accumulates at the model layer, in the training data, in the infrastructure. The vibe coders themselves risk becoming interchangeable, each one spinning up impressive demos without accumulating durable value of their own. The pattern rhymes: cheap tools democratize one layer, and the layer beneath captures the surplus.
I agree the central promise of 3D printers was they would get cheaper, better, and more like industrial grade and we would end up with this thing that could build replacement parts for anything in our home.
Instead what we were left with was an endless hunt for 'models', and no companies publishing their specs. Everything had to be done custom, and at best some niche manufacturing for weird side quests like adds ons for OneWheels, or cases for raspberry pis.
The closest thing to practical I have 3D printed is a wedge to better aim my google doorbell. I used to make some beautiful planters. I certainly am not 3D printing a droid, or a dishwasher impeller, or a fan blade for my 30 year old fridge.
So yes, while Claude code is fun, and you can build neat prototypes, it takes a lot of work to build a full product and then maintain it, scale it, deploy it. That takes persistent joy in what you're doing because you're not necessarily claude coding everything.
I've had plenty of use for mine, but I wish I had a library of mechanisms that work well that I could put together to build easily.
Learning modelling is a huge time sink: making threaded parts, or anything modular so you don't have to re-print everything for changes. It's great, but the printing is the easy part.
I don't understand the fascination and focus on Vibe Coding.
Sure, you can do that, it's an option, but no serious engineering effort is being left entirely up to the AI.
Vibe coding is essentially the Jackson Pollock approach to software building. Throw a bunch of paint down, with very little control, and look, we have something novel.
It doesn't mean you're going to replace all the ways of making art with paint throwing.
I'd love to start seeing more discussions about alternative approaches to working with AI. The recent Vinext article was great https://blog.cloudflare.com/vinext/. This seems to be "the way" for working with AI in a high stakes production environment, but what other ways are there.
I fear the focus on vibe coding is diluting and taking attention away from far better alternatives. Maybe because the narrative around those isn't quite so dramatic?
The maker movement is not dead, but it's a far more niche audience. Don't get me wrong: get a 3D printer and an Arduino (or Arduino-like equivalent), endure a week of suffering, and you are hooked for life. That was my own experience, and that of anyone I know who has ever gone down that road. ~~Vibe~~ Slop coding won't die either, but a lot of people will get a cold shower sooner or later: some already have. All AI slop is Russian roulette where the players may not even know they are playing and the gun is a backwards revolver. I can't say whether slop coding will professionally die before or after the burst of the AI bubble, but everyone is starting to realize that slop is unmaintainable, inefficient, and full of bugs once you factor in all the edge cases no slop machine will ever cover. AI can exist in non-professional spaces and hobby projects, though I'd argue it may be equally dangerous for the people who use it and those around them: you are only one firewall-cmd away from leaking all your personal data.
As for the parallels with the maker movement, here's one example: drones are one of my hobbies. I love drones and I've built countless FPV ones. For anyone who hasn't done that, the main thing to know is that no two self-built drones are the same: custom 3D-printed parts, tweaks, tons of fiddling about. The difference is that while I am self-taught when it comes to drones, I have some decent knowledge of physics; I understand the implications of building a drone and what could go wrong. You won't see me flying any of my drones in the city; you may find me in some remote, secluded area, sure. The point is I am taking precautions to make sure that when I eventually crash my drone (not IF but WHEN), it will be in a tree 10km from anything that breathes. Slop code is something you live with, and there are infinite ways to f-up. And way too many people are living in denial.
The maker movement comparison is interesting but I think it breaks down in one key way: the marginal cost of software distribution is basically zero. 3D printing still requires physical materials and shipping. Vibe coded apps can reach users instantly if there's a discovery mechanism.
The real parallel might be the early web era where anyone could make a website but finding them required Yahoo directories and later Google. Right now vibe coded apps have the same discovery problem - they exist but there's no effective way to find or evaluate them.
There was also something subtle that happened, and it seemed to happen quite rapidly, a little over a decade ago. "Maker" started being used to mean more than just 3D-printing hackers and came to refer to engineers, and then to others "making" things. But the watering-down wasn't the end of it: it became a way to praise a certain class of employee. The resentment that generated (among, say, sales, marketing, etc.) and the bizarre uses of "Maker" contributed, I believe, to its demise.
The title of the linked article is "Vibe Coding and the Maker Movement" but the title on Hacker News is "Will vibe coding end like the maker movement?" - I think the original title should be restored.
The Maker Movement didn’t die, it evolved. Look at STEAM and assistive technology for examples. The failure in monetization of Tech Shops were heartbreaking, but next-generation manufacturing techniques have changed the concept of a mass-produced “one size fits all” product.
It almost felt like a well-poisoning by those who were preaching to casual audiences that 3D printing would bring in an era of having a little factory in your garage: a set of machines that'd make anything and everything without any expertise on the user's end, replacing most overseas production.
I’m not sure I agree with any of the first few paragraphs.
I’m not remotely thinking about AGI. I dabbled in the maker movement and it just doesn’t compare in the sheer velocity we have with GenAI and mass production of code.
No. AI assisted coding ("vibe coding") will not go away, but the hype around it will as it becomes incorporated into development like any other tool. You'll be expected to use it at work (for "productivity" reasons), but if you enjoy the act of coding and problem solving, you still won't have to for personal projects.
> You’re left reaching inward for something that the process never required you to develop, and the gap between the effort you expected to invest and the effort that was actually needed starts to feel like a personal failure rather than a feature of the technology.
I wasn’t aware the maker movement ended. There are all sorts of cool things we can do with on-device ML that have major privacy and convenience benefits over Claude in the cloud. In fact with hardware improvements I think integrated intelligence will be heating up.
The comparison feels off to me. The Maker Movement was an actual movement with a shared ideology of self-transformation through building. People identified with it. Vibe coding is just a description of a practice.
The term covers a broad range of people: developers building components in languages they don't know, people trying to ship something fast and cash out, enthusiasts, and plenty of developers who are just too lazy to do their job. Any generalizations about what this "means for society" are going to be strained by definition.
The author partially senses this. He writes that vibe coding "skipped the scenius phase" but misses why. I think there was no scenius phase because there was no movement in the first place. The tool just became available to everyone at once.
Vibe coding isn't so much a movement as a big fat tool that was air dropped from space after the megacorps decided to dump billions of dollars into LLMs and LLM companies.
It's like comparing Christianity to water wheels or gay pride to the Saturn V rocket. It's just not really analogous in any way.
I do agree with the author about commoditization, however.
The most likely outcome is that software will be commoditized and software developers commoditized even harder. If we still need software engineers to prompt, you'll find plenty of people in India able to do those tasks, not necessarily with great quality until they too are replaced by better AI.
This whole situation inspired me to actually dive harder into Maker type stuff such as learning how to design PCBs, but one thing I found is that this TOO is very close to being automated by AI. To actually get hardware made, even prototyping PCBs, you NEED to go to China, and the Trump tariffs cut into the cost of doing these activities hard.
Circuit design has been on the cusp of being automated ever since there were computers. It's been over twenty years of autolayout tools and they're still not very good.
Maybe you could research how to make your own PCBs? It can be done at home with a little equipment and then you can offer it as a service to others.
The thing is, making them at home is laborious, poorly reproducible, and lower quality than getting it done at JLCPCB. In the old days, it was done with an X-Acto and rubylith. Today it requires a laser setup for about $4000, and you still need to manually apply and remove a resin layer. It's mostly not worth doing things this way, and you lose the ability to have more than 2 layers in your board.
The bigger issue with PCBs is that even with a nice prototype, actual manufacturing needs to be done in Shenzhen for any sort of cost competitiveness if you want to step outside of the hobby realm (just as the author of this essay stated).
It's really, really hard to see where the USA stands in the value chain any more once LLMs have been deployed. If all the physical manufacturing lives in China, where the manufacturing supply chain also lives, and Chinese AI companies can very easily distill US LLM models, the two remaining US advantages are the dollar's role as reserve currency (something the crypto bros and the US president are working on eroding at a record pace) and the fact that this overvalued dollar gets the smart people of China and India to emigrate to the US, which is also becoming politically less viable.
The outsource-prompting-to-India stage will almost entirely be skipped (it already has been).
Developing nations that were looking to tech to climb the economic ladder are watching that ladder be pulled up.
Most of the upside will go to the US and China. Europe is lagging shockingly on AI spend, they're extremely far behind (but with constant plan announcements). If you didn't know any better, you'd think Europe believed the year was 2010.
Hard disagree with this take. Mass adoption of any technology is almost always a good thing; the more people are looking at the same problem, the more clever/elegant/innovative solutions come out of it.
I'm also not sure "vibe coding" didn't have a phase where early adopters were mucking around. I saw early versions of GPT well before ChatGPT, and a lot of folks were using transformers for coding before Claude.
edit: I read this title wrong, thought it said "end the maker movement"
personally I enjoy creation and writing code so I'm not going to vibe code my hobby/passion project, I don't care if theoretically it'll save me x amount of time, the code is rote for me anyway but I have to be actively engaged in it to enjoy it
The "consumption" frame is more honest than "craft," agreed. But it falls into the exact same hole. Taste accumulation, attention capture, gift economy, signal fortress—they're all variations of "how do I assetize the byproducts?" The frame changed, but the question didn't: what do I get out of this?
The author already touched on a better answer. Scenius worked because of the "permission to fuck around." Nobody expected your Arduino to ship. But the conclusion hands you four value-capture strategies and quietly revokes that permission. "Play freely, but collect the exhaust" isn't permission—it's a conditional license.
I once learned songwriting from an indie musician who refused autotune and wrote by hand. He said the point of busking isn't playing because there's an audience. It's playing when nobody stops. You play anyway. That's how you find your sound.
This gets at the root of "evaluative anesthesia." It's not that our tools are too powerful. It's that we're asking "is this valuable?" at every step. A busker doesn't ask that. Taste and judgment accumulate as a residue of immersion, not deliberate capture.
What vibe coding needs isn't a smarter consumption strategy. It might just be the courage to play to an empty street.
News to me. denhac has grown from 500 members to 600 in the year since I joined. The space is constantly evolving and they'll be moving to another, larger location (again).
Participating in the maker movement achieved a few things: it signalled you had intellectual curiosity, that you were a man who could do things with his hands, and that you fixed things, rather than bought new - thereby increasing your green credentials.
These promotional articles get more refined: They start with the negatives and then refute them in the last paragraphs.
None of these sophisticated articles mention that you could already steal open source with the press of a button before LLMs. The theft has just been automated with what vibe coders think is plausible deniability.
> and it has to do with how the Maker Movement actually ended.
> The central promise—that distributed digital fabrication would bring manufacturing back to America, that every city would have micro-factories, that 3D printing would decentralize production—simply didn’t materialize.
This version of the Maker Movement only ever existed in news articles and hype bubbles.
The Maker Movement was never about building small factories and consumer 3D printing was never about manufacturing things at scale. Everyone who was into 3D printing knew that we weren't going to be 3D printing all of our plastic parts at home because the limitations of FDM printing are obvious to anyone who has used one. At the time, consumer 3D printers were rare so journalists were extrapolating from what they saw and imagined a line going up and to the right until they could produce anything you wanted in your home.
The Maker Movement where people play with Raspberry Pi, Arduino, and cheap 3D printers is possibly stronger than ever. Everything is so cheap and accessible now. 10 years ago getting a 3D printer to produce parts was a chore that required a lot of knowledge and time. Now for a couple hundred dollars anyone can have a 3D printer at home that is mostly user friendly and lets them focus on printing things.
The real version of the Maker Movement just isn't that interesting to the mainstream because, well, it's a bunch of geeks doing geeky things. There's also sadly a lot of unnecessary infighting and drama in maker-related companies, like the never-ending Arduino company drama, the recent Teensy drama (which goes back years), or the way some people treat their 3D printer supplier as their personal identity and would rather argue about it online than print.
>> The central promise—that distributed digital fabrication would bring manufacturing back to America, that every city would have micro-factories, that 3D printing would decentralize production—simply didn’t materialize.
> This version of the Maker Movement only ever existed in news articles and hype bubbles.
That version of the Maker Movement was heavily pushed by city and state governments in Massachusetts. They put money into it; foundations funded it.
It was seen as a way to give another pathway to students who weren't interested in going to college. I've seen first hand how some kids who weren't interested in school or academics really got into the Maker thing, which got them into STEM.
Some of them ended up going to college to study engineering and related fields. Some of them ended up working in related fields and started their own businesses.
As time went on, it became clear to me that the Maker Movement wasn’t going to go mainstream, although 3D printing has found another niche audience recently in the home lab space. Many home-labbers on YouTube 3D print their own cases and other parts.
There will be normies that take up vibe coding like some knit their own sweaters or grow their own food because they enjoy it.
And there will be Fortune 500 companies that will vibe code certain products.
3D printing does bear some similarity to vibe coding/LLM-generated code. I do occasionally see "product" 3D printed items but the bigger value-add for 3D printing has been rapid prototyping and then running that design through actual production testing.
An example 3D workflow:
Prototype design -> 3D print -> test/break -> production design -> real manufacturing process
The equivalent vibe code
Vibe code -> slop -> test/break -> real developers -> real development process
--
The real test for vibe coded stuff (much like 3D printed crap at craft fairs) will be if someone actually buys it. But much like those 'makers', vibe coders will have to go through the "real development process" if they want to make money at scale.
The 3D printer hype bubble wasn't as big as the current AI bubble; I'd even characterize it as enthusiasm rather than hype. However, 3D printers have come a long way: they've become commoditized and affordable. More and more people jump in all the time and the maker movement continues; the niche is growing at a steady rate. I'd be curious to see how this evolves in the next 5 to 10 years.
I run a medialab in an art university. My suggestion is that it will have the opposite effect, as I now see students who would never have dared to use code using it. I have also seen an LLM give a student electrical advice that would have reliably started a fire, but hey.
The maker movement comparison works on the surface but misses a key asymmetry: 3D printing failed partly because physical atoms still cost money to produce and ship. Code has zero marginal reproduction cost. Every vibe-coded tool that ships becomes infinitely cheap to distribute.
The more interesting question is what vibe coding actually democratizes. It's not engineering---it's implementation. The bottleneck shifts from 'can you write the code' to 'do you understand the domain well enough to specify what the code should do, and verify it's doing that correctly.'
I've watched domain experts---people with deep subject matter knowledge who previously couldn't build because they lacked CS fundamentals---suddenly able to ship working tools. Code quality is often brittle. But the problem understanding is sharp, because they're building something they actually needed.
The maker analogy would have been more accurate if 3D printers only failed when you asked them to print something you didn't fully understand. That's where vibe coding fails too.
Vibe coding is pretty much the total opposite of the maker movement. Vibe coding's appeal is in getting something for little or no effort by outsourcing the thinking bits to the cloud. The appeal of the maker movement is in getting something by building it yourself, and because you built it you understand and control how it functions.
If vibe coding ends, it will end because model collapse, diminishing returns, escalating costs as the VC money run out, etc. cause LLMs to fail to deliver the promised capacity to, per Dijkstra, "program if you cannot". There will be a culling as amateurs and dilettantes with no technical knowledge or interest lose interest in programming itself, and the field will collapse back into a niche. Amateurs and dilettantes crashed out early of the maker movement, if they got involved at all; "making" was for technically inclined people in the first place.
I totally get the point of the article but the analogy isn’t a good one. It’s got the vibe that it’s written by someone who hasn’t been following the 3D printing/maker scene in a long time, which is more popular than ever.
I realize that the wildest promises of 3D printing and maker stuff like Arduino never came to fruition, but maker spaces have matured greatly. If that is the analogy we are making, it means that vibe coding won’t necessarily reach “the masses”, but it will be popular beyond the present audience.
I have not yet tried vibe coding but it is something I look forward to trying when I get some free time (kids growing up a little).
I assume some could use it to make products for commercial sale, but when I heard of it I really just pictured it for small personal projects mainly.
I have always had an interest in electronics, but without going to college there was really no obvious path to get into creating small DIY projects. Then years back along came the Raspberry Pi. I bought one along with a big variety of different sensors and a breadboard and all the things one would need to create something. I pictured making things that would email my mom when her plants were getting dry, and many other dreams with all the sensors.
But it was still overwhelming. There's lots of knowledge you need before you even start, so it felt hard. But eventually I set off to try something, and after many hours of searching for how to code what I wanted, and essentially copying code and slightly altering it to my needs, I did finish one project. It was basic, but I was always proud of what I accomplished. I had an IR sensor that would detect if someone walked in front of it, and a power relay connected to a lamp. When motion was detected, the lamp would blink SOS in Morse code, and I'd also get an email saying motion was detected. What a feeling when I ran it and it worked on the first try.
But that took so much time searching and trying to find the code I wanted. I see vibe coding and imagine I could do the same thing in minutes versus hours. I don't think I will ever make a project that makes me money, but I do imagine that with vibe coding, some of those projects I dreamed up in my head for personal use are much closer and more attainable.
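The SOS blinker described above is a nice illustration of how little logic is actually involved once you strip out the hardware plumbing. A minimal sketch of the timing side, assuming standard Morse unit timings (dot = 1 unit, dash = 3, symbol gap = 1, letter gap = 3); the GPIO loop in the trailing comment is hypothetical and depends on your board/library:

```python
# Timing logic for an SOS blinker: build the on/off schedule for a lamp
# relay, independent of any GPIO library.

MORSE = {"S": "...", "O": "---"}  # only the letters needed here

def morse_schedule(word, unit=0.2):
    """Return a list of (lamp_on, seconds) pairs for the given word."""
    schedule = []
    for i, letter in enumerate(word):
        symbols = MORSE[letter]
        for j, symbol in enumerate(symbols):
            schedule.append((True, unit if symbol == "." else 3 * unit))
            if j < len(symbols) - 1:
                schedule.append((False, unit))    # gap between dots/dashes
        if i < len(word) - 1:
            schedule.append((False, 3 * unit))    # gap between letters
    return schedule

# Driving the relay would then be a loop over the schedule, e.g. with
# RPi.GPIO (pin number hypothetical):
#   for on, secs in morse_schedule("SOS"):
#       GPIO.output(17, on); time.sleep(secs)
```

Separating the schedule from the GPIO calls also makes the interesting part testable on a laptop, no breadboard required.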
This misses the point that AI is not just vibe coding: the same Opus 4.6 is also exceptionally good at idea generation, content generation, research, etc.
It is not just vibe coding that is being developed, but general intelligence.
I feel like every blog post like this marches up to a point and then abruptly stops before looking at the only thing that has improved working conditions in the US: organized labor movements.
The maker movement directly helped bring about AI. Likely every top OpenAI engineer did a blinky project with Arduino that helped them improve their general problem-solving skills.
The "Maker" movement and "vibe coding" have changed the way I do things. I 3D print several things a month, and now I make PC boards with KiCad, etc. It's an incremental change, but a change nonetheless.
right now I think there's just a backlog of things to build
from individual tinkerers and ideas guys cranking out all the projects they would have never subsidized, there's a lot of that
and with corporations I'm seeing there are lots of products that would have taken 8 quarters to do, all being compressed into one now. The flip side is that all 8 quarters wouldn't have been allowed to happen, as priorities would have shifted before the product or feature roadmap was ever allowed to get that far; but instead, now all of it is being built out, and other iterations and directions are being explored simultaneously
after all of this is shown not to be saving money, or creating much value because they're doing too much without market validation, then a more intelligent approach will emerge and less vibe coding will occur
A mix of perspectives in here that feel inter-related. The maker movement stateside leaned more "fun or artsy", while you could argue the real maker movement was thriving in China. Another, darker way of looking at it: if the maker movement was really believed to be a way to bring manufacturing back, it was effectively cargo-culting that by focusing only on a narrow set of building blocks. Maybe it's similar to building your own PC from parts at Fry's back in the day: that felt good... and you did feel you were really making something. But you were really doing final assembly and abstracting away the complexity of building the blocks that went into it.
Anyway I think we are seeing a scenius phase -- it's just happening everywhere all at once on a world stage. And it's exciting. As with any moment in time there's a ton of experimentation and a small number of break-out hits. Also the pace of change means there's less staying power for a break-out hit than there used to be.
But the quick break-out hit phenomenon is particularly applicable for things that are more about the attention economy and less about the boring hidden things that traditionally have been where the economy's silent toil is really centered.
All of this makes me feel the author is too close to the creative end-consumer layer e.g. "make something flashy and cool whether it's a 3d-printer in a 5th avenue dept. store window, or a new app front end" but perhaps less focused on the full depth of things that really exist around them.
This really resonates with me in that a lot of NYC's "tech" circa 2013 was 3D-printing oriented, much more so than in Silicon Valley. And I wondered why; but then I realized it reflected that tech in NYC then was more about marketing and storytelling, and less about depth...
Obviously you had the West Coast makers, you had the burners, so I don't mean to conflate all these different things. But the idea that Maker Faires were really about bringing manufacturing back... I don't know, I think it was more about the counterculture, about having fun. I think that's coming back to tech right now as well, in a sense. Even if it's also got dystopian overtones.
> The central promise—that distributed digital fabrication would bring manufacturing back to America, that every city would have micro-factories, that 3D printing would decentralize production—simply didn’t materialize.
There are plenty of products now that only exist because of what it did deliver on. Anyone who spends time in the niche communities where it is thriving can see that... On the low end, look at Apollo Automation and the story of Grimsmo Knives; at the high end, look at Hadrian Manufacturing.
Vibe coding is a terrible name, but what a skilled dev can do with a deeply integrated AI coding assistant is amazing. It changes the calculus of "Is it worth your time" (see: https://xkcd.com/1205/ ).
Is it helpful in my day to day? It sure is. Is it far more helpful in doing all the things that have been on the back burner for YEARS? My gods, yes! But none of that matches the hype that's out there around "vibe coding".
My general take on most vibe coding projects ("Hey, look, I built this over the weekend"), is general dismissiveness. Mostly because of the effort required, i.e. why should I care about something that someone did with almost zero effort, a few prompts?
If someone tells me they ran a marathon, I'm impressed because I know that took work. If someone tells me they jogged 100 meters, I don't care at all (unless they were previously crippled or morbidly obese etc.).
I think there are just a ton of non-engineers who are super hyped right now that they built something/anything, but who don't have any internal benchmark or calibration for what is actually "good" or "impressive" when it comes to software, since they never built anything before, with AI or otherwise.
Even roughly a year ago, I made a 3D shooting game over an evening using Claude and never bothered sharing it because it seemed like pure slop and far too easy to brag about. Now my bar for being "impressed" by software is incredibly high, knowing you can few shot almost anything imaginable in a few hours.
I struggle with this feeling as well. A huge part of the Maker movement was excitement around people building and, importantly, learning how to build things. Iterating and improving each time is a pretty common thread you'll see throughout the community. It's hard to have someone show you a thing they generated instead of made and to feel the same way. Yes, they played a part in that thing existing, and part of that person is reflected in the output, but I don't think most Makers would say the final output is the goal, so what's there to be excited about?
It's hard to not be dismissive or gate-keeping with this stuff, my goal isn't to discourage anyone or to fight against the lower barriers to entry, but it's simply a different thing when someone prompts a private AI model to make a thing in an hour.
Why share something that anyone can just “prompt into existence”?
Architecture wise and also just from a code quality perspective I have yet to encounter AI generated code that passes my quality bar.
Vibe coding is great for a PoC but we usually do a full rewrite until it’s production ready.
————
Might be a hot take, but I don’t think people who can’t code should ship or publish code. They should learn to do it, and AI can be a resource along the way.. but you should understand the code you “produce”. In the end it’s your code, not the AI’s.
Do people build to impress with an implementation that no one cares about really? Or to share the end product?
I think now you are freed up to make a shooter that people will actually want to play. Or at least attempt it.
We probably need to come to terms with the idea that no one cares about those details. Really, 2 years ago no one would have cared about your hand crafted 3d shooter either I think.
It doesn't matter, neither of those scenarios makes the effort impressive in this case. The vibe coded thing might even be useful - that does not make it impressive though. Effort does.
> The vibe coded thing might even be useful - that does not make it impressive though.
Then "impressive" shouldn't even be the benchmark. If someone gifted me $10K, I'm not going to care if they earned it in a competition or won it in a lottery. Value is value. I'm gratefully accepting it and not being snobby about it. I couldn't care less about how "impressive" anything is if it's useful to me.
This is what I think a lot of the people who advocate for 'AI generated images being art' don't get. There's no effort or intentionality into what's being created; it has the look and appearance of 'polished art' (that breaks down when you look closer) but behind it is nothing.
It's also why AI generated code is a nightmare to read and deal with: the intention behind the code does not exist. Code that accepts malformed input because it was a requirement two years ago, or a developer throwing in a quick hack to fix a problem — these are things you can divine and figure out from everything else.
> I think now you are freed up to make a shooter that people will actually want to play. Or at least attempt it.
Taking this to an extreme, let's say vibe coding becomes real enough, and frictionless enough, that you can prompt a first person shooter into existence in a few minutes or hours.
If/when this becomes true, nobody will want to play your shooter. You'll share your shooter with people and if they care at all about shooters, they'll just go prompt their favorite AI tool and conjure their own into existence.
Admittedly this is a bit extreme, and we aren't there yet. But I've thought about this in relation to art, and how some people now go "well, this empowers people who didn't know how to make a movie/cartoon/painting/game, it's empowering and democratizing". But in my mind, art is a form of communication between humans. Without the exchange between humans, art cannot exist. If all of us are each lost in our own AI-powered projects, and if anything can be easily conjured out of thin air, then why bother with the next person's art project (game or whatever)? I don't care about your game, let me make my own in a few minutes.
I'm thinking about potential counterpoints: ah, yes, but it's about "ideas". While we can both make our ideas reality, my ideas are more inventive, so my AI-powered projects are more appealing. I'm not convinced about this; I think slop will dominate and invade public spaces, but also... why draw the line at ideas? Why is "skill with a pencil" replaceable with AI-slop, but ideas aren't? Ideas are often overrated, what matters is execution, anyway.
Quick answer: No.
Long answer: it's the opposite. As an example, one can use Claude Code to generate, build, and debug ESP32 code for a given purpose; suddenly everyone can build smart gizmos without having to learn C/C++ and know a ton of libraries.
I have Arduino and raspberry Pi boards. I am perfectly capable of hand writing code that runs on these machines. But they are sitting in the drawer gathering dust, because I don't have a use case -- everything I could possibly do with them is either not actually useful on a daily basis, or there are much better & reliable solutions for the actual issue. I literally spent hours going through other people's projects (most of which are very trivial), and decided that I have better things to do with my time. Lots and lots of people have the same issue.
And Claude Code is not going to change a single bit of that.
So, because you don't see value in it, you assume it's the same for everyone. Got it.
Also, it's not about whether there are better or more reliable options; that's the opposite of the maker mentality: you do it because it is useful, it is fun, or just because you enjoy doing it.
Such as designing a light fixture, printing it, and illuminating it with an ESP32 and some WS2812 LEDs. Yeah, you could spend an afternoon coding color transitions. Or use Claude Code for that.
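For what it's worth, the "afternoon of color transitions" is mostly interpolation math. A minimal sketch of the frame-generation side (the WS2812/ESP32 driver part is hardware-specific and omitted; this is just the logic an LLM or a human would write either way):

```python
def lerp_color(start, end, t):
    """Blend two (r, g, b) tuples; t runs from 0.0 to 1.0."""
    return tuple(round(s + (e - s) * t) for s, e in zip(start, end))

def transition(start, end, steps):
    """Yield the frames of a fade from start to end, inclusive of both.

    Assumes steps >= 2 so the endpoints are included exactly.
    """
    for i in range(steps):
        yield lerp_color(start, end, i / (steps - 1))
```

Each yielded frame would be pushed to the LED strip once per tick; chaining several transitions gives the usual rainbow/breathing effects.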
If you are vibe coding something it's not for the experience of learning, challenging yourself, or accomplishment - it's purely about the end result, the artefact. So asking "what is the purpose of this thing" is actually quite relevant in respect to vibe coding.
I think the reality is that the maker movement slowed down not because it’s hard to learn c++ but because people don’t care enough. Will maybe twice as many people participate now? Sure. But that’ll still be a small fraction of people.
I don't think it has slowed down; in fact, I think it has grown in the last few years. Sure, it is a niche (and probably always will be), but the barrier to entry for building stuff and being creative has never been lower: you have plenty of very powerful chips, somewhat usable SDKs, and a ton of ready-to-use COTS components ranging from GPS sensors to rotary encoders; you can design your own PCBs and order them cheaply from China; and you can design enclosures and 3D print parts in your own home with precision that was only accessible to specialized companies 15 years ago. LLMs are a great help not only with the code generation part, but also with the design part; as an example, I sometimes use ChatGPT to generate OpenSCAD functions, and it isn't half bad.
Not sure I see it like that. Micropython removes most of the rough edges of doing embedded C.
If you prefer no code then I suggest ESPHome for your ESP IoT projects.
The other day I built a quick PoC to control 1024 RGB LEDs using RMT (ESP32) and a custom protocol I was developing. I'm pretty sure MicroPython would suck for that.
The other day I also developed an RGB-RGBW converter using an RP2040; Claude did most of the assembly, so instead of taking a couple of days, it took a couple of hours.
I don't prefer no-code; my point is that software is a barrier on embedded systems, and if I, someone who can actually program in C/C++, Python, and assembly, see huge benefits in using LLMs, then for someone at an entry level it is a life-changer.
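As an aside, the RGB-RGBW conversion mentioned a couple of comments up has a well-known naive form: extract the component common to all three channels and move it to the white LED. The RP2040/assembly version above presumably refines this, but the core idea can be sketched as:

```python
def rgb_to_rgbw(r, g, b):
    """Naive RGB -> RGBW: move the shared component to the white channel."""
    w = min(r, g, b)  # the "white" light already present in all three channels
    return (r - w, g - w, b - w, w)
```

Real converters usually also scale the white channel for the white LED's different luminous efficacy and color temperature; that correction factor is device-specific.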
If you're using a Pico, you can use PIO to have a bit more power. (I use it to control stepper motors with a smooth accel/decel ramp. It's doable with RMT, but not as easy.)
Sure, and if it didn't, it's not complicated to add a new module. Thing is, the module does not support DMA. So, for the specific use case I gave, it's not a good fit.
I'd take vibecoded iot code any day vs the typical hot mess of poorly written code by non-experts following online tutorials and the casual stackoverflow copy-paste :)
>You have no idea what codebases I've seen and worked in, so don't assume I have not.
Why not? You've been quite comfortable assuming things so far, without actually contributing anything of substance to the conversation. Your opinions may even be well formed, but if they are, your communication skills clearly aren't.
So, how has your experience been using LLMs as a maker (the actual topic), or in the context of IoT development (the topic I was replying to)? Mine has been quite positive, ranging from ensuring specific blocks of assembly code are deterministic (instead of having to check dozens of pages in a manual and count instructions at every adjustment), to building code, test fixtures, and build infrastructure, to generating documentation, to actually hunting down and fixing security and logic issues in older codebases.
When people read "vibecode" they picture a clueless intern operating Cursor without any idea of what he's doing (in part because of the overhyped mishaps of LLM-generated code), as opposed to the old fox with decades of experience who knows every detail by heart. Thing is, the clueless intern will produce much better code with LLMs than without (and fewer defects, too), and the old fox will produce much more, because he will delegate some tasks to coding agents instead of less senior teammates, and have results in hours, not weeks.
The irony of this AI-generated comment replying in defense of AI coding on Hacker News. This entire vicchenai account has used LLMs to generate its entire comment history. What is the benefit to the owner of the account? What do they get out of this?
I never heard that. It didn’t seem like 3D printing ever showed signs of displacing existing ways of manufacturing at scale, did it? Units per hour and dollars per unit was never its strength. It was always going to be small things (and if anything big grew out of it, those would naturally transition to the more efficient manufacturing at scale).
Vibe coding, on the other hand, is competing against hand coding, and for many use cases is considerably more efficient. It’s clearly replacing a lot of hand coding.
BTW, I think a lot of people were/are greatly overestimating the value of coding to business success. It’s fungible from a macro perspective, so isn’t a moat by itself. There’s certainly a cost, but hardly the only one if you’re trying to be the next big startup (for that, the high cost of coding was useful — something to deter potential competitors; you’ll have to make up the difference in some other way now).
Also, software is something that already scaled really well in the way businesses need it to — code written once, whether by human or LLM, can be executed billions of times for almost nothing. Companies will be happy to have a way to press down the budget of a cost center, but the delta won’t make or break that many businesses.
As always, the people selling pick-axes during the gold rush will probably do the best.
Fully agree. We already saw dev prices drop significantly when offshore dev shops spun up. I've had great, and also horrible, experiences working with devs who could produce lines of code at a fraction of the price of any senior-type dev.
The higher-paid engineers I've worked with are always worth their salary/hourly rate because of the way they approach problems and the solutions they come up with.
Agents are great at building out features; I'm not so sure about complex software that grows over time. Unless you know the right questions to ask, the agent misses a lot. 80/20 doesn't work for systems that need 100% reliability.
No, a non-engineer can't just spin up the next great app. Even with the newest models and a great prompting/testing system, I don't think you can just spit out high quality, maintainable, reliable code. But as a generalist - I'm absolutely able to ship software and tools that solve our business problems.
Right now, my company identified an expensive software platform that was set to cost us around $250k/year. People in the industry are raving about it.
I've spent 1-2 weeks recreating the core functionality (with a significantly enhanced integration into our CRM and internal analytics) in both a web app and mobile application. And it's gone far smoother than I expected. It's not done - and maybe we'll run into some blocker. But this would have taken me 6 months, at least, to build half as well.
I was an AI skeptic for most of last year. It provided value, sure, but it felt like we were plateauing. Slowing down.
I'd hoped we might be slowing down to some sort of invisible ceiling. I was faster than ever - but it very much required a level of experience that felt reasonable and fair.
It feels different now.
I'd say ~70% of my Claude Opus results just work. I tweak the UI and refactor when possible. And it runs into issues I have to solve occasionally. But otherwise? If I'm specific, if I have it brainstorm, then plan, and then implement - then it usually just works.
A single engineer assigned to maintain your in-house solution will cost more than this.
I think most engineers vastly overestimate how important high-quality, maintainable, reliable code is to product success. Yes, you need an experienced engineer to steer Claude into producing high-quality code. But your customer doesn't see your code; they don't see how many servers you need or how often an on-call engineer is woken up. They just see how well the app meets their needs.
I predict we will see a lot of domain experts without engineering backgrounds spin up incredibly successful apps. Just like the Tea app, many of them will crash and burn from poor engineering. But there will also be enough people who've grown wise to this and, after reaching some success with their app, spend the resources to have others mitigate all the unknown-to-them issues.
Rather: Many software developers overestimate how important high quality, maintainable, reliable code is to initial product success.
Once the product is highly successful, high-quality, maintainable, reliable code pays huge dividends, and I have a strong feeling that most business people vastly underestimate that dividend.
I agree; the only thing I can’t get past is the black-box approach. The majority of business stakeholders can’t/don’t want to read the code that Opus, or any other agent, produces. It will most likely work, but if it doesn’t, they have to rely on the agent to find and patch it.
I’m with you though, it’s getting incredibly good at doing that, but that concept of “It works but I don’t know why” seems very dangerous at scale.
That last mile for apps isn’t trivial, IMO: taking them from “it’s cool and does exactly what I want” to a scenario where all employees at our company can use it.
But who knows I might just be a naive dev lol, this stuff is changing too quickly.
This rollercoaster is going to be wild to ride over the next decade. I've done a few experiments where I've intentionally "vibe coded" either a few features for an existing project of mine or for a few completely clean-sheet ideas.
I completely agree with you. There are going to be a lot of domain experts who do successfully spin stuff up.
> But there will also be enough people who've grown wise to this and after reaching some success with their app spend the resources to have others mitigate all the unknown-to-them issues
Here's the part that shocked me when I tried this... it did not take long for the no-engineering-guidance codebases to turn into complete disasters. Like... in an afternoon I had a pretty functional application that filled a gap for me. It was also... I don't think it'd remain even remotely maintainable for more than a week based on the direction it was going.
We live in interesting times.
Your customers definitely see the quality of code, just by proxy. When features take forever to ship, and things fall over all the time, those are code quality and design problems.
Honestly, code quality is somewhat more important right now, because using common and clear patterns will help AI make better changes, and using a more resilient architecture will let you hand more off without worrying things will fall over.
I'm honestly just happy at the moment, because our two junior admins/platform engineers have made some really good points to me in preparation for their annual reviews.
One now completed his own bigger terraform project, with the great praise of "That looks super easy to maintain and use" from the other more experienced engineers. He figured: "It's weird, you actually end up thinking and poking at a problem for a week or two, and then it actually folds into a very small amount of code. And sure, Copilot helped a bit with some boilerplate, but that was only after figuring out how to structure and hold it".
The other is working on getting a grip on running the big temperamental beast called PostgreSQL. She was recently a bit frustrated: "How can it be so hard to configure a simple number! It's so easy to set it in Ansible and roll it out, but to find the right value, you gotta search the entire universe from top to bottom, and then the answer is <maybe>. AAaah, I gotta yell at a team." She's well on her way to becoming a great DBA.
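Her frustration is fair: many PostgreSQL knobs genuinely have no single right value. What tuning guides offer instead are rule-of-thumb starting points you then measure and adjust from. As one hedged example, the commonly cited starting point for `shared_buffers` on a dedicated server is roughly 25% of system RAM — a heuristic, not a guarantee for any particular workload:

```python
def shared_buffers_starting_point(ram_gb):
    """Rule-of-thumb initial shared_buffers, in GB: ~25% of system RAM.

    This is only a starting value for a dedicated database host; the
    right number depends on the workload and must be found by measuring.
    """
    return ram_gb * 0.25
```

So a 64 GB dedicated host would begin with `shared_buffers = '16GB'` in `postgresql.conf` and iterate from there based on cache-hit and I/O statistics — which is exactly the "the answer is maybe" experience she ran into.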
> Agents are great at building out features; I'm not so sure about complex software that grows over time. Unless you know the right questions to ask, the agent misses a lot. 80/20 doesn't work for systems that need 100% reliability.
Or if it's very structured and testable. For example, we're seeing great value in rebuilding a Grafana instance from manually managed to scripted dashboards. After a bit of scaffolding, some style instructions, and a few example systems, you can just chuck it a description and a few queries, and it successfully goes to work, needing just a little tweaking afterwards.
Similarly, we're now converting a few remnants of our old config management to the new one using AI agents. Set up a good test suite first, then throw old code and examples of how the new config management does it into the context, and modern models handle that well. At that point, just rebuilding the system once is better than year-long deprecation plans with undecided stakeholders as mobile as a pet ferret that doesn't want to move.
It's really not the code holding the platform together, it's the team and the experiences and behaviors of people.
For fully developed and experienced minds, both can be useful.
While I haven’t used other models like Codex and Gemini all that much recently, Anthropic’s is one of the top-tier models, and so I believe the others are probably the same in this way.
A junior’s mind will not rot because the prompt basically has to contain detailed pseudocode in order to get anywhere.
This is orthogonal to both if it is well thought-out/naive/really strange code, or LLM generated/LLM assisted/hand written code. If there is a good understanding of the task and the goals behind it, the tools become secondary. If skills are lacking, it will end up a mess no matter the tools and it needs teaching.
Most of us could run stable servers with just ssh and vi. Would suck a lot though.
Let me just get you that Fred Brooks quote, now where was it...? Ah, yes, here's one:
https://news.ycombinator.com/item?id=4560756
Look at the scores of Y Combinator startups that wrote a shitload of awful code and failed. Good ideas, pretty websites, but not a lot of substance under the hood. The VC-gathering aspect and online kudos were way more important to them than actually producing good code and a reliable product that would stand the test of time.
Pretty much the most detestable section of the HN community. IMNHSO. I notice they're much quieter than usual since the whole vibe coding thing kicked off.
This can also be restated as, look at all the startups that wrote a shitload of awful code and succeeded.
That’s an indicator code quality doesn’t matter at macro scales. We already knew this though even if we didn’t explicitly say it. It’s more about organization, coordination, and execution than code.
Startups are also quite different from ambulances; surviving and minimising patient harm isn't the most important thing for a startup. Instead, it's building a profitable and valuable business. You're not just worrying about the margins, you're also hoping to squeeze out every bit of growth you can.
I think it can though. It just depends. Having high quality code and making good technical choices can matter in many ways. From improving performance (massively) and correctness, to attracting great talent. Jane Street and WhatsApp come to mind, maybe Discord too. Just like great design will attract great designers.
I also think it might matter even more in the age of AI Agents. Most of my time now is spent reviewing code instead of writing code, and that makes me a huge bottleneck. So the best way to optimize is to make the code more readable and having good automated checks to reduce the amount of work I need to do, like static types, no nulls, compilation, automated tests, secondary agent reviews, etc.
I can't remember the last time I saw a '10 ways to fit 25 hours in 24 hours' type article on here, which were rife 10 years ago.
Not to say the crowd u speak of doesn’t exist, they do.
I mean, rename some dudes over there to ‘transformer’, and let them copy & paste from GitHub with abandon… I know we could get a whole browser for less than a few grand.
We wouldn’t, because it’d be copyright-insane. But if we just got it indirect enough, maybe fed the info to the copiers through a ‘transforming’ browser to mirror the copyright argument, I bet we could outperform OpenAI in key metrics.
Coding is formalizing for the compiler. The other 99% of the job is softly getting the PHB not to fuck the entire company and being unique in not doing dumb shit everyone thinks is popular now but will regret soon. It’s all like IT tribal tattoos. Barely cool for a couple of years, and then a lifelong source of shielded regret.
They were only as good as the input they were given. They rarely went above and beyond, and most of the time getting something "good enough" was challenging. Yes, time zones, cultural differences/attitudes, and their exposure/opportunities play a big role.
What I'm saying is that teams who had bad onshore employees got horrible results. Teams that had actual systems engineers and people who could architect systems usually got great results.
For example, we were building a bleeding edge (at the time) e-commerce site for one of the largest companies in the entertainment space. I made sure to work with the best people I knew at the company to design the system from the ground up. Then, we made sure the actual "functional" pieces were digestible and written plainly enough that we didn't need to clarify terminology. Nor did we write a fucking 300 page technical document. We kept things simple and effective, and all the work was broken down into pieces as atomic as possible.
The end result was that we used a team distributed between Ukraine and India to build this in about 4 months. We'd do weekly sprints, and the team had great spirits too because we actually gave a fuck about them and ensuring their success. I'm sure they're used to being scapegoats because of some lazy fucks onshore.
Now I use agents daily and have great success. However, the whole "write a sentence and AI will do it for you" is obviously bullshit. I even asked HN why I got wrong results to test how people would respond (sorry for playing you) and, as I predicted, they blamed me, thus proving that this broader sentiment promoted by "thought leaders" is stupid as fuck. So, that's where we are.
People who can actually build great systems know that it requires careful planning, deep understanding, and ability to fill in the gaps.
I did, a lot, maybe fifteen years ago. There was a lot of talk about a "3D printing revolution" and being years away from being able to make whatever you want at home. For a while, the "maker" moniker was strongly associated with home manufacturing maximalists.
I still don't get the point the article is making, though. That 3D printer thinking was obviously naive because it underestimated the difficulty of mechanical design and the importance of the economies of scale. Using AI to "write" or "code" is a lot easier than turning a vague idea for a household good into a durable and aesthetic 3D print, so it's apples to oranges.
There are other things that the vibecoding movement is underestimating - when you pay a SaaS vendor, you're usually not paying for code as much as for having a turnkey solution where functionality, security, infrastructure, and user support are someone else's problem. But I think that's pretty much where the parallels end.
If there is any commonality between the 3D printing craze and vibe-coding, they're both renditions of "just because you can, doesn't mean you should".
Could be different this time around, or could be that the early naive optimism is just more widespread.
But the real magic happens in CAD while printers are good enough that it gets out of your way.
It's no replicator, but give it 5 years and it might be surprising how useful it is.
Then it was a lot of “self replicating printers” for quite a while, which never has been a real thing.
Certainly there’s utility in the technology, and much moreso if you’re making aircraft parts. And I love prototyping with my various machines.
But I agree, it has had far more than its fair share of hype at the home printer level.
3D-printed 3D printers got quite far; the reason this topic faded from the view of people who are not 3D printing nerds is rather that much better processes exist for mass-producing 3D printers.
What people realized is that, up to a certain number of parts, 3D printing those parts on a 3D printer works really well. You can find a lot of designs of such 3D printers on the internet.
Concerning the progress here, also note that in recent years home 3D printers have gotten a lot better at handling "engineering materials". These materials are very useful if you want to (partly) 3D-print a 3D printer, but this development is often not associated with "3D-printing 3D printers". :-)
Then you get to parts which can be printed on a 3D printer, but these parts will not be of the same quality as parts that can easily be bought, such as belts etc. The Mulbot is a design that takes this approach very far:
> https://github.com/3dprintingworld/Mulbot
> https://www.printables.com/model/5995-mulbot-the-mostly-prin...
And then you get to parts that are nearly impossible to print on a 3D printer ...
So, after a consensus emerged about where the boundaries lie, i.e. how much of a 3D printer can sensibly be 3D-printed, people started looking at the other manufacturing techniques that exist for producing parts of 3D printers, and started considering
1. whether, and how far, a machine for this process could be 3D-printed (or produced on a 3D-printed machine)?
2. could we bring such a machine to home manufacturing, too (so that people can easily build such a machine at home)?
Machines that were considered for this were, for example, CNC mill (3, 4 and 5 axis), CNC lathe, pick and place machines (for producing PCBs), ...
There do exist partial implementations of such machines, just to give some examples:
- lots of designs of CNC mills that use 3D-printed parts. I won't give a list here, but just want to mention that the "Voron Cascade" project wants to do for home 3 axis CNC milling what the Voron did for 3D printing. Rumors on the internet say that the Voron Cascade is well on the way, but had quite a lot of delays with respect to announced release dates.
- an attempt to build a pick and place machine: https://hackaday.io/project/169354-3d-printed-pick-and-place...
Thus: I hope I've given evidence that in recent years there have still been a lot of developments towards the distant goal of "self-replicating 3D printers"; these developments were just quiet, impressive ones rather than loud, obtrusive marketing stunts.
They're not common by any means, but they do exist. Walls look pretty ugly though.
Which apes vibecoding. ChatGPT 3.5 was laughably bad compared to codex 5.3, but if you're basing your opinion on 3.5's performance, your opinion's out of date.
"The real test of Vibe coding is whether people will finally realize the cost of software development is in the maintenance, not in the creation."
https://blog.oak.ninja/shower-thoughts/2026/02/12/business-i...
Percentage increases are not the same as percentage losses.
Most "jobs lost to AI" are just companies that want / need to lay people off; shareholders like "Replaced 30% of our workforce with AI" more than any other conceivable reason.
Depends on where you stand. Maybe leet code won't be a common thing (can be solved with AI), maybe they'll look for different skills, etc.
If losing 30% means hiring the right people for the job you might have better chances. For a long time these were never aligned properly.
IT and coding were a good career for a long time, but times are changing.
No, it never seemed that way to the realists, but it was said to seem that way to the makerspheres.
Print quality is everything when it comes to 3D printing. The printing quality must keep increasing if 3D prints are to be used as finished products. People should stop printing STL artifacts into their prints. Layer lines must fade away into invisibility. Top surfaces must be impeccably smooth without any stepping. New coatings need to be developed for texturing 3d printed parts and the parts need to be ready for coating right from the print bed.
The layer lines are much less pronounced when you use a 0.25 mm nozzle with an appropriate layer height instead of a 0.4 mm nozzle (the achievable quality is even on the verge of satisfying people who use 3D printing to produce miniatures). The price you pay is, of course, print time.
> Top surfaces must be impeccably smooth without any stepping.
In the last years there was a lot of progress on ironing features in slicers, which mitigates this issue:
> https://help.prusa3d.com/article/ironing_177488
Another very recent mitigation is the addition of "fuzzy skin" features in slicers, which hide the imperfections of the FDM printing process by making surfaces look deliberately rough.
--
Another solution is to simply use resin printing instead of FDM printing for finished products if feasible.
If there exists no copyright, you cannot force an entity to release the source code of their software.
A world without copyright and IP is for sure an interesting thought experiment, but very different from the FSF vision:
In such a world, there would be much more reverse-engineering and monkey-patching of existing (non-open) software that gets copied around very liberally.
On the other hand, because no enforceable copyright exists, companies would of course invest a lot of resources into developing hard-to-crack copy protection schemes. Similarly, freedom-loving hackers would invest serious resources into cracking such copy protection schemes.
Didn't the big AI vendors kinda bring that to fruition?
It didn’t and I’m not sure anyone who knew anything about at-scale manufacturing ever saw it that way. Injection molding is far cheaper per unit and more accurate.
But 3D printing has made a major impact on prototyping. Parts that would have taken serious machine shop work or outsourcing can be printed in a few hours. It really changed the game for mechanical engineers.
In terms of vibe coding, time to demo/prototype is greatly reduced. That definitely takes time and cost away from R&D. But I don’t know that it’s had much impact on transfer to manufacturing, which can easily be the hard final 20%.
It absolutely was the "promise" the media spun.
I had the relatively unique experience of moving from being an outsider to this field to being an insider. While I was an outsider, my impressions, formed by the media, were exactly that: 3d printing would be the next big revolution, in a few years there'd be a printer in every home, etc.
I then joined a company that allocated a lot of resources to 3d printing. It only took me a month or two to realize that the big media claims were absolutely ridiculous, and didn't make any sense as stated. They misunderstood the state of the technology, and misunderstood basic economics and how regular manufacturing works.
That's not to say there's no value in 3d printing or the maker movement. There's a ton of value that's been uncovered. But the specific media dream of "people will be printing their plates at home instead of buying them in the store" was never real.
(Btw, IMO "vibe coding" is absolutely real and revolutionary, likely the biggest revolution in the software industry since, idk, the invention of the computer itself. And AI more generally is, even beyond vibe coding aspect, a revolutionary technology that will change the world in many ways.)
Broadly true if you have $10M to throw at it, and know exactly what you want, or if what you want isn't something involving a "secret sauce".
But between competing startups doing something novel, original software is a moat. No moat is permanent; you leverage it into market share while you have time.
And no software itself is a secret, but the business logic and real-world operations it distills and caters to may be. The software is the least obfuscated part of encoding that set of operational logic, or even trade secrets, which are the DNA of a business and dictate the tools it goes into battle with.
Software being a moat (which it rarely is for long) is more of a question for the software industry. For other industries, software that amplifies best practices and crystalizes operational flow from the business logic can absolutely extend whatever moat the company already has.
In the small bore, if you have two midsized competing $100m companies in some arbitrary industry, the one that uses SaaS may be well behind the one that invested $1m in their own in-house software from the beginning, mostly because the one with SaaS must work their business logic around certain shortcomings, while the other can devise and deploy workflows for employees that may themselves create a new advantage the other company hasn't considered.
Counter-anecdote: about a decade ago I was brought in by a director, himself new to the company, to lead the modernization of their in-house Electronic Medical System software, which was built on FoxPro in 1999, ran on SQL Server 2000, and was maintained by two “developers” who had been there for a decade.
I led another project there first that was more pressing - in house mobile software maintained by two other “developers”. It was built on top of a mobile framework by a local startup. It was used by home health care nurses for special needs kids.
After I got my head around the business and what they were trying to do - PE-owned, acquiring other companies whose systems they needed to integrate, and low margins (mostly Medicaid reimbursements) - I decided the best thing I could do was put myself out of a job.
I told the director we have no business trying to build up a software development department. We moved everything to various SaaS products and paid consulting companies to make all of the customizations. Meaning they sign a statement of work and come back with a finished product.
Software development was never going to be this company’s competitive moat. They got rid of the two developers maintaining the mobile app and contracted that out. The two other developers who had maintained the FoxPro app became “data analysts” and report writers.
Every company does need to know its numbers.
> never heard that.
This book was a big deal, promised it ("Makers, the next industrial revolution") https://www.barnesandnoble.com/w/makers-chris-anderson/11109...
Interestingly, I am not aware that this book was particularly popular or well-known in Germany (honestly, this is the first time I've heard of this specific book, though I am aware that some marketers (who in my opinion did not really understand the maker scene or 3D printing) made such claims).
Instead, at that time, in Germany nerds were getting excited about understanding how to build 3D printers (in particular partially self-replicating ones (RepRap)) and how 3D printing
- could be used to make yourself much more independent of the discretion of part manufacturers (i.e. some part is broken? Use a CAD system to re-design it and 3D-print your re-design),
- makes you capable of building stuff in small scale "that should exist", but no manufacturer is producing,
- enables part designs that are (nearly) impossible to manufacture using any other existing technology, and thus basically enables you to completely reimagine and improve how nearly every produced part that you see around you is designed,
- ...
I would say that the nerd visions of that time have at least partially been implemented and/or are well on their way towards that goal. It's just that the practical implementations did not come with a spectacular change in the overarching mindset of society; rather, they are highly important, but not (necessarily) revolutionary, changes in the lives of people who want these changes to be part of their life.
Personally, I don't believe the big changes will come from "coding costs less for businesses". I think it will come from "trying new businesses is now cheaper, both in time and money". Smaller and cheaper players will be entering a lot of spaces over the next 5 years IMO.
(Not to mention, it's only in the last few years where consumer-accessible 3D printers are more than hobbyist grade that required a huge amount of tinkering to actually work properly)
Prusa is working on a Pick & Place Toolhead for the Prusa XL to enable at least some very specific assembly steps to be done on this 3D printer:
> https://blog.prusa3d.com/xl-in-2026-new-toolheads-lower-pric...
"One Print, Multiple Components: Pick & Place Tool
Some technical prints require additional components, such as magnets, threaded inserts, or bearings, to be placed during the build. Without automation, this typically means you have to pause the print and insert the part(s) by hand. Although PrusaSlicer made this process easier a while ago, The Pick & Place toolhead can do it for you, completely autonomously. This reduces manual intervention and improves placement accuracy.
We’ve co-developed the toolhead with the Zurich University of Applied Sciences (ZHAW) and it’s designed for models that combine 3D-printed models with off-the-shelf components. We’re currently targeting late 2026 with its implementation."
It's also interesting how the author frames the results: Shenzhen is now better at manufacturing than it has ever been before. The maker culture succeeded!
I guess the President of the United States is an almost nobody. Obama's 2013 State of the Union hyped up 3-D printing explicitly as a tech that would be bringing manufacturing back to the U.S. The U.S. government made public-private partnerships with maker spaces and fab facilities in hollowed out Rust Belt cities, and Obama mentioned it by name in the most important and viewed policy speech the President gives each year.
> “A once-shuttered warehouse is now a state-of-the art lab where new workers are mastering the 3-D printing that has the potential to revolutionize the way we make almost everything,” Obama said. [...] Obama announced plans for three more manufacturing hubs where businesses will partner with the departments of Defense and Energy “to turn regions left behind by globalization into global centers of high-tech jobs.” (https://edition.cnn.com/2013/02/13/tech/innovation/obama-3d-...)
I've frequently argued to my organization's leadership that the product could be open source on GitHub with a flashing neon sign above it and it wouldn't change anything about the business. A competitor stealing our codebase would probably be worse off than if they had done anything else. Conway's law and all that.
If you balked at the idea, then you were the bad guy, or treated with pity for being so out of touch. Usually you got the Kubler-Ross Stages thrown at you.
Yes. Met those guys in my TechShop days. They also insisted that 3D printers should be made with 3D printers, which resulted in a generation of flimsy, inaccurate machines.
The current generation of serious 3D printers is very impressive. Take a look at SpaceX's Raptor engine. A rocket engine is mostly one piece of complicated metal with a lot of internal voids. That's something 3D printers are good at. Once 3D printing was able to print stainless steel and titanium, it could be used for hard jobs like that. PLA just isn't much of a structural material, even with 100% fill.
Serious 3D printers are found in machine shops, not homes and libraries.
I do believe that this vision is basically correct, but the implementation of these eager 3D printing enthusiasts was deeply flawed:
There exist lots of designs of really good 3D printers on the internet that are at least partly 3D-printed. So at least a relevant subset of the parts of a 3D printer can be 3D-printed. The reason why commercial 3D printers are typically not 3D-printed is rather aesthetics and the fact that for large-scale manufacturing there typically exist much cheaper production techniques.
As people by now have realized (and some of these points were told to these eager 3D printing enthusiasts from beginning on), the correct approach to get towards an exceptional "mostly 3D-printed 3D printer" is rather:
- Improve 3D printers so that even more parts of a 3D printer can be 3D-printed in high quality (e.g. by improving sensors and software to increase precision; make the 3D printer capable of handling engineering materials; ...)
- Use a 3D printer to produce parts for machines that can be used to produce parts for a 3D printer, such as CNC mill, CNC lathe, pick and place machine (for populating the PCBs) etc.
Both of these aspects are hot topics that people work on.
In other words: Accept for now that many, but not all parts of a 3D printer can currently sensibly be 3D-printed, and invest serious efforts to develop solutions how 3D printing can be used to enable a high-quality production of these remaining parts.
Now that CNC mills are getting more affordable, people are starting to get vocal about their visions of a self-milling CNC mill. :-) In particular, people are considering
- how these machining processes can be automated, and
- how the cost, space requirements and noise levels of these machines can be reduced so that every ambitious maker can have them in their apartment.
Voila, the start of a home manufacturing revolution ...
Software companies spend a huge amount of money on having software written. Why would significantly altering the cost structure not make or break companies?
It seems like a lot of vibe coders are people who otherwise wouldn't be coding at all.
Just like a five dollar t shirt is enough for many many people
I mean, if you're saying that 90 percent of code is hobby level only, I don't really agree that's the case.
I mean, take something like NPM and the JavaScript ecosystem. Every js project has mountains of dependencies which are included without a second thought or auditing of the code. Both in hobby projects and enterprise software alike. What happens when people vibe code those NPM modules? Is it a hobby? Maybe for them, but publishing it to an "official" source gives it implied credibility.
This is dangerous, because the line between production grade and hobby grade can get blurry real fast.
Those problems range from fundamental architecture flaws to mistakes no one who spent 5 minutes reading the docs would ever make, like creating an entire app that slows to a crawl when more than one user uses it, because all parallel work gets serialized due to a complete misunderstanding of how concurrency, async/await and threads work in the language they're "writing".
People with too much money build entire apps on foundations that crumble and significantly hold them back from doing simple things, and I love it.
There was a point of time where some people looked at 3d printers and said "Wow, imagine how great this technology will be in 20 years." There was some amount of anticipation for multi-material printers to come around and for home printers to begin replacing traditional consumer goods. Compared to crypto, vr, and ai it doesn't look like much but 3d printing did go through a hype bubble.
Seems like today they are still stuck in the tracks they were in 2016. A couple nerds own them personally. Maybe you'd find them in a maker space or a library or school. Not in your boomer parent's office though.
It's really hard to beat injection molding for scale.
However, what 3D printing did shift was building molds and prototypes. And that shifted small volume manufacturing--one offs and small volumes are now practical that didn't used to be. In addition, you can iterate more easily over multiple versions.
The limiting factor, however, has always been the brain power designing the thing. YouTube is littered with videos that someone wants to build a "thing" and then spends 10-20 iterations figuring out everything they didn't know going into the project. This is no different from "real" projects, but your experienced engineering staff probably only take 5 iterations instead of 20.
Once the predictions of a magical future turn out to be false, techies suddenly don't remember. Kind of like when the cult leader's prediction of doomsday doesn't show, there's always another magical prediction of a new future coming. Here are just a few major mainstream sources:
2012, Cornell Prof and Lab Director, in CNN: "We really want to print a robot that will walk out of a printer. We have been able to print batteries and motors, but we haven’t been able to print the whole thing yet. I think in two or three years we’ll be able to do that." (https://www.cnn.com/2012/07/20/tech/3d-printing-manufacturin...)
2013, World Economic Forum: "the world can be altered further if home-based 3D printing becomes the norm. In this world, every home is equipped with a printer capable of making most of the products it needs. Supply chains that support the flow of products and parts to consumers will vanish, to be replaced by supply chains of raw material." (https://www.weforum.org/stories/2013/08/will-3d-printing-kil...)
2013, President of the United States of America Barack Obama hypes up 3-D printing in the State of the Union as a technology that will bring manufacturing back to the U.S.: “A once-shuttered warehouse is now a state-of-the art lab where new workers are mastering the 3-D printing that has the potential to revolutionize the way we make almost everything..." Obama announced plans for three more manufacturing hubs where businesses will partner with the departments of Defense and Energy “to turn regions left behind by globalization into global centers of high-tech jobs.” (https://edition.cnn.com/2013/02/13/tech/innovation/obama-3d-...)
2012, Cover story and special issue of The Economist predicting another Nth industrial revolution:
"THE first industrial revolution began in Britain in the late 18th century, with the mechanisation of the textile industry. Tasks previously done laboriously by hand in hundreds of weavers’ cottages were brought together in a single cotton mill, and the factory was born. The second industrial revolution came in the early 20th century, when Henry Ford mastered the moving assembly line and ushered in the age of mass production. The first two industrial revolutions made people richer and more urban. Now a third revolution is under way. Manufacturing is going digital. As this week’s special report argues, this could change not just business, but much else besides.
A number of remarkable technologies are converging: clever software, novel materials, more dexterous robots, new processes (notably three-dimensional printing) and a whole range of web-based services. The factory of the past was based on cranking out zillions of identical products: Ford famously said that car-buyers could have any colour they liked, as long as it was black. But the cost of producing much smaller batches of a wider variety, with each product tailored precisely to each customer’s whims, is falling. The factory of the future will focus on mass customisation—and may look more like those weavers’ cottages than Ford’s assembly line." (archive: https://communicateasia.wordpress.com/2012/04/20/manufacturi...)
There were articles posted on HN hyping exactly that, with comments debating whether 3D-printing would eventually replace conventional manufacturing at scale, and how people would no longer shop at stores like Walmart for their cheap products.
it's the people that sell the pickaxe sellers their pickaxes.
https://www.cbc.ca/news/canada/trump-canada-yukon-1.3235254
To the realists, 3D printing is specifically for small-scale manufacturing, rapid iteration on prototypes, etc.
I don't see it competing with anyone doing anything serious, outside of ML engineers, and let's be honest, they always sucked at writing code and hated writing code, so it's not surprising how much they sing its praises.
And there are plenty of people in the maker movement who enjoy writing code, and will write it whether other people are vibe coding or not.
In the past weeks I:
- 3D printed custom cups that fit onto a pet feeder to prevent ants from getting to our cat food
- 3D printed custom mounts to mount 3W WS2812 LEDs to illuminate Chinese New Year lanterns and connected them to an ESP32 WLED box connected to home assistant
- Connected a vision language model to a security camera that can answer questions about how many times a cat has eaten, drunk water, or used the toilet, and inform us about anything in the room that looks abnormal
- Custom laser cut a wall fitting for a portable heat pump's input and output condenser hoses and added a condensate pump to the contraption; it saves us $200/month in heating costs
- Custom designed a retrofit for a sliding door that accepts a Nuki smart lock that wasn't designed for this type of door.
- Custom laser cut a Valentine's Day card in Chinese paper-cutting style that was generated with many rounds of back-and-forth prompting with Gemini, then converted to SVG and cut
- My wife and I thought IKEA SKADIS pegboards would look better if they were made out of bamboo plywood, so I shoved a sheet of bamboo into my laser cutter and had it cut out a pegboard that looked much nicer, sprayed it with lacquer, then attached it to the wall with 3D printed mounting hardware. The SVG for the pegboard was generated by a script written by Cursor and took a couple of minutes.
- Having an ESP32 feed a camera image to an LLM and then do something with the result is a piece of cake. A box that "sprays water to deter the cat if the cat jumps on the kitchen counter" is a 1-hour job after you order the components from Amazon, and an LLM will build that parts list for you, too.
- Reverse engineered the firmware of a UniFi Chime to upload more chime sounds than the UI limits you to, so that I can have UniFi Protect announce if there is an intruder somewhere late at night, and where. Cursor reverse-engineered the firmware .bin for me.
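The pegboard item above is a good example of how little code this kind of thing now takes: a laser-cuttable sheet is just an SVG with a grid of holes. A minimal sketch of such a generator (the 40 mm pitch and round 5 mm holes are my assumptions for illustration, not IKEA's actual SKADIS geometry):

```python
def pegboard_svg(cols: int, rows: int, pitch: float = 40.0,
                 hole_r: float = 2.5, margin: float = 20.0) -> str:
    """Generate an SVG sheet of round holes on a regular grid,
    ready to feed to a laser cutter. All dimensions are in mm."""
    width = 2 * margin + (cols - 1) * pitch
    height = 2 * margin + (rows - 1) * pitch
    parts = [
        f'<svg xmlns="http://www.w3.org/2000/svg" '
        f'width="{width}mm" height="{height}mm" '
        f'viewBox="0 0 {width} {height}">',
        # board outline (outer cut line)
        f'<rect x="0" y="0" width="{width}" height="{height}" '
        f'fill="none" stroke="black"/>',
    ]
    for row in range(rows):
        for col in range(cols):
            cx = margin + col * pitch
            cy = margin + row * pitch
            parts.append(f'<circle cx="{cx}" cy="{cy}" r="{hole_r}" '
                         f'fill="none" stroke="black"/>')
    parts.append('</svg>')
    return '\n'.join(parts)


# Example: an 8x12-hole board, written out for the cutter software.
svg_text = pegboard_svg(cols=8, rows=12)
```

Most laser cutter software will accept that SVG directly; the real design work is picking the material and the mounting hardware, not the script.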
A lot of this could have been worth sharing 10 years ago. Now all of this is just "normal life in 2026" so you don't hear about it much. I'm used to thinking of something and then physically having it <12 hours later. It's no longer an undertaking. It's not news anymore.
The bar for "news-worthiness" for makers these days? This guy built an entire city for his cats, with a fully functional subway system and everything ...
https://www.youtube.com/watch?v=G4UEugp_mf0
I think I have a conversation at least weekly where I have to explain to someone that using an LLM to convert COBOL to Java (or whatever) will not actually save much effort. I don’t know how many ways to explain that translating the literal instructions from one language to another is not actually that hard for someone fluent in both; the actual bottleneck is understanding what sort of business logic the COBOL has embedded in it and all the foundational rearchitecting that will involve.
Vibe coding, like 3D printing, is great for little small batch runs of boutique code. Small toy apps and throwaway projects.
Vibe coding is shit for doing actual maintenance on important projects that actually run the world. It is shit for creating anything that is of robust long lasting quality. It is shit for creating code you can trust. It is shit for creating code that won’t suddenly reveal flaws and inefficiencies at scale and require an entire proper rewrite just when your product is finally gaining traction. Vibe coding has not been around long enough to make these problems obvious yet, but the time is coming. A few high profile failures will hit the media and then suddenly everyone starts coming out of the woodwork with their own vibe coding horror stories and thus the AI bubble collapse begins.
What people will eventually realize, is that if you’re building a serious business with software that must run reliably for years, it really doesn’t give you any advantage being able to vibe code something in a week vs carefully building something out over a few months. Being unable to vibe code your way out of non-trivial maintenance issues is a death sentence for your business, you will need people who know what they are doing eventually.
Relying on vibe coding causes you to have a talent debt, and though you won’t feel it when you’re first rolling out a business, eventually, the bill comes due…
Uh, no they're not. Did you not see the recent announcement from Unity? One short prompt and you get a whole AAA+ game in one shot.
/s
What does it mean to say "we were promised flying cars", or "every city would have micro-factories, that 3D printing would decentralize production"?
The people creating these narratives may have a) truly believed it and tried to make it a reality, but failed, b) never believed it at all, but failed anyway, or c) been somewhere else on this quadrant of belief vs. actuality.
Why not just treat it as, "a prediction that went wrong". I suppose it's because a narrative of promise feels like a promise, and people don't like being lied to.
It's a strange narrative maneuver we keep doing with tech, which is more future-facing than most fields.
https://en.wikipedia.org/wiki/Moore%27s_second_law
We do have flying cars, and we do have printers that print other printers, but both were some combination of really expensive/poor quality. Technically speaking, if you take it that most cities have 3D printers, most cities then do have micro factories, however that says nothing about general feasibility...
Technology requires infrastructure and resources, and our infrastructure is strained and our resources are even more so... Until the costs become pocket change for the average person, technology will just remain generally unavailable.
This promise did get fulfilled: helicopters do exist.
I don't know about the other things you mentioned, but I think you have this in the wrong category. "We were promised flying cars" is one half of a construction contrasting utopian promises/hype with dystopian (or at least underwhelming) outcomes. I think the most common version is:
> They promised us flying cars, instead we got 140 characters.
Translation: tech promised awesome things that would make our lives better, but what we actually got was stuff like the toxicity of social media.
IMHO, this insight is one of the reasons there's so much negativity around AI. People have been around the block enough to have good reason to question tech hype, and they're expecting the next thing to turn out as badly as social media did.
The author talks about lowered barriers to prototyping as though they represent a failure state; that's absurd, and it has absolutely nothing to do with whether most people have membership-based maker spaces nearby.
Meanwhile, we're in a golden era of tool access. It's now possible for people to buy affordable CNCs, laser cutters and UV printers. I have a freaking pick and place in my home.
Also, you can have custom PCBs shipped to you in a week for about $10.
Having LLMs available at the same time as all of these tools are rapidly evolving means that anyone with an idea can prototype just about anything. In my worldview, anyone not excited about this either has no original ideas or a cynical agenda.
I'd say more but I have to get back to work on my maker projects.
And of course I'm not going to be setting up a "mini factory", I don't feel like it and I already got the one thing I made that I wanted, which almost certainly would never have been profitable for anyone to make at quantity in the first place. In the unlikely event someone does want one, they can just make their own following the same process as above.
I don't love that my career seems to be evaporating and perhaps no one will have a use for me soon, but, LLMs have made making even easier and more fun than ever. My sense of what I can take on has been amplified so much, it feels like a super power. Reverse engineering things used to be intimidating to take on, but now it feels like a couple afternoons of exploring with Claude. Understanding the scope of ideas is way more accessible, and often more constrained than it used to be.
I learn so much more than I used to, I get more done than I used to. I love it.
I am quite tired of skeptics and naysayers telling me that I'm only imagining learning, only imagining finishing projects, only imagining having more time for the fun parts.
I'm able to take on way more interesting and challenging projects in my business because the logistics and legwork required for implementing them are greatly simplified by being able to actually implement the specs for the software I've had in my head for years.
This is a stark contrast to pre-2024 or so. I've always been an explorer, a fairly prolific software developer I guess, but now it's so much more than that. And it's leaking into hardware and other physical ventures. I'm typically limited by funds more than anything.
Are some of my projects lower quality than if they were done by someone more qualified? Yeah, totally. Though I think I still do a solid job. I don't care though; these things have opened my eyes and mind so much and made creating so much more inviting and exciting.
It still burns with the 'career careening into the dirt' vibes I get most days, but what the hell, it was good while it lasted. If I was smart enough to make the computer do the thing, maybe I'll be smart enough to do something else that's useful. And I've got some years left before it's truly end of the line, I think.
I personally think that as amazing as LLMs for coding are, LLMs for coding and electronics is like activating the powered robotic exoskeleton for your mind.
As for whether your projects are lower quality (the kids would say "mid"; catch up!) or not... they are higher quality than the ones you made last year, and I heard that the actual best way to learn is by doing. Ideally also asking a ton of questions at each step.
Some nights I lie in bed just relentlessly interrogating ChatGPT in audio mode about transformers and op-amps.
Thank you!
> they are higher quality than the ones you made last year,
This is exactly it. Perfect is the enemy of good here. These projects have made significant impacts on how I get real things done.
And for the most part they just aren't
Either way, I suppose the answer is relative and subjective and Bambu Lab would not agree with you.
I didn't want to argue with you, so I did some research.
The global 3D printer market grew from USD$24B in 2024 to $30B in 2025. It's estimated to grow 24% y/y for the next decade: https://www.marketdataforecast.com/market-reports/3d-printer...
If you Google it, this is generally matched by 5-6 different research companies.
Laser cutters are a USD$7B global market today, and also expected to grow around ~8% y/y into the next decade: https://www.marketsandmarkets.com/Market-Reports/Laser-Cutti...
A huge number of YouTube channel creators I subscribe to have received Carvera CNC mills over the past few months. Anecdotal but striking. It seems like everyone who left LTT to go solo has a CNC now, even if they usually review graphics cards.
Opulo, the makers of the Lumen PnP cannot achieve less than a 1 month lead time, seemingly no matter how many people they hire or how much factory space they acquire. And that's a pretty niche device, relatively speaking.
In conclusion, you're wrong.
The point of this thread: the maker movement isn’t dead simply because most people don’t care about CNC machines. In reality, there are loads of makers & people who love tinkering and building things for themselves, it’s easier than ever to do so (and to build very non-trivial products for yourself), and more and more people are able to get into this.
If the maker movement was actually dead, we wouldn’t be seeing an explosion of powerful, easy-to-use manufacturing tools available at lower & lower price points.
I guess your point is that it’s not exactly mainstream, not that no one is buying them. Which is true, but who cares.
I don't know why I let myself get triggered by some rando who has convinced himself that just because he doesn't do something it must not be popular, but here I am.
Do a lot of people do it? Maybe the answer is a tentative yes, given news like the recent case about guns and 3D printing.
Honestly, it's baffling that anyone would put real effort into printing guns when it seems as though some countries cough make it easy to pick one up at Walmart.
In my observation, this news has led maker nerds to "prepper-buy" quite a lot of such machines recently (get one before they become forbidden). :-)
A little while ago I had to dissuade someone from learning Chemistry via an LLM, because the advice that they had been given by the LLM would have very literally either blown up the glassware, throwing molten chemicals all over their clothing, or killed them when they tried to taste whatever they were trying to synthesize. There was no consideration of safety protocol, PPE, proper glassware, or correctly dealing with chemical reactions, and nary a mention of a fucking fume hood. NileRed and a few other chemistry youtubers have utterly woeful approaches to laboratory safety (NileRed specifically I have a chip on my shoulder about — I've seen him practice bad lab work on a number of occasions and violate many of the common safety practices from e.g. Vogel's), but even then they do still take precautions! Let it not be forgotten that safety practices are born through bloodshed. Now we have a whole new wave of people who are excited to learn, and that's great, but one stray hallucination will kill them. I'm sure that the LLM will be more than happy to write an "Oh I'm sorry, it's my bad that I forgot to tell you to double glove when handling organic mercury!" but by then it is too late.
The idea of someone learning, say, House DIY from an LLM and then sawing through the joists or rewiring their electronics is utterly terrifying to me, quite frankly. Likewise, the idea of someone following an LLM's instructions and then blowing themselves up in a shower of capacitors or chemical glassware is also utterly terrifying to me.
Yes, you could do all these things before. But at least the most commonly available learning materials to you were trustworthy and written by experts!
(I’m not saying it’s not used, but the only thing I’d use TTL for is building old circuits out of the Forrest Mims books.)
The simple lack of reasons to use TTL logic in 2026 was exactly why I didn't know what the deal was. It'd never come up, but I'd see it referenced.
I'm self-taught and in defiance of the people who insist that LLMs turn our brains to passive mush, the more things I learn the more things I have to be curious about.
LLMs remove the gatekeeping around asking "simple" questions that tend to make EEs roll their eyes. I didn't know, so I asked and now I know!
I’m just curious at this point about what the quality of the answer is, just because you made a point about LLM use not turning your brain into mush.
I’ve not really used LLMs to answer questions, since it hasn’t gotten me the answers I wanted, but maybe I’m just set in my ways.
https://chatgpt.com/share/69a184b0-7c38-8012-b36d-c3f2cefc13...
I definitely led some questions to try and squeeze new-to-me perspectives out of it; for example, there could be tricks that make the active high variant more useful in some scenarios.
I think it does a good job of surfacing adjacent questions you might not realize you were eager to ask, as well as showing how it's able to critically evaluate real-world part suitability. I do find that ChatGPT in particular does better with a screengrab of the most likely parts vs a URL to the search engine.
But raw dogging capacitors in CRTs is such an overtly straw man argument in this conversation. People who are cleaning bathrooms for the first time can hopefully be trusted not to drink the bleach, right?
If someone licks a running table saw because an LLM said it would be fine, we're talking about entirely different problems.
It’s not a person. You understand that, right? I have to ask considering the amount of people who are “dating” and wanting to marry chatbots.
It’s a tool. There’s no reason to anthropomorphise it.
This particular combination of snark, faux-concern and pedantry doesn't help the point you're trying to make about my loving AI wife.
> It’s not a person, It’s a tool. There’s no reason to anthropomorphise it.
Without wanting to be argumentative, I would push back and say that I really did stop to consider my implied assignment of personhood before committing to it. I went with it because it reflects both the role it plays - you'll be relieved that I stopped short of deploying "mentor" - and the fact that English is highly adaptable and already the linguistic tug to use They feels very comfortable in relation to LLMs. Buckle up!
Funnily enough, I think that might’ve been better. I don’t think a mentor has to necessarily be human; one can learn from nature or pets. Or even a machine: Stockfish can teach you to play better chess and give context as to why you fumbled and how to do better next time.
I just don’t think LLMs are people and that we should avoid anthropomorphising them (for a whole plethora of reasons which are another discussion). I’m not even saying I think there could never be a robot which is a person. Just not what we have now.
Can't wait for the load-bearing drywall recommendations coming from LLMs that were trained on years of Groverhaus content.
The crux of the problem. The only way to truly know is to get your hands dirty. There are no shortcuts, only future liabilities.
And even today, people hack on assembly and ancient mainframe languages and demoscene demos and Atari ROMs and the like (mainly for fun but sometimes with the explicit intention of developing that flavor of judgment).
I predict with high confidence that not even Claude will stop tinkerers from tinkering.
All of our technical wizardry will become anachronistic eventually. Here I stand, Ozymandias, king of motorcycle repair, 16-bit assembly, and radio antennae bent by hand…
There are corners of the industry where people still write ASM by hand when necessary, but for the vast, vast majority it's neither necessary (because compilers are great) nor worthwhile (because it's so time consuming).
Most code is written in high-level, interpreted languages with no particular attention paid to its performance characteristics. Despite the frustration of those of us who know better, businesses and users seem to choose velocity over quality pretty consistently.
LLM output is already good enough to produce working software that meets the stated requirements. The tooling used to work with them is improving rapidly. I think we're heading towards a world where actually inspecting and understanding the code is unusual (like looking at JVM/Python bytecode is today).
Future liabilities? Not any more than we're currently producing, but produced faster.
That is, changing one word in the source code doesn’t tend to produce a vastly different output, or changes to completely unrelated code.
Because the LLM is working from informal language, it is by necessity making thousands of small (and not so small) decisions about how to translate the prompt into code. There are far more decisions here than can reasonably be fixed in tests/specs. So any change to the prompt/spec is likely to result in unintended changes to observable behavior that users will notice and be confused by.
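A toy illustration of one such micro-decision (the spec and data are invented): the informal instruction "sort users by name" leaves case sensitivity unstated, and two equally faithful implementations resolve it differently, producing observably different behavior.

```python
users = ["alice", "Bob", "carol"]

# Implementation A: plain lexicographic sort. Uppercase letters sort
# before lowercase in ASCII, so "Bob" jumps to the front.
sort_a = sorted(users)                  # ['Bob', 'alice', 'carol']

# Implementation B: case-insensitive sort, another perfectly valid
# reading of the same informal spec.
sort_b = sorted(users, key=str.lower)   # ['alice', 'Bob', 'carol']

# Both "sort users by name", yet a user watching the list sees
# different orderings -- one of thousands of such unpinned decisions.
print(sort_a, sort_b)
```

Regenerating from a lightly edited prompt can silently flip any of these decisions, which is the instability being described.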
You’re right that programmers regularly churn out unoptimized code. But that’s very different from churning out a bubbling morass where every little thing that isn’t bolted down is constantly changing.
The ambiguity in translation from prompt to code means that the code is still the spec and needs to be understood. Combine that with prompt instability and we’ll be stuck understanding code for the foreseeable future.
Guess I just don't see how you can take the human out of the loop and replace them with non-deterministic AIs and informal prompts / specs.
I'm not particularly swayed by arguments of consciousness, whether AI is currently capable of "thinking", etc. Those may matter right now... but how long will they continue to matter for the vast majority of use cases?
Generally speaking, my feeling is that most code doesn't need to be carefully-crafted. We have error budgets for a reason, and AI is just shifting how we allocate them. It's only in certain roles where small mistakes can end your company - think hedge funds, aerospace, etc. - where there's safety in the non-determinism argument. And I say this as someone who is not in one of those roles. I don't think my job is safe for more than a couple of years at this point.
The in-code tests and the expectations/assumptions about the product that your users have are wildly different. If you allow agents to make changes restricted only by those tests, they’re going to constantly make changes that break customer workflows and cause noticeable jank.
Right now agents do this at a rate far higher than humans. This is empirically demonstrable by the fact that an agent requires tests to keep from spinning out of control when writing more than a few thousand lines, and a human does not. A human is capable of writing tens of thousands of lines with no tests, using only reason and judgement. An agent is not.
They clearly lack the full capability of human reason, judgment, taste, and agency.
My suspicion is that something close enough to AGI that it can essentially do all white-collar jobs is required to solve this.
That's a bit shortsighted. There have been cries of software becoming needlessly bloated and inefficient for as long as computers have existed (Wirth, of course, but countless others too). Do you visit any gamer communities? They are constantly blaming careless waste of resources and lack of optimization for many AAA games performing badly on even state-of-the-art hardware, or constantly requiring you to upgrade your gaming rig.
I don't think the only scenario is boring CRUD or line of business software, where indeed performance often doesn't matter, and most of it can now be written by an AI.
And spec management, change previews, feedback capture at runtime, skill libraries, project scaffolding, task scoping analysis, etc.
Right now this stuff is all rudimentary, DIY, or non-existent. As the more effective ways to use LLMs become clearer, I expect we'll see far more polished, tightly-integrated tooling built to use LLMs in those ways.
You are essentially saying that we should develop other methods of capturing the state of the program to prevent unintended changes.
However, there’s no reason to believe that these other systems will be any easier to reason about than the code itself. If we had these other methods of ensuring that observable behavior doesn’t change, and they were substantially easier than reasoning about the code directly, they would be very useful for human developers as well.
The fact that we’ve not developed something like this in 75 years of writing programs, says it’s probably not as easy as you’re making it out.
When do they have a real choice, without vendor lock-in or other pressure?
Windows 11 is 4 years old but until a few months ago barely managed to overtake Windows 10. Despite upgrades that were only "by choice" in the most user-hostile sense imaginable (those dark patterns were so misleading I know multiple people who didn't notice that they "agreed" to it, and as it pops up repeatedly it only takes a single wrong click to mess up). It doesn't look like people are very excited about the "velocity".
In the gaming industry AAA titles being thrown on the market in an unfinished state tends to also not go over well with the users, but there they have more power to make a choice as the market is huge and games aren't necessary tools, and such games rarely recover after a failed launch.
Wait, I think I have the answer!
"You're in a desert, walking along in the sand when all of a sudden you look down and see a tortoise. It's crawling toward you. You reach down and flip the tortoise over on its back. The tortoise lays on its back, its belly baking in the hot sun, beating its legs trying to turn itself over. But it can't. Not without your help. But you're not helping. Why is that?"
Bot detected
i can write like this if i want. or if i were a clever ai bot.
Or something. You're right.
https://arxiv.org/html/2601.20245v2
LLMs are effectively (from this article's pov) the "Arduino of coding" but due to their nature, are being misunderstood/misrepresented as production-grade code printers when really they're just glorified MVP factories.
They don't have to be used this way (I use LLMs daily to generate a ton of code, but I do it as a guided, not autonomous process which yields wildly different results than a "vibed" approach), but they are because that's the extent of most people's ability (or desire) to understand them/their role/their future beyond the consensus and hype.
But LLM-aided development is helping me get my hands dirty.
Last weekend, I encountered a bug in my Minecraft server. I run a small modded server for my kids and I to play on, and a contraption I was designing was doing something odd.
I pulled down the mod's codebase, the fabric-api codebase (one of the big modding APIs), and within an hour or so, I had diagnosed the bug and fixed it. Claude was essential in making this possible. Could I have potentially found the bug myself and fixed it? Almost certainly. Would I have bothered? Of course not. I'd have stuck a hopper between the mod block and the chest and just hacked it, and kept playing.
But, in the process of making this fix, and submitting the PR to fabric, I learned things that might make the next diagnosis or tweak that much easier.
Of course it took human judgment to find the bug, characterize it, test it in-game. And look! My first commit (basically fully written by Claude) took the wrong approach! [1]
Through the review process I learned that calling `toStack` wasn't the right approach, and that we should just add a `getMaxStackSize` to `ItemVariantImpl`. I got to read more of the codebase, I took the feedback on board, made a better commit (again, with Claude), and got the PR approved. [2]
They just merged the commit yesterday. Code that I wrote (or asked to have written, if we want to be picky) will end up on thousands of machines. Users will not encounter this issue. The Fabric team got a free bugfix. I learned things.
Now, again - is this a strawman of your point? Probably a little. It's not "vibe coding going straight to production." Review and discernment intervened to polish the commit, expertise of the Fabric devs was needed. Sending the original commit straight to "production" would have been less than ideal. (arguably better than leaving the bug unfixed, though!)
But having an LLM help doesn't have to mean that less understanding and instinct is built up. For this case, and for many other small things I've done, it just removed friction and schlep work that would otherwise have kept me from doing something useful.
This is, in my opinion, a very good thing!
[1]: https://github.com/FabricMC/fabric-api/pull/5220/changes/3e3...
[2]: https://github.com/FabricMC/fabric-api/pull/5220/changes
This is such high minded bullshit.
Which as you say, is a good thing. I still fear what will happen if 3D printing commoditizes into a similar structure as 2D printing.
I don't know if it's a local trend or what but the last 5-7 years the most in demand thing by far are sewing machines, knitting machines, and sergers. They ended up completely scrapping the woodworking area to fit a digital jacquard loom and that thing is booked around the clock, you have to plan 4-5 weeks in advance to get a session. Jeweler's bench is similarly busy.
In contrast the soldering and electronics workstations get regular use but I can usually just walk in and get a spot without scheduling or waiting much, which is almost never the case with the fabric stuff.
Nowadays, we are so used to all the injection molded plastic crap, and also so much poorer, that we can't understand why precisely manufactured products made from solid metal or wood are so expensive.
You can get 3D printers from BestBuy(!) for $200 retail. At that point, the cost of the filament is going to quickly exceed the cost of the machine.
At the $200 price point, your bill of materials is roughly $65 (about 1/3 of the retail cost). I challenge you to buy the raw materials of a 3D printer for under $100, let alone $65.
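Sanity-checking that arithmetic (the 1/3 BOM-to-retail ratio is the commenter's rule of thumb, not a published figure):

```python
retail = 200
bom_ratio = 1 / 3              # assumed BOM share of retail price
bom = retail * bom_ratio
print(f"Estimated BOM: ${bom:.2f}")  # ~$66.67, consistent with the ~$65 claim
```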
Bump.
Because we had our first high profile murder using a 3d printed weapon just last year.
In the end, I think it’s not about how a project was created. But how much passion and dedication went into it. It’s just that the bar got lowered.
One of the common examples in management books is the signage industry. You can have custom logos custom molded, extruded, embossed, carved, or at least printed onto a large, professional-looking billboard or marquee size sign. You can have a video billboard. You can have a vacuum formed plastic sign rotating on top of a pole. At the end of the day, though, your barrier to entry is a teenager with a piece of posterboard and some felt-tipped markers.
What has happened is that as the coding part has become easier, the barrier to entry has lowered. There are still parts of the market for the bespoke code running in as little memory and as few CPU cycles as possible, with the QA needed for life-critical reliability. There’s business-critical code. There’s code reliable enough for amusement. But the bottom of the market keeps moving lower. As that happens, people with less skill and less dedication can make something temporary or utilitarian, but it’s not going to compete where people have the budget to do it the higher-quality way.
How much an LLM or any other sort of agent helps at the higher ends of the market is the only open question. The bottom of the market will almost certainly be coded with very little skilled human input.
There are many people who code to make cool stuff and enjoy sharing, but there are even more people who code to look good on a CV.
I’m not trying to be mean, this is just an anecdote I had from my time hiring.
JB: Yeah but guess who did write it, me!
KG: Yeah but did you write this?
JB: Dude, I did, I told you to do the bendy every once in a while!
https://www.youtube.com/watch?v=TLvOLjHt4S0
[Edit: no need for the downvote, folks, it was an honest question although it seemed otherwise. I think the answers below make sense.]
This isn't the first time something like this has happened.
I would imagine that people had similar thoughts about the first photographs, when previously the only way to capture an image of something was via painting or woodcutting.
Paraphrased, "There's basically no business in the Western world that wouldn't come out ahead with a competent software engineer working for $15 an hour".
Once agents, or now claws I guess, get another year of development under them they will be everywhere. People will have the novelty of "make me a website. Make it look like this. Make it so the customer gets notifications based on X Y and Z. Use my security cam footage to track the customer's object to give them status updates." And so on.
AI may or may not push the frontier of knowledge, TBD, but what it will absolutely do is pull up the baseline floor for everybody to a higher level of technical implementation.
How much longer do we have to put up with people saying this? It's been four years now.
The things I am saying are now a year away, are not the things people were saying were a year away two years ago.
And you're going to have to put up with it forever, because "a year in the future" has always and will always be a year away.
I understand one of the chief innovations the AI industry produces is rhetoric and hype, but it's insufferable and repetitive.
A better AI isn't good enough. "Closer" to a stated goal isn't good enough.
Deliver results that have value to more than just enthusiasts and academics.
That's now. Right now, the tooling exists so that for >80% of software devs, 80% of the code they produce could be created by AI rather than by hand.
You can always find some person saying that it'll destroy all jobs in a year, or make us all rich in a year, or whatever, but your cynicism blinds you to the actual advances being made. There is an endless supply of new goalpost positions, which will never all be met, and an endless supply of charlatans claiming unrealistic futures. Don't confuse that with "and therefore results do not exist".
Mixing the two up is how we get a massive company like Microsoft to continually produce such atrocious software updates that destroy hardware or cause BSODs for their flagship Operating System.
That's not replacing software development. That's dysfunction masquerading as capability.
And none of what I said is goalpost moving. They are the goalposts constantly made by the AI industry and their hype-men. The very premise of replacing a significant amount of human labor underlies the exorbitant valuation AI has been given in the market.
It appears that your understanding of AI code generation reflects the state of 1-2 years ago. In which case of course it seems like what people are describing as reality, feels 1-2 years away.
> There is a gigantic chasm of difference between "80% of code they produce could be created by AI" and "80% of commits they produce could be created by AI".
This is exactly the goalpost moving I am talking about. I said 80% of code could be AI-written, you agreed, and followed up with "oh but it doesn't matter because now we're measuring by % of commits".
Technically 100% of the code they could produce could be created by a ton of very specific AI prompts. At that level of control it would be slower than typing the code out though.
Just throwing out random numbers like this is complete nonsense since there's about a million factors which determine the effectiveness of an LLM at generating code for a specific use case. And it also depends on what you consider producing by hand versus LLM output. Etc.
Today I fed to Opus 4.6 five screenshots with annotations from the client and told it to implement the changes. Then told it to generate real specs, which it did. I never even looked at the screenshots, I just checked and tested against the generated specs. Client was happy.
I don't know what it means.
some people build apps to solve a problem. why should they not share how they solved that problem?
i have written a blog post about a one line command that solves an interesting problem for me. for any experienced sysadmin that's just like a finger painting.
do we really need to argue if i should have written that post or not?
For example, if you wanted a pretty dress with a specific fabric and cut, you would likely have had to sew it yourself or pay a tailor because your off-the-rack options would be limited, costly, or ill-fitting. But people just did that without fanfare and it wasn't a counterculture. Or if you wanted custom cabinets or resin-coated live-edge stair treads, etc. You'd just figure out how to make it if you wanted it. Or you could pay someone else to do it.
Curious how this differed in northern Europe where Sloyd Woodworking has a long tradition in early education:
https://rainfordrestorations.com/category/woodworking-techni...
What has changed is that the fusion of the more artistic end of model making and woodwork is less lumped together with electronics and 3d printing.
I would say that there are much more makers, but they are more specialised.
Like… if the maker thing was less of an insane cult that died out than genuine excitement about things that actually did matter… well the whole thing falls apart.
We’re just not required to accept the (false, I think) premise this depends on, even if we’re inclined to agree with what it says about vibecoding.
Check out the Maker Project Lab weekly video showcasing awesome stuff from the maker community, it's inspiring and fun to see. https://www.youtube.com/@MakerProjectLab
And mastering a technology has lost its point.
Physical making is hard: you run up against the limits of plastic or the difficulty of cnc planning for various materials, as well as the limited value for small projects: people rarely make entire projects, instead making parts. So there is an upper bound for the utility of making. (btw, anyone have a laser welder or steel-capable CNC's they're tired of?)
Software making is what you make it, subject to the laws of complexity, and as valuable as its integration (computers, robotics). These in theory are limiting, but in practice there are effectively an infinite supply of valuable projects when the cost of production reduces. Deployments will be limited by access to customers, which is not a problem when people make software for themselves.
- I bet that holds true after tariffs. (It does, actually): PCB cost: $5.00, shipping: $5.63, total: $10.63
- I bet that holds true for custom aluminum parts, etc
for some strange reason, shipping and prices from China << shipping within the USA, still, even after tariffs
"Maker nation" might have been 3D printer company hype. Or just the whole US supply chain is full of price gouging
Actually, the future isn't vibe coding, it's vibe agenting. GPT 5.3 is so advanced, you don't need to write a program to do something. You tell the agent what you want, and it does it for you by "using" desktop apps like a person. If it can't do it manually, it'll write a program to do it. That's where we're headed.
With AI you can build tools fast. You can then version and release those tools, and improve them, fast. Then the AI can use that version of that tool. This gives the AI a fixed set of deterministic functionality that works the same way every time.
The CSO that had all their mail deleted happened because the tools they have right now aren't very good. Whatever that mail tool was, could be easily modified to have a limiter added that stops attempts to mass-delete emails. Hell, your own email client already will prompt you to confirm if you really want to "delete all emails" - because humans are stupid, like AI, and make mistakes, like AI. They just have to build the guardrails in, rather than hoping and praying that the AI will "behave itself". If the AI is a monkey at a joystick, we still control all the machinery attached to the joystick.
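A limiter like that is a few lines of code, not a research problem. A minimal sketch of the idea, in Python — the function names, the threshold, and the mailbox representation are all illustrative, not any real mail tool's API:

```python
# Hypothetical guardrail for an AI-driven mail tool: any single call that
# would delete more than a fixed number of messages is refused outright,
# no matter what the agent asked for. A human has to confirm bulk deletes.

MAX_BULK_DELETE = 25  # arbitrary threshold chosen for this sketch

class BulkDeleteRefused(Exception):
    """Raised when a delete request exceeds the bulk limit."""

def guarded_delete(mailbox: list[str], message_ids: list[str]) -> list[str]:
    """Return the mailbox with messages removed, unless the request is too big."""
    if len(message_ids) > MAX_BULK_DELETE:
        raise BulkDeleteRefused(
            f"refusing to delete {len(message_ids)} messages at once "
            f"(limit is {MAX_BULK_DELETE}); require human confirmation"
        )
    ids = set(message_ids)
    return [m for m in mailbox if m not in ids]
```

The point is that the guardrail lives in the deterministic tool, outside the model, so it holds no matter how the agent was prompted.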
Vibe coders treating those as the same category is what actually worries me. Even in regular software there's a feedback mechanism - unit tests go red, CI breaks. Vibe coding skips that too. You get working code that passes the happy path and nothing that tells you which 5% failure rate is the dangerous one. That judgment about problem category severity is the thing that's hard to develop without breaking things first.
AFAIK Replit and Claude Code have ways to reduce the rate of these kinds of errors, but I haven't deep-dived into how.
I mean this happens in normal development?
That is what people do every day with LLM. When they ask LLM to do something, they are being software developers without them or you even realizing it. But what are they doing but building something with the LLM that takes digital input and returns digital output. It is software. Summarize this email for me gpt. That is a tool.
And this is the worst this technology is ever going to be :)
Don't take my word for it, go try building something that you always wanted to build but did not have time for. If you do not have something like that at the back of your head, I doubt you have to be concerned about this topic.
If you did Illustrator in an afternoon it should only take a week or two for the whole product line?
It implements the (pretty large) subset of the tool I personally need.
But everyone needs a different subset, and maintaining a coherent codebase that supports everyone's need is difficult.
What could put Adobe and others out of business is everyone reaching for Claude code, cursor or codex the next time they need complex software tailored to their use case instead of the one size fits all software that is commercially available.
Couldn't be happier. I make things because I want to see them exist, not because it was hard.
I think I’m learning less (about the code) but making more. Maybe that’s okay? There are other things to learn about. My code has users, it processes money. I user test, I iterate, I see what works and what they need.
All these maker types dropping that differentiator immediately in the name of pragmatism.
Never has it been more exciting to be a builder (software)! So much momentum and so little getting blocked. I am learning faster than ever even with LLMs doing so much of the heavy lifting. It is so fast to iterate and just MAKE STUFF!!!
> You can watch something structurally similar happening with vibe coding right now. People are rapidly prototyping tools that threaten to displace entire SaaS business models. But the value generated by all that rapid iteration and prototyping flows upward. It accumulates at the model layer, in the training data, in the infrastructure. The vibe coders themselves risk becoming interchangeable, each one spinning up impressive demos without accumulating durable value of their own. The pattern rhymes: cheap tools democratize one layer, and the layer beneath captures the surplus.
dot dot dot.
It’s obvious with each iteration of llms that vibe code , write-only-code is here to stay in many industries if not everywhere.
Instead what we were left with was an endless hunt for 'models', and no companies publishing their specs. Everything had to be done custom, and at best some niche manufacturing for weird side quests like add-ons for OneWheels, or cases for Raspberry Pis.
The closest thing to practical I have 3D printed is a wedge to better aim my google doorbell. I used to make some beautiful planters. I certainly am not 3D printing a droid, or a dishwasher impeller, or a fan blade for my 30 year old fridge.
So yes, while Claude code is fun, and you can build neat prototypes, it takes a lot of work to build a full product and then maintain it, scale it, deploy it. That takes persistent joy in what you're doing because you're not necessarily claude coding everything.
Learning modelling is a huge time sink, as is learning to make threaded parts, or anything modular so you don't have to re-print everything for changes. It's great, but the printing is the easy part.
Sure, you can do that, it's an option, but no serious engineering effort is being left entirely up to the AI.
Vibe coding is essentially the Jackson Pollock approach to software building. Throw a bunch of paint down, with very little control, and look, we have something novel.
It doesn't mean you're going to replace all the ways of making art with paint throwing.
I'd love to start seeing more discussions about alternative approaches to working with AI. The recent Vinext article was great: https://blog.cloudflare.com/vinext/. This seems to be "the way" for working with AI in a high-stakes production environment, but what other ways are there?
I fear the focus on vibe coding is diluting and taking focus away from far better alternatives. Maybe because the narrative around those aren't quite so dramatic?
As for the parallels with the maker movements, here's one example: drones are one of my hobbies. I love drones and I've built countless fpv ones. For anyone that hasn't done that, the main thing to know is that no two self-built drones are the same - custom 3d printed parts, tweaks, tons of fiddling about. The main difference is that while I am self-taught when it comes to drones, I have some decent knowledge in physics, I understand the implications of building a drone and what could go wrong: you won't see me flying any of my drones in the city - you may find me in some remote, secluded area, sure. The point is I am taking precautions to make sure that when I eventually crash my drone (not IF but WHEN), it will be in a tree 10km from anything that breathes. Slop code is something you live with and there are infinite ways to f-up. And way too many people are living in denial.
The real parallel might be the early web era where anyone could make a website but finding them required Yahoo directories and later Google. Right now vibe coded apps have the same discovery problem - they exist but there's no effective way to find or evaluate them.
I’m not remotely thinking about AGI. I dabbled in the maker movement and it just doesn’t compare in the sheer velocity we have with GenAI and mass production of code.
It's like comparing Christianity to water wheels or gay pride to the Saturn V rocket. It's just not really analogous in any way.
I do agree with the author about commoditization, however.
The most likely outcome is that software will be commoditized and software developers commoditized even harder. If we still need software engineers to prompt, you'll find plenty of people in India able to do those tasks, not necessarily with great quality until they too are replaced by better AI.
This whole situation inspired me to actually dive harder into Maker type stuff such as learning how to design PCBs, but one thing I found is that this TOO is very close to being automated by AI. To actually get hardware made, even prototyping PCBs, you NEED to go to China, and the Trump tariffs cut into the cost of doing these activities hard.
Maybe you could research how to make your own PCBs? It can be done at home with a little equipment and then you can offer it as a service to others.
The bigger issue with PCBs is that even with a nice prototype, actual manufacturing needs to be done in Shenzhen for any sort of cost competitiveness if you want to step outside of the hobby realm (just as the author of this essay stated).
It's really, really hard to see where the USA stands in the value chain any more once LLMs have been deployed. If all the physical manufacturing lives in China, where the manufacturing supply chain also lives, and Chinese AI companies can very easily distill US LLM models, then the two remaining US advantages are the dollar's role as reserve currency --- something that crypto bros and the US president are working on eroding at a record pace --- and the fact that this overvalued dollar gets the smart people in China and India to emigrate to the US, which is also becoming politically less viable.
[0] https://oshpark.com/
Developing nations that were looking to tech to climb the economic ladder, are watching that ladder be pulled up.
Most of the upside will go to the US and China. Europe is lagging shockingly on AI spend, they're extremely far behind (but with constant plan announcements). If you didn't know any better, you'd think Europe believed the year was 2010.
If at all it will make me do more little hyper specific projects.
I'm also not sure that "vibe coding" didn't have a phase where early adopters were mucking around. I saw the early versions of GPT much earlier than ChatGPT, and a lot of folks were using transformers for coding before Claude.
edit: I read this title wrong, thought it said "end the maker movement"
personally I enjoy creation and writing code so I'm not going to vibe code my hobby/passion project, I don't care if theoretically it'll save me x amount of time, the code is rote for me anyway but I have to be actively engaged in it to enjoy it
The author already touched on a better answer. Scenius worked because of the "permission to fuck around." Nobody expected your Arduino to ship. But the conclusion hands you four value-capture strategies and quietly revokes that permission. "Play freely, but collect the exhaust" isn't permission—it's a conditional license.
I once learned songwriting from an indie musician who refused autotune and wrote by hand. He said the point of busking isn't playing because there's an audience. It's playing when nobody stops. You play anyway. That's how you find your sound.
This gets at the root of "evaluative anesthesia." It's not that our tools are too powerful. It's that we're asking "is this valuable?" at every step. A busker doesn't ask that. Taste and judgment accumulate as a residue of immersion, not deliberate capture.
What vibe coding needs isn't a smarter consumption strategy. It might just be the courage to play to an empty street.
Vibe coding does none of the above
None of these sophisticated articles mention that you could already steal open source with the press of a button before LLMs. The theft has just been automated with what vibe coders think is plausible deniability.
> The central promise—that distributed digital fabrication would bring manufacturing back to America, that every city would have micro-factories, that 3D printing would decentralize production—simply didn’t materialize.
This version of the Maker Movement only ever existed in news articles and hype bubbles.
The Maker Movement was never about building small factories and consumer 3D printing was never about manufacturing things at scale. Everyone who was into 3D printing knew that we weren't going to be 3D printing all of our plastic parts at home because the limitations of FDM printing are obvious to anyone who has used one. At the time, consumer 3D printers were rare so journalists were extrapolating from what they saw and imagined a line going up and to the right until they could produce anything you wanted in your home.
The Maker Movement where people play with Raspberry Pi, Arduino, and cheap 3D printers is possibly stronger than ever. Everything is so cheap and accessible now. 10 years ago getting a 3D printer to produce parts was a chore that required a lot of knowledge and time. Now for a couple hundred dollars anyone can have a 3D printer at home that is mostly user friendly and lets them focus on printing things.
The real version of the Maker Movement just isn't that interesting to mainstream because, well, it's a bunch of geeks doing geeky things. There's also sadly a lot of unnecessary infighting and drama that occurs in maker-related companies, like the never ending Arduino company drama, the recent Teensy drama that goes back years, or the way some people choose their 3D printer supplier as their personal identity would rather argue about them online than print.
> This version of the Maker Movement only ever existed in news articles and hype bubbles.
That version of the Maker Movement was heavily pushed by the city and state governments in Massachusetts. They put money into it; foundations funded it.
It was seen as a way to give students another pathway, for those who weren't interested in going to college. I've seen first hand how some kids who weren't interested in school or academics really got into the Maker thing, which got them into STEM.
Some of them ended up going to college to study engineering and related fields. Some of them ended up working in related fields and started their own businesses.
As time went on, it became clear to me that the Maker Movement wasn’t going to go mainstream, although 3D printing has found another niche audience recently in the home lab space. Many home-labbers on YouTube 3D print their own cases and other parts.
There will be normies that take up vibe coding like some knit their own sweaters or grow their own food because they enjoy it.
And there will be Fortune 500 companies that will vibe code certain products.
An example 3D workflow: Prototype design -> 3D print -> test/break -> production design -> real manufacturing process
The equivalent vibe code workflow: Vibe code -> slop -> test/break -> real developers -> real development process
--
The real test for vibe coded stuff (much like 3D printed crap at craft fairs) will be if someone actually buys it. But much like those 'makers', vibe coders will have to go through the "real development process" if they want to make money at scale.
The more interesting question is what vibe coding actually democratizes. It's not engineering---it's implementation. The bottleneck shifts from 'can you write the code' to 'do you understand the domain well enough to specify what the code should do, and verify it's doing that correctly.'
I've watched domain experts---people with deep subject matter knowledge who previously couldn't build because they lacked CS fundamentals---suddenly able to ship working tools. Code quality is often brittle. But the problem understanding is sharp, because they're building something they actually needed.
The maker analogy would have been more accurate if 3D printers only failed when you asked them to print something you didn't fully understand. That's where vibe coding fails too.
If vibe coding ends, it will end because model collapse, diminishing returns, escalating costs as the VC money run out, etc. cause LLMs to fail to deliver the promised capacity to, per Dijkstra, "program if you cannot". There will be a culling as amateurs and dilettantes with no technical knowledge or interest lose interest in programming itself, and the field will collapse back into a niche. Amateurs and dilettantes crashed out early of the maker movement, if they got involved at all; "making" was for technically inclined people in the first place.
I realize that the wildest promises of 3D printing and maker stuff like Arduino never came to fruition, but maker spaces have matured greatly. If that is the analogy we are making, that means that vibecoding won't reach "the masses" necessarily but it will be popular beyond the present audience.
Bambu Labs among others have made 3D printing far more click-and-print, with little to no tinkering.
I assume some could use it to make products for commercial sale, but when I heard of it I really just pictured it mainly for small personal projects.
I have always had an interest in electronics, but without going to college there was really no obvious path to get into creating small DIY projects. Then years back came along the Raspberry Pi. I bought one along with a big variety of different sensors and a breadboard and all the things one would need to create something. I pictured making things that would email my mom when her plants were getting dry, and many other dreams with all the sensors.
But it was still overwhelming. Lots of knowledge you need before you even start so it felt hard. But eventually I set off to try something and with many hours of searching for how to code what I wanted and essentially copying code and maybe slightly altering it to my needs I did finish one project. It was basic but I was always proud of what I accomplished. I had an IR sensor that would detect if someone walked in front of it and when that happened I also had a power relay that was connected to a lamp. When motion detected the lamp would then blink SOS in Morse code and it would also send me an email saying motion detected. What a feeling when I ran it and it worked on the first try.
But that took so much time searching and trying to find the code I wanted. I see vibe coding and imagine I could do the same thing in minutes versus hours. I don't think I will ever make some project that is going to make me money, but I do imagine that with vibe coding the barrier to creating some of those projects I dreamed up in my head for personal use is much lower and more obtainable.
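The blink logic in a project like that is small enough to sketch. Here's a hypothetical pure-Python version of the SOS timing described above — the GPIO, relay, and email parts are deliberately left out, and the durations follow the usual Morse convention (a dash is three dot-units, one unit between symbols, three between letters):

```python
# Generate the on/off timing a relay-driven lamp could replay to blink
# a word in Morse code. Only S and O are needed for "SOS" in this sketch.

MORSE = {"S": "...", "O": "---"}

def morse_pattern(word: str, unit: float = 0.2) -> list[tuple[bool, float]]:
    """Return (lamp_on, seconds) pairs for the given word."""
    pattern: list[tuple[bool, float]] = []
    for i, letter in enumerate(word.upper()):
        for j, symbol in enumerate(MORSE[letter]):
            if j:  # one unit of silence between symbols within a letter
                pattern.append((False, unit))
            pattern.append((True, unit if symbol == "." else 3 * unit))
        if i < len(word) - 1:  # three units of silence between letters
            pattern.append((False, 3 * unit))
    return pattern
```

A driver loop would then just walk the list, switching the relay and sleeping for each duration — which is roughly the part an LLM can now produce in one prompt instead of hours of searching.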
It is not just vibe coding that is being developed, but general intelligence.
Lots of powerplants to fuel the surplus.
No, because too much money has been pumped into it.
From individual tinkerers and ideas guys cranking out all the projects that would never have been subsidized: there's a lot of that.
And with corporations, I'm seeing lots of products that would have taken 8 quarters all being compressed into one now. The flip side is that all 8 quarters wouldn't have been allowed to happen, as priorities would have shifted before the product or feature roadmap ever got that far; but now all of it is being built out, and other iterations and directions are being pursued simultaneously.
After all of this is shown not to be saving money, or creating much value because they're doing too much without market validation, a more intelligent approach will emerge and less vibe coding will occur.
Anyway I think we are seeing a scenius phase -- it's just happening everywhere all at once on a world stage. And it's exciting. As with any moment in time there's a ton of experimentation and a small number of break-out hits. Also the pace of change means there's less staying power for a break-out hit than there used to be.
But the quick break-out hit phenomenon is particularly applicable for things that are more about the attention economy and less about the boring hidden things that traditionally have been where the economy's silent toil is really centered.
All of this makes me feel the author is too close to the creative end-consumer layer e.g. "make something flashy and cool whether it's a 3d-printer in a 5th avenue dept. store window, or a new app front end" but perhaps less focused on the full depth of things that really exist around them.
This really resonates with me in that a lot of NYC's "tech" circa 2013 was 3d printing oriented, much more so than in Silicon Valley. And I wondered why? but then it was a reflection that tech in NYC then was more about marketing, story telling, and less about the depth...
Obviously you had the west coast makers, you had the burners, so I don't mean to conflate all these different things. But the idea that Maker Faires were really about bringing manufacturing back... I don't know, I think it was more about the counterculture, about having fun. I think that's coming back to tech right now as well in a sense. Even if it's also got dystopian overtones.
There are plenty of products now that only exist because of what it did deliver on. Anyone who spends time in the niche communities where it is thriving can see that... On the low end look at Apollo Automation and the story of Grismo Knives; at the high end look at Hadrian Manufacturing.
Vibe coding is a terrible name, but what a skilled dev can do with a deeply integrated AI coding assistant is amazing. It changes the calculus of "Is it worth your time" (see: https://xkcd.com/1205/ ).
Is it helpful in my day to day: it sure is. Is it far more helpful in doing all the things that have been on the back burner for YEARS? My gods yes! But none of that is matching the hype thats out there around "vibe coding".
If someone tells me they ran a marathon, I'm impressed because I know that took work. If someone tells me they jogged 100 meters, I don't care at all (unless they were previously crippled or morbidly obese etc.).
I think there are just a ton of non-engineers who are super hyped right now that they built something/anything, but don't have any internal benchmark or calibration about what is actually "good" or "impressive" when it comes to software, since they never built anything before, with AI or otherwise.
Even roughly a year ago, I made a 3D shooting game over an evening using Claude and never bothered sharing it because it seemed like pure slop and far too easy to brag about. Now my bar for being "impressed" by software is incredibly high, knowing you can few shot almost anything imaginable in a few hours.
It's hard to not be dismissive or gate-keeping with this stuff, my goal isn't to discourage anyone or to fight against the lower barriers to entry, but it's simply a different thing when someone prompts a private AI model to make a thing in an hour.
Why share something that anyone can just “prompt into existence”?
Architecture wise and also just from a code quality perspective I have yet to encounter AI generated code that passes my quality bar.
Vibe coding is great for a PoC but we usually do a full rewrite until it’s production ready.
————
Might be a hot take, but I don’t think people who can’t code should ship or publish code. They should learn to do it and AI can be a resource on the way.. but you should understand the code you “produce”. In the end it’s yours, not the AIs code.
You should consider trying to use AI in a programming language that scores high in the AutoCoderBenchmark.
I think now you are freed up to make a shooter that people will actually want to play. Or at least attempt it.
We probably need to come to terms with the idea that no one cares about those details. Really, 2 years ago no one would have cared about your hand crafted 3d shooter either I think.
Then "impressive" shouldn't even be the benchmark. If someone gifted me $10K, I'm not going to care if they earned it in a competition or won it in a lottery. Value is value. I'm gratefully accepting it and not being snobby about it. I couldn't care less about how "impressive" anything is if it's useful to me.
It's also why AI generated code is a nightmare to read and deal with, because the intention behind the code does not exist. Code handling malformed input because it was a requirement two years ago, or a developer throwing in a quick hack to fix a problem: these are things you can divine and figure out from everything else.
Taking this to an extreme, let's say vibe coding becomes real enough, and frictionless enough, that you can prompt a first person shooter into existence in a few minutes or hours.
If/when this becomes true, nobody will want to play your shooter. You'll share your shooter with people and if they care at all about shooters, they'll just go prompt their favorite AI tool and conjure their own into existence.
Admittedly this is a bit extreme, and we aren't there yet. But I've thought about this in relation to art, and how some people now go "well, this empowers people who didn't know how to make a movie/cartoon/painting/game, it's empowering and democratizing". But in my mind, art is a form of communication between humans. Without the exchange between humans, art cannot exist. If all of us are each lost in our own AI-powered projects, and if anything can be easily conjured out of thin air, then why bother with the next person's art project (game or whatever)? I don't care about your game, let me make my own in a few minutes.
I'm thinking about potential counterpoints: ah, yes, but it's about "ideas". While we can both make our ideas reality, my ideas are more inventive, so my AI-powered projects are more appealing. I'm not convinced about this; I think slop will dominate and invade public spaces, but also... why draw the line at ideas? Why is "skill with a pencil" replaceable with AI-slop, but ideas aren't? Ideas are often overrated, what matters is execution, anyway.
Quick answer: no. Long answer: it's the opposite. As an example, you can use Claude Code to generate, build and debug ESP32 code for a given purpose; suddenly everyone can build smart gizmos without having to learn C/C++ and know a ton of libraries.
I have Arduino and raspberry Pi boards. I am perfectly capable of hand writing code that runs on these machines. But they are sitting in the drawer gathering dust, because I don't have a use case -- everything I could possibly do with them is either not actually useful on a daily basis, or there are much better & reliable solutions for the actual issue. I literally spent hours going through other people's projects (most of which are very trivial), and decided that I have better things to do with my time. Lots and lots of people have the same issue.
And Claude Code is not going to change a single bit of that.
Also, it's not about whether there are better or more reliable options; that's the opposite of the maker mentality - you do it because it is useful, it is fun, or just because you enjoy doing it.
Such as designing some light fixture, printing it, and illuminating it with an ESP32 and some WS2812 LEDs. Yeah, you could spend an afternoon coding color transitions. Or use Claude Code for that.
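For what it's worth, the core of a color transition is just interpolation between two colors; a minimal sketch in plain Python (the function name and frame-based signature are mine, and the per-frame write to the LED strip is omitted):

```python
# Linearly interpolate between two RGB colors over n frames, inclusive of
# both endpoints. A driver loop would push each frame to the LED strip.

def fade(c1: tuple, c2: tuple, n: int):
    """Yield n RGB tuples stepping from c1 to c2."""
    for i in range(n):
        t = i / (n - 1) if n > 1 else 1.0  # 0.0 .. 1.0 across the fade
        yield tuple(round(a + (b - a) * t) for a, b in zip(c1, c2))
```

An afternoon of hand-tuning versus a one-line prompt, for logic this small, is exactly the trade-off being described.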
The other day I also developed an RGB-RGBW converter using an RP2040; Claude did most of the assembly, so instead of taking a couple of days, it took a couple of hours.
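One common RGB to RGBW mapping (not necessarily what that firmware does — this is just the textbook approach, shown here in Python rather than assembly) extracts the shared white component and moves it onto the dedicated white channel:

```python
# Convert an RGB triple to RGBW by pulling out the common white level.
# The white channel takes min(r, g, b); the color channels keep the rest.

def rgb_to_rgbw(r: int, g: int, b: int) -> tuple[int, int, int, int]:
    w = min(r, g, b)
    return (r - w, g - w, b - w, w)
```

The appeal of doing it on an RP2040 in assembly is deterministic per-pixel timing; the arithmetic itself is trivial, which is why an LLM can draft most of it.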
I don't prefer no code; my point is software is a barrier on embedded systems, and if I - someone who can actually program in c/c++, python and assembly, see huge benefits in using LLMs, for someone at an entry level it is a life changer.
If you're using a Pico, you can use PIO to have a bit more power. (I use it to control stepper motors with a smooth accel/decel ramp. It's doable with RMT, but not as easy.)
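The accel/decel ramp idea is independent of PIO vs RMT; a rough sketch of a trapezoidal profile in plain Python (function name and parameters are mine — a real driver would turn each per-step rate into a step-pulse delay fed to the PIO state machine):

```python
# Trapezoidal motion profile: speed ramps up over the first accel_steps,
# cruises at max_rate, then ramps down symmetrically at the end of the move.

def trapezoid_profile(total_steps: int, max_rate: float,
                      accel_steps: int) -> list[float]:
    """Return the step rate (steps/second) for each step of the move."""
    rates = []
    for i in range(total_steps):
        ramp_up = (i + 1) / accel_steps            # fraction of full speed
        ramp_down = (total_steps - i) / accel_steps
        rates.append(max_rate * min(1.0, ramp_up, ramp_down))
    return rates
```

Offloading the pulse generation to PIO means the CPU only has to compute this profile and queue delays, rather than bit-banging every edge.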
If whataboutism is all you have, this conversation is over.
> I suggest you actually look at a codebase of a proprietary device before forming a proper opinion
You have no idea what codebases I've seen and worked in, so don't assume I have not. My opinions are well-formed.
Why not? You've been quite comfortable assuming things so far, without actually contributing anything of substance to the conversation. Your opinions may even be well-formed, but if they are, your communication skills clearly aren't.
So, how has been your experience using LLMs as a maker (the actual topic) or in the context of IoT development (the topic I was replying to)? Mine has been quite positive, ranging from ensuring specific blocks of assembly code are deterministic (instead of having to check dozens of pages in a manual, and count instructions at every adjustment), to building both code, test fixtures and build infrastructure, to generating documentation, to actually hunt and fix security and logic issues on older codebases.
When people read "vibecode" they assume a clueless intern operating Cursor without any idea of what he's doing (in part because of the overhyped mishaps of LLM-generated code), as opposed to the old fox with decades of experience who knows every detail by heart. Thing is, the clueless intern will produce much better code with LLMs than without (and fewer defects, too), and the old fox will produce much more, because he will delegate some tasks to coding agents instead of less senior teammates, and have results in hours, not weeks.