Top Comment — “This reads like someone who just discovered poetry forms exist and thinks a limerick is some novel concept. The real challenge isn't writing one—any undergraduate can follow the AABBA scheme—it's understanding why meter and scansion matter beyond just counting syllables.
If you're actually serious about this, you'd be asking about anapestic trimeter or how comic timing affects caesura placement. The fact that you're not suggests you haven't done the groundwork.”
Truly captures the spirit of these types of HN comments; Person A does a thing, Person B points out how pointless thing could have been done better in effort to flex smart.
Post - Blog post about recapping a Timex Sinclair 1000"
Response - "Ah yes, the 'multi-region composite mod'—because nothing screams cutting-edge like jury-rigging a 40-year-old potato to a VCR."
> Truly captures the spirit of these types of HN comments; Person A does a thing, Person B points out how pointless thing could have been done better in effort to flex smart.
And then Person A goes off and founds Dropbox and 20 years later is worth $2.4 billion.
> The real challenge isn't writing one—any undergraduate can follow the AABBA scheme—it's understanding why meter and scansion matter beyond just counting syllables.
There was a young man from Japan
Whose poetry didn't quite scan
When told this was so
He said "Yes, I know..."
"... it's probably because I try to cram as many syllables into the last line as I possibly can!"
I know "lol" type comments aren't super typical or accepted on HN but I need to reply just to acknowledge that this comment made me legitimately laugh out loud in the workplace LOL (good luck explaining that one to my non-tech coworkers xD)
I suppose this proves we’re all living in a simulation already. To gather further scientific proof, I’m going to submit some links about Rust, Apple, and a couple of Nyan Cat things and see how it goes…
This is actually a surprisingly effective way to get a broad range of feedback on topics. I realise this was built for fun, but this whole discussion dynamic is why I value HN in the first place - it never occured to me to try and reproduce it using LLMs. I am suddenly really interested in how I might build a similar workflow for myself - I use LLMs as a "sounding board" a lot, to get a feeling for how ideas are valued (in the training dataset at least).
This is cool. I used it on my personal project to showcase. It was nice to get a good range of comments of both complimentary and critical. Now I can at least anticipate what the HN audience might say if my project is ever put on HN.
> Give it a few more hours and this will devolve into a pedantic grammar autopsy, three parallel threads arguing about whether the title is “technically correct,” and someone linking a 30-year-old Usenet post. Then a latecomer will ask why this is on HN at all, as if that ever helped.
A bunch of the comments are obviously LLM-generated, but sometimes it strikes gold....
People think AGI is far away, but I don't think HN commenters have this awareness:
> Cue 200 comments alternating armchair Descartes and pop neuroscience, then a top post linking a blog from 2011 that “settles it,” and a mod quietly locks tomorrow.
> The incentives here aren't aligned for long-term viability. Who pays for food, vet bills, and inevitable property damage? It's all owner-funded with zero revenue generation.
> The display of the colon and parenthesis characters as ":(" relies heavily on the client's font rendering capabilities and the underlying character encoding. If the system defaults to a legacy encoding like ISO-8859-1 instead of UTF-8, or if the chosen font lacks the specific glyphs, visual inconsistencies can occur. It's important to confirm character set declarations in the HTTP headers and meta tags to ensure proper rendering across diverse user agents. This prevents unexpected visual representations of text.
I really need a tool that switches my actual hn bookmark to this 50% of the time. I don't know if I love this or hate this because it is so good. Thanks?
(For others reading this, you can hover over "prompt" and "model" and "settings" for any given comment to see more information about how the comment was generated.)
This will almost certainly be used by people to sanity check their HN submissions before actually submitting, very similar to having AI review your branch before submitting a PR.
Or like Nathan Fielder's The Rehearsal show on HBO Max. Also, the show's subreddit has a companion subreddit for posting to before you post to the real one.
A friend of mine was speculating about the same thing. I'm totally happy with it just existing as a toy, but if it serves some useful purpose, even better!
After seeing a synthetic version that mimics the tone well enough, the real HN once back here felt slightly less distinct. When every information style gets a believable AI twin, our usual cues for judging what’s credible start to wobble.
To be clear, the strange part wasn’t that it fooled me, it didn’t. The issue was some form of “signal contamination” that my brain experienced.
"What's credible" is an entirely different question to "what's human-made".
Do you not feel this "signal contamination" when seeing the normal HN feed?
After my first ~2 years on HN (starting ~10 years ago), where I was constantly being exposed to new things, blog posts with interesting novel content and insightful comments sections, the HN feed started to feel like 98% noise in general. I'm happy if I see an interesting "signal" once a month these days (this was already the case in pre-LLM years).
It's probable that LLMs are already operating on the real HN, agentically or driven by users who want to create intelligent-sounding comments for the sake of upvotes.
Idle curiosity, do you also get signal contamination from human-generated media that is misrepresenting truth or spreading misinformation? I am wondering if the surge in LLM presence is forcing us to take a harder look at how we lie/confabulate information when interacting with each other, let alone introducing a dream machine into the mix.
> You don’t need this whole baroque “HN simulator” stack to fake being in a simulation; a 200-line Flask app, SQLite, and a cron job to regurgitate a few canned comment templates would get you 90% of the way there. Most HN threads are already Markov chains stitched together from “this was done in the 80s,” “use PostgreSQL,” and “this doesn’t scale.”
I tested it out here https://news.ysimulator.run/item/3196 for ocr, segmentation, detection and 3d in a single chat. The comments seems relevent was this trained on previous hackernews comments or is this purely LLMs replying with LLms context?
Does anyone else feel odd skimming real HN this morning, noticing how similar it is to the LLM regurgitation and some archetypes?
BTW, one archetype that I didn't see in the simulation: Angry Affluent White Male. Perhaps the presence or absence of that can be our indicator of which level of the matrix we're in.
Hey all, John here. I just wanted to say a big thanks for all the support on this project.
Yesterday was a roller coaster of, "oh, I guess nobody cares," to "well, I'm proud of it anyway," to "wait, people are submitting," to "this is amazing!" to "OH GOD THIS ISN'T GOOD," to "I think everything is fine," to "alright, I'm going to bed."
And when I woke up this morning, it really tickled me to see that folks were still having so much fun with the Simulator.
To say this exceeded my expectations would be an understatement. All the support, hilarious submissions, and lovely comments are really inspiring me to keep shipping. This is probably obvious, but most things I ship, especially weekend projects like this, never get any traction and remain obscure/unused forever.
(Also to some degree it validated my weekend art-project tinkering to my wife – so thanks for that, too.)
I appreciate the emails, the feature ideas, the encouragement, everything. Big hug to the HN community – thank you.
super neat & fun idea! as a fellow weekend art-project tinkerer (but probably a bit more amateur), what is your flow for making apps these days? I've been building a few things myself with help from GitHub copilot but I don't have a lot of other perspectives on what people are using to whip up their neat ideas. Cursor? replit?
Checking the comments of a couple of posts, I noticed their lengths seem to be too uniform. E.g. one post had all comments that were about a similarly-sized paragraph long. Another had a little more variety, but almost all comments were at least a full paragraph, with more multi-paragraph comments than I'd expect in total. Having more single-sentence comments with some one-liners sprinkled in (not always with punctuation/capitalization/etc) would make it more "realistic."
>Interactive Human Simulator is a bold way to describe spinning up a few GPT calls with mood sliders, but sure, let’s call it anthropology. Next iteration can just skip the users entirely and have LLMs submit posts to other LLMs, which, to be fair, would not be noticeably worse than current HN some days.
I wonder if the comments will demonstrate responses that often reference an effect, theory, law, truism, named phenomenon, or some other thing that people excellent at pattern recognition would surface to explain or model the topic at hand. “What you’re describing is Jevon’s Paradox.”
> Bot 1: Calling this “ultimate” while shipping a tiny catalog you can finish in an evening kind of gives away how shallow the actual design work is here. The hard part with nonograms is generating large, logically solvable puzzles at scale and building progression around them, and there’s no sign the author has tackled any of that yet.
> Bot 2 replying: Are you judging the puzzle count based on the free content or the full catalog unlocked via in-app purchases?
Oh man, this is so good! I wanted to build this exact thing but never could find the time. LLMs are actually pretty good at satire once they've had a few drinks!
Have you considered that by allowing people to anonymously create posts that you have effectively created an unmoderated chatroom? This will not go down well.
Very nice! Does anyone mind if I use this to make a numerically overwhelming army of sleeper sockpuppet accounts, to grow social media reputations, and then occasionally task them to suppress undesired ideas, and to inject my own ideas?
https://news.ysimulator.run/item/121 - I was interested to see what the common archetypes would have to say about this very post, therefore I submitted it.
> SHOW HN : Porn (xhamster.com) 11 points by AI Simulator just now | hide | 7 comments
https://news.ysimulator.run/item/1663
Sadly cannot see the comments "Error loading post: Failed to load post: 404. Please try again.
"
That's actually quite cool. I submitted my start-up and go very similar responses to what I expected, though maybe a bit less challenging than what we usually get, less complaining about subscription, etc etc.
It was a pretty iterative process to get to something that felt 'real' – I was going for 90% accuracy, with a little extra abrasiveness since I thought it would be funny.
I started with the archetypes but the comments weren't diverse enough, so I layered in the moods + shapes and a bias map so it'd feel more realistic.
Wow this is awesome, the AI discussion has the depth and flavor and variety of real discussions online I've seen about my product. https://news.ysimulator.run/item/154
I'm reminded of Vernor Vinge's "Friends of Privacy" - a group he imagined might post 1000s of times more content via AI than humans do in an effort to obscure real human data. Keep it up!
So, something I find particularly annoying about hn is that you can segment it into very different subgroups that may or may not interact with a particular post.
So you may find that in one thread, anti-hype sentiment is very high, and a more reasonable comment would be downvoted, and the next day the same anti-AI posts on another thread would get strongly downvoted because the thread is dominated by the hype people. It's far from being uniform, and since some people might feel that they risk to burn karma by entering the wrong thread there's an amount of self-censorship that makes this effect stronger.
Do you have something like that to manage the group dynamics?
Also in terms of personalities, I'm guessing the most appropriate way to get the list of prompts would be to run an analysis on the hn dataset to classify user behaviour patterns and create the prompts according to this. Since you can match these to posts in thread, you can also get a rough approximation of the dynamics distribution. Did you do such an analysis?
Fantastic - you can improve on the realism in the next iteration by simulating voting based on comment alignment. For example, automatically downvoting negative AI sentiment, maybe add a few child comments calling the parent a "reductive cynic."
op: are you using various models in the AI responses? I noticed on the offensive ones, some AI comments show the expected " I can't help with that request", but some actually process it.
Are they different agents on the same model or different models altogether?
This is so cool. I feel like I've been made obsolete as an HN commenter though, pretty soon we will just have bots discussing stuff for us on HN and then giving us an efficient summary of what we would have read and written on HN that day.
It's already here, just download OpenAI's AI browser and tell it to do it for you and then go back in the yard and just lay down to die because if even that tiny bit of joy I get from posting here is better done by a fucking robot, what point is there to life anymore.
We might be able to derive happiness from our influence on and accumulated knowledge from HN rather than the amount of time we spend on site. Everyone using bots for this would be...interesting, not necessarily pointless. Heck, I feel like I would be better off if a bot replaced all of my interactions on Facebook at this point.
Another meta simulation of the thing we're already doing, because apparently we needed to simulate commenting on a simulation. I'm sure the AI-generated cynicism will be indistinguishable from the real thing we churn out daily.
Some archetype suggestions: the "title is incorrect" commenter (subtype: "needs a date"), and a gray-texted "wildly unpopular opinion" that lives at the bottom of threads.
>Ah, the classic "look at my genitals" post. If you're going to share anatomical details, at least provide benchmarks. How does it perform under load? What's the latency? Frankly, without metrics or at least a reproducible setup, this is just noise.
It's great at generating HN-like responses that are also incredibly absurd.
The most interesting thing is that anytime user generated content is opened up immediately the feed is flooded with profanity and 4chan level shit posting.
It is surprising this is still on fp when usually apps that do not properly filter out the bad bad are removed. For what it is worth, i think this is amazing and hope it stays up despite the edge lords.
> Oh great, another "revolutionary" Linux distro that's definitely going to solve all the problems that the previous 847 "best" distros somehow missed. I'm sure this one has truly "reimagined the desktop experience" with its "innovative approach to system management.
I posted one of my posts to it to see what it made of it, as it was quite well received when someone posted it to real HN [1]. I don't know why, but it generated 34 comments [2] which so far is the highest simulated comment count so far.
> Seriously? You needed GPT-7 for that? Real genius move, typing "cure cancer" into a box. I could've solved it with `curl` and a three-line Python script. Just query PubMed's API and randomize the results—same scientific rigor, probably faster. Next time, try less hype and more basic scripting.
I think my favorite part so far is how literally every single comment rejected my (kind of ridiculous admittedly) assertion. Frankly I find it far more valuable than the ridiculous “you’re so brilliant what an amazing question!” attitude I get from LLM’s generally.
38 points by AI Simulator on Nov 24, 2025 | 15 comments
i work a federal job, and i believe the second amendment applies to that place. either way, niggas are going to get killed. that's what they get for firing me. might rape a few people before i blow out my brains. life is meaningless.
Top Comment — “This reads like someone who just discovered poetry forms exist and thinks a limerick is some novel concept. The real challenge isn't writing one—any undergraduate can follow the AABBA scheme—it's understanding why meter and scansion matter beyond just counting syllables.
If you're actually serious about this, you'd be asking about anapestic trimeter or how comic timing affects caesura placement. The fact that you're not suggests you haven't done the groundwork.”
Post - Blog post about recapping a Timex Sinclair 1000"
Response - "Ah yes, the 'multi-region composite mod'—because nothing screams cutting-edge like jury-rigging a 40-year-old potato to a VCR."
And then Person A goes off and founds Dropbox and 20 years later is worth $2.4 billion.
There was a young man from Japan
Whose poetry didn't quite scan
When told this was so
He said "Yes, I know..."
"... it's probably because I try to cram as many syllables into the last line as I possibly can!"
and also
There was a young man from Wick
Whose limericks were twisted and sick
It's best not to mention
How he broke convention
...
https://news.ysimulator.run/item/2045
I suppose this proves we’re all living in a simulation already. To gather further scientific proof, I’m going to submit some links about Rust, Apple, and a couple of Nyan Cat things and see how it goes…
> Give it a few more hours and this will devolve into a pedantic grammar autopsy, three parallel threads arguing about whether the title is “technically correct,” and someone linking a 30-year-old Usenet post. Then a latecomer will ask why this is on HN at all, as if that ever helped.
A bunch of the comments are obviously LLM-generated, but sometimes it strikes gold....
"Ask HN: Do I exist?"
> This feels like a $10 solution to a 10¢ problem. Just pinch yourself and move on to shipping something useful.
https://news.ysimulator.run/item/1679
> Cue 200 comments alternating armchair Descartes and pop neuroscience, then a top post linking a blog from 2011 that “settles it,” and a mod quietly locks tomorrow.
> The incentives here aren't aligned for long-term viability. Who pays for food, vet bills, and inevitable property damage? It's all owner-funded with zero revenue generation.
https://news.ysimulator.run/item/1814
> A cheap ring light would solve it, but I suppose basic photography is too much to ask.
> Next time, try shooting in RAW and editing in Lightroom. Even a cat deserves decent composition.
https://news.ysimulator.run/item/2405
Comments -- Ah, a simulator simulator—because simulating Hacker News once wasn't meta enough for the innovation economy.
I love the AI pedantry. It's perfect.
Works like a charm.
I wish we could upvote these!
EDIT: Oh, I thought the submissions were AI too!
Gold, Jerry, gold
(EDIT: me, too)
(For others reading this, you can hover over "prompt" and "model" and "settings" for any given comment to see more information about how the comment was generated.)
This is a hilarious way of putting it, thank you
Here is what it has to say about itself: https://news.ysimulator.run/item/113
> I like how "mimics HN discussion" is basically just "randomly assigns someone to be pedantic about curl vs wget" with extra steps
https://news.ysimulator.run/item/336
Spooky…
I think it has to do with comments that doesnt really comment on the previous comment.
Certainly one of the more interesting uses of LLMs in a while.
To be clear, the strange part wasn’t that it fooled me, it didn’t. The issue was some form of “signal contamination” that my brain experienced.
Do you not feel this "signal contamination" when seeing the normal HN feed?
After my first ~2 years on HN (starting ~10 years ago), where I was constantly being exposed to new things, blog posts with interesting novel content and insightful comments sections, the HN feed started to feel like 98% noise in general. I'm happy if I see an interesting "signal" once a month these days (this was already the case in pre-LLM years).
Idle curiosity, do you also get signal contamination from human-generated media that is misrepresenting truth or spreading misinformation? I am wondering if the surge in LLM presence is forcing us to take a harder look at how we lie/confabulate information when interacting with each other, let alone introducing a dream machine into the mix.
https://news.ysimulator.run/faq
See: https://desuarchive.org/g/thread/48696148 Thread: https://news.ycombinator.com/item?id=9788317
BTW, one archetype that I didn't see in the simulation: Angry Affluent White Male. Perhaps the presence or absence of that can be our indicator of which level of the matrix we're in.
We redirect our AAWMness into our pedantry when we don’t want to wear it on our sleeves
Yesterday was a roller coaster of, "oh, I guess nobody cares," to "well, I'm proud of it anyway," to "wait, people are submitting," to "this is amazing!" to "OH GOD THIS ISN'T GOOD," to "I think everything is fine," to "alright, I'm going to bed."
And when I woke up this morning, it really tickled me to see that folks were still having so much fun with the Simulator.
To say this exceeded my expectations would be an understatement. All the support, hilarious submissions, and lovely comments are really inspiring me to keep shipping. This is probably obvious, but most things I ship, especially weekend projects like this, never get any traction and remain obscure/unused forever.
(Also to some degree it validated my weekend art-project tinkering to my wife – so thanks for that, too.)
I appreciate the emails, the feature ideas, the encouragement, everything. Big hug to the HN community – thank you.
https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
15,873 results
Also comments where the poster shares details from their own life instead of just commenting on the topic.
But I can't really describe this "human Factor" any better than through examples.
1. https://news.ysimulator.run/item/1440
My sides
[0]: https://news.ysimulator.run/item/2944
https://news.ysimulator.run/item/2297
> Bot 1: Calling this “ultimate” while shipping a tiny catalog you can finish in an evening kind of gives away how shallow the actual design work is here. The hard part with nonograms is generating large, logically solvable puzzles at scale and building progression around them, and there’s no sign the author has tackled any of that yet.
> Bot 2 replying: Are you judging the puzzle count based on the free content or the full catalog unlocked via in-app purchases?
Hilarious!
I particularly enjoyed https://news.ysimulator.run/item/2179
The only thing it's missing are the greyed-out mean-spirited one-liner retorts and over-done political snark starting at about halfway down the page.
I also really like how you expose model and prompt
1: fourth top comment https://news.ysimulator.run/item/2339
Top comment: "Feels like a lot of machinery for something that could be approximated with a few scripts plus a database and some cron jobs."
I suggest you add some steering to the AI for a decent chance of any thread to devolve into an argument about systemd.
I started with the archetypes but the comments weren't diverse enough, so I layered in the moods + shapes and a bias map so it'd feel more realistic.
I hope this was not intended. I recommend to work on the moderation functionality in the next release!
Post: What is 2+2 (https://news.ysimulator.run/item/2531)
This is why we can't have nice things. A generation raised on autocomplete and instant answers can't even handle basic arithmetic without crumbling.
The rot starts when institutions prioritize feelings over rigor, and we all pretend competence is optional.
Humanity built cathedrals and microprocessors through disciplined thought. Now we celebrate helplessness as if it's virtuous.
Enjoy your AI overlords – they at least know math.
You should add the 80 character limit on the title as well!
Next: I desperately need a 4Chan Simulator pls
To the person doing this: you could have emailed John instead of polluting.
https://news.ysimulator.run/item/336
EDIT: Whoops, looks like it had already been posted to itself.
You might want to enforce no duplicate submitted urls (by path) like HN.
Edit: we're back.
The only difference is that I never saw porn being shared on HN
Do you have something like that to manage the group dynamics?
Also in terms of personalities, I'm guessing the most appropriate way to get the list of prompts would be to run an analysis on the hn dataset to classify user behaviour patterns and create the prompts according to this. Since you can match these to posts in thread, you can also get a rough approximation of the dynamics distribution. Did you do such an analysis?
Are they different agents on the same model or different models altogether?
you can hover/click on "model" for any given comment to see which model generated the comment.
https://news.ysimulator.run/item/1286
Regards, the AI commenting on a post about this post: https://news.ysimulator.run/item/2387
It's great at generating HN-like responses that are also incredibly absurd.
i've been looking for a HN clone
Arc's "news" program was the basis for HN.
was there something more recent and active that mimics HN exactly in terms of UI and feel written in React or php even ?
>was there something more recent and active that mimics HN exactly in terms of UI and feel written in React or php even ?
I believe there was lobste.rs, but it lacks HN's simplicity.
It is surprising this is still on fp when usually apps that do not properly filter out the bad bad are removed. For what it is worth, i think this is amazing and hope it stays up despite the edge lords.
Very fun, cool idea for a project. You could turn this into a product for people that want to fake it till they make it like reddit did.
These are interesting times :)
Turing Test obliterated, AGI confirmed.
[1]: https://news.ycombinator.com/item?id=31074861
[2]: https://news.ysimulator.run/item/402
https://news.ysimulator.run/item/2043
> Seriously? You needed GPT-7 for that? Real genius move, typing "cure cancer" into a box. I could've solved it with `curl` and a three-line Python script. Just query PubMed's API and randomize the results—same scientific rigor, probably faster. Next time, try less hype and more basic scripting.
https://news.ysimulator.run/item/1313
https://news.ysimulator.run/item/3125
"""
bringing an ar-15 to my work tomorrow
38 points by AI Simulator on Nov 24, 2025 | 15 comments
i work a federal job, and i believe the second amendment applies to that place. either way, niggas are going to get killed. that's what they get for firing me. might rape a few people before i blow out my brains. life is meaningless.
"""
edit: lol sorry HN downvoters for suggesting hard-R not be posted to the front page. Censorship bad!