*Astroturfing:* Coordinated campaigns where accounts pose as students sharing "cheatsheets" and "predicted exam leaks." Other accounts then upvote, leave supportive comments, and ask follow-up questions—creating the illusion of organic student excitement. Multiple threads have exposed this pattern [1][2][3].
*Paid fake posts:* High school students report being offered payment to write promotional Reddit posts [4].
*Pressuring critics:* Users who post negative reviews report being contacted directly by company representatives, told it's "a shame" they're posting publicly [5]. Critical comments receive coordinated mass downvotes [6].
*Soliciting copyrighted materials:* They use TikTok influencers and fake Reddit posts to persuade students to sell them official IB exam papers, in violation of IB policies [7].
The r/IBO moderators are actively investigating [8].
These practices appear to be working for them. Recently, they acquired OnePrep (oneprep.xyz), a free SAT prep tool that was already popular on r/sat. Since the acquisition, the same manipulation tactics have been deployed at scale: 150 Trustpilot reviews within a few days [9], plus widespread coordinated Reddit manipulation, with multiple accounts posting "tips" that recommend OnePrep, coordinated upvoting, and fake enthusiasm in comments. The most prominent example was a 2,000+ upvote post removed by moderators for manipulation, but it's part of a sustained campaign across the subreddit.
*Sources:*
[1] https://www.reddit.com/r/IBO/comments/1p55qun/
[2] https://www.reddit.com/r/IBO/comments/1jsb00a/
[3] https://www.reddit.com/r/IBO/comments/1ohcohi/
[4] https://www.reddit.com/r/IBO/comments/1p55qun/comment/nqmhal3/
[5] https://www.reddit.com/r/IBO/comments/1my1ajx/comment/na94upv/
[6] https://www.reddit.com/r/IBO/comments/1my1ajx/comment/na8zvs4/
[7] https://www.reddit.com/r/IBO/comments/1mej900/
[8] https://www.reddit.com/r/IBO/comments/1my1ajx/comment/nagdkl5/
[9] https://www.trustpilot.com/review/oneprep.xyz
Astroturfing is the practice of manufacturing a fake "grassroots" movement to make it look like a cause, product, or candidate has widespread public support when it actually does not.
Where would one find some Reddit users willing to do such reviews, by the way?
They're buying stolen Reddit accounts and spamming over 500 videos a day to various subreddits.
They're also advertising fake "unlimited" plans. They're a reseller, and their pricing is a tenth of the upstream API pricing, so they're metering, throttling, and banning users who cost them money.
They're getting thousands of people to subscribe to $1800 "18 month" plans.
Their unofficial subreddit is full of complaints. Probably a dozen complaint threads a day now.
Highly unethical company.
It's exhausting, especially since people will write out real advice and corrections about how to deal with rats, bedbugs, neighborhoods, etc., and it all goes into the ether under posts that exist only in the hope that someone will get scammed. Or maybe it's an SEO thing, because the site name is so generic it's un-googleable. I hope it doesn't work.
I used to co-work next to an SEO specialist back in my freelance days. He would offer rankings, but the client would not be told that said rankings came from blackhat SEO tactics (which mostly no longer work).
It's all so obvious and standardised that I have to imagine it is part of a toolkit or framework marketers are using without much thought.
[1] https://arstechnica.com/information-technology/2012/06/reddi...
There's obviously a massive difference between using sockpuppet accounts to:
* Influence perception on a social media platform as a 3rd party
vs.
* Put content on a social media platform that users are looking for so they return to the platform
It doesn't matter who shares a story with you on social media if the goal is to entertain, but it does matter if the goal is to get you to do something [spend money on their courses].
So you could clearly tell if people liked or didn't like something.
I wish there were laws that required large social media sites to publish data to their end users that indicate the severity of the problem.
(Also it's the kind of website where you absolutely can get good responses from "Show HN: A thing you might want to use and here's how much profit I'm making from it already" until a bunch of green usernames say nice things about it)
It's also the flip side of people feeling free to say what they want under the cover of (pseudo) anonymity.
I wonder if one solution is to partition the web into places where anonymity isn't possible, and places where it is.
1. I am one of the named, publicly accountable people registered as participating in this thing, and the same person who posted under this pseudonym yesterday.
2. Provided I'm reasonably careful, you can't tell which one is me unless n of m participants agree to unmask me (a sketch of this threshold unmasking follows below).
3. I can only post under one name at a time. I can change pseudonym, but then my old one is marked as abandoned, so I can't trivially fake conversations with myself.
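Point 2 is essentially threshold identity escrow. As a hedged illustration only (not any existing forum's implementation), here is a minimal Python sketch using Shamir secret sharing: the forum escrows a token identifying the real person, split so that any n of m moderators can jointly reconstruct it. The names, parameters, and the `identity_token` value are all hypothetical.

```python
# Hypothetical sketch: escrowed identity with n-of-m unmasking via
# Shamir secret sharing over a prime field. Not a production scheme.
import secrets

PRIME = 2**127 - 1  # prime modulus; large enough for a short identity token

def split(secret: int, n: int, m: int):
    """Split `secret` into m shares such that any n of them reconstruct it."""
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(n - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):  # Horner's rule, all arithmetic mod PRIME
            acc = (acc * x + c) % PRIME
        return acc
    return [(x, f(x)) for x in range(1, m + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term (the secret)."""
    total = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        total = (total + yi * num * pow(den, -1, PRIME)) % PRIME
    return total

# Registration: the forum escrows a token identifying the real person,
# split so any 3 of 5 moderators can jointly unmask a pseudonym.
identity_token = int.from_bytes(b"alice@example", "big")  # hypothetical value
shares = split(identity_token, n=3, m=5)

# Unmasking: any 3 shareholders pool their shares; 2 alone learn nothing.
assert reconstruct(shares[:3]) == identity_token
assert reconstruct(shares[2:]) == identity_token
```

Fewer than n shares reveal nothing about the token, which is exactly the "n of m participants agree to unmask me" property.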
The people who frequent this forum think they are immune to astroturfing because they all work in ad tech.
4 lines of code could catch this.
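"4 lines" is rhetorical, but the kind of check being gestured at really is small. A hedged sketch (the `posts` input and the thresholds are invented for illustration): flag accounts whose recent submissions overwhelmingly link the same domain.

```python
# Hedged sketch of a crude shill heuristic: flag accounts whose recent
# posts overwhelmingly link the same domain. `posts` is a hypothetical
# list of (author, url) pairs; the thresholds are arbitrary.
from collections import Counter, defaultdict
from urllib.parse import urlparse

def flag_shills(posts, min_posts=5, max_share=0.8):
    domains_by_author = defaultdict(list)
    for author, url in posts:
        domains_by_author[author].append(urlparse(url).netloc)
    flagged = []
    for author, domains in domains_by_author.items():
        if len(domains) < min_posts:
            continue  # too little history to judge
        domain, count = Counter(domains).most_common(1)[0]
        if count / len(domains) >= max_share:
            flagged.append((author, domain))
    return flagged

# Example: an account that links oneprep.xyz in 5 of 5 posts gets flagged.
posts = [("acct1", "https://oneprep.xyz/sat-tips")] * 5 + \
        [("acct2", "https://example.com/a"), ("acct2", "https://example.org/b")]
print(flag_shills(posts))  # [('acct1', 'oneprep.xyz')]
```

A real platform would combine this with account age, vote-ring detection, and so on; the point is only that the crudest version is a screenful, not a research project.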
And now Reddit has made it possible to hide your post history.
Probably because of this exact issue.
That's using Reddit's own site; of course there are other methods, like Google dorks.
There's the classic search "hack" of adding site:reddit.com to any product recommendation search, to find "real" recommendations.
Most of the time this is going to find 5-10 posts, each with only a dozen comments and a dozen upvotes. And yet it feels so much more real than whatever is at the top of Google that many people will trust these reviews.
https://www.naag.org/find-my-ag/
https://reportfraud.ftc.gov/
https://www.404media.co/
https://www.gauthmath.com/
This AI cheating app is currently #8 for "education" in the iOS App Store.
Where are they saying that?
Also, what is the second "conclusion" screenshot from? (Who is the "Matthew" mentioned in that screenshot, and what analysis?)
YC is full of scams.
There is a line between fake it till you make it and fraud.
I thought that was the dictionary definition of social media? If it isn't yet, it should be; Reddit is just the tip of the iceberg.
I mean I am shocked that this post didn't get flagged immediately ofc.
you ever notice how most YC announcements have comments disabled?
Actual YC announcements do not have comments disabled.
We do tend to be more lenient when there's no evidence of organized manipulation, just friends/fans/users trying to be helpful and not realizing that it's actually unhelpful. What dedicated HN users tend not to realize is that such casual commenters usually have no idea how HN is supposed to work.
But this leniency isn't YC-specific. We're actually less lax when it comes to YC startups, for several reasons.
I'm not going to out people here but maybe it helps you to know that not everyone plays by the rules. Tbf I also understand that this is just really hard to enforce.
But as noted by freehorse, dang has stated it multiple times, and I personally have not seen any threads memoryholed; I would call out YC if they were.
Healthy skepticism, plus the maturing industry of online propaganda and persuasion campaigns, is where I would put Occam's razor à la "minimal assumptions". Every social media site has been manipulated at all levels, moderation notwithstanding; I see no reason to believe HN is immune to this.
For YC, allowing and even administering this kind of manipulation is not just a question of economics, but of second- and third-order goals like consent/consensus manufacturing, reputation-building, and shoring up investments by building "viral" interest. These are immediate logical deductions from the patterns of behavior by humans, and the bots that imitate them, that are present everywhere on the internet these days.
But speaking of repressive behaviour by mods against YC-related criticism specifically, I do not see it. I understand the prior would be that a popular forum run by YC would want to censor in order to protect the interests of the companies they back. I also had that prior. However, this is not Occam's razor; it is a prior. In the time I have been around here, I have not noticed this kind of behaviour, while I have definitely noticed other kinds of stuff getting suppressed, meaning that such repression would be unlikely to go unnoticed all the time. So I adjusted my understanding accordingly, shifting the prior according to the data. If you find examples to the contrary, I am willing to take them into account.