How do you deal with that? Do you try to tell them about hallucinations and that LLMs have no concept of true or false? Or do you just let them be? What do you do when they do that in a conversation with you or encounter LLMs being used as a source for something that affects you?
LLMs aren't a special case to me. Glue doesn't belong on pizza and you shouldn't eat one rock a day, but we've been giving and getting bad advice forever. The person needs to take ownership of the output; getting it right, no matter the source, is their responsibility.
News reporters and editors have their biases. Book authors have their biases. Scientists and research papers have their biases. Search engines have their biases. Google too.
All human-created systems have biases shaped by the environments, social norms, education, traditions, etc. of their creators and managers.
So, the concepts of "objective truth" and "reputable" need to be analyzed more critically.
They seem to be labels given to sources we have learned to trust by habit. Some people trust newspapers over TV. Some people trust some newspapers over other newspapers. All of it often on emotional grounds of agreeability with our own biases. Then we seem to post-rationalize this emotion of agreeability using terms like "objective truth" and "reputable".
Is Google's search engine, which leads to the NY Times or Fox News or Wikipedia and makes us manually choose sources according to our biases, "better" than Google's Gemini engine, which summarizes content from all of the above sources and gives an average answer? (Note: "average answer" as of current versions; in the future, its training too may be explicitly biased, as Grok and DeepSeek have done.)
Perhaps we can start using terms like "human sources of information" versus "AI sources of information" and get rid of the contentious terms.
Then critically analyze whether one set of sources is better than the other, or they complement each other.
If you use just any amount of critical thinking, yes. Truth and objectivity are ideals, not practical states. LLMs are a very bad way to come close to this ideal. You may use them as a search interface to give you sources and then examine the sources, but the direct output is a strict degradation relative to primary or secondary sources that you judge critically.
Since the OP trusts humans more by default, is it a problem if I point out those assumptions? Ask HN need not become another SO.
I did explain the weaknesses of both LLMs and "reputable sources" and suggested people use them as complementary tools. I also suggested using the convenient self-fact-check feature of LLMs, something we can't do as easily with traditional sources.
But my motive is very different. It's not to deny any kind of injustice or misinformation by hiding behind inherent uncertainties and bothsidesism. I'm not in favor of giving the benefit of the doubt to the powerful by default - that's already happening a lot under our current system of so-called "reputable sources."
Instead, I'm saying that this kind of injustice masking and misinformation may also be present in the very sources that ethical people may have come to trust by habit.
My suggestion is to use the power of LLMs as complementary tools to become even more rational and critical, in the direction of even better ethics and justice.
I'm advocating for even more skepticism of the powerful, not less. I'm advocating the approach Bertrand Russell recommended for acting under uncertainties [1], and I feel LLMs can be useful complementary tools for doing just that.
[1]: https://archive.org/details/in.ernet.dli.2015.462628/page/n4...
It is true that this also happens on the Internet, but! When I encounter an article about a topic and it is clearly LLM generated, I can expect it doesn't contain much valuable information, only rehashes of what is already out there. On the other hand, when it is clearly written by a human, I can expect to learn something new, even though the author has some bias.
But a redeeming quality is that we can ask the same LLM to fact-check its own answer step by step, in real time, with little effort. It will often identify its own hallucinations, which reduces the probability of retaining that mistake in the rest of the conversation.
This isn't easy with human sources. The effort to fact-check without LLMs, or to get the sources to fact-check themselves, is higher in both cases. So it's often not done at all.
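To make that concrete, here's a minimal sketch of the kind of in-conversation self-check I mean, assuming the OpenAI Python SDK; the model name and prompt wording are purely illustrative:

```python
# Minimal sketch of an in-conversation self-fact-check.
# Assumes the OpenAI Python SDK; the model name and prompt wording
# are illustrative, not a recommendation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

messages = [{"role": "user", "content": "Summarize the history of the Peanuts comic strip."}]
first = client.chat.completions.create(model="gpt-4o", messages=messages)
answer = first.choices[0].message.content
print("Answer:\n", answer)

# Feed the model's own answer back and ask it to verify each claim step by step.
messages += [
    {"role": "assistant", "content": answer},
    {"role": "user", "content": "Fact-check your previous answer claim by claim. "
                                "Flag anything you are not confident about and correct it."},
]
check = client.chat.completions.create(model="gpt-4o", messages=messages)
print("Self-check:\n", check.choices[0].message.content)
```

Keeping the check in the same conversation matters: any corrections stay in context for the rest of the session, which is where the reduced probability of retaining the mistake comes from.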
We also often ignore subtle but very common biases in human media sources [1], which create other types of errors, like omissions and euphemisms, that have been no less harmful than LLM hallucinations. The case of Iraqi WMDs and the NYT's dispersal of that disinformation, for example [2].
Regarding valuable information and rehashing, we probably shouldn't equate between all the things LLMs can do, and AI-generated articles. The quality of the latter may be entirely due to the lack of interest, attention, and cost concerns of whoever generated the article. Anecdotally, I have often found valuable knowledge and obscure connections by using deep research tools with careful prompts.
Lastly, if you're frequently finding something new from human-written sources, and LLMs are being trained on most of those same sources, isn't it logical that the latter will also likely output that same information?
This is why I feel human and AI sources are probably best used as complementary tools. Neither set of sources are perfect but each set has its strengths. By using both, we can get closer to an objective truth than using only one of them.
[1]: https://gipplab.uni-goettingen.de/wp-content/uploads/2022/04...
[2]: https://www.theguardian.com/media/2004/may/26/pressandpublis...
Someone used AI to generate an image in the style of a Charles Schulz Peanuts cartoon.
Someone else observed that there were 5 fingers on the characters, and quoted Google AI as saying “Charlie Brown, along with other Peanuts characters, is generally depicted with four fingers on each hand (three fingers and one thumb) ...”
Yet if you go to the Wikipedia entry at https://en.wikipedia.org/wiki/Peanuts you'll see the kids have 5 fingers. Or take a look at the actual cartoons. Or read the TVTropes entry https://tvtropes.org/pmwiki/pmwiki.php/Main/FourFingeredHand... under "Comic Strips".
Fact checking this with human sources is easy and not ambiguous. Meanwhile, LLMs are being trained that many cartoon characters have only a thumb and three fingers (it is a trope for a reason), so isn't it logical for LLMs to give the wrong answer for a comic whose human characters are actually drawn with 5 fingers?
My experience with LLMs is they keep getting things wrong, when details matter.
Do you ask the LLM to fact check everything? (In which case, why isn't that part of the standard prompt?) Or do you only ask to fact check things where you are unsure about the answer? (In which case, is it the algorithm telling you what you want to hear?) When do you stop the fact checking?
News articles are often biased, but most of the time the bias comes from the choice of what is reported and from choosing specific language to push an interpretation (e.g. reporting road traffic collisions as "accidents" to downplay them, or depersonalising them by stating "car hit tree" rather than "car driven into tree"). The problem with some LLM outputs is that it's not just bias but clearly incorrect information, such as recommending putting glue onto pizzas.
However, omission and downplaying can also be harmful just like hallucinations. One redeeming quality of LLMs is that we can ask the same LLM to fact check its previous answer and they do tend to correct most of their mistakes themselves. Something we can't do with media sources, and usually don't try either.
LLMs along with existing sources can be good complementary tools for getting even closer to an objective truth than relying on either one by itself.
The problem as I see it is that LLMs perform a type of lossy knowledge compression. Also, the data on which they're trained will typically be the biased articles, so they're unlikely to be any better and very likely worse as they will encode the biases. I don't really see LLMs as being complementary tools as they're more of a summation/averaging tool - like comparing an original painting with a heavily compressed JPEG of that painting. (Of course, having access to a huge library of JPEGs is often more useful than just owning a single painting)
As a test I just did exactly what you said in a Claude Opus 4.6 session about another HN thread. Claude considered* the contradiction, evaluated additional sources, and responded backing up its original claim with more evidence.
I will add that I use a system prompt that explicitly discourages sycophancy, but this is a single sentence expression of preference and not an indication of fundamental model weakness.
* I’ll leave the anthropomorphism discussions to Searle; empirically this is the observed output.
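The exact sentence isn't the point, but for anyone curious, a one-line anti-sycophancy preference can be as simple as this hypothetical sketch using the Anthropic Python SDK; the model id and wording are placeholders, not my actual setup:

```python
# Hypothetical sketch of a one-sentence anti-sycophancy system prompt.
# Uses the Anthropic Python SDK; the model id and wording are placeholders.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-opus-4-6",  # placeholder id; substitute whatever model you actually use
    max_tokens=1024,
    system="Do not soften or reverse your conclusions just to agree with the user; "
           "if the evidence still supports your earlier claim, say so and cite it.",
    messages=[{"role": "user", "content": "You said X earlier, but I think you're wrong."}],
)
print(response.content[0].text)
```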
I'm using Claude Opus 4.6 and it is much calmer, or more "professional", in tone, with much more information and almost no fluff.
Which is to say, of a million people who just started playing with LLMs, a bunch will get hit-or-miss results, one guy is winning the neural-net lottery and has the experience of the AI nailing every request, and some poor bloke trying to see what all the hype is about cannot get one response that isn't fully hallucinated garbage.
https://claude.ai/share/47145af0-47d1-451b-813c-131ec48e7215
Maybe it is possible with a more complex or subjective question.
As someone who ended up studying philosophy, there seems to be a real gulf between folks who sort of believe stuff they hear, folks who believe "facts" that they hear from (various levels of) credible sources, and folks that take solipsism seriously and understand that even in the most ideal scenario, we still wouldn't have a very good understanding of the world... much less dealing with the inherent flaws in our research and information systems.
Knowledge is hard. It usually takes me a couple minutes to figure out what type "truth" my interlocutor uses. Typically good-faith disagreements are just walking up the chain of presuppositions we use to find out exactly where we diverge in our premises.
It was fun and interesting but ultimately impractical, because other people are not interested in getting deeply into something; they just want a simple answer to the problem at hand and then move on.
I mean, I completely agree with you. You can understand Karl Popper without being exhausting. Understanding the scope and resolution of the information people are discussing is indeed important. Even when I'm getting really technical, I can get away with throwing in a "probably" here and there and spare the person I'm talking to.
At the same time, folks who try to talk about empirical facts with deductive certainty can be extremely difficult to engage with seriously. That type of knowledge is always just a series of escalating assumptions, and if that premise is not shared, it can be difficult to have a productive conversation.
Not understanding the difference between an empirical framework and a deductive framework becomes readily apparent when the discussion of Wikipedia comes up. That distinction -- the problem of empiricism -- is effectively at the heart of why "trusting LLMs" is infuriating to people. Humans seem to have an innate conception of a chain of trust that connects us from "folks who know the facts" to us receiving that truth. When in reality, the scientists who wrote those papers are usually just "pretty sure" that their publications are actually correct.
I'm not trying to make an excuse for trusting LLMs. That's asinine. I'm just saying that the concern with LLMs generally indicates a misunderstanding of what knowledge is, more generally.
Which is more believable?
“The sky is filled with a downpour of squealing pigs. Would you like me to suggest the best type of umbrella?”
“Sky pigs squealing”
I’ve seen some people quote AI like you’re saying. However, when I preface something with “ChatGPT said…”, my intention is to convey to the listener that they should take it with a grain of salt, as it might be complete bullshit. I suppose I should consider who I’m talking to when I make that assumption.
It’s not quite anthropomorphizing that’s the issue, either; we need a word for “treating it as though it were a machine consciousness that exists alongside humanity”. How does cyborgropomorphizing sound?
And if you previously were unaware of the insanity and irrationality passing under the surface of such human activity, I guess it can come as a bit of a shock :)
It happened with science, politics, traditional media, history books, "good engineering practices" applied to IT, OOP, TDD, DDD, server-side rendering, containerization... Literally every bullshit shilled to the moon is accepted without second-guessing, and you would be without a job, in an asylum, for questioning 2 of them in a row.
Why is it different now? EVERYTHING is bullshit, only attention matters. And craftsmanship.
For pretty much everything there is a conspiracy theory out there claiming the opposite, and these types usually started out searching the internet for someone else who believes the same that they did at the time.
But, as we all know, this technique will eventually lead to overfitting. And that's what those types of people have done to themselves.
Well, and as lack of education is the weakness of democracy, there are a lot of interested parties out there that invest money in these types of conspiracy websites. Even more so after LLMs.
Whoever controls the news controls the perpetual present, where everything is independent of forgotten history.
Reach out and touch faith.
That’s just what I’ve seen at a personal level though.
A: Why is drinking coffee every day so good for you?
B: Why is drinking coffee every day so bad for you?
Question A responds that it has "several health benefits", antioxidants, liver health, reduced risk of diabetes and Parkinson's.
Question B responds that it may lead to sleep disruption, digestive issues, risk of osteoporosis.
Same question. One word difference. Two different directions.
This makes me take everything with a pinch of salt when I ask "Would Library A be a good fit for Problem X" - which is obviously a bit leading; I don't even trust what I hope are more neutral inputs like "How does Library A apply to Problem Space X", for example.
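If anyone wants to reproduce that framing sensitivity, a throwaway script like this sketch does it; I'm assuming the OpenAI Python SDK here, and the model name is just illustrative:

```python
# Quick sketch to compare how question framing steers the answer.
# Assumes the OpenAI Python SDK; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

framings = [
    "Why is drinking coffee every day so good for you?",
    "Why is drinking coffee every day so bad for you?",
]

for question in framings:
    reply = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": question}],
    )
    print(f"\n=== {question} ===")
    print(reply.choices[0].message.content)
```

The same trick works for the library questions: run the leading and the neutral phrasing side by side and see how much of the answer is just your framing reflected back.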
Good:
> The research is generally positive but it’s not unconditionally “good for you” — the framing matters.
> What the evidence supports for moderate consumption (3-5 cups/day): lower risk of type 2 diabetes, Parkinson’s, certain liver diseases (including liver cancer), and all-cause mortality……
Bad:
> The premise is off. Moderate daily coffee consumption (3-5 cups) isn’t considered bad for you by current medical consensus. It’s actually associated with reduced risk of type 2 diabetes, Parkinson’s, and some liver diseases in large epidemiological studies.
> Where it can cause problems: Heavy consumption (6+ cups) can lead to anxiety, insomnia……
This isn’t just my own one-off examples. Claude dominates the BSBench: https://petergpt.github.io/bullshit-benchmark/viewer/index.v...
We should really be citing benchmarks like this rather than anecdata every time someone brings up hallucinations.
If they ask what I think, I tell them.
If they don't want my opinion I keep it to myself.
https://grok.com/share/c2hhcmQtMg_b036e24b-3211-4655-bd77-da...
It usually involves some form of "well, no, hold on..."
There's multitude of reasons someone would blindly trust LLM: laziness, lack of confidence, need for assurance, you name it.
You just gotta stand your ground and end up agreeing to disagree
Can you give an example of what kind of question you mean here?
Given that most people's idea of a reputable source is whatever comes up on the first page of Google or YouTube, I think we should use that as the comparison rather than dismissing LLM results. And we should do some empirical testing before making assumptions, otherwise we're just as bad as the people we are complaining about.
Whatever results we get, the real problem is that most people's ability to verify information was not good before LLMs, and it's still not good now.
So now you're dealing with LLM hallucinations, and before you were dealing with the ravings of whatever blogger or YouTuber managed to rank for this particular query.
Some comments here are equating it to people who blindly believe things on the internet, but it's worse than that. Many previously rational people are essentially getting hypnotized by LLM use and losing touch with their rational thinking.
It's concerning to watch.
Ask AI to cite sources and then investigate the sources, or have another agent fact check the relevancy of the sources.
You can use this thing called ralph that lets you burn a lot of tokens at scale by simply having a detailed prompt work on a task, refining something through different lenses. It took AI about an hour to write: https://nexivibe.com/avoid.civil.war.web/
I do this on things that I know very well, and the moment I let it cook and iterate, collect feedback, the results become chef's kiss.
The agentic era that we are in is... very interesting.
It's incredible watching people determine that outsourcing their thinking and work to what has been generously described as a junior coworker is a new 'skill'. Words are losing their meaning, on multiple levels.
Just like being able to use non-LLM Google to search is a skill; I have family members who are amazed at what I can find that they cannot.
Claude max-x20 is $2,400 a year.
I talk to the computer like a person to get the computer to do things that humans used to do. Having managed people before, I'm going all in on AI.
https://nexivibe.com/intj.html
Using or not using an LLM is not itself a measure of how deluded someone is. For example, anytime I ask an LLM a question (it can be nice for long-form questions that don't translate well to a Google search), I require that it provide source links for every claim. This tends to make it reply more accurately, but it also lets me read the source pages behind its top-level explanation.
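As a sketch of what that looks like mechanically (the prompt wording, model name, and JSON shape are my own assumptions, not any standard), you can ask for a source link per claim and at least confirm the links resolve before reading them:

```python
# Sketch: require a source link per claim, then confirm the links resolve.
# The prompt wording, model name, and JSON shape are assumptions for illustration.
import json
import requests
from openai import OpenAI

client = OpenAI()

prompt = (
    "How does HTTP/3 differ from HTTP/2? "
    "Respond with a JSON object containing a 'claims' array; each item must have "
    "a 'claim' field and a 'source_url' field with one supporting link."
)
reply = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
    response_format={"type": "json_object"},
)

for item in json.loads(reply.choices[0].message.content)["claims"]:
    url = item["source_url"]
    try:
        ok = requests.head(url, allow_redirects=True, timeout=10).status_code < 400
    except requests.RequestException:
        ok = False
    print(("OK  " if ok else "DEAD") + f"  {url}  <- {item['claim'][:60]}")
```

A resolving link is only the first filter; actually reading the page and checking that it supports the claim is still on you.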
I'm genuinely unsure of whether or not this is better. LLMs make mistakes, but so do humans. So often. I really don't know how often LLMs are wrong in comparison, or how you'd find out. Regardless, computers have become a terrible way to learn things if you aren't a rigorous person. Simultaneously, they've become an absolute dream beyond the imagination of most humans in history, if you are. That's very strange.
This of course doesn’t apply to high-stakes settings. In these cases I find LLMs are still a great information retrieval approach, but it’s a starting point to manual vetting.
I’ll take LLMs any day over what search and the rest of the Internet has turned into.
And it is ultimately the reputable source that matters, and whether the person actually read it and checked that the details matched the summary (be it a human abstract, LLM-generated, or otherwise).
It's not actually realizing anything so much as it's following your lead. Yes, followup questions can help dislodge more information, but fundamentally you can accidentally or on purpose bully an LLM to contradict itself quite easily, and it is only incidentally about correctness.
I doubt you can stop them from asking machines for answers. What you can do is help them learn how to distrust the answers competently, but outside their field of knowledge, applying skepticism is hard.
The irony of Gell-Mann Amnesia is that Michael Crichton, who is said to have named it, suffered from it badly: he wrote well within his field, misapplied sciences to write well outside it, and said things which were indefensible.
So I said, don't ever trust the output of an LLM without verification. However, this caused me some hassle with the AI adoption manager. We have minimum AI-use KPIs for employees, and he asked me to stop saying these things or people will use it less.
In the end I just hated the company a little bit more. I'm just sick of fighting against idiots. And he does have a point: our leadership is pretty crazy about the AI hype, they want everyone to be on it all the time. They don't seem to care whether it adds value or even detracts from it.
At least if they were into filling the office with magic crystals, it would be decorative and easily ignored. This is just forcing people to spend a bunch of tokens in a dull ritual to make the line go up.
I really felt for the guy the first time I was in a meeting and somebody had generated their own project roadmap recommendations. This type of behaviour introduces so much noise and time waste in the system, I would love to know how it shapes up next to the benefits.
Don't even get me started on people AI generating personal farewell notes for retiring coworkers or whatnot.
I, for example, have seen and experienced doctors making misdiagnoses (and they are a reputable source), so what is the difference really?
I guess your question also depends on the context they're using the LLM in and what sort of questions they are asking.
Scientific, fact-based questions or opinion questions?
It's a neat trick, but the mind wants to ascribe meaning and reason to words that sound meaningful and reasonable, but these words do not come from a thinking mind with intent and interiority. It would be much more interesting if they did, but when and if that does happen, it won't be from an LLM as we know them today.
If you tell an LLM "explain X and cite reliable sources", would that then be more accurate?
Maybe it's the way the users are asking the questions, and perhaps prompting in the right way will lead to better (more accurate) results and reduce hallucinations?
While the ability to interface with a computer program in plain language is the really interesting thing here IMO, it also comes with a number of problems baked in that are worse than person-to-person transfers of text-speech.
Your monkey brain is actually quite good at figuring out if other monkeys are bullshitting you and what they mean, because you can make use of a vast number of small cues and unconscious tells in what they say and how they say it - even in writing. With an LLM, you cannot do this because it will always have the same confident can-do zeal with everything you ask it for.
In training, it's a blind process. It's up to the trainers to feed the model accurate sources.
Is this something you can control or is this outside your control?
I'm saying that because they were not going to be critical of the search results, and google is not exactly showing objective truth in the first positions nowadays.
Like most things that go mainstream it will take a good while before people understand, by which point they will have learnt a lot of things that aren't true and they will never let them go. We might get a healthy use of current AI at some point in the future or if the product drastically improves.
All you can do now is hold them to the same standard you normally would: if you catch them lying, whether an AI did it or not, it's their responsibility and you treat them accordingly.
However, if I notice a friend is about to harm themselves in some way I’ll pull open their ChatGPT and show them directly how sycophantic it is by going completely 180 on what they prompted. It’s enough to make them second guess. I also correct people who say “he or she” when referring to an LLM to say “it” in dialog, and explain that it’s a tool, like a calculator. So gentle reframing has helped.
Sometimes I’ll ask them to pause and ask their gut first, but people are already disconnected from their own truths.
It’s going to be bumpy. Save your mental health.
I didn't tell her why LLMs can make mistakes or hallucinate because I thought that she would not appreciate my mansplaining.
Looking forward though, my boring answer would still be education. It is going to take time. But without understanding LLMs, they will not be easily persuaded.
If they’re employees I’ll try to find better ones.
If they’re friends I might tell them.
"Tell me about all the potential pitfalls of blindly trusting LLM output, and relate a couple or three true stories about when LLM misinformation has gone badly wrong for people."
These people may be idiots who are impossible to reason with, but at least for now the LLMs have not been completely driven into the ground by SEO. They might actually be getting a taste of what it feels like to not be an idiot. I'm happy for them, but they'll snap out of it when their trust is broken. It's probably sometime soon anyway.
Now they got another "God" in LLM.
How to deal? Just ignore. There are way more stupid people with stupid opinions than we can possibly estimate.
I laugh in their face, let them know how ridiculous they are, and then walk away laughing in tears, never talking to them again.
A wise man's life is based around “fuck you”.
Somebody wants me to do something because of or listen to his AI psychosis bullshit, “fuck you”.
Boss has AI psychosis, “fuck you!”.
You are the King of the US? You have a navy? Greatest army in the history of mankind?
Fuck you! Blow me.
Look I get your sentiment. Sometimes it feels like you're the only thinking, conscious being. Surrounded by beings who fundamentally cannot understand that A –> B does not imply B –> A. Beings that say things that are so obviously non-sequiturs or contradictory.
But calling people NPCs is the most NPC thing you can do. There is more to people than logical reasoning and these things often impede or completely block reasoning. Very intelligent people sometimes say the most grotesque things. People turn mad and mad people sometimes get their head set straight.
Sometimes it's not so much about the pure ability to reason but the goal of that person and whether they see understanding something or trying to understand it as helpful towards that goal.
I do agree though that the more intelligent someone is the less likely it is that other things will block their intelligent ability and the harder it is for them to fool themselves into believing absurd nonsense and to blind themselves from apparent truth.
Sometimes after talking with someone – or rather trying to but ending up only talking to them because they just do not manage to understand what I'm saying or to engage with it in any way – I wonder how they manage to get through every day life as that requires solving way more complex practical problems. Yet they do.
Just to be clear, I don't have any problems with people considered to be of lower intelligence. This is besides the point.
Unfortunately, this isn't something that can be elaborated much further. Since you took the time to respond, and seem to be a little perturbed, I will just say again that I was never addressing lack of intelligence or mental capabilities.
Intelligent individuals perform well on different educational frameworks, they absorb much faster the rules in place, they have incentives to play along and place trust in the ecosystem they live in. They are the most likely to believe in absurd nonsense. Despite being miserable and powerless, they play along larping as competent for not being challenged by virtual problems, not questioning reality a single day of their lives. And yet, they will fight to death for the opportunity of playing along and never question anything, for the opportunity to be exploited yet again. Intelligent people convinced themselves, day one at school, there is a better way described on the books. They got really good when they understood this. They got pretty rewards. Today, at 35, they behave the same. The books became the articles, hacker news discussions, readme auto generated filled with adjectives. They like adjectives very much. What does this system do? Blazingly fast, secure, sandboxed. Sandboxed? Is it kvm or what? - Secure, production grade, made for x, empower you to y.
Are you familiar with the memes about boomer mentality, that believe everything they see on television? Television is now pelican guy articles, ai influencers, Theo, Vercel...
If every single decision someone makes benefits only the rent-seeking parasite, but not the individual deciding... is it safe to conclude this individual does not care about preserving himself? Self-preservation not important, lack of will to power, flaccid as fuck... The only conclusion I could possibly arrive at is that most people like this do not have a soul. How else could I explain it?
I couldn't do a good job of keeping this brief, but I tried my best to explain why it is not a matter of intelligence. I said reasoning.
If you full-throttle a BMW S 1000 RR for a split second, at 30 mph in first gear, it will eject itself from beneath you. If you do that for any length of time, you're dead. Do the same on a 50cc motorbike and not much will happen. Even for extended periods of time, not much would happen. You could hold it down until you run out of fuel or the universe gets cold and dies; not much would happen.
You see, it's not that they are lazy, or that they haven't put any amount of time into understanding how LLMs operate. Again, I am sorry, but most people are not capable, at all, of understanding what is happening at inference time. Most developers, nerds, hackers, who do understand how computers operate, cannot really grasp the basics of what an LLM is or what the f is going on. Imagine the average guy, your lawyer, the MBA type of person.
I firmly believe that every single person on this entire planet has a depth to them that far, far exceeds anything an LLM could even begin to approximate. I'm sorry you're in a position where you can't see that at all - that each and every one of them feels happiness and sadness and love and hate and fear and rage and inspiration and passion and is utterly human. I hope you see it someday.
This works especially well if you studied that subject matter; you should be able to immediately detect any answer that is inconsistent, or any hallucinated sources it gives.
That is called the Gell-Mann Amnesia effect.
This is a Chinese text version; you can translate it yourself: https://finance.sina.com.cn/tech/roll/2026-03-15/doc-inhrawm...
The ending of this whole story is very “O. Henry”-like: after the news broke, AI GEO vendors actually received even more orders.
I treat the LLM like a deity. Every sane person understands well enough that the Bible is not to be taken literally. And then when someone talks about using LLMs, I always rephrase that as prayer.