Show HN: I used Claude Code to discover connections between 100 books

(trails.pieterma.es)

346 points | by pmaze 17 hours ago

45 comments

  • drakeballew 9 hours ago
    This is a beautiful piece of work. The actual data or outputs seem to be more or less...trash? Maybe too strong a word. But perhaps you are outsourcing too much critical thought to a statistical model. We are all guilty of it. But some of these are egregious, obviously referential LLM dog. The world has more going on than whatever these models seem to believe.

    Edit/update: if you are looking for the phantom thread between texts, believe me that an LLM cannot achieve it. I have interrogated the most advanced models for hours, and they cannot do the task to any sort of satisfactory end that a smoked-out half-asleep college freshman could. The models don't have sufficient capacity...yet.

    • liqilin1567 7 hours ago
      When I saw that the trail goes through just one word, like "Us/Them" or "fictions", I thought it might be more useful if the trail went through concepts.
      • tmountain 25 minutes ago
        The links drawn between the books are “weaker than weak” (to quote Little Richard). This is akin to just thumbing through a book and saying, “oh, look, they used the word fracture and this other book used the word crumble, let’s assign a theme.” It’s a cool idea, but fails in the execution.
    • rtgfhyuj 2 hours ago
      give it a more thorough look maybe?

      https://trails.pieterma.es/trail/collective-brain/ is great

      • eloisius 2 hours ago
        It’s an interesting thread for sure, but while reading through this I couldn’t help but think that the point of these ideas is for a person to read and consider deeply. What is the point of having a machine do this “thinking” for us? The thinking is the point.
    • what-the-grump 7 hours ago
      Build a RAG with a significant amount of text, extract it by keyword, topic, place, date, name, etc.

      … realize that it’s nonsense and the LLM is not smart enough to figure out much without a reranker and a ton of technology that tells it what to do with the data.

      You can run any vector query against a RAG and you are guaranteed a response, even with chunks that are unrelated in any way.
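The guaranteed-response behaviour has a simple mechanical cause: top-k retrieval returns the k nearest chunks no matter how far away they are. A minimal sketch of that failure mode (hypothetical helper names; squared Euclidean distance stands in for whatever metric the store actually uses):

```python
def sq_dist(a, b):
    # Squared Euclidean distance between two embedding vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b))

def retrieve(query_vec, chunks, k=3, max_distance=None):
    # Naive top-k retrieval over (text, vector) pairs. Without a cutoff
    # it always returns k chunks, however unrelated they are; a distance
    # threshold is the simplest guard, a reranker the heavier fix.
    scored = sorted((sq_dist(query_vec, vec), text) for text, vec in chunks)
    hits = scored[:k]
    if max_distance is not None:
        hits = [(d, t) for d, t in hits if d <= max_distance]
    return [t for _, t in hits]
```

With `max_distance=None`, even a wildly unrelated chunk still comes back on a large enough k, which is exactly the guaranteed-response trap described above.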

      • electroglyph 3 hours ago
        unrelated in any way? that's not normal. have you tested the model to make sure you have sane output? unless you're using sentence-transformers (which is pretty foolproof) you have to be careful about how you pool the raw output vectors
  • chrisgd 2 hours ago
    Really great work but have to agree with others that I don’t see the threads.

    The connection I found most striking that the LLM didn’t make was between Jobs and The Elephant in the Brain

    The Elephant in the Brain: The less we know of our own ugly motives, the easier it is to hide them from others. Self-deception is therefore strategic, a ploy our brains use to look good while behaving badly.

    Jobs: “He can deceive himself,” said Bill Atkinson. “It allowed him to con people into believing his vision, because he has personally embraced and internalized it.”

  • 8organicbits 11 hours ago
    Can someone break this down for me?

    I'm seeing "Thanos committing fraud" in a section about "useful lies". Given that the founder is currently in prison, it seems odd to consider the lie useful instead of harmful. It kinda seems like the AI found a bunch of loosely related things and mislabeled the group.

    If you've read these books I'm not seeing what value this adds.

    • Closi 11 hours ago
      I guess the lies were useful until she got caught?
      • irishcoffee 9 hours ago
        Why lie if it isn’t useful? Lying is generally bad, why do a generally bad thing if there isn’t at least a justification, a “use” if you will.
    • Terretta 7 hours ago
      Thanos is the comic book villain snapping his fingers.

      Theranos is the fraud mentioned in the piece.

  • theturtletalks 12 hours ago
    In a similar vein, I've been using Claude Code to "read" Github projects I have no business understanding. I found this one trending on Github with everything in Russian and went down the rabbit hole of deep packet inspection[0].

    0. https://github.com/ValdikSS/GoodbyeDPI

    • noname120 10 hours ago
      ValdikSS is the guy behind the SBC XQ patches for Android (that alas were never merged by G). I didn’t expect to see him here with another project!

      https://habr.com/en/articles/456476/

      https://android-review.googlesource.com/c/platform/system/bt...

    • dinkleberg 12 hours ago
      That's a cool idea. There are so many interesting projects on GitHub that are incomprehensible without a ton of domain context.
      • theturtletalks 12 hours ago
        I got the idea from an old post on here called The Story of Mel[0], where OP talks about the beauty of Mel's intricate machine code on an RPC-4000.

        This is the part that always stuck with me:

        I have often felt that programming is an art form, whose real value can only be appreciated by another versed in the same arcane art; there are lovely gems and brilliant coups hidden from human view and admiration, sometimes forever, by the very nature of the process. You can learn a lot about an individual just by reading through his code, even in hexadecimal. Mel was, I think, an unsung genius.

        0. http://catb.org/esr/jargon/html/story-of-mel.html

        • coolewurst 12 minutes ago
          Thank you for sharing that story. Mel seems virtuosic, but is that really art? Optimizing pattern positioning on a drum for maximum efficiency. Is that expression?
  • smusamashah 12 hours ago
    I don't understand the lines connecting two pieces of text. In most cases, the connected words have absolutely zero connection with each other.

    In "Father wound" the words "abandoned at birth" are connected to "did not". Which makes it look like those visual connections are just a stylistic choice and don't carry any meaning at all.

    • Oras 12 hours ago
      I had the exact same impression.
    • hecanjog 4 hours ago
      Yes, they look really good but they're being connected by an LLM.
  • johnwatson11218 10 hours ago
    I did something similar: I used pdfplumber to extract text from my PDF book collection, dumped it into PostgreSQL, then chunked the text into 100-char chunks with a 10-char overlap. These chunks were embedded directly into a 384-dimensional space using Python's sentence_transformers. I then averaged all chunks for a doc and wrote that single vector back to PostgreSQL, and used UMAP + HDBSCAN to perform dimensionality reduction and clustering. I ended up with a 2D data set that I can plot with Plotly to see my clusters. It is very cool to play with. It takes hours to import 100 PDF files, but I can take one folder that contains a mix of programming titles, self-help, math, science fiction, etc., and after the fully automated analysis you can clearly see the different topic clusters.

    I just spent time getting it all running on Docker Compose and moved my web UI from Express.js to Flask. I want to get the code cleaned up and open source it at some point.
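The chunking and mean-pooling steps in that pipeline can be sketched in a few lines of pure Python (the embedding call itself is omitted; in the real pipeline each chunk would go through sentence_transformers to get a 384-dimensional vector before pooling):

```python
def chunk_text(text, size=100, overlap=10):
    # Fixed-size chunks with a small overlap: 100-char chunks
    # with a 10-char overlap, as in the pipeline described above.
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

def mean_pool(vectors):
    # Average the per-chunk embedding vectors into a single document
    # vector, which is what gets written back to the database per book.
    dim = len(vectors[0])
    return [sum(v[d] for v in vectors) / len(vectors) for d in range(dim)]
```

UMAP + HDBSCAN then operate on the stacked document vectors; neither step needs the raw text again.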

    • ct0 10 hours ago
      This sounds amazing, totally interested in seeing the approach and repo.
    • hellisad 8 hours ago
      Sounds a lot like Bertopic. Great library to use.
  • pxc 12 hours ago
    I read a book maybe a decade ago on the "digital humanities". I wish now I could remember the title and author. :(

    Anyway, it introduced me to the idea of using computational methods in the humanities, including literature. I found it really interesting at the time!

    One of the terms it introduced me to is "distant reading", whose name mirrors that of a technique you may have studied in your gen eds if you went to university ("close reading"). The idea is that rather than zooming in on some tiny piece of text to examine very subtle or nuanced meanings, you zoom out to hundreds or thousands of texts, using computers to search them for insights that only emerge from large bodies of work as wholes. The book argued that there are likely some questions that it is only feasible to ask this way.

    An old friend of mine used techniques like this for her dissertation in rhetoric, learning enough Python along the way to write the code needed for the analyses she wanted to do. I thought it was pretty cool!

    I imagine LLMs are probably positioned now to push distant reading forward in a number of ways: enabling new techniques, allowing old techniques to be used without writing code, and helping novices get started with writing some code. (A lot of the maintainability issues that come with LLM code generation happily don't apply to research projects like this.)

    Anyway, if you're interested in other computational techniques you can use to enrich this kind of reading, you might enjoy looking into "distant reading": https://en.wikipedia.org/wiki/Distant_reading

    • plutokras 12 hours ago
      > I wish now I could remember the title and author.

      LLMs are great at finding media by vague descriptions. ;)

      • ako 11 hours ago
        According to Claude (easy guess from the wikipedia link?):

        The book is almost certainly by *Franco Moretti*, who coined the term "distant reading." Given the timeframe ("maybe a decade ago") and the description, it's most likely one of these two:

        1. *"Distant Reading"* (2013) — A collection of Moretti's essays that directly takes the concept as its title. This would fit well with "about a decade ago."

        2. *"Graphs, Maps, Trees: Abstract Models for Literary History"* (2005) — His earlier and very influential work that laid out the quantitative, computational approach to literary analysis, even if it didn't use "distant reading" as prominently in the title.

        Moretti, who founded the Stanford Literary Lab, was the major proponent of the idea that we should analyze literature not just through careful reading of individual canonical texts, but through large-scale computational analysis of hundreds or thousands of works—looking at trends in genre evolution, plot structures, title lengths, and other patterns that only emerge at scale.

        Given that the commenter specifically remembers learning the term "distant reading" from the book, my best guess is *"Distant Reading" (2013)*, though "Graphs, Maps, Trees" is also a strong possibility if their memory of "a decade" is approximate.

      • pxc 7 hours ago
        After some digging, I think it was likely this one: https://direct.mit.edu/books/book/5346/Digital-Humanities
  • urbandw311er 11 hours ago
    This feels like a nice idea but the connection between the theme and the overarching arc of each book seems tenuous at best. In some cases it just seems to have found one paragraph from thousands and extrapolated a theme that doesn’t really thread through the greater piece.

    I do like the idea though — perhaps there is a way to refine the prompting to do a second pass or even multiple passes to iteratively extract themes before the linking step.

  • bonkusbingus 11 hours ago
    "There are, you see, two ways of reading a book: you either see it as a box with something inside and start looking for what it signifies, and then if you're even more perverse or depraved you set off after signifiers. And you treat the next book like a box contained in the first or containing it. And you annotate and interpret and question, and write a book about the book, and so on and on. Or there's the other way: you see the book as a little non-signifying machine, and the only question is "Does it work, and how does it work?" How does it work for you? If it doesn't work, if nothing comes through, you try another book. This second way of reading's intensive: something comes through or it doesn't. There's nothing to explain, nothing to understand, nothing to interpret." — Gilles Deleuze
    • drakeballew 9 hours ago
      I am not familiar with the source of this quote, but I don't disagree, it is just incredibly reductive. Gilles Deleuze him-/her-self was not born and did not live in a vacuum. They were influenced and mimetically reproduced ideas they were exposed to, like we all do. I don't find the point of this project meaningless myself. The opposite in fact. But the results are not accurate for anyone who has actually read any of these texts.
  • zkmon 2 hours ago
    Given the common goals of every book (fame and sales by grabbing user attention), the general themes and styles would have high similarity. It's like flowers with bright colors and nice shapes.

    Orwellian motives (sheer egoism, aesthetic enthusiasm, historical impulse and political purpose) are somewhat dated.

  • tolerance 11 hours ago
    I don’t like this product as a service to readers (i.e., people who read as a cognitive/philosophical exploit) but I do think that somewhere embedded in its backend there are things of benefit.

    I think that this sucks the discreet joy out of reading and learning. Having the ways that the topics within a certain book can cross over and lead into another book of a different topic externalized is hollowing and I don’t find it useful.

    On the other hand I feel like seeing this process externalized gives us a glimpse at how “the algorithms” (read: recommender systems) suggest seemingly disjunctive content to users. So as a technical achievement I can’t knock what you’ve done and I’m satisfied to see that you’re the guy behind the HN Book map that I thought was nice too.

    At its core this looks like a representation of the advantages that LLMs can afford to the humanities. Most of us know how Rob Pike feels about them. I wonder if his senior former colleague feels the same: https://www.cs.princeton.edu/~bwk/hum307/index.html. That’s a digression, but I’d like to see some people think in public about how to reasonably use these tools in that domain.

    • mathgeek 11 hours ago
      > Having the ways that the topics within a certain book can cross over and lead into another book of a different topic externalized is hollowing and I don’t find it useful.

      Intuitively, I agree. This feels like the difference between being a creator (of your own thoughts as inspired by another person's) and a consumer (although in a somewhat educational sense). There would need to be a big advantage to being taught those initial thoughts, analogous to why we teach folks algebra/calculus via formulas rather than having every student figure out proofs for themselves.

  • lkbm 11 hours ago
    Earlier today, I was thinking about doing something somewhat similar to this.

    I was recently trying to remember a portal fantasy I read as a kid. Goodreads has some impressive lists, not just "Portal Fantasies"[0] but "Portal Fantasies where the portal is on water"[1], and seven more "where/what's the portal" categories like that.

    But the portal fantasy I was seeking is on the water and not on the list.

    LLMs have failed me so far, as has browsing the larger portal fantasy list. So, I thought, what if I had an LLM look through a list of kids books published in the 1990s and categorize "is this a portal fantasy?" and "which category is the portal?"

    I would 1. possibly find my book and 2. possibly find dozens of books I could add to the lists. (And potentially help augment other Goodread-like sites.)

    Haven't done it, but I still might.

    Anyway, thanks for making this. It's a really cool project!

    [0] https://www.goodreads.com/list/show/103552.Portal_Fantasy_Bo...

    [1] https://www.goodreads.com/list/show/172393.Fiction_Portal_is...

  • barrenko 1 hour ago
    On a long enough timeline, we will be using Claude Code for .. any.. type of work?
  • amadeuswoo 11 hours ago
    The feedback loop you describe—watching Claude's logs, then just asking it what functionality it wished it had—feels like an underexplored pattern. Did you find its suggestions converged toward a stable toolset, or did it keep wanting new capabilities as the trails got more sophisticated?
    • samuelknight 11 hours ago
      I do this all the time in my Claude Code workflow:

      - Claude will stumble a few times before figuring out how to do part of a complex task

      - I will ask it to explain what it was trying to do, how it eventually solved it, and what was missing from its environment

      - Trivial pointers go into the CLAUDE.md; complex tasks go into a new project skill or a helper script

      This is the best way to reinforce a copilot because models are pretty smart most of the time and I can correct the cases where they stumble with minimal cognitive effort. Over time I find more and more tasks are solved by agent intelligence or these happy-path hints. As primitive as it is, CLAUDE.md is the best we have for long-term adaptive memory.

    • pmaze 11 hours ago
      I ended up judging where to draw the line. Its initial suggestions were genuinely useful and focused on making the basic tool use more efficient. e.g. complaining about a missing CLI parameter that I'd neglected to add for a specific command, requesting to let it navigate the topic tree in ways I hadn't considered, or new definitions for related topics. After a couple iterations the low hanging fruit was exhausted, and its suggestions started spiralling out beyond what I thought would pay off (like training custom embeddings). As long as I kept asking it for new ideas, it would come up with something, but with rapidly diminishing returns.
  • timoth3y 11 hours ago
    What meaningful connections did it uncover?

    You have an interesting idea here, but looking over the LLM output, it's not clear what these "connections" actually mean, or if they mean anything at all.

    Feeding a dataset into an LLM and getting it to output something is rather trivial. How is this particular output insightful or helpful? What specific connections gave you, the author, new insight into these works?

    You correctly, and importantly point out that "LLMs are overused to summarise and underused to help us read deeper", but you published the LLM summary without explaining how the LLM helped you read deeper.

    • pmaze 10 hours ago
      The connections are meaningful to me in so far as they get me thinking about the topics, another lens to look at these books through. It's a fine balance between being trivial and being so out there that it seems arbitrary.

      A trail that hits that balance well IMO is https://trails.pieterma.es/trail/pacemaker-principle/. I find the system theory topics the most interesting. In this one, I like how it pulled in a section from Kitchen Confidential in between oil trade bottlenecks and software team constraints to illustrate the general principle.

      • timoth3y 10 hours ago
        Can you walk me through some of the insights you gained? I've read several of those books, including Kitchen Confidential and Confessions of an Economic Hit Man, and I don't see the connection that the LLM (or you) is trying to draw. What is the deeper insight into these works that I am missing?

        I'm not familiar with the term "Pacemaker Principle" and Google search was unhelpful. What does it mean in this context? What else does this general principle apply to?

        I'm perfectly willing to believe that I am missing something here. But reading through many of the supportive comments, it seems more likely that this is an LLM Rorschach test where we are given random connections and asked to do the mental work of inventing meaning in them.

        I love reading. These are great books. I would be excited if this tool actually helps point out connections that have been overlooked. However, it does not seem to do so.

        • varenc 5 hours ago
          > Can you walk me though some of the insights you gained?

          This made me realize that so many influential figures have either absent fathers, or fathers that berated them or didn't give them their full trust/love. I think there's something to the idea that this commonality is more than coincidence. (that's the only topic of the site I've read through yet, and I ignored the highlighted word connections)

        • gchamonlive 10 hours ago
          > we are given random connections and asked to do the mental work of inventing meaning in them

          How is that different from having an insight yourself and later doing the work to see if it holds on closer inspection?

          • delusional 9 hours ago
            Don't ask me to elaborate on this, because it's kinda nebulous in my mind. I think there's a difference between arriving at an insight and interrogating it on your own initiative, and being given the same insight.
            • gchamonlive 9 hours ago
              I don't doubt there is a difference in the mechanism of arriving at a given connection. What I think is not possible to distinguish is the connection that someone made intuitively after reading many sources and the one that the AI makes, because both will have to undergo scrutiny before being accepted as relevant. We can argue there could be a difference in quality, depth and search space, maybe, but I don't think there is an ontological difference.
              • fwip 8 hours ago
                The one that you thought of in the shower has a much greater chance of being right, and also of being relevant to you.
    • Aurornis 9 hours ago
      I like the design that highlights words in one summary and links them to highlights in the next. It's a cool idea.

      But so many of the links just don't make sense, as several comments have pointed out. Are these actually supposed to represent connections between books, or is it just a random visual effect that's supposed to imply they're connected?

      I clicked on one category and it has "Us/Them" linked to "fictions" in the next summary. I get that it's supposed to imply some relationship, but I can't parse the relationships.

    • rjh29 11 hours ago
      100 books is too small a dataset - particularly given it's a set of HN recommendations (i.e. a very narrow and specific subset of books). A larger set would probably draw more surprising and interesting groupings.
      • DyslexicAtheist 10 hours ago
        > 100 books is too small a dataset

        this sounds off to me. I read the same 8 to 10 books over and over, and with every read I discover new things. The idea of more books being more useful stands against reading the same books on repeat. And while I'm not religious, what about people who read only one book (the Bible, or the Koran) and have claimed for a thousand years that they get all their wisdom from it?

        If I have a library of 100+ books and they are not enough, then isn't the quality of the books the problem, and not the number of books in the library?

  • pennaMan 31 minutes ago
    this is amazingly cool, great work!
  • lisdexan 9 hours ago
    Finally, Schizophrenia as a Service (SaaS).
  • guidoism 5 hours ago
    Nice! I've been using Claude Code and ChatGPT for something similar. My inspiration is Adler's concept of The Great Conversation and Adler's Propædia. I've been able to jump between books to read about the same concept from different authors' perspectives.
  • hecanjog 7 hours ago
    You really know what a good interface should be like, this is really inspiring. So is the design of everything I've seen on your website!

    I won't pile on to what everyone else has said about the book connections / AI part of this (though I agree that part is not the really interesting or useful thing about your project) but I think a walk-through of how you approach UI design would be very interesting!

  • hising 11 hours ago
    Yeah, I had a similar idea. I used the OpenAI API to break movies down into the 3-act structure, narrative, pacing, character arcs, etc., and then tried to find similar movies using PostgreSQL with pgvector. The idea was to have another way to find movies I would like to watch next based on more than "similar movies" on IMDb. Threw some hours at it, but I guess it is a system that needs a lot of data, a lot of tokens and an enormous amount of tweaking to be useful. I love your idea! I agree with you that we could use LLMs for this kind of stuff that we as humans are quite bad at.
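The ranking step in a setup like this reduces to nearest-neighbour search under cosine distance, which is what pgvector's `<=>` operator computes. A pure-Python stand-in to show the logic (the movie titles and 2D vectors are made up for illustration; real embeddings would be hundreds of dimensions):

```python
import math

def cosine_distance(a, b):
    # 1 - cos(a, b): the value pgvector's <=> operator returns.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / norm

def nearest(query_vec, catalog, k=3):
    # Return the k (title, vector) entries closest to the query embedding.
    return sorted(catalog, key=lambda item: cosine_distance(query_vec, item[1]))[:k]
```

In SQL this whole function collapses to an `ORDER BY embedding <=> query LIMIT k`, with pgvector's index doing the heavy lifting.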
  • Aurornis 13 hours ago
    It’s interesting how many of the descriptions have a distinct LLM-style voice. Even if you hadn’t posted how it was generated I would have immediately recognized many of the motifs and patterns as LLM writing style.

    The visual style of linking phrases from one section to the next looks neat, but the connections don’t seem correct. There’s a link from “fictions” to “internal motives” near the top of the first link and several other links are not really obviously correct.

    • pmaze 13 hours ago
      The names & descriptions definitely have that distinct LLM flavour to them, regardless of which model I used. I decided to keep them, but as short as possible. In general, I find the recombination of human-written text to be the main interest.

      There are two stages to the linking: first juxtaposing the excerpts, then finding and linking key phrases within them. I find the excerpts themselves often have interesting connections between them, but the key phrases can be a bit out there. The "fictions" to "internal motives" one does gel for me, given the theme of deceiving ourselves about our own motivations.

    • reedf1 12 hours ago
      Well even the post itself reads to me as AI generated
  • itsangaris 10 hours ago
    surprised that "seeing like a state" didn't get included in the "legibility tax" category
  • JimmyJamesJames 10 hours ago
    Like this initial step and its findings.

    #1: Would a larger dataset increase the depth and breadth of insight? (go to #2)

    #2: With the initial top 100, are there key ‘super node’ books that stand out as ones to read due to the breadth they offer? Would a larger dataset identify further ‘super node’ books?

  • amelius 11 hours ago
    Makes me wonder, how well could an LLM-based solution score on the Netflix prize?

    https://en.wikipedia.org/wiki/Netflix_Prize

    (Are people still trying to improve upon the original winning solution?)

  • adsharma 7 hours ago
    This is GraphRAG using SQLite.

    Wouldn't it be good if recursive Leiden and Cypher were built into an embedded DB?

    That's what I'm looking into with mcp-server-ladybug.

  • dev_l1x_be 9 hours ago
    Claude Code is good at arranging random things into categories; with code, configuration and documentation files it barely goes down random rabbit holes or hallucinates categories for me.
  • sciences44 11 hours ago
    Love the originality here - makes you curious to explore more.

    Solid technical execution too. Well done!

  • threecheese 7 hours ago
    Where did you come across Leiden partitioning? I’m facing a similar use case and wonder what you’re reading.
  • chromanoid 9 hours ago
    > A fun tendency is that Claude kept getting distracted by topics of secrecy, conspiracy, and hidden systems - as if the task itself summoned a Foucault’s Pendulum mindset.

    I really appreciate you mentioning this. I think this is the nature of LLMs in general. Any symbol it processes can affect its reasoning capabilities.

  • pharrington 3 hours ago
    Please don't give yourself LLM-induced psychosis.
  • dangoodmanUT 12 hours ago
    The UI animations are so fun
  • jgalt212 8 hours ago
    What did it say about who wrote To Kill a Mockingbird?
  • wormpilled 13 hours ago
    >A fun tendency is that Claude kept getting distracted by topics of secrecy, conspiracy, and hidden systems

    Interesting... seems like it wants the keys on your system! ;)

  • typon 4 hours ago
    The website design and content are much nicer than the "ideas" here. Just standard LLM slop if you have actually read some of these books.
  • miracoli 10 hours ago
    wow I hope the bubble pops soon.. now that you discovered books with AI that was illegally trained on them, how about reading them?
    • nephihaha 10 hours ago
      I'm not sure I understand what the connections are exactly, or whether they go much deeper than certain words and phrases.
      • only-one1701 10 hours ago
        I'm really not trying to be mean, but one of the things we learn in the humanities is that basically any two texts can be connected via extremely broad statements (e.g. "Perfect is the enemy of the good"). This is like the joke on twitter about how every couple of years someone in tech invents the concept of public transportation.
  • only-one1701 10 hours ago
    This is an IQ test lol
  • jereees 11 hours ago
    now do this for research papers! fun stuff :)
  • mannanj 10 hours ago
    Seems like a lot of successful leaders have a history of or normalize deception and lying for some benefit.
  • joe_the_user 12 hours ago
    > A fun tendency is that Claude kept getting distracted by topics of secrecy, conspiracy, and hidden systems - as if the task itself summoned a Foucault’s Pendulum mindset.

    It's all fun and games 'till someone loses an eye/mind/even-tenuous-connection-to-reality.

    Edit: I'd mention that the themes Claude finds qualify as important stuff imo. But they're all pretty grim and it's a bit problematic focusing on them for a long period. Also, they are often the grimmest spin on things that are well known.

    • drakeballew 9 hours ago
      Don't believe Claude, let's put it that way.
  • durch 12 hours ago
    [flagged]
    • glemion43 11 hours ago
      I've been carrying a thought around for the last few weeks:

      An LLM is a transformer. It transforms a prompt into a result.

      Or a human idea into a concrete Java implementation.

      Currently I'm exploring what unexpected or curious transformations LLMs are capable of but haven't found much yet.

      At least I myself was surprised that an LLM can transform a description of something into an image by transforming it into an SVG.

      • durch 11 hours ago
        Format conversions (text → code, description → SVG) are the transformations most reach for first. To me the interesting ones are cognitive: your vague sense → something concrete you can react to → refined understanding. The LLM gives you an artifact to recognize against. That recognition ("yes, more of that" or "no, not quite") is where understanding actually shifts. Each cycle sharpens what you're looking for, a bit like a flywheel, each feeds into the next one.
    • calmoo 11 hours ago
      Ironically your comment is clearly written by an LLM.
      • durch 11 hours ago
        Ironic indeed: pattern-matching the prose style instead of engaging the idea is exactly the shallow reading the post is about.
        • calmoo 11 hours ago
          Your original comment is completely void of any substance or originality. Please don't fill the web with robot slop and use your own voice. We both know what you're doing here.
          • drekipus 10 hours ago
            I dunno, he might have just been reading so much that he really writes like this now. I've seen it happen.
            • calmoo 10 hours ago
              no, definitely not. It was 100% LLM written. Look at their post history.
    • sidrag22 10 hours ago
      > gets at something fundamental.

        :D
    • afro88 11 hours ago
      LLMs are generators, and that was the correct way to view them at the start. Agents explore.
      • durch 11 hours ago
        Generator vs. explorer is a useful distinction, but it's incomplete. Agents without a recognition loop are just generators with extra steps.

        What makes exploration valuable is the cycle: act, observe, recognize whether you're closer to what you wanted, then refine. Without that recognition ("closer" or "drifting"), you're exploring blind.

        Context is what lets the loop close. You need enough of it to judge the outcome. I think that real shift isn't generators → agents. It's one-shot output → iterative refinement with judgment in the loop.

        • throwawaySimon 11 hours ago
          Please stop.
          • durch 11 hours ago
            Is there something in there you'd like to discuss further? I've been thinking a lot about these ideas ever since LLMs came around, and I think there are many more of these discussions ahead of us...
            • throwawaySimon 11 hours ago
              Kind of tedious trying to have a discussion with someone who clearly generates their part.
  • napolux 12 hours ago
    Monetize it!