I was a top 0.01% Cursor user, then switched to Claude Code 2.0

(blog.silennai.com)

48 points | by SilenN 14 hours ago

18 comments

  • tuckwat 1 hour ago
    > You no longer need to review the code. Or instruct the model at the level of files or functions. You can test behaviors instead.

    Maybe for a personal project but this doesn't work in a multi-dev environment with paying customers. In my experience, paying attention to architecture and the code itself results in a much more pliable application that can be evolved.

    • madrox 1 hour ago
      It doesn't work... yet. I agree my stomach churns a little at this sentence. However, paying customers care about reliability and performance. Code review helps with that today, but it's only a matter of time before it becomes more performative than useful in serving those goals, at the cost of velocity.
      • AIorNot 1 hour ago
        The (multi-)billion-dollar question is when that will happen, I think. Case in point:

        the OP is a kid in his 20s describing the history of the last 3 years or so of small-scale AI development (https://www.linkedin.com/in/silen-naihin/details/experience/)

        How does that compare to those of us with 15-50 years of software engineering experience working on giant codebases that have years of domain rules, customers, use cases, etc.?

        When will AI be ready? Microsoft tried to push AI into big enterprise, and Anthropic is doing a better job - but it's all still in its infancy.

        Personally, I hope it won't be ready for another 10 years, so I can retire before it takes over :)

        I remember when folks on HN all called this AI stuff made up

        • madrox 1 hour ago
          As a guy in his mid-forties, I sympathize with that sentiment.

          I do think you're missing how this will likely go down in practice, though. Those giant codebases with years of domain rules are all legacy now. The question is how quickly a new AI codebase could catch up to that code base and overtake it, with all the AI-compatibility best practices baked in. Once that happens, there is no value in that legacy code.

          Any prognostication is a fool's errand, but I wouldn't go long on those giant codebases.

          • AIorNot 14 minutes ago
            Yeah, agreed - it all depends on how quickly AI (or more aptly, AI-driven work done by humans hoping to make a buck) starts replacing real chunks of production workflows.

            “Prediction is hard, especially about the future” - Yogi Berra

            As a hedge, I have personally dived deep into AI coding - have been for 3 years now. I’ve even launched 2 AI startups and am working on a third - but it’s all so unpredictable and hardly lucrative yet.

            As an over-50-year-old, I’m a clear target for replacement by AI.

    • zdragnar 1 hour ago
      Everyone who is responsible for SOC 2 at their company just felt a disturbance.

      Honestly, I can't wait for AI development practices to mature, because I'm really tired of the fake hype and missteps getting in the way of things.

      • LtWorf 1 hour ago
        Why would AI not fall for fake hype?
    • nzoschke 1 hour ago
      Counterargument...

      High-velocity teams also observe production system telemetry and use error rates, tracing, and more to maintain high SLAs for customers.

      They set a "budget" and use feature flagging to release risky code and roll back or roll forward based on metrics.

      So agentic coding can feed back on observed behaviors in production too.
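
      In code, that feedback rule can be sketched like this - a minimal, hypothetical sketch, where the metric names, thresholds, and decision labels are illustrative rather than taken from any particular feature-flag product:

```python
def rollout_decision(error_rate: float, p99_latency_ms: float,
                     error_budget: float = 0.01,
                     latency_slo_ms: float = 500.0) -> str:
    """Decide whether a flagged release rolls forward or back,
    comparing observed production metrics to the error budget and SLO."""
    if error_rate > error_budget or p99_latency_ms > latency_slo_ms:
        return "rollback"      # budget blown: disable the flag
    return "roll_forward"      # within budget: widen the rollout
```

      An agent could run the same check against live telemetry after each deploy and act on the result, which is the feedback loop described above.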

  • ch_123 1 hour ago
    > Use Claude Code if you

    > a) never plan on learning and just care about outputs, or

    > b) are an abstraction maximalist.

    As a Claude Code user for about 6 months, I don't identify with either of these categories. Personally, I switched to Claude Code because I don't particularly enjoy VS Code (or forks thereof). I got used to a two-window workflow - Claude Code for AI-driven development, and GoLand for making manual edits to the codebase. As of a few months ago, Claude Code can show diffs in GoLand, making my workflow even smoother.

    • kldavis4 1 hour ago
      My only gripe with the GoLand integration is that if you have multiple terminals open with CC, it will randomly switch to the first terminal for no apparent reason. Then, if you aren't paying close attention, you prompt the wrong instance.
    • csto12 1 hour ago
      Do you find yourself making manual changes 50%, 40%, 30%… of the time?

      Always curious to hear how individuals have their workflows, if you don’t mind sharing.

  • asdff 1 hour ago
    >> You no longer need to review the code. Or instruct the model at the level of files or functions. You can test behaviors instead.

    I think this is where things will ultimately head. You generate random code - purely random, in raw machine-readable binary - and simply evaluate a behavior. Most randomly generated code will not work. Some, however, will work, and within that working code, some will be far faster - this is the code that is used.

    No different than what a geneticist might do evaluating generated mutants for favorable traits. Knowledge of the exact genes or pathways involved is not even required, one can still select among desired traits and therefore select for that best fit mechanism without even knowing it exists.

    • hollowturtle 1 hour ago
      Why should we throw away decades of development in deterministic algorithms? Why do tech people mention "geneticists"? I would never select an algorithm with a "good" flying trait to make an airplane work; that's nuts.
      • asdff 1 hour ago
        But you have already selected an algorithm with a "good" flying trait for making airplanes - just via another avenue than pure random generation. The evolution of the bird has come up with another algorithm, for example, where they use flapping wings instead of thrust from engines. Even in airplane development, a lot was learned by studying birds, which are the result of a random-walk algorithm.
        • hollowturtle 58 minutes ago
          No, there is no selection and no traits to pick; it's the culmination of research and human engineering. An airplane is a complex system that needs serious engineering. You can study birds, but only up to a certain point - if you like it, go bird watching, but it's anything but engineering.
          • asdff 46 minutes ago
            >it's the culmination of research and human engineering.

            And how is this different than the process of natural selection? More fit ideas win out relative to less fit and are iterated upon.

            • hollowturtle 26 minutes ago
              First of all, natural selection doesn't happen per se, nor is it controlled by some inherent mechanism; it's the by-product of many factors, external and internal. So the comparison is just wrong. Human engineering is an iterative process, not a selection. And if we want to call it selection - even though that is a stretch - we're controlling it; we're the master of puppets, while natural selection is anything but a controlled process. We don't select a more resistant wing, we engineer a wing with a high bending tolerance. Again, it's an iterative process.
              • asdff 19 minutes ago
                We do select for a more resistant wing. How did we determine that this wing is more resistant? We modeled its bending tolerance and selected this particular design against other designs that had worse evaluated results for bending tolerance.
      • phist_mcgee 1 hour ago
        Great rule of business: sell a solution that causes more problems, requiring the purchase of more solutions.
        • hollowturtle 1 hour ago
          Customers are tired of getting piles of shit, look at the Windows situation
          • lenerdenator 1 hour ago
            Or don't sell the solution. When you have monopolies, regulatory capture, and endless mountains of money, you can more or less do what you'd like.
            • hollowturtle 56 minutes ago
              That's a lie. People eventually find a way out; it was always like that, be it open source or by innovating, eventually leaving the tech giants that are unable to innovate to die. We have Linux, and this year will be the most exciting yet for the Linux desktop, given how bad the Windows situation is.
              • bradlys 32 minutes ago
                Only been hearing that for twenty years and these tech giants are bigger than they’ve ever been.

                I remember when people said Open Office was going to be the default because it was open source, etc etc etc. It never happened. Got forked. Still irrelevant.

                • hollowturtle 19 minutes ago
                  I said "be it open source or by innovating" - e.g. Google innovated and killed many, and also contributed a lot to open source. Android is a Linux success, ChromeOS too. Now Google stinks and is not innovating anymore, except when other companies, like OpenAI, come for its lunch. Google was caught off guard but is eventually catching up. Sooner or later, big tech gets eaten by the next big tech. I agree that if we stop innovating that would never happen - but Open Office is the worst example you could have picked.
    • stavros 1 hour ago
      Maybe, but you won't be able to test all behaviors and you won't have enough time to try a million alternatives. Just because of the number of possibilities, it'll be faster to just read the code.
      • asdff 1 hour ago
        Eventually, generation and evaluation will be fast enough that testing a million alternatives becomes viable. It's telling that you suggest there might be a million alternatives, yet that it would be faster to just read the code and settle on one. How would that be determined? Did the author of the standard library really come up with the best way when writing those functions? Or did they come up with something that seemed alright to ship relative to the other ideas people came up with?

        I think we need to think outside the box here and realize ideas can be generated, evaluated, and settled upon far faster than any human operates. The idea of doing what a trillion humans evaluating different functions can do is actually realistic with the path of our present technology. We are at the cusp of some very remarkable times, even more remarkable than the innovations of the past 200 years, should we make progress on this effort.

        • stavros 1 hour ago
          If this were viable, we'd all be running Haiku ten times in the time it took to run Opus once, but nobody does.
          • asdff 48 minutes ago
            We don't have the compute for this today. We will in several centuries, if compute growth continues.
        • jimbokun 57 minutes ago
          This comment strikes me as not having a good intuition for how fast the space of possible programs can grow.
          • asdff 47 minutes ago
            You don't think the space of possible programs can be parsed with increased compute?
    • lunar_mycroft 1 hour ago
      For this to work, you'd have to fully specify the behavior of your program in the tests. Put another way, at that point your tests are the program. So the question is: which is a more convenient way to specify the behavior of a program - a traditional programming language, or tests written in that language? I think the answer should be fairly obvious.
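
      To make that concrete, here is what a behavioral spec looks like when expressed as generated tests - a hypothetical sketch for something as small as a sort function, where even this tiny spec has to pin down length, ordering, and element preservation:

```python
import random

def behaves_like_sort(candidate) -> bool:
    """Check a candidate function against a behavioral spec for sorting.
    A fixed seed makes the generated inputs identical on every run."""
    rng = random.Random(42)
    for _ in range(100):
        xs = [rng.randint(-50, 50) for _ in range(rng.randint(0, 20))]
        # Comparing to sorted(xs) pins down ordering, length,
        # and element preservation in one check.
        if candidate(list(xs)) != sorted(xs):
            return False
    return True
```

      `behaves_like_sort(sorted)` passes, while the identity function fails on the first unsorted input - and anything the spec doesn't check (stability, performance, behavior on non-integers) is left entirely to chance.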
      • asdff 1 hour ago
        Behavior does not need to be fully specified at the outset. It could be evaluated after the run. We've actually done this before in our own technology. We studied birds and their flight characteristics, and took lessons from that for airplane development. What is a bird but the output of a random walk algorithm selected by constraints bound by so many latent factors we might never fully grasp?
        • lunar_mycroft 55 minutes ago
          > Behavior does not need to be fully specified at the outset. It could be evaluated after the run.

          This doesn't work when the software in question is written by competent humans, let alone by the sort of random process you describe. A run of the software only tells you its behavior for a given input; it doesn't tell you all possible behaviors of the software. "I ran the code and the output looked good" is nowhere near sufficient.

          > We've actually done this before in our own technology. We studied birds and their flight characteristics, and took lessons from that for airplane development.

          There is a vast chasm between "bioinspiration is sometimes a good technique" and "genetic algorithms are a viable replacement for writing code".

          • asdff 42 minutes ago
            Genetic algorithms created our species, which is far more complex than anything we have written in computer science. I think they have stood up to the test of creating a viable product for a given behavior.

            And with future compute, you will be able to evaluate behavior across an entire range of inputs for countless putative functions. There will be a time when none of this is compute bound. It is today, but in three centuries or more?

    • mitemte 1 hour ago
      I can see a lot of negatives in relation to removing the human readable aspect of software development. Thorough testing would be virtually impossible because we’d be relying on fuzzing to iron out potential edge cases or bugs.

      In this situation, AI companies are incentivised to host the services their tooling generates. If we don't get source code, it is much easier for them to justify not sharing it. Plus, who is to say the machine code even works on consumer hardware anyway? It leads to a future where users specify inputs while companies generate programs and handle execution. Everything becomes a black box. No thank you.

      • asdff 49 minutes ago
        All these questions apply to agriculture too, yet you say "yes thank you, and please continue" to that industry, I am sure - an industry that seeks to improve its product through random walks and unknown mechanisms. Maybe take a step back and examine your own biases.
    • madrox 1 hour ago
      You're describing genetic algorithms: https://en.wikipedia.org/wiki/Genetic_algorithm
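
      For reference, the core loop can be sketched in a few lines - a toy example evolving bit strings toward a fixed target, not a code-generation system; the population size, mutation rate, and fitness function are all illustrative choices:

```python
import random

def bit_fitness(candidate: str, target: str) -> int:
    """Fitness = number of positions matching the target bit string."""
    return sum(a == b for a, b in zip(candidate, target))

def genetic_search(target: str, pop_size: int = 100, generations: int = 200,
                   mutation_rate: float = 0.05, seed: int = 0) -> str:
    """Evolve random bit strings toward `target` via selection,
    single-point crossover, and point mutation (keeping the fittest half)."""
    rng = random.Random(seed)
    n = len(target)
    pop = ["".join(rng.choice("01") for _ in range(n)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda s: bit_fitness(s, target), reverse=True)
        if bit_fitness(pop[0], target) == n:
            break                               # exact match found
        parents = pop[: pop_size // 2]          # selection: fittest half survive
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n)           # single-point crossover
            child = "".join(
                bit if rng.random() >= mutation_rate else rng.choice("01")
                for bit in a[:cut] + b[cut:])   # point mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=lambda s: bit_fitness(s, target))
```

      The expensive part in any real application is the fitness function - here a trivial string comparison, but in the scheme discussed above it would be a full behavioral evaluation of each candidate program.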
      • asdff 1 hour ago
        Exactly. As compute increases these algorithms will only get more compelling. You can test and evaluate so many more ideas than any human inventors can generate on their own.
      • HPsquared 1 hour ago
        I suppose you could generate prompts from "genes" somehow.
    • Dilettante_ 1 hour ago
      We would of course need to specify the behaviors to test for. The more precisely we specify these behaviors, the more complexly our end product would be able to behave. We might invent a formal language for writing down these behaviors, and some people might be better at thinking about what kind of tests would need to be written to coax a certain type of end result out of the machine.

      But that's future music, forgive a young man for letting his imagination run wild! ;)

      • asdff 29 minutes ago
        If we consider other fields such as biology, behaviors of interest are specified, but I'm not sure a formal language is currently being used per se. Data are evaluated on dimensional terms that could be either quantitative or qualitative. Meta-analysis of some sort might be used to reduce dimensionality to some degree, but that usually happens owing to a lack of power for higher-resolution models.

        One big advantage of this future random-walk paradigm is that you would not be bound by the real-world constraints of collecting biological samples. Datasets could be made arbitrarily large, and the cost of doing so will fall inversely with compute gains.

    • asdev 1 hour ago
      All fun and games until you need to debug the rat's nest you've been continually building. I am actually shocked that people who have coded before have been one-shotted into believing this.
      • asdff 1 hour ago
        If a bug rears its head it can be dealt with. Again, this is essentially already practiced by humans through breeding programs. Bugs have come up, such as deleterious traits, and we have either engineered solutions to get around them or worked to purge the alleles behind the traits from populations under study. Nothing is ever bug free. The question is if the bugs are show stoppers or not. And random walk iteration can produce more solutions that might get around those bugs.
    • Fire-Dragon-DoL 1 hour ago
      How do you handle the larger amounts of tests? I did this but my PRs are larger because more tests are needed
      • asdff 1 hour ago
        I'm not sure. My thinking is this will occur on the scale of the next several hundred years, not the next few.
    • themafia 1 hour ago
      > You generate random code,

      Code derived from a training set is not at all "random."

      • asdff 1 hour ago
        I'm not talking about a training set or a present LLM - a truly random binary generator left to its own devices. Let's evaluate what spits out of that, iterated several trillion times over with the massive compute capability we will have. I am not thinking of this happening in the next couple of years, but in the next couple of centuries.
        • themafia 58 minutes ago
          > Truly random binary generator left to its own device.

          This has been tried. It tends to overfit to unseen test environment conditions. You will not produce what you intend to produce.

          > the massive compute capability we will have

          Just burn carbon in the hopes something magical will happen. This is the mentality of a cargo cult.

          • asdff 55 minutes ago
            We also burn carbon to feed the brain. Compute is what is increasing in capability by orders of magnitude within our own lifetimes; brainpower is not. If you want future capabilities and technological advancement to occur at the fastest pace possible, eventually we have to leave the slow ape brain behind in favor of sources of compute that can evaluate functions several orders of magnitude faster.
    • dns_snek 1 hour ago
      And how exactly do you foresee probabilistic systems working out in real life? Nobody wants software that seldom does what they expect, and which tends to trend toward desirable behavior over time (where "desirable" behavior is determined by the sum of global feedback and revenue/profit of the company producing it).

      Today you send some money to your spouse but it's received by another person with the same name. Tomorrow you order food but your order gets mixed up with someone else's.

      Tough luck, the system is probabilistic and you can only hope that the evolutionary pressures influence the behavior to change in desirable ways. This fantasy is a delusion.

      • asdff 57 minutes ago
        I think you misunderstand. Once established a function found through random walk is no different than a function found in any other way. If it works it works, if it doesn't it doesn't.
    • bossyTeacher 1 hour ago
    > You generate random code, purely random in raw machine readable binary, and simply evaluate a behavior. Most random generated code will not work. Some, however, will work. And within that working code, some will be far faster and this is the code that is used.

      Humans are expensive, but this approach seems incredibly inefficient and expensive too. Even a junior can make steady progress implementing a function; with your approach, monkey-coding like that could take ages to write a single one. Estimates in software are already bad; they will get worse with your approach.

      • asdff 1 hour ago
        Today it might not work, given what a junior can do against the cost of compute for a random walk - but can you say the same about three centuries from now? We increase compute by the year, but our own brainpower does not increase on those terms. Estimates suggest we are actually losing brainpower over time.
    • formerly_proven 1 hour ago
      This is a recurring fantasy in LLM threads but makes little sense. Writing machine code is very difficult (even writing byte code for simple VMs is annoying and error-prone). Abstractions are beneficial and increase productivity (per human, per token). It makes essentially no sense to throw away seven decades of productivity increasing technologies to have neural nets punch cards again, and it's not going to happen unless tokens become unimaginably cheap.
      • asdff 1 hour ago
        Compute on this planet is always increasing. It makes no sense to stick with seven-decade-old paradigms from the time when we were simply transmuting mathematical proofs into computational functions. We should be exploring the void; we will have the compute for it. Randomness will take away the difficulty as we gain the compute to parse these random functions in reasonable time frames. The idea of limiting technological development to what our ape brain can conceive of on its own, on human biological timescales, is quite a shackle, honestly.
    • lenerdenator 1 hour ago
      I can tell you why this won't go this way:

      Customers.

      When you sell them a technological solution to their problem, they expect it to work. When it doesn't, someone needs to be responsible for it.

      Now, maybe I'm wrong, but I don't see any of the current AI leaders being like, "Yeah, you're right, this solution didn't meet your customer's needs, and we'll eat the resulting costs." They didn't get to be "thought leaders" in the current iteration of Silicon Valley by taking responsibility for things that got broken, not at all.

      So that means you will need to take responsibility for it, and how can you make that work as a business model? Well, you pay someone - a human - who knows what they're looking at to review at least some of the code that the AI generates.

      Will some of that be AI-aided? Of course. Can you make a lot of the guesswork go away by saying "use commonly-accepted design patterns" in your CLAUDE.md? Sure. But you'll still need someone to enforce it and take responsibility at the end of the day if it screws up.

      • asdff 54 minutes ago
        You are thinking in terms of the next few years, not the next few centuries. Plenty of software sold today fails to meet expectations, and no one eats the costs.
  • kelseyfrog 1 hour ago
    The "Council of models" is a good first step, but ultimately I found myself settling on an automated talent acquisition pipeline.

    I have a BIRTHING_POOL.md that combines the best AGENTS.md files and introduces random AI-generated mutations and deletions. The candidates are tested using take-home PRs which are reviewed by HR.md and TECH_MANAGER.md. TECH_MANAGER.md measures completion rate per token (effectiveness) and then sends the stack ranking of AGENT.mds to HR to manage the talent pool. If agent effectiveness drops low enough, we pull from the birthing pool and interview more candidates.

    The end result is that it effectively manages a wider range of agent talents and you don't get into these agent hive mind spirals you get if every worker has the same system prompt.

    • natpat 1 hour ago
      Is this satire? I can't tell any more.
      • NewJazz 1 hour ago
        I don't understand why people try so hard to anthropomorphize these tools and map them to human sociology...
  • stavros 1 hour ago
    I really love AI for lots of things, but when I'm reading a post, the AI aesthetic has started to grate. Articles all have the same "LLM" aesthetic, and I feel like I'm reading posts written by the same person.

    Sure, the information is all there, but the style just puts me off reading it. I really don't like how few authors have a voice any more, even if that voice is full of typos and grammatical errors.

  • marstall 1 hour ago
    I am still a Windsurf user. It has the quirk of deciding for itself on any given day whether to use ChatGPT 5.2 or Claude Opus 4.5 in Cascade (its agentic side panel). I've never noticed much of a difference; they are both amazing.

    I thought the difference must be in how Claude Code does the agentic stuff - reasoning with itself, looping until it finds an answer, etc. - but I have spent a fair amount of time with Claude Code now and found that agentic experience to be about the same between Cascade and Claude Code.

    What am I missing? (Serious question - I do have Claude Code FOMO like the OP.)

  • connectsnk 16 minutes ago
    Wouldn’t this kind of setup eat tokens at a very fast rate, such that even the Max plan is quickly overrun? Isn’t the viable workflow to use Claude Code to create just one pull request at a time, lightly review the code, and allow it to be merged?
  • hollowturtle 1 hour ago
    >> You no longer need to review the code.

    You also no longer need to work, earn money, have a life, read, study, or know anything about the world. This is pure fantasy; my brain farts hard when I read sentences like that.

    • p1esk 1 hour ago
      > You also no longer need to work, earn money, have a life, read, study, know anything about the world. This is pure fantasy

      This will be reality in 10-20 years

      • hollowturtle 1 hour ago
        It's already reality if you want it to be, today - and in 10-20 years the outcome will be the same: being homeless! And no, please, no UBI BS, thanks.
        • p1esk 21 minutes ago
          99.9% of today’s jobs will be fully automated in 20 years. What do you think will happen to all the unemployed population?
          • hollowturtle 18 minutes ago
            Hahahahaha. Please, can you advise on lottery numbers? I'd like to win a bunch of money before losing my job.
  • nebezb 1 hour ago
    > my experience from 5 years of coding with AI

    What AI have you been using for 5 years of coding?

    • NewJazz 59 minutes ago
      More importantly: what software of value have they produced in that time? I glanced around their site and just saw a bunch of teaching materials about AI.
    • mkozlows 1 hour ago
      GitHub Copilot was available in 2021, believe it or not. It was just autocomplete plus a chat window, but it seemed like a big deal at the time.
      • nebezb 19 minutes ago
        This caught me by surprise. Wow time flies.
    • ronsor 1 hour ago
      Probably the original GitHub Copilot
      • ashirviskas 1 hour ago
        It is only 4 years old
        • dkdcio 1 hour ago
          Technical preview in June 2021. I was using it for a bit before that as an internal employee, so they may have rounded up slightly - or they may also have been internal beta testers.

          side note, I’ve been trying to remember when it launched internally if anybody knows. I feel like it was pre-COVID, but that’s a long timeline from internal use to public preview

    • varispeed 1 hour ago
      If AI is a 10x multiplier, then that's 50 years' worth of development. Likely the OP has developed a UNIX-like operating system, is my guess.
      • louthy 1 hour ago
        And they have already retired.
      • ohyoutravel 1 hour ago
        I heard they created Plan10: An AI-files based Linux version.
    • ashirviskas 1 hour ago
      Keyboard autocomplete?
  • des429 1 hour ago
    This reads like a useful guide, not an answer to the question "why use Claude Code over Cursor" that the author poses at the beginning.
  • igorpcosta 1 hour ago
    I'd love to have you as a top user on autohand [.] ai/cli and interested in your experience with us.
  • pancsta 15 minutes ago
    > I was a top 0.01%

    wow

  • kayson 1 hour ago
    Do we really need to qualify our power user level down to 100ppm percentiles...?
    • Dilettante_ 1 hour ago
      Comparing digital penile volume is a time-honored tradition!
  • 9cb14c1ec0 1 hour ago
    Is it just me, or has Claude Code gotten really stupid in the last several days? I've been using it almost since it was publicly released, and the last several days it feels like it has reverted back 6 months. I was almost ready to start yolo-ing everything, and now it's doing weird hallucinations again and forgetting how to edit files. It used to go into plan mode automatically; now it won't unless I make it.
  • jrflowers 1 hour ago
    > This is a guide that combines:

    > 1. my experience from 5 years of coding with AI

    It is a testament to the power of this technology that the author has managed to fit five years of coding with AI in between 2023 and now

  • dispersed 1 hour ago
    [dead]