Ask HN: What's Hacker News's vision for the future?

It's the weekend. Let's do some brainstorming. What's at the forefront of your mind with regard to the human predicament, and how we move forward? It seems pretty much everyone has gone insane, or at least that's what the narrative portrays. Why are companies, governments, and other organizations doing such stupid things?

So we've made machines that can appear to "think" by articulating linguistic constructs that make sense. In fact, there appears to be actual semantic and logical structure embedded within the neural networks of LLMs. OK, let's keep our heads on straight, keep in mind the mathematics involved, chill out, and just think about what it is that we actually want (and what "we" are, for that matter).

Are we going to someday transcend language? I'm aware that I'm typing this message out to you, using language, with the hope that this form of communication will become obsolete. But what will that look like? I know that human language in its current form is severely limited. If we're to design some future communication protocol and methodology, let's contemplate what types of things even need to be communicated. For example, what is "the news" nowadays? Could it be simply a video stream of the sunrise? Well, it would take a long time to get there. In the meantime, it should probably be something like "yeah, it's another day, and humans are doing the work necessary to put order to chaos and make things better for everyone".

From a more concrete perspective, I'd say that we've reached the point where we can transcend the current idea of what "work" is. We can see beyond the limited view of doing things for monetary profit, and see that certain systems could be implemented much better with certain forms of entity organization and resource allocation in play. What is a company if it's run by a computer? What does it serve? I mean, it sounds silly to suggest that some big companies with overlapping domains of operation should consolidate their operations, but we all know that that's where things should be headed in many cases.

So yeah, what's your vision for the future?

18 points | by gooob 7 hours ago

8 comments

  • keepamovin 6 hours ago
    It doesn't really matter what the people here say. The world is moving in a certain direction, and the people here are not deciding that direction, tho they may be unhappy with it. Online opinion is just a wave, noise.

    If you care about the future, what people say on the Internet is not worth your time. Just make it happen.

  • Kokouane 3 hours ago
    Might be a crazy statement, but I believe Meta is on the right track. Right now, I think most people can clearly see that more and more people are getting addicted to the little device in their hand.

    The "Metaverse" is going to be a more interactive, immersive extension of that device. I also believe that Meta's superintelligence team isn't necessarily about achieving AGI, but rather, creating personable, empathetic LLMs. People are so lonely and seeking friendship that this will be a very big reason to purchase their devices and get tapped into this world.

    • sMarsIntruder 2 hours ago
      The observation about smartphone addiction is certainly valid, with studies showing average daily screen time exceeding 7 hours for many users, driven by algorithmic engagement.

      But while the Metaverse could theoretically extend that immersion, historical execution suggests caution: initiatives like Horizon Worlds have struggled with user adoption and technical hurdles, indicating it might not evolve from current devices as seamlessly as envisioned.

      On the superintelligence front, focusing on empathetic LLMs for companionship taps into real societal issues like rising loneliness (e.g. reports from the WHO highlight it as a global health threat). But this approach risks exacerbating dependency rather than alleviating it, potentially creating echo chambers of artificial interaction in place of genuine human bonds.

      So yes, Meta shows some promise in these areas, but success is anything but assured. Their previous massive investments have largely failed to deliver the transformative changes they hyped.

  • mikewarot 7 hours ago
    My vision for the future includes greatly reducing the power requirements for AI by rethinking computing using first principles thinking. Every attempt at this so far has stopped short of ditching the CPU or RAM. FPGAs got close, but they went insane with switching fabrics and special logic blocks. Now they've added RAM, which is just wrong.

    Edit/Append: I've had this idea [1] forever (since the 1990s, possibly earlier... I don't have notes going that far back). Imagine the simplest possible compute element, the lookup table, arranged in a grid. Architectural optimizations I've pondered over time led me to a 4-bit-in, 4-bit-out lookup table, with latches on all outputs and a clock signal. This prevents race conditions by slowing things down. The gain is that you can now just clock a vast 2D array of these cells with a two-phase clock (like the colors on a chessboard) and it's a universal computer, Turing complete, but one you can actually think about without your brain melting down.
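
    To make that concrete, here's a rough Python sketch of how such a grid could be simulated. Everything in it (the names, the toy 8x8 size, the torus-style edge wrapping, the random LUT contents) is purely illustrative, not a real BitGrid implementation; it just shows the cell-plus-two-phase-clock idea:

      import random

      W, H = 8, 8  # toy grid size (even, so the chessboard coloring wraps cleanly)

      # Each cell is a 4-bit-in / 4-bit-out lookup table: one input bit arrives from
      # each neighbor (N, E, S, W) and one output bit is driven toward each neighbor.
      # Random contents here, just so the sketch runs; a real design would program them.
      luts = [[[random.randrange(16) for _ in range(16)] for _ in range(W)] for _ in range(H)]

      # Latched output bits per cell: [toward N, toward E, toward S, toward W].
      latches = [[[0, 0, 0, 0] for _ in range(W)] for _ in range(H)]

      def inputs(x, y):
          """The 4-bit input a cell sees: whatever each neighbor's latch drives toward it."""
          n = latches[(y - 1) % H][x][2]  # north neighbor's south-facing bit
          e = latches[y][(x + 1) % W][3]  # east neighbor's west-facing bit
          s = latches[(y + 1) % H][x][0]  # south neighbor's north-facing bit
          w = latches[y][(x - 1) % W][1]  # west neighbor's east-facing bit
          return (n << 3) | (e << 2) | (s << 1) | w

      def tick(phase):
          """One half of the two-phase clock: update only the cells whose chessboard
          color matches `phase`, so every updating cell reads only from neighbors
          that are holding their latched values steady this phase."""
          updates = []
          for y in range(H):
              for x in range(W):
                  if (x + y) % 2 == phase:
                      out = luts[y][x][inputs(x, y)]
                      updates.append((x, y, [(out >> 3) & 1, (out >> 2) & 1, (out >> 1) & 1, out & 1]))
          for x, y, bits in updates:  # latch the new outputs
              latches[y][x] = bits

      for step in range(16):
          tick(step % 2)

    The point of the two-phase clock shows up directly in the sketch: an updating cell only ever reads latches belonging to cells of the other color, which are holding still, so there's nothing to race against.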

    The problem (for me) has always been programming it and getting a chip made. Thanks to the latest "vibe coding" stuff, I've gotten out of analysis paralysis, and have some things cooking on the software front. The other part is addressed by TinyTapeout, so I'll be able to get a very small chip made for a few hundred dollars.

    Because the cells are only connected to their neighbors, the runs are all short and low-capacitance, so you can really, REALLY crank up the clock rates, or save a lot of power. Because the grid is uniform, you won't have the hours- or days-long "routing" problems that you have with FPGAs.

    If my estimates are right, it will cut the power requirements for LLM computing by 95%.

    [1] Every mention of BitGrid here on HN - https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...

    • mindcrime 7 hours ago
      > greatly reducing the power requirements for AI by rethinking computing using first principles thinking.

      I feel some affinity for this statement! Although what I've said in the past was more along the lines of "rethinking our approach to (artificial) neural networks from first principles" and not necessarily the foundations of computing itself. That said, I wouldn't reject your position out of hand at all!

      It certainly feels like we've reached a point where there may be an opportunity to stop, take stock, look back, revisit some things, and maybe do a bit of a reset in some areas.

    • gooob 7 hours ago
      interesting. can you expand on this?
  • Spooky23 5 hours ago
    I think AI is going to help accelerate the standardization of processes and make bigger businesses more efficient and profitable. Smaller firms are toast as the behemoths diversify once their growth is capped.

    End of the day, I see it as a repeat of the 1920s, good and bad. Technology will drive discontent until we figure out how to tame it.

  • mindcrime 7 hours ago
    > What's at the forefront of your mind with regard to the human predicament, and how we move forward?

    > what's your vision for the future?

    Honestly, I consider those two pretty different questions. At the very least, I'd approach them very differently in terms of time-scale. What's "top of mind" for me is more about the short-term threats I perceive to our way of life, whereas my "vision for the future" is - to my way of thinking - more about how I'd like things to be in some indeterminate future (that might never arrive, or might arrive long after my passing).

    To the first question then: what's on my mind?

    1. The rise of authoritarianism and right-wing populism, both in the US and across the world.

    2. The increasing capabilities of artificial intelligence systems, and the specter of continued advances exacerbating existing problems of unequal wealth / power imbalances / injustice / etc.

    Combine (1) and (2) and you have quite a toxic stew on your hands in the worst case. Now I'm not necessarily predicting the worst case, but I wouldn't bet money that I couldn't afford to lose against it either. So worst case, we wind up in a prototypical cyberpunk dystopia, or something close to it. Only probably less pleasant than the dystopias we are familiar with from fiction.

    And even if we don't wind up in a straight-up "cyberpunk dystopia", one has to wonder what's going to happen if fears of AI replacing large numbers of white-collar jobs come true. And note that that doesn't have to happen tomorrow, or next year, or 5 years from now, or whatever. If it happens 15 years, or 25 years, or 50 years from now, the impact could still be profound. So even for those of you who are dismissive of the capabilities of current AI systems, I encourage you to think about the big picture and run some mental simulations with different rates of change and different time scales.

  • AdieuToLogic 4 hours ago
    > So yeah, what's your vision for the future?

    The hopeful version:

      People get their heads out of their phones long enough to
      realize life is more than the next dopamine hit.
    
    The dystopian version:

      The logical conclusion of what is detailed in the
      paragraphs above.
    
      Where being addicted to a handheld device is not
      only normal, but expected.
    
      Where "what it is that we actually want" is not an
      individual choice, but a corporate one.
    
      Where the idea of technofascism is introduced as
      "silly to suggest" and then normalized as "but we
      all know that that's where things should be headed
      in many cases." (see above)
  • spcebar 3 hours ago
    The world gets hotter, political control of nations continues to flip back and forth between conservative and progressive ideologies, Koreas don't unify, water shortages intensify, year of the Linux desktop.