I think a more accurate and more useful framing is:
Game theory is inevitable.
Because game theory is just math, the study of how independent actors react to incentives.
The specific examples called out here may or may not be inevitable. It's true that the future is unknowable, but it's also true that the future is made up of 8B+ independent actors and that they're going to react to incentives. It's also true that you, personally, are just one of those 8B+ people and your influence on the remaining 7.999999999B people, most of whom don't know you exist, is fairly limited.
If you think carefully about those incentives, you actually do have a number of significant leverage points with which to change the future. Many of those incentives are crafted out of information and trust, people's beliefs about what their own lives are going to look like in the future if they take certain actions, and if you can shape those beliefs and that information flow, you alter the incentives. But you need to think very carefully, on the level of individual humans and how they'll respond to changes, to get the outcomes you want.
The statement "Game theory is inevitable. Because game theory is just math, the study of how independent actors react to incentives." implies that the "actors" are humans. But that's not what game theory assumes.
Game theory just provides a mathematical framework to analyze outcomes of decisions when parts of the system have different goals. Game theory does not claim to predict human behavior (humans make mistakes, are driven by emotion and often have goals outside the "game" in question). Thus game theory is NOT inevitable.
Yes, game theory is not a predictive model but an explanatory/general one. Additionally, not everything is a game; just as in statistics, not everything has a probability curve. They can be applied speculatively to great effect, but they are ultimately abstract models.
You can use it for either predictive or explanatory purposes. In the early ('00s) years of Google it was common to diagram out the incentives of all the market participants; this led to innovations such as the use of the second-price VCG auction [1] for ad sales that now make over a third of a trillion dollars per year.
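The strategic heart of that design fits in a few lines: in a second-price auction the winner pays the runner-up's bid, which makes truthful bidding a dominant strategy. A toy sketch (function and bidder names are mine; real ad auctions layer on reserve prices, quality scores, and multi-slot generalizations):

```python
# Toy sealed-bid second-price (Vickrey) auction, single slot.

def second_price_auction(bids):
    """bids: dict of bidder -> bid. Returns (winner, price_paid).

    The winner pays the *second*-highest bid, which is what makes
    truthful bidding a dominant strategy: shading your bid can only
    lose you auctions you'd profit from; it never lowers your price.
    """
    if len(bids) < 2:
        raise ValueError("need at least two bidders")
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1]  # second-highest bid sets the price
    return winner, price

winner, price = second_price_auction({"a": 3.10, "b": 2.40, "c": 1.75})
# winner == "a", price == 2.40
```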
> It’s important to note that our move to a single unified first price auction only impacts display and video inventory sold via Ad Manager. This change will have no impact on auctions for ads on Google Search, AdSense for Search, YouTube, and other Google properties, and advertisers using Google Ads or Display & Video 360 do not need to take any action.
1) Identify coordination failures that lock us into bad equilibria, e.g. it's impossible to defect from the online ads model without losing access to a valuable social graph
2) Look for leverage that rewrites the payoffs for a coalition rather than for one individual: right-to-repair laws, open protocols, interoperable standards, fiduciary duty, reputation systems, etc.
3) Accept that heroic non-participation is not enough. You must engineer a new Schelling point[1] that makes a better alternative the obvious move for a self-interested majority
TLDR, think in terms of the algebra of incentives, not in terms of squeaky wheelism and moral exhortation
As a recent example, Jon Haidt seems to have used this kind of tactic to pull off a coup with the whole kids/smartphones/social media thing [0]. Everybody knew social media tech was corrosive and soul-rotting, but nobody could move individually to stand up against its “inevitability.”
Individual families felt like, if they took away or postponed their kids’ phones, their kid would be left out and ostracized—which was probably true as long as all the other kids had them. And if a group of families or a school wanted to coordinate on something different, they’d have to 1) be ok with seeming “backwards,” and 2) squabble about how specifically to operationalize the idea.
Haidt framed it as “four simple norms,” which offered specific new Schelling points for families to use as concrete alternatives to “it’s inevitable.” And in shockingly little time, it’s at the point where 26 states have enshrined the ideas into legislation [1].
AI slop is self-limiting. The new game-theoretic equilibrium is that nobody trusts anything they read online, at which point it will no longer be profitable to put AI slop out there because nobody will read it.
Unfortunately, it's going to destroy the Internet (and possibly society) in the process.
That’s my sense too. I wonder where the new foci are starting to form, as far as where people will look to serve the purposes that this slop’s infiltrating. What the inevitable alternatives to the New Inevitable start to look like.
At the risk of dorm-room philosophizing: My instincts are all situated in the past, and I don’t know whether that’s my failure of imagination or whether it’s where everybody else is ending up too.
Do the new information-gathering Schelling points look like the past—trust in specific individual thinkers, words’ age as a signal of their reliability, private-first discussions, web of trust, known-human-edited corpora, apprenticeship, personal practice and experience?
Is there instead no meaningful replacement, and the future looks like people’s “real” lives shrinking back to human scale? Does our Tower of Babel just collapse for a while with no real replacement in sight? Was it all much more illusory than it felt all along, and the slop is just forcing us to see that more clearly?
Did the Cronkite-era television-to-cable transition feel this way to people before us?
> AI slop is self-limiting. The new game-theoretic equilibrium is that nobody trusts anything they read online, at which point it will no longer be profitable to put AI slop out there because nobody will read it.
AI slop, unfortunately, is just starting.
It is true that nobody trusts anything online, especially Big Media, given the backlash against it over the last decade or so. But that's exactly where AI slop is coming in. Note the crazier and crazier conspiracy theories that are taking hold all around, and not just in the MAGA-verse. And there's plenty of takers for AI slop, both consumers of it and producers of it.
And there's plenty of profit all around. (see crypto, NFTs, and all manners of grifting)
So no, I don't think "nobody will read it". It's more like "everybody's reading it".
But I do agree on the denouement... it's destroying the internet and society along with it
'Defect' only applies to prisoner's-dilemma-type problems. That is just one very limited class of problem, and I would argue not very relevant to discussing AI inevitability.
Game theory is still inevitable. Its application to humans may be non-obvious.
In particular, the "games" can operate on the level of non-human actors like genes, or memes, or dollars. Several fields generate much more accurate conclusions when you detach yourself from an anthropocentric viewpoint, e.g. evolutionary biology was revolutionized by the idea of genes as selfish actors rather than humans trying to pass along their genes; in particular, it explains such concepts as death, sexual selection, and viruses. Capitalism and bureaucracy both make a lot more sense when you give up the idea of them existing for human betterment and instead take the perspective of them existing simply for the purpose of existing (i.e. those organizations that survive are, well, those organizations that survive; there is nothing morally good or bad about them; the filter they passed is simply that they did not go bankrupt or get disbanded).
But underneath those, game theory is still fundamental. You can use it to analyze the incentives and selection pressures on the system, whether they are at sub-human (e.g. viral, genetic, molecular), human, or super-human (memetic, capitalist, organizational, bureaucratic, or civilizational) scales.
Perhaps you don't intend this, but you seem to imply that game theory's inevitability leads to the inevitability of many of the things the author claims aren't inevitable.
To me, this inevitability only is guaranteed if we assume a framing of non-cooperative game theory with idealized self-interested actors. I think cooperative game theory[1] better models the dynamics of the real world. More important than thinking on the level of individual humans is thinking about the coalitions that have a common interest to resist abusive technology.
Both cooperative and non-cooperative games are relevant. Actually, I think that one of the most intriguing parts of game theory is understanding under what conditions a non-cooperative game becomes a cooperative one [1] [2].
The really simple finding is that when you have both repetition and reputation, cooperation arises naturally. Because now you've changed the payoff matrix; instead of playing a single game with the possibility of defection without consequences, defection now cuts you off from payoffs in the future. All you need is repeated interaction and the ability to remember when you've been screwed, or learn when your counterparty has screwed others.
This has been super relevant for career management, e.g. you do much better in orgs where the management chain has been intact for years, because they have both the ability and the incentive to keep people loyal to them and ensure they cooperate with each other.
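The repetition-plus-reputation finding above is easy to demonstrate in code. A minimal sketch using the conventional prisoner's-dilemma payoffs (T=5, R=3, P=1, S=0; the strategy names are standard, everything else is my own illustration):

```python
# Iterated prisoner's dilemma. "C" = cooperate, "D" = defect.

PAYOFF = {  # (my move, their move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(opp_history):
    """Cooperate first, then mirror the opponent's last move."""
    return opp_history[-1] if opp_history else "C"

def always_defect(opp_history):
    return "D"

def play(strat_a, strat_b, rounds=100):
    moves_a, moves_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a = strat_a(moves_b)  # each player sees only the opponent's history
        b = strat_b(moves_a)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        moves_a.append(a)
        moves_b.append(b)
    return score_a, score_b

# Repetition plus memory changes the payoff calculus: defection now
# forfeits all future cooperation payoffs.
print(play(tit_for_tat, tit_for_tat))      # (300, 300)
print(play(always_defect, always_defect))  # (100, 100)
```

Against `always_defect`, tit-for-tat loses only the first round and then retaliates forever, so a lone defector gains almost nothing while mutual cooperators compound their payoff round after round.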
>I think cooperative game theory[1] better models the dynamics of the real world.
If cooperative coalitions to resist undesirable abusive technology models the real world better, why is the world getting more ads? (E.g. One of the author's bullet points was, "Ads are not inevitable.")
Currently in the real world...
- Ad frequency goes up: more ad interruptions in TV shows, native ads embedded in podcasts, sponsor segments in YouTube vids, etc.
- Ad space goes up: ads on refrigerator screens, gas pump touch screens, car infotainment systems, smart TVs, Google Search results, the ChatGPT UI, computer-generated virtual ads overlaid on courts and stadiums in sports broadcasts, etc.
What is the cooperative coalition that makes "ads not inevitable"?
I'll try and tackle this one. I think the world is getting more ads because Silicon Valley and its Anxiety Economy are putting a thumb on the scale.
For the entirety of the 2010s we had SaaS startups invading every space of software, for a healthy mix of better and worse, and all of them (and a number even today) are running the exact same playbook, boiled down to broad terms: burn investor money to build a massive network-effected platform, and then monetize via attention (some combo of ads, user data, audience reach/targeting). The problem is this: despite all these firms collecting all this data for years and years (and tanking their public trust by both abusing it and leaking it constantly), we really still only have ads. We have specifically targeted ads, down to downright abusive metrics if you're so inclined and lack a soul or sense of ethics, but they are and remain ads. And each time we get a better-targeted ad, the ones that are less targeted go down in value. And on and on it has gone.
Now, don't misunderstand, a bunch of these platforms are still perfectly fine business-wise because they simply show an inexpressible, unimaginable number of ads, and even if they earn shit on each one, if you earn a shit amount of money a trillion times, you'll have billions of dollars. However it has meant that the Internet has calcified into those monolith platforms that can operate that way (Facebook, Instagram, Google, the usuals) and everyone else either gets bought by them or they die. There's no middle-ground.
All of that to say: yes, on balance, we have more ads. However the advertising industry in itself has never been in worse shape. It's now dominated by those massive tech companies to an insane degree. Billboards and other such ads, which were once commonplace are now solely the domain of ambulance chasing lawyers and car dealerships. TV ads are no better, production value has tanked, they look cheaper and shittier than ever, and the products are solely geared to the boomers because they're the only ones still watching broadcast TV. Hell many are straight up shitty VHS replays of ads I saw in the fucking 90's, it's wild. We're now seeing AI video and audio dominate there too.
And going back to tech, the platforms stuff more ads into their products than ever and yet, they're less effective than ever. A lot of younger folks I know don't even bother with an ad-blocker, not because they like them, but simply because they've been scrolling past ads since they were shitting in diapers. It's just the background wallpaper of the Internet to them, and that sounds (and is) dystopian, but the problem is nobody notices the background wallpaper, which means despite the saturation, ads get less attention than ever before. And worse still, the folks who don't block cost those ad companies impressions and resources to serve ads that are being ignored.
So, to bring this back around: the coalition that makes ads "inevitable" isn’t consumers or creators, it's investors and platforms locked into the same anxiety‑economy business model. Cooperative resistance exists (ad‑blockers, subscription models, cultural fatigue), but it’s dwarfed by the sheer scale of capital propping up attention‑monetization. That’s why we see more ads even as they get less effective.
> Billboards and other such ads, which were once commonplace are now solely the domain of ambulance chasing lawyers and car dealerships. TV ads are no better, production value has tanked, they look cheaper and shittier than ever, and the products are solely geared to the boomers because they're the only ones still watching broadcast TV.
This actually strikes me as a good thing. The more we can get big dumb ads out of meatspace and confine everything to devices, the better, in my opinion (though once they figure out targeted ads in public that could suck).
I know this is an unpopular opinion here, but I get a lot more value out of targeted social media ads than I ever did billboards or TV commercials. They actually...show me niche things that are relevant to my interests, that I didn't know about. It's much closer to the underlying real value of advertising than the Coca-Cola billboard model is.
> A lot of younger folks I know don't even bother with an ad-blocker, not because they like them, but simply because they've been scrolling past ads since they were shitting in diapers. It's just the background wallpaper of the Internet to them, and that sounds (and is) dystopian...
Also this. It's not dystopian. It's genuinely a better experience than sitting through a single commercial break of a TV show in the 90s (of which I'm sure we all sat through thousands). They blend in. They are easily skippable, they don't dominate near as much of your attention. It's no worse than most of the other stuff competing for your attention. It doesn't seem that difficult to me to navigate a world with background ad radiation. But maybe I'm just a sucker.
> I know this is an unpopular opinion here, but I get a lot more value out of targeted social media ads than I ever did billboards or TV commercials. They actually...show me niche things that are relevant to my interests, that I didn't know about. It's much closer to the underlying real value of advertising than the Coca-Cola billboard model is.
You are describing two different advertising strategies that have differing goals. The billboard/TV commercial is a blanket type that serves to foster a default in viewers' minds when they consider a particular want/need. Meanwhile, the targeted stuff tries to identify a need you might be likely to have and present something highly specific that could trigger or refine that interest.
Yes, I'm saying, as a consumer, I much prefer the latter, and I get more value from it. And it's only enabled by modern individualized data collection.
> This actually strikes me as a good thing. The more we can get big dumb ads out of meatspace and confine everything to devices, the better, in my opinion (though once they figure out targeted ads in public that could suck).
I mean the issue is the billboards aren't going away, they're just costing less and less which means you get ads for shittier products (see aforementioned lawyers, reverse mortgages and other financial scams, dick pills, etc.). If they were getting taken down I'd heartily agree with you.
> I know this is an unpopular opinion here, but I get a lot more value out of targeted social media ads than I ever did billboards or TV commercials. They actually...show me niche things that are relevant to my interests, that I didn't know about. It's much closer to the underlying real value of advertising than the Coca-Cola billboard model is.
Perhaps they work for you. I still largely get the experience that after I buy a toilet seat for example on Amazon, Amazon then regularly shows me ads for additional toilet seats, as though I've taken up throne collecting as a hobby or something.
> Also this. It's not dystopian. It's genuinely a better experience than sitting through a single commercial break of a TV show in the 90s (of which I'm sure we all sat through thousands). They blend in. They are easily skippable, they don't dominate near as much of your attention. It's no worse than most of the other stuff competing for your attention.
I mean, I personally loathe the way my attention is constantly being redirected, or attempted to be, by loud inane bullshit. I tolerate it, of course, what other option does one have, but I certainly wouldn't call it a good or healthy thing. I think our society would leap forward 20 years if we pushed the entirety of ad-tech into the ocean.
> If they were getting taken down I'd heartily agree with you.
At some point it won't be worth it to maintain them, hopefully.
> I still largely get the experience that after I buy a toilet seat for example on Amazon, Amazon then regularly shows me ads for additional toilet seats, as though I've taken up throne collecting as a hobby or something.
This is definitely a thing, I feel like it's getting better though and stuff like that drops off pretty quickly. But it still doesn't bother me nearly as much as watching the same 30 second TV commercial for the 100th time, I just swipe or scroll past, and overall it's still much better than seeing the lowest common denominator stuff.
> I mean, I personally loathe the way my attention is constantly being redirected, or attempted to be, by loud inane bullshit. I tolerate it, of course, what other option does one have, but I certainly wouldn't call it a good or healthy thing. I think our society would leap forward 20 years if we pushed the entirety of ad-tech into the ocean.
I hear you, the attention economy is a brave new world, and there will probably be some course corrections. I don't think ads are really the problem though, in some ways everything vying for your attention is an ad now. Through technology we democratized the means of information distribution, and I would rather have it this way than having four TV channels, but there are some growing pains for sure.
> This is definitely a thing, I feel like it's getting better though and stuff like that drops off pretty quickly. But it still doesn't bother me nearly as much as watching the same 30 second TV commercial for the 100th time, I just swipe or scroll past, and overall it's still much better than seeing the lowest common denominator stuff.
I'll second the absolute shit out of that. My only exposure to TV anymore is hotels and I cannot fathom why anyone would spend ANY money on it as a service, let alone what I know cable costs. The ads are so LOUD now and they repeat the same like 4 or 5 of them over and over. Last business trip I could lipsync a Wendy's ad like I'd done it my whole life.
> I hear you, the attention economy is a brave new world, and there will probably be some course corrections. I don't think ads are really the problem though, in some ways everything vying for your attention is an ad now.
See I don't like the term attention economy, I vastly prefer anxiety economy. An attention economy implies at least some kind of give and take, where a user's attention is rewarded rather than simply their lack of it is attempted to be punished. The constant fomenting of FOMO and blatant use of psychological torments does not an amicable relationship make. It makes it feel like a constant back and forth of blows, disabling notifications, muting hashtags, unsubscribing from emails because you simply can't stand the NOISE anymore.
I'll just take the very first example on the list, Internet-enabled beds.
Absolutely a cooperative game: nobody was forced to build them, nobody was forced to finance them, nobody was forced to buy them. These were all willing choices, all going in the same direction. (Same goes for many of the other examples.)
There's a slight caveat here that you are sometimes forced to effectively buy and use internet-connected smart devices if you live in rented housing and the landlord of your unit provides it. This is probably not an issue for an internet-connected bed, because conventionally a bed isn't something a landlord provides, but you might get forced into using a smart fridge, since that's typically a landlord-provided item.
I lived in a building some years ago where the landlord bragged about their Google Nest thermostat as an apartment amenity - I deliberately never connected it to my wifi while I lived there (and more modern smart devices connect to ambient cell phone networks in order to defeat this attack). In the building I currently live in, there are a bunch of elevators and locks that can be controlled by a smartphone app (so, something is gonna break when AWS goes down). I noticed this when I was initially viewing the apartment and considered it a downside - and ultimately chose to move there anyway, because every rental unit has downsides and ultimately you have to pick a set of compromises you can live with.
I view this as mostly a problem of housing scarcity - if housing units are abundant, it's easier for a person to buy their own home and then not put internet-managed smart furniture in it; or at least have more leverage against landlords. But the region I live in is unfortunately housing-constrained.
Game theory is not inevitable, neither is math. Both are attempts to understand the world around us and predict what is likely to happen next given a certain context.
Weather predictions are just math, for example, and they are always wrong to some degree.
Because the models aren't sophisticated enough (yet). There's no voodoo here.
I'm always surprised how many 'logical' tech people shy away from simple determinism, given how obvious a deterministic universe becomes the more time you spend in computer science, and seem to insist there's some sort of metaphysical influence out there somewhere we'll never understand. There's not.
Math is almost the definition of inevitability. Logic doubly so.
Once there's a sophisticated enough human model to decipher our myriad of idiosyncrasies, we will all be relentlessly manipulated, because it is human nature to manipulate others. That future is absolutely inevitable.
Might as well fall into the abyss with open arms and a smile.
>Because the models aren't sophisticated enough (yet). There's no voodoo here.
Idk if that's true.
Navier–Stokes may yet be proven Turing-undecidable, meaning fluid dynamics are chaotic enough that we can never completely forecast them no matter how good our measurement is.
Inside the model, the Navier–Stokes equations have at least one positive Lyapunov exponent. No quantum computer can outrun an exponential once the exponent is positive.
And even if we could measure every molecule with infinitesimal resolution, the atmosphere is an open system injecting randomness faster than we can assimilate it. Probability densities shred into fractal filaments (the butterfly effect), making pointwise prediction meaningless beyond the Lyapunov horizon.
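The Lyapunov-horizon effect is easy to see in a toy system. A sketch using the logistic map at r=4, a standard chaotic benchmark (not Navier–Stokes; the starting values here are arbitrary):

```python
# Sensitive dependence in miniature: the logistic map x -> r*x*(1-x)
# at r=4 has a positive Lyapunov exponent (ln 2), so two trajectories
# starting a hair apart decorrelate completely within a few dozen steps.

def logistic(x, r=4.0):
    return r * x * (1 - x)

x, y = 0.2, 0.2 + 1e-10  # same start, plus a tiny "measurement error"
gap = 0.0
for step in range(100):
    x, y = logistic(x), logistic(y)
    gap = max(gap, abs(x - y))

print(f"max separation over 100 steps: {gap:.3f}")  # order 1, not 1e-10
```

With exponent ln 2, a 1e-10 error doubles every step and reaches order one around step 33; no amount of compute fixes that, only better initial measurement, and each extra digit of precision buys only a few more steps of forecast.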
> I'm always surprised how many 'logical' tech people shy away from simple determinism, given how obvious a deterministic universe becomes the more time you spend in computer science, and seem to insist there's some sort of metaphysical influence out there somewhere we'll never understand. There's not.
You might be conflating determinism with causality. Determinism is a metaphysical stance too because it asserts absence of free will.
Regardless of the philosophical nuance between the two, you are implicitly taking the vantage point of "god" or Laplace's Demon: infinite knowledge AND infinite computability based on that knowledge.
Tech people ought to know that we can't compute our way out of combinatorial explosion. We can't even solve a simple 8x8 game like chess algorithmically. We are bound by framing choices, and therefore our models will never be a lossless, unbiased compression of reality. Asserting otherwise is a metaphysical stance, implicitly claiming human agency can sum up to a "godlike", totalizing compute.
In sum, models will never be sophisticated enough, and claiming otherwise has always ended up as a form of totalitarianism, the willful assertion of one's favorite "framing", which has inflicted a lot of pain in the past. What we need is computational humility. One good thing about tech interviews is that they teach people the resource complexity of computation.
It's funny because a central tenet of quantum mechanics, that I find deeply frustrating, is "No determinism, sorry."
So even as you chastise people for shying away from logically concluding the obvious, you're trusting your intuition over the scientific consensus. Which is fine, I've absolutely read theories or claims about quantum mechanics and said "Bullshit," safe in the knowledge that my belief or disbelief won't help or hinder scientific advancement or the operation of the universe, but I'd avoid being so publicly smug about it if I were you.
But the world is not deterministic, inherently so. We know it's probabilistic at least at small enough scales. Most hidden-variable theories have been disproven, and to the best of our current understanding the laws of the physical universe are probabilistic in nature (i.e. the Standard Model). So while we can probably come up with a very good probabilistic model of things that can happen, there is no perfect prediction; or rather, there cannot be.
There is strong reason to expect evolution to have produced a control system that is complex and changing for this very reason: so it can't get easily gamed (and eaten).
If you start studying basically any field that isn't computer science you will in fact discover that the world is rife with randomness, and that the dreams of a Laplace or Bentham are probably unrealizable, even if we can get extremely close (but of course, if you constrain behavior in advance through laws and restraints, you've already made the job significantly easier).
Thinking that reality runs like a clock is literally a centuries outdated view of reality.
I think it's hubris to believe that you can formulate the correct game-theoretic model to make significant statements about what is and is not inevitable.
I guess, but there are significant differences between the laws of physics and a game-theoretic description of human behavior. Fundamentally, you cannot game-theoretically predict the future without a model of the participants and, as you perhaps have noticed, there is no single model for the behavior of human beings because, fundamentally, "human beings" is an abstraction covering ~8 billion distinct globs of cells with different genes, gene expressions, cultures, personal experiences, etc.
As a physicist I think people are more sure about what an electron is, for example, than they should be, given that there is no axiomatic formulation of quantum field theory that isn't trivial, but at least there we are in spitting distance of having something to talk about such that (in very limited situations, mind you) we can speak of the inevitable. But the OP rather casually suggested, implicitly, if not explicitly, that the submitted article was wrong because "game theory," which is both glib and just like technically not a conclusion one could reasonably come to with an honest appraisal of the limitations of these sorts of ways of thinking about the world.
Game theory is only as good as the model you are using.
Now couple the fact that most people are terrible at modeling with the fact that they tend to ignore implicit constraints… the result is something less resembling science but something resembling religion.
The concept of Game Theory is inevitable because it's studying an existing phenomenon. Whether or not the researchers of Game Theory correctly model that is irrelevant to whether the phenomenon exists or not.
The models such as Prisoner's Dilemma are not inevitable though. Just because you have two people doesn't mean they're in a dilemma.
---
To rephrase this, Technology is inevitable. A specific instance of it (ex. Generative AI) is not.
Yes, it's one thing to say that game theory is inevitable, but defection is not inevitable. In fact, if you consider all levels of the organization of life, from multicellularity to large organisms, to families, corporations, towns, nations, etc., it all exists because entities figured out how to cooperate and prevent defection.
If you want to fix these things, you need to come up with a way to change the nature of the game.
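"Changing the nature of the game" can be made literal: alter the payoffs and the equilibrium moves. A sketch with a brute-force pure-strategy Nash equilibrium check for 2x2 games (the payoff numbers and the size of the defection penalty are illustrative, not from the thread):

```python
from itertools import product

def nash_equilibria(payoffs):
    """payoffs: dict (row_move, col_move) -> (row_payoff, col_payoff).
    Returns move pairs where neither player gains by deviating alone."""
    moves = sorted({m for m, _ in payoffs})
    eqs = []
    for r, c in product(moves, moves):
        row_best = all(payoffs[(r, c)][0] >= payoffs[(alt, c)][0] for alt in moves)
        col_best = all(payoffs[(r, c)][1] >= payoffs[(r, alt)][1] for alt in moves)
        if row_best and col_best:
            eqs.append((r, c))
    return eqs

pd = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
      ("D", "C"): (5, 0), ("D", "D"): (1, 1)}
print(nash_equilibria(pd))  # [('D', 'D')] -- the familiar trap

# Impose an external cost of 4 on defecting (a law, a reputation hit)
# and the unique equilibrium flips to mutual cooperation:
changed = {(r, c): (p - 4 * (r == "D"), q - 4 * (c == "D"))
           for (r, c), (p, q) in pd.items()}
print(nash_equilibria(changed))  # [('C', 'C')]
```

Nothing about the players changed, only the payoff structure; that's the whole lever.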
In a world ruled by game theory alone, marketing is pointless. Everyone already makes the most rational choice and has all the information, so why appeal to their emotions, build brand awareness, or even tell them about your products? Yet companies spend a lot of money on marketing, and game theory tells us that they wouldn't do that without reason.
Game theory makes a lot of simplifying assumptions. In the real world most decisions are made under constraints, and you typically lack a lot of information and can't dedicate enough resources to each question to find the optimal choice given the information you have. Game theory is incredibly useful, especially when talking about big, carefully thought out decisions, but it's far from a perfect description of reality.
> Game theory makes a lot of simplifying assumptions.
It does, because it's trying to get across the point that although the world seems impossibly complex, it's not. Of course it is in fact _almost_ impossibly complex.
This doesn't mean that it's redundant for more complex situations; it only means that to increase its accuracy you have to add depth to the model.
They are at best an attempt to use our tools of reason and observation to predict nature, and you can point to thousands of examples, from market crashes to election outcomes, to observe how they can be flawed and fail to predict.
This argument has the unspoken premise that in large part, people's core identity is reacting to external influences. I believe that while responding to influences is part of human existence, the richness of the individual transcends such an explanation for all their actions. The phrase "game theory is inevitable" reads like the perspective of an aristocrat looking down on the masses - enough vision to see the interplay of things, and enough arrogance to assume they can control it.
Game theory is a model that's sometimes accurate. Game theorists often forget that humans are bags of thinking meat, and that our thinking is accomplished by goopy electrochemical processes.
Brains can and do make straight-up mistakes all the time. Like "there was a transmission error"-type mistakes. They can't be modeled or predicted, and so humans can never truly be rational actors.
Humans also make irrational decisions all the time based on gut feeling and instinct. Sometimes with reasons that a brain backfills, sometimes not.
People can and do act against their own self-interest all the time, and not for "oh, but they actually thought X" reasons. Brains make unexplainable mistakes. Have you ever walked into a room and forgotten what you went in there to do? That state isn't modelable with game theory, and it generalizes to every aspect of human behavior.
Game theory assumes that all the players agree on the pay-offs. However, this is often not the case in real-world situations. Robert McNamara (the former US Secretary of Defense) said that he realized after the Vietnam War that the US and the Vietnamese had seen the war completely differently, even years after the war had ended (see the excellent documentary 'The Fog of War').
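To make that point concrete, here's a toy sketch (mine, purely illustrative; all matrices and names are hypothetical): the same 2x2 game produces a different equilibrium when one player's actual payoffs differ from what the other player assumes.

```python
from itertools import product

def pure_nash(row_payoff, col_payoff):
    """Pure-strategy Nash equilibria of a 2x2 game.
    row_payoff[i][j] / col_payoff[i][j]: payoffs when row plays i, column plays j."""
    eqs = []
    for i, j in product(range(2), range(2)):
        row_ok = row_payoff[i][j] >= row_payoff[1 - i][j]  # row can't gain by switching
        col_ok = col_payoff[i][j] >= col_payoff[i][1 - j]  # column can't gain by switching
        if row_ok and col_ok:
            eqs.append((i, j))
    return eqs

# Classic Prisoner's Dilemma (0 = cooperate, 1 = defect): both defect.
row = [[3, 0], [5, 1]]
col = [[3, 5], [0, 1]]
print(pure_nash(row, col))  # -> [(1, 1)]

# Same row player, but the column player actually values cooperating
# (payoffs the row player doesn't know about): the equilibrium moves.
altruist_col = [[6, 2], [5, 1]]
print(pure_nash(row, altruist_col))  # -> [(1, 0)]: row defects, column still cooperates
```

The "prediction" of the analysis depends entirely on whose payoff matrix you write down - which is exactly the problem McNamara describes.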
I do partly disagree, because game theory is based on an economic and (as also mentioned) reductionist view of the human, namely homo oeconomicus, which rests on the bold assumption, asserted by a few men in history, that we all act with pure egoism and zero altruism. That assumption is nowadays heavily critiqued and can be challenged.
It is beyond question that it is highly useful and simplifies things to an extent that lets us mathematically model interactions between agents, but only under our underlying assumptions. And these assumptions need not be true; in fact, there are studies on how models like homo oeconomicus have created a self-fulfilling reality by making people think in the ways the model prescribes and adjust to it, rather than the model approximating us as it ideally should. Hence, I don't think you can plainly limit or frame this reality as a product of game theory.
> Game theory is inevitable.
Because game theory is just math, the study of how independent actors react to incentives.
That's not how mathematics works. "it's just math therefore it's a true theory of everything" is silly.
We cannot forget that mathematics is all about models, models which, by definition, do not account for even remotely close to all the information involved in predicting what will actually occur in reality. Game Theory is a theory about a particular class of mathematical structures. You cannot reduce all of existence to just this class of structures, and if you think you can, you'd better be ready to write a thesis on it.
Couple that with the inherent unpredictability of human beings, and I'm sorry but your Laplacean dreams will be crushed.
The idea that "it's math so it's inevitable" is a fallacy. Even if you are a hardcore mathematical Platonist you should still recognize that mathematics is a kind of incomplete picture of the real, not its essence.
In fact, the various incompleteness theorems illustrate directly, in mathematics' own terms, that the idea that a mathematical perspective or any logical system could perfectly account for all of reality is doomed from the start.
"in formal experiments the only people who behaved exactly according to the mathematical models created by game theory are economists themselves, and psychopaths" [1]
One person has more impact than you think. Many times it's one person speaking what's on the mind of many, and that speaking out can bring the courage to do what needs to be done to the many people who are sitting on the fence. The Andor TV series really taught me that. I'm working on a presentation on surveillance capitalism that I plan to show to my community. It's going to be an interesting future. Some will side with the Empire and others will side with the Rebellion.
You realize surveillance capitalism is what caused the Andor TV show (and more broadly the entire Star Wars franchise) to exist at all, right? Gigantic corporate entities have made a lot of money from monetizing the Star Wars franchise.
I'll say frankly that I personally object to Star Wars on an aesthetic level - it is ultimately an artistically-flawed media franchise even if it has some genuinely compelling ideas sometimes. But what really bothers me is that Star Wars in its capacity as a modern culturally-important story cycle is also intellectual property owned by the Disney corporation.
The idea that the problems of the world map neatly to a conflict between an evil empire and a plucky rebellion is also basically propagandistic (and also boring). It's a popular storytelling frame - that's why George Lucas wrote the original Star Wars movies that way. But I really don't like seeing someone watch a TV series using the Star Wars intellectual property package and then use the story the writers chose to write - writers ultimately funded by Disney - as a basis for how they see themselves in the world politically.
> if you can shape those beliefs and that information flow, you alter the incentives
Selective information dissemination, persuasion, and even disinformation are for sure the easiest ways to change the behaviors of actors in the system. However, the most effective and durable way to "spread those lies" is for them to be true!
If you can build a technology which makes the real facts about those incentives different than what it was before, then that information will eventually spread itself.
For me, the canonical example is the story of the electric car:
All kinds of persuasive messaging, emotional appeals, moral arguments, and so on have been employed to convince people that it's better for the environment if they drive an electric car than a polluting, noisy, smelly, internal-combustion gas guzzling SUV. Through the 90s and early 2000s, this saw a small number of early adopters and environmentalists adopting niche products and hybrids for the reasons that were persuasive to them, while another slice of society decided to delete their catalytic converters and "roll coal" in their diesels for their own reasons, while the average consumer was still driving an ICE vehicle somewhere in the middle of the status quo.
Then lithium battery technology and solid-state inverter technology arrived in the 2010s and the Tesla Model S was just a better car - cheaper to drive, more torque, more responsive, quieter, simpler, lower maintenance - than anything the internal combustion engine legacy manufacturers could build. For the subset of people who can charge in their garage at home with cheap electricity, the shape of the game had changed, and it's been just a matter of time (admittedly a slow process, with a lot of resistance from various interests) before EVs were simply the better option.
Similarly, with modern semiconductor technology, solar and wind energy no longer require desperate pleas from the limited political capital of environmental efforts, it's like hydro - they're just superior to fossil fuel power plants in a lot of regions now. There are other negative changes caused by technology, too, aided by the fact that capitalist corporations will seek out profitable (not necessarily morally desirable) projects - in particular, LLMs are reshaping the world just because the technology exists.
Once you pull a new set of rules and incentives out of Pandora's box, game theory results in inevitable societal change.
The greatest challenge facing humanity is building a culture where we are liberated to cooperate toward the greatest goals without fear of another selfish individual or group taking advantage to our detriment.
Yes, the mathematicians will tell you it's "inevitable" that people will cheat and "enshittify". But if you take statistical samplings of the universe from an outsider's perspective, you would think it would be impossible for life to exist. Our whole existence is built on disregard for the inevitable.
Reducing humanity to a bunch of game-theory optimizing automatons will be a sure-fire way to fail The Great Filter, as nobody can possibly understand and mathematically articulate the larger games at stake that we haven't even discovered.
Game theory applied to the world is a useful simplification; reality is messy. In reality:
* Actors have access to limited computation
* The "rules" of the universe are unknowable and changing
* Available sets of actions are unknowable
* Information is unknowable, continuous, incomplete, and changes based on the frame of reference
* Even the concept of an "Actor" is a leaky abstraction
There's a field of study called Agent-based Computational Economics which explores how systems of actors operating under various sets of assumptions behave. In this field you can see a lot of behaviour that more closely resembles real-world phenomena, but of course if those models are highly predictive they have a tendency to be kept secret and monetized.
So for practical purposes, "game theory is inevitable" is only a narrowly useful heuristic. It's certainly not a heuristic that supports technological determinism.
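A minimal illustrative sketch of the agent-based idea (my own toy example, not code from any ACE toolkit): boundedly rational agents on a ring copy their best-scoring neighbor instead of computing an optimum, and the population freezes into path-dependent clusters rather than a clean game-theoretic equilibrium.

```python
import random

def simulate(n_agents=100, rounds=50, seed=0):
    """Boundedly rational agents on a ring: each plays a coordination game
    with its two neighbors, then copies whichever of {left, self, right}
    scored best last round. No global optimization, just local imitation."""
    rng = random.Random(seed)
    strategies = [rng.choice("AB") for _ in range(n_agents)]
    for _ in range(rounds):
        # payoff: 1 point per neighbor playing the same strategy
        scores = [
            sum(strategies[(i + d) % n_agents] == strategies[i] for d in (-1, 1))
            for i in range(n_agents)
        ]
        strategies = [
            strategies[max((i - 1) % n_agents, i, (i + 1) % n_agents,
                           key=lambda k: scores[k])]
            for i in range(n_agents)
        ]
    return strategies

final = simulate()
print(final.count("A"), final.count("B"))  # typically frozen clusters, not consensus
```

Note that clusters of three or more agents become stable: interior agents score 2, so boundary agents keep copying their own side. Which clusters survive depends entirely on the random initial condition, which is the kind of history-dependence equilibrium analysis glosses over.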
I mean, in an ideal system we would have political agency greater than the sum of individuals, which would put pressure on / curtail the rise of abusive actors taking advantage of power and informational asymmetry to gain more power (wealth) and influence (wealth) in order to gain more wealth.
It feels like the only aspect of Game Theory at work here is opportunity cost. For example, why shouldn't you make AI porn generation software? There are moral reasons not to, but usually most put them aside because someone else is going to get the bag first. The items on that exhaustive list the author enumerated are all in some way byproducts of the break-things-move-fast-say-sorry-later philosophy. You need ID for the websites because you did not give a shit and wanted to get the porn out there first and foremost. Now you need IDs.
You need to track everyone and everything on the internet because you did not want to cap your wealth at a reasonable price for the service. You are willing to live with accumulated sins because "it's not as bad as murder". The world we have today has way more to do with these things than anything else. We do not operate as a collective, and naturally, we don't get good outcomes for the collective.
what i'm reading here then is that those 7.999999999B others are braindead morons.
OP is 100% correct. Either you accept that the vast majority are mindless automatons (not hard to get on board with that, honestly, but still, it seems an overestimate), or there's some kind of structural imbalance, an asymmetry that's actively harmful and not the passive outcome of 8B independent actors.
Agree with OP. This reminds me of fast food in the 90s. Executives rationalized selling poison as "if I don't, someone else will" and they were right until they weren't.
Society develops antibodies to harmful technology but it happens generationally. We're already starting to view TikTok the way we view McDonald's.
But don't throw the baby out with the bath water. Most food innovation is net positive but fast food took it too far. Similarly, most software is net positive, but some apps take it too far.
Perhaps a good indicator of which companies history will view negatively are the ones where there's a high concentration of executives rationalizing their behavior as "it's inevitable."
Obesity rates have never been higher and the top fast food franchises have double digit billions in revenue. I don’t think there is any redemption arc in there for public health since the 90s.
those statistics really gloss over / erase the vast cultural changes that have occurred. america / the west / society's relationship to fast food and obesity is dramatically different than it was thirty years ago.
I'm genuinely curious about the changes you are talking about?
Keep in mind, thirty years ago, I was a kid. I thought that fast food was awesome.
My parents would allow me a fast food meal at best once a month, and my "privileged" friends had a fast food meal a week.
Now, I'd rather starve than eat something coming from a fast food.
But around me, normies are eating from a fast food place at least once a day.
We have at least ten big franchises in the country, and at every corner there's a kebab/tacos/weird place selling trash.
So, from my POV, I'd thought that, in general, people are eating much more fast food than thirty years ago.
In the interim, America got obsessed with fitness, and being out of shape, much less obese, became dramatically less popular in the middle / upper class.
Like now it's possible to go days in some cities without seeing a single obese person. It's still a big problem outside of the cities and in lower-class areas, but... I think the changes are trickling down / propagating? That's been my impression at least.
Surprised by your take on fast food, by the way. When I complain about fast food like was ubiquitous in the 90s I think of McDonald's and other highly processed things. The type that are covered in salt and cheap oil and artificial smells and where the meat is like reconstituted garbage, where lunch is 1500 calories, where everyone gets a giant soda, where kids are enticed with cheap plastic crap.
But a corner kebab or taco place seems like an unequivocal positive for society, I have no complaints about their existence at all. I feel like most people eating at corner shops for half of their meals is pretty much ideal--if it's affordable to do so then it is a very sensible and economically positive division of labor. On the condition that the food be of decent quality, of course. Which sometimes it is. Perhaps not as much as it should be though, but people do have standards and will pick the better places.
Since you talked about "the west", I applied your comment to my situation also (France).
But it seems that some things were and are still different.
Related to fitness, sure, there's millions of people who "go to the gym" at least once a week and buy food supplements and protein powders...
But they'll happily eat fast food several times a week.
And if we talk about ultra-processed food, it's even worse.
> But a corner kebab or taco place seems like an unequivocal positive for society, I have no complaints about their existence at all.
That's probably a big difference, because nobody here will dare say that those places serve actual food.
Not because of the cultural aspect, but just because it's the case.
They use the lowest-quality ingredients, use lots of bad oils to cook, put in tons of salt and other additives...
And don't get me started on the hygiene side.
People are perfectly aware of that and they'll even joke about it while eating their 50% fat kebab.
At least McDonald's has hygiene on its side!
We don't have the same obesity epidemic, partly due to portion sizing and mobility, but almost half the population is overweight and figures are still going up.
Agree and disagree. It is also possible to take a step back and look at the very large picture and see that these things actually are somewhat inevitable. We do exist in a system where "if I don't do it first, someone else will, and then they will have an advantage" is very real and very powerful. It shapes our world immensely. So, while I understand what the OP is saying, in some ways it's like looking at a river of water and complaining that the water particles are moving in a direction that the levees pushed them. The levees are actually the bigger problem.
We are the levees in your metaphor and we have agency. The problem is not that one founder does something before another and gains an advantage. The problem is the millions of people who buy or use the harmful thing they create - and that we all have control over. If we continue down this path we'll end up at free will vs determinism and I choose to believe the future is not inevitable.
We aren't the real levees though. The system we live in is. Yes, a few people will push back and try to change the momentum to a different direction but that's painful and we have enough going on each day that most people don't have time for that (let alone agree on the direction). Structural change is the only real way to guide the river.
I get your point. I'm merely pointing out that some things, even though they aren't technically inevitable, are (in practice) essentially inevitable because larger forces are pushing things in that direction.
Through a very complicated, long, and arduous process.
It's mostly by design (at least in my country) so one bad actor (e.g. a failed painter) can't change the whole system instantly.
I do disagree that some of these were not inevitable. Let me deconstruct a couple:
> Tiktok is not inevitable.
TikTok the app and company, not inevitable. Short form video as the medium, and algorithm that samples entire catalog (vs just followers) were inevitable. Short form video follows gradual escalation of most engaging content formats, with legacy stretching from short-form-text in Twitter, short-form-photo in Instagram and Snapchat. Global content discovery is a natural next experiment after extended follow graph.
> NFTs were not inevitable.
Perhaps Bitcoin as proof-of-work productization was not inevitable (for a while), but once we got there, a lot of things were very much inevitable. Explosion of alternatives like with Litecoin, explosion of expressive features, reaching Turing-completeness with Ethereum, "tokens" once we got to Turing-completeness, and then "unique tokens" aka NFTs (but also colored coins in Bitcoin parlance before that). The cultural influence was less inevitable, massive scam and hype was also not inevitable... but to be fair, likely.
I could deconstruct more, but the broader point is: coordination is hard. All these can be done by anyone: anyone could have invented Ethereum-like system; anyone could have built a non-fungible standard over that. Inevitability comes from the lack of coordination: when anyone can push whatever future they want, a LOT of things become inevitable.
The author doesn't mean that the technologies weren't inevitable in the absolute sense. They mean that it was not inevitable that anyone should use those technologies. It's not inevitable that they will use Tiktok, and it is not inevitable for anyone; I've never used Tiktok, so the author is right in that regard.
If you disavow short form video as a medium altogether, something I'm strongly considering, then you can. It does mean you have to make sacrifices, for example Youtube doesn't let you disable their short form video feature so it is inevitable for people who choose they don't want to drop Youtube. That is still a choice though, so it is not truly inevitable.
The larger point is that there are always people pushing some sort of future, sketching it as inevitable. But the reality is that there always remains a choice, even if that choice means you have to make sacrifices.
The author is annoyed at people throwing in the towel and declaring AI is inevitable, when the author apparently still sees a path to not tolerating AI. Unfortunately the author doesn't really constructively show that path, so the whole article is basically a luddite complaint.
Re Tiktok, what is definitely not inevitable is the monetization of human attention. It's only a matter of policy. Without it the incentives to make Tiktok would have been greatly reduced, if even economically possible at all.
> what is definitely not inevitable is the monetization of human attention. It's only a matter of policy. Without it the incentives to make Tiktok would have been greatly reduced, if even economically possible at all.
This is not a new thing. TV monetizes human attention. Tiktok is just an evolution of TV. And Tiktok comes from China which has a very different society. If short-form algo slop video can thrive in both liberal democracies and a heavily censored society like China, then it's probably somewhat inevitable.
Radio broadcasting and newspapers monetized it even before TV. China is hyper-capitalist too; what is restricted is mainly political speech, so that doesn't make much difference. If anything the EU is probably where advertisement is the most regulated. We can easily envision having way more constraints on advertisement and influencing, which would drastically reduce the value of human attention. Not sure many would get in the streets to protest against that.
The monetization of attention was a side effect of TV, not the primary purpose.
TikTok and other current efforts have that monetization as their primary purpose.
The profit-first-everything-else-never approach typical in late-stage capitalism was not inevitable. It is very possible to see the specific turns that led us to this point, and they did not have to happen.
This appears to be overthinking it: sure it's inevitable that when zero trust systems are shown to be practicable, they will be explored. But, like a million other ideas that nobody needed to spend time on, selling NFTs should've been relegated to obscurity far earlier than what actually happened.
> Short form video as the medium, and algorithm that samples entire catalog (vs just followers) were inevitable.
Just objectively false, and it assumes that the path humans took to allow this is the only path that could have unfolded.
Much of this tech could have been regulated early on, preventing garbage like short-form slop, from existing.
So in short, none of what you are describing is "inevitable". Someone might come up with it, and others can group together and say: "We aren't doing that, that is awful".
Which is exactly what happened though?
I never engaged with most of what the author laments - one thing I found hard was to exist in society without a smartphone, but that's more down to current personal circumstances than inevitability.
My personal experience is that most people don't mind these things, for example short-form content: most of my friends genuinely like that sort of content and I can to some extent also understand why. Just like heroin or smoking, it will take some generations to regulate it (and tbf we still have problems with those two even though they are arguably much worse)
> Perhaps Bitcoin as proof-of-work productization was not inevitable (for a while), but once we got there, a lot of things were very much inevitable. Explosion of alternatives like with Litecoin, explosion of expressive features, reaching Turing-completeness with Ethereum, "tokens" once we got to Turing-completeness, and then "unique tokens" aka NFTs (but also colored coins in Bitcoin parlance before that). The cultural influence was less inevitable, massive scam and hype was also not inevitable... but to be fair, likely.
The only way I can get to the "crypto is inevitable" take relies on the scams and fraud as the fundamental drivers. These things don't have any utility otherwise and no reason to exist outside of those.
Scams and fraud are such potent drivers that perhaps it was inevitable, but one could imagine a more competent regulatory regime that nipped this stuff in the bud.
nb: avoiding financial regulations and money laundering are forms of fraud
> The only way I can get to the "crypto is inevitable" take relies on the scams and fraud as the fundamental drivers.
The idea of a cheap, universal, anonymous digital currency itself is old (e.g. eCash and Neuromancer in the '80s, Snow Crash and Cryptonomicon in the '90s).
It was inevitable that someone would try implementing it once the internet was widespread - especially as long as most banks are rent-seeking actors exploiting those relying on currency exchanges, as long as many national currencies are directly tied to failing political and economic systems, and as long as the un-banking and financial persecution of undesirables was a threat.
Doing it in such an extremely decentralized way, with the whole proof-of-work shtick tacked on top, was not inevitable and arguably not a good way to do it, nor was the cancer that has grown on top of it all...
I think you could say it's inevitable because of the size of both the good AND bad opportunities. Agree with you and the original point of the article that there COULD be a better way. We are reaping tons of bad outcomes across social media, crypto, AI, due to poor leadership(from every side really).
Imagine new coordination technology X. We can remove any specific tech reference to remove prior biases. Say it is a neutral technology that could enable new types of positive coordination as well as negative.
3 camps exist.
A: The grifters. They see the opportunity to exploit and individually gain.
B: The haters. They see the grifters and denigrate the technology entirely. Leaving no nuance or possibility for understanding the positive potential.
C: The believers. They see the grift and the positive opportunity. They try and steer the technology towards the positive and away from the negative.
The basic formula for where the technology ends up is -2(A)-(B) +C. It's a bit of a broad strokes brush but you can probably guess where to bin our current political parties into these negative categories. We need leadership which can identify and understand the positive outcomes and push us towards those directions. I see very little strength anywhere from the tech leaders to politicians to the social media mob to get us there. For that, we all suffer.
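Taking the comment's broad-strokes formula at face value (the camp sizes below are entirely hypothetical, just to show how the weighting plays out):

```python
def outcome_score(grifters, haters, believers):
    """The comment's rough weighting: grifters count double against the
    technology's outcome, haters count once against, believers once in favor."""
    return -2 * grifters - haters + believers

# Illustrative camp sizes only: even when believers outnumber each other
# camp, the weighted outcome can still come out negative.
print(outcome_score(grifters=30, haters=30, believers=80))  # -> -10
```

Which is the comment's point in miniature: the believers don't just need to exist, they need to outweigh the grifters two to one plus the haters.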
> These things don't have any utility otherwise and no reason to exist outside of those.
Lol. Permissionless payments certainly have utility. Making it harder for governments to freeze/seize your assets has utility. Buying stuff the government disallows, often illegitimately, has value. Currency that can't be inflated has value.
And outside of pure utility, they have tons of ideological reasons to exist outside scams and fraud. Your inability to imagine those, or dismissal of them, is telling as to your close-mindedness.
But without regulation they clearly devolve into scam and fraud vehicles. Crypto just isn't worth the time or effort for regular folks. I'm not sure what's going to happen first -- abandonment or bank run -- but crypto, like all unregulated banking systems, is destined to fail. I guess it could end up being regulated, but at this point, with such pervasive scam/fraud use, that will probably just accelerate the bank run.
Shouldn't you have already moved on to AI hype? The fact that you're still worshipping crypto is telling as to your close-mindedness.
What have I said that indicates worship? I am simply pointing out that what you claimed was objectively false, and in response you moved the goalposts.
You're taking the meaning of the word "inevitable" too literally.
Something might be "inevitable" in the sense that someone is going to create it at some point whether we like it or not.
Something is also not "inevitable" in the sense that we will be forced to use it or you will not be able to function in society. <-- this is what the author is talking about
We do not need to tolerate being abused by the elites or use their terrible products because they say so. We can just say no.
It was all inevitable, by definition, as we live in a deterministic universe (shock, I know!)
But further, the human condition has been developing for tens of thousands of years, and efforts to exploit it for a couple of thousand (at least), so do we really expect that a technology around for a fraction of that time would escape all of the inevitable 'abuses' of it?
What we need to focus on is mitigation, not lament that people do what people do.
The point is that regulation could have made Bitcoin and NFTs never cause the harm they have inflicted and will inflict, but the political will is not there.
> Short form video as the medium, and algorithm that samples entire catalog (vs just followers) were inevitable.
I doubt that. There is a reason the videos get longer again.
So people could have ignored the short form from the beginning. And wasn't the matching algorithm the real killer feature that amazed people, not the length of the videos?
I've got a hypothesis that the reason short-form video like TikTok became dominant is because of the decline in reading instruction (eg. usage of whole-word instruction over phonics) that started in 1998-2000. The timing largely lines up: the rise of video content started around 2013, just as these kids were entering their teenage years. Media has significant economies of scale and network effects (i.e. it is much more profitable to target the lowest common denominator than any niche group), and so if you get a large number of teenagers who have difficulty with reading, media will adjust to provide them content that they can consume effortlessly.
Anecdotally, I hear lots of people talking about the short attention span of Zoomers and Gen Alpha (which they define as 2012+; I'd actually shift the generation boundary to 2017+ for the reasons I'm about to mention). I don't see that with my kid's 2nd-grade classmates: many of them walk around with their nose in a book and will finish whole novels. They're the first class after phonics was reintroduced in the 2023-2024 kindergarten year; every single kid knew how to read by the end of kindergarten. Basic fluency in skills like reading and math matters.
The short-form video craze started in the U.S. though, right? And with firms like Vine and SnapChat rather than TikTok. Like I said, media (particularly social media) has strong network effects, so if you get a critical mass of early users you can take over the rest of the population even if the initial spark that attracted them doesn't apply to the rest of the world. Same as how Facebook started out at the most prestigious dorm in the most prestigious college of the U.S. - by the time it got to senior citizens they don't care about college prestige, but they got on because their grandchildren were sharing their photos on it, and the reason the grandchildren got on was because they wanted to be cooler.
I recognize this is very anecdotal (your observation and mine), but my gen alpha daughter approaching the teenage phase always has her head in a book. She also has a very short attention span.
That’s ridiculously US-centric. TikTok is a global phenomenon initiated by a Chinese company. Nothing would be different in the grand scale if there were zero American TikTok users.
That was also roughly the time period where mobile phones and their networks started to become reliably able to stream video at scale. That seems like a more plausible proximate cause for the timing of the rise of TikTok.
Even if that's true, that sub-minute videos are not the apex content, that only goes to prove inevitability. Every idea will be tested and measured; the best-performing ones will survive. There can't be any coordination or consensus like "we shouldn't have that" - the only signal is, "is this still the most performant medium + algorithm mix?"
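The "every idea will be tested and measured" dynamic is essentially a multi-armed bandit. A minimal epsilon-greedy sketch (all engagement rates and format names are hypothetical, chosen only to illustrate the mechanism) shows how the best-measured format ends up dominating with no coordination at all:

```python
import random

def serve_formats(true_rates, rounds=10000, eps=0.1, seed=1):
    """Epsilon-greedy A/B testing: mostly serve the format with the best
    measured engagement, occasionally explore the alternatives."""
    rng = random.Random(seed)
    counts = [0] * len(true_rates)
    wins = [0] * len(true_rates)
    for _ in range(rounds):
        if rng.random() < eps or not all(counts):
            arm = rng.randrange(len(true_rates))            # explore
        else:
            arm = max(range(len(true_rates)),
                      key=lambda a: wins[a] / counts[a])    # exploit best measured
        counts[arm] += 1
        wins[arm] += rng.random() < true_rates[arm]         # simulated engagement
    return counts

# Hypothetical formats: [long video, short video, text]
served = serve_formats([0.05, 0.12, 0.03])
print(served)  # the short-video arm ends up served by far the most
```

No platform has to "decide" that short video wins; the measurement loop converges on it as long as it keeps out-performing, which is exactly why "we shouldn't have that" has no purchase inside the loop.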
I feel that the argument here hinges on “performant”
The regulatory, cultural, social, even educational factors surrounding these ideas are what could have made these not inevitable. But changes weren’t made, as there was no power strong enough to enact something meaningful.
I don't know if you wrote it as a form of satire, but obviously there is no such thing as "YouTube's front-page". Everyone gets recommended different videos, based on various signals, even when not authenticated.
Shorts are everywhere because it is the most addictive form of media, easy to consume, no effort required to follow through.
More generally I think the problems we got into were inevitable. They are the result of platforms optimizing for their own interests at the expense of both creatives and users, and that is what any company would do.
All the platforms enshittified, they exploit their users first, by ranking addictive content higher, then they also influence creatives by making it clear only those who fit the Algorithm will see top rankings. This happens on Google, YT, Meta, Amazon, Play Store, App Store - it's everywhere. The ranking algorithm is "prompting" humans to make slop. Creatives also optimize for their self interest and spam the platforms.
This post rhymes with a great quote from Joseph Weizenbaum:
"The myth of technological and political and social inevitability is a powerful tranquilizer of the conscience. Its service is to remove responsibility from the shoulders of everyone who truly believes in it. But, in fact, there are actors!"
That reminds me of water use in California. We frequently have droughts, and the messaging is always to reduce water usage. I have friends who turn the shower off while soaping up just to save a few gallons out of civic duty. Meanwhile a few companies are using more water than every residential user combined to grow alfalfa, half of which gets shipped overseas. Like ban one company from selling livestock feed to Asia/Saudi Arabia and the drought for 40 million people is solved.
but people just throw their hands up: "looks like another drought this year! That's California!"
Perhaps we need more collective action & coordination?
I don’t see how we could politically undermine these systems, but we could all do more to contribute to open source workarounds.
We could contribute more to smart tv/e-reader/phone & tablet jailbreak ecosystems. We could contribute more to the fediverse projects. We could all contribute more to make Linux more user friendly.
I admire volunteer work, but I don't think we should focus too hard on paths forward that summarize to "the volunteers need to work harder". If we like what they're doing, we should find ways to make it more likely to happen.
For instance, we could forbid taxpayer money from being spent on proprietary software and on hardware that is insufficiently respectful of its user, and we could require that 50% of the money not spent on the now forbidden software instead be spent on sponsorships of open source contributors whose work is likely to improve the quality of whatever open alternatives are relevant.
Getting Microsoft and Google out of education would be huge re: denormalizing the practice of accepting eulas and letting strangers host things you rely on without understanding how they're leveraging that position against your interests.
I understand artists etc. talking about AI in a negative sense, because they don't really get it completely, or it's against their self-interest, which means they subconsciously find bad arguments to support their own interest.
However, tech people who think AI is bad, or not inevitable, are really hard to understand. It's almost like Bill Gates saying "we are not interested in the internet". This is pretty much like being against the internet, industrialization, the printing press, or mobile phones. The idea that AI is anything less than paradigm shifting, or even revolutionary, is weird to me. I can only say being against this is either self-interest or an inability to grasp it.
So if I produce something (art, a product, a game, a book) and it's good, and it's useful to you, fun to you, beautiful to you, and you cannot really determine whether it's AI, does it matter? Like how does it matter? Is it because they "stole" all the art in the world? But when a person is "influenced" by people, ideas, and art in a less efficient way, somehow we applaud that, because what else, reinvent the wheel again forever?
Apologies, but I'm copy/pasting a previous reply of mine to a similar sentiment:
Art is an expression of human emotion. When I hear music, I am part of that artist's journey and struggles. The emotion in their songs comes from their first break-up, or an argument they had with someone they loved. I can understand that on a profound, shared level.
Way back, my friends and I played a lot of StarCraft. We only played cooperatively against the AI. Until one day a friend and I decided to play against each other. I can't put into words how intense that was. When we were done (we played in different rooms of the house), we got together and laughed. We both knew what the other had gone through. We both said "man, that was intense!"
I don't get that feeling from an amalgamation of all human thoughts/emotions/actions.
One death is a tragedy. A million deaths is a statistic.
Yet humans are the ones enacting an AI to make art (of some kind). Is it therefore not art because, even though a human initiated the process, the machine completed it?
If you argue that, then what about kinetic sculptures, what about pendulum painting, etc? The artist sets them in motion but the rest of the actions are carried out by something nonhuman.
And even in a fully autonomous sense; who are we to define art as being artefacts of human emotion? How typically human (tribalism). What's to say that an alien species doesn't exist, somewhere...out there. If that species produces something akin to art, but they never evolved the chemical reactions that we call emotions...I suppose it's not art by your definition?
And what if that alien species is not carbon-based? Is it then much of a stretch to call what an eventual AGI produces art?
My definition of art is a superposition: everything and nothing is art at the same time, because art is in the eye of the beholder. When I look up at the night sky, that's art, but no human emotion produced it.
You seem to be conflating natural beauty and the arts.
Just because something beautiful can be created without emotion, that doesn't mean it's art. It just means something pleasing was created.
We have many species on earth that are "alien" to us - they don't create with emotion, they create things that are beautiful because that's just how it ended up.
Bees don't create hexagonal honeycomb because they feel a certain way, it's just the most efficient way for them to do so. Spider webs are also created for efficacy. Down to the single cell, things are constructed in beautiful ways not for the sake of beauty, but out of evolution.
The earth itself creates things that are absolutely beautiful, but are not art. They are merely the result of chemical and kinetic processes.
The "art" of it all, is how humans interpret it and build upon it, with experience, imagination, free will and emotions.
What you see in the night sky, that is not art. That is nature.
The things that humans are compelled to create under the influence of all this beauty - that is the art.
With a kinetic sculpture, someone went through the effort of designing it to do that. With AI art, sure, you ask it to do something, but a human isn't involved in the creative process in any capacity beyond that.
This is a very reductionist claim about how people use AI in their art process. The truth is that the best artists use AI in a sort of dance between the human and machine. But always, the human is the prime mover through a process of iteration.
Sure, but in the case of AI it resembles the relationship of a patron to an art director. We generally don't assign artistry to the person hiring an art director to create artistic output, even if it requires heavy prompting and back and forth. I am not bold enough to try to encompass something as large and fundamental as art into a definition, though I suppose that art does carry something about the craft of using the medium.
At any rate, though there is some aversion to AI art for art's sake, the real aversion to AI art is that it squeezes one of the last viable options for people to become 'working artists' and funnels that extremely hard-earned profit into the hands of the conglomerates that have enough compute to train generative models. Is making a living through your art something that we would like to value and maintain as a society? I'd say so.
No doubt, but if your StarCraft experience had somehow been exactly the same with an AI, gave you the same joy, and you couldn't even tell whether it was an AI or another player, would that matter? I get this is kind of a Truman Show-ish scenario, but does it really matter? If the end results are the same, does it still matter? If it does, why? I get the emotional aspect of it, but in practice you wouldn't even know. Is AI at that point for any of these? Possibly not. We can tell it's AI right now in many interactions and art forms, because it's hollow, and it's just "perfectly mediocre".
It's kind of the sci-fi cliché: can you have feelings for an AI robot? If you can, what does that mean?
I have to say this sort of thing is hard to think about, as it's pretty hypothetical right now. But I can't imagine how the current iteration of AI could give me the same joy. My friend and I were roommates. We played other games together, including D&D. We struggled with another friend to build a LAN so we could play these games together.
I can't imagine having the same shared experience with an AI. Even if I could, knowing there is no consciousness there does change things (if we can even know such a thing).
This reminds me of solipsism. I have no way of knowing if others are conscious, but it seems quite lonely to me if that were true. Even though it's the exact same thing from the outside. Or is it?
We lost that like 100 years ago. Sitting and watching someone perform music in an intimate setting rarely happens anymore.
If you listen to an album by your favorite band, it is highly unlikely that your feelings/emotions and interpretations correlate with what they felt. Feeling a connection to a song is just you interpreting it through the lens of your own experience, the singer isn't connecting with listeners on some spiritual level of shared experience.
I am not an AI art fan, it grosses me out, but if we are talking purely about art as a means to convey emotions around shared experiences, then the amalgamation is probably closer to your reality than a famous musician's. You could just as easily impose your feelings around a breakup or death on an AI-generated classical piano song, or a picture of a tree, or whatever.
There is also something personally ego-shattering about getting destroyed by another human. If I died 10 times trying to beat a boss in a game I wouldn't care much, but if someone beat me 10 times in a row at a multiplayer game I would be questioning everything.
I actually think this is the same point as the one you're responding to. If the human vs. AI factor didn't matter, you wouldn't care whether it was a human or an AI in your co-op. The differences are subtle but meaningful and will always play a role in how we choose experiences.
So are photos that are edited via Photoshop not art? Are they not art if they were taken on a digital camera? What about electronic music?
You could argue all these things are not art because they used technology, just like AI music or images... no? Where does the spectrum of "true art" begin and end?
They aren't arguing against technology; they're saying that a person didn't really make anything. With Photoshop, those are tools that can aid in art. With AI, there isn't any creative process beyond thinking up a concept and having it appear. We don't call people who commission art artists, because they asked someone else to use their creativity to realise an idea. Even there, the artist still put creative effort into the composition, the elements, the things you study in art appreciation classes. Art isn't just aesthetically pleasing things; it has meaning and effort put into it.
I think your view makes sense. On the other hand, Flash revolutionized animation online by allowing artists to express their ideas without having to exhaustively render every single frame, thanks to algorithmic tweening. And yeah, the resulting quality was lower than what Disney or Dreamworks could do. But the ten thousand flowers that bloomed because a wall came down for people with ideas but not time utterly redefined huge swaths of the cultural zeitgeist in a few short years.
I strongly suspect automatic content synthesis will have a similar effect as people get their legs under how to use it, because I strongly suspect there are even more people out there with more ideas than time.
I hear the complaints about AI being "weird" or "gross" now and I think about the complaints about Newgrounds content back in the day.
It matters because the amount of influence something has on you is directly attributable to the amount of human effort put into it. When that effort is removed, so too is the influence. Influence does not exist independently of effort.
All the people yapping about LLM keep fundamentally not grasping that concept. They think that output exists in a pure functional vacuum.
I don't know if I'm misinterpreting the word "influence", but low-effort internet memes have a lot more cultural impact than a lot of high-effort art. Also there's botnets, which influence political voting behaviour.
> low-effort internet memes have a lot more cultural impact than a lot of high-effort art.
Memes only have impact in aggregate, due to emergent properties in a McLuhanian sense. An individual meme has little to no impact compared to (some) works of art.
I see what you're getting at, but I think a better framing would be: there's an implicit understanding among humans that, in the case of things ostensibly human-created, a human found it worth creating. If someone put in the effort to write something, it's because they believed it worth reading. It's part of the social contract that makes it seem worth reading a book or listening to a lecture even if you don't receive any value from the first word.
LLMs and AI art flip this around, because potentially very little effort went into making things that potentially take lots of effort to experience and digest. That doesn't inherently mean they're not valuable, but it does mean there's no guarantee that at least one other person out there found it valuable. Even pre-AI it wasn't an iron-clad guarantee of course -- copy-writing, blogspam, and astroturfing existed long before LLMs. But everyone hates those because they prey on the same social contract that LLMs do, except at a smaller scale and with a lower effort-in:effort-out ratio.
IMO though, while AI enables malicious / selfish / otherwise anti-social behavior at an unprecedented scale, it also enables some pretty cool stuff and new creative potential. Focusing on the tech rather than those using it to harm others is barking up the wrong tree. It's looking for a technical solution to a social problem.
Well, the LLMs were trained with data that required human effort to write, it's not just random noise. So the result they can give is, indirectly and probabilistically regurgitated, human effort.
I'm paying infrastructure costs for our little art community, with chatbots crawling our servers, ignoring robots.txt, and mining the work of our users so they can make copies. Being told that I just don't get it because this is such a paradigm shift is pretty great.
Yes, it matters to me because art is something deeply human, and I don't want to consume art made by a machine.
It doesn't matter if it's fun and beautiful; it's just that I don't want to. It's like other things in life I try to avoid, like buying sneakers made by children, or signing up for anything Meta-owned.
That's pretty much what they said about photographs at first. I don't think you'll find a lot of people who argue that there's no art in photography now.
Asking a machine to draw a picture and then making no changes? It's still art. There was a human designing the original input. There was human intention.
And that's before they continue to use the AI tools to modify the art to better match their intention and vision.
> I understand artists etc. talking about AI in a negative sense, because they don't really get it completely, or it's against their self-interest, which means they subconsciously find bad arguments to support their own interest.
This is an extremely crude characterisation of what many people feel. Plenty of artists oppose copyright-ignoring generative AI and "get" it perfectly, even use it in art, but in ways that avoid the lazy gold-rush mentality we're seeing now.
I hear you; that's not a problem of AI but a problem of copyright and other things. I suppose they'd be enraged if an artist replicated their art too closely, rightly or wrongly. Isn't it flattery that your art is literally copied millions of times? I guess not when it doesn't pay you, which is a separate issue from AI in my opinion. Theoretically we could have models trained only on public-domain work, which would have addressed that concern.
Just like you cannot put piracy back into the bag for movies and TV shows, you cannot put AI back into the bag it came from. Bottom line: this is happening (more like it has happened), so now let's think about what that means and find a way forward.
A prime example is voice acting. I hear why voice actors are mad if someone can steal their voice. But why not work on a legal framework to sell your voice for royalties or whatever? I mean, if we can get that lovely voice of yours without you spending weeks on it, and you're still compensated fairly, I don't see how this is a problem. And I know this is already happening, as it should.
This kind of talk I see as an extension of OP's rant. You talk as if the mass theft by these LLM-growing companies was inevitable. Hogwash, and absolutely wrong. It isn't inevitable and, in my opinion, it shouldn't be.
I work at a company trying very hard to incorporate AI into pretty much everything we do. The people pushing it tend to have little understanding of the technology, while the more experienced technical people see a huge mismatch between its advertised benefits and actual results. I have yet to see any evidence that AI is "paradigm shifting" much less "revolutionary." I would be curious to hear any data or examples you have backing those claims up.
In regards to why tech people should be skeptical of AI: technology exists solely to benefit humans in some way. Companies that employ technology should use it to benefit at least one human stakeholder group (employees, customers, shareholders, etc.). So far what I have seen is that AI has reduced hiring (negatively impacting employees), created a lot of bad user interfaces (bad for customers), and cost way more money to companies than they are making off of it (bad for shareholders, at least in the long run). AI is an interesting and so far mildly useful technology that is being inflated by hype and causing a lot of damage in the process. Whether it becomes revolutionary like the Internet or falls by the wayside like NFTs and 3D TVs is unknowable at this point.
Big tech senior software engineer working on a major AI product speaking:
I totally agree with the message in the original post.
Yes, AI is going to be everywhere, and it's going to create amazing value and serious challenges, but it's essential to make it optional.
This is not only for the sake of users' freedom. This is essential for companies creating products.
This is Minority Report, until it is not.
AI has many modes of failure, exploitability, and unpredictability. Some are known and many are not. We have fixes for some, and band-aids for others, but many are not even known yet.
It is essential to make AI optional, to have a "dumb" alternative to everything delegated to a Gen AI.
These options should be given to users, but also, and maybe even more importantly, be baked into the product as an actively maintained and tested plan B.
The general trend of cost cutting will not be aligned with this. Many products will remove, intentionally or not, the non-AI paths, and when the AI fails (not if), they will regret this decision.
This is not a criticism of AI or of the shift in trends toward it; it's a warning for anyone who does not take seriously the fundamental unpredictability of generative AI.
When people talk about AI, they aren't talking about the algorithms and models. They're talking about the business. If you can't honestly stand up and look at the way the AI companies and related business are operating and not feel at least a little unease, you're probably Sam Altman.
> I understand artists etc. talking about AI in a negative sense, because they don't really get it completely, or it's against their self-interest, which means they subconsciously find bad arguments to support their own interest.
Yeah, no. It's presumptuous to say that these are the only reasons. I don't think you understand at all.
> So if I produce something (art, a product, a game, a book) and it's good, and it's useful to you, fun to you, beautiful to you, and you cannot really determine whether it's AI, does it matter? Like how does it matter?
Because to me, and many others, art is a form of communication. Artists toil because they want to communicate something to the world- people consume art because they want to be spoken to. It's a two-way street of communication. Every piece created by a human carries a message, one that's sculpted by their unique life experiences and journey.
AI-generated content may look nice on the surface, but fundamentally it says nothing at all. There is no message or intent behind a probabilistic algorithm putting pixels onto my screen.
When a person encounters AI content masquerading as human-made, it's a betrayal of expectations. There is no two-way communication, the "person" on the other side of the phone line is a spam bot. Think about how you would feel being part of a social group where the only other "people" are LLMs. Do you think that would be fulfilling or engaging after the novelty wears off?
Yes. The work of art should require skills that took years to hone, and innate talent. If it was produced without such, it is a fraud; I've been deceived.
But in fact I was not deceived in that sense, because the work is based on talent and skill: that of numerous unnamed, unattributed people.
It is simply a low-effort plagiarism, presented as an original work.
I think we're all just sick of having everything upended and forced on us by tech companies. This is true even if it is inevitable. It occurred to me lately that modern tech and the modern internet has sort of turned into something which is evil in the way that advertising is evil. (this is aside from the fact of course that the internet is riddled with ads)
Modern tech is 100% about trying to coerce you: you need to buy X, you need to be outraged by X, you must change X in your life or else fall behind.
I really don't want any of this, I'm sick of it. Even if it's inevitable I have no positive feelings about the development, and no positive feelings about anyone or any company pushing it. I don't just mean AI. I mean any of this dumb trash that is constantly being pushed on everyone.
Well you don't, and no tech company can force you to.
> you must change X in your life or else fall behind
This is not forced on you by tech companies, but by the rest of society adopting that tech because they want to. Things change as technology advances. Your feeling of entitlement that you should not have to make any change that you don't want to is ridiculous.
I honestly cannot agree more with this, while still standing behind what I said on the parent comment.
As someone who's been in tech for more than 25 years, I started to hate tech because of all the things that you've said. I loved what tech meant, and I hate what it became (to the point that I got out of the industry).
But the majority of these concerns disappear if we talk about offline models, open models. Some of that has already happened, and we know more of it will happen; it's just a matter of time. In that world, how can any of us say "I don't want a good amount of the knowledge in the whole fucking world on my computer, without even having an internet connection, paying someone, or seeing ads"?
I respect it if your stance is like a vegetarian saying "I'm ethically against eating animals"; I have no argument to that. It's not my ethical line, but I respect it. Beyond that point, though, what's the legitimate argument? Shall we make humanity worse by rejecting this paradigm-shifting, world-changing thing? Do we think about the people who are going to be able to read any content in the world in their own language, even a very obscure one that no one cares to translate? I mean, what AI means for humanity is huge.
What tech companies and governments do with AI is horrific and scary. However government will do it nonetheless, and tech companies will be supported by these powers nonetheless. Therefore AI is not the enemy, let's aim our criticism and actions to real enemies.
Well, it's not really about AI then, is it; it's about millennia of human evolution and the intrinsically human behaviours we've evolved.
Like greed. And apathy. Those are just some of the things that have enabled billionaires and trillionaires. Is it ever gonna change? Well it hasn't for millions of years, so no. As long as we remain human we'll always be assholes to each other.
If I look at a piece of art that was made by a human who earned money for making that art, then it means an actual real human out there was able to put food on their table.
If I look at a piece of "art" produced by a generative AI that was trained on billions of works from people in the previous paragraph, then I have wasted some electricity even further enriching a billionaire and encouraging a world where people don't have the time to make art.
Yes, but that electricity consumption benefits an actual person.
I'm so surprised that I often find myself having to explain this to AI boosters but people have more value than computers.
If you throw a computer in a trash compactor, that's a trivial amount of e-waste. If you throw a living person in a trash compactor, that's a moral tragedy.
The people who build, maintain, and own the datacenters. The people who work at and own the companies that make the hardware in the datacenters. The people who work to build new power plants to power the data centers. The truck drivers that transport all the supplies to build the data centers and power plants.
Call me crazy, but I'd rather live in a world with lots of artists making art and sharing it with people than a world full of data centers churning out auto-generated content.
I would be fine if data centers paid the full cost of their existence, but that isn't what happens in our world.
Instead the cost of pollution is externalised and placed on the backs of humanity's children. That includes the pollution created by those data centers running off fossil fuel generators, because it was cheaper to use gas in the short term than to invest in solar capacity and storage that pays back over the long term. The pollution from building semiconductors in servers and GPUs that will likely have less than a 10-year lifespan in an AI data center, as newer generations have lower operating cost. The cost of water for evaporative cooling being pulled from aquifers at an unsustainable rate, because it's cheaper than deploying more expensive heat pumps in a desert climate... and the pollution of the information on the internet from AI slop.
The short term gains from AI have a real world cost that most of us in the tech industry are isolated from. It is far from clear how to make this sustainable. The sums of money being thrown at AI will change the world forever.
Given that data centers use less energy than the alternative human labor they replace, they actually improve pollution. Replacing those GPUs with more efficient models also improves pollution because those replacements use less electricity for the same workload than the units they replaced.
This is such a wild take. You're 100% correct that AI-generated art consumes fewer resources than humans making art and having to, you know, eat food and stuff.
Obviously, the optimal solution is to eliminate all humans and have data centers do everything.
One thing I've noticed - artists view their own job as more valuable, more sacred, more important than virtually any other person's job.
They canonize themselves, and then act all shocked and offended when the rest of the world doesn't share their belief.
Obviously the existence of AI is valuable enough to pay the cost of offsetting a few artists' jobs, it's not even a question to us, but to artists it's shocking and offensive.
It's shocking and offensive to artists and to like-minded others because AI labs have based the product that is replacing them off of their existing labor with no compensation. It would be one thing to build a computerized artist that out-competes human artists on merit (arguably happening now), this has happened to dozens of professions over hundreds of years. But the fact that it was built directly off of their past labors with no offer, plan, or even consideration of making them whole for their labor in the corpus is unjust on its face.
Certainly there are artists with inflated egos and senses of self-importance (many computer programmers with this condition too), but does this give us moral high ground to freely use their work?
How many people is it OK to exploit to create "AI"?
Every piece of work is built off of previous work. Henry Ford designed his car based off of the design of previous cars, but made them much more efficiently. No difference here. It's always been the case that once your work is out in the world the competition is allowed to learn from it.
I read this comment as implying a similar kind of exceptionalism for technology, but expressing a different set of values. It reminds me of the frustration I’ve heard for years from software engineers who work at companies where the product isn’t software and they’re not given the time and resources to do their best work because their bosses and nontechnical peers don’t understand the value of their work.
The opposite is also true: the tech world views itself as more sacred than any other part of humanity.
You say it's obvious that the existence of AI is valuable to offset a few artists' jobs, but it is far from obvious. The benefits of AI are still unproven (a more hallucinatory google? a tool to help programmers make architectural errors faster? a way to make ads easier to create and sloppier?). The discussion as to whether AI is valuable is common on hackernews even, so I really don't buy the "it's obvious" claim. Furthermore, the idea that it is only offsetting a few artists' jobs is also unproven: the future is uncertain, it may devastate entire industries.
> One thing I've noticed - artists view their own job as more valuable, more sacred, more important than virtually any other person's job.
> They canonize themselves, and then act all shocked and offended when the rest of the world doesn't share their belief.
You could've written this about software engineers and tech workers.
> Obviously the existence of AI is valuable enough to pay the cost of offsetting a few artists' jobs, it's not even a question to us
No, it's not obvious at all. Current AI models have made it 100x easier to spread disinformation, sow discord, and undermine worker rights. These have more value to me than being able to more efficiently Add Shareholder Value
I have noticed this, but it's not artists themselves. It's mostly coming from people who have zero artistic talent themselves, but really wish they did.
> I'm so surprised that I often find myself having to explain this to AI boosters but people have more value than computers.
That is true, but it does not survive contact with Capitalism. Let's zoom out and look at the larger picture of this simple scenario of "a creator creates art, another person enjoys art":
The creator probably spends hours or days painstakingly creating a work of art, consuming a certain amounts of electricity, water and other resources. The person enjoying that derives a certain amount of appreciation, say, N "enjoyment units". If payment is exchanged, it would reasonably be some function of N.
Now an AI pops up and, prompted by another human produces another similar piece of art in minutes, consuming a teeny, teeny fraction of what the human creator would. This Nature study about text generation finds LLMs are 40 - 150x more efficient in term of resource consumption, dropping to 4 - 16 for humans in India: https://www.nature.com/articles/s41598-024-76682-6 -- I would suspect the ratio is even higher for something as time-consuming as art. Note that the time taken for the human prompter is probably even less, just the time taken to imagine and type out the prompt and maybe refine it a bit.
So even if the other person derives only 0.1N "enjoyment" units out of AI art, in purely economic terms AI is a much, much better deal for everyone involved... including for the environment! And unfortunately, AI is getting so good that it may soon exceed N, so the argument that "humans can create something AI never could" will apply to an exceptionally small fraction of artists.
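The back-of-the-envelope economics above can be sketched in a few lines. Every number here is an illustrative assumption for the sake of the argument, not a measurement:

```python
# Toy model of the "enjoyment units per resource unit" argument above.
# All numbers are illustrative assumptions, not measurements.

def enjoyment_per_resource(enjoyment_units, resource_units):
    """Enjoyment delivered per unit of resources consumed."""
    return enjoyment_units / resource_units

N = 1.0                  # enjoyment from the human-made piece
human_resources = 100.0  # electricity, water, hours for the human creator
ai_resources = 1.0       # assume ~100x less for the AI, per the efficiency claim

human_ratio = enjoyment_per_resource(N, human_resources)
ai_ratio = enjoyment_per_resource(0.1 * N, ai_resources)  # only 0.1N enjoyment

# Even at a tenth of the enjoyment, the AI piece delivers roughly 10x more
# enjoyment per resource unit under these assumptions.
print(ai_ratio / human_ratio)
```

The point is only that the purely economic comparison is lopsided even under pessimistic assumptions about how much people enjoy the AI output; the moral objections live outside this arithmetic entirely.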
There are many, many moral arguments that could be made against this scenario, but as has been shown time and again, the definition of Capitalism makes no mention of morality.
> I can only say being against this is either it’s self-interest or not able to grasp it.
So we're just waving away the carbon cost, centralization of power, privacy fallout, fraud amplification, and the erosion of trust in information? These are enormous society-level effects (and there are many more to list).
Dismissing AI criticism as simple ignorance says more about your own.
> I understand artists etc. Talking about AI in a negative sense, because they don’t really get it completely, or just it’s against their self interest which means they find bad arguments to support their own interest subconsciously.
Running this paragraph through Gemini returns a list of the fallacies it employs, including Attacking the Motive: "Even if the artists are motivated by self-interest, this does not automatically make their arguments about AI's negative impacts factually incorrect or 'bad.'"
Just as a poor person is more aware, through direct observation and experience, of the consequences of corporate capitalism and financialisation, an artist at the coal face of the restructuring of the creative economy by massive 'IP owners' and IP pirates (i.e. the companies training on their creative work without permission) is likely far more in touch with the consequences of actually existing AI than a tech worker who is financially incentivised to view them benignly.
> The idea that AI is anything less than paradigm shifting, or even revolutionary is weird to me.
This is a strange kind of anti-naturalistic fallacy. A paradigm shift (or indeed a revolution) is not in itself a good thing. One paradigm shift that has occurred recently in geopolitics, for example, is the normalisation of state murder, i.e. extrajudicial assassination in the drone war, or the current US government's use of missile attacks on alleged drug traffickers. One can generate countless other negative paradigm shifts.
> if I produce something art, product, game, book and if it’s good, and if it’s useful to you, fun to you, beautiful to you and you cannot really determine whether it’s AI. Does it matter?
1) You haven't produced it.
2) Such a thing - a beautiful product of AI that is not identifiably artificial - does not yet, and may never exist.
3) Scare quotes around intellectual property theft aren't an argument. We can abandon IP rights, in which case hurrah, tech companies have none, or we can, in law at least, respect them. Anything else is legally and morally incoherent self-justification.
4) Do you actually know anything about the history of art, any genre of it whatsoever? Because suggesting originality is impossible and 'efficiency' of production is the only form of artistic progress suggests otherwise.
> I understand artists etc. Talking about AI in a negative sense, because they don’t really get it completely, or just it’s against their self interest which means they find bad arguments to support their own interest subconsciously
>But somehow if a person “influenced” by people, ideas, art in less efficient way almost we applaud that because what else, invent the wheel again forever?
I understand AI perfectly fine, thanks. I just reject the illegal vacuuming up of everyone's art for corporations while things like sampling in music remain illegal. This idea that everything must be efficient comes from the bowels of Silicon Valley and should die.
> However tech people who thinks AI is bad, or not inevitable is really hard to understand. It’s almost like Bill Gates saying “we are not interested in internet”. This is pretty much being against the internet, industrialization, print press or mobile phones. The idea that AI is anything less than paradigm shifting, or even revolutionary is weird to me. I can only say being against this is either it’s self-interest or not able to grasp it.
Again, the problem is less the tech itself than the corporations who have control of it. Yes, I'm against corporations gobbling up everyone's data for ads and AI surveillance. I think you might be the one who doesn't understand that not everything is roses, and there might be more weeds in the garden than flowers.
It's not only corps though; AI at this point includes many open models, and we'll have more of them as we go if needed. Just as the original hacker culture gave birth to the open source movement, AI will follow the same path.
When LLMs first took off, people were talking about how governments would control them, but anyone who knows the history of personal computing and hacker culture knew that's not the way things go in this world.
Do I enjoy corpos making money off of anyone's work, including obvious things like literally pirating books to train their models (Meta)? Absolutely not. However, you are blaming the wrong thing here: it's not the technology's fault, it's how governments are always corrupted and side with money instead of their people. We should be lashing out at them, not at each other, not at the people who use AI, and certainly not at the people who innovate and build it.
The people who "innovate" and build it are working for Meta and the companies that shamelessly steal from individuals. Companies are made up of individuals who make these decisions.
>I can only say being against this is either it’s self-interest or not able to grasp it.
I'm massively burnt out, what can I say? I can grasp new tech perfectly fine, but I don't want to. I quite honestly can't muster enough energy to care about "revolutionary" things anymore.
If anything I resent having to deal with yet more "revolutionary" bullshit.
To me, it matters because most serious art requires time and effort to study, ponder, and analyze.
The more stuff that exists in the world that superficially looks like art but is actually meaningless slop, the more likely it is that your time and effort is wasted on such empty nonsense.
As someone else put it succinctly, there's art and then there's content. AI generated stuff is content.
And not to be too dismissive of copywriters, but old Buzzfeed style listicles are content as well. Stuff that people get paid pennies per word for, stuff that a huge amount of people will bid on on a gig job site like Fiverr or what have you is content, stuff that people churn out by rote is content.
Creative writing on the other hand is not content. I won't call my shitposting on HN art, but it's not content either because I put (some) thought into it and am typing it out with my real hands. And I don't have someone telling me what I should write. Or paying me for it, for that matter.
Meanwhile, AI doesn't do anything on its own. It can be made to simulate doing stuff on its own (by running it continuously and unattended, or by feeding it a regular stream of prompts), but it won't suddenly go "I'm going to shitpost on HN today" unless told to.
…and the sting is that the majority of people employed in creative fields are hired to produce content, not art. AI makes this blatantly clear with no fallbacks to ease the mind.
The problem is that LLMs are just parrots who swoop into your house and steal everything, then claim it as theirs. That's not art, that's thievery and regurgitation. To resign oneself that this behavior is ok and inevitable is sad and cowardly.
To conflate LLMs with a printing press or the internet is dishonest; yes, it's a tool, but one which degrades society in its use.
Contemporary AI is bad in the same way a Walther P.38 is bad: it's a tool designed by an objectively, ontologically evil force specifically for their evil ends. We live in a world where there are no hunting rifles, no pea-shooters for little old women to protect themselves, no sport pistols. Just the AI equivalent of a weapon built for easy murder by people whose express end is taking over the world.
...Okay, now maybe take that and dial it back a few notches of hyperbole, and you'll have a reasonable explanation for why people have issues with AI as it currently exists. People are not wrong to recognize that, just because some people use AI for benign reasons, the people and companies that have formed a cartel for the tech mainly see those benign reasons as incidental to becoming middle men in every single business and personal computing task.
Of course, there is certainly a potential future where this is not the case, and AI is truly a prosocial, democratizing technology. But we're not there, and we will have a hard time getting there with Zuckerberg, Altman, Nadella, and Musk at the helm.
As someone who spends quite a bit of time sketching and drawing for my own satisfaction, it does matter to me when something is created using AI.
I can tell whether something is a matte painting, Procreate, watercolor, or some other medium. I have enough taste to distinguish between a neophyte and an expert.
I know what it means to be that good.
Sure, most people couldn’t care less, and they’re happy with something that’s simply pleasant to look at.
But for those people, it wouldn't matter either way whether it was AI-generated. So what is the point?
You created something without having to get a human to do it. Yaay?
Except we already have more content than we know what to do with, so what exactly are we gaining here? Efficiency?
Generative AI was fed on the free work and joy of millions, only to mechanically regurgitate content without attribution. To treat creators as middlemen in the process.
Yaay, efficient art. This is really what is missing in a world with more content than we have time to consume.
The point of markets, of progress, is the improvement of the human condition. That is the whole point of every regulation, every contract, and every innovation.
I am personally not invested in a world that is worse for humanity.
I mean, we already stopped caring about the dumb stock photos at the top of every blog post, so we already don't care about stuff that's meaningless, yet it still happens because there is an audience for it.
Art can be about many things; we have a lot of tech-oriented art (think of the demoscene). No one gives a shit about art that evokes nothing for them, so if AI art evokes nothing, who cares; and if it does evoke something, is it suddenly bad because it's AI? How?
Actually I think AI will force a good number of mediums to their logical conclusion: if what you do is mediocre and unoriginal, and AI can do the same or better, then that's on you. Once you pass that threshold, that's when the world cherishes you as a recognized artist. Again, you can be an artist even if 99.9% of the world thinks what you produced is absolute garbage; that doesn't change what you do and what it means to you. Again, nothing to do with AI.
I do think AI involvement in programming is inevitable; but at this time a lot of the resistance is because AI programming currently is not the best tool for many jobs.
To extend the analogy: I have a wood stove in my living room, and when it's exceptionally cold, I enjoy using it. I don't "enjoy" stacking wood in the fall, but I'm a lazy nerd, so I appreciate the exercise. That being said, my house has central heating via a modern heat pump, and I won't go back to using wood as my primary heat source. Burning wood is purely for pleasure, and an insurance policy in case of a power outage or malfunction.
What does this have to do with AI programming? I like to think that early central heating systems were unreliable, and often it was just easier to light a fire. But, it hasn't been like that in most of our lifetimes. I suspect that within a decade, AI programming will be "good enough" for most of what we do, and programming without it will be like burning wood: Something we do for pleasure, and something that we need to do for the occasional cases where AI doesn't work.
For you it's "purely for pleasure," for me it's for money, health and fire protection. I heat my home with my wood stove to bypass about $1,500/year in propane costs, to get exercise (and pleasure) out of cutting and splitting the wood, and to reduce the fuel load around my home. If those reasons went away I'd stop.
That's a good metaphor for the rapid growth of AI. It is driven by real needs from multiple directions. For it to become evitable, it would take coercion or the removal of multiple genuine motivators. People who think we can just say no must be getting a lot less value from it than I do day to day.
You may be saving money but wood smoke is very much harmful to your lungs and heart according to the American Lung and American Heart Associations + the EPA. There's a good reason why we've adopted modern heating technologies. They may have other problems but particulate pollution is not one of them.
> For people with underlying heart disease, a 2017 study in the journal Environmental Research linked increased particulate air pollution from wood smoke and other sources to inflammation and clotting, which can predict heart attacks and other heart problems.
> A 2013 study in the journal Particle and Fibre Toxicology found exposure to wood smoke causes the arteries to become stiffer, which raises the risk of dangerous cardiac events. For pregnant women, a 2019 study in Environmental Research connected wood smoke exposure to a higher risk of hypertensive disorders of pregnancy, which include preeclampsia and gestational high blood pressure.
I acknowledge that risk. But I think it is outweighed by the savings, exercise and reduced fire danger. And I shouldn't discount the value to me of living in light clothing in winter when I burn wood, but heavily dressed to save money when burning propane. To stop me you'd have to compel me.
This is not a small thing for me. By burning wood instead of gas I gain a full week of groceries per month all year!
I acknowledge the risk of AI too, including human extinction. Weighing that, I still use it heavily. To stop me you'd have to compel me.
Cow A: "That building smells like blood and steel. I don't think we come back out of there"
Cow B: "Maybe. But the corn is right there and I’m hungry. To stop me, you'd have to compel me"
Past safety is not a perfect predictor of future safety.
I'm burning dead wood in a very high wildfire area. It is going to burn. The county takes a small percentage away ... to burn in huge pits. It really isn't possible that much if any of this wood will just slowly decay. All I'm doing is diverting a couple of cords a year to heat my home. There is additional risk to me, but I'm probably deferring the risk to others by epsilon by clearing a scintilla.
The risk involved in cutting down trees is probably greater than that of breathing in wood smoke. I'm no better at predicting which way a tree will fall than which horse will win.
The industrial revolution was pushed down the throats of a lot of people who were sufficiently upset by the developments that they invented communism, universal suffrage*, modern* policing, health and safety laws, trade unions, recognisably modern* state pensions, the motor car (because otherwise we'd be knee-deep in horse manure), zoning laws, passports, and industrial-scale sewage pumping.
I do wonder who the AI era's version of Marx will be, and what their version of the Communist Manifesto will say. IIRC, the previous times this has been said on HN, someone pointed out Ted Kaczynski's manifesto.
* Policing and some pensions and democracy did exist in various fashions before the industrial revolution, but few today would recognise their earlier forms as good enough to deserve those names today.
"You bet your ass we're all alike... We've been spoon fed baby food at school when we hungered for steak... The bits of meat that you did let slip through were pre-chewed and tasteless. We've been dominated by sadists, or ignored by the apathetic. The few that had something to teach found us willing pupils, but those few are like drops of water in the desert."
I’m all for a good argument that appears to challenge the notion of technological determinism.
> Every choice is both a political statement and a tradeoff based on the energy we can spend on the consequences of that choice.
Frequently I’ve been opposed to this sort of sentiment. Maybe it’s me, the author’s argument, or a combination of both, but I’m beginning to better understand how this idea works. I think that the problem is that there are too many political statements to compare your own against these days and many of them are made implicit except among the most vocal and ostensibly informed.
I think this is a variant of "every action is normative of itself". Using AI states that use of AI is normal and acceptable. In the same way that for any X doing X states that X is normal and acceptable - even if accompanied by a counterstatement that this is an exception and should not set a precedent.
Yeah, following the OP's logic, if I think this obsession with purity tests and politicizing every tool choice is more toxic than an LLM could ever be, then I should actively undermine that norm.
So I guess I'm morally obligated to use LLMs specifically to reject this framework? Works for me.
I really don't like the "everything is political" sentiment. Sure, lots of things are or can be, but whenever I see this idea, it usually comes from people who have a very specific mindset that's leaning further in one direction on a political spectrum and is pushing their ideology.
To clarify, I don't think pushing an ideology you believe in by posting a blog post is a bad thing. That's your right! I just think I have to read posts that feel like they have a very strong message with more caution. Maybe they have a strong message because they have a very good point - that's very possible! But often times, I see people using this as a way to say "if you're not with me, you're against me".
My problem here is that this idea that "everything is political" leaves no room for a middle ground. Is my choice to write some boilerplate code using gen AI truly political? Is it political because of power usage and ongoing investment in gen AI?
All that to say, maybe I'm totally wrong, I don't know. I'm open to an argument against mine, because there's a very good chance I'm missing the point.
Your introductory paragraph comes across very much like "people who want to change the status quo are political and people who want to maintain it are not"; which is clearly nonsense. "how things are is how they should be" is as much of an ideology, just a less conspicuous one given the existing norms.
>Is my choice to write some boiler plate code using gen AI truly political?
I am much closer to agreeing with your take here, but as you recognise, there are lots of political aspects to your actions, even if they are not conscious. Not intentionally being political doesn't mean you are not making political choices; there are many more that your AI choice touches upon: privacy issues, wealth distribution, centralisation, and so on. Of course these choices become limited by practicalities, but they still exist.
> Your introductory paragraph comes across very much like "people who want to change the status quo are political and people who want to maintain it are not"; which is clearly nonsense. "how things are is how they should be" is as much of an ideology, just a less conspicuous one given the existing norms.
With respect, I’m curious how you read all of that out of what they said...and whether it actually proves their remarks correct.
I believe that one point of the author precisely is that there seems to be no room for middle ground left in the tech space:
Resisting the status quo of hostile technology is an endless uphill battle. It requires continuous effort, mostly motivated by political or at least ideological reasons.
Not fighting it is not the same as being neutral, because not fighting it supports this status quo. It is the conscious or unconscious surrender to hostile systems, whose very purpose is to lull you into apathy through convenience.
I don't think you're wrong so much as you've tread into some semantic muddy water. What did the OP mean by 'inevitable', 'political' or 'everything'? A lot hangs on the meaning. A lot of words could be written defending one interpretation over another, and the chance of changing anyone's mind on the topic seems slim.
Very good point. At that point, though, I think it becomes hard to read the post and pin it down to specifics. Not all writing has to be specific, but now I'm just a bit confused as to what was actually being said by the author.
But you do make a good point that those words are all potentially very loaded.
> Sure, lots of things are or can be, but whenever I see this idea, it usually comes from people who have a very specific mindset that's leaning further in one direction on a political spectrum and is pushing their ideology.
This is also my core reservation against the idea.
I think that the belief only holds weight in a society that is rife with opposing interpretations about how it ought to be managed. The claim itself feels like an attempt to force someone toward the interests of the one issuing it.
> Is my choice to write some boiler plate code using gen AI truly political? Is it political because of power usage and ongoing investment in gen AI?
Apparently yes it is. This is all determined by your impressions of generative AI and its environmental and economic impact. The problem is that most blog posts signal toward a predefined in-group, either through familiarity with the author or through a preconceived belief about the subject, where it's assumed that you should already know and agree with the author about these issues. And if you don't, you're against them.
For example: I don't agree that everything is inevitable. But as I read the blog post in question, I surmised that it's an argument against the idea that human beings are at the absolute will of technological progress. And I can agree with that much. So this influences how I interpret the claim "nothing is inevitable", in addition to the title of the post and the rest of the article (and this is all additionally informed by everything I'm trying to express to you surrounding this very paragraph).
I think this speaks to the present problem of how "politics" is conflated to additionally refer to one's worldview, culture, etc., in and of themselves, instead of something distinct from, though not necessarily separable from, these things.
Politics ought to indicate toward a more comprehensive way of seeing the world, but this isn't the case for most people today, and I suspect that many people who claim to have comprehensive convictions are only "virtue signaling".
A person with comprehensive convictions about the world and how humans ought to function in it can better delineate the differences and necessary overlap between politics and other concepts that run downstream from their beliefs. But what do people actually believe in these days? That they can summarize in a sentence or two and that can objectively/authoritatively delineate an “in-group” from an “out-group” and that informs all of their cultural, political, environmental and economic considerations, and so on...
Online discourse is being cleaved into two sides vying for digital capital over hot air. The worst position you can take is a critical one that satisfies neither opponent.
You should keep reading all blog posts with a critical eye toward the appeals embedded within the medium. Or don't read them at all. Or read them less than you read material that affords you a greater context than the emotional state the author was in when they wrote the post before going back to releasing software communiques.
>Garbage companies using refurbished plane engines to power their data centers is not inevitable
Was wondering what the beef with this was until I realized author meant "companies that are garbage" and not "landfill operators using gas turbines to make power". The latter is something you probably would want.
There are many more. Aeroderivative gas turbines are not exactly new, and they have shorter lead times than regular gas turbines right now, so everyone who can get their hands on one has been willing to buy them.
>Your computer changing where things are on every update is not inevitable.
This, a million times. I honestly hate interacting with all software and 90% of the internet now. I don't care about your "U""X" front-end garbage. I highly prefer text-based sites like this one.
As my family's computer guy, my dad complains to me about this. And there's no satisfactory answer I can give him. My mom told me last year she is "done learning new technology" which seems like a fair goal but maybe not a choice one can make.
You ever see those "dementia simulator" videos where the camera spins around and suddenly all the grocery store aisles are different? That's what it must be like to be less tech literate.
It's been driving me nuts for at least a decade. I can't remember which MacOS update it was, but when they reorganized the settings to better align with iOS, it absolutely infuriated me. Nothing will hit my thunder button like taking my skills and knowledge away. I thought I might swear off Mac forever. I've been avoiding upgrading from 13 now. In the past couple of updates, the settings for displays are completely different for no reason. That's a dialog one doesn't use very often, except for example when giving a presentation. It's pretty jarring to plug in on stage in front of dozens or even hundreds of people and suddenly have to figure out a completely unfamiliar and unintuitive way of setting up mirroring.
I blame GUIs. They disempower users and put them at mercy of UX "experts" who just rearrange the deck chairs when they get bored and then tell themselves how important they are.
The MacOs settings redesign really bothered me too. Maybe it's the 20+ years of muscle memory, or maybe the new one really is that bad, but I find myself clicking around excessively and eventually giving up and using search. I'm with you here.
- some options have moved to menus which make no sense at all (e.g. all the toggles for whether a panel's menubar icon appears in the menu bar have moved off the panel for that feature and onto the Control Centre panel). But Control Centre doesn't have any options of its own, so the entire panel is a waste of time, and this has created a confusing UX where previously there was a sensible one
- loads of useful stuff I do all the time has moved a layer deeper. e.g. there used to be a top-level item called "sharing" for file/internet/printer sharing settings. It's moved one level deeper, below "General". Admittedly, "the average user" who doesn't use sharing features much, let alone want to toggle and control them, probably prefers this, but I find it annoying as heck
- following on from that, and also exhibited across the whole settings UI, UI patterns are now inconsistent across panels; this seems to be because the whole thing is a bunch of web views, presumably each controlled by a different team. So they can create whatever UI they like, with whatever tools make sense. Before, I assume, there was more consistency because panels seemed to reuse the same default controls. I'm talking about the use of tabs, or drop-downs, or expanders, or modal overlays... every top-level panel has some of these, and they all use them differently: some panels expand a list to reach sub-controls, some add a modal, some just have piles of controls in lozenges
- it renders much slower. On my M3 and M4 MBPs you can still see lag. It's utterly insane that on these basically cutting-edge processors with heaps of RAM, spare CPUs, >10 GPU cores, etc., the system control panel still lags
- they've fallen into the trap of making "features" be represented by horizontal bars with a button or toggle on the right edge. This pattern is found in Google's Material UI as well. It _kinda_ makes sense on a phone, and _almost_ makes sense on a tablet. But on a desktop where most windows could be any width, it introduces a bunch of readability errors. When the window's wide, it's very easy for the eye to lose the horizontal connection between a label and its toggle/button/etc. To get around this, Apple have locked the width of the Settings app... which also seems a bit weird.
- don't get me started on what "liquid glass" has done to the look & feel
... Did you just complain about modern technology taking power away from users only to post an AI generated song about it? You know, the thing taking away power from musicians and filling up all modern digital music libraries with garbage?
There's some cognitive dissonance on display there that I'm actually finding it hard to wrap my head around.
> Did you just complain...only to post an AI generated song about it?
Yeah, I absolutely did. Only I wrote the lyrics and AI augmented my skills by giving it a voice. I actually put significant effort into that one; I spent a couple hours tweaking it and increasing its cohesion and punchiness, iterating with ideas and feedback from various tools.
I used the computer like a bicycle for my mind, the way it was intended.
It didn't augment your skills, it replaced skills you lack. If I generate art using DALL-E or Stable Diffusion, then edit it in Krita/Photoshop/etc., it doesn't suddenly cover up the fact that I was unable to draw/paint/photograph the initial concept. It didn't augment my skills, it replaced them. If you generate "music" like that, it's not augmenting the poetry that you wish to use as lyrics, which may or may not be good in its own right; it replaced your ability to make music with it.
Computers are meant to be tools to expand our capabilities. You didn't do that. You replaced them. You didn't ride a bike; you called an Uber, because you never learned to drive or were too lazy to do it this time.
AI can augment skills by allowing for creative expression, be it with AI stem separation, neural-network based distortion effects, etc. But the difference is those are tools to be used together with other tools to craft a thing. A tool can be fully automated, but then, if it is, you are no longer an artist. No more than someone who knows how to operate a CNC machine but cannot design the parts.
This is hard for some people to understand, especially those with an engineering or programming background, but there is a point to philosophy. Innate, valuable knowledge in how a thing was produced. If I find a stone arrowhead buried under the dirt on land I know was once used for hunting by Native Americans, that arrowhead has intrinsic value to me because of its origin. Because I know it wasn't made as a replica, and because I found it.

There is a sliding scale, shades of gray here. An arrowhead I have verified is actually old but did not find myself is still more valuable than one I know is a replica. Similarly, you can, I agree, slowly un-taint an AI work with enough input, but not fully. And if a digital artist painted something by hand then had Stable Diffusion inpaint a small region as part of their process, that still bothers many, adds a taint of that tool to it, because they did not take the time to do what the tool has done and mentally weigh each pixel and each line.
By using Suno, you're firmly in the "This was generated for me" side of that line for most people, certainly most musicians. That isn't riding a bike. That's not stretching your muscles or feeling the burn of the creative process. It's throwing a hundred dice, leaving the 6's up, and throwing again until they're all 6's. Sure, you have input, but I hardly see it as impressive. You're just a reverse centaur: https://doctorow.medium.com/https-pluralistic-net-2025-09-11...
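The dice analogy pencils out, too. Here is a quick analytic check (my own illustration, not from the original comment, assuming 100 fair dice and rerolling everything that isn't a six each round):

```python
# "Throw a hundred dice, keep the sixes, rethrow the rest until all are sixes."
# The number of rounds a single die needs to show a six is Geometric(p = 1/6);
# for n dice finishing independently we need the maximum over n such variables:
#   E[max] = sum_{k>=0} P(max > k) = sum_{k>=0} (1 - (1 - q^k)^n),  q = 1 - p

def expected_rounds(n_dice=100, p=1/6, tol=1e-12):
    q = 1 - p  # probability a given die still isn't a six after one more round
    total, k = 0.0, 0
    while True:
        term = 1 - (1 - q**k) ** n_dice  # P(at least one die needs > k rounds)
        if term < tol:
            break
        total += term
        k += 1
    return total

# On the order of 25-30 rounds, on average, turns 100 random dice into all sixes.
print(expected_rounds())
```

The effort is trivial compared to rolling a hundred sixes on the first throw, which is the commenter's point: selecting and rerolling is cheap in a way that single-shot creation is not.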
And for the record, I could write a multi-page rant about how Suno is not actually what I want; its shitty UI (which will no doubt change soon) and crappy reinvention of the DAW are absolutely underpowered for tweaking and composing songs the way I want. We should instead be integrating these new music-creation models into professional tools, and making the standalone AI tools less of a push-button, one-stop shop - giving better control rather than just meekly pawing in the direction of what you want with prompts.
Because none of these AI tech bros give a damn about music. I thought with AI we would be able to put all the "timbres" of instruments into a vector database and create a truly new instrument sound. Like making a new color for the first time.
But no, we get none of that. We get mega-shitty corporate covers. I would rather hear music that's a little bad than artificially perfect-sounding.
I personally agree with everything you say, and am equally frustrated with (years later) not being able to find MacOS settings quickly - though part of that's due to searching within settings being terrible. Screen mirroring is the worst offender for me, too.
However, I support ~80 non-technical users for whom that update was a huge benefit. They're familiar with iOS on their phones, so the new interface is (whaddya know) intuitive for them. (I get fewer support calls, so it's of indirect benefit to me, too.) I try to let go of my frustration by reminding myself that learning new technology is (literally) part of my job description, but it's not theirs.
That doesn't excuse all the "rearranging the deck chairs" changes - the Tahoe re-design: why? - but I think Apple's philosophy of ignoring power users like us and aligning the settings interfaces was broadly correct.
Funny story: when my family first got a Windows computer (3.1, so... 1992 or '93?) my first reaction was "this sucks. Why can't I just tell the computer what to do anymore?" But, obviously, GUIs are the only way the vast majority will ever be able to interact with a device - and, you know, there are lots of tasks for which a visual interface is objectively better. I'd appreciate better CLI access to MacOS settings: a one-liner that mirrors to the most recently-connected display would save me so much fumbling. Maybe that's AppleScript-able? If I can figure it out I'll share here.
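On that one-liner: macOS itself doesn't expose display mirroring on the command line, but the third-party open-source `displayplacer` tool does. A minimal sketch, with caveats: the display ids below are placeholders (real ones come from `displayplacer list`), the `+`-joined-ids syntax for mirror groups is taken from that tool's documentation, and finding the "most recently connected" display would still require parsing `displayplacer list` output, which is left out here.

```python
# Sketch, not tested on real hardware: builds (but does not run) a
# displayplacer invocation that mirrors one display onto another.
# displayplacer (github.com/jakehilborn/displayplacer) is a third-party
# tool; joining two persistent screen ids with '+' in a single config
# string puts those displays into a mirror group.
import subprocess


def mirror_command(main_id: str, ext_id: str, res: str = "1920x1080") -> list[str]:
    # main_id / ext_id are placeholders; list real ids with `displayplacer list`
    return ["displayplacer", f"id:{main_id}+{ext_id} res:{res}"]


cmd = mirror_command("MAIN-DISPLAY-ID", "EXTERNAL-DISPLAY-ID")
print(" ".join(cmd))
# To actually apply it on a Mac with displayplacer installed:
# subprocess.run(cmd, check=True)
```

Wrapped in a tiny shell alias, that gets close to the one-step mirroring toggle described above; AppleScript/System Events is the other route, but the Control Center UI it would have to script changes between macOS releases.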
Individual specific things are not inevitable. But many generic concepts are because of various market, social and other forces.
There's such a thing as "multiple invention", precisely because of this. Because we all live in the same world, we have similar needs and we have similar tools available. So different people in different places keep trying to solve the same problems, build the same grounding for future inventions. Many people want to do stuff at night, so many people push at the problem of lighting. Edison's particular light bulb wasn't inevitable, but electric lighting was inevitable in some form.
So with regard to generative AI: many people worked in this field for a long time. I played with fractals and texture generators as a kid. Many people want generated artwork, for many reasons. Artwork is expensive. Artwork is sometimes too big. Or too fixed - maybe we want variation. There are many reasons to push at the problem, and it's not coordinated. I had a period way back when I was fiddling around with generating assets for Second Life because I found it personally interesting. And I'm sure I was not the only one by any means.
That's what I understand by "inevitable", that without any central planning or coordination many roads are being built to the same destination and eventually one will get there. If not one then one of the others.
I share the sentiment about the state of things, but I can't get invested in whether any of it is literally inevitable or not. To me it just doesn't matter. Inevitable or not, this is the world we live in, and it's not like I (or any group of like-minded people) can change anything.
So technically inevitable or not, it doesn't matter. People will at large keep using smart refrigerators and Tiktok.
You are mistaken. The future is defined by the common man on the street. Those are the same people who use their WhatsApp, Facebook, and Instagram accounts heavily and regularly. They will soon become the biggest drivers of AI adoption.
The techies are a drop in the ocean. You may build a new tech or device, but adoption is driven by the crowd, who just drift away without a pinch of resistance.
But then the pinch of resistance makes an island of like-minded thinkers. And there doesn't need to be more than 0.05% of techies to make great products that otherwise anti-correlate with what people claim is inevitable.
We should stop with over-generalization like "The future is defined by the common man on the street." It's always much more complex than that. To every trend, there is a counter-trend (even sometimes alt-trends that are not actually opposites).
Because the average media-incompetent Joe is easy to influence. Just spread some tinfoil-hat theories, or fearmonger publicly, and they have nothing else to talk about. They wouldn't even understand how bad big tech is if you forced them to research it for a month. Humans are way more stupid than we believe (just look at all the people who'd love to cuddle a wild bear they meet in Yellowstone). That we managed to get to the top of the food chain is a miracle.
The idea that the future is architected by our choices (or lack of it) is the crux of one of the opportunity spaces at the Advanced Research and Invention Agency (ARIA) in the UK.
(full disclosure: I'm working with the programme director on helping define the funding programme, so if you're working on related problems, by all means share your thoughts on the site or by reaching out!)
RE your link, I am not sure that anything of value has been articulated in describing this vague notion of "Collective Flourishing". After reading the page two or three times, the best I can understand is that it seems to be a call for technical solutions to problems that are actually deeply social in nature and require social solutions. Strange to see such nebulous slop coming from an official government agency.
Hi - the opportunity space page is meant to be broad; a more specific and targeted programme thesis within that is going to come out in the coming months. I should mention that it's not just a call for technical solutions because, as you note, a lot of this is deeply rooted in social systems and requires design solutions (i.e. it's not just building more tech). But that is good feedback!
Mobile device updates are the worst for aging parents. These devices are getting more complex to use, not easier. You shouldn't have to upend your life once a year because UX design choices force you to relearn where the things you care about live, how to find them, or how to disable/enable features you don't want or used to have.
At the highest level, this becomes a question of whether we live in a predetermined universe or not. Historians do debate the Great Man vs Great Forces narrative of human development, but even if many historical events were "close calls" or "almost didn't happens," it doesn't mean that the counterfactual would be better. Discrete things like the Juicero might not have happened, but the ridiculous "smart internet-connected products" that raised lots of VC money during the ZIRP era feel inevitable to me.
Do we really think LLMs and the generative AI craze would not have occurred if Sam Altman chose to stay at Y Combinator or otherwise got hit by a bus? People clearly like to interact with a seemingly smart digital agent, demonstrated as early as ELIZA in 1966 and SmarterChild in 2001.
My POV is that human beings have innate biases and preferences that tend to manifest in what we invent and adopt. I don't personally believe in a supernatural God, but many people around the world do. Alcoholic beverages have been independently discovered in numerous cultures across the world over centuries.
I think the best we can do is usually try to act according to our own values and nudge it in a direction we believe is best (both things OP is doing so this is not a dunk on them, just my take on their thoughts here).
"Marshall McLuhan once said, 'There is absolutely no inevitability as long as there is a willingness to contemplate what is happening.' The handwaving rhetoric that I’ve called a Borg Complex is resolutely opposed to just such contemplation when it comes to technology and its consequences. We need more thinking, not less, and Borg Complex rhetoric is typically deployed to stop rather than advance discussion. What’s more, Borg Complex rhetoric also amounts to a refusal of responsibility. We cannot, after all, be held responsible for what is inevitable. Naming and identifying Borg Complex rhetoric matters only insofar as it promotes careful thinking and responsible action."
As a tech person the older I get the less tech interests me.
Analog is where I get the fun from: no more smartwatch, smart TV, Spotify, connected-home things, automatic coffee machine - no thank you.
Almost no new technology is respectful enough to its users for me to consider making accommodations for it in my life.
It's not just that it's not fun. Any fun I derive is canceled out by the inevitable loss.
I've felt white-hot blazing anger so many times when a feature is taken away by an "update" that I am not permitted to revert. I don't want to feel that feeling anymore.
The author says: "Not being in control of course makes people endlessly frustrated, but at the same time trying to wrestle control from the parasites is an uphill battle that they expect to lose, with more frustration as a result."
While this reaction is understandable, it is difficult to feel sympathy when so few people are willing to invest the time and effort required to actually understand how these systems work and how they might be used defensively. Mastery, even partial, is one of the few genuine avenues toward agency. Choosing not to pursue it effectively guarantees dependence.
Ironically, pointing this out often invites accusations of being a Luddite or worse.
The fact is, most of the systems people use day to day that behave the way described simply require no mastery whatsoever. If your product, service, or device is locked behind learning a new skill - any skill - that inherently limits the possible size of the audience, far more than most realize. We can rail against this reality, but it is unforgiving. The average person struggling to put food on the table only has so many hours in the week to spare for sticking it to the man by learning a new operating system.
It seems we’re all experiencing a form of sticker shock, from the bill for getting ease-of-use out of software that we demanded for the past few decades.
It’s a trope now that loads of internet users will complain about Mac OS & Windows, while digging in their heels against switching to Linux. Just take your medicine people.
Sure, real people in your life - including most "normies" who only use tech to get stuff done - are using ChatGPT, but it's not "inevitable".
Everyone who runs a Google search and doesn't read past the Gemini result uses an LLM. That's easily a majority without even getting into other products.
Do you have any proof of that? ChatGPT's DAU is in the ~100m range as of the last reporting, and that's a global audience; even if every user were American, it would be barely a third of the US population.
Everything is inevitable. If it happened, then it couldn't have happened otherwise. "Your computer sending screenshots to microsoft so they can train AIs on it" was inevitable, because that's what incentives pushed them to do. Vocal opposition and boycotting might become a different kind of incentive, but in most cases it doesn't work. The fact of the matter is that corporations are powerful, shareholders are powerful, the collective mass of indifferent consumers are powerful, while you are powerless.
> The fact of the matter is that... you are powerless.
Trivialities don't add anything to the discussion. The question is "Why?" and then "How do we change that?". Even incomplete or inaccurate attempts at answering would be far more valuable than a demonstration of hand-wringing powerlessness.
This is the inevitability of unfettered capitalism. The pressure is towards generating wealth, with the intended hope that the side effect will be that this produces net 'good' for society. It has worked (to varying degrees) and has enabled the modern world. But it may well be running out of steam.
I do not think that the current philosophical world view will enable a different path. We've had resets or potential resets, COVID being a huge opportunity, but I think neither the public nor the political class had the strength to seize the moment.
We live in a world where we know the price of everything and the value of nothing. It will take dramatic change to put 'value' back where it belongs and relegate price farther down the ladder.
COVID was in many ways the opposite of a reset. It further solidified massive wealth disparity and power concentration. The positive feedback loop doesn’t appear to have a good off-ramp.
The off-ramp from the loop was always a crisis like a recession or war. Covid could have been such an example: if the government hadn't bailed everyone out, the pandemic would have been a mass-casualty event for companies and their owners. The financial crisis of 2008 would also have been such an event if not for the bailouts. The natural state of the economy is boom and bust, with wealth concentration increasing during the booms and dropping during the busts. But it turns out the busts are really painful for everyone, and so our government has learned how to mostly keep them from happening. Turns out people prefer the ever-increasing concentration of wealth and power that this results in over the temporary pain of economic depression.
A possibly less harmful trend than internet-connected beds: electrically grounded beds and bedding. There is a whole quackery around alleged benefits of being electrically grounded while you sleep. LOL!
Hot take, but I blame 90% of these problems on the internet's overreliance on funding from advertisement. It all flows from there:
1. To display ads is to sacrifice user experience. This is a slippery slope and both developers and users get used to it, which affects even ad-free services. Things like "yes/maybe later" become normal.
2. Ads are only displayed when the user visits the service directly. Therefore we cannot have open APIs, federation, alternative clients, or user customization.
3. The advertising infrastructure is expensive. This has to be paid for with more ads. Like the rocket equation, this eventually plateaus, but by then the software is bloated and cannot be funded traditionally anymore, so any dips are fatal. Constant churn.
4. Well-targeted ads are marginally more profitable, therefore all user information is valuable. Cue an entire era of tracking, privacy violations, and psychological manipulation.
5. Advertisers don't want to be associated with anything remotely controversial, so the circle of acceptable content shrinks every year. The fringes become worse and worse.
6. The system only works with a very large number of users. It becomes socially expected to participate, and at the same time, no customer support is provided when things go wrong.
I'm fairly sure ads are our generation's asbestos or leaded gasoline, and would be disappointed if they are not largely banned in the future.
I agree with the core point.
I’d add that modern marketing isn’t really aiming for heterogeneity, but for eventually producing behaviorally homogeneous user clusters. Many product and tech choices today are about shaping user behavior so users become more predictable and therefore more monetizable.
During this transition phase, the system tolerates (or creates) heterogeneity, but at the cost of complexity, friction, and inefficiency, which are mostly pushed onto users.
In that sense, this is less about “the future” and more about engineering markets through data, with trade-offs that are rarely made explicit.
I think what people tend to forget when speaking of inevitability is that the scope of their statement is important.
*Existence* of a situation as inevitable isn't so bold a claim. For example, someone will use an AI technology to cheat on an exam. Fine, it's possible. Heck, it is mathematically certain if we have a civilization that has exams and AI tech, and if that civilization runs infinitely.
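The existence half really is just arithmetic: any event with nonzero per-trial probability approaches certainty as trials accumulate. A minimal sketch with made-up numbers:

```python
def prob_at_least_once(p: float, n: int) -> float:
    """Probability that an event with per-trial probability p
    happens at least once across n independent trials."""
    return 1 - (1 - p) ** n


# A hypothetical 1-in-1000 event per exam sitting becomes
# near-certain once enough sittings happen:
for n in (10, 1_000, 1_000_000):
    print(n, prob_at_least_once(0.001, n))
```

At a million trials the probability is indistinguishable from 1, which is all the "someone will eventually do X" claim amounts to.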
*Generality* of a situation as inevitable, however, tends to go the other way.
Ads are inevitable assuming a capitalism-fueled Internet continues. If you can remember the early WWW, you know ads weren't there, but e-commerce existed in a rudimentary fashion. You couldn't buy anything by clicking a mouse, but you could go download a catalog, or some shareware that showed a phone number during installation so the sales team could give you a license key.
An interesting perspective on this is why Facebook kept the Sun Microsystems sign at its Palo Alto HQ.
Nothing in tech is inevitable, nothing in tech is irreplaceable, nothing in tech is permanent.
Shout out to the Juicero example, because there are so many people out there showing that AI can be also "just squeeze the bag with your hands".
"Ads are not inevitable." is a pretty bold statement that really damages the argument. Mixing fundamental things like that in with Juicero prevents a good will discussion.
Ads are one of the oldest and most fundamental parts of a modern society.
Mixing obviously dumb things in with fundamental ones doesn't improve the point.
I remember there was a guy who regularly posted tech predictions and then every year adjusted and reflected on his predictions. Can anyone help me find it?
I think this person is too optimistic. Everything that will give powerful people money or influence and not get them killed is pretty much near inevitable.
I’d still say: this is the future, like it or not. See how much capital has been poured into the cauldron?
None of the items is technically inevitable, but the world runs on capital, and capital alone. Tech advances are just a by-product of capital snooping around trying to increase itself.
Inevitability just means that something WILL happen, and many of those items are absolutely inevitable:
AI exists -> vacation photos exist -> it's inevitable that someone was eventually going to use AI to enhance their vacation photos.
As one of those niche power users who runs servers at home to be beholden to fewer tech companies, I still understand that most people would choose Netflix over a free jellyfin server they have to administer.
> Not being in control of course makes people endlessly frustrated
I regret to inform you, OP, that this is not true. It's true for exactly the kind of tech people like us who are already doing this stuff, because it's why we do it. Your assumption that people who don't have simply "given up", as opposed to actively choosing not to spend their time managing their own tech environment, is, I think, biased by your predilection for technology.
I wholeheartedly share OP's dislike of techno-capitalism(derogatory), but OP's list is a mishmash of
1) technologies, which are almost never intrinsically bad, and 2) business choices, which usually are.
An Internet-connected bed isn't intrinsically bad; you could set one up yourself to track your sleep statistics that pushes the data to a server you control.
It's the companies and their choices to foist that technology on people in harmful ways that makes it bad.
This is the gripe I have with anti-AI absolutists: you can train AI models on data you own, to benefit your and other communities. And people are!
But companies are misusing the technology in service of the profit motive, at the expense of others whose data they're (sometimes even illegally) ingesting.
Place the blame in the appropriate place. Something something, hammers don't kill people.
This article has such palpable disdain for the people who consume these products that it makes me wonder why the author even cares what kind of future they inhabit.
> But what is important to me is to keep the perspective of what constitutes a desirable future, and which actions get us closer or further from that.
Desirable to whom? I certainly don't think the status quo is perfect, but I do think dismissing it as purely the product of some faceless cadre of tech oligarchs' desires is arrogant. People do have agency; the author just doesn't like what they have chosen to do with it...
I like the “wasn’t inevitable” list. The fact that two US corporations control 99% of phones is another one that feels about as comfortable as a rock in my boot and I hope this too is not inevitable, in the long run.
Imagine if the 80s and 90s had been PC vs Mac, but you had to go to IBM for one or more critical pieces of software or software distribution infrastructure. The Cambrian explosion of IBM-PC compatibility didn’t happen overnight of course. I don’t think it will be (or ought to be) inevitable that phones remain opaque and locked down forever, but the day when freedom finally comes doesn’t really feel like it’s just around the corner.
> The Cambrian explosion of IBM-PC compatibility didn’t happen overnight of course.
There's a recording of an interview with Bill Gates floating around where he pretty much takes credit for that. He claims (paraphrasing because I listened to it almost 20 years ago) that he suggested a lot of the hardware to IBM because he knew he could repurpose DOS for it.
I also phrased this badly. Cambrian explosions by definition happen in very short time frames (“overnight”), but the conditions required to set off the explosion take a long time to brew and are few and far between.
We’re two decades into the smartphone era and my hope is that we’re still in the DEC / VAX / S370 stage, with the “IBM-PC” stage just around the corner still to come.
Also imagine that basic interactions were mediated by those monopolies: you had to print your bus ticket personally with software only available on your IBM.
I personally really like Apple Vision and the bar it’s pushing. However, using one of these devices long term in a walled garden sounds like a nightmare for privacy and marketing abuse of users.
> "Requiring a smartphone to exist in society is not inevitable."
Seeing smartphones morph from super neat computer/camera/music players in our pockets to unfiltered digital nicotine is depressing to think about.
Notification abuse is _entirely_ to blame, IMO.
Every app that you think of when you think of "addictive" apps heavily relies on the notifications funnel (badges, toasts, dings) for engagement. I'm disappointed that we as a society have normalized unabated casino-tier attention grabs from our most personal computing devices.
Growth through free, ad-subsidized tiers also helped create this phenomenon, but that strategy wouldn't be nearly as effective without delivery via notifications.
Big AI (more like LLM/Stable Diffusion as a Service) is going to prey on that to levels never seen before, and I'm definitely not here for it.
Obligatory end-of-post anecdote: My phone stays home most of the time when I work out. I only bring my Apple Watch.
My gym bag has an iPad mini and a BOOX eReader, but I only use the iPad to do Peloton stretches and listen to KEXP Archives, as those can't be done from my watch (though I'm working on something for the latter).
Using this setup has given me a lot of opportunities to soak in my surroundings during rest periods. Most of that is just seeing people glued to Instagram, Facebook, and YouTube. TV addiction on steroids, in other words.
Thanks to this, people like me who use their phones as tools are forced to carry huge, heavy black slabs because small phones aren't viable and, as market analysis is showing, thin and lightweight slabs won't cut it either.
Yes, and I do that aggressively. But that's an opt-in behavior, and like most things that are opt-in, people can't be bothered, and companies know that.
“The ultimate, hidden truth of the world is that it is something that we make, and could just as easily make differently.”
David Graeber
I guess the problem is scale. A system based on altruism, trust and reciprocity might work great for a community of 20 people. But it doesn't scale to millions of people. Consequently, we end up with (in the West) various shades of democracy, "the least bad system". However, democracy doesn't work well when a tiny fabulously-rich elite is able to buy up all the media and a sizeable chunk of the politicians.
I've been thinking a lot lately, challenging some of my long-held assumptions...
Big tech, the current AI trend, social media websites serving up rage bait and misinformation (not to imply this is all they do, or that they are ALL bad), the current political climate and culture...
In my view, all of these are symptoms, and the cause is the perverse, largely unchallenged neoliberal order that the West has spent the last 30-40 years (at least) living under.
Profit maximising comes before everything else. (Large) Corporate interests are almost never challenged. The result? Deliberately amoral public policy that serves the rich and powerful.
There are oases in this desert (which is, indeed, not inevitable), thankfully. As the author mentioned, there's FOSS. There's indie-created games/movies. There's everyday goodness between decent people.
The OP confuses "inevitable" with "totalizing" in part. Many bad things seem likely to be adopted by large numbers of people, and yet that does not mean everyone will be somehow forced into adopting these things.
The irony of naming this post "This is not the future" while leaving no room for the possibility that this actually probably is the future.
The whole post just reads as someone disgruntled at the state of the world and railing that they aren't getting their way. There's a toxic air of intellectual and moral superiority in that blog.
I mean... of course it's not the future. It's the present. I have been seeing AI-generated posters and menus in real life on a daily basis for about half a year. AI upscaling is completely normalized for average users. I don't know about other fields, but in graphic design we are well past this discussion.
Just like TikTok. The author doesn't think TikTok is inevitable, and I fully agree with them! But in our real timeline TikTok exists. So TikTok is, unquestionably, the present. Wide adoption of gen-AI is the present too.
Just get a new iPhone and keep paying for iCloud. I really resonate with older folks going with the flow on technology. My dad has to watch 90-second ads on his TV to see YouTube videos. I hate it for him.
As a counterpoint, consider reading Kevin Kelly’s “The Inevitable.” I also avoid most social media and have an aversion to all things “smart,” but these may actually be “inevitable.” In a capitalist society, these forces feel inevitable because there’s no Anubis balancing the scales. Only local decisions, often short-sighted. When demand is being created in a market, people will compete to capture it any way possible. Some of those ideas will last, and others will be a flash in the pan. It’s not clear to me that if you reran the tapes of history again, you’d get a significantly different outcome that didn’t include things like short-form video to exploit attention.
Any theory of how people behave works so long as (key) people follow it.
It's not really game theory but economics: the supply curve for nicely contended markets, and transaction costs for everything. Game theory only addresses the information aspects of transaction costs, and translates mostly only for equal power and information (markets).
The more enduring theory is the roof; i.e., it mostly reduces to what team you're on: which mafia don, or cold-war side, or technology you're leveraging for advantage. In this context, signaling matters most: identifying where you stand. As an influencer, the signal is that you're the leading edge, so people should follow you. The betas vie to grow the alpha, and the alpha boosts or cuts betas to retain their role as decider. The roof creates the roles and empowers creatures, not vice-versa.
The character of the roof depends on resources available: what military, economic, spiritual or social threat is wielded (in the cold war, capitalism, religion or culture wars).
The roof itself - the political franchise of the protection racket - is the origin of "civilization". The few escapes from such oppression are legendary and worth emulating, but rare. Still, that's our responsibility: to temper or escape.
Fatalism is a part of the language of fascism. Statements like, "it is inevitable," are supposing that we cannot change the future and should submit to what our interlocutor is proposing. It's a rhetorical tool to avoid critique. Someone who says, "programming as a profession is over, AI will inevitably replace developers so learn to use it and get with the program," isn't inviting discussion. But this is not how the future works and TFA is right to point out that these things are rarely ever, "inevitable."
What is inevitable? The heat death of the universe. You probably don't need to worry about it much.
Everything else can change. If someone is proposing that a given technology is, "inevitable," it's a signal that we should think about what that technology does, what it's being used to do to people, and who profits from doing it to them.
> Most old people in particular (sorry mom) have given up and resigned themselves to drift wherever their computing devices take them, because under the guise of convenience, everything is so hostile that there is no point trying to learn things, and dark patterns are everywhere. Not being in control of course makes people endlessly frustrated, but at the same time trying to wrestle control from the parasites is an uphill battle that they expect to lose, with more frustration as a result.
I'm pretty cynical, but one ray of hope is that AI-assisted coding tools have really brought down the skill requirement for doing some daunting programming tasks. E.g. in my case, I have long avoided doing much web or UI programming because there's just so much to learn and so many deep rabbit holes to go down. But with AI tools I can get off the ground in seconds or minutes, and all that cruddy HTML/JavaScript/CSS with bazillions of APIs that I could go spend time studying and tinkering with has already been digested by the AI. It spits out some crap that does the thing I mostly want. ChatGPT 5+ is pretty good at navigating all the Web APIs, so it was able to generate some WebAudio mini apps to start working with. The code looks like crap, so I hit it with a stick and get it to reorganize the code a little and write some comments, and then I can dive in and do the rest myself. It's a starting point, a prototype. It got me over the activation energy hump, and now I'm not so reluctant to actually try things out.
But like I said, I'm cynical. Right now the AI tools haven't been overly enshittified to the point they only serve their masters. Pretty soon they will be, and in ways we can't yet imagine.
I feel like it's time for a new direction in tech. Open source was a lot of fun (because I was young), but then it sort of became the default for a lot of infrastructure. I liked PG's take on startups and some cool ones came out of that era. But now the whole thing is collapsing on itself and "Silicon Valley" is broadly becoming disliked if not reviled by a lot of people, with cartoonish figures like Musk, or Joe Lonsdale calling for public hangings. The sense of wonder and innovation feels gone to me. It's still out there somewhere, though, I think, and it'd be nice to recover some of that. LLM's owned by megacorps aren't really where it's at for me - I want to see small teams in garages making something new and fun.
The less I use tech these days the happier I seem to be.
I'm basically down to Anki cards, Chrono Trigger, and the Money Matters newsletter on my phone (plus calls and messaging).
Recently I've dropped YouTube in favor of reading the New Yorkers that keep piling up.
Is it just me or is software actively getting worse too? I feel like I'm noticing more rough edges; the new macOS update doesn't feel as smooth as I used to expect from Apple products.
Life is just calmer, get an antenna and PBS, use your library, look at the fucking birds lol. The deluge of misinformation isn't worth it for the good nuggets at this point
The path we're on is not inevitable. But narratives keep it locked in.
Narratives are funny because they can be completely true and a total lie.
There's now a repeated narrative about how the AI bubble is like the railroads and dotcom and therefore will end the same. Maybe. But that makes it seem inevitable. But those who have that story can't see anything else and might even cause that to happen, collectively.
We can frame things with stories and determine the outcomes by them. If enough people believe that story, it becomes inevitable. There are many ways to look at the same thing and many different types of stories we can tell - each story makes different things inevitable.
So I have a story I'd like to promote:
There were once these big companies that controlled computing. They had it locked down. Then came IBM clones and suddenly, the big monopolies couldn't keep up with innovation via the larger marketplaces that opened up with standard (protocol) hardware interfaces. And later, the internet was new and exciting - CompuServe and AOL were so obviously going to control the internet. But then open protocols and services won because how could they not? It was inevitable that a locked-down walled garden could not compete with the dynamism that open protocols allowed.
Obviously now, this time is no different. And, in fact, we're at an inflection point that looks a lot like those other times in computing that favored tiny upstarts that made lives better but didn't make monopoly-sized money. The LLMs will create new ways to compete (and have already) that big companies will be slow to follow. The costs of creating software will go down so that companies will have to compete on things that align with users' interests.
Users' agency will have to be restored. And open protocols will again win over closed for the same reasons they did before. Companies that try to compete with the old, cynical model will rapidly lose customers and will not be able to adapt. The money to be made in software will decline, but users will have software in their interests. The AI megacorps have no moat - Chinese downloadable models are almost as good. People will again control their own data.
"This post is to underline that Nothing is inevitable."
Inevitable and being a holdout are conceptually different and you can't expect society as a whole to care or respect your personal space with regards to it.
They listed requiring a smartphone as an example. That is great, have fun with your flip phone, but that isn't for most people.
Just because you don't find something desirable doesn't mean you deserve extra attention or a special space. It also doesn't mean you can call people catering to the wants of the masses "grifters".
I would argue that the Apple Vision Pro is a rare example of something that WAS inevitable BECAUSE of the misbehavior described. Every large VR company COULD have just created and iterated on the basic design that Palmer Luckey (aside: who is also a villain here, too; don't read too much into the name-drop) forced everyone to recognize as viable, slowly developing the technology in the organic way other computing platforms came about. Instead, they all played a decade-long game of chicken, trying to bait their rivals into being the Xerox or Rio of the technology, to their... well, Apple. That is, letting them do all the hard work in standing up the tech's basic user experience, use cases, etc., then swooping in with a polished product offering.
None of these companies wanted to get Apple'd, and they (particularly Facebook) did everything they could to pay lip service to developing VR without funding anything really groundbreaking (or even obvious). Apple finally had to release something after years of promising shareholders that they weren't going to get left behind in the market, and with nothing material to skim off of competitors, the AVP is what we got.
Until Apple figures out how to dig up and purify its deep-rooted cultural rot, and learn how to actually innovate independently, every halfway-aware competitor is going to hold back development on anything they might want to appropriate. In the meantime, we all lose.
Another post arguing against the profit mechanism and modern commercial capitalism without realizing it. It's arguing against the symptoms. The problem is a system that incentivizes creating markets where a market was not needed and convincing people it will make their lives better. Yes, the problem is cultural, but it's also deeply ingrained in our economic protocol now. You can't scream into the void about specific problems and expect change unless you look at the root causes.
Following this thread takes you into political territory and governmental/regulatory capture, which I believe is the root issue that cannot be solved in late stage capitalism.
We are headed towards (or already in) corporate feudalism and I don't think anything can realistically be done about it. Not sure if this is nihilism or realism but the only real solution I see is on the individual level: make enough money that you don't have to really care about the downsides of the system (upper middle class).
So while I agree with you, I think I just disagree with the little bit you said about "can't expect anything to change without-" and would just say: can't expect anything to change except through the inertia of what already is in place.
It's technically not inevitable, sure. What has to happen for this not to become the future though, and what are the odds of that happening?
The rational choice is to act as if this was ensured to be the future. If it ends up not being the case, enough people will have made that mistake that your failure will be minuscule in the grand scheme of things, and if it's not and this is the future, you won't be left behind.
Sure beats sticking your feet in the sand and most likely fucking up or perhaps being right in the end, standing between the flames.
Fine piece of what it's really about. The feeling of losing one's joy and possible applause for doing a good job.
But the inevitable is not a fact; it's a rigged fake that is, unfortunately, adopted by humans, who flock in such large groups, echoing the same sentiments, that to those people it seems real and inevitable.
Humans in general are extremely predictable, so predictable that they seem utterly stupid and imbecilic.
I think AR/VR definitely turns off a lot of people, me included. Further it is seen as anti-social and unnecessary to many. I think the lackluster sales prove the “not inevitable” point.
I don't necessarily disagree with the crux of your point. But I suspect the lackluster sales also had something to do with the $3,500 price tag. Meta has sold over 20x as many Oculus units.
For any point in the past, this is the future. What actually happens is not what is "best" by some metric, nor what is right or what should be. It is like claiming that evolution's goal is intelligence or whatever: what survives is simply whatever sticks to the wall.
Can we change direction on how things are going? Yes, but you must understand what the "we" there means, at least in the context of a global change of direction.
In general I agree. The list of non-inevitable things at the end is interesting.
Broadly speaking, I would bin them into two categories. The first category contains things like this:
> Tiktok is not inevitable.
Things like this become widespread without coercion. I don't use TikTok or any short-form video and there's nothing forcing me to. For a while, Facebook fed me reels, and I fell for it once or twice, but recognized how awful it was and quit. However, TikTok and junk food are appealing to many people even though they are slop. The dark truth is that many people walking around just like slop, and unless there's restraint imposed by external actors, they'll consume as much as is shoveled into their troughs.
But, at the end of the day, you can live your life without using TikTok at all.
The other category would be things that become widespread on the back of coercion, to varying degrees.
> Requiring a smartphone to exist in society is not inevitable.
This is much trickier to do than living without TikTok. It's harder to get through airports or even rent a parking space now. Your alternative options will be removed by others.
After reading and watching things by quite a few historians, one of the points that sticks out is: most things in history were not inevitable. Someone had to do them and other people had to help them or at least not oppose them.
A lot of history's turning points were much closer than we think.
It had a huge impact on world history: it indirectly led to German unification, it possibly led to both world wars in the form we know them, it probably impacted colonial wars and as a result the territory of many former colonies, and probably also their current populations (by determining where colonists came from and how many of them).
I'm fairly sure there were a few very close battles during the East India Company conquest of India, especially in the period when Robert Clive was in charge.
Another one for Germany: after Wilhelm I died at 90, his liberal son Frederick III died aged only 56, after a reign of just 99 days. So instead Germany had Wilhelm II as the emperor, a conservative who wrecked all of Bismarck's successful foreign policies.
Oh, Japan attacking Pearl Harbor/the US. If the Japanese Army faction had won the internal struggle and had tried to attack the Soviets again in 1941, the USSR would probably have been toast and the US would probably have intervened only slowly and indecisively.
I can't really remember many others right now, but every country and every continent has had moments like these. A lot of them are sheer bad luck but a good chunk are just miscalculation.
History is full of what-ifs, a lot of them with huge implications for the world.
> Oh, Japan attacking Pearl Harbor/the US. If the Japanese Army faction had won the internal struggle and had tried to attack the Soviets again in 1941, the USSR would probably have been toast and the US would probably have intervened only slowly and indecisively.
Where's Japan getting the oil to fight the USSR? The deposits are all too far east [1].
Even with the US out of the war, we were denying them steel and oil, but the US embargo would have been much less effective without a Pacific navy.
Japan didn't really need to win the war directly, it just needed to put enough boots on the ground to topple the USSR by helping Germany. The Soviets couldn't afford to send a few army groups to East Asia, especially not in 1941 or 1942.
Causal thinking, as the author implies is necessary, would make you realize all of this was inevitable. The whole reason the tech boom existed, the whole reason the author is typing on what I can only guess is a 2005 T-series, the whole reason the internet made it, the whole reason all of this works, is STRICTLY because they wrenched control from us.
If FOSS was gonna work it would've by now. I love FOSS, but FOSS enthusiasts are so obnoxiously snobby about everything. In 30+ years Linux has wrenched all of 1-2% more of the desktop market away from the giants.
Everything mentioned in the article was inevitable (maybe not in this exact form, but as a principle) for us fans of Jacques Ellul and Uncle Ted. At least the money was quite good for many in the industry, for some of them it still is.
Rather than pointlessly lament the destructive power of fire, we should be spending our time learning how to fire-proof our homes.
These are all natural forces, they may be human forces but they are still natural forces. We can't stop them, we can only mitigate them -- and we _can_ mitigate them, but not if we just stick our fingers in our ears and pretend it's not going to happen.
It absolutely is. Whatever you can imagine. All of it.
There are plenty of "technology" things which have come to pass, most notably weapons, which have been developed but are not allowed to be used by someone to their fullest due to laws and social norms against harming others. These things are technology, and they would allow someone to attain wealth much more efficiently....
The parroted retort is that they are regulated because society sees them as a threat.
Well, therein is the disconnect, society isn't immutable, and can come to those conclusions about other technologies tomorrow if it so chooses...
Maybe I am misunderstanding, but I disagree a whole lot. The whole problem is that it is inevitable. Technology is an enormous organism. It does not care about ethical or moral considerations. It's a tug-of-war over who can use the most technique to succeed -- if you do not use it, you fall behind. Individuals absolutely cannot shape the future of technology. States can attempt to, but as they make use of technology for propaganda and similar reasons -- they also depend on it. It is inevitable as long as you keep digging.
Technology does nothing without humans enacting it, and humans do care about its ethical and moral considerations. Or at least some do, and everyone should. Individuals do collectively shape the future of technology.
I wish that were true. Not adopting technology carries penalties. Almost always enough for it to be a requirement.
Even so, "humans do care about its ethical and moral considerations": whose ethics? enforced how? measured how? you're going to fight efficiency and functionality? good luck.
A bit mean-spirited I think, but not unreasonable. organism: "a system or organization consisting of interdependent parts, compared to a living being". I removed the "living" part of it to allow for a bit more... abstract thinking.
Game theory is inevitable.
Because game theory is just math, the study of how independent actors react to incentives.
The specific examples called out here may or may not be inevitable. It's true that the future is unknowable, but it's also true that the future is made up of 8B+ independent actors and that they're going to react to incentives. It's also true that you, personally, are just one of those 8B+ people and your influence on the remaining 7.999999999B people, most of whom don't know you exist, is fairly limited.
If you think carefully about those incentives, you actually do have a number of significant leverage points with which to change the future. Many of those incentives are crafted out of information and trust, people's beliefs about what their own lives are going to look like in the future if they take certain actions, and if you can shape those beliefs and that information flow, you alter the incentives. But you need to think very carefully, on the level of individual humans and how they'll respond to changes, to get the outcomes you want.
Game theory just provides a mathematical framework to analyze outcomes of decisions when parts of the system have different goals. Game theory does not claim to predict human behavior (humans make mistakes, are driven by emotion and often have goals outside the "game" in question). Thus game theory is NOT inevitable.
[1] https://en.wikipedia.org/wiki/Vickrey%E2%80%93Clarke%E2%80%9...
https://blog.google/products/admanager/simplifying-programma...
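The second-price (Vickrey) mechanism linked in [1] can be sketched in a few lines. This is a minimal illustration of the idea, not any real ad system's implementation; the bidder names and amounts are made up. The key property is that the winner pays the runner-up's bid, which makes bidding your true value a dominant strategy.

```python
def second_price_auction(bids):
    """Run a sealed-bid second-price (Vickrey) auction.

    bids: dict mapping bidder name -> bid amount.
    Returns (winner, price): the highest bidder wins but pays
    only the second-highest bid.
    """
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    price = ranked[1][1]  # the runner-up's bid sets the price
    return winner, price

# The winner's payment does not depend on their own bid, so neither
# overbidding nor shading the bid can improve their outcome.
print(second_price_auction({"ad_a": 2.50, "ad_b": 1.75, "ad_c": 0.90}))
# -> ('ad_a', 1.75)
```

That incentive-compatibility is why diagramming out participants' incentives led straight to this design: truthful bids make the whole market's information honest.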
1) Identify coordination failures that lock us into bad equilibria, e.g. it's impossible to defect from the online ads model without losing access to a valuable social graph
2) Look for leverage that rewrites the payoffs for a coalition rather than for one individual: right-to-repair laws, open protocols, interoperable standards, fiduciary duty, reputation systems, etc.
3) Accept that heroic non-participation is not enough. You must engineer a new Schelling point[1] that makes a better alternative the obvious move for a self-interested majority
TL;DR: think in terms of the algebra of incentives, not in terms of squeaky-wheelism and moral exhortation.
[1] https://en.wikipedia.org/wiki/Focal_point_(game_theory)
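Points 1-3 can be made concrete with a toy two-player coordination game. Everything here is a hypothetical example, not a claim about any real market: two users choose between an "incumbent" platform and an "open" protocol, the open protocol only pays off if both adopt it (the social-graph lock-in of point 1), and a coalition-level payoff rewrite (the interoperability lever of point 2) makes the better outcome the obvious move (point 3).

```python
from itertools import product

def pure_nash(payoffs, actions):
    """Return all pure-strategy Nash equilibria of a 2-player game.

    payoffs: dict mapping (a1, a2) -> (u1, u2).
    A profile is an equilibrium if neither player can gain by
    unilaterally switching actions.
    """
    eq = []
    for a1, a2 in product(actions, repeat=2):
        u1, u2 = payoffs[(a1, a2)]
        if all(payoffs[(d, a2)][0] <= u1 for d in actions) and \
           all(payoffs[(a1, d)][1] <= u2 for d in actions):
            eq.append((a1, a2))
    return eq

acts = ["incumbent", "open"]
# Status quo: defecting to the open protocol alone loses access to
# the social graph, so a lone switcher earns nothing.
lock_in = {
    ("incumbent", "incumbent"): (2, 2),
    ("incumbent", "open"):      (2, 0),
    ("open", "incumbent"):      (0, 2),
    ("open", "open"):           (3, 3),
}
print(pure_nash(lock_in, acts))
# Two equilibria: a classic coordination failure where everyone
# starts at, and stays at, the worse one.

# Rewrite the payoffs for the coalition: e.g. an interoperability
# rule lets a lone "open" user still reach the incumbent's graph.
interop = dict(lock_in)
interop[("incumbent", "open")] = (2, 2.5)
interop[("open", "incumbent")] = (2.5, 2)
print(pure_nash(interop, acts))
# Now ('open', 'open') is the unique equilibrium: switching became
# the self-interested move, no heroic non-participation required.
```

The point of the exercise: no one's preferences changed, only the payoff structure did, and the equilibrium moved with it.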
Individual families felt like, if they took away or postponed their kids’ phones, their kid would be left out and ostracized—which was probably true as long as all the other kids had them. And if a group of families or a school wanted to coordinate on something different, they’d have to 1) be ok with seeming “backwards,” and 2) squabble about how specifically to operationalize the idea.
Haidt framed it as “four simple norms,” which offered specific new Schelling points for families to use as concrete alternatives to “it’s inevitable.” And in shockingly little time, it’s at the point where 26 states have enshrined the ideas into legislation [1].
[0] https://www.cnbc.com/2025/02/19/jonathan-haidt-on-smartphone...
[1] https://apnews.com/article/cellphones-phones-school-ban-stat...
Now let's do to AI slop what many (including Haidt and co., Australia, etc.) have done to lessen kids' social media usage.
FWIW, I gave up on FB/Twttr crap a while ago... Unfortunately, I'm still stuck with WhatsApp and LI (both big blights) for now.
YMMV
Unfortunately, it's going to destroy the Internet (and possibly society) in the process.
At the risk of dorm-room philosophizing: My instincts are all situated in the past, and I don’t know whether that’s my failure of imagination or whether it’s where everybody else is ending up too.
Do the new information-gathering Schelling points look like the past—trust in specific individual thinkers, words’ age as a signal of their reliability, private-first discussions, web of trust, known-human-edited corpora, apprenticeship, personal practice and experience?
Is there instead no meaningful replacement, and the future looks like people’s “real” lives shrinking back to human scale? Does our Tower of Babel just collapse for a while with no real replacement in sight? Was it all much more illusory than it felt all along, and the slop is just forcing us to see that more clearly?
Did the Cronkite-era television-to-cable transition feel this way to people before us?
AI slop, unfortunately, is just starting.
It is true that nobody trusts anything online... esp the Big Media and the backlash against it in the last decade+ or so. But that's exactly where AI slop is coming in. Note the crazier and crazier conspiracy theories that are taking hold all around, and not just in the MAGA-verse. And there's plenty of takers for AI slop - both consumers of it, and producers of it.
And there's plenty of profit all around. (see crypto, NFTs, and all manners of grifting)
So no, I don't think "nobody will read it". It's more like "everybody's reading it".
But I do agree on the denouement... it's destroying the internet and society along with it
In particular, the "games" can operate on the level of non-human actors like genes, or memes, or dollars. Several fields generate much more accurate conclusions when you detach yourself from an anthropocentric viewpoint, eg. evolutionary biology was revolutionized by the idea of genes as selfish actors rather than humans trying to pass along their genes; in particular, it explains such concepts as death, sexual selection, and viruses. Capitalism and bureaucracy both make a lot more sense when you give up the idea of them existing for human betterment and instead take the perspective of them existing simply for the purpose of existing (i.e. those organizations that survive are, well, those organizations that survive; there is nothing morally good or bad about them, but the filter that they passed is simply that they did not go bankrupt or get disbanded).
But underneath those, game theory is still fundamental. You can use it to analyze the incentives and selection pressures on the system, whether they are at sub-human (eg. viral, genetic, molecular), human, or super-human (memetic, capitalist, organizational, bureaucratic, or civilizational) scales.
To me, this inevitability only is guaranteed if we assume a framing of non-cooperative game theory with idealized self-interested actors. I think cooperative game theory[1] better models the dynamics of the real world. More important than thinking on the level of individual humans is thinking about the coalitions that have a common interest to resist abusive technology.
[1]: https://en.wikipedia.org/wiki/Cooperative_game_theory
The really simple finding is that when you have both repetition and reputation, cooperation arises naturally. Because now you've changed the payoff matrix; instead of playing a single game with the possibility of defection without consequences, defection now cuts you off from payoffs in the future. All you need is repeated interaction and the ability to remember when you've been screwed, or learn when your counterparty has screwed others.
This has been super relevant for career management, eg. you do much better in orgs where the management chain has been intact for years, because they have both the ability and the incentive to keep people loyal to them and ensure they cooperate with each other.
[1] https://en.wikipedia.org/wiki/Tit_for_tat
[2] https://en.wikipedia.org/wiki/The_Evolution_of_Cooperation
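The repetition-plus-reputation finding above [1][2] can be reproduced with a toy iterated prisoner's dilemma. The payoff values are the standard textbook ones (temptation 5 > reward 3 > punishment 1 > sucker 0); the strategy names are illustrative.

```python
# (my move, their move) -> my payoff; "C" cooperate, "D" defect.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(opponent_history):
    """Cooperate first, then copy the opponent's last move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strat_a, strat_b, rounds=100):
    """Iterate the game; each strategy sees the other's past moves."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strat_a(hist_b), strat_b(hist_a)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

# In a single round, defection dominates. Over 100 rounds, memory of
# being screwed changes the payoffs: mutual tit-for-tat earns far
# more than a defector's one-time gain followed by retaliation.
print(play(tit_for_tat, tit_for_tat))    # (300, 300)
print(play(always_defect, tit_for_tat))  # (104, 99)
```

The defector wins the pairwise match by 5 points but earns barely a third of what the cooperating pair does, which is exactly why cooperation arises naturally once interactions repeat.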
If cooperative coalitions to resist undesirable abusive technology models the real world better, why is the world getting more ads? (E.g. One of the author's bullet points was, "Ads are not inevitable.")
Currently in the real world...
- Ad frequency goes up: more ad interruptions in TV shows, native ads embedded in podcasts, sponsor segments in YouTube vids, etc.
- Ad space goes up: ads on refrigerator screens, gas pump touch screens, car infotainment systems, smart TVs, Google Search results, the ChatGPT UI, computer-generated virtual ads in sports broadcasts overlaid on courts and stadiums, etc.
What is the cooperative coalition that makes "ads not inevitable"?
For the entirety of the 2010's we had SaaS startups invading every space of software, for a healthy mix of better and worse, and all of them (and a number even today) are running the exact same playbook, boiled down to broad terms: burn investor money to build a massive network-effected platform, and then monetize via attention (some combo of ads, user data, audience reach/targeting). The problem is thus: despite all these firms collecting all this data (and tanking their public trust by both abusing it and leaking it constantly) for years and years, we really still only have ads. We have specifically targeted ads, down to downright abusive metrics if you're inclined and lack a soul or sense of ethics, but they are and remain ads. And each time we get a better targeted ad, the ones that are less targeted go down in value. And on and on it has gone.
Now, don't misunderstand, a bunch of these platforms are still perfectly fine business-wise because they simply show an inexpressible, unimaginable number of ads, and even if they earn shit on each one, if you earn a shit amount of money a trillion times, you'll have billions of dollars. However it has meant that the Internet has calcified into those monolith platforms that can operate that way (Facebook, Instagram, Google, the usuals) and everyone else either gets bought by them or they die. There's no middle-ground.
All of that to say: yes, on balance, we have more ads. However, the advertising industry in itself has never been in worse shape. It's now dominated by those massive tech companies to an insane degree. Billboards and other such ads, which were once commonplace, are now solely the domain of ambulance-chasing lawyers and car dealerships. TV ads are no better, production value has tanked, they look cheaper and shittier than ever, and the products are solely geared to the boomers because they're the only ones still watching broadcast TV. Hell, many are straight up shitty VHS replays of ads I saw in the fucking 90's, it's wild. We're now seeing AI video and audio dominate there too.
And going back to tech, the platforms stuff more ads into their products than ever and yet, they're less effective than ever. A lot of younger folks I know don't even bother with an ad-blocker, not because they like them, but simply because they've been scrolling past ads since they were shitting in diapers. It's just the background wallpaper of the Internet to them, and that sounds (and is) dystopian, but the problem is nobody notices the background wallpaper, which means despite the saturation, ads get less attention than ever before. And worse still, the folks who don't block cost those ad companies impressions and resources to serve those ads that are being ignored.
So, to bring this back around: the coalition that makes ads "inevitable" isn't consumers or creators, it's investors and platforms locked into the same anxiety-economy business model. Cooperative resistance exists (ad-blockers, subscription models, cultural fatigue), but it's dwarfed by the sheer scale of capital propping up attention-monetization. That's why we see more ads even as they get less effective.
This actually strikes me as a good thing. The more we can get big dumb ads out of meatspace and confine everything to devices, the better, in my opinion (though once they figure out targeted ads in public that could suck).
I know this is an unpopular opinion here, but I get a lot more value out of targeted social media ads than I ever did billboards or TV commercials. They actually...show me niche things that are relevant to my interests, that I didn't know about. It's much closer to the underlying real value of advertising than the Coca-Cola billboard model is.
> A lot of younger folks I know don't even bother with an ad-blocker, not because they like them, but simply because they've been scrolling past ads since they were shitting in diapers. It's just the background wallpaper of the Internet to them, and that sounds (and is) dystopian...
Also this. It's not dystopian. It's genuinely a better experience than sitting through a single commercial break of a TV show in the 90s (of which I'm sure we all sat through thousands). They blend in. They are easily skippable, they don't dominate near as much of your attention. It's no worse than most of the other stuff competing for your attention. It doesn't seem that difficult to me to navigate a world with background ad radiation. But maybe I'm just a sucker.
You are describing two different advertising strategies that have differing goals. The billboard/tv commercial is a blanket type that serves to foster a default in viewers minds when they consider a particular want/need. Meanwhile, the targeted stuff tries to identify a need you might be likely to have and present something highly specific that could trigger or refine that interest.
I mean the issue is the billboards aren't going away, they're just costing less and less which means you get ads for shittier products (see aforementioned lawyers, reverse mortgages and other financial scams, dick pills, etc.). If they were getting taken down I'd heartily agree with you.
> I know this is an unpopular opinion here, but I get a lot more value out of targeted social media ads than I ever did billboards or TV commercials. They actually...show me niche things that are relevant to my interests, that I didn't know about. It's much closer to the underlying real value of advertising than the Coca-Cola billboard model is.
Perhaps they work for you. I still largely get the experience that after I buy a toilet seat for example on Amazon, Amazon then regularly shows me ads for additional toilet seats, as though I've taken up throne collecting as a hobby or something.
> Also this. It's not dystopian. It's genuinely a better experience than sitting through a single commercial break of a TV show in the 90s (of which I'm sure we all sat through thousands). They blend in. They are easily skippable, they don't dominate near as much of your attention. It's no worse than most of the other stuff competing for your attention.
I mean, I personally loathe the way my attention is constantly being redirected, or attempted to be, by loud inane bullshit. I tolerate it, of course, what other option does one have, but I certainly wouldn't call it a good or healthy thing. I think our society would leap forward 20 years if we pushed the entirety of ad-tech into the ocean.
At some point it won't be worth it to maintain them, hopefully.
> I still largely get the experience that after I buy a toilet seat for example on Amazon, Amazon then regularly shows me ads for additional toilet seats, as though I've taken up throne collecting as a hobby or something.
This is definitely a thing, I feel like it's getting better though and stuff like that drops off pretty quickly. But it still doesn't bother me nearly as much as watching the same 30 second TV commercial for the 100th time, I just swipe or scroll past, and overall it's still much better than seeing the lowest common denominator stuff.
> I mean, I personally loathe the way my attention is constantly being redirected, or attempted to be, by loud inane bullshit. I tolerate it, of course, what other option does one have, but I certainly wouldn't call it a good or healthy thing. I think our society would leap forward 20 years if we pushed the entirety of ad-tech into the ocean.
I hear you, the attention economy is a brave new world, and there will probably be some course corrections. I don't think ads are really the problem though, in some ways everything vying for your attention is an ad now. Through technology we democratized the means of information distribution, and I would rather have it this way than having four TV channels, but there are some growing pains for sure.
I'll second the absolute shit out of that. My only exposure to TV anymore is hotels and I cannot fathom why anyone would spend ANY money on it as a service, let alone what I know cable costs. The ads are so LOUD now and they repeat the same like 4 or 5 of them over and over. Last business trip I could lipsync a Wendy's ad like I'd done it my whole life.
> I hear you, the attention economy is a brave new world, and there will probably be some course corrections. I don't think ads are really the problem though, in some ways everything vying for your attention is an ad now.
See I don't like the term attention economy, I vastly prefer anxiety economy. An attention economy implies at least some kind of give and take, where a user's attention is rewarded rather than simply their lack of it is attempted to be punished. The constant fomenting of FOMO and blatant use of psychological torments does not an amicable relationship make. It makes it feel like a constant back and forth of blows, disabling notifications, muting hashtags, unsubscribing from emails because you simply can't stand the NOISE anymore.
Absolutely a cooperative game - nobody was forced to build them, nobody was forced to finance them, nobody was forced to buy them. These were all willing choices, all going in the same direction. (Same goes for many of the other examples.)
I lived in a building some years ago where the landlord bragged about their Google Nest thermostat as an apartment amenity - I deliberately never connected it to my wifi while I lived there (and more modern smart devices connect to ambient cell phone networks in order to defeat this attack). In the building I currently live in, there are a bunch of elevators and locks that can be controlled by a smartphone app (so, something is gonna break when AWS goes down). I noticed this when I was initially viewing the apartment and I considered it a downside - and ultimately chose to move there anyway because every rental unit has downsides and ultimately you have to pick a set of compromises you can live with.
I view this as mostly a problem of housing scarcity - if housing units are abundant, it's easier for a person to buy their own home and then not put internet-managed smart furniture in it, or at least have more leverage against landlords. But the region I live in is unfortunately housing-constrained.
Weather predictions are just math, for example, and they are always wrong to some degree.
I'm always surprised how many 'logical' tech people shy away from simple determinism, given how obvious a deterministic universe becomes the more time you spend in computer science, and seem to insist there's some sort of metaphysical influence out there somewhere we'll never understand. There's not.
Math is almost the definition of inevitability. Logic doubly so.
Once there's a sophisticated enough human model to decipher our myriad of idiosyncrasies, we will all be relentlessly manipulated, because it is human nature to manipulate others. That future is absolutely inevitable.
Might as well fall into the abyss with open arms and a smile.
Idk if that's true.
Navier–Stokes may yet be proven Turing-undecidable, meaning fluid dynamics are chaotic enough that we can never completely forecast them no matter how good our measurement is.
Inside the model, the Navier–Stokes equations have at least one positive Lyapunov exponent. No quantum computer can out-run an exponential once the exponent is positive.
And even if we could measure every molecule with infinitesimal resolution, the atmosphere is an open system injecting randomness faster than we can assimilate it. Probability densities shred into fractal filaments (the butterfly effect), making pointwise prediction meaningless beyond the Lyapunov horizon.
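The positive-Lyapunov-exponent point can be seen even in a toy one-dimensional system. A minimal sketch (using the logistic map purely as an illustration, not Navier–Stokes; the starting values and step count are arbitrary): two trajectories that start 1e-10 apart end up completely decorrelated within a few dozen iterations, because errors grow exponentially.

```python
# Sensitive dependence on initial conditions in the logistic map
# x -> r*x*(1-x) at r = 4, a standard chaotic regime. The exact step at
# which the trajectories diverge depends on the starting values chosen.

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

a, b = 0.2, 0.2 + 1e-10  # two almost-identical initial conditions
for step in range(100):
    a, b = logistic(a), logistic(b)
    if abs(a - b) > 0.5:
        print(f"trajectories diverged to order-1 separation by step {step}")
        break
```

With a Lyapunov exponent of roughly ln 2 per step, a 1e-10 error reaches order 1 after about 33 steps, which is the "Lyapunov horizon" idea in miniature: better measurement buys you only logarithmically more forecast time.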
You might be conflating determinism with causality. Determinism is a metaphysical stance too because it asserts absence of free will.
Regardless of the philosophical nuance between the two, you are implicitly taking the vantage point of "god" or Laplace's Demon: infinite knowledge AND infinite computability based on that knowledge.
Tech people ought to know that we can't compute our way out of combinatorial explosion. That we can't even solve for a simple 8x8 game called chess algorithmically. We are bound with framing choices and therefore our models will never be a lossless, unbiased compression of reality. Asserting otherwise is a metaphysical stance, implicitly claiming human agency can sum up to a "godlike", totalizing compute.
In sum, models will never be sophisticated enough; claiming otherwise has always ended up being a form of totalitarianism, the willful assertion of one's favorite "framing", which inflicted a lot of pain in the past. What we need is computational humility. One good thing about tech interviews is that they teach people the resource complexity of computation.
So even as you chastise people for shying away from logically concluding the obvious, you're trusting your intuition over the scientific consensus. Which is fine, I've absolutely read theories or claims about quantum mechanics and said "Bullshit," safe in the knowledge that my belief or disbelief won't help or hinder scientific advancement or the operation of the universe, but I'd avoid being so publicly smug about it if I were you.
If you start studying basically any field that isn't computer science you will in fact discover that the world is rife with randomness, and that the dreams of a Laplace or Bentham are probably unrealizable, even if we can get extremely close (but of course, if you constrain behavior in advance through laws and restraints, you've already made the job significantly easier).
Thinking that reality runs like a clock is literally a centuries outdated view of reality.
Just because we weren't able to discover all of the laws of physics doesn't mean they don't apply to us.
As a physicist I think people are more sure about what an electron is, for example, than they should be, given that there is no axiomatic formulation of quantum field theory that isn't trivial, but at least there we are in spitting distance of having something to talk about such that (in very limited situations, mind you) we can speak of the inevitable. But the OP rather casually suggested, implicitly, if not explicitly, that the submitted article was wrong because "game theory," which is both glib and just like technically not a conclusion one could reasonably come to with an honest appraisal of the limitations of these sorts of ways of thinking about the world.
Now couple the fact that most people are terrible at modeling with the fact that they tend to ignore implicit constraints… the result is something less resembling science and more resembling religion.
The concept of Game Theory is inevitable because it's studying an existing phenomenon. Whether or not the researchers of Game Theory correctly model that is irrelevant to whether the phenomenon exists or not.
The models such as Prisoner's Dilemma are not inevitable though. Just because you have two people doesn't mean they're in a dilemma.
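A minimal sketch of that point: whether two players are "in a dilemma" depends entirely on the payoffs you assume. The little solver below (an illustrative toy, not any standard library API) enumerates pure-strategy Nash equilibria of a 2x2 game; swap the Prisoner's Dilemma payoffs for coordination-game payoffs and the dilemma vanishes.

```python
# Enumerate pure-strategy Nash equilibria of a 2x2 game.
# payoffs[row_action][col_action] is a (row_payoff, col_payoff) tuple;
# actions: 0 = cooperate, 1 = defect.

def nash_equilibria(payoffs):
    eq = []
    for r in (0, 1):
        for c in (0, 1):
            # (r, c) is an equilibrium if neither player gains by deviating
            row_ok = payoffs[r][c][0] >= payoffs[1 - r][c][0]
            col_ok = payoffs[r][c][1] >= payoffs[r][1 - c][1]
            if row_ok and col_ok:
                eq.append((r, c))
    return eq

# Classic Prisoner's Dilemma payoffs: mutual defection is the only equilibrium.
pd = [[(3, 3), (0, 5)],
      [(5, 0), (1, 1)]]
print(nash_equilibria(pd))      # [(1, 1)] -- defect/defect

# Change the payoffs to a coordination game and cooperation is an equilibrium.
coord = [[(4, 4), (0, 0)],
         [(0, 0), (2, 2)]]
print(nash_equilibria(coord))   # [(0, 0), (1, 1)]
```

Same two players, same formalism, completely different outcome: the "inevitability" lives in the payoff matrix you chose, not in game theory itself.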
---
To rephrase this, Technology is inevitable. A specific instance of it (ex. Generative AI) is not.
If you want to fix these things, you need to come up with a way to change the nature of the game.
Game theory makes a lot of simplifying assumptions. In the real world most decisions are made under constraints, and you typically lack a lot of information and can't dedicate enough resources to each question to find the optimal choice given the information you have. Game theory is incredibly useful, especially when talking about big, carefully thought out decisions, but it's far from a perfect description of reality
It does because it's trying to get across the point that although the world seems impossibly complex it's not. Of course it is in fact _almost_ impossibly complex.
This doesn't mean that it's useless for more complex situations; it only means that to increase its accuracy you have to add depth to the model.
Whether that word is reductionism is an exercise left to Chomsky?
They are at best an attempt to use our tools of reason and observation to predict nature, and you can point to thousands of examples, from market crashes to election outcomes, to observe how they can be flawed and fail to predict.
= “nature”, if you're an evolutionist
as for models, all models are wrong etc.
> It's also true that you, personally, are just one of those 8B+ people
Unless you communicate and coordinate!
It is not a physics theory, that works regardless of our will.
Brains can and do make straight-up mistakes all the time. Like "there was a transmission error"-type mistakes. They can't be modeled or predicted, and so humans can never truly be rational actors.
Humans also make irrational decisions all the time based on gut feeling and instinct. Sometimes with reasons that a brain backfills, sometimes not.
People can and do act against their own self-interest all the time, and not for "oh, but they actually thought X" reasons. Brains make unexplainable mistakes. Have you ever walked into a room and forgotten what you went in there to do? That state isn't modelable with game theory, and it generalizes to every aspect of human behavior.
It's not a question about the one who cannot influence the 7.9B. The question is disparity, which is like income disparity.
What happens when fewer and fewer people influence greater and greater numbers. Understanding the risk in that.
It is beyond question that it is highly useful and simplifies things to an extent that we can mathematically model interactions between agents, but only under our underlying assumptions. And these assumptions need not be true; in fact, there are studies on how models like homo oeconomicus have created a self-fulfilling reality, by making people think in the ways given by the model and adjust to the model, rather than the model approximating us, as it ideally should. Hence, I don't think you can plainly limit or frame this reality as a product of game theory.
That's not how mathematics works. "it's just math therefore it's a true theory of everything" is silly.
We cannot forget that mathematics is all about models, models which, by definition, do not account for even remotely close to all the information involved in predicting what will actually occur in reality. Game Theory is a theory about a particular class of mathematical structures. You cannot reduce all of existence to just this class of structures, and if you think you can, you'd better be ready to write a thesis on it.
Couple that with the inherent unpredictability of human beings, and I'm sorry but your Laplacean dreams will be crushed.
The idea that "it's math so it's inevitable" is a fallacy. Even if you are a hardcore mathematical Platonist you should still recognize that mathematics is a kind of incomplete picture of the real, not its essence.
In fact, the various incompleteness theorems illustrate directly, in mathematics' own terms, that the idea that a mathematical perspective or any logical system could perfectly account for all of reality is doomed from the start.
But don't forget, we discovered Nash Equilibrium, which changed how people behaved, in many interesting scenarios.
Also, from a purely Game Theoretic standpoint... All kinds of atrocities are justifiable, if it propagates your genes...
[1] http://en.wikipedia.org/wiki/The_Trap_(television_documentar...
I'll say frankly that I personally object to Star Wars on an aesthetic level - it is ultimately an artistically-flawed media franchise even if it has some genuinely compelling ideas sometimes. But what really bothers me is that Star Wars in its capacity as a modern culturally-important story cycle is also intellectual property owned by the Disney corporation.
The idea that the problems of the world map neatly to a conflict between an evil empire and a plucky rebellion is also basically propagandistic (and also boring). It's a popular storytelling frame - that's why George Lucas wrote the original Star Wars movies that way. But I really don't like seeing someone watch a TV series using the Star Wars intellectual property package and then use the story the writers chose to write - writers ultimately funded by Disney - as a basis for how they see themselves in the world politically.
Selective information dissemination, persuasion, and even disinformation are for sure the easiest ways to change the behaviors of actors in the system. However, the most effective and durable way to "spread those lies" are for them to be true!
If you can build a technology which makes the real facts about those incentives different than what it was before, then that information will eventually spread itself.
For me, the canonical example is the story of the electric car:
All kinds of persuasive messaging, emotional appeals, moral arguments, and so on have been employed to convince people that it's better for the environment if they drive an electric car than a polluting, noisy, smelly, internal-combustion gas guzzling SUV. Through the 90s and early 2000s, this saw a small number of early adopters and environmentalists adopting niche products and hybrids for the reasons that were persuasive to them, while another slice of society decided to delete their catalytic converters and "roll coal" in their diesels for their own reasons, while the average consumer was still driving an ICE vehicle somewhere in the middle of the status quo.
Then lithium battery technology and solid-state inverter technology arrived in the 2010s and the Tesla Model S was just a better car - cheaper to drive, more torque, more responsive, quieter, simpler, lower maintenance - than anything the internal combustion engine legacy manufacturers could build. For the subset of people who can charge in their garage at home with cheap electricity, the shape of the game had changed, and it's been just a matter of time (admittedly a slow process, with a lot of resistance from various interests) before EVs were simply the better option.
Similarly, with modern semiconductor technology, solar and wind energy no longer require desperate pleas from the limited political capital of environmental efforts; like hydro, they're just superior to fossil fuel power plants in a lot of regions now. There are other negative changes caused by technology, too, aided by the fact that capitalist corporations will seek out profitable (not necessarily morally desirable) projects - in particular, LLMs are reshaping the world just because the technology exists.
Once you pull a new set of rules and incentives out of Pandora's box, game theory results in inevitable societal change.
Yes, the mathematicians will tell you it's "inevitable" that people will cheat and "enshittify". But if you take statistical samplings of the universe from an outsider's perspective, you would think it would be impossible for life to exist. Our whole existence is built on disregard for the inevitable.
Reducing humanity to a bunch of game-theory optimizing automatons will be a sure-fire way to fail The Great Filter, as nobody can possibly understand and mathematically articulate the larger games at stake that we haven't even discovered.
* Actors have access to limited computation
* The "rules" of the universe are unknowable and changing
* Available sets of actions are unknowable
* Information is unknowable, continuous, incomplete, and changes based on the frame of reference
* Even the concept of an "Actor" is a leaky abstraction
There's a field of study called Agent-based Computational Economics which explores how systems of actors behaving according to sets of assumptions behave. In this field you can see a lot of behaviour that more closely resembles real world phenomena, but of course if those models are highly predictive they have a tendency to be kept secret and monetized.
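To make the contrast concrete, here is a minimal sketch in the spirit of agent-based computational economics (the rules and parameters are illustrative inventions, not any specific published model): agents on a ring repeatedly play a local public-goods game and imitate their best-paid neighbor rather than computing an optimal strategy. The aggregate behavior emerges from the interaction rules, not from any closed-form equilibrium.

```python
# Agents on a ring either cooperate (1) or defect (0). Cooperation is
# costly to the agent but benefits its two neighbors. Agents are boundedly
# rational: each round they copy the strategy of the best-paid agent in
# their immediate neighborhood (including themselves).

import random

random.seed(1)  # fixed seed so the run is reproducible
N = 100
strategy = [random.randint(0, 1) for _ in range(N)]

def payoffs(strategy):
    pay = []
    for i in range(N):
        left, right = strategy[(i - 1) % N], strategy[(i + 1) % N]
        # benefit from cooperating neighbors minus own cost of cooperating
        pay.append(2.0 * (left + right) - 1.5 * strategy[i])
    return pay

for _ in range(50):
    pay = payoffs(strategy)
    nxt = []
    for i in range(N):
        neighbors = [(i - 1) % N, i, (i + 1) % N]
        best = max(neighbors, key=lambda j: pay[j])
        nxt.append(strategy[best])  # imitation, not optimization
    strategy = nxt

print(sum(strategy), "cooperators out of", N)
```

Whether cooperation survives depends on the initial configuration and the payoff constants, which is exactly the point: simulated populations of imitating agents behave very differently from the one-shot Nash prediction.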
So for practical purposes, "game theory is inevitable" is only a narrowly useful heuristic. It's certainly not a heuristic that supports technological determinism.
You need to track everyone and everything on the internet because you did not want to cap your wealth at a reasonable price for the service. You are willing to live with accumulated sins because "its not as bad as murder". The world we have today has way more to do with these things than anything else. We do not operate as a collective, and naturally, we don't get good outcomes for the collective.
OP is 100% correct. Either you accept that the vast majority are mindless automatons (not hard to get on board with that, honestly, but still, it seems an overestimate), or there's some kind of structural imbalance, an asymmetry that's actively harmful and not the passive outcome of 8B independent actors.
Society develops antibodies to harmful technology but it happens generationally. We're already starting to view TikTok the way we view McDonalds.
But don't throw the baby out with the bath water. Most food innovation is net positive but fast food took it too far. Similarly, most software is net positive, but some apps take it too far.
Perhaps a good indicator of which companies history will view negatively are the ones where there's a high concentration of executives rationalizing their behavior as "it's inevitable."
Keep in mind, thirty years ago, I was a kid. I thought that fast food was awesome. My parents would allow me a fast food meal at best once a month, and my "privileged" friends had a fast food meal a week.
Now, I'd rather starve than eat something coming from a fast food.
But around me, normies are eating from a fast food place at least once a day.
We have at least ten big franchises in the country, and at every corner there's a kebab/tacos/weird place selling trash.
So, from my POV, I'd say that, in general, people are eating much more fast food than thirty years ago.
Like now it's possible to go days in some cities without seeing a single obese person. It's still a big problem. Outside of the cities and in lower class areas, but... I think the changes are trickling down / propagating? That's been my impression at least.
Surprised by your take on fast food, by the way. When I complain about fast food like what was ubiquitous in the 90s, I think of McDonald's and other highly processed things. The type that are covered in salt and cheap oil and artificial smells, where the meat is like reconstituted garbage, where lunch is 1500 calories, where everyone gets a giant soda, and where kids are enticed with cheap plastic crap.
But a corner kebab or taco place seems like an unequivocal positive for society, I have no complaints about their existence at all. I feel like most people eating at corner shops for half of their meals is pretty much ideal--if it's affordable to do so then it is a very sensible and economically positive division of labor. On the condition that the food be of decent quality, of course. Which sometimes it is. Perhaps not as much as it should be though, but people do have standards and will pick the better places.
But it seems that some things were and are still different.
Related to fitness, sure, there are millions of people who "go to the gym" at least once a week and buy food supplements and protein powders...
But they'll happily eat fast food several times a week.
And if we talk about ultra-processed food, it's even worse.
> But a corner kebab or taco place seems like an unequivocal positive for society, I have no complaints about their existence at all.
That's probably a big difference, because nobody here will dare say that those places serve actual food. Not because of the cultural aspect, but just because it's the case. They use the lowest quality for every ingredient, use lots of bad oils to cook, put in tons of salt and other additives... And don't get me started on the hygiene side. People are perfectly aware of that and they'll even joke about it while eating their 50% fat kebab.
At least McDonald's has hygiene on its side!
We don't have the same obesity epidemic, partly due to portion sizing and mobility, but almost half the population is overweight and figures are still going up.
https://www.obesitefrance.fr/lobesite-cest-quoi/les-chiffres...
> Tiktok is not inevitable.
TikTok the app and company: not inevitable. Short-form video as a medium, and an algorithm that samples the entire catalog (vs. just followers), were inevitable. Short-form video follows the gradual escalation of the most engaging content formats, with a legacy stretching from short-form text on Twitter to short-form photos on Instagram and Snapchat. Global content discovery is a natural next experiment after the extended follow graph.
> NFTs were not inevitable.
Perhaps Bitcoin as a proof-of-work productization was not inevitable (for a while), but once we got there, a lot of things were very much inevitable: the explosion of alternatives like Litecoin, the explosion of expressive features, reaching Turing-completeness with Ethereum, "tokens" once we got to Turing-completeness, and then "unique tokens" aka NFTs (but also colored coins in Bitcoin parlance before that). The cultural influence was less inevitable; the massive scam and hype was also not inevitable... but, to be fair, likely.
I could deconstruct more, but the broader point is: coordination is hard. All these can be done by anyone: anyone could have invented Ethereum-like system; anyone could have built a non-fungible standard over that. Inevitability comes from the lack of coordination: when anyone can push whatever future they want, a LOT of things become inevitable.
If you disavow short-form video as a medium altogether, something I'm strongly considering, then you can. It does mean you have to make sacrifices; for example, Youtube doesn't let you disable its short-form video feature, so it is inevitable for people who choose not to drop Youtube. That is still a choice though, so it is not truly inevitable.
The larger point is that there are always people pushing some sort of future, sketching it as inevitable. But the reality is that there always remains a choice, even if that choice means you have to make sacrifices.
The author is annoyed at people throwing in the towel and declaring AI inevitable, when the author apparently still sees a path to not tolerating AI. Unfortunately the author doesn't really constructively show that path, so the whole article is basically a luddite complaint.
This is not a new thing. TV monetizes human attention. Tiktok is just an evolution of TV. And Tiktok comes from China, which has a very different society. If short-form algo slop video can thrive in both liberal democracies and a heavily censored society like China, then it's probably somewhat inevitable.
TikTok and other current efforts have that monetization as their primary purpose.
The profit-first-everything-else-never approach typical in late-stage capitalism was not inevitable. It is very possible to see the specific turns that led us to this point, and they did not have to happen.
Just objectively false, and it assumes that the path humans took to allow this is the only path that could have unfolded.
Much of this tech could have been regulated early on, preventing garbage like short-form slop, from existing.
So in short, none of what you are describing is "inevitable". Someone might come up with it, and others can group together and say: "We aren't doing that, that is awful".
My personal experience is that most people don't mind these things, for example short-form content: most of my friends genuinely like that sort of content and I can to some extent also understand why. Just like heroin or smoking, it will take some generations to regulate it (and to be fair we still have problems with those two, even though they are arguably much worse).
The only way I can get to the "crypto is inevitable" take relies on the scams and fraud as the fundamental drivers. These things don't have any utility otherwise and no reason to exist outside of those.
Scams and fraud are such potent drivers that perhaps it was inevitable, but one could imagine a more competent regulatory regime that nipped this stuff in the bud.
nb: avoiding financial regulations and money laundering are forms of fraud
The idea of a cheap, universal, anonymous digital currency itself is old (e.g. eCash and Neuromancer in the '80s, Snow Crash and Cryptonomicon in the '90s).
It was inevitable that someone would try implementing it once the internet was widespread - especially as long as most banks are rent-seeking actors exploiting those relying on currency exchanges, as long as many national currencies are directly tied to failing political and economic systems, and as long as the un-banking and financial persecution of undesirables was a threat.
Doing it so extremely decentralized and with the whole proof-of-work shtick tacked on top was not inevitable and arguably not a good way to do it, nor was the cancer that has grown on top of it all...
Imagine new coordination technology X. We can remove any specific tech reference to remove prior biases. Say it is a neutral technology that could enable new types of positive coordination as well as negative.
3 camps exist.
A: The grifters. They see the opportunity to exploit and individually gain.
B: The haters. They see the grifters and denigrate the technology entirely. Leaving no nuance or possibility for understanding the positive potential.
C: The believers. They see the grift and the positive opportunity. They try and steer the technology towards the positive and away from the negative.
The basic formula for where the technology ends up is -2(A)-(B) +C. It's a bit of a broad strokes brush but you can probably guess where to bin our current political parties into these negative categories. We need leadership which can identify and understand the positive outcomes and push us towards those directions. I see very little strength anywhere from the tech leaders to politicians to the social media mob to get us there. For that, we all suffer.
Lol. Permissionless payments certainly have utility. Making it harder for governments to freeze/seize your assets has utility. Buying stuff the government disallows, often illegitimately, has value. Currency that can't be inflated has value.
And outside of pure utility, they have tons of ideological reasons to exist beyond scams and fraud. Your inability to imagine those, or your dismissal of them, is telling as to your close-mindedness.
Shouldn't you have already moved on to AI hype? The fact that you're still worshipping crypto is telling as to your close-mindedness.
Something might be "inevitable" in the sense that someone is going to create it at some point whether we like it or not.
Something is also not "inevitable" in the sense that we will be forced to use it or you will not be able to function in society. <-- this is what the author is talking about
We do not need to tolerate being abused by the elites or use their terrible products because they say so. We can just say no.
What I don't like about this sort of article is that it fails to come up with _any_ meaningful ideas on how to convince others to "just say no".
But further, the human condition has been developing for tens of thousands of years, and efforts to exploit the human condition for a couple of thousand (at least), so why would we expect a technology around for a fraction of that time to escape all of the inevitable 'abuses' of it?
What we need to focus on is mitigation, not lament that people do what people do.
I doubt that. There is a reason the videos get longer again.
So people could have ignored the short form from the beginning. And wasn't the matching algorithm the real killer feature that amazed people, not the length of the videos?
Anecdotally, I hear lots of people talking about the short attention span of Zoomers and Gen Alpha (which they define as 2012+; I'd actually shift the generation boundary to 2017+ for the reasons I'm about to mention). I don't see that with my kid's 2nd-grade classmates: many of them walk around with their nose in a book and will finish whole novels. They're the first class after phonics was reintroduced in the 2023-2024 kindergarten year; every single kid knew how to read by the end of kindergarten. Basic fluency in skills like reading and math matters.
That was also roughly the time period where mobile phones and their networks started to become reliably able to stream video at scale. That seems like a more plausible proximate cause for the timing of the rise of TikTok.
The regulatory, cultural, social, even educational factors surrounding these ideas are what could have made these not inevitable. But changes weren’t made, as there was no power strong enough to enact something meaningful.
More generally I think the problems we got into were inevitable. They are the result of platforms optimizing for their own interests at the expense of both creatives and users, and that is what any company would do.
All the platforms enshittified, they exploit their users first, by ranking addictive content higher, then they also influence creatives by making it clear only those who fit the Algorithm will see top rankings. This happens on Google, YT, Meta, Amazon, Play Store, App Store - it's everywhere. The ranking algorithm is "prompting" humans to make slop. Creatives also optimize for their self interest and spam the platforms.
"The myth of technological and political and social inevitability is a powerful tranquilizer of the conscience. Its service is to remove responsibility from the shoulders of everyone who truly believes in it. But, in fact, there are actors!"
but people just throw their hands up: "looks like another drought this year! That's California!"
I don’t see how we could politically undermine these systems, but we could all do more to contribute to open source workarounds.
We could contribute more to smart tv/e-reader/phone & tablet jailbreak ecosystems. We could contribute more to the fediverse projects. We could all contribute more to make Linux more user friendly.
For instance, we could forbid taxpayer money from being spent on proprietary software and on hardware that is insufficiently respectful of its user, and we could require that 50% of the money not spent on the now forbidden software instead be spent on sponsorships of open source contributors whose work is likely to improve the quality of whatever open alternatives are relevant.
Getting Microsoft and Google out of education would be huge re: denormalizing the practice of accepting eulas and letting strangers host things you rely on without understanding how they're leveraging that position against your interests.
France and Germany are investing in open source (https://chipp.in/news/france-and-germany-launch-docs-an-open...), though perhaps not as aggressively as I've proposed. Let's join them.
However, tech people who think AI is bad, or not inevitable, are really hard for me to understand. It's almost like Bill Gates saying "we are not interested in the internet." This is pretty much being against the internet, industrialization, the printing press, or mobile phones. The idea that AI is anything less than paradigm-shifting, or even revolutionary, is weird to me. I can only say that being against this is either self-interest or not being able to grasp it.
So if I produce something (art, a product, a game, a book) and it's good, and it's useful to you, fun to you, beautiful to you, and you cannot really determine whether it's AI - does it matter? Like, how does it matter? Is it because they "stole" all the art in the world? But somehow if a person is "influenced" by people, ideas, and art in a less efficient way, we applaud that, because what else, reinvent the wheel forever?
Art is an expression of human emotion. When I hear music, I am part of those artists journey, struggles. The emotion in their songs come from their first break-up, an argument they had with someone they loved. I can understand that on a profound, shared level.
Way back, my friends and I played a lot of StarCraft. We only played cooperatively against the AI. Until one day a friend and I decided to play against each other. I can't put into words how intense that was. When we were done (we played in different rooms of the house), we got together and laughed. We both knew what the other had gone through. We both said "man, that was intense!".
I don't get that feeling from an amalgamation of all human thoughts/emotions/actions.
One death is a tragedy. A million deaths is a statistic.
Yet humans are the ones enacting an AI for art (of some kind). Is it therefore not art because, even though a human initiated the process, the machine completed it?
If you argue that, then what about kinetic sculptures, what about pendulum painting, etc? The artist sets them in motion but the rest of the actions are carried out by something nonhuman.
And even in a fully autonomous sense; who are we to define art as being artefacts of human emotion? How typically human (tribalism). What's to say that an alien species doesn't exist, somewhere...out there. If that species produces something akin to art, but they never evolved the chemical reactions that we call emotions...I suppose it's not art by your definition?
And what if that alien species is not carbon-based? Is it therefore much of a stretch to call the art that an eventual AGI produces art?
My definition of art is a superposition: everything and nothing is art at the same time, because art is in the eye of the beholder. When I look up at the night sky, that's art, but no human emotion produced that.
Just because something beautiful can be created without emotion, that doesn't mean it's art. It just means something pleasing was created.
We have many species on earth that are "alien" to us - they don't create with emotion, they create things that are beautiful because that's just how it ended up.
Bees don't create hexagonal honeycomb because they feel a certain way, it's just the most efficient way for them to do so. Spider webs are also created for efficacy. Down to the single cell, things are constructed in beautiful ways not for the sake of beauty, but out of evolution.
The earth itself creates things that are absolutely beautiful, but are not art. They are merely the result of chemical and kinetic processes.
The "art" of it all, is how humans interpret it and build upon it, with experience, imagination, free will and emotions.
What you see in the night sky, that is not art. That is nature.
The things that humans are compelled to create under the influence of all this beauty - that is the art.
The rest of the people using the Studio Ghibli filter?
At any rate, though there is some aversion to AI art for art's sake, the real aversion to AI art is that it squeezes one of the last viable options for people to become 'working artists' and funnels that extremely hard-earned profit into the hands of the conglomerates that have enough compute to train generative models. Is making a living through your art something that we would like to value and maintain as a society? I'd say so.
It's kind of the sci-fi cliche, can you have feelings for an AI robot? If you can what does that mean.
I can't imagine having the same shared experience with an AI. Even if I could, knowing there is no consciousness there does change things (if we can know such a thing).
This reminds me of solipsism. I have no way of knowing if others are conscious, but it seems quite lonely to me if that were true. Even though it's the exact same thing from the outside. Isn't it?
If you listen to an album by your favorite band, it is highly unlikely that your feelings/emotions and interpretations correlate with what they felt. Feeling a connection to a song is just you interpreting it through the lens of your own experience, the singer isn't connecting with listeners on some spiritual level of shared experience.
I am not an AI art fan, it grosses me out, but if we are talking purely about art as a means to convey emotions around shared experiences, then the amalgamation is probably closer to your reality than a famous musician's. You could just as easily impose your feelings around a breakup or death on an AI-generated classical piano song, or a picture of a tree, or whatever.
What? There are still live music events in quiet clubs where indie artists perform.
You could argue all these things are not art because they used technology, just like AI music or images... no? Where does the spectrum of "true art" begin and end?
If you use GenAI to simply remove effort, then it’s a savings of efficiency, not an expression of ability.
If they used GenAI to create pictures that couldn’t be taken, or to create compositions, novel tableaus or effects - then that is artistic.
I suppose post-modernism may not give a hoot.
I strongly suspect automatic content synthesis will have similar effect as people get their legs under how to use it, because I strongly suspect there are even more people out there with more ideas than time.
I hear the complaints about AI being "weird" or "gross" now and I think about the complaints about Newgrounds content back in the day.
It matters because the amount of influence something has on you is directly attributable to the amount of human effort put into it. When that effort is removed, so too is the influence. Influence does not exist independently of effort.
All the people yapping about LLMs keep fundamentally not grasping that concept. They think that output exists in a pure functional vacuum.
Memes only have impact in aggregate, due to emergent properties in a McLuhanian sense. An individual meme has little to no impact compared to (some) works of art.
It's the hype other people make for them, typically after the artist has died, that makes the impact.
Stealing the Mona Lisa is what gave it its impact, rather than the brush strokes.
LLMs and AI art flip this around because potentially very little effort went into making things that potentially take lots of effort to experience and digest. That doesn't inherently mean they're not valuable, but it does mean there's no guarantee that at least one other person out there found it valuable. Even pre-AI it wasn't an iron-clad guarantee of course -- copywriting, blogspam, and astroturfing existed long before LLMs. But everyone hates those because they prey on the same social contract that LLMs do, except at a smaller scale, and with a lower effort-in:effort-out ratio.
IMO though, while AI enables malicious / selfish / otherwise anti-social behavior at an unprecedented scale, it also enables some pretty cool stuff and new creative potential. Focusing on the tech rather than those using it to harm others is barking up the wrong tree. It's looking for a technical solution to a social problem.
Yep, this is the current understanding that is being hard challenged by LLMs.
Yes, it matters to me because art is something deeply human, and I don't want to consume art made by a machine.
It doesn't matter if it's fun and beautiful, it's just that I don't want to. It's like other things in life I try to avoid, like buying sneakers made by children, or sign-up to anything Meta-owned.
Asking a machine to draw a picture and then making no changes? It's still art. There was a human designing the original input. There was human intention.
And that's before they continue to use the AI tools to modify the art to better match their intention and vision.
This is an extremely crude characterisation of what many people feel. Plenty of artists oppose copyright-ignoring generative AI and "get" it perfectly, even use it in art, but in ways that avoid the lazy gold-rush mentality we're seeing now.
Just like you cannot put piracy back in the bag when it comes to movies and TV shows, you cannot put AI back into the bag it came from. Bottom line: this is happening (more like happened), so now let's think about what that means and find a way forward.
A prime example is voice acting. I hear why voice actors are mad if someone can steal your voice. But why not work on a legal framework to sell your voice for royalties or whatever? I mean, if we can get that lovely voice of yours without you spending weeks on it, and you're still compensated fairly for it, I don't see how this is a problem. And I know this is already happening, as it should.
In regards to why tech people should be skeptical of AI: technology exists solely to benefit humans in some way. Companies that employ technology should use it to benefit at least one human stakeholder group (employees, customers, shareholders, etc). So far what I have seen is that AI has reduced hiring (negatively impacting employees), created a lot of bad user interfaces (bad for customers), and cost way more money to companies than they are making off of it (bad to shareholders, at least in the long run). AI is an interesting and so far mildly useful technology that is being inflated by hype and causing a lot of damage in the process. Whether it becomes revolutionary like the Internet or falls by the wayside like NFTs and 3D TV's is unknowable at this point.
I totally agree with the message in the original post. Yes, AI is going to be everywhere, and it's going to create amazing value and serious challenges, but it's essential to make it optional.
This is not only for the sake of users' freedom. This is essential for companies creating products.
This is minority report, until it is not.
AI has many modes of failure, exploitability, and unpredictability. Some are known and many are not. We have fixes for some, and band-aids for some others, but many are not even known yet.
It is essential to make AI optional, to have a "dumb" alternative to everything delegated to a Gen AI.
These options should be given to users, but also, and maybe even more importantly, be baked into the product as an actively maintained and tested plan-b.
The general trend of cost cutting will not be aligned with this. Many products will remove, intentionally or not, the non-ai paths, and when the AI fails (not if), they regret this decision.
This is not a criticism of AI or a shift in trends toward it; it's a warning for anyone who does not take seriously the fundamental unpredictability of generative AI.
Yeah, no. It's presumptuous to say that these are the only reasons. I don't think you understand at all.
> So if I produce something art, product, game, book and if it’s good, and if it’s useful to you, fun to you, beautiful to you and you cannot really determine whether it’s AI. Does it matter? Like how does it matter?
Because to me, and many others, art is a form of communication. Artists toil because they want to communicate something to the world- people consume art because they want to be spoken to. It's a two-way street of communication. Every piece created by a human carries a message, one that's sculpted by their unique life experiences and journey.
AI-generated content may look nice on the surface, but fundamentally they say nothing at all. There is no message or intent behind a probabilistic algorithm putting pixels onto my screen.
When a person encounters AI content masquerading as human-made, it's a betrayal of expectations. There is no two-way communication, the "person" on the other side of the phone line is a spam bot. Think about how you would feel being part of a social group where the only other "people" are LLMs. Do you think that would be fulfilling or engaging after the novelty wears off?
Yes. The work of art should require skills that took years to hone, and innate talent. If it was produced without such, it is a fraud; I've been deceived.
But in fact I was not deceived in that sense, because the work is based on talent and skill: that of numerous unnamed, unattributed people.
It is simply a low-effort plagiarism, presented as an original work.
All of those things had positive as well as negative consequences. It's not entirely unreasonable to argue against any of those, at least in part.
Modern tech is 100% about trying to coerce you: you need to buy X, you need to be outraged by X, you must change X in your life or else fall behind.
I really don't want any of this, I'm sick of it. Even if it's inevitable I have no positive feelings about the development, and no positive feelings about anyone or any company pushing it. I don't just mean AI. I mean any of this dumb trash that is constantly being pushed on everyone.
Well you don't, and no tech company can force you to.
> you must change X in your life or else fall behind
This is not forced on you by tech companies, but by the rest of society adopting that tech because they want to. Things change as technology advances. Your feeling of entitlement that you should not have to make any change that you don't want to is ridiculous.
As someone who's been in tech for more than 25 years, I started to hate tech because of all things that you've said. I loved what tech meant, and I hate what it became (to the point I got out of the industry).
But the majority of these concerns disappear if we talk about offline, open models. Some of that has already happened, and we know more will happen; it's just a matter of time. In that world, how can any of us say "I don't want a good amount of the knowledge in the whole fucking world on my computer, without even having an internet connection, paying someone, or seeing ads"?
I respect it if your stand is like a vegetarian saying "I'm ethically against eating animals"; I have no argument against that. It's not my ethical line, but I respect it. Beyond that point, though, what's the legitimate argument? Shall we make humanity worse by just rejecting this paradigm-shifting, world-changing thing? Do we think about the people who are going to be able to read any content in the world in their own language, even if their language is a very obscure one that no one cares to auto-translate? I mean, what AI means for humanity is huge.
What tech companies and governments do with AI is horrific and scary. However government will do it nonetheless, and tech companies will be supported by these powers nonetheless. Therefore AI is not the enemy, let's aim our criticism and actions to real enemies.
Like greed. And apathy. Those are just some of the things that have enabled billionaires and trillionaires. Is it ever gonna change? Well it hasn't for millions of years, so no. As long as we remain human we'll always be assholes to each other.
If I look at a piece of art that was made by a human who earned money for making that art, then it means an actual real human out there was able to put food on their table.
If I look at a piece of "art" produced by a generative AI that was trained on billions of works from people in the previous paragraph, then I have wasted some electricity even further enriching a billionaire and encouraging a world where people don't have the time to make art.
I'm so surprised that I often find myself having to explain this to AI boosters but people have more value than computers.
If you throw a computer in a trash compactor, that's a trivial amount of e-waste. If you throw a living person in a trash compactor, that's a moral tragedy.
These are all real people.
Instead the cost of pollution is externalised and placed on the backs of humanity's children. That includes the pollution created by those datacentres running off fossil fuel generators because it was cheaper to use gas in the short term than to invest in solar capacity and storage that pays back over the long term. The pollution from building semiconductors in servers and GPUs that will likely have less than a 10 year lifespan in an AI data center as newer generations have lower operating cost. The cost of water being used for evaporative cooling being pulled from aquifers at a rate that is unsustainable because it's cheaper than deploying more expensive heat pumps in a desert climate.... and the pollution of the information on the internet from AI slop.
The short term gains from AI have a real world cost that most of us in the tech industry are isolated from. It is far from clear how to make this sustainable. The sums of money being thrown at AI will change the world forever.
Obviously, the optimal solution is to eliminate all humans and have data centers do everything.
They canonize themselves, and then act all shocked and offended when the rest of the world doesn't share their belief.
Obviously the existence of AI is valuable enough to pay the cost of offsetting a few artists' jobs, it's not even a question to us, but to artists it's shocking and offensive.
Certainly there are artists with inflated egos and senses of self-importance (many computer programmers with this condition too), but does this give us moral high ground to freely use their work?
How many people is it OK to exploit to create "AI"?
You say it's obvious that the existence of AI is valuable to offset a few artists' jobs, but it is far from obvious. The benefits of AI are still unproven (a more hallucinatory google? a tool to help programmers make architectural errors faster? a way to make ads easier to create and sloppier?). The discussion as to whether AI is valuable is common on hackernews even, so I really don't buy the "it's obvious" claim. Furthermore, the idea that it is only offsetting a few artists' jobs is also unproven: the future is uncertain, it may devastate entire industries.
> They canonize themselves, and then act all shocked and offended when the rest of the world doesn't share their belief.
You could've written this about software engineers and tech workers.
> Obviously the existence of AI is valuable enough to pay the cost of offsetting a few artists' jobs, it's not even a question to us
No, it's not obvious at all. Current AI models have made it 100x easier to spread disinformation, sow discord, and undermine worker rights. Those costs weigh more heavily for me than being able to more efficiently Add Shareholder Value.
That is true, but it does not survive contact with Capitalism. Let's zoom out and look at the larger picture of this simple scenario of "a creator creates art, another person enjoys art":
The creator probably spends hours or days painstakingly creating a work of art, consuming a certain amount of electricity, water, and other resources. The person enjoying it derives a certain amount of appreciation, say, N "enjoyment units". If payment is exchanged, it would reasonably be some function of N.
Now an AI pops up and, prompted by another human, produces a similar piece of art in minutes, consuming a teeny, teeny fraction of what the human creator would. This Nature study about text generation finds LLMs are 40-150x more efficient in terms of resource consumption, dropping to 4-16x relative to humans in India: https://www.nature.com/articles/s41598-024-76682-6 -- I would suspect the ratio is even higher for something as time-consuming as art. Note that the time taken by the human prompter is probably even less: just the time to imagine and type out the prompt and maybe refine it a bit.
So even if the other person derives only 0.1N "enjoyment" units out of AI art, in purely economic terms AI is a much, much better deal for everyone involved... including for the environment! And unfortunately, AI is getting so good that it may soon exceed N, so the argument that "humans can create something AI never could" will apply to an exceptionally small fraction of artists.
There are many, many moral arguments that could be made against this scenario, but as has been shown time and again, the definition of Capitalism makes no mention of morality.
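The back-of-envelope comparison above can be sketched in a few lines of Python. Every number here is an illustrative assumption, not a measurement: the 100:1 resource ratio loosely mirrors the study's 40-150x range, and 0.1N is the pessimistic enjoyment figure from the comment.

```python
# Illustrative sketch of the "enjoyment per unit of resource" argument.
# All figures are assumed for the sake of the comparison, not measured.

def enjoyment_per_resource(enjoyment_units: float, resource_cost: float) -> float:
    """Crude efficiency metric: enjoyment units per unit of resources consumed."""
    return enjoyment_units / resource_cost

# Assume the human work yields N = 1.0 enjoyment units at 100 resource units,
# while the AI piece yields only 0.1N but costs just 1 resource unit.
human_efficiency = enjoyment_per_resource(1.0, 100.0)  # 0.01
ai_efficiency = enjoyment_per_resource(0.1, 1.0)       # 0.10

# Even at a tenth of the enjoyment, the AI piece comes out 10x "better"
# on this narrow economic metric -- which is exactly the comment's point.
print(ai_efficiency > human_efficiency)  # True
print(ai_efficiency / human_efficiency)  # 10.0
```

Of course, the sketch only formalizes the economic framing; it deliberately says nothing about the moral arguments raised above.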
So we're just waving away the carbon cost, centralization of power, privacy fallout, fraud amplification, and the erosion of trust in information? These are enormous society-level effects (and there are many more to list).
Dismissing AI criticism as simply ignorance says more about your own.
Running this paragraph through Gemini returns a list of the fallacies employed, including Attacking the Motive: "Even if the artists are motivated by self-interest, this does not automatically make their arguments about AI's negative impacts factually incorrect or 'bad.'"
Just as a poor person is more aware, through direct observation and experience, of the consequences of corporate capitalism and financialisation, an artist at the coal face of the restructuring of the creative economy by massive 'IP owners' and IP pirates (i.e., the companies training on their creative work without permission) is likely far more in touch with the consequences of actually existing AI than a tech worker who is financially incentivised to view them benignly.
> The idea that AI is anything less than paradigm shifting, or even revolutionary is weird to me.
This is a strange kind of anti-naturalistic fallacy. A paradigm shift (or indeed a revolution) is not in itself a good thing. One paradigm shift that has occurred, for example, in recent geopolitics is the normalisation of state murder, i.e., extrajudicial assassination in the drone war or the current US government's use of missile attacks on alleged drug traffickers. One can generate countless other negative paradigm shifts.
> if I produce something art, product, game, book and if it’s good, and if it’s useful to you, fun to you, beautiful to you and you cannot really determine whether it’s AI. Does it matter?
1) You haven't produced it.
2) Such a thing - a beautiful product of AI that is not identifiably artificial - does not yet, and may never exist.
3) Scare quotes around intellectual property theft aren't an argument. We can abandon IP rights - in which case hurrah, tech companies have none - or we can in law at least, respect them. Anything else is legally and morally incoherent self justification.
4) Do you actually know anything about the history of art, any genre of it whatsoever? Because suggesting originality is impossible and 'efficiency' of production is the only form of artistic progress suggests otherwise.
> I understand artists etc. Talking about AI in a negative sense, because they don’t really get it completely, or just it’s against their self interest which means they find bad arguments to support their own interest subconsciously
>But somehow if a person “influenced” by people, ideas, art in less efficient way almost we applaud that because what else, invent the wheel again forever?
I understand AI perfectly fine, thanks. I just reject the illegal vacuuming up of everyone's art for corporations while things like sampling in music remain illegal. This idea that everything must be efficient comes from the bowels of Silicon Valley and should die.
> However tech people who thinks AI is bad, or not inevitable is really hard to understand. It’s almost like Bill Gates saying “we are not interested in internet”. This is pretty much being against the internet, industrialization, print press or mobile phones. The idea that AI is anything less than paradigm shifting, or even revolutionary is weird to me. I can only say being against this is either it’s self-interest or not able to grasp it.
Again, the problem is less the tech itself than the corporations who have control of it. Yes, I'm against corporations gobbling up everyone's data for ads and AI surveillance. I think you might be the one who doesn't understand that not everything is roses, and there might be more weeds in the garden than flowers.
When LLMs first took off, people were talking about how governments would control them, but anyone who knows the history of personal computing and hacker culture knew that's not the way things go in this world.
Do I enjoy corpos making money off of anyone's work, including obvious things like literally pirating books to train their models (Meta)? Absolutely not. However, you are blaming the wrong thing here; it's not the technology's fault, it's how governments are always corrupted and side with money instead of their people. We should be lashing out at them, not at each other, not at the people who use AI, and certainly not at the people who innovate and build it.
I'm massively burnt out, what can I say? I can grasp new tech perfectly fine, but I don't want to. I quite honestly can't muster enough energy to care about "revolutionary" things anymore.
If anything I resent having to deal with yet more "revolutionary" bullshit.
Actual AI? Sure. The LLM slop we currently refer to as AI? lol, lmao even
To me, it matters because most serious art requires time and effort to study, ponder, and analyze.
The more stuff that exists in the world that superficially looks like art but is actually meaningless slop, the more likely it is that your time and effort is wasted on such empty nonsense.
And not to be too dismissive of copywriters, but old Buzzfeed style listicles are content as well. Stuff that people get paid pennies per word for, stuff that a huge amount of people will bid on on a gig job site like Fiverr or what have you is content, stuff that people churn out by rote is content.
Creative writing on the other hand is not content. I won't call my shitposting on HN art, but it's not content either because I put (some) thought into it and am typing it out with my real hands. And I don't have someone telling me what I should write. Or paying me for it, for that matter.
Meanwhile, AI doesn't do anything on its own. It can be made to simulate doing stuff on its own (by running continuously / unlimited, or by feeding it a regular stream of prompts), but it won't suddenly go "I'm going to shitpost on HN today" unless told to.
To conflate LLMs with a printing press or the internet is dishonest; yes, it's a tool, but one which degrades society in its use.
...Okay, now maybe take that and dial it back a few notches of hyperbole, and you'll have a reasonable explanation for why people have issues with AI as it currently exists. People are not wrong to recognize that, just because some people use AI for benign reasons, the people and companies that have formed a cartel for the tech mainly see those benign reasons as incidental to becoming middle men in every single business and personal computing task.
Of course, there is certainly a potential future where this is not the case, and AI is truly a prosocial, democratizing technology. But we're not there, and will have a hard time getting there with Zuckerberg, Altman, Nadella, and Musk at the helm.
I know what it means to be that good.
Sure, most people couldn’t care less, and they’re happy with something that’s simply pleasant to look at.
But for those people, it wouldn’t matter even if it weren’t AI-generated. So what is the point?
You created something without having to get a human to do it. Yaay?
Except we already have more content than we know what to do with, so what exactly are we gaining here? Efficiency?
Generative AI was fed on the free work and joy of millions, only to mechanically regurgitate content without attribution. To treat creators as middlemen in the process.
Yaay, efficient art. This is really what is missing in a world with more content than we have time to consume.
The point of markets, of progress, is the improvement of the human condition. That is the whole point of every regulation, every contract, and every innovation.
I am personally not invested in a world that is worse for humanity
Art can be about many things; we have a lot of tech-oriented art (think of the demoscene). No one gives a shit about art that evokes nothing for them. Therefore, if AI art evokes nothing, who cares; and if it does, is it suddenly bad because it's AI? How?
Actually, I think AI will force a good number of mediums to their logical conclusion: if what you do is mediocre and not original, and AI can do the same or better, then that's about you. Once you pass that threshold, that's when the world cherishes you as a recognized artist. Again, you can be an artist even if 99.9% of the world thinks what you produced is absolute garbage; that doesn't change what you do and what it means to you. Again, nothing to do with AI.
To better the analogy: I have a wood stove in my living room, and when it's exceptionally cold, I enjoy using it. I don't "enjoy" stacking wood in the fall, but I'm a lazy nerd, so I appreciate the exercise. That being said, my house has central heating via a modern heat pump, and I won't go back to using wood as my primary heat source. Burning wood is purely for pleasure, and an insurance policy in case of a power outage or malfunction.
What does this have to do with AI programming? I like to think that early central heating systems were unreliable, and often it was just easier to light a fire. But, it hasn't been like that in most of our lifetimes. I suspect that within a decade, AI programming will be "good enough" for most of what we do, and programming without it will be like burning wood: Something we do for pleasure, and something that we need to do for the occasional cases where AI doesn't work.
That's a good metaphor for the rapid growth of AI. It is driven by real needs from multiple directions. For it to become evitable, it would take coercion or the removal of multiple genuine motivators. People who think we can just say no must be getting a lot less value from it than I do day to day.
> For people with underlying heart disease, a 2017 study in the journal Environmental Research linked increased particulate air pollution from wood smoke and other sources to inflammation and clotting, which can predict heart attacks and other heart problems.
> A 2013 study in the journal Particle and Fibre Toxicology found exposure to wood smoke causes the arteries to become stiffer, which raises the risk of dangerous cardiac events. For pregnant women, a 2019 study in Environmental Research connected wood smoke exposure to a higher risk of hypertensive disorders of pregnancy, which include preeclampsia and gestational high blood pressure.
https://www.heart.org/en/news/2019/12/13/lovely-but-dangerou...
This is not a small thing for me. By burning wood instead of gas I gain a full week of groceries per month all year!
I acknowledge the risk of AI too, including human extinction. Weighing that, I still use it heavily. To stop me you'd have to compel me.
Probably the risk involved in cutting down trees is greater than that of breathing in wood smoke. I'm no better at predicting which way a tree will fall than which horse will win.
I like the metaphor of burning wood, I also think it's going to be left for fun.
I do wonder who the AI era's version of Marx will be, and what their version of the Communist Manifesto will say. IIRC, previous times this has been said on HN, someone pointed out Ted Kaczynski's manifesto.
* Policing and some pensions and democracy did exist in various fashions before the industrial revolution, but few today would recognise their earlier forms as good enough to deserve those names today.
Serena Butler.
I’m all for a good argument that appears to challenge the notion of technological determinism.
> Every choice is both a political statement and a tradeoff based on the energy we can spend on the consequences of that choice.
Frequently I’ve been opposed to this sort of sentiment. Maybe it’s me, the author’s argument, or a combination of both, but I’m beginning to better understand how this idea works. I think that the problem is that there are too many political statements to compare your own against these days and many of them are made implicit except among the most vocal and ostensibly informed.
I think this is a variant of "every action is normative of itself". Using AI states that use of AI is normal and acceptable. In the same way that for any X doing X states that X is normal and acceptable - even if accompanied by a counterstatement that this is an exception and should not set a precedent.
So I guess I'm morally obligated to use LLMs specifically to reject this framework? Works for me.
To clarify, I don't think pushing an ideology you believe in by posting a blog post is a bad thing. That's your right! I just think I have to read posts that feel like they have a very strong message with more caution. Maybe they have a strong message because they have a very good point - that's very possible! But often times, I see people using this as a way to say "if you're not with me, you're against me".
My problem here is that this idea that "everything is political" leaves no room for a middle ground. Is my choice to write some boilerplate code using gen AI truly political? Is it political because of power usage and ongoing investment in gen AI?
All that to say, maybe I'm totally wrong, I don't know. I'm open to an argument against mine, because there's a very good chance I'm missing the point.
>Is my choice to write some boiler plate code using gen AI truly political?
I am much closer to agreeing with your take here, but as you recognise, there are lots of political aspects to your actions, even if they are not conscious. Not intentionally being political doesn't mean you are not making political choices; there are many more that your AI choice touches upon; privacy issues, wealth distribution, centralisation, etc etc. Of course these choices become limited by practicalities but they still exist.
With respect, I’m curious how you read all of that out of what they said...and whether it actually proves their remarks correct.
Resisting the status quo of hostile technology is an endless uphill battle. It requires continuous effort, mostly motivated by political or at least ideological reasons.
Not fighting it is not the same as being neutral, because not fighting it supports this status quo. It is the conscious or unconscious surrender to hostile systems, whose very purpose is to lull you into apathy through convenience.
But you do make a good point that those words are all potentially very loaded.
This is also my core reservation against the idea.
I think that the belief only holds weight in a society that is rife with opposing interpretations about how it ought to be managed. The claim itself feels like an attempt to force someone toward the interests of the one issuing it.
> Is my choice to write some boiler plate code using gen AI truly political? Is it political because of power usage and ongoing investment in gen AI?
Apparently yes it is. This is all determined by your impressions of generative AI and its environmental and economic impact. The problem is that most blog posts are signaling toward a predefined in-group, either through familiarity with the author or by a preconceived belief about the subject, where it’s assumed that you should already know and agree with the author about these issues. And if you don’t, you’re against them.
For example: I don’t agree that everything is inevitable. But as I read the blog post in question, I surmised that it’s an argument against the idea that human beings are at the absolute mercy of technological progress. And I can agree with that much. So this influences how I interpret the claim “nothing is inevitable”, in addition to the title of the post and in conjunction with the rest of the article (and this is all additionally informed by all the stuff I’m trying to express to you that surrounds this very paragraph).
I think that this speaks to the present problem of how “politics” is conflated to additionally refer to one’s worldview, culture, etc., in and of itself, instead of being something distinct from, though not necessarily separable from, these things.
Politics ought to point toward a more comprehensive way of seeing the world, but this isn’t the case for most people today, and I suspect that many people who claim to have comprehensive convictions are only “virtue signaling”.
A person with comprehensive convictions about the world and how humans ought to function in it can better delineate the differences and necessary overlap between politics and other concepts that run downstream from their beliefs. But what do people actually believe in these days? Something they can summarize in a sentence or two, that can objectively/authoritatively delineate an “in-group” from an “out-group”, and that informs all of their cultural, political, environmental, and economic considerations, and so on...
Online discourse is being cleaved into two sides vying for digital capital over hot air. The worst position you can take is a critical one that satisfies neither opponent.
You should keep reading all blog posts with a critical eye toward the appeals embedded within the medium. Or don’t read them at all. Or read them less than you read material that affords you greater context than the emotional state the author was in when they wrote the post before going back to releasing software communiques.
Was wondering what the beef with this was until I realized author meant "companies that are garbage" and not "landfill operators using gas turbines to make power". The latter is something you probably would want.
This, a million times. I honestly hate interacting with all software and 90% of the internet now. I don't care about your "U""X" front end garbage. I highly prefer text-based sites like this.
You ever see those "dementia simulator" videos where the camera spins around and suddenly all the grocery store aisles are different? That's what it must be like to be less tech literate.
I blame GUIs. They disempower users and put them at mercy of UX "experts" who just rearrange the deck chairs when they get bored and then tell themselves how important they are.
https://suno.com/song/797be726-c1b5-4a85-b14a-d67363cd90e9
- some options have moved to menus which make no sense at all (e.g. all the toggles for whether a panel's menubar icon appears in the menu bar have moved off the panel for that feature and onto the Control Centre panel). But Control Centre doesn't have any options of its own, so the entire panel is a waste of time and has created a confusing UX where previously there was a sensible one
- loads of useful stuff I do all the time has moved a layer deeper. e.g. there used to be a top-level item called "sharing" for file/internet/printer sharing settings. It's moved one level deeper, below "General". Admittedly, "the average user" who doesn't use sharing features much, let alone wanting to toggle and control them, probably prefers this, but I find it annoying as heck
- following on from that, and also exhibited across the whole settings UI, is that UI patterns are now inconsistent across panels; this seems to be because the whole thing is a bunch of web views, presumably each controlled by a different team. So they can create whatever UI they like, with whatever tools make sense. Before, I assume, there was more consistency because panels seemed to reuse the same default controls. I'm talking about use of tabs, or drop-downs, or expanders, or modal overlays... every top-level panel has some of these, and they use them all differently: some panels expand a list to reach sub-controls, some add a modal, some just have piles of controls in lozenges
- it renders much slower. On my M3 and M4 MBPs you can still see lag. It's utterly insane that on these basically cutting-edge processors with heaps of RAM, spare CPUs, >10 GPU cores, etc, the system control panel still lags
- they've fallen into the trap of making "features" be represented by horizontal bars with a button or toggle on the right edge. This pattern is found in Google's Material UI as well. It _kinda_ makes sense on a phone, and _almost_ makes sense on a tablet. But on a desktop, where most windows could be any width, it introduces a bunch of readability problems. When the window's wide, it's very easy for the eye to lose the horizontal connection between a label and its toggle/button/etc. To get around this, Apple has locked the width of the Settings app... but that also seems a bit weird.
- don't get me started on what "liquid glass" has done to the look & feel
There's some cognitive dissonance on display there that I'm actually finding it hard to wrap my head around.
Yeah, I absolutely did. Only I wrote the lyrics and AI augmented my skills by giving it a voice. I actually put significant effort into that one; I spent a couple hours tweaking it and increasing its cohesion and punchiness, iterating with ideas and feedback from various tools.
I used the computer like a bicycle for my mind, the way it was intended.
Computers are meant to be tools to expand our capabilities. You didn't do that. You replaced them. You didn't ride a bike, you called an Uber because you never learned to drive, or you were too lazy to do it for this use.
AI can augment skills by allowing for creative expression - be it with AI stem separation, neural-network based distortion effects, etc. But the difference is those are tools to be used together with other tools to craft a thing. A tool can be fully automated - but then, if it is, you are no longer an artist. No more than someone who knows how to operate a CNC machine but not design the parts.
This is hard for some people to understand, especially those with an engineering or programming background, but there is a point to philosophy. There is innate, valuable knowledge in how a thing was produced. If I find a stone arrowhead buried under the dirt on land I know was once used for hunting by Native Americans, that arrowhead has intrinsic value to me because of its origin. Because I know it wasn't made as a replica and because I found it. There is a sliding scale, shades of gray here. An arrowhead I had verified was actually old but which I did not find is still more valuable than one I know is a replica. Similarly, you can, I agree, slowly un-taint an AI work with enough input, but not fully. Similarly, if a digital artist painted something by hand then had StableDiffusion inpaint a small region as part of their process, that still bothers many, adds a taint of that tool to it, because they did not take the time to do what the tool has done and mentally weigh each pixel and each line.
By using Suno, you're firmly in the "This was generated for me" side of that line for most people, certainly most musicians. That isn't riding a bike. That's not stretching your muscles or feeling the burn of the creative process. It's throwing a hundred dice, leaving the 6's up, and throwing again until they're all 6's. Sure, you have input, but I hardly see it as impressive. You're just a reverse centaur: https://doctorow.medium.com/https-pluralistic-net-2025-09-11...
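The dice analogy is concrete enough to simulate. This is my own sketch, not anything from the post being quoted: keeping the 6's and rerolling the rest converges in only a couple dozen throws on average, which is exactly why the process feels cheap compared to a genuine one-shot roll of a hundred 6's.

```python
import random

def rolls_until_all_sixes(n_dice=100, rng=random.Random(42)):
    """Simulate 'keep the 6s, reroll the rest' until every die shows a 6.

    Returns the number of throws needed.
    """
    remaining = n_dice  # dice not yet showing a 6
    throws = 0
    while remaining > 0:
        throws += 1
        # Each remaining die independently shows a 6 with probability 1/6;
        # count the dice that still need rerolling.
        remaining = sum(1 for _ in range(remaining) if rng.randint(1, 6) != 6)
    return throws

# Average over many trials: typically a few dozen throws for 100 dice,
# versus a single-throw chance of (1/6)**100 of getting all 6s at once.
trials = [rolls_until_all_sixes() for _ in range(1000)]
print(sum(trials) / len(trials))
```

The contrast with the one-shot probability, roughly 10^-78, is the whole point of the "reverse centaur" complaint: iterated selection makes an astronomically unlikely outcome trivially reachable.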
But no we get none of that. We get mega shitty corporate covers. I would rather hear music that's a little bad than artificially perfect sounding.
However, I support ~80 non-technical users for whom that update was a huge benefit. They're familiar with iOS on their phones, so the new interface is (whaddya know) intuitive for them. (I get fewer support calls, so it's of indirect benefit to me, too.) I try to let go of my frustration by reminding myself that learning new technology is (literally) part of my job description, but it's not theirs.
That doesn't excuse all the "moving the deck chairs" changes - Tahoe re-design: why? - but I think Apple's broad philosophy of ignoring power users like us and aligning settings interfaces was broadly correct.
Funny story: when my family first got a Windows computer (3.1, so... 1992 or '93?) my first reaction was "this sucks. Why can't I just tell the computer what to do anymore?" But, obviously, GUIs are the only way the vast majority will ever be able to interact with a device - and, you know, there are lots of tasks for which a visual interface is objectively better. I'd appreciate better CLI access to MacOS settings: a one-liner that mirrors to the most recently-connected display would save me so much fumbling. Maybe that's AppleScript-able? If I can figure it out I'll share here.
There's such a thing as "multiple invention", precisely because of this. Because we all live in the same world, we have similar needs and we have similar tools available. So different people in different places keep trying to solve the same problems, build the same grounding for future inventions. Many people want to do stuff at night, so many people push at the problem of lighting. Edison's particular light bulb wasn't inevitable, but electric lighting was inevitable in some form.
So with regards to generative AI, many people worked in this field for a long time. I played with fractals and texture generators as a kid. Many people want generated content for many reasons. Artwork is expensive. Artwork is sometimes too big. Or too fixed, maybe we want variation. There's many reasons to push at the problem, and it's not coordinated. I had a period where I was fiddling around with generating assets for Second Life way back, because I found that personally interesting. And I'm sure I was not the only one by any means.
That's what I understand by "inevitable", that without any central planning or coordination many roads are being built to the same destination and eventually one will get there. If not one then one of the others.
So technically inevitable or not, it doesn't matter. People will at large keep using smart refrigerators and Tiktok.
The techies are a drop in the ocean. You may build a new tech or device, but the adoption is driven by the crowd, who just drift away without a pinch of resistance.
We should stop with over-generalization like "The future is defined by the common man on the street." It's always much more complex than that. To every trend, there is a counter-trend (even sometimes alt-trends that are not actually opposites).
Worth getting on your radar if this stuff is of interest: https://aria.org.uk/opportunity-spaces/collective-flourishin...
(full disclosure: I'm working with the programme director on helping define the funding programme, so if you're working on related problems, by all means share your thoughts on the site or by reaching out!)
Do we really think LLMs and the generative AI craze would have not occurred if Sam Altman chose to stay at Y Combinator or otherwise got hit by a bus? People clearly like to interact with a seemingly smart digital agent, demonstrated as early as ELIZA in 1966 and SmarterChild in 2001.
My POV is that human beings have innate biases and preferences that tend to manifest what we invent and adopt. I don't personally believe in a supernatural God but many people around the world do. Alcoholic beverages have been independently discovered in numerous cultures across the world over centuries.
I think the best we can do is usually try to act according to our own values and nudge it in a direction we believe is best (both things OP is doing so this is not a dunk on them, just my take on their thoughts here).
This is great, thanks for the link.
People want things to be simpler, easier, frictionless.
Resistance to these things has a cost, and generally the ROI is not worth it for most people as a whole.
Nothing in real life is ideal; that's just reality.
It's not just that it's not fun. Any fun I derive is canceled-out by the inevitable loss.
I've felt white-hot blazing anger so many times when a feature is taken away by an "update" that I am not permitted to revert. I don't want to feel that feeling anymore.
While this reaction is understandable, it is difficult to feel sympathy when so few people are willing to invest the time and effort required to actually understand how these systems work and how they might be used defensively. Mastery, even partial, is one of the few genuine avenues toward agency. Choosing not to pursue it effectively guarantees dependence.
Ironically, pointing this out often invites accusations of being a Luddite or worse.
Philosophical claims have been made around this point. See, for example, "The Moral Obligation to Be Intelligent", an essay by John Erskine.
So many problems would be solved if a fraction of people would be more inclined to understand what's in front of them.
fMRI has always had folks highlighting how shaky the science is. It's not the strongest of experimental techniques.
Trivialities don't add anything to the discussion. The question is "Why?" and then "How do we change that?". Even incomplete or inaccurate attempts at answering would be far more valuable than a demonstration of hand-wringing powerlessness.
I do not think that the current philosophical world view will enable a different path. We've had resets or potential resets, COVID being a huge opportunity, but I think neither the public nor the political class had the strength to seize the moment.
We live in a world where we know the price of everything and the value of nothing. It will take dramatic change to put 'value' back where it belongs and relegate price farther down the ladder.
1. To display ads is to sacrifice user experience. This is a slippery slope and both developers and users get used to it, which affects even ad-free services. Things like "yes/maybe later" become normal.
2. Ads are only displayed when the user visits the service directly. Therefore we cannot have open APIs, federation, alternative clients, or user customization.
3. The advertisement infrastructure is expensive. This has to be paid with more ads. Like the rocket equation, this eventually plateaus, but by then the software is bloated and cannot be funded traditionally anymore, so any dips are fatal. Constant churn.
4. Well targeted ads are marginally more profitable, therefore all user information is valuable. Cue an entire era of tracking, privacy violations, and psychological manipulation.
5. Advertisers don't want to be associated with anything remotely controversial, so the circle of acceptable content shrinks every year. The fringes become worse and worse.
6. The system only works with a very large number of users. It becomes socially expected to participate, and at the same time, no customer support is provided when things go wrong.
I'm fairly sure ads are our generation's asbestos or leaded gasoline, and would be disappointed if they are not largely banned in the future.
*Existence* of a situation as inevitable isn't so bold of a claim. For example, someone will use an AI technology to cheat on an exam. Fine, it's possible. Heck, it is mathematically certain if we have a civilization that has exams and AI tech, and if that civilization runs infinitely.
*Generality* of a situation as inevitable, however, tends to go the other way.
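The existence claim above is just the arithmetic of repeated trials. As a toy illustration (my own sketch, with a made-up per-year probability, not a figure from the comment): the chance of at least one occurrence approaches certainty as the horizon grows.

```python
def at_least_once(p, n):
    """Probability that an event with per-period probability p
    happens at least once within n periods."""
    return 1 - (1 - p) ** n

# Even a 0.1%-per-year event becomes near-certain over long horizons.
for years in (10, 100, 1000, 10000):
    print(years, round(at_least_once(0.001, years), 4))
```

Generality is the opposite: it requires the event to dominate across most trials, not merely to occur once, and that doesn't follow from the same math.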
- McCabe (Kurt Russell), Vanilla Sky
I firmly believe this is where people will get the most annoyed in the long run. Not having any public facing human beings will lose people.
Shout out to the Juicero example, because there are so many people out there showing that AI can be also "just squeeze the bag with your hands".
Ads are one of the oldest and most fundamental parts of a modern society.
Mixing obviously dumb things in with fundamental ones doesn't improve the point.
None of the items is technically inevitable, but the world runs on capital, and capital alone. Tech advances are just a byproduct of capital snooping around trying to increase itself.
AI exists -> vacation photos exist -> it's inevitable that someone was eventually going to use AI to enhance their vacation photos.
As one of those niche power users who runs servers at home to be beholden to fewer tech companies, I still understand that most people would choose Netflix over a free jellyfin server they have to administer.
> Not being in control of course makes people endlessly frustrated
I regret to inform you, OP, that this is not true. It's true for exactly the kind of tech people like us who are already doing this stuff, because it's why we do it. Your assumption that people who don't just "gave up", as opposed to actively choosing not to spend their time on managing their own tech environment, is I think biased by your predilection for technology.
I wholeheartedly share OP's dislike of techno-capitalism(derogatory), but OP's list is a mishmash of
1) technologies, which are almost never intrinsically bad, and 2) business choices, which usually are.
An Internet-connected bed isn't intrinsically bad; you could set one up yourself to track your sleep statistics that pushes the data to a server you control.
It's the companies and their choices to foist that technology on people in harmful ways that makes it bad.
This is the gripe I have with anti-AI absolutists: you can train AI models on data you own, to benefit your and other communities. And people are!
But companies are misusing the technology in service of the profit motive, at the expense of others whose data they're (sometimes even illegally) ingesting.
Place the blame in the appropriate place. Something something, hammers don't kill people.
> But what is important to me is to keep the perspective of what constitutes a desirable future, and which actions get us closer or further from that.
Desirable to whom? I certainly don't think the status quo is perfect, but I do think dismissing it as purely the product of some faceless cadre of tech oligarchs' desires is arrogant. People do have agency, the author just doesn't like what they have chosen to do with it...
Imagine if the 80s and 90s had been PC vs Mac, but you had to go to IBM for one or more critical pieces of software or software distribution infrastructure. The Cambrian explosion of IBM-PC compatibility didn’t happen overnight, of course. I don’t think it will be (or ought to be) inevitable that phones remain opaque and locked down forever, but the day when freedom finally comes doesn’t really feel like it’s just around the corner.
Posted, alas for now, from my iPhone
There's a recording of an interview with Bill Gates floating around where he pretty much takes credit for that. He claims (paraphrasing because I listened to it almost 20 years ago) that he suggested a lot of the hardware to IBM because he knew he could repurpose DOS for it.
We’re two decades into the smartphone era and my hope is that we’re still in the DEC / VAX / S370 stage, with the “IBM-PC” stage just around the corner still to come.
It happened because IBM by mistake allowed it to happen. Big tech companies nowadays are very proficient at not repeating those mistakes.
I was hoping to find such a list within the article, i.e. which companies and products should we be supporting that are doing things 'the right way'?
https://reactos.org/
https://elementary.io/
> "Requiring a smartphone to exist in society is not inevitable."
Seeing smartphones morph from super neat computer/camera/music players in our pockets to unfiltered digital nicotine is depressing to think about.
Notification abuse is _entirely_ to blame, IMO.
Every app that you think of when you think of "addictive" apps heavily relies on the notifications funnel (badges, toasts, dings) for engagement. I'm disappointed that we as a society have normalized unabated casino-tier attention grabs from our most personal computing devices.
Growth through free, ad-subsidized tiers also helped create this phenomenon, but that strategy wouldn't be nearly as effective without delivery via notifications.
Big AI (more like LLM/Stable Diffusion as a Service) is going to prey on that to levels never seen before, and I'm definitely not here for it.
Obligatory end-of-post anecdote: My phone stays home most of the time when I work out. I only bring my Apple Watch.
My gym bag has an iPad mini and a BOOX eReader, but I only use the iPad to do Peloton stretches and listen to KEXP Archives, as those can't be done from my watch (though I'm working on something for the latter).
Using this setup has given me a lot of opportunities to soak in my surroundings during rest periods. Most of that is just seeing people glued to Instagram, Facebook, and YouTube. TV addiction on steroids, in other words.
Thanks to this, people like me who use their phones as tools are forced to carry huge, heavy black slabs because small phones aren't viable and, as market analysis is showing, thin and lightweight slabs won't cut it either.
You do know you can disable notifications on a per-app basis, or even entirely, right?
I guess the problem is scale. A system based on altruism, trust and reciprocity might work great for a community of 20 people. But it doesn't scale to millions of people. Consequently, we end up with (in the West) various shades of democracy, "the least bad system". However, democracy doesn't work well when a tiny fabulously-rich elite is able to buy up all the media and a sizeable chunk of the politicians.
I've been thinking a lot lately, challenging some of my long-held assumptions...
Big tech, the current AI trend, social media websites serving up rage bait and misinformation (not to imply this is all they do, or that they are ALL bad), the current political climate and culture...
In my view, all of these are symptoms; the cause is the perverse, largely unchallenged neoliberal world the West has been living in for the last 30-40 years (at least).
Profit maximising comes before everything else. (Large) Corporate interests are almost never challenged. The result? Deliberately amoral public policy that serves the rich and powerful.
There are oases in this desert (which is, indeed, not inevitable), thankfully. As the author mentioned, there's FOSS. There's indie-created games/movies. There's everyday goodness between decent people.
Whole post just reads as someone disgruntled at the state of the world and reeling that they aren't getting their way. There's a toxic air of intellectual and moral superiority in that blog.
Just like TikTok. The author doesn't think TikTok is inevitable, and I fully agree with them! But in our real timeline TikTok exists. So TikTok is, unquestionably, the present. Wide adoption of gen-AI is present.
https://kk.org/books/the-inevitable
It's not really game theory but economics: the supply curve for nicely contestable markets, and transaction costs for everything. Game theory only addresses the information aspects of transaction costs, and translates mostly only for equal power and information (markets).
The more enduring theory is the roof; i.e., it mostly reduces to what team you're on: which mafia don, or cold-war side, or technology you're leveraging for advantage. In this context, signaling matters most: identifying where you stand. As an influencer, the signal is that you're the leading edge, so people should follow you. The betas vie to grow the alpha, and the alpha boosts or cuts betas to retain their role as decider. The roof creates the roles and empowers creatures, not vice-versa.
The character of the roof depends on resources available: what military, economic, spiritual or social threat is wielded (in the cold war, capitalism, religion or culture wars).
The roof itself - the political franchise of the protection racket - is the origin of "civilization". The few escapes from such oppression are legendary and worth emulating, but rare. Still, that's our responsibility: to temper or escape.
What is inevitable? The heat death of the universe. You probably don't need to worry about it much.
Everything else can change. If someone is proposing that a given technology is, "inevitable," it's a signal that we should think about what that technology does, what it's being used to do to people, and who profits from doing it to them.
I'm pretty cynical, but one ray of hope is that AI-assisted coding tools have really brought down the skill requirement for doing some daunting programming tasks. E.g. in my case, I have long avoided doing much web or UI programming because there's just so much to learn and so many deep rabbit holes to go down. But with AI tools I can get off the ground in seconds or minutes, and all that cruddy HTML/JavaScript/CSS with bazillions of APIs that I could go spend time studying and tinkering with has already been digested by the AI. It spits out some crap that does the thing I mostly want. ChatGPT 5+ is pretty good at navigating all the Web APIs, so it was able to generate some WebAudio mini apps to start working with. The code looks like crap, so I hit it with a stick and get it to reorganize the code a little and write some comments, and then I can dive in and do the rest myself. It's a starting point, a prototype. It got me over the activation energy hump, and now I'm not so reluctant to actually try things out.
But like I said, I'm cynical. Right now the AI tools haven't been overly enshittified to the point they only serve their masters. Pretty soon they will be, and in ways we can't yet imagine.
I'm basically down to Anki cards, Chrono Trigger, and the Money Matters newsletter on my phone (plus calls and messaging).
Recently I've dropped YouTube in favor of reading the New Yorkers that are piling up more frequently.
Is it just me, or is software actively getting worse too? I feel like I'm noticing more rough edges; the new macOS update doesn't feel as smooth as I used to expect from Apple products.
Life is just calmer, get an antenna and PBS, use your library, look at the fucking birds lol. The deluge of misinformation isn't worth it for the good nuggets at this point
Narratives are funny because they can be completely true and a total lie.
There's now a repeated narrative about how the AI bubble is like the railroads and dotcom and therefore will end the same. Maybe. But that makes it seem inevitable. But those who have that story can't see anything else and might even cause that to happen, collectively.
We can frame things with stories and determine the outcomes by them. If enough people believe that story, it becomes inevitable. There are many ways to look at the same thing and many different types of stories we can tell - each story makes different things inevitable.
So I have a story I'd like to promote:
There were once these big companies that controlled computing. They had it locked down. Then came IBM clones, and suddenly the big monopolies couldn't keep up with innovation via the larger marketplaces that opened up with standard (protocol) hardware interfaces. And later, the internet was new and exciting - CompuServe and AOL were so obviously going to control the internet. But then open protocols and services won, because how could they not? It was inevitable that a locked-down walled garden could not compete with the dynamism that open protocols allowed.
Obviously now, this time is no different. And, in fact, we're at an inflection point that looks a lot like those other times in computing that favored tiny upstarts that made lives better but didn't make monopoly-sized money. The LLMs will create new ways to compete (and have already) that big companies will be slow to follow. The costs of creating software will go down so that companies will have to compete on things that align with user's interests.
Users' agency will have to be restored. And open protocols will again win over closed for the same reasons they did before. Companies that try to compete with the old, cynical model will rapidly lose customers and will not be able to adapt. The money possible to be made in software will decline, but users will have software in their interests. The AI megacorps have no moat - Chinese downloadable models are almost as good. People will again control their own data.
It's inevitable.
Inevitable and being a holdout are conceptually different and you can't expect society as a whole to care or respect your personal space with regards to it.
They listed requiring a smartphone as an example. That is great, have fun with your flip phone, but that isn't for most people.
Just because you don't find something desirable doesn't mean you deserve extra attention or a special space. It also doesn't mean you can call people catering to the wants of the masses "grifters".
None of these companies wanted to get Apple'd, and they (particularly Facebook) did everything they could to pay lip service to developing VR without funding anything really groundbreaking (or even obvious). Apple finally had to release something after years of promising shareholders that they weren't going to get left behind in the market, and with nothing material to skim off of competitors, the AVP is what we got.
Until Apple figures out how to dig up and purify its deep-rooted cultural rot, and learn how to actually innovate independently, every halfway-aware competitor is going to hold back development on anything they might want to appropriate. In the meantime, we all lose.
We are headed towards (or already in) corporate feudalism and I don't think anything can realistically be done about it. Not sure if this is nihilism or realism but the only real solution I see is on the individual level: make enough money that you don't have to really care about the downsides of the system (upper middle class).
So while I agree with you, I think I just disagree with the little bit you said about "can't expect anything to change without-" and would just say: can't expect anything to change except through the inertia of what is already in place.
The rational choice is to act as if this was ensured to be the future. If it ends up not being the case, enough people will have made that mistake that your failure will be minuscule in the grand scheme of things, and if it's not and this is the future, you won't be left behind.
Sure beats sticking your feet in the sand and most likely fucking up or perhaps being right in the end, standing between the flames.
But the inevitable is not a fact; it's a rigged fake that is, unfortunately, adopted by humans, who flock in such large groups echoing the same sentiments that it seems real and inevitable to those people.
Humans in general are extremely predictable; so predictable, in fact, that they seem utterly stupid and imbecilic.
The species as a whole will evolve inevitably; the individual animal may not.
However AI is the future for programming that’s for sure.
Ignore it as a programmer to make yourself irrelevant.
I don't get the reason for this one being in the list. Is that an abusive product in some way?
Can we change direction on how things are going? Yes, but you must understand what the "we" there means, at least in the context of a global change of direction.
Broadly speaking, I would bin them into two categories. The first category contains things like this:
> Tiktok is not inevitable.
Things like this become widespread without coercion. I don't use TikTok or any short-form video and there's nothing forcing me to. For a while, Facebook fed me reels, and I fell for it once or twice, but recognized how awful it was and quit. However, TikTok and junk food are appealing to many people even though they are slop. The dark truth is that many people walking around just like slop, and unless restraint is imposed by external actors, they'll consume as much as is shoveled into their troughs.
But, at the end of the day, you can live your life without using Tiktok at all.
The other category would be things that become widespread on the back of coercion, to varying degrees.
> Requiring a smartphone to exist in society is not inevitable.
This is much trickier than living without TikTok. It's harder to get through airports or even rent a parking space now. Your alternative options get removed by others.
Besides, the bigger issue is that this blog post offers no concrete way forward.
A lot of history's turning points were much closer than we think.
https://en.wikipedia.org/wiki/Miracle_of_the_House_of_Brande...
It had a huge impact on world history: it indirectly led to German unification, it possibly led to both world wars in the form we know them, it probably affected the colonial wars and, as a result, the territory of many former colonies, and probably also their current populations (by determining where colonists came from and how many of them there were).
I'm fairly sure there were a few very close battles during the East India Company conquest of India, especially in the period when Robert Clive was in charge.
Another one for Germany: after Wilhelm I died at 90, his liberal son Frederick III died aged only 56, after a reign of just 99 days. So instead Germany had Wilhelm II as emperor, a conservative who wrecked all of Bismarck's successful foreign policies.
Oh, and Japan attacking Pearl Harbor/the US. If the Japanese Army faction had won the internal struggle and tried to attack the Soviets again in 1941, the USSR would probably have been toast, and the US would probably have intervened only slowly and indecisively.
I can't really remember many others right now, but every country and every continent has had moments like these. A lot of them are sheer bad luck but a good chunk are just miscalculation.
History is full of what-ifs, a lot of them with huge implications for the world.
Where's Japan getting the oil to fight the USSR? The deposits are all too far east [1].
Even with the US out of the war, we were denying them steel and oil, but the US embargo would have been much less effective without a Pacific navy to enforce it.
[1]: https://old.reddit.com/r/MapPorn/comments/s1cbj6/a_1960s_map...
“You take the blue pill, the story ends, you wake up in your bed and believe whatever you want to believe.”
Causal thinking, which the author implies is necessary, would make you realize all of this was inevitable. The whole reason the tech boom existed, the whole reason the author is typing on what I can only guess is a 2005 T-series, the whole reason the internet made it, the whole reason all of this works, is STRICTLY because they wrenched control from us.
If FOSS was gonna work, it would've by now. I love FOSS, but FOSS enthusiasts are so obnoxiously snobby about everything. In 30+ years, Linux has wrenched all of 1-2% more of the desktop market away from the giants.
This was all inevitable.
What is the best way and how do we stop them?
These are all natural forces, they may be human forces but they are still natural forces. We can't stop them, we can only mitigate them -- and we _can_ mitigate them, but not if we just stick our fingers in our ears and pretend it's not going to happen.
It absolutely is. Whatever you can imagine. All of it.
Yea, I remember the time when trillion dollar companies were betting the house on Juicero /s
There are plenty of "technology" things that have come to pass, most notably weapons, which have been developed but which no one is allowed to use to their fullest, due to laws and social norms against harming others. These things are technology, and they would allow someone to attain wealth much more efficiently...
The parroted retort is that they are regulated because society sees them as a threat.
Well, therein lies the disconnect: society isn't immutable, and it can come to those conclusions about other technologies tomorrow if it so chooses...
Even so, "humans do care about its ethical and moral considerations": whose ethics? enforced how? measured how? you're going to fight efficiency and functionality? good luck.
Oh my, the sheer number of philosophers, biologists, ethicists, and, for that matter, bacteria rotating in their graves.
Life might be a technology… many technologies are not only not-living, they’re mutually contradictory with life.