IMO this looks largely like another circular investment. Amazon's investment is tied to OpenAI using AWS for their Frontier product, and I assume Nvidia's condition is that OpenAI continue buying hardware from them. Then there's SoftBank, though given that these are the same guys who invested heavily in WeWork, I assume this is just very brash bullishness on their part.
From my perspective, I hope that OpenAI survives and can pull off their IPO, but I just have that nagging feeling in my gut that their IPO will be rejected in much the same way that the WeWork IPO was rejected.
On the one hand you can look at these companies investing and take it as a signal that there is something there (in OpenAI) that's worth investing in. On the other hand all these companies that are investing are basically getting that investment back through spending commitments and such and are just using OpenAI as a proxy for what is essentially buying more revenue for themselves.
When their IPO hits later this year I hope that it's the former case and there's actually some good underlying fundamentals to invest in. But based on everything I've read, my gut is telling me they will eventually implode under the weight of their business model and spending commitments.
It's not "continue" buying as much as this is NVIDIA fronting the money for (most of) the hardware OpenAI has already ordered from them. It's like borrowing rent money from your drug dealer.
The "circular investment" is mostly startup companies using their stock instead of cash to pay for server hardware and cloud computing. There are a few extra steps in between that make things look weird and convoluted, but the end result is really just big companies giving hardware and getting shares of AI companies in exchange for it.
> On the one hand you can look at these companies investing and take it as a signal that there is something there (in OpenAI) that's worth investing in. On the other hand all these companies that are investing are basically getting that investment back through spending commitments and such and are just using OpenAI as a proxy for what is essentially buying more revenue for themselves.
I don't understand how this is some kind of cheat code. Let's say I give you $100 on the condition that you buy $100 worth of product from me. And let's say that product cost me $80 to produce. Isn't that basically the same as me giving you $80? I don't see at all how that's me "basically getting that investment back".
I give you $100 cash and you give me $100 worth of stock in return. Now you give me $100 cash to buy something from me that cost me $80 to produce. I end up with $100 worth of stock in your company which cost me only $80. No?
NVIDIA gross margins lately are like 75%, so it's more like you give me $100 to buy something from me that cost me $25 to produce, hence I end up with $100 worth of stock in your company and it only cost me $25.
> I give you $100 cash and you give me $100 worth of stock in return. Now you give me $100 cash to buy something from me that cost me $80 to produce. I end up with $100 worth of stock in your company which cost me only $80. No?
Sure, but how's that a cheat code? If you normally sell something for $100 that costs $80 to make, and then use that $100 revenue to buy $100 of stock, this is an identical outcome for you.
Aaaannd I get to claim the $100 as revenue to show investors that the company is performing better than if I had not made the deal, which also means that demand for the product stays inflated, which also means I can keep my margins higher by not needing to discount my product.
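The arithmetic being debated above can be sketched in a few lines; the dollar amounts and the 80% cost ratio are the hypothetical figures from this thread, not real filings:

```python
# Hypothetical round-trip deal: the vendor invests cash in a customer on the
# condition that the customer spends it all back on the vendor's product.
# All numbers are the illustrative ones from this thread, not real figures.

def round_trip(investment, cogs_ratio):
    """Return (recognized revenue, net cash spent, stock received)."""
    revenue = investment             # the whole investment comes back as sales
    cogs = investment * cogs_ratio   # cost to produce what was sold
    stock = investment               # shares received for the investment
    net_cash_out = cogs              # cash out ($100 + $80) minus cash in ($100)
    return revenue, net_cash_out, stock

rev, spent, stock = round_trip(100, 0.80)
print(f"booked ${rev} revenue, spent ${spent:.0f} net, holds ${stock} of stock")
```

So the vendor's economics are roughly "buy $100 of stock for $80 of cash," while the income statement still shows $100 of revenue, which is the inflation effect described above.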
Competition laws make these kinds of arrangements illegal, so you would have to exert influence and have the invested-in company pretend you just happened to be picked over its competitors.
In any case the SEC will be focused on whether the filings are made up to defraud investors, so they could reject the invested-in company's IPO. Your own entity is also at risk.
We all know MS gets away with it; they have good legal goons who find ways to make all of it appear fair with regard to the law.
How I see it is the companies want to jack their revenue and in turn jack the price of their stock and please shareholders. Those are the two main goals which this accomplishes, regardless of the underlying fundamentals.
I'm not a finance expert, but it may be because investments and purchases are taxed differently (I don't know). You gave $100 away as an investment and got $100 back as revenue. Meanwhile you establish that your product is worth $100 (while costing $80), and you have $100 worth of shares. Without considering side effects, you gave away $80 worth of product for $100 (supposed) worth of shares. But shares are subject to side effects, and those side effects can be quite nice (making the news, establishing the price, ...).
The issue is that there's no organic force behind those changes and it makes everything hollow. You could create a market inside a deserted area and make it appear like a metropolis.
> I don't understand how this is some kind of cheat code. Let's say I give you $100 on the condition that you buy $100 worth of product from me. And let's say that product cost me $80 to produce. Isn't that basically the same as me giving you $80? I don't see at all how that's me "basically getting that investment back".
What if the product only costs you $20 to produce?
They don't need to reach AGI. They just need to put all of the engineers on HN out of work.
A year ago I would have said that was crazy. In the last month, I've been using Claude Code to write 20kloc of Rust code every day (and I review all of it).
A week is now a day. If that figure doubles, I have no idea what will happen to us. And I think it's coming.
I'd assume the real trigger here is "reaching AGI," which would help OpenAI shrug off some of their Microsoft commitments thus making OpenAI models available on Amazon Bedrock. Which is what Amazon is really after.
I'd be interested in seeing how exactly the lawyers figured out how to define AGI. It must be a fairly mundane set of KPIs that they just arbitrarily call AGI; the term will probably devalue significantly in the coming years.
The actual quote is this though:
> hitting an AGI milestone or pursuing an IPO
So it seems softer than actually achieving AGI or finalising an IPO.
Very convenient to put "AGI" in all these agreements because the term is fundamentally undefinable. So throw out whatever numbers you want and fight about it and backtrack later.
So let's see if I understood this one:
They got $110 billion with the promise that either AGI will happen soon (:) or they'll go public before the end of the year.
Either way you get to double your $110 billion no matter what (who will be left to pay the full bill afterwards, the public or the public?).
Very interesting, I will follow it closely, mostly to see how you get a return on $110 billion in a couple of years.
> Today we’re announcing $110B in new investment at a $730B pre-money valuation. This includes $30B from SoftBank, $30B from NVIDIA, and $50B from Amazon.
e.g. it talks about running NVIDIA's systems (?) on AWS
> NVIDIA has long been one of our most important partners, and their chips are the foundation of AI computing. We are grateful for their continued trust in us, and excited to run their systems in AWS. Their upcoming generations should be great.
Probably something like NVLink Fusion. AWS has been doing deals with suppliers for which the smallest unit of deployable compute is a 44U rack (e.g. Oracle), so this is more of the same.
Without circular investments and valuations what would Open AI be worth? 100B? 300B? Entirely on revenue alone it seems like 20B. Current valuation appears to be two orders of magnitude off.
>Without circular investments and valuations what would Open AI be worth? 100B? 300B? Entirely on revenue alone it seems like 20B. Current valuation appears to be two orders of magnitude off.
They just passed $20B in revenue, you can't really expect a company with this much hype and traction to have a 1x multiple.. that's not to say a 35x multiple makes sense either.
What would really help is knowing the details of such funding. The hierarchy of who gets paid first in the event of going under is very illuminating, and while I am not a banker, I always wonder if there are caveats too complicated even for the large investors to understand.
I don't know that OpenAI specifically is the weak link, but this definitely adds to the argument that the entire sector is awash with the same three or four companies passing around the same $50B over and over. OpenAI is just the link that seems most likely to break first.
I've seen this sentiment (OpenAI collapse imminent) a lot on YouTube and Reddit, but it somehow evaded me on here.
Comments doubting OpenAI's long-term viability I've seen plenty of here. But that's not the same as people predicting one of the hottest companies right now will somehow suddenly run out of cash all on its own.
You'll always find someone claiming X or Y are close to collapse at any given time. As even a broken clock is right twice a day, eventually one of these predictions will randomly be proven correct. That person will then be elevated to a genius forecaster and rake in cash for a decade or two.
Actually it is the other way around; every upstart claims that their invention is the mostest revolutionariest thing ever. 99.9% of them are not. The naysayers are right most of the time.
Recent high-profile examples include the Segway, NFTs, crypto as a whole, pre-transformer voice assistants, and various "Design Thinking" projects like those Amazon Prime buttons.
If nobody invested in OpenAI how long could they keep the lights on? They're not profitable yet, and a lot of the wealth that Sam Altman seems to be making revolves around strange circular deals.
By comparison, Anthropic is projected to break even in 2028. Google's Gemini is already profitable.
What source do you have that Gemini is profitable? Are you referring only to the chat app, or to Google's AI Ventures division? Or including Google Cloud AI-related revenue?
Not agreeing with the parent, but that hardly matters. Google has a real business, advertising, that brings in $400 billion a year and income around $150B. They can afford to throw away tens of billions every year while still remaining immensely profitable and quite solid as a business. OpenAI has no such income to spend, so, as the above comments reflect, it's entirely unsustainable, while Google's spending on AI is a drop in the bucket for them.
I didn't really realize how big Gemini was until I saw that Qualia was using it; they apparently used 0.01% of Gemini's total tokens (100 billion) in about 3 months. They're in production with the title and escrow industry, so that's a great deal of data going through Gemini. Unlike some chat subscription, this is all API driven, which I doubt Google is charging at a loss for.
This does not at all tell us Gemini is profitable or driving 15% of its profits. The article does not mention profits even once. It then goes on to bizarrely compare Gemini's monthly active users to OpenAI's weekly active ones.
Yet Google is valued at a 10x multiple, Anthropic at 35x, and OpenAI at 100x revenue... Nothing abnormal about this at all, imagine thinking an extremely unprofitable autocomplete tool should be worth 100x revenue lmfao.
However, when you're entrenched with the military/gov like OpenAI (see the Discord/persona leaks) you're basically invincible. Especially when that military is the reincarnation of Nazi Germany, a fourth Reich (the USA).
OpenAI is another BS national security project larping as a private company, like Facebook, like Starlink, etc. etc. It's hard to get Congress to approve your mass surveillance programs, so you get them funded through markets with fake entrepreneurs like Elon and Altman.
The title and escrow industry is using Gemini (via Qualia Clear) enough that Qualia accounts for 100 billion tokens of usage in about 3 months. Just because you don't see who is using it, and how, doesn't mean that when the dust settles, the people actually using AI for real purposes won't keep using AI. I'm not sure which AI models big pharma is using, but there's already at least one new pharmaceutical drug in the secondary testing phase showing strong results.
There will definitely be room for AI. OpenAI is just not really showing that they care about a particular business model. Probably a strong indicator that Sam Altman is the worst person to lead that company. Anthropic will be profitable before OpenAI ever will be.
Gemini is in the green in terms of spending/income ratio, FYI. I'm not talking about stocks.
Maybe you should get your news from a different source. Personally I prefer raw sources. I watch every official press briefing to hear it from the horse's mouth. You come to find that regardless of who is president, news orgs put their own spin on it and you miss things they don't cover. It's all streamed on official government accounts.
Lmao, press briefings from the office of the führer is such a solid source to base your reality off of.
By the way, if Kamala, Biden, or Newsom was in office I'd also call them führer.
We live in a technocratic authoritarian state with the world's largest prison population and the most police executions; we are actively sponsoring multiple genocides, and we've killed over one million civilians in the Middle East in two decades.
Our politicians on both sides will go out of their way to protect pedophilic members of the ruling class...
But you want to tell us we're exaggerating or interpreting a reality that doesn't exist; I think you're the one who's been convinced through the regime's doublespeak that everything's alright.
Please reevaluate. The US government is literally the 4th Reich and actively committing holocausts on multiple fronts.
Do you know any history? You dishonor the people who died from horrible atrocities in WWII to make some glib performative political posturing. It's shameful behavior. Do better. Be better.
We've killed over 1 million civilians in the Middle East over the last two decades and have hundreds of our own concentration camps (prisons). You would have been rooting for the Nazis if born in Germany in the 1930s, like you are now. You are a Nazi; sorry, you will be remembered as one in history. I couldn't care less about what a Nazi thinks about himself.
I don't think they are going to collapse. But it was only a couple of years ago that many people thought OpenAI had a big (some thought insurmountable) lead in a race to dominate a winner-take-all market. Some people did correctly state that OpenAI had no moat in those days, so credit where it's due.
Now it's looking like a competitive bloodbath where ever-increasing levels of investment are needed just to maintain market position. Their frontier models are SOTA for 4 weeks before a competitor comes and takes the crown. They are standing on much shakier ground than they were 2 years ago.
A competitive bloodbath, plus OpenAI has investment valuing it like it will achieve AGI rather than (merely) being a huge advancement in computing but not a fundamental rewriting of how all work is done.
The $30B investment from NVIDIA is instead of a previously announced $100B investment, so it's not like this is an entirely good-news story for OpenAI.
How much revenue have they generated? How about profit?
If investors keep throwing obscene money at OpenAI, sure, they can stay afloat forever. Can't argue with that. But if we're talking about a sustainable business, I still don't see it.
Selling Shovels is quite lucrative whether there is an actual mining business or just a gold rush.
At some point Jensen Huang will be out (retired or forced out by stagnating sales) and can definitely look back on a very successful career. That much is certain.
Nobody saw the huge demand for coding agents coming. Not even OpenAI or Anthropic themselves. Those were side projects just a year ago and now dominate token demand. And they keep rising.
Oh, I do think they saw it; considering how good they are, they've probably been a tuning focus for a while.
The signal the agent usage is sending, though, is that Anthropic is way ahead, since all we hear about is Claude these days despite OpenAI spending so much more money. Anthropic is also out trialling vending machines, etc.
ChatGPT apart from generating text was a bit of a query/research tool but now that Google has their AI search augmentation shit somewhat together I'm not feeling much need for ChatGPT as a research partner.
So now the big question is, with coding and search niches curtailed, where will OpenAI be able to generate profits from to justify their insane spending?
There's this saying that if you owe the bank a million dollars, you have a big problem, but if you owe the bank 100 million dollars, the bank has a big problem.
Is the same thing true for corporations? At some point the numbers are so wild the entire economy must help you succeed? I don't mean "too big to fail" exactly, more like "so big eventual success is guaranteed at all costs"
Those are the same thing. The whole point of saying "too big to fail" is to evoke the moment in the housing crash when governments largely threw most of their citizens under the bus by bailing out banks rather than homeowners for the banks' wildly irresponsible decisions. "Too big to fail" means the government steps in and bails you out, and that phrase became popular because for many it was the final nail in the coffin for their trust in government.
I wonder if there is "too big for an IPO". Saudi Aramco sold $25.6 billion worth of shares in its 2019 IPO. Even offering just 5% of OpenAI to the public would shatter that record. Well, unless the public isn't actually interested in investing such huge amounts.
Interesting story for sure (to be clear, I'm not talking about the writing by Reuters), but would you buy or skip the OpenAI IPO?
To me it feels like one of those throw-some-play-money-at-it-and-see-what-happens situations. I expect it will return negative given the raw financials and outlook, but there's a small chance the brand carries enough weight with the public that it spikes.
> We continue to have a great relationship with Microsoft. Our stateless API will remain exclusive to Azure, and we will build out much more capacity with them.
This sounds a bit like, going forward, (some) OpenAI APIs will also run on platforms other than Azure (AWS)?
Does anyone have ethical concerns about using OpenAI regarding money donated to the current US administration in one way or another? I will search for more accurate details about that situation. I know about several other ethical concerns people have with OpenAI, including copyright and other considerations regarding the work being trained on, lack of action regarding users who are harmed by their usage of the product (often regarding mental health), environmental concerns, and quite a few others, but I am interested in whether many people think their political donations are an issue or not.
Nvidia will get all that money back via GPU purchases, Amazon via cloud rental and SoftBank is being typical SoftBank - a rich but not particularly bright kid in a class :) .
"I give you $30 billion if you use it to buy $30 billion of stuff from me" doesn't sound like a very good investment. Is Nvidia expecting more back than it puts in? Enough more to make the deal profitable?
"I give you $30B worth of hardware that costs me <$10B to make in exchange for $30B worth of shares in your company" would be a more accurate description.
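As a rough sketch (the margin figures below are the illustrative ones from this thread, not NVIDIA's actual cost structure), the vendor's cash cost per dollar of stock acquired this way is just one minus its gross margin:

```python
# Cash the vendor actually parts with per $1 of stock acquired via an
# invest-and-buy-back deal: only the cost of goods leaves its pocket.
def cost_per_dollar_of_stock(gross_margin):
    return 1.0 - gross_margin

# Margins here are the illustrative ones from the thread
# (20%, ~67% as in "<$10B to make $30B of hardware", 75%).
for margin in (0.20, 2 / 3, 0.75):
    cost = cost_per_dollar_of_stock(margin)
    print(f"{margin:.0%} gross margin -> ${cost:.2f} cash per $1 of stock")
```

The higher the margin, the closer the deal gets to acquiring equity nearly for free, which is why the distinction between a 20% and a 75% margin matters so much in this subthread.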
Well, I won't pretend I know the answer :) . But I assume that a) they are partially betting on making a normal return on investment (i.e. OAI not crashing), b) they profit from running a huge expense/revenue cycle (a company making, say, a million in profit on a billion in revenue is valued more favorably than the same profit on only ten million in revenue), and c) even if all goes wrong, it is still better to get back most of the investment, even with zero profit, compared to the possibility of just losing it all like SoftBank or other investors.
In the end it's exchanging GPUs for OpenAI shares. It's not a non-trade, and in the current market Nvidia could really sell the stuff for cash. The marginal cost is very much sharply positive.
> The Information had previously reported that $35 billion of Amazon’s investment could be contingent on the company either achieving AGI or making its IPO by the end of the year. OpenAI’s announcement confirms the funding split, but says only that the additional $35 billion will arrive “in the coming months when certain conditions are met.”
So basically, Amazon is buying into the IPO at an early price. Maybe this is the time to divest from MSCI world. I don’t want to be the bag holder in the world’s largest pump and dump.
It can both be true at the same time: That AI is going to disrupt our world and that Open AI does not have a business model that supports its valuation.
Yea, proving my point that index funds are maybe not the safest place if you want to invest in real value. And soon Twitter/Grok/SpaceX might be doing an IPO.
It's this kind of dynamic that makes me pull back on my otherwise pretty AI-forward stance. There's an entire community of people who passionately believe it's obvious and undeniable that Elon Musk has solved problems that he has not solved and his companies deliver things they don't deliver. Tesla is absolutely unambiguous in their marketing material (https://www.tesla.com/fsd) that they do not have autonomous driving, but you're far from the first person I've encountered who's been tricked into believing otherwise.
I don't think that's my relationship with AI, I'm hardly an uncritical booster. But would I know if it was?
Did it ever occur to you that an entire generation of developers is going to retire in less than 20 years? They are betting that the software industry will be autonomous. Really, think of our industry like the AV phenomenon: we're the drivers who are about to be shown the door. That's the bet.
World will still need software, lots of it. Their valuation is based on an entire developer-less future world (no labor costs).
Even the rise of high-level languages did not lead to a "developer-less future". What it did was improve productivity and make software cheaper by orders of magnitude; but compiler vendors did not benefit all that much from the shift.
OpenAI has all the name recognition (which is worth a couple billion in itself), but when it comes to actual business use cases in the here and now Anthropic seems ahead. Even more so if we are talking about software dev. But they are valued at less than half of OpenAI's valuation
What is somewhat justifying OpenAI's valuation is that they are still trying for AGI. They are not just working on models that work here and now, they are still approaching "simulating worlds" from all kinds of angles (vision, image generation, video generation, world generation), presumably in hopes that this will at some point coalesce in a model with much better understanding of our world and its agency in it. If this comes to pass OpenAI's value is near unlimited. If it doesn't, its value is at best half what it is today
> What is somewhat justifying OpenAI's valuation is that they are still trying for AGI.
And that's the dealbreaker for me since they've been so adamant on scaling taking them there, while we're all seeing how it's been diminishing returns for a while.
I was worried a few years back with the overwhelming buzz, but my 2017 blogpost is still holding strong. To be fair it did point to ASI where valuation is indeed unlimited, but nowadays the definition of AGI is quite weakened in comparison.. but does that then convey an unlimited valuation?
Obligatory reminder that today's so called "AGI" has trouble figuring out whether I should walk or drive to the car wash in order to get my dirty car washed. It has to think through the scenario step by step, whereas any human can instantly grok the right answer.
The idea/hope is that a video model would answer the car wash problem correctly. Those are exactly the kinds of issues you have to solve to avoid teleporting objects around in a video, so whenever we manage more than a couple of seconds of coherent video, we will have something that understands the real world much better than text-based models do. Then we "just" have to somehow make a combined model that has this kind of understanding and can write text and make tool calls.
Yes, this is kind of like Tesla promising full self driving in 2016
That problem went viral weeks ago, so it's no longer a valid test. At the time it was consistently tripping up all the SOTA models at least 50% of the time (you also have to use a sample > 1, given huge variation from even the exact same wording on each attempt).
The large hosted model providers always "fix" these issues as best as they can after they become popular. It's a consistent pattern repeated many times now, benefitting from this exact scenario seemingly "debunking" it well after the fact. Often the original behavior can be replicated after finding sufficient distance of modified wording/numbers/etc from the original prompt.
For example, I just asked ChatGPT "The boat wash is 50 meters down the street. Should I drive, sail, or walk there to get my yacht detailed?" and it recommended walking. I'm sure with a tiny bit more effort, OpenAI could patch it to the point where it's a lot harder to confuse with this specific flavor of problem, but it doesn't alter the overall shape.
This question is obviously ambiguous. The context here on HN includes "questions LLMs are stupid about, I mention boat wash, clearly you should take the boat to the boat wash."
But this question posed to humans is plenty ambiguous because it doesn't specify whether you need to get to the boat or not, and whether or not the boat is at the wash already. ChatGPT Free Tier handles the ambiguity, note the finishing remark:
"If the boat wash is 50 meters down the street…
Drive? By the time you start the engine, you’re already there.
Sail? Unless there’s a canal running down your street, that’s going to be a very short and very awkward voyage.
Walk? You’ll be there in about 40 seconds.
The obvious winner is walk — unless this is a trick question and your yacht is currently parked in your living room.
If your yacht is already in the water and the wash is dock-accessible, then you’d idle it over. But if you’re just going there to arrange detailing, definitely walk."
I don't understand what occasional hiccups prove. The models can pass college acceptance tests in advanced educational topics better than 99% of the human population, and because they occasionally have a shortcoming, it means they're worse than humans somehow? Those edge cases are quickly going from 1% -> 0.01% too...
"any human can instantly grok the right answer."
When asking a human about general world knowledge, they don't have the generality to give good answers for 90% of it. Even on very basic questions like this, humans will trip up on many, many more than the frontier LLMs do.
I just don't know how to engage with these criticisms anymore. Do you not see how increasingly convoluted the "simple question LLMs can't answer" bar has gotten since 2022? Do the human beings you know not have occasional brain farts where they recommend dumb things that don't make much sense?
I should note for epistemic honesty that I expected I would be able to come up with an example of a mistake I made recently that was clearly equally dumb, and now I don't have a response to offer because I can't actually come up with that example.
> If this comes to pass OpenAI's value is near unlimited.
How?
If we have AGI, we have a scenario where human knowledge-based value creation as we know it is suddenly worthless. It's not a stretch to imagine that human labor-based value creation wouldn't be far behind. Altman himself has said that it would break capitalism.
This isn't a value proposition for a business, it's an end of value proposition for society. The only people who find real value in that are people who spend far too much time online doing things like arguing about Roko's Basilisk - which is just Pascal's Wager with GPUs - and people who are so wealthy that they've been disconnected with real-world consequences.
The only reason anyone sees value in this is because the second group of people think it'll serve their self-concept as the best and brightest humanity has ever had to offer. They're confusing ego with ability to create economic value.
"End of human-based value creation" is tantamount to post-scarcity. It "breaks" capitalism because it supposedly obviates the resource allocation problem that the free-market economy is the answer to. It's what Karl Marx actually pointed to as his utopian "fully realized communism". Most people would think of that as a pipe dream, but if you actually think it's viable, why wouldn't you want it?
a) AI is going to replace a Bazillion-Dollar Industry and that
b) being an AI model provider does not allow capturing margins above 5% long-term
I am not saying that this is what will happen, but it's a plausible scenario. Without farmers we would all be dead, but that does not mean they capture monopoly rents on their assets.
Okay, I can understand investment from SoftBank, and maybe somewhat from Amazon (if they plan to use OpenAI's models), but investment from NVIDIA, who will then sell OpenAI the GPUs with an X% markup, doesn't make sense to me.
That's a pretty lofty valuation for a company that has yet to demonstrate code generation anywhere near Anthropic's models if they're leaning into the engineering angle.
"Calvinism makes pretty lofty claims for a religion who has yet to demonstrate soul salvation anywhere near Lutheranism if they're leaning into the reformation angle"
And they say it's not a bubble! We saw it with the Oracle deal: big announcement and then nothing. Same with NVIDIA, and now the same thing is happening again. I hope this is a cash infusion and not some credit deal.
On a tangent, I remember companies like Slack triggering the unicorn craze. They said that it was just better to aim for a billion than some number like 900M or 1.2B, because psychologically, it meant more to employees, investors, and customers.
OpenAI is in that place where nobody really cares for these mind games. It's not very reliable. But it is useful enough to pay for. It's cheap enough to be an impulse purchase where some guy decides to just subscribe to ChatGPT because they're working on an important slide or sketching a logo.
Remember when it was a huge milestone when gigantic companies like Apple and Microsoft were striving to be the first $1T company backed with decades of building actual businesses with actual profit?
Our economy has turned into an ouroboros: a circle of snakes shitting in each other's mouths until they get so sick that we the taxpayers get the privilege of bailing them out. I'm really fucking excited to eat shit for the 3rd time in 18 years. Super pumped.
Feels like Nvidia getting in the game here might just put them at more risk. If things don't work out they'll be out their money and future sales and so on.
It is bad enough that AI sucked up so much investment money; hitting companies that do make profitable things hard if the AI bubble collapses would be bad...
It’s already a joke to call the slop generators “AI”, so giving it another fake name won’t really make much of a difference any more. Nothing short of a miracle will be able to top the “creative marketing” we already have.
There is not a single OpenAI model in the top 10 on openrouter's ranking page. The market is saying something about the comparative value of OpenAI.
Edit: yes, it is true that many people do integrate directly with OpenAI. That doesn't negate the fact that Openrouter users are largely not using OpenAI.
Agreed, it's not really a good signal (many sampling biases), but user count is not relevant; most money is from heavy API users. 900M users on free or cheap subscriptions are nothing compared to even 10k heavy API users.
On the other hand, big users don't use openrouter. At $work we have our own routing logic.
1. openrouter is API usage. There is obviously a consumer side too
2. people often use openrouter for the sole purpose of using a unified chat completions API
3. OpenAI invented chat completions; if you use openrouter for chat completions often you can just switch your endpoint URL to point to the OAI endpoint to avoid the openrouter surcharge!
4. Hence anyone with large enough volume will very likely not use openrouter for OpenAI; there is an active incentive to take the easy route of changing the endpoint URL to OAI’s
The differentiating factor will be access to proprietary training data. Everyone can scrape the public web and use that to train an LLM. The frontier companies are spending a fortune to buy exclusive licenses to private data sources, and even hiring expert humans specifically to create new training data on priority topics.
> At what point are the models going to all be "good enough", with the differentiating factor being everything else, other than model ranking?
It's already come for vast swathes of industries.
Most organizations have already been able to operationalize what are essentially GPT-4 and GPT-5 wrappers for standard enterprise use cases such as network security (eg. Horizon3) and internal knowledge discovery and synthesis (eg. GleanAI back in 2024-25).
I agree, and most of my peers do as well. This is why most of us shifted to funding AI Applications startups back in 2023-24. Most of these players are still in stealth or aren't household names, but neither are ServiceNow, Salesforce, Palo Alto Networks, Wiz, or Snowflake.
Foundation Models have reached a relative plateau and much of the recent hype wasn't due to enhanced model performance but smart packaging on top of existing capabilities to solve business outcomes (eg. OpenClaw, Anthropic's business suite, etc).
Most foundation model rounds are essentially growth equity rounds (not venture capital) to finance infra/DC buildouts to scale out delivery or custom ASICs to enhance operating margins.
This isn't a bad thing - it means AI in the colloquial definition has matured to the point that it has become reality.
- Amazon's $50B is only $15B, with the rest being "after certain conditions are met", whatever that means (probably an IPO, which isn't happening)
- The $30B each from SoftBank and NVIDIA is paid in installments
So this is more a $35B fundraise, with a _promise_ of more, maybe, if conditions are met. Not _bad_, but yet more gaslighting from Mr Altman. Anyone reporting this as a closed fundraising deal is being disingenuous at best.
> - Amazon's $50B is only $15B, with the rest being "after certain conditions are met", whatever that means (probably an IPO, which isn't happening)
Startup funding is often given in increments depending on milestones being met. Most startups just don’t announce that it’s conditional.
For large funding rounds, nobody gets a check for the full amount at once.
The funding would not be conditional on an IPO because that wouldn’t make any sense. The IPO is the liquidity event for the investors and there’s no reason for a startup to take private investment money that only enters the company after IPO.
The conditions are either an IPO or achieving AGI. I’d be curious to know how the contract defines AGI. If I recall correctly, the OAI-Microsoft deal just defined it as “AI-shaped tech that can generate $100 billion in annual profits”, which I think is actually close to the correct answer, insofar as we will have AGI when the markets decide we have AGI and not when some set of philosophical criteria seem to be satisfied.
> If I recall correctly, the OAI-Microsoft deal just defined it as “AI-shaped tech that can generate $100 billion in annual profits”, which I think is actually close to the correct answer
So if they hit 100 billion annual then it's AGI but if Kellogg's launches “FrostedFlakes-GPT" and steals 30% of the market it's no longer AGI at 70 billion?
This is pretty standard. Usually the conditions are performance benchmarks, but they may also include an IPO. Typically it's done in multiple tranches, e.g. $15B at the start, $5B more if you gain 500M+ users, $5B more if your profit exceeds X, and the rest at IPO (I'm oversimplifying).
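The tranche structure described here can be sketched as a toy model. All milestone names, amounts, and thresholds below are invented for illustration; they are not actual deal terms.

```python
# Toy model of a tranched funding round. Every milestone and amount
# below is hypothetical, chosen only to mirror the example above.
TRANCHES = [
    ("signing",       15e9, lambda m: True),                  # paid up front
    ("user growth",    5e9, lambda m: m["users"] >= 500e6),   # 500M+ users
    ("profitability",  5e9, lambda m: m["profit"] > 0),       # profit exceeds X
    ("ipo",           25e9, lambda m: m["ipo"]),              # the rest at IPO
]

def funds_released(metrics: dict) -> float:
    """Sum of all tranches whose condition is currently met."""
    return sum(amount for _, amount, met in TRANCHES if met(metrics))

# A startup that hit the user milestone but is unprofitable and still private:
released = funds_released({"users": 600e6, "profit": -2e9, "ipo": False})
print(released / 1e9)  # 20.0 (billion): signing + user-growth tranches
```

The point of the structure is that the headline number (here $50B) can be announced while only the unconditional tranche is cash in hand.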
Tbf it's a reasonable question... I think it's a little tricky to pin down the equivalent of "kinetic energy" in purely economic terms, though you might look at the rate of flow of money as some analogy for the speed/energy of particles (speed of individual dollars changing hands). In that sense, the more frequent and larger these deals get, the hotter the market is. This is not a novel analogy.
Two economists were walking down the street when they spotted a giant dog turd on the ground.
One of them wanted to have some fun, so said to the other - "I'll give you $100 if you take a big bite of that turd".
His colleague figured $100 was a good chunk of cash, so did the deed. Feeling thoroughly humiliated, he pocketed the $100 and they carried on.
Further down the street they came upon another turd.
The angry economist now wanted revenge so made the same proposal back to his colleague, who also agreed and took a bite of the turd, earning back his $100.
Later one of them said to the other "you know, I can't help but feel we both ate shit for no reason."
His colleague replied, "What do you mean? We raised the national GDP by $200."
I did upvote, it's witty, but it's a bit of a misrepresentation of how the economy works.
In practice, people don't tend to pay people to eat shit without gain. You are paying people to help you. Money gaslights everyone into helping each other, the most selfish people become the most selfless.
Of course, real capitalism is much more complex and much uglier than this fantasy. When certain people end up with long-term control of large piles of money, the whole thing gets distorted. They get to make lots of money on interest without doing anything, and making other people eat more shit for scraps. That's the "capital" part of capitalism.
But the toy world-model that this joke is making fun of, is actually the one core positive aspect of capitalism and brings all the prosperity we have: tricking people into helping each other.
It’s not a craze. It’s a technology shift. Bitcoin and 3D printing were crazes. It’s like the move from analog photography to digital. I am telling you this as a very conservative person. Even for me it’s helpful.
3D printing is helpful too. The infrastructure created during the dot-com bubble of the late 1990s was also helpful. The UK is still profiting from the railway infrastructure created during the railway craze of the 1840s (https://en.wikipedia.org/wiki/Railway_Mania). The question is just how much of the valuation of AI companies is because they are useful and how much is speculation...
It can be both a craze and a technology shift. AI isn't going away, it will transform some industries. But right now it's overhyped, overfunded and due a trip back to reality.
It most definitely COULD be a craze from the perspective of scope of investment, societal impact and timing. No one surfing the crest of this wave could be described as "conservative".
Personally at this point my combined AI spend is the most expensive recurring monthly subscription I have, and that’s even with my company also paying for the AI tools I use at work.
If it weren’t subsidized I would pay more. Wouldn’t be happy about it but I would do it.
At this stage in the game I don’t really understand where this skepticism of the value these tools provides comes from.
Actually it is not about this stage. It is about the sustainability of this when training data runs out and there is less and less human-generated content.
When training data runs out, their usefulness will diminish quickly. They will still be useful for searching documents etc., but I guess they are not good at that even now.
What bitcoin gave us essentially? Huge pump and dump schemes coordinated by big hands? Crypto investments which made 95% of investors poorer? What's left?
Maybe 0.01% of it was beneficial.
Doubt Jensen sees himself as a “dealer”, but considering the vendor lock-in and margins, he pretty much is the Tony Montana of AI chips.
It’s nuts that this type of financing is legal.
I don't understand how this is some kind of cheat code. Let's say I give you $100 on the condition that you buy $100 worth of product from me. And let's say that product cost me $80 to produce. Isn't that basically the same as me giving you $80? I don't see at all how that's me "basically getting that investment back".
NVIDIA gross margins lately are like 75%, so it's more like you give me $100 to buy something from me that cost me $25 to produce, hence I end up with $100 worth of stock in your company and it only cost me $25.
Sure, but how's that a cheat code? If you normally sell something for $100 that costs $80 to make, and then use that $100 revenue to buy $100 of stock, this is an identical outcome for you.
And inflate your revenue by $80.
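To make the back-and-forth above concrete, here is a quick sketch of the arithmetic. The numbers are illustrative only, the ~75% figure is the gross margin mentioned in this thread (not an official statistic), and the function name is mine:

```python
# Sketch of the "circular investment" arithmetic discussed above.
# If a vendor invests cash that comes straight back as purchases of
# its own product, its real outlay is only the cost of goods sold.

def net_cash_cost(investment: float, gross_margin: float) -> float:
    """Cash the vendor is actually out after the round trip."""
    return investment * (1 - gross_margin)

# A 20%-margin vendor: investing $100 that gets spent back on its
# product effectively costs it $80 in real resources.
print(net_cash_cost(100, 0.20))  # 80.0
# A ~75%-margin vendor: $100 of equity acquired for roughly $25 of
# real cost, plus $100 of booked revenue.
print(net_cash_cost(100, 0.75))  # 25.0
```

So the disagreement in the thread is not about the mechanics but about the margin: at low margins a circular deal is close to an ordinary investment, while at very high margins the investor acquires equity cheaply and inflates its own revenue at the same time.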
Competition laws make these kinds of arrangements illegal, so you would have to exert influence and have the invested-in company pretend you just happened to be picked among competitors.
In any case, the SEC will be focused on whether the filings are fabricated to defraud investors, so they could reject the IPO of the invested-in company. Your own entity is also at risk.
We all know MS gets away with it; they have good legal goons who find ways to make all of it appear fair with regards to the law.
Also Nvidia margins are waaay higher than 20%
The issue is that there's no organic force behind those changes and it makes everything hollow. You could create a market inside a deserted area and make it appear like a metropolis.
What if the product only costs you $20 to produce?
It's clear that the stock market cannot be considered normal anymore; it's held up on hopes and prayers at best.
Those conditions are an IPO or reaching AGI [1].
Nvidia and SoftBank will pay in installments.
Also very interesting that Microsoft decided to not invest in this round. A PR statement was made though [2].
[1] https://americanbazaaronline.com/2026/02/26/amazon-to-invest...
[2] https://openai.com/index/continuing-microsoft-partnership/
A year ago I would have said that was crazy. In the last month, I've been using Claude Code to write 20kloc of Rust code every day (and I review all of it).
A week is now a day. If that figure doubles, I have no idea what will happen to us. And I think it's coming.
The actual quote is this though:
> hitting an AGI milestone or pursuing an IPO
So it seems softer than actually achieving AGI or finalising an IPO.
Incredible, how an entire religion has sprung up around AGI.
Are they going to get stock for it or is it a PIPE?
Personally, I don’t think I want to get in on this at retail prices.
It can both be true at the same time that AI going to disrupt our world and that being an AI lab is a terrible business.
Very interesting. I will follow it closely, mostly to see how you get an ROI on $110 billion in a couple of years.
> Today we’re announcing $110B in new investment at a $730B pre-money valuation. This includes $30B from SoftBank, $30B from NVIDIA, and $50B from Amazon.
We try to avoid having corporate press releases as the top-level link, though of course there are exceptions sometimes.
e.g. it talks about running NVIDIA's systems (?) on AWS
> NVIDIA has long been one of our most important partners, and their chips are the foundation of AI computing. We are grateful for their continued trust in us, and excited to run their systems in AWS. Their upcoming generations should be great.
https://www.nvidia.com/en-us/data-center/nvlink-fusion/
$30B at $380B post-money for Anthropic announced two weeks ago
This does not increase my confidence in OpenAI's future
> Sam Altman says OpenAI shares Anthropic's red lines in Pentagon fight
90% chance it's all PR but who knows
They just passed $20B in revenue; you can't really expect a company with this much hype and traction to have a 1x multiple. That's not to say a 35x multiple makes sense either.
Might save you €20 next month.
I've seen plenty of comments here doubting OpenAI's long-term viability. But that's not the same as predicting that one of the hottest companies right now will somehow suddenly run out of cash all on its own.
Recent high-profile examples include Segway, NFTs, crypto as a whole, pre-transformer voice assistants, and various "Design Thinking" projects like those Amazon Dash buttons.
If OpenAI keeps getting circular financing, of course they will not collapse yet.
I think it's still too early to tell. By what measure did you even determine that Nvidia is falling?
By comparison, Anthropic is projected to break even in 2028. Google's Gemini is already profitable.
https://advergroup.com/gemini-hits-650-million-users/
I didn't really realize how big Gemini was until I saw that Qualia was using it; they apparently used 0.01% of Gemini's total tokens (100 billion) in about 3 months. They're in production in the title and escrow industry, so that's a great deal of data going through Gemini. Unlike some chat subscription, this is all API driven, which I doubt Google is charging at a loss for.
https://www.qualia.com/qualia-clear/
Unlike OpenAI, Google has an actual business model, not just strange circular deals.
Edit: I miswrote "majority of" instead of 15% of Google's profits.
This does not at all tell us Gemini is profitable or driving 15% of its profits. The article does not mention profits even once. It then goes on to bizarrely compare Gemini's monthly active users to OpenAI's weekly active ones.
However, when you're entrenched with the military/gov like OpenAI (see the Discord/persona leaks), you're basically invincible. Especially when that military is the reincarnation of Nazi Germany, a Fourth Reich (the USA).
OpenAI is another BS national security project larping as a private company, like Facebook, like Starlink, etc. Hard to get Congress to approve your mass surveillance programs, so you get them funded through markets with fake entrepreneurs like Elon and Altman.
There will definitely be room for AI. OpenAI is just not really showing that they care about a particular business model. Probably a strong indicator that Sam Altman is the worst person to lead that company. Anthropic will be profitable before OpenAI ever will be.
Gemini is in the green in terms of spending / income ratio FYI. I'm not talking about stocks.
I can't believe people who think this actually exist.
By the way, if Kamala, Biden, or Newsom was in office, I'd also call them führer.
We live in a technocratic authoritarian state: the world's largest prison population, the most police executions, active sponsorship of multiple genocides, and over one million civilians killed in the Middle East in two decades.
Our politicians on both sides will go out of their way to protect pedophilic members of the ruling class...
But you want to tell us we're exaggerating or interpreting a reality that doesn't exist. I think you're the one who's been convinced through the regime's doublespeak that everything's alright.
Please re-evaluate. The US government is literally the Fourth Reich and actively committing holocausts on multiple fronts.
Now it's looking like a competitive blood bath where ever-increasing levels of investment are needed just to maintain market position. Their frontier models are SOTA for 4 weeks before a competitor comes and takes the crown. They are standing on much shakier ground than they were 2 years ago.
If investors keep throwing obscene money at OpenAI, sure, they can stay afloat forever. Can't argue with that. But if we're talking about a sustainable business, I still don't see it.
At some point Jensen Huang will be out (retired or forced out by stagnating sales) and can definitely look back on a very successful career. That much is certain.
The signal that agent usage is sending, though, is that Anthropic is way ahead, since all we hear about is Claude these days despite OpenAI spending so much more money. Anthropic is also out trialling vending machines, etc.
ChatGPT apart from generating text was a bit of a query/research tool but now that Google has their AI search augmentation shit somewhat together I'm not feeling much need for ChatGPT as a research partner.
So now the big question is, with coding and search niches curtailed, where will OpenAI be able to generate profits from to justify their insane spending?
Also Softbank invested, which is never a great signal.
They also invested in Uber
Is the same thing true for corporations? At some point the numbers are so wild the entire economy must help you succeed? I don't mean "too big to fail" exactly, more like "so big eventual success is guaranteed at all costs"
To me it feels like one of those throw-some-play-money-at-it-and-see-what-happens situations. I expect it will return negative given the raw financials and outlook, but there's a small chance the brand carries enough weight with the public that it spikes.
I'd love to hear other thoughts though
This sounds a bit like going forward (some) OpenAI APIs will also run on platforms other than Azure (AWS)?
Anyone knows more?
https://openai.com/index/amazon-partnership/
Or is it just to keep Nvidia from crashing?
Incredible.
It can both be true at the same time: That AI is going to disrupt our world and that Open AI does not have a business model that supports its valuation.
I don't think that's my relationship with AI, I'm hardly an uncritical booster. But would I know if it was?
https://fortune.com/2026/02/26/tesla-robotaxis-4x-8x-worse-t...
World will still need software, lots of it. Their valuation is based on an entire developer-less future world (no labor costs).
What is somewhat justifying OpenAI's valuation is that they are still trying for AGI. They are not just working on models that work here and now, they are still approaching "simulating worlds" from all kinds of angles (vision, image generation, video generation, world generation), presumably in hopes that this will at some point coalesce in a model with much better understanding of our world and its agency in it. If this comes to pass OpenAI's value is near unlimited. If it doesn't, its value is at best half what it is today
And that's the dealbreaker for me since they've been so adamant on scaling taking them there, while we're all seeing how it's been diminishing returns for a while.
I was worried a few years back with the overwhelming buzz, but my 2017 blogpost is still holding strong. To be fair, it did point to ASI, where valuation is indeed unlimited, but nowadays the definition of AGI is quite weakened in comparison. Does that then still convey an unlimited valuation?
Yes, this is kind of like Tesla promising full self driving in 2016
"If your goal is to get your dirty car washed… you should probably drive it to the car wash "
The large hosted model providers always "fix" these issues as best as they can after they become popular. It's a consistent pattern repeated many times now, benefitting from this exact scenario seemingly "debunking" it well after the fact. Often the original behavior can be replicated after finding sufficient distance of modified wording/numbers/etc from the original prompt.
But this question posed to humans is plenty ambiguous because it doesn't specify whether you need to get to the boat or not, and whether or not the boat is at the wash already. ChatGPT Free Tier handles the ambiguity, note the finishing remark:
"If the boat wash is 50 meters down the street…
Drive? By the time you start the engine, you’re already there.
Sail? Unless there’s a canal running down your street, that’s going to be a very short and very awkward voyage.
Walk? You’ll be there in about 40 seconds.
The obvious winner is walk — unless this is a trick question and your yacht is currently parked in your living room.
If your yacht is already in the water and the wash is dock-accessible, then you’d idle it over. But if you’re just going there to arrange detailing, definitely walk."
"any human can instantly grok the right answer."
When asking a human about general world knowledge, they don't have the generality to give good answers for 90% of it. Even on very basic questions like this, humans will trip up far more often than the frontier LLMs do.
Not that dumb, no. That's why it's laughable to claim that LLMs are intelligent.
"AGI" is the IPO.
How?
If we have AGI, we have a scenario where human knowledge-based value creation as we know it is suddenly worthless. It's not a stretch to imagine that human labor-based value creation wouldn't be far behind. Altman himself has said that it would break capitalism.
This isn't a value proposition for a business, it's an end of value proposition for society. The only people who find real value in that are people who spend far too much time online doing things like arguing about Roko's Basilisk - which is just Pascal's Wager with GPUs - and people who are so wealthy that they've been disconnected with real-world consequences.
The only reason anyone sees value in this is because the second group of people think it'll serve their self-concept as the best and brightest humanity has ever had to offer. They're confusing ego with ability to create economic value.
a) AI is going to replace a Bazillion-Dollar Industry and that
b) being an AI model provider does not allow to capture margins above 5% long-term
I am not saying that this is what will happen, but it's a plausible scenario. Without farmers we would all be dead, but that does not mean they capture monopoly rents on their assets.
- Someone in the 16th century, probably
Good times.
BTW, real money or credits?
https://www.inc.com/leila-sheridan/nvidia-is-wavering-on-its...
What's the statute of limitations for securities fraud? The current administration won't last forever.
Nope. That $100B is in "promises" spread over several years.
They have $15B out of the $50B from Amazon right now.
> The current administration won't last forever.
This is why OpenAI must IPO, and when it does, I won't be surprised if a crash follows before 2030.
By then, they will "announce" "AGI" (Which actually means an IPO)
OpenRouter claims "5M+" users; OpenAI is claiming >900M weekly active users.
I don't really think it's possible to learn anything about the broader market by looking at the OpenRouter model rankings.
Is it?
At what point are the models going to all be "good enough", with the differentiating factor being everything else, other than model ranking?
That day will come. Not everyone needs a Ferrari.
Edit: I misread the parent, I think they're saying the same thing.
You'll never get a billion dollar check from anyone.
I've even seen startups raise like 500k pre-seed with tranches in it, lmao!
s/breathing/investment/g s/balloon/bubble/g s/air/money/g
(Vibes ~ Vibrations ~ Heat)
Money was just the means of the transaction.
surely that behavior leads to a good society and doesn't encourage nefarious behaviors
Seeing this phenomenon, a Silicon Valley entrepreneur gets an idea with the following sales pitch:
"Turd-bars that will make you the fittest version of yourself , answer all your deepest questions, and take you to the promised land (mars)."
Surprisingly, the turd-bars sell well, and GDP rockets up. Meanwhile, VCs with FOMO are funding its competitor: the shit-sandwich.
You reminded me of this Stewart Brand quote:
> Computers suppress our animal presence. When you communicate through a computer, you communicate like an angel.
You scratch my back for a $10M IOU.
The debts cancel out.
How is the economic gain calculated?
https://news.ycombinator.com/newsguidelines.html
That's certainly a take, industry loves it. Sure, all that "everybody will print widgets at home instead of going to the store" stuff was never going to happen, but 3d printing is nonetheless here to stay.
But it's not magical, and not much different to injection moulding or something in concept.
Almost everything created with home level 3d printers is plastic junk you can buy for a few dollars on aliexpress (without weird rough edges).
> At this stage in the game I don’t really understand where this skepticism of the value these tools provides comes from.
Fear
An echo cannot go on forever!
This is an argument from 2024. Somehow, the models have continued to improve.
If they stopped improving today they are good enough as they already are to generate profound change.
The wave front is already visible, we’re just on the shore waiting for the impact.
Maybe there is some way to keep the model up to date in less dramatic ways. But I think something's gotta give...
I mean, even now the vibe coded stuff is reprehensible.
It is a bubble with extreme levels of debt + funding from too many promises from companies that are in these sort of rounds.
People being consumed by the hype will also be completely consumed by the crash.
Comments like this are exactly how a 2000- and a 2008-style crash will happen.
> What bitcoin gave us essentially? Huge pump and dump schemes coordinated by big hands? Crypto investments which made 95% of investors poorer? What's left? Maybe 0.01% of it was beneficial.
I guess it isn't that noticeable from inside US, but the rest of the world is grateful.
Maybe speak for yourself? As part of the rest of the world, I am not grateful.