Tangent: IMO top tier CPU is a no brainer if you play games, run performance-sensitive software (molecular dynamics or w/e), or compile code.
Look at GPU purchasing. It's full of price games, stock problems, scalpers, 3rd party boards with varying levels of factory overclock, and unreasonable prices. CPU is a comparative cake walk: go to Amazon or w/e, and buy the one with the highest numbers in its name.
For games it's generally not worthwhile, since performance is almost entirely down to the GPU these days.
Almost all build guides will say ‘get midrange cpu X over high end chip Y and put the savings to a better GPU’.
Consoles in particular are just a decent GPU paired with a fairly low-end CPU these days. The Xbox One, with a 1.75GHz 8-core AMD chip from a couple of generations ago, is still playing all the latest games.
>> For games it's generally not worthwhile, since performance is almost entirely down to the GPU these days.
It completely depends on the game. The Civilization series, for example, is mostly CPU bound, which is why turns take longer and longer as a game progresses.
In Factorio it's an issue when you go way past the end game into 1000+ hour megabases.
Stellaris is just poorly coded with lots of n^2 algorithms and can run slowly on anything once population and fleets grow a bit.
For Civilization the AI does take turns faster with a higher-end CPU, but IMHO it's also no big deal since you spend most of your time scrolling the map and taking actions (GPU-bound work).
I think it’s reasonable to state that the exceptions here are very exceptional.
Depending on the game there can be a large difference. Ryzen chips with larger caches see a big benefit in single-player games with many units, like Civilization, and in most multiplayer games. It's not so much GHz as being able to keep most of the hot-path code and data in cache.
It's not quite that simple. Often the most expensive chips trade off raw clock speed for more cores, which can be counterproductive if your game only uses 4 threads.
Employers, even the rich FANG types, are quite penny-wise and pound-foolish when it comes to developer hardware.
Limiting the number and size of monitors. Putting speedbumps (like assessments or doctor's notes) on ergo accessories. Requiring special approval for powerful hardware. Requiring special approval for travel, and setting hotel and airfare caps that haven't been adjusted for inflation.
To be fair, I know plenty of people that would order the highest spec MacBook just to do web development and open 500 chrome tabs. There is abuse. But that abuse is really capped out at a few thousand in laptops, monitors and workstations, even with high-end specs, which is just a small fraction of one year's salary for a developer.
Every well funded startup I’ve worked for went through a period where employees could get nearly anything they asked for: New computers, more monitors, special chairs, standing desks, SaaS software, DoorDash when working late. If engineers said they needed it, they got it.
Then, some time later, they start looking at spending in detail and can't believe how much is being spent by the 25% or so who abuse the policy. Then the controls come.
> There is abuse. But that abuse is really capped out at a few thousand in laptops, monitors and workstations, even with high-end specs,
You would think, but in the age of $6,000 fully specced MacBook Pros, $2,000 monitors, $3,000 standing desks, $1,500 iPads with $100 Apple Pencils and $300 keyboard cases, $1,000 chairs, SaaS licenses that add up, and (if allowed) food delivery for "special circumstances" that turns into a regular occurrence, it was common to see individuals incurring expenses in the tens of thousands. It's hard to believe if you're a person who moderates their own expenditures.
Some people see a company policy as something meant to be exploited until a hidden limit is reached.
There also starts to be some soft fraud at scales higher than you’d imagine: When someone could get a new laptop without questions, old ones started “getting stolen” at a much higher rate. When we offered food delivery for staying late, a lot of people started staying just late enough for the food delivery to arrive while scrolling on their phones and then walking out the door with their meal.
If $20k is misspent by 1 in 100 employees, that's still $200 per employee per year: peanuts, really.
Just like with "policing", I'd only focus on uncovering and dealing with abusers after the fact, not on everyone: giving most people the "benefits" up front is what makes them feel valued.
Is it “soft fraud” when a manager at an investment bank regularly demands unreasonable productivity from their junior analysts, causing them to work late and effectively reduce their compensation rate? Only if the word “abuse” isn’t ambiguous and loaded enough for you!
Lying about a laptop being stolen is black and white. I'm not sure how you are trying to say that is ambiguous.
I don't know what the hell you mean by the term unreasonable. Are you under the impression that investment banking analysts do not think they will have to work late before they take the role?
> Lying about a laptop being stolen is black and white. I'm not sure how you are trying to say that is ambiguous.
I've been at startups where there's sometimes late night food served.
I've never been at a startup where there was an epidemic about lying about stolen hardware.
Staying just late enough to order dinner on the company, and theft by the employee of computer hardware plus lying about it, are not in the same category and do not happen with equal frequency. I cannot believe the parent comment presented these as the same, and is being taken seriously.
Is this meant to be a gotcha question? Yes, unpaid overtime is fraud, and employers commit that kind of fraud probably just as regularly as employees doing the things up thread.
GP was talking about salaried employees, who are legally exempt from overtime pay. There is no rigid 40-hour ceiling for salaried work.
Salary compensation is typical for white-collar employees such as analysts in investment banking and private equity, associates at law firms, developers at tech startups, etc.
Not an expert here, but from what I've heard, that would be a bargain for a good office chair. And with a good chair versus a bad one, you literally feel the difference.
But also, when I tell one of my reports to spec and order himself a PC, there should be several controls in place.
Firstly, I should give clear enough instructions that they know whether they should be spending around $600, $1500, or $6000.
Second, although my reports can freely spend ~$100 no questions asked, expenses in the $1000+ region should require my approval.
Thirdly, there is monitoring of where money is going; spending where the paperwork isn't in order gets flagged and checked. If someone with access to the company Amazon account gets an above-ground pool shipped to their home, you can bet there will be questions to answer.
> There also starts to be some soft fraud at scales higher than you’d imagine: When someone could get a new laptop without questions, old ones started “getting stolen” at a much higher rate. When we offered food delivery for staying late, a lot of people started staying just late enough for the food delivery to arrive while scrolling on their phones and then walking out the door with their meal.
Ehh. Neither of these are soft fraud. The former is outright law-breaking, the latter…is fine. They stayed till they were supposed to.
> the latter…is fine. They stayed till they were supposed to.
This is the soft fraud mentality: If a company offers meal delivery for people working late who need to eat at the office, and people instead start staying late (without working) and taking the food home to eat, that's not consistent with the policy.
It was supposed to be a consolation if someone had to (or wanted to, as occurred with a lot of our people who liked to sleep in) stay late to work. It was getting used instead for people to avoid paying out of pocket for their own dinners even though they weren’t doing any more work.
Which is why we can’t have nice things: People see these policies as an opportunity to exploit them rather than use them as intended.
Are you saying the mentality is offensive? Or is there a business justification I am missing?
Note that employers do this as well. A classic one is a manager setting a deadline that requires extreme crunches by employees. They're not necessarily compensating anyone more for that. Are the managers within their rights? Technically. The employees could quit. But they're shaving hours, days, and years off of employees without paying for it.
If a company policy says you can expense meals when taking clients out, but salespeople start expensing their lunches when eating alone, that's clearly expense fraud. I think this is obvious to everyone.
Yet when engineers are allowed to expense meals when they're working late and eating at the office, but people who are neither working late nor eating at the office start expensing their meals, that's expense fraud too.
These things are really not a gray area. It seems more obvious when we talk about salespeople abusing budgets, but there's a blind spot when we start talking about engineers doing it.
Frankly this sort of thing should be ignored, if not explicitly encouraged, by the company.
Engineers are very highly paid. Many are paid more than $100/hr if you break it down. If a salaried engineer paid the equivalent of $100/hr stays late doing anything, expenses a $25 meal, and during the time they stay late you get the equivalent of 20 minutes of work out of them- including in intangibles like team bonding via just chatting with coworkers or chatting about some bug- then the company comes out ahead.
That you present the above as considered "expense fraud" is fundamentally a penny-wise, pound-foolish way to look at running a company. Like you say, it's not really a gray area. It's a feature not a bug.
> Like you say, it's not really a gray area. It's a feature not a bug.
Luckily that comes down to the policy of the individual company and is not enforced by law. I am personally happy to pay engineers more so they can buy this sort of thing themselves and we don't open the company to this sort of abuse. Then it's a known cost and the engineers can decide for themselves if they want to spend that $30 on a meal or something else.
This isn’t about fraud anymore. It’s about how suspiciously managers want to view their employees. That’s a separate issue (but not one directed at employees).
If a company says you have permission to spend money on something for a purpose, but employees are abusing that to spend money on something that clearly violates that stated purpose, that’s into fraud territory.
This is why I call it the soft fraud mentality: When people see some fraudulent spending and decide that it’s fine because they don’t think the policy is important.
Managers didn’t care. It didn’t come out of their budget.
It was the executives who couldn’t ignore all of the people hanging out in the common areas waiting for food to show up and then leaving with it all together, all at once. Then nothing changed after the emails reminding them of the purpose of the policy.
When you look at the large line item cost of daily food delivery and then notice it’s not being used as intended, it gets cut.
This is disingenuous but soft-fraud is not a term I’d use for it. Fraud is a legal term. You either commit fraud or you do not. There is no “maybe” fraud—you comply with a policy or law or you don’t.
As you mentioned, setting policy that isn’t abused is hard. But abuse isn’t fraud—it’s abuse—and abuse is its own rabbit hole that covers a lot of these maladaptive behaviors you are describing.
I call the meal expense abuse “soft fraud” because people kind of know it’s fraud, but they think it’s small enough that it shouldn’t matter. Like the “eh that’s fine” commenter above: They acknowledged that it’s fraud, but also believe it’s fine because it’s not a major fraud.
If someone spends their employer’s money for personal benefit in a way that is not consistent with the policies, that is legally considered expense fraud.
There was a case local to me where someone had a company credit card and was authorized to use it for filling up the gas tank of the company vehicle. They started getting in the habit of filling up their personal vehicle’s gas tank with the card, believing that it wasn’t a big deal. Over the years their expenses weren’t matching the miles on the company vehicle and someone caught on. It went to court and the person was liable for fraud, even though the total dollar amount was low five figures IIRC. The employee tried to argue that they used the personal vehicle for work occasionally too, but personal mileage was expensed separately so using the card to fill up the whole tank was not consistent with policy.
I think people get in trouble when they start bending the rules of the expense policy thinking it’s no big deal. The late night meal policy confounds a lot of people because they project their own thoughts about what they think the policy should be, not what the policy actually is.
This might come as a bit of a surprise to you, but most (really all) employees are in it for money. So if you are astonished that people optimize for their financial gain, that’s concerning. That’s why you implement rules.
If you start trying to tease apart the motivations people have even if they are following those rules, you are going to end up more paranoid than Stalin.
> but most (really all) employees are in it for money
Yes, but some also have a moral conscience and were brought up to not take more than they need.
If you are not one of these types of people, then not taking full advantage of an offer like free meals probably seems like an alien concept.
I try to hire more people like this; it makes for a much stronger workforce when people are not all out to get whatever they can for themselves and instead look out for each other's interests.
But at the FAANGy companies I’ve worked at this issue persists. Mobile engineers working on 3yo computers and seeing new hires compile 2x (or more) faster with their newer machines.
Also, several tens of thousands is in the 5%-10% range of a developer's total comp. Hardly "peanuts". But I suppose you'll be happy to hear "no raise for you, that's just peanuts compared to your TC", right?
$3,000 standing desks?? It's some wood, metal and motors. I got one from IKEA in about 2018 for £500 and it's still my desk today. You can get Chinese ones now for about £150.
I can understand paying more for fast processors and so on but a standing desk just goes up and down. What features do the high end desks have that I am missing out on?
I went with Uplift desks, which are not $150 but certainly sub-$1,000. I think what I was paying for was the stability/solidity of the desk; the electronics, memory presets and so on are probably commoditized.
Stability is a big one, but the feel of the desk itself is also a price point. You're gonna be paying a lot depending on the type of tabletop you get. ($100-1k+ just for the top)
Netflix, at least in the Open Connect org, was still open-ended beyond whatever NTech provided (your issued laptop and remote-working stuff). It was very easy to get "exotic" hardware. I really don't think anyone abused it.
I know a FAANG company whose IT department, for the last few years, has been "out of stock" for SSD drives over 250GB. They claim it's a global market issue (it's not). There's constant complaining in the chats from folks who compile locally. The engineers make $300k+ so they just buy a second SSD from Amazon on their credit cards and self-install them without mentioning it to the IT dept. I've never heard a rational explanation for the "shortage" other than incompetence and shooting themselves in the foot. Meanwhile, spinning up a 100TB cloud drive has no friction whatsoever there. It's a cushy place to work tho, so folks just accept the comically dumb aspects everyone knows about.
I think you're maybe underestimating the aggregate cost of totally unconstrained hardware/travel spending across tens or hundreds of thousands of employees, and overestimating the benefits. There need to be some limits or speedbumps to spending, or a handful of careless employees will spend the moon.
When Apple switched to their own silicon, I was maintaining the build systems at a scaleup.
After I saw the announcement, I immediately knew I needed to try out our workflows on the new architecture. There was just no way that we wouldn't have x86_64 as an implicit dependency all throughout our stack. I raised the issue with my manager and the corporate IT team. They acknowledged the concern but claimed they had enough of a stockpile of new Intel machines that there was no urgency and engineers wouldn't start to see the Apple Silicon machines for at least another 6-12 months.
Eventually I do get allocated a machine for testing. I start working through all the breakages but there's a lot going on at the time and it's not my biggest priority. After all, corporate IT said these wouldn't be allocated to engineers for several more months, right? Less than a week later, my team gets a ticket from a new-starter who has just joined and was allocated an M1 and of course nothing works. Turns out we grew a bit faster than anticipated and that stockpile didn't last as long as planned.
It took a few months before we were able to fix most of the issues. In that time we ended up having to scavenge under-specced machines from people in non-technical roles. The amount of completely avoidable productivity wasted from people swapping machines would have easily reached into the person-years. And of course my team and I took the blame for not preparing ahead of time.
Budgets and expenditure are visible and easy to measure. Productivity losses due to poor budgetary decisions, however, are invisible and extremely difficult to measure.
Actually, just the time spent compiling, or waiting for other builds to finish, makes investing in the top-spec MacBook Pro every 3 years worth it. I think the calculation assumed something like 1-2% of my time was spent compiling, and that I cost like $100k per year.
Scaling cuts both ways. You may also be underestimating the aggregate benefits of slight improvements added up across hundreds or thousands of employees.
For a single person, slight improvements added up over regular, e.g., daily or weekly, intervals compound to enormous benefits over time.
The breakeven rate on developer hardware is based on the value a company extracts, not the salary. Someone making $X/year directly comes with a great deal of overhead in terms of office space, managers, etc., and above that the company only employs them because it gains even more value.
Saving 1 second/employee/day can quickly be worth $10+/employee/year (or even several times that). But you rarely see companies optimizing their internal processes based on that kind of perceived benefit.
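Back-of-the-envelope, for anyone who wants to plug in their own numbers (the $150/hr fully loaded cost and 230 working days are my assumptions, not anything from up thread):

    # rough value of shaving 1 second off a daily task, per employee per year
    seconds_saved_per_day = 1
    working_days_per_year = 230          # assumption
    fully_loaded_cost_per_hour = 150     # assumption: salary + overhead
    hours_saved = seconds_saved_per_day * working_days_per_year / 3600
    print(round(hours_saved * fully_loaded_cost_per_hour, 2))  # ~9.58 dollars/year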
Water cooler placement in a cube farm comes to mind as a surprisingly valuable optimization problem.
Not for an enterprise buying (or renting) furniture in bulk it isn’t. The chair will also easily last a decade and be turned over to the next employee if this one leaves… unlike computer hardware which is unlikely to be reused and will historically need to be replaced every 24-36 months even if your dev sticks around anyway.
> computer hardware which is unlikely to be reused and will historically need to be replaced every 24-36 months
That seems unreasonably short. My work computer is 10 years old (which is admittedly the other extreme, and far past the lifecycle policy, but it does what I need it to do and I just never really think about replacing it).
That's more or less my point from a different angle: unlimited spend isn't reasonable, and the justification "but $other_thing is way more expensive!" is often incorrect.
I think my employer held a contest to see which of 4 office chairs people liked the most, then they bought the one that everyone hated. I'm not quite sure anymore what kind of reason was given.
It's straightforward to measure this; start a stopwatch every time your flow gets interrupted by waiting for compilation or your laptop is swapping to keep the IDE and browser running, and stop it once you reach flow state again.
We managed to just estimate the lost time and management (in a small startup) was happy to give the most affected developers (about 1/3) 48GB or 64GB MacBooks instead of the default 16GB.
At $100/hr minimum (assuming lost work doesn't block anyone else) it doesn't take long for the upgrades to pay off. The most affected devs were waiting an hour a day sometimes.
This applies to CI/CD pipelines too; it's almost always worth increasing worker CPU/RAM while the reduction in time is scaling anywhere close to linearly, especially because most workers are charged by the minute anyway.
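To make the per-minute billing point concrete, a toy model (the per-vCPU-minute price and build counts here are made-up numbers, and real builds rarely scale perfectly):

    # toy model: CI workers billed per vCPU-minute, build scales sub-linearly
    def monthly_cost(vcpus, build_minutes, builds_per_month, price_per_vcpu_minute=0.0008):
        return vcpus * build_minutes * builds_per_month * price_per_vcpu_minute

    print(monthly_cost(4, 20, 600))   # baseline: 4 vCPUs, 20 min  -> $38.40/month
    print(monthly_cost(8, 11, 600))   # bigger:   8 vCPUs, 11 min  -> $42.24/month
    # roughly 10% more spend for roughly 45% less waiting on every build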
Just to do web development? I regularly go into swap running everything I need on my laptop. Ideally I'd have VS Code, webpack, and Jest running continuously. I'd also occasionally need Playwright. That's all before I open a Chrome tab.
I do think a lot of software would be much better if all devs were working on hardware that was midrange five years ago and over a flaky WiFi connection.
This is especially relevant now that docker has made it easy to maintain local builds of the entire app (fe+be). Factor in local AI flows and the RAM requirements explode.
I have a Whisper transcription module running at all times on my Mac. Often, I'll have a local telemetry service (Langfuse) to monitor the hundreds of LLM calls being made by all these models. With AI development it isn't uncommon to have multiple background agents hogging compute. I want each of them to be able to independently build, host and test their changes. The compute load adds up quickly. And I would never push agent code to a cloud env (not even a preview env) because I don't trust them like that, and neither should you.
Anything below an M4 Pro with 64GB would be too weak for my workflow. On that point, the Mac's unified VRAM is the right approach in 2025. I used Windows/WSL devices for my entire life, but their time is up.
This workflow is the first time I have needed multiple screens. Pre-agentic coding, I was happy to work on a 14-inch single-screen machine with standard ThinkPad X1 specs. But the world has changed.
FANG is not monolithic. Amazon is famously cheap. So is Apple, in my opinion, based on what I have heard (you get whatever random refurbished hardware is available, not some standardized thing; sometimes with 8GB RAM, sometimes something nicer). Apple is also famously cheap on compensation. Back in the day they proudly said shit to the effect of "we deliberately don't pay you top of the market because you have to love Apple", to which the only valid answer is "go fuck yourself."
Google and Facebook I don't think are cheap for developers; I can speak firsthand for my past Google experience. You have to note that the company has like 200k employees, there need to be some controls, and not everyone at the company is an engineer.
Hardware -> for the vast majority of stuff, you can build with blaze (think bazel) on a build cluster and cache, so local CPU is not as important. Nevertheless, you can easily order other stuff should you need to. Sure, if you go beyond the standard issue, your cost center will be charged and your manager gets an email. I don't think any decent manager would block you. If they do, change teams. Some powerful hardware that needs approval is blanket whitelisted for certain orgs that recognize such need.
Trips -> Google has this interesting model where you have a soft cap for trips, and if you don't hit the cap, you pocket half of the savings as trip credit in your account, which you can choose to spend later when you are over the cap or want to get something slightly nicer next time. Also, they have clear and sane policies on mixing personal and corporate travel. I encourage everyone to learn about and deploy things like that in their companies. The caps are usually not unreasonable, but if you do hit them, it is again an email to your management chain, not some big deal. Never seen it blocked. If your request is reasonable and your manager is shrugging about this stuff, that should reflect on them being cheap, not on the company policy.
iOS development is still mostly local which is why most of the iOS developers at my previous Big Tech employer got Mac Studios as compiler machines in addition to their MacBook Pros. This requires director approval but is a formality.
I read Google is now issuing Chromebooks instead of proper computers to non-engineers, which has got to be corrosive to productivity and morale.
Yahoo was cheap/stingy/cost-conscious as hell. They still had a well-stocked ergo team, at least for the years I was there. You'd schedule an ergo consult during new hire orientation, and you'd get a properly sized seat, your desk height adjusted if needed, and so on. Lots of ergo keyboards, although I didn't see a lot of Kinesis back then.
Proper ergo is a cost-conscious move. It helps keep your employees able to work, which saves on hiring and training. It reduces medical expenses, which affects the bottom line because large companies are usually self-insured; they pay a medical insurance company only to administer the plan, not for insurance --- claims are paid from company money.
The soft cap thing seems like exactly this kind of penny-foolish behavior though. I’ve seen people spend hours trying to optimize their travel to hit the cap — or dealing with flight changes, etc that come from the “expense the flight later” model.
All this at my company would be a call or chat to the travel agent (which, sure, kind of a pain, but they also paid for dedicated agents so wait time was generally good).
It’s so annoying that we’ve lost a legit and useful typographic convention just because some people think that AI overusing it means that all uses indicate AI.
Sure, I’ve stopped using em-dashes just to avoid the hassle of trying to educate people about a basic logical fallacy, but I reserve the right to be salty about it.
>3) Comma-separated list that's exactly 3 items long
Proper typography and hamburger paragraphs are canceled now because of AI? So much for what I learned in high school English class.
>2) "It's not X, it's Y" sentence structure
This is a pretty weak point because it's n=1 (you can check OP's comment history and it's not repeated there), and that phrase is far more common in regular prose than some of the more egregious ones (eg. "delve").
Isn't it about equal treatment? You can't buy one person everything they want just because they have a high salary; otherwise the employee next door will get salty.
I previously worked at a company where everyone got a budget of ~$2000. The only requirement was you had to get a mac (to make it easier on IT I assume), the rest was up to you. Some people bought a $2000 macbook pro, some bought a $600 mac mini and used the rest on displays and other peripherals.
Some people would minimize the amount spent on their core hardware so they had money to spend on fun things.
So you’d have to deal with someone whose 8GB RAM cheap computer couldn’t run the complicated integration tests but they were typing away on a $400 custom keyboard you didn’t even know existed while listening to their AirPods Max.
I mean, it looks like someone volunteered to make the product work on low-spec machines. That's needed.
I've been on teams where corporate hardware is all max spec, 4-5 years ahead of common user hardware, and provided phones are all flagships replaced every two years. The product works great for corporate users, but not for users with earthly budgets. And they wonder how competitors swallow up the market in low-income countries.
I've often wondered how a personal company budget would work for electrical engineers.
At one place I had a $25 no question spending limit, but sank a few months trying to buy a $5k piece of test equipment because somebody thought maybe some other tool could be repurposed to work, or we used to have one of those but it's so old the bandwidth isn't useful now, or this project is really for some other cost center and I don't work for that cost center.
That doesn't matter. If I'm going to spend 40% of my time alive somewhere, you bet a requirement is that I'm not working on ridiculously outdated hardware. If you are paying me $200k a year to sit around waiting for my PC to boot up, simply because Joe Support who makes $50k would get upset, that's just a massive waste of money.
If we're talking about rich faang type companies, no, it's not about equal treatment. These companies can afford whatever hardware is requested. This is probably true of most companies.
Where did this idea about spiting your fellow worker come from?
I think you wanted to say "especially". You're exchanging clearly measurable amounts of money for something extremely nebulous like "developer productivity". As long as the person responsible for spend has a clear line of view on what devs report, buying hardware is (relatively) easy to justify.
Once the hardware comes out of a completely different cost center - a 1% savings for that cost center is promotion-worthy, and you'll never be able to measure a 1% productivity drop in devs. It'll look like free money.
I wish developers, and I'm saying this as one myself, were forced to work on a much slower machine, to flush out those who can't write efficient code. Software bloat has already gotten worse by at least an order of magnitude in the past decade.
I feel like that's the wrong approach. It's like telling a music producer to always work with horrible (think car or phone) speakers. True, you'll get a better mix and master if you test it on speakers you expect others to hear it through, but no one sane recommends defaulting to those for day-to-day work.
Same goes for programming, I'd lose my mind if everything was dog-slow, and I was forced to experience this just because someone thinks I'll make things faster for them if I'm forced to have a slower computer. Instead I'd just stop using my computer if the frustration ended up larger than the benefits and joy I get.
That's actually a good analogy. Bad speakers aren't just slow good speakers. If you try to mix through a tinny phone speaker you'll have no idea what the track will sound like even through halfway acceptable speakers, because you can't hear half of the spectrum properly. Reference monitors are used to have a standard to aim for that will sound good on all but the shittiest sound systems.
Likewise, if you're developing an application where performance is important, setting a hardware target and doing performance testing on that hardware (even if it's different from the machines the developers are using) demonstrably produces good results. For one, it eliminates the "it runs well on my machine" line.
Although, any good producer is going to listen to mixes in the car (and today, on a phone) to be sure they sound at least decent, since this is how many consumers listen to their music.
Yes, this is exactly my point :) Just like any good software developer who doesn't know exactly where their software will run: you test on the type of device your users are likely to run it on, or at least one with similar characteristics.
The beatings will continue until the code improves.
I get the sentiment but taken literally it's counterproductive. If the business cares about perf, put it in the sprint planning. But they don't. You'll just be writing more features with more personal pain.
For what it's worth, console gamedev has solved this. You test your game on the weakest console you're targeting. This usually shakes out as a stable perf floor for PC.
Yeah, I recognize this all too well. There is an implicit assumption that all hardware is top-tier, all phones are flagships, all mobile internet is 5G, everyone has regular access to free WiFi, etc.
Engineers and designers should compile on the latest hardware, but the execution environment should be capped at the 10th percentile compute and connectivity at least one rotating day per week.
Employees should be nudged to rotate between Android and iOS on a monthly or so basis. Gate all the corporate software and ideally some perks (e.g. discounted rides as a ride-share employee) so that you have to experience both platforms.
> can't write efficient code. Software bloat has already gotten worse by at least an order of magnitude in the past decade.
Efficiency is a good product goal: benchmarks and targets for improvement are easy to establish and measure, they make users happy, and thinking about how to make things faster is a good way to encourage people to read the code that's there, instead of focusing only on new features (aka code that's not there yet).
However, it doesn't sell very well: your next customer is probably not going to be impressed that your latest version is 20% faster than the last version they also didn't buy. This means that unless you have enough happy customers, you are going to have a hard time convincing yourself that I'm right, and you're going to continue to look for backhanded ways of making things better.
But reading code, and re-reading code, is the only way you can really get it in your brain; it's the only way you can see better solutions than the compiler, and it's the only way you remember you have this useful library function you could reuse instead of writing more and more code. It's the only guaranteed way to stop software bloat, and giving your team the task of "making it better" is a great way to make sure they read it.
When you know what's there, your next feature will be smaller too. You might even get bonus features by making the change in the right place, instead of as close to the user as possible.
Management should be able to buy into that if you explain it to them, and if they can't, maybe you should look elsewhere...
> a much slower machine
Giving everyone laptops is also one of those things: They're slow even when they're expensive, and so developers are going to have to work hard to make things fast enough there, which means it'll probably be fine when they put it on the production servers.
I like having a big desktop[1] so my workstation can have lots of versions of my application running, which makes it a lot easier to determine which of my next ideas actually makes things better.
Using the best/fastest tools I can is what makes me faster, but my production hardware (i.e. the tin that runs my business) is low-spec because that's cheaper, and higher-spec doesn't have a measurable impact on revenue. But I measure this, and I make sure I'm always moving forward.
Perhaps the better solution would be to have the fast machine but have a pseudo VM for just the software you are developing that uses up all of those extra resources with live analysis. The software runs like it is on a slower machine, but you could potentially gather plenty of info that would enable you to speed up the program for everyone.
Software should be performance tested, but you don't want a situation where the time of a single iteration is dominated by the duration of functional tests and build time. The faster software builds and tests, the quicker solutions get delivered. If giving your developers 64GB of RAM instead of 32GB halves test and build time, you should happily spend that money.
Assuming you build desktop software: you can build it on a beastly machine, but run it on a reasonable machine. Maybe do local builds for special occasions, but since it's special, you can wait.
Sure, occasionally run the software on the build machine to make sure it works on beastly machines, but let the developers experience the product on normal machines as the default.
Right, optimize for horrible tools so the result satisfies the bottom 20%. Counterpoint: id Software produced amazingly performant programs using top-of-the-line gear. What you're trying to do is enforce a cultural norm by hobbling the programmer's hardware. If you want fast programs, you need to make that a criterion; slow hardware isn't going to get you there.
If developers are frustrated by compilation times on last-generation hardware, maybe take a critical look at the code and libraries you're compiling.
And as a sibling comment notes, absolutely all testing should be on older hardware, without question, and I'd add with deliberately lower-quality and -speed data connections, too.
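On the degraded-connection point, Linux can fake a bad link with tc/netem; a sketch (eth0 is a placeholder for whatever interface you actually use):

    # add ~300ms latency with 50ms jitter and 1% packet loss to outgoing traffic
    sudo tc qdisc add dev eth0 root netem delay 300ms 50ms loss 1%
    # put things back to normal
    sudo tc qdisc del dev eth0 root

Chrome DevTools' network throttling covers the browser-only case, but netem degrades everything on the box, which is closer to real flaky WiFi.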
This is one of the things which cheeses me the most about LLVM. I can't build LLVM on less than 16GB of RAM without it swapping to high heaven (and often it just gets OOM killed anyways). You'd think that LLVM needing >16GB to compile itself would be a signal to take a look at the memory usage of LLVM but, alas :)
The thing that causes you to run out of memory isn't actually anything in LLVM, it's all in ld. If you're building with debugging info, you end up pulling in all of the debug symbols for deduplication purposes during linking, and that easily takes up a few GB. Now link a dozen small programs in parallel (because make -j) and you've got an OOM issue. But the linker isn't part of LLVM itself (unless you're using lld), so there's not much that LLVM can do about it.
(If you're building with ninja, there's a cmake option to limit the parallelism of the link tasks to avoid this issue).
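For anyone hitting this, I believe the option being referred to is LLVM_PARALLEL_LINK_JOBS (it only takes effect with the Ninja generator); something like:

    # from an llvm-project checkout; the other flags are just a typical Release setup
    cmake -G Ninja -S llvm -B build \
      -DCMAKE_BUILD_TYPE=Release \
      -DLLVM_PARALLEL_LINK_JOBS=2 \
      -DLLVM_USE_LINKER=lld    # optional; lld also tends to need less memory than BFD ld
    ninja -C build

Compile jobs still run wide; only the memory-hungry link steps get limited to two at a time.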
Tangential: TIL you can compile the Linux kernel in < 1 minute (on top-spec hardware). Seems it’s been a while since I’ve done that, because I remember it being more like an hour or more.
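If anyone wants to check on their own box, the usual quick benchmark is a defconfig build, roughly like this (and note that defconfig is far smaller than a typical distro config, which is part of why the numbers look so good):

    # in a kernel source tree
    make defconfig
    time make -j"$(nproc)"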
My memory must be faulty, then, because I was mostly building it on an Athlon XP 2000+, which is definitely a few generations newer than a Pentium Pro.
I’m probably thinking of various other packages, since at the time I was all-in on Gentoo. I distinctly remember trying to get distcc running to have the other computers (a Celeron 333 MHz and Pentium III 550 MHz) helping out for overnight builds.
Can’t say that I miss that, because I spent more time configuring, troubleshooting, and building than using, but it did teach me a fair amount about Linux in general, and that’s definitely been worth it.
I'd like to know why making debian packages containing the kernel now takes substantially longer than a clean build of the kernel. That seems deeply wrong and rather reduces the joy at finding the kernel builds so quickly.
I spent a few grand building a new machine with a 24-core CPU. And, while my gcc Docker builds are MUCH faster, the core Angular app still builds a few seconds slower than on my years old MacBook Pro. Even with all of my libraries split into atoms, built with Turbo, and other optimizations.
6-10 seconds to see a CSS change make its way from the editor to the browser is excruciating after a few hours, days, weeks, months, and years.
Web development is crazy. Went from a Java/C codebase to a webdev company using TS. The latter would take minutes to build. The former would build in seconds and you could run a simulated backtest before the web app would be ready.
It blew my mind. Truly this is more complicated than trading software.
This article skips a few important steps - how a faster CPU will have a demonstrable improvement on developer performance.
I would agree with the idea that faster compile times can have a significant improvement in performance. 30s is long enough for a developer to get distracted and go off and check their email, look at social media, etc. Basically turning 30s into 3s can keep a developer in flow.
The critical thing we're missing here is how increasing the CPU speed will decrease the compile time. What if the compiler is IO bound? Or memory bound? Removing one bottleneck will get you to the next bottleneck, not necessarily get you all the performance gains you want.
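This is easy enough to check before buying anything; on Linux, something along these lines will tell you which bottleneck you're actually hitting (iostat comes from the sysstat package):

    # run the build under GNU time and look at CPU%, max RSS and page faults
    /usr/bin/time -v make -j"$(nproc)"
    # meanwhile, in another terminal:
    iostat -x 2    # high %iowait / device utilization -> IO bound
    vmstat 2       # non-zero si/so (swap in/out) columns -> memory bound

If the build pegs all cores and barely touches swap or disk, a faster CPU will help; otherwise it won't.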
The compiler is usually IO bound on Windows due to NTFS: small files in the MFT and the lock contention problem. If you put everything on a ReFS volume it goes a lot faster.
I wish I was compiler bound. Nowadays, with everything being in the cloud or whatever I'm more likely to be waiting for Microsoft's MFA (forcing me to pick up my phone, the portal to distractions) or getting some time limited permission from PIM.
The days when 30-second pauses for the compiler were the slowest part are long over.
I don’t think that we live in an era where a hardware update can bring you down to 3s from 30s, unless the employer really cheaped out on the initial buy.
Now, in TFA they compare a laptop to a desktop, so I guess the title should be “you should buy two computers”.
Important caveat that the author neglects to mention since they are discussing laptop CPUs in the same breath:
The limiting factor on high-end laptops is their thermal envelope. Get the better CPU as long as it is more power efficient. Then get brands that design proper thermal solutions.
You simply cannot cram enough cooling and power into a laptop to have it equal a high-end desktop CPU of the same generation. There is physically not enough room. Just about the only way to even approach that would be to have liquid cooling loop ports out the back that you had to plug into an under-desk cooling loop, and I don't think anyone is doing that because at that point just get a frickin desktop computer + all the other conveniences that come with it (discrete peripherals, multiple monitors, et cetera). I honestly do not understand why so many devs seem to insist on doing work on a laptop. My best guess is this is mostly the Apple crowd, because Apple "desktops" are for the most part just the same hardware in a larger box instead of being actually a different class of machine. A little better on the thermals, but not the drastic jump you see between laptops and desktops from AMD and Intel.
Yes, I just went from an i7-3770 (12 years old!) to a 9900X, as I tend to wait for a doubling of single-core performance before upgrading (got through a lot of PCs in the 386/486 era!). It's actually only 50% faster according to cpubenchmark [0] but is twice as fast in local usage (multithread is reported about 3 times faster).
Also got a Mac mini M4 recently and that thing feels slow in comparison to both these systems - likely more of a UI/software thing (I only use the M4 for Xcode) than being down to raw CPU performance.
The M4 is amazing hardware held back by a sub-par OS. One of the biggest bottlenecks when compiling software on a Mac is notarization, where every executable you compile causes an HTTP call to Apple. In addition to being a privacy nightmare, this causes the configure step in autoconf-based packages to be excruciatingly slow.
I jumped ahead about 5 generations of Intel when I got my new laptop, and while the performance wasn't much better, the fact that I went from a 10-pound workstation beast that sounded like a vacuum cleaner to a svelte 13-inch laptop that runs off a tiny USB-C brick and barely spins its fans while being just as fast made it worthwhile for me.
The author used kernel compilation as a benchmark. Which is weird, because for most projects the build process isn't as scalable as that (especially in the node.js ecosystem), even less so after the initial full build.
This compares a new desktop CPU to older laptop ones. There are much more complete benchmarks on more specialized websites [0, 1].
> If you can justify an AI coding subscription, you can justify buying the best tool for the job.
I personally can justify neither, but I don't see how one translates into the other: is a faster CPU supposed to replace such a subscription? I thought those are more about large and closed models, and that GPUs would be more cost-effective as such a replacement anyway. And if it is not a replacement, it is quite a stretch to assume that all those who sufficiently benefit from a subscription would benefit at least as much from a faster CPU.
Besides, usually it is not simply "a faster CPU": sockets and chipsets keep changing, so that would also be a new motherboard, new CPU cooler, likely new memory, which is basically a new computer.
I find it crazy that some people only use a single laptop for their dev work.
Meanwhile I have 3 PCs, 5 monitors, keyboards and mouses, and still think they are not enough.
There are a lot of jobs that should run in a home server running 24/7 instead of abusing your poor laptop. Remote dedicated servers work, but the latency is killing your productivity, and it is pricey if you want a server with a lot of disk space.
Right now I am on my ancient cheap laptop with some 4-core Intel and hard drive noises; the only time it has issues with webpages is when I have too many tabs open for its 4 gigs of RAM. My current laptop, a 16-core Ryzen 7 from about 2021 (X13), has never had an issue and I have yet to have too many tabs open on it. I think you might be having an OS/browser issue.
As an aside, being on my old laptop with its hard drive, can't believe how slow life was before SSDs. I am enjoying listening to the hard drive work away and I am surprised to realize that I missed it.
Maybe, but I can’t repro. Do you have a GPU? What browser? What web site? How much RAM do you have, and how much is available? What else is running on your machine? What OS, and is it a work machine with corporate antivirus?
I generally agree you should buy fast machines, but the difference between my 5950X (bought in mid-2021, I checked) and the latest 9950X is not particularly large on synthetic benchmarks, and the real-world difference for a software developer who is often IO bound in their workflow is going to be negligible.
If you have a bad machine, get a good machine, but you're not going to get a significant uplift going from a good machine that's a few years old to the latest shiny.
I enjoy building PCs so I've tried to justify upgrading my 5800X to a 9950X3D. But I really absolutely cannot justify it right now. I can play Doom: The Dark Ages at 90fps in 4K. I don't need it!
FYI, going from some Ryzen I had from 6 years ago to a 9950X made a huge impact on game frame rate: choppy to smoother-than-I-can-perceive. And much faster compile times, and code execution if using thread pools. But I think it was a 3000-series Ryzen, not 5000. Totally worth it compared to, say, GPU costs.
I'm forced to use Teams and SharePoint at my university as a student and I hate every single interaction with them. I wish a curse upon their creators, and may their descendants never have a smooth user experience with any software they use.
Besides the ridiculously laggy interface, it has some functional bugs as well, such as things just disappearing for a few days and then popping up again.
Too bad it's so hard to get a completely local dev environment these days. It hardly matters what CPU I have since all the intensive stuff happens on another computer.
More generally: it is worth it to pay for a good developer experience. It's not exactly about the CPU. As you compared build times: it is worth it to make a build faster. And, happily, you often don't need a new CPU for this.
OK, I'm convinced. Can someone tell me what to buy, specifically? Needs to run Ubuntu, support 2 x 4K monitors (3 would be nice), have at least 64GB RAM and fit on my desk. Don't particularly care how good the GPU is / is not.
Here's my starting point: gmktec.com/products/amd-ryzen™-ai-max-395-evo-x2-ai-mini-pc. Anything better?
Fastest Threadripper on the market is usually a good bet. Worth considering a mini PC on a VESA mount / in a cable tray + a fast machine in another room.
Also, I've got a GMKtec here (a cheaper one playing thin client) and it's going to be scrapped in the near future because the monitor connections keep dropping. Framework makes a 395 Max one; that's tempting as a small single machine.
Threadripper is complete overkill for most developers and hella expensive especially at the top end. May also not even be that much faster for many work-loads. The 9950X3D is the "normal top-end" CPU to buy for most people.
Whether ~$10k is infeasibly expensive or a bargain depends strongly on what workloads you're running. Single-threaded stuff? Sure, bad idea. Massively parallel test suites backed by way too much C++, where building it all has wound up on the dev critical path? The big machine is much cheaper than rearchitecting the build structure and porting to a non-daft language.
I'm not very enamoured with distcc style build farms (never seem to be as fast as one hopes and fall over a lot) or ccache (picks up stale components) so tend to make the single dev machine about as fast as one can manage, but getting good results out of caching or distribution would be more cash-efficient.
Yes, of course it depends, which is why I used "most developers" and not "all developers". What it certainly is not is a good default option for most people, like you suggested.
Different class of machines, the Threadripper will be heavier on multicore and less bottlenecked by memory bandwidth, which is nice for some workloads (e.g. running large local AIs that aren't going to fit on GPU). The 9950X and 9950X3D may be preferable for workloads where raw single-threaded compute and fast cache access are more important.
The computer you listed is specifically designed for running local AI inference, because it has an APU with lots of integrated RAM. If that isn't your use case then AMD 9000 series should be better.
Beelink GTR9 Pro. It has dual 10G Ethernet interfaces. And get the 128GB RAM version, the RAM is not upgradeable. It isn't quite shipping yet, though.
The absolute best would be a 9005-series Threadripper, but you will easily be pushing $10K+. The mainstream champ is the 9950X, but despite technically being a mobile SoC, the 395 gets you 90% of the real-world performance of a 9950X in a much smaller and more power-efficient computer.
Huh, that's a really good deal at 1500 USD for the 64GB model considering the processor it's running. (It's the same one that's in the Framework desktop that there's been lots of noise about recently - lots of recent reviews on YouTube.)
Get the 128GB model for (currently) 1999 USD and you can play with running big local LLMs too. The 8060 iGPU is roughly equivalent to a mid-level Nvidia laptop GPU, so it's plenty to deal with a normal workload, and some decent gaming or equivalent if needed.
From what I've been able to tell from interwebz reviews, the Framework one is better/faster, as the GMKtec is thermally throttled more. Dunno about the Beelink.
Multimonitor with 4K tends to need fast GPU just for the bandwidth, else dragging large windows around can feel quite slow (found that out running 3 x 4K monitors on a low-end GPU).
Generally PCI-E lanes and memory bandwidth tend to be the big difference between mobile and proper desktop workstation processors.
Core count used to be a big difference, but the ARM processors in the Apple machines certainly match the lower-end workstation parts now. To exceed them you're spending big, big money to get high core counts in the x86 space.
Proper desktop processors have lots and lots of PCI-E Lanes. The current cream of the crop Threadripper Pro 9000 series have 128 PCI-E 5.0 Lanes. A frankly enormous amount of fast connectivity.
The M2 Ultra, the closest current workstation processor in Apple's lineup (at least in a comparable form factor, in the Mac Pro), has 32 lanes of PCI-E 4.0 connectivity that's enhanced by being slotted into a PCI-E switch fabric on their Mac Pro. (This I suspect is actually why there hasn't been a rework of the Mac Pro to use the M3 Ultra - that they'll ditch the switch fabric for direct wiring on their next one.)
Memory bandwidth is a closer thing to call here - using the Threadripper Pro 9000 series as an example, we have 8 channels of 6400MT/s DDR5 ECC. According to Kingston the bus width of DDR5 is 64b, so that'll get us ((6400 * 64)/8) = 51,200MB/s per channel, or 409.6 GB/s when all 8 channels are loaded.
On the M4 Max the reported bandwidth is 546 GB/s - but I'm not so certain how this is calculated, as the maths doesn't quite stack up from the information I have (8533 MT/s and a bus width of 64b seem to point towards 68,264MB/s per channel; the reported speed doesn't neatly slot into those numbers).
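The numbers do slot in neatly if you assume the M4 Max has a 512-bit (8 x 64-bit channel) LPDDR5X interface rather than a single 64-bit channel; I believe that's the case, but treat the bus width as an assumption:

    8533 MT/s * 64 bit / 8          =  68,264 MB/s per channel
    68,264 MB/s * 8 channels        = 546,112 MB/s ~= 546 GB/s
    (equivalently: (8533 * 512)/8   = 546 GB/s for the whole interface)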
In short the memory bandwidth bonus workstation processors traditionally have is met by the M4 Max, but PCI-E Extensibility is not.
In the Mac world though that's usually not a problem, as you're not able to load up a Mac Pro with a bunch of RTX Pro 6000s and have it be usable in macOS. You can however load your machine with some high-bandwidth NICs or HBAs I suppose (but I've not seen what's available for this platform).
The author is talking about multi-core performance rather than single-core. Apple silicon only offers a low number of cores on desktop chips compared to what Intel or AMD offers. Ampere offers chips that are an order of magnitude faster in multi-core, but they are not exactly "desktop" chips. Still, they are a good data point showing it can be true for ARM if the offering is there.
> Apple silicon only offers a low number of cores on desktop chips compared to what Intel or AMD offers.
* Apple: 32 cores (M3 Ultra)
* AMD: 96 cores (Threadripper PRO 9995WX)
* Intel: 60 cores (Xeon w9-3595X)
I wouldn’t exactly call that low, but it is lower for sure. On the other hand, the stated AMD and Intel CPUs are borderline server grade and wouldn’t be found in a common developer machine.
Single core performance has not been stagnant. We're about double where we were in 2015 for a range of workloads. Branch prediction, OoO execution, SIMD, etc. make a huge difference.
The clock speed of a core is important and we are hitting physical limits there, but we're also getting more done with each clock cycle than ever before.
Doubling single-core performance in 10 years amounts to a less than 10% improvement year-over-year. That will feel like "stagnant" if you're on non-vintage hardware. Of course there are improvements elsewhere that partially offset this, but there's no need to upgrade all that often.
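In case the compounding isn't obvious at a glance:

    2 ** (1/10) ~= 1.072   i.e. roughly 7% per year compounds into 2x over a decade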
This. I just did a comparison between my MacBook Pro (Early 2015) and a MacBook Air M4 (Early 2025).
*Intel Core i5-5287U*:
- *Single-Core Maximum Wattage*: ~7-12W
- *Process Node*: 14nm
- *GB6 Single Core*: ~950

*Apple M4*:
- *Single-Core Maximum Wattage*: ~4-6W
- *Process Node*: 3nm
- *GB6 Single Core*: ~3600
Intel 14nm = TSMC 10nm > 7nm > 5nm > 3nm
In 10 years, we got ~3.5x single-core performance at ~50% of the wattage, i.e. 7x performance per watt across 3 node-generation improvements.
In terms of multi-core we got 20x performance per watt.
I guess that is not too bad depending on how you look at it. Had we compared it to x86 Intel or AMD it would have been worse. I hope the M5 has something new.
If you are gaming... then high-core-count chips like Epyc CPUs can actually perform worse in desktops, and are a waste of money compared to Ryzen 7/Ryzen 9 X3D CPUs. Better to budget for the best motherboard, RAM, and GPU combo supported by a CPU that ranks well in tests of your specific applications. In general, a value AMD GPU can perform well if you just play games, but Nvidia RTX cards are the only option for many CUDA applications.
Check your model numbers, as marketers have ruined naming conventions.
Specifically: buy a good desktop computer.
I couldn't imagine working on a laptop several hours per day (even with an external screen + keyboard + mouse you're still stuck with subpar performance).
I've seen more and more companies embrace cloud workstations.
It is of course more expensive but that allows them to offer the latest and greatest to their employees without needing all the IT staff to manage a physical installation.
Then your actual physical computer is just a dumb terminal.
There are big tech companies which are slowly moving their staff (from web/desktop devs to ASIC designers to HPC to finance and HR) to VDI, with the only exception being people who need a local GPU. They issue a lightweight laptop with long battery life as a dumb terminal.
The desktop latency has gotten way better over the years and the VMs have enough network bandwidth to do builds on a shared network drive. I've also found it easier to request hardware upgrades for VDIs if I need more vCPUs or memory, and some places let you dispatch jobs to more powerful hosts without loading up your machine.
With tools like Blaze/Bazel (Google) or Buck2 (Meta) compilations are performed on a massive parallel server farm and the hermetic nature of the builds ensures there are no undocumented dependencies to bite you. These are used for nearly everything at Big Tech, not just webdev.
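For anyone outside those companies wanting the same effect, open-source Bazel can do remote caching/execution too; a minimal .bazelrc sketch (the endpoints are placeholders, and how much you gain depends on how hermetic your build already is):

    # .bazelrc -- hypothetical endpoints
    build --remote_cache=grpcs://cache.example.com
    build --remote_executor=grpcs://rbe.example.com
    build --remote_download_minimal   # don't pull every intermediate artifact locally
    build --jobs=200                  # fan out across remote workers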
It's for example being rolled out at my current employer, which is one of the biggest electronic trading companies in the world, mostly C++ software engineers, and research in Python.
While many people still run their IDE on the dumb terminal (VSCode has pretty good SSH integration), people that use vim or the like work fully remotely through ssh.
I've also seen it elsewhere in the same industry. I've seen AWS workspaces, custom setups with licensed proprietary or open-source tech, fully dedicated instances or kubernetes pods.. All managed in a variety of ways but the idea remains the same: you log into a remote machine to do all of your work, and can't do anything without a reliable low-latency connection.
All of the big clouds have regions throughout the world so you should be able to find one less than 100ms away fairly easily.
Then realistically in any company you'll need to interact with services and data in one specific location, so maybe it's better to be colocated there instead.
I've been struggling with this topic a lot. I feel the slowness and the productivity loss of a slow computer every day: 30 minutes for something that could take a tenth of that... it's horrible.
It is true, but also funny to think back on how slow computers used to be. Even the run-of-the-mill cheap machines today are like a million times faster than supercomputers from the 70s and 80s. We’ve always had the issue that we have to wait for our computers, even though for desktop personal computers there has been a speedup of like seven or eight orders of magnitude over the last 50 years. It could be better, but that has always been true. The things we ask computers to do grows as fast as the hardware speeds up. Why?
So, in a way, slow computers are always a software problem, not a hardware problem. If we always wrote software to be as performant as possible, and if we only ran things that were within the capability of the machine, we’d never have to wait. But we don’t do that; good optimization takes a lot of developer time, and being willing to wait a few minutes nets me computations that are a couple orders of magnitude larger than what the machine can do in real time.
To be fair, things have improved on average. Wait times are reduced for most things. Not as fast as hardware has sped up, but it is getting better over time.
* "people" generally don't spend their time compiling the Linux kernel, or anything of the sort.
* For most daily uses, current-gen CPUs are only marginally faster than two generations back. Not worth spending a large amount of money every 3 years or so.
* Other aspects of your computer, like memory (capacity mostly) and storage, can also be perf bottlenecks.
* If, as a developer, you're repeatedly compiling a large codebase - what you may really want is a build farm rather than the latest-gen CPU on each developer's individual PC/laptop.
Just because it doesn't match your situation, doesn't make it a silly argument.
Even though I haven't compiled a Linux kernel for over a decade, I still waste a lot of time compiling. On average, each week I have 5-6 half hour compiles, mostly when I'm forced to change base header files in a massive project.
This is CPU bound for sure - I'm typically using just over half my 64GB RAM and my development drives are on RAIDed NVMe.
I'm still on a Ryzen 7 5800X, because that's what my client specified they wanted me to use 3.5 years ago. Even upgrading to the (already 3 years old) 5950X would be a drop-in replacement and double the core count, so I'd expect about double the performance (although maybe not quite, as there may be increased memory contention). At current prices for that CPU, the upgrade would pay for itself within 1-2 weeks.
The reason I don't upgrade is policy - my client specified this exact CPU so that my development environment matches their standard setup.
The build farm argument makes sense in an office environment where the majority of developer machines are mostly idle most of the time. It's completely unsuitable for remote working situations where each developer has a single machine and latency and bandwidth to shared resources are poor.
I work in game development. All the developers typically have the same spec machine, chosen at the start of the project to be fairly high end with the expectation that when the project ships it'll be a roughly mid range spec.
Or, perhaps, make it easier to run your stuff on a big machine over -> there.
It doesn't have to be the cloud, but having a couple of ginormous machines in a rack where the fans can run at jet engine levels seems like a no-brainer.
Yeah. I would say, do get a better CPU, but do also research a bit deeper and really get a better CPU. Threadrippers are borderline workstation, too, though, esp. the pro SKUs.
Apple still has quite atrocious performance per $. So it economically makes sense for a top end developer or designer, but perhaps not the entire workforce let alone the non-professional users, students etc.
> Almost all build guides will say ‘get midrange cpu X over high end chip Y and put the savings to a better GPU’.
I think currently, that build guide doesn't apply based on what's going on with GPUs. Was valid in the past, and will be valid in the future, I hope!
Some people see a company policy as something meant to be exploited until a hidden limit is reached.
There also starts to be some soft fraud at scales higher than you’d imagine: When someone could get a new laptop without questions, old ones started “getting stolen” at a much higher rate. When we offered food delivery for staying late, a lot of people started staying just late enough for the food delivery to arrive while scrolling on their phones and then walking out the door with their meal.
Just like with "policing", I'd focus on uncovering and dealing with abusers after the fact rather than putting controls on everyone; giving most people the "benefits" without friction is what makes them feel valued.
I don't know what the hell you mean by the term unreasonable. Are you under the impression that investment banking analysts do not think they will have to work late before they take the role?
I've been at startups where there's sometimes late night food served.
I've never been at a startup where there was an epidemic about lying about stolen hardware.
Staying just late enough to order dinner on the company, and theft by the employee of computer hardware plus lying about it, are not in the same category and do not happen with equal frequency. I cannot believe the parent comment presented these as the same, and is being taken seriously.
none of it is good lol
GP was talking about salaried employees, who are legally exempt from overtime pay. There is no rigid 40-hour ceiling for salaried work.
Salary compensation is typical for white-collar employees such as analysts in investment banking and private equity, associates at law firms, developers at tech startups, etc.
Not an expert here, but from what I heard, that would be a bargain for a good office chair. And having a good chair or not - you literally feel the difference.
But also, when I tell one of my reports to spec and order himself a PC, there should be several controls in place.
First, I should give clear enough instructions that they know whether they should be spending around $600, $1500, or $6000.
Second, although my reports can freely spend ~$100 no questions asked, expenses in the $1000+ region should require my approval.
Third, there is monitoring of where money is going; spending where the paperwork isn't in order gets flagged and checked. If someone with access to the company Amazon account gets an above-ground pool shipped to their home, you can bet there will be questions to answer.
It’s like your friend group taking forever to choose a place to eat. It’s not your friends, it’s the law of averages.
Ehh. Neither of these is soft fraud. The former is outright law-breaking; the latter… is fine. They stayed as late as they were supposed to.
This is the soft fraud mentality: If a company offers meal delivery for people who are working late who need to eat at the office and then people start staying late (without working) and then taking the food home to eat, that’s not consistent with the policies.
It was supposed to be a consolation if someone had to (or wanted to, as occurred with a lot of our people who liked to sleep in) stay late to work. It was getting used instead for people to avoid paying out of pocket for their own dinners even though they weren’t doing any more work.
Which is why we can’t have nice things: People see these policies as an opportunity to exploit them rather than use them as intended.
Note that employers do this as well. A classic one is a manager setting a deadline that requires extreme crunches by employees. They're not necessarily compensating anyone more for that. Are the managers within their rights? Technically. The employees could quit. But they're shaving hours, days, and years off of employees without paying for it.
If a company policy says you can expense meals when taking clients out, but sales people started expensing their lunches when eating alone, it’s clearly expense fraud. I think this is obvious to everyone.
Yet when engineers are allowed to expense meals because they’re working late and eating at the office, and people who are neither working late nor eating at the office start expensing their meals, that’s expense fraud too.
These things are really not a gray area. It seems more obvious when we talk about salespeople abusing budgets, but there’s a blind spot when we start talking about engineers doing it.
Engineers are very highly paid. Many are paid more than $100/hr if you break it down. If a salaried engineer paid the equivalent of $100/hr stays late doing anything, expenses a $25 meal, and during the time they stay late you get the equivalent of 20 minutes of work out of them- including in intangibles like team bonding via just chatting with coworkers or chatting about some bug- then the company comes out ahead.
That you present the above as "expense fraud" is a fundamentally penny-wise, pound-foolish way to look at running a company. Like you say, it's not really a gray area. It's a feature, not a bug.
Luckily that comes down to the policy of the individual company and is not enforced by law. I am personally happy to pay engineers more so they can buy this sort of thing themselves and we don't open the company to this sort of abuse. Then it's a known cost and the engineers can decide for themselves if they want to spend that $30 on a meal or something else.
This isn’t about fraud anymore. It’s about how suspiciously managers want to view their employees. That’s a separate issue (but not one directed at employees).
This is why I call it the soft fraud mentality: When people see some fraudulent spending and decide that it’s fine because they don’t think the policy is important.
Managers didn’t care. It didn’t come out of their budget.
It was the executives who couldn’t ignore all of the people hanging out in the common areas waiting for food to show up and then leaving with it all together, all at once. Then nothing changed after the emails reminding them of the purpose of the policy.
When you look at the large line item cost of daily food delivery and then notice it’s not being used as intended, it gets cut.
As you mentioned, setting policy that isn’t abused is hard. But abuse isn’t fraud—it’s abuse—and abuse is its own rabbit hole that covers a lot of these maladaptive behaviors you are describing.
I call the meal expense abuse “soft fraud” because people kind of know it’s fraud, but they think it’s small enough that it shouldn’t matter. Like the “eh that’s fine” commenter above: They acknowledged that it’s fraud, but also believe it’s fine because it’s not a major fraud.
If someone spends their employer’s money for personal benefit in a way that is not consistent with the policies, that is legally considered expense fraud.
There was a case local to me where someone had a company credit card and was authorized to use it for filling up the gas tank of the company vehicle. They started getting in the habit of filling up their personal vehicle’s gas tank with the card, believing that it wasn’t a big deal. Over the years their expenses weren’t matching the miles on the company vehicle and someone caught on. It went to court and the person was liable for fraud, even though the total dollar amount was low five figures IIRC. The employee tried to argue that they used the personal vehicle for work occasionally too, but personal mileage was expensed separately so using the card to fill up the whole tank was not consistent with policy.
I think people get in trouble when they start bending the rules of the expense policy thinking it’s no big deal. The late night meal policy confounds a lot of people because they project their own thoughts about what they think the policy should be, not what the policy actually is.
If you start trying to tease apart the motivations people have even if they are following those rules, you are going to end up more paranoid than Stalin.
Yes, but some also have a moral conscience and were brought up to not take more than they need.
If you are not one of these types of people, then not taking complete advantage of an offer like free meals probably seems like an alien concept.
I try to hire more people like this; it makes for a much stronger workforce when people are not all out to get whatever they can for themselves and look out for each other's interests more.
> So if you are astonished that people optimize for their financial gain, that’s concerning.
I’m not “surprised” nor “astonished” nor do you need to be “concerned” for me. That’s unnecessarily condescending.
I’m simply explaining how these generous policies come to an end through abuse.
You are making a point in favor of these policies: Many will see an opportunity for abuse and take it, so employers become more strict.
peanuts compared to their 500k TC
I do think a lot of this comment section is assuming $500K TC employees at employers with infinite cash to spend, though.
Two, several tens of thousands are in the 5%-10% range. Hardly "peanuts". But I suppose you'll be happy to hear "no raise for you, that's just peanuts compared to your TC", right?
I am 100x more expensive than the laptop. Anything the laptop can do instead of me is something the laptop should be doing instead of me.
You're underestimating the scope of time lost by losing a few percent in productivity per employee across hundreds of thousands of employees.
You want speed limits not speed bumps. And they should be pretty high limits...
After I saw the announcement, I immediately knew I needed to try out our workflows on the new architecture. There was just no way that we wouldn't have x86_64 as an implicit dependency all throughout our stack. I raised the issue with my manager and the corporate IT team. They acknowledged the concern but claimed they had enough of a stockpile of new Intel machines that there was no urgency and engineers wouldn't start to see the Apple Silicon machines for at least another 6-12 months.
Eventually I do get allocated a machine for testing. I start working through all the breakages but there's a lot going on at the time and it's not my biggest priority. After all, corporate IT said these wouldn't be allocated to engineers for several more months, right? Less than a week later, my team gets a ticket from a new-starter who has just joined and was allocated an M1 and of course nothing works. Turns out we grew a bit faster than anticipated and that stockpile didn't last as long as planned.
It took a few months before we were able to fix most of the issues. In that time we ended up having to scavenge under-specced machines from people in non-technical roles. The amount of completely avoidable productivity wasted on people swapping machines would have easily reached into the person-years. And of course myself and my team took the blame for not preparing ahead of time.
Budgets and expenditure are visible and easy to measure. Productivity losses due to poor budgetary decisions, however, are invisible and extremely difficult to measure.
> And of course myself and my team took the blame for not preparing ahead of time.
If your initial request was not logged and then able to be retrieved by yourself in defence, then I would say something is very wrong at your company.
For a single person, slight improvements added up over regular, e.g., daily or weekly, intervals compound to enormous benefits over time.
XKCD: https://xkcd.com/1205/
Saving 1 second/employee/day can quickly be worth $10+/employee/year (or even several times that). But you rarely see companies optimizing their internal processes based on that kind of perceived benefit.
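Rough arithmetic behind that number, assuming ~230 working days a year and a fully loaded cost of ~$150/hour (both assumptions, adjust to taste):

    # Value of saving 1 second per employee per day (all inputs assumed).
    seconds_per_day, workdays, loaded_cost_per_hour = 1, 230, 150
    hours_saved = seconds_per_day * workdays / 3600
    print(f"~${hours_saved * loaded_cost_per_hour:.2f} per employee per year")
    # -> roughly $10; scale linearly for bigger savings or bigger teams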
Water cooler placement in a cube farm comes to mind as a surprisingly valuable optimization problem.
That seems unreasonably short. My work computer is 10 years old (which is admittedly the other extreme, and far past the lifecycle policy, but it does what I need it to do and I just never really think about replacing it).
We managed to just estimate the lost time and management (in a small startup) was happy to give the most affected developers (about 1/3) 48GB or 64GB MacBooks instead of the default 16GB.
At $100/hr minimum (assuming lost work doesn't block anyone else) it doesn't take long for the upgrades to pay off. The most affected devs were waiting an hour a day sometimes.
This applies to CI/CD pipelines too; it's almost always worth increasing worker CPU/RAM while the reduction in time is scaling anywhere close to linearly, especially because most workers are charged by the minute anyway.
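A minimal payback sketch with made-up but plausible numbers (the upgrade premium, wait time, and per-minute CI prices below are assumptions, not the actual figures from that startup):

    # Laptop upgrade payback (inputs assumed for illustration).
    upgrade_premium = 800        # extra cost of the bigger-RAM model
    dev_cost_per_hour = 100      # figure from the comment above
    hours_waiting_per_day = 1.0  # "waiting an hour a day sometimes"
    days = upgrade_premium / (dev_cost_per_hour * hours_waiting_per_day)
    print(f"pays for itself in ~{days:.0f} working days")

    # CI side: a worker that costs 2x per minute but halves the pipeline
    # is cost-neutral on compute and still halves developer wait time.
    old_minutes, old_rate = 30, 0.008   # assumed per-minute price
    new_minutes, new_rate = 15, 0.016
    print(f"compute cost: {old_minutes * old_rate:.2f} vs {new_minutes * new_rate:.2f}")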
I have a whisper transcription module running at all times on my Mac. Often, I'll have a local telemetry service (langfuse) to monitor the 100s of LLM calls being made by all these models. With AI development it isn't uncommon to have multiple background agents hogging compute. I want each of them to be able to independently build, host, and test their changes. The compute load adds up quickly. And I would never push agent code to a cloud env (not even a preview env) because I don't trust them like that and neither should you.
Anything below an M4 pro 64GB would be too weak for my workflow. On that point, Mac's unified VRAM is the right approach in 2025. I used windows/wsl devices for my entire life, but their time is up.
This workflow is the first time I have needed multiple screens. Pre-agentic coding, I was happy to work on a 14 inch single screen machine with standard thinkpad x1 specs. But, the world has changed.
AMD's Strix Halo can have up to 128GB of unified RAM, I think. The bandwidth is less than half the Mac one, but it's probably going to accelerate.
Windows doesn't inherently care about this part of the hardware architecture.
Google and Facebook I don't think are cheap for developers. I can speak firsthand for my past Google experience. You have to note that the company has like 200k employees and there needs to be some controls and not all of the company are engineers.
Hardware -> for the vast majority of stuff, you can build with blaze (think bazel) on a build cluster and cache, so local CPU is not as important. Nevertheless, you can easily order other stuff should you need to. Sure, if you go beyond the standard issue, your cost center will be charged and your manager gets an email. I don't think any decent manager would block you. If they do, change teams. Some powerful hardware that needs approval is blanket whitelisted for certain orgs that recognize such need.
Trips -> Google has this interesting model where you have a soft cap for trips, and if you don't hit the cap, you pocket half of the unspent trip credit in your account, which you can choose to spend later when you are over cap or want something slightly nicer the next time. Also, they have clear and sane policies on mixing personal and corporate travel. I encourage everyone to learn about and deploy things like that in their companies. The caps are usually not unreasonable, but if you do hit them, it is again an email to your management chain, not some big deal. Never seen it blocked. If your request is reasonable and your manager is shrugging about this stuff, that should reflect on them being cheap, not the company policy.
Apple have long thought that 8GB of RAM is good enough for anything, and will continue to think so for some time yet.
I read Google is now issuing Chromebooks instead of proper computers to non-engineers, which has got to be corrosive to productivity and morale.
"AI" (Plus) Chromebooks?
They eventually became so cheap they blanket paused refreshing developer laptops...
Proper ergo is a cost-conscious move. It helps keep your employees able to work, which saves on hiring and training. It reduces medical expenses, which affects the bottom line because large companies are usually self-insured; they pay a medical insurance company only to administer the plan, not for insurance --- claims are paid from company money.
All this at my company would be a call or chat to the travel agent (which, sure, kind of a pain, but they also paid for dedicated agents so wait time was generally good).
I have a pretty high end MacBook Pro, and that pales in comparison to the compute I have access to.
Sure, I’ve stopped using em-dashes just to avoid the hassle of trying to educate people about a basic logical fallacy, but I reserve the right to be salty about it.
1) Em-dashes
2) "It's not X, it's Y" sentence structure
3) Comma-separated list that's exactly 3 items long
>3) Comma-separated list that's exactly 3 items long
Proper typography and hamburger paragraphs are canceled now because of AI? So much for what I learned in high school English class.
>2) "It's not X, it's Y" sentence structure
This is a pretty weak point because it's n=1 (you can check OP's comment history and it's not repeated there), and that phrase is far more common in regular prose than some of the more egregious ones (eg. "delve").
Don’t worry, they’ll tell you
Equality doesn't have to mean uniformity.
Some people would minimize the amount spent on their core hardware so they had money to spend on fun things.
So you’d have to deal with someone whose 8GB RAM cheap computer couldn’t run the complicated integration tests but they were typing away on a $400 custom keyboard you didn’t even know existed while listening to their AirPods Max.
I've been on teams where corporate hardware is all max spec, 4-5 years ahead of common user hardware, and the provided phones are all flagships replaced every two years. The product works great for corporate users, but not for users with earthly budgets. And they wonder how competitors swallow the market in low-income countries.
At one place I had a $25 no question spending limit, but sank a few months trying to buy a $5k piece of test equipment because somebody thought maybe some other tool could be repurposed to work, or we used to have one of those but it's so old the bandwidth isn't useful now, or this project is really for some other cost center and I don't work for that cost center.
Turns out I get paid the same either way.
Where did this idea about spiting your fellow worker come from?
I think you wanted to say "especially". You're exchanging clearly measurable amounts of money for something extremely nebulous like "developer productivity". As long as the person responsible for spend has a clear line of view on what devs report, buying hardware is (relatively) easy to justify.
Once the hardware comes out of a completely different cost center - a 1% savings for that cost center is promotion-worthy, and you'll never be able to measure a 1% productivity drop in devs. It'll look like free money.
I feel like that's the wrong approach. It's like telling a music producer to always work with horrible (think car or phone) speakers. True, you'll get a better mix and master if you test it on the speakers you expect others to hear it through, but no one sane recommends defaulting to them for day-to-day work.
Same goes for programming, I'd lose my mind if everything was dog-slow, and I was forced to experience this just because someone thinks I'll make things faster for them if I'm forced to have a slower computer. Instead I'd just stop using my computer if the frustration ended up larger than the benefits and joy I get.
Likewise, if you're developing an application where performance is important, setting a hardware target and doing performance testing on that hardware (even if it's different from the machines the developers are using) demonstrably produces good results. For one, it eliminates the "it runs well on my machine" line.
I get the sentiment, but taken literally it's counterproductive. If the business cares about perf, put it in the sprint planning. But they don't. You'll just be writing more features with more personal pain.
For what its worth, console gamedev has solved this. You test your game on the weakest console you're targeting. This usually shakes out as a stable perf floor for PC.
Engineers and designers should compile on the latest hardware, but the execution environment should be capped at the 10th percentile compute and connectivity at least one rotating day per week.
Employees should be nudged to rotate between Android and iOS on a monthly or so basis. Gate all the corporate software and ideally some perks (e.g. discounted rides as a ride-share employee) so that you have to experience both platforms.
Efficiency is a good product goal: Benchmarks and targets for improvement are easy to establish and measure, they make users happy, thinking about how to make things faster is a good way to encourage people to read the code that's there, instead of just on new features (aka code that's not there yet)
However, they don't sell very well: your next customer is probably not going to be impressed that your latest version is 20% faster than the last version they also didn't buy. This means that unless you have enough happy customers, you are going to have a hard time convincing yourself that I'm right, and you're going to continue to look for backhanded ways of making things better.
But reading code, and re-reading code is the only way you can really get it in your brain; it's the only way you can see better solutions than the compiler, and it's the only way you remember you have this useful library function you could reuse instead of writing more and more code; It's the only guaranteed way to stop software bloat, and giving your team the task of "making it better" is a great way to make sure they read it.
When you know what's there, your next feature will be smaller too. You might even get bonus features by making the change in the right place, instead of as close to the user as possible.
Management should be able to buy into that if you explain it to them, and if they can't, maybe you should look elsewhere...
> a much slower machine
Giving everyone laptops is also one of those things: They're slow even when they're expensive, and so developers are going to have to work hard to make things fast enough there, which means it'll probably be fine when they put it on the production servers.
I like having a big desktop[1] so my workstation can have lots of versions of my application running, which makes it a lot easier to determine which of my next ideas actually makes things better.
[1]: https://news.ycombinator.com/item?id=44501119
Using the best/fastest tools I can is what makes me faster, but my production hardware (i.e. the tin that runs my business) is low-spec because that's cheaper, and higher-spec doesn't have a measurable impact on revenue. But I measure this, and I make sure I'm always moving forward.
Software should be performance tested, but you don't want a situation where the time of a single iteration is dominated by the duration of functional tests and build time. The faster software builds and tests, the quicker solutions get delivered. If giving your developers 64GB of RAM instead of 32GB halves test and build time, you should happily spend that money.
Sure, occasionally run the software on the build machine to make sure it works on beastly machines; but let the developers experience the product on normal machines as the usual.
If developers are frustrated by compilation times on last-generation hardware, maybe take a critical look at the code and libraries you're compiling.
And as a sibling comment notes, absolutely all testing should be on older hardware, without question, and I'd add with deliberately lower-quality and -speed data connections, too.
(If you're building with Ninja, CMake can limit the parallelism of the link tasks to avoid this issue: IIRC you declare a job pool via the JOB_POOLS global property and point CMAKE_JOB_POOL_LINK at it.)
I’m probably thinking of various other packages, since at the time I was all-in on Gentoo. I distinctly remember trying to get distcc running to have the other computers (a Celeron 333 MHz and Pentium III 550 MHz) helping out for overnight builds.
Can’t say that I miss that, because I spent more time configuring, troubleshooting, and building than using, but it did teach me a fair amount about Linux in general, and that’s definitely been worth it.
I spent a few grand building a new machine with a 24-core CPU. And, while my gcc Docker builds are MUCH faster, the core Angular app still builds a few seconds slower than on my years old MacBook Pro. Even with all of my libraries split into atoms, built with Turbo, and other optimizations.
6-10 seconds to see a CSS change make its way from the editor to the browser is excruciating after a few hours, days, weeks, months, and years.
It blew my mind. Truly this is more complicated than trading software.
I would agree with the idea that faster compile times can significantly improve productivity. 30s is long enough for a developer to get distracted and go off and check their email, look at social media, etc. Basically, turning 30s into 3s can keep a developer in flow.
The critical thing we’re missing here is how increasing the CPU speed will decrease the compile time. What if the compiler is IO bound? Or memory bound? Removing one bottleneck will get you to the next bottleneck, not necessarily get you all the performance gains you want
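One cheap way to find out which bottleneck you're hitting: compare the build's wall-clock time with the CPU time its child processes actually used. A sketch assuming a Unix-like system and a make-based build (swap in whatever your build command is):

    # Rough CPU-bound vs IO/memory-bound check for a build command.
    # High utilization -> faster/more cores help; low -> look at IO, RAM, serialization.
    import os, resource, subprocess, time

    cmd = ["make", "-j", str(os.cpu_count())]   # assumed build command
    start = time.monotonic()
    subprocess.run(cmd, check=True)
    wall = time.monotonic() - start

    usage = resource.getrusage(resource.RUSAGE_CHILDREN)
    cpu = usage.ru_utime + usage.ru_stime
    cores = os.cpu_count()
    print(f"wall {wall:.1f}s, cpu {cpu:.1f}s, "
          f"utilization {cpu / (wall * cores):.0%} of {cores} cores")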
I think just having LSP give you answers 2x faster would be great for staying in flow.
Applies to git operations as well.
The days when 30 seconds pauses for the compiler was the slowest part are long over.
Now in the tfa they compare laptop to desktop so I guess the title should be “you should buy two computers”
The limiting factor on high-end laptops is their thermal envelope. Get the better CPU as long as it is more power efficient. Then get brands that design proper thermal solutions.
Single thread performance of 16-core AMD Ryzen 9 9950X is only 1.8x of my poor and old laptop's 4-core i5 performance. https://www.cpubenchmark.net/compare/6211vs3830vs3947/AMD-Ry...
I'm waiting for >1024 core ARM desktops, with >1TB of unified gpu memory to be able to run some large LLMs with
Ping me when someone builds this :)
Also got a Mac Mini M4 recently and that thing feels slow in comparison to both these systems - likely more of a UI/software thing (only use M4 for xcode) than being down to raw CPU performance.
[0] https://www.cpubenchmark.net/compare/Intel-i9-9900K-vs-Intel...
I wish that were true, but the current Ryzen 9950 is maybe 50% faster than the two generations older 5950, at compilation workloads.
It's not 3x, but it's most certainly above 1.3x. Average for compilation seems to be around 1.7-1.8x.
> If you can justify an AI coding subscription, you can justify buying the best tool for the job.
I personally can justify neither, but not seeing how one translates into another: is a faster CPU supposed to replace such a subscription? I thought those are more about large and closed models, and that GPUs would be more cost-effective as such a replacement anyway. And if it is not, it is quite a stretch to assume that all those who sufficiently benefit from a subscription would benefit at least as much from a faster CPU.
Besides, usually it is not simply "a faster CPU": sockets and chipsets keep changing, so that would also be a new motherboard, new CPU cooler, likely new memory, which is basically a new computer.
[0] https://www.cpubenchmark.net/
[1] https://www.tomshardware.com/pc-components/cpus
There are a lot of jobs that should run in a home server running 24/7 instead of abusing your poor laptop. Remote dedicated servers work, but the latency is killing your productivity, and it is pricey if you want a server with a lot of disk space.
As an aside, being on my old laptop with its hard drive, can't believe how slow life was before SSDs. I am enjoying listening to the hard drive work away and I am surprised to realize that I missed it.
If you have a bad machine get a good machine, but you’re not going to get a significant uplift going from a good machine that’s a few years old to the latest shiny
"Public whipping for companies who don't parallelize their code base" would probably help more. ;)
Anyway, how many seconds does MS Teams need to boot on a top of the line CPU?
Besides the ridiculously laggy interface, it has some functional bugs as well, such as things just disappearing for a few days and then popping up again.
Video processing, compression, games, etc. Anything computationally heavy directly benefits from it.
Here's my starting point: gmktec.com/products/amd-ryzen™-ai-max-395-evo-x2-ai-mini-pc. Anything better?
Also, I've got a gmktec here (cheaper one playing thin client) and it's going to be scrapped in the near future because the monitor connections keep dropping. Framework make a 395 max one, that's tempting as a small single machine.
I'm not very enamoured with distcc style build farms (never seem to be as fast as one hopes and fall over a lot) or ccache (picks up stale components) so tend to make the single dev machine about as fast as one can manage, but getting good results out of caching or distribution would be more cash-efficient.
The absolute best would be a 9005 series Threadripper, but you will easily be pushing $10K+. The mainstream champ is the 9950X but despite being technically a mobile SOC the 395 gets you 90% of the real world performance of a 9950X in a much smaller and power efficient computer:
https://www.phoronix.com/review/amd-ryzen-ai-max-arrow-lake/...
Get the 128GB model for (currently) 1999 USD and you can play with running big local LLMs too. The 8060 iGPU is roughly equivalent to a mid-level Nvidia laptop GPU, so it's plenty for a normal workload, and some decent gaming or equivalent if needed.
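If you go that route, here's a minimal local-LLM sketch using the llama-cpp-python bindings; the model file, context size, and offload setting are placeholders, and on a 395 you'd presumably want a Vulkan or ROCm build of llama.cpp to actually use the iGPU:

    # Minimal sketch with llama-cpp-python (pip install llama-cpp-python).
    from llama_cpp import Llama

    llm = Llama(
        model_path="models/some-70b-q4.gguf",  # hypothetical GGUF file
        n_ctx=8192,
        n_gpu_layers=-1,  # offload as much as the backend supports
    )
    out = llm("What does unified memory buy you for local inference?", max_tokens=200)
    print(out["choices"][0]["text"])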
There are also these which look similar https://www.bee-link.com/products/beelink-gtr9-pro-amd-ryzen...
Maybe that’s an AMD (or even Intel) thing, but doesn’t hold for Apple silicon.
I wonder if it holds for ARM in general?
For AMD/Intel laptop, desktop and server CPUs usually are based on different architectures and don’t have that much overlap.
Core count used to be a big difference, but the ARM processors in the Apple machines certainly match the lower-end workstation parts now. To exceed them you're spending big money to get high core counts in the x86 space.
Proper desktop processors have lots and lots of PCI-E Lanes. The current cream of the crop Threadripper Pro 9000 series have 128 PCI-E 5.0 Lanes. A frankly enormous amount of fast connectivity.
M2 Ultra, the current closest workstation processor in Apple's lineup (at least in a comparable form factor in the Mac Pro) has 32 lanes of PCI-E 4.0 connectivity that's enhanced by being slotted into a PCI-E Switch fabric on their Mac Pro. (this I suspect is actually why there hasn't been a rework of the Mac Pro to use M3 Ultra - that they'll ditch the switch fabric for direct wiring on their next one)
Memory bandwidth is a closer thing to call here - using the Threadripper pro 9000 series as an example we have 8 channels of 6400MT/s DDR5 ECC. According to kingston the bus width of DDR5 is 64b so that'll get us ((6400 * 64)/8) = 51,200MB/s per channel; or 409.6 GB/s when all 8 channels are loaded.
On the M4 Max the reported bandwidth is 546 GB/s, but I'm not so certain how this is calculated, as the maths doesn't quite stack up from the information I have (8533 MT/s and a 64b bus width point towards 68,264MB/s per channel; the reported speed doesn't neatly slot into those numbers).
In short the memory bandwidth bonus workstation processors traditionally have is met by the M4 Max, but PCI-E Extensibility is not.
In the mac world though that's usually not a problem as you're not able to load up a Mac Pro with a bunch of RTX Pro 6000s and have it be usable in MacOS. You can however load your machine with some high bandwidth NICs or HBAs i suppose (but i've not seen what's available for this platform)
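On the 546 GB/s number above: the maths does line up if you assume the M4 Max has an effective 512-bit (8 x 64-bit) LPDDR5X interface; that bus width is my assumption here, not a figure I've seen Apple publish:

    # Peak theoretical bandwidth = transfers/s * bus width in bytes.
    def peak_gb_s(mts, bus_bits, channels=1):
        return mts * (bus_bits / 8) * channels / 1000  # GB/s, decimal units

    print(peak_gb_s(6400, 64, channels=8))  # Threadripper PRO: 409.6 GB/s
    print(peak_gb_s(8533, 512))             # M4 Max, assuming a 512-bit bus: ~546 GB/s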
Considering ‘Geekbench 6’ scores, at least.
So if it’s not a task massively benefiting from parallelization, buying used is still the best value for money.
> we're also getting more done with each clock cycle than ever before.
Care to provide some data?
And an evergreen bit of advice. Nothing new to see here, kids, please move along!
https://news.ycombinator.com/item?id=44985323
In which movie? "Microsoft fried movie"? Cloud sucks big time. Not all engineers are web developers.
But I would never say no to a faster CPU!
This is an "office" CPU. Workstation CPUs are called Epyc.
Certainly not ahead of the curve when considering server hardware.
Server hardware is not very portable. Reserving a c7i.large is about $0.14/hour, which would equal the cost of an MBP M3 64GB in about two years.
Apple have made a killer development machine; I say this as a person who does not like Apple or macOS.
It's not like objective benchmarks disproving these sort of statements don't exist.