Article is interesting on the whole (I have no experience with "professional" work, and would love suggestions on how to become more familiar with it), but I latched onto this nugget:
> Our vision at Meanwhile is to build the world's largest life insurer as measured by customer count, annual premiums sold, and total assets under management. We aim to serve a billion people, using digital money to reach policyholders and automation/AI to serve them profitably. We plan to do with 100 people what Allianz and others do with 100,000.
Completely separate from the potential ethical issues and economic implications of putting 100k people out of a job, I see one very concrete moral problem:
that the only way to provide dispute resolution and customer service to 1B people with only 100 employees is by depriving them of any chance to interact with a human, and forcing all interaction with the company to go through AI.
That, to me, is deeply disturbing, and very very difficult to justify.
>> that the only way to provide dispute resolution and customer service to 1B people with only 100 employees is by depriving them of any chance to interact with a human.
Real world evidence supporting your argument:
United Health Group is currently embroiled in a class action lawsuit pertaining to using AI to auto-deny health care claims and procedures:
The plaintiffs are members who were denied benefit coverage. They claim in the lawsuit that the use of AI to evaluate claims for post-acute care resulted in denials, which in turn led to worsening health for the patients and in some cases resulted in death.
They said the AI program developed by UnitedHealth subsidiary naviHealth, nH Predict, would sometimes supersede physician judgement, and has a 90% error rate, meaning nine of 10 appealed denials were ultimately reversed.
https://www.healthcarefinancenews.com/news/class-action-laws...
feature, not bug
working as intended, closing ticket
> 90% error rate, meaning nine of 10 appealed denials were ultimately reversed.
This is a fantastic illustration of selection bias. It stands to reason that truly unjustified denials (the hidden variable) would be appealed at a higher rate, and therefore the true error rate is something less than 90%.
That's not to say UHG is without blame; I just thought this was really interesting.
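That effect is easy to demonstrate with a toy simulation (all numbers here are invented for illustration): a denial process with a 30% true error rate looks like a ~90% error rate if you only measure appealed cases.

    import random

    random.seed(0)
    N = 100_000                    # denials issued
    TRUE_ERROR_RATE = 0.30         # assume 30% of denials are actually wrong

    wrongful = [random.random() < TRUE_ERROR_RATE for _ in range(N)]

    # The hidden variable: wrongful denials get appealed far more often.
    P_APPEAL_IF_WRONG = 0.60
    P_APPEAL_IF_RIGHT = 0.02

    appealed = [w for w in wrongful
                if random.random() < (P_APPEAL_IF_WRONG if w else P_APPEAL_IF_RIGHT)]

    # Assume appeals themselves are decided correctly, so wrongful denials get reversed.
    print(f"true error rate among all denials: {TRUE_ERROR_RATE:.0%}")
    print(f"reversal rate among appeals:       {sum(appealed) / len(appealed):.0%}")  # ~93%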
Your scientific take is useful in the case where selection bias is unavoidable and needs to be corrected for.
This case is not like that; if the insurance company wants to dispute the 90% false denial rate, it would be trivial for them to take a random sample of _all_ cases, go through the appeal process for those, and publish the resulting number without selection bias.
As long as that doesn't happen, the most logical conclusion for us outside observers is: the number is probably not so much lower than 90% that it makes a difference.
The insurance company may well have already done that; this claim is being put forward by someone who is suing them and looking for reasons to say the AI bot is bad. The article is silent on the company's response to the accusation and, realistically, we'd expect appealed denials to have a very high error rate whether the decisions were made by bots or by humans. Few people indeed are going to waste time arguing a hopeless case against an insurance company - this is classic selection bias.
What do you think the claim approval rate is? Less than 10%?
It stands to reason that the overwhelming majority of cases where the claim was approved were approved correctly. Unless the approval rate is well under 15%, it’s impossible to have the claimed “90% error rate”.
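Back-of-envelope, with assumed round numbers:

    decisions = 1000
    approvals = 850                  # assume most claims are approved, and approved correctly
    denials = decisions - approvals

    # Even if every single denial were wrong, the overall error rate is capped:
    max_error_rate = denials / decisions
    print(max_error_rate)            # 0.15 -- nowhere near 90%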
Right, but in this case the critical service isn't providing "health" for users, it's extracting profit from them (from the transactions) for the shareholders. THAT'S the critical service this company cybernetically fulfills.
Seems to me that the use of AI is irrelevant[1], and the real problem is the absurd error rate.
[1] In the sense of "it doesn't matter if it caused the problem", rather than "it probably didn't have any effect". Because after all, "to err is human, but to really foul things up takes a computer".
We'll send the appeals through Mechanical Turk.
Happy now?
AI adjudication of healthcare is fine, but there need to be extremely steep consequences for false negatives and a truly independent board of medical experts to appeal to. If a large panel agrees the denial was wrong, a penalty of 10-100x the cost of the procedure would be assessed, depending on the consequences of the denial.
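A toy expected-value model of that incentive, with invented probabilities, shows why the multiplier matters:

    claim_cost = 10_000
    p_wrongful = 0.30        # chance a given denial is wrongful
    p_caught = 0.50          # chance an independent panel reverses a wrongful denial

    for multiplier in (1, 10, 100):
        expected_penalty = p_wrongful * p_caught * multiplier * claim_cost
        verdict = "denying is cheaper" if expected_penalty < claim_cost else "paying is cheaper"
        print(f"{multiplier:>3}x penalty: expected cost of denying {expected_penalty:>9,.0f} ({verdict})")

At 1x, denying borderline claims is still the profitable move; somewhere between 10x and 100x it stops being one, which is presumably the point of the proposed range.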
No one is going to accept a claim rejection from AI. Everyone will want to dispute, which will have to go to a human to review. At the end of the day I don’t see how 100 people is realistic.
This reaction is primarily an emotional one. Why is a human rejecting a claim better than an AI rejecting a claim? Presumably the AI will one day -- if not today -- be more accurate in following decisioning logic than humans, who will continue to make human errors.
If that were true then they would also dispute every first-line human review. I don't think the average first-line human customer service rep is any better than AI even today.
I don't think there's an ethical responsibility to worry about your competitor's labor. That would lead to stagnation and its own sort of ethical issues.
I don't think it's as easy as hand-waving it away as "your competitor's labor". Your competitor's labor is your community, it's people. I believe we all have an ethical responsibility to that.
For the points you brought up, why is stagnation for the purposes of upholding an ethical position a bad thing?
And yes, by definition, worrying about ethical responsibility would lead to ethical issues. That's the whole point.
So should we all be farming and collecting berries? Most advancements since then have put people out of jobs at "competitors" that didn't adapt, yet the unemployment rate isn't 99.9%, even though we've displaced whole industries many times over the centuries. Obviously people move to better jobs and find other things to do. There's nothing particularly good about sitting at a computer denying people insurance all day, so why not have a computer do it?
If it is a choice between progress unfettered by concern for your "competitor's labor" or farming berries, I choose berries.
However, I believe there's a middle ground and endeavor to find it. Based on your response it doesn't appear as though you believe a middle ground exists.
> Completely separate from the potential ethical issues and economic implications of putting 100k people out of a job, […]
I’m pretty sure. Although, the original comment was basically putting that issue aside, so I’m not sure what there is to say about it.
You got that right, and yes, I was putting that issue aside, although my counterpoint to the GGP argument would be "the ethical issues aren't from the competitor's perspective; they're from the perspective of the workforce, industry, and/or economy as a whole".
The impact on the workforce, industry, and/or economy as a whole is a second-order effect of the real ethical issue: providing a worse service at a price so close to free that the market won't bear significantly better service provided by humans. As I see it, the ethical concerns are not about specific people being out of a job, but about setting an expectation that it's not worth providing real, useful service (using actual people) because doing so would cost more than phoning it in with AI.
I've had travel insurance from time to time and the consensus in online forums seems to be Allianz. But, in spite of anecdotal stories, relatively few people have any real-world experience with the claims process. So it's really hard to tell what the true story is, especially given that different people have different tolerances for out-of-pocket costs below the extreme amounts related to evacuation and the like.
The whole ugly turn of AI hypemen claiming it's somehow morally okay for everyone to lose their jobs all at once makes me think the Luddites were right all along.
Can we imagine a world where the claims are adjudicated by a disinterested party (as far as possible)? I don't want the insurance company to decide a contractual issue, that's ridiculous. At the moment they're kept honest by the law and by public opinion (which varies by country), but the principal-agent problem is too big to ignore.
My knee-jerk reaction is to think that the prospect of an insurance company handing support over to machines is a terrible development.
But it was already the case that they just arbitrarily do WTF ever they want, that outside a small set of actions that "bots" can perhaps handle fine they aren't going to do anything for you, and that the only way to get actual support for a real problem involves something being sent from a .gov email address or on frightening letterhead.
So... not really any different? You already basically have to threaten them (well, have someone scarier than you threaten them) to get any real support, this wouldn't be different.
And then they will add a low-cost arbitration clause, where disputes are also handled by AI. Free market goes brrr.
Dealing with a machine is unlikely to be worse.
My last few interactions with an insurance company were moderately annoying but far from terrible - I would absolutely loathe having those replaced by a machine, given the terrible quality of every AI "assistant" I've ever used.
Similarly, I was just forced to talk to an insurance company and the only way I got any response was by talking to a human. The more robotic they are, instead of working around known issues, the more likely we are to get to a satisfactory solution (e.g. don't overcharge me and then do nothing about it).
Right. I wouldn't say that my interactions with those people were great, but they weren't nearly as bad as any of the automated systems that I've used.
Also, I think you may have made a typo that negated the meaning of some of your comment (but I believe I can understand what you meant anyway).
While a human interaction can be awful, there's a special hellishness that is trying to negotiate with a robot to get something related to your healthcare taken care of.
It seems to me apparent that there needs to be some way to arbitrate claims outside the insurer itself. I'm... not sure that there is. But if there were, and there existed some sort of sanction or incentive for the insurer to get it right the first time... I'm confident that AI insurance companies could streamline the process. But you need this incentive mechanism, else it's a recipe for dystopia. (The deeper thought is that you would shift a lot of work to the arbiter, but I won't touch that for now.)
I don't see it as inherently a problem; AI can (theoretically) be a lot more fair in dealing with claims, and responds a lot sooner.
That said I suspect the founder is seriously overestimating the number of highly intelligent, competent people he can hire, and underestimating how much bureaucratic nonsense comes with insurance, but that's a problem he'll run into later down the road. Sometimes you have to hire three people with mediocre salaries because the sort of highly motivated competent person you want can't be found for the role.
> AI can (theoretically) be a lot more fair in dealing with claims
Respectfully, no it can't. From a Western perspective, specifically American, and from an average middle-class person's perspective, specifically American, it only appears to be fair.
However, LLMs are a codification of internet and written content, largely by English speakers for English speakers. There are <400m people in the US and ~8b in the world. The bias tilt is insane. At the margins, weird things happen that you would be otherwise oblivious to unless you yourself come from the margins.
I don't even think most Americans (except those trying to do the automating) would consider it to be fair.
AI is bias automation, and reflects the data it's trained on. The vast majority of training data is biased, even against different slices of Americans. The resulting AI will be biased.
> LLMs are a codification of internet and written content
Only true for pre-trained foundational models without any domain-specific augmentations. A good AI tool in this space would be fine-tuned or have other mechanisms that overshadow the pre-training from internet content.
On the other hand, once the claim is mishandled by AI, one can use the normal process to discover the juiced prompt and all the paper trail that comes with implementing it.
Why wouldn't they be? LLMs need a lot of content for training, and there's multiple orders of magnitude less to train on if you limited it to insurance-specific content, so you'd probably get a really crappy LLM. And training from scratch is really expensive anyway.
At best they'll be using fine tuned enterprise OpenAI / Anthropic models, more likely a regular model with a custom prompt.
Although I would still agree that there would need to be a mechanism for escalation to a human.
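For what "a regular model with a custom prompt" plus a human-escalation rule could look like, here is a minimal sketch. It assumes the OpenAI Python SDK; the model name, prompt, and routing rule are all invented for illustration:

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    SYSTEM_PROMPT = """You are a claims triage assistant for a life insurer.
    Summarize the claim, cite the policy clauses that apply, and end with one
    word: APPROVE, DENY, or ESCALATE. Answer ESCALATE whenever you are unsure,
    the claimant disputes a prior decision, or a denial is on the table."""

    def triage(claim_text: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o",  # placeholder model name
            messages=[
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": claim_text},
            ],
        )
        verdict = resp.choices[0].message.content
        # Hard rule enforced outside the model: no denial goes out without a human.
        if "DENY" in verdict or "ESCALATE" in verdict:
            return "route_to_human_reviewer"
        return verdict

The important design choice is that the escalation rule lives in ordinary code, not in the prompt, so the model cannot talk itself out of it.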
And whichever method you use, you’re still accountable to regulators, courts, the letter of your contract, and to your reputation in a competitive market.
United Healthcare was in the news last year because they had an AI claims "approval" process with a 90% error rate, all in favor of the insurance company.
It's easy to describe a business process with written down rules, and those are easy to find in legal discovery. It's much easier to obfuscate with an AI model, because "nobody knows what it's actually doing - it's AI!".
> It's much easier to obfuscate with an AI model, because "nobody knows what it's actually doing - it's AI!".
Do you have actual knowledge of this? If not, the most obvious counterpoint is that the AI will need to give the reason or reasons for denial, and record them for audit. Just like a human or a rules-based system.
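The audit trail that implies is not exotic; a sketch of the kind of record one might persist per decision (the field names are invented):

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class DecisionRecord:
        claim_id: str
        decision: str                # "approved" / "denied"
        reason_codes: list[str]      # e.g. ["EXCL-2.3: suicide clause"]
        policy_clauses: list[str]    # clauses the decision relied on
        model_version: str           # which model/prompt produced it
        reviewed_by_human: bool
        timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    # Discovery or a regulator can then ask for every record where
    # decision == "denied" and reviewed_by_human is False.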
It was not a 90% error rate (or at least that’s not a claim I read). It was that 90% of appeals of those decisions were decided (at least partially) in favor of the appeal. That could be 1000 decisions, 10 appeals, and 9 reversals.
I am personally 7 for 8 in lifetime wins in my city's parking ticket appeals process. That doesn't mean that I think that 7 out of 8 tickets my city issues are incorrect.
This is life insurance specifically. It's not very hard to prove someone is dead, is there really much room for argument over paying out the policy benefit?
If the plan is to just pay out after confirming the person is dead, what’s the AI doing? It could be replaced by a “upload your death certificate here” box.
It is kind of weird. Why does a life insurer have 100,000 employees? I'm really only familiar with term life. All the "customer service" is pre-purchase. Once you buy it, you forget it other than making the annual payment. There's nothing to manage, and no real customer service is required until and unless you die.
I suppose whole life where there is a cash value and investments being managed might have a more ongoing service need, but I'm not familiar with that.
This doesn’t establish any sort of mathematical bounds, but it gives an idea of the size of the problem. I suspect 100k employees is an over-estimate just because a lot of people are uninsured…
I work in the industry at an insurtech startup; we are a life insurance carrier (wysh.com - our flagship product is a b2b micro life insurance benefit, but we built that on top of a term life carrier and also sell d2c term life).
Allianz has ~150k employees but certainly they don't all work on the term life business in the USA, they do all kinds of other insurance stuff all over the world and have hundreds of different products.
For term life specifically, there still are some pretty significant back office teams that a customer probably never interacts with directly, though. A few that come to mind:
- underwriters: you won't be able to make a decision for all of your applicants based on the info they provide you and the info you can pull from automated sources, so some number of humans are on the phone with your applicants asking clarifying questions, doing additional research, and making risk decisions. They're also routinely doing retrospective analysis that looks back on claims paid out to make sure the claims are reasonable and there's not some sort of gap in the underwriting approach that's leaving unknown risk on the table, and audits of automated underwriting decisions to make sure the rules engines are correctly categorizing risks
- actuaries: every company has varying risk tolerance for both the policies they issue and the cash they hold/invest. These people are advising on how to take risks and working with underwriters and finance people to try and figure out the financial impact of various underwriting decisions: can a product remain viable if it is purchased by a heavier balance of smokers vs nonsmokers, etc
- accountants and finance: it's a capital-intensive business that requires large cash reserves and a sane investment strategy for that cash, often subject to tests by regulators or industry associations and all sorts of lengthy audits
- compliance: in the US, life insurance is individually regulated by each state. Many states join the ICC Compact and agree to all follow the same rules and have a single set of regulatory filings, but you still have plenty of other states to do filings with, analyze changing requirements from, maintain relationships with regulators, respond to regulatory complaints or investigations, etc
- industry reporting: most insurance carriers participate in information-sharing programs like the MIB (Medical Information Bureau) and these memberships come with various reporting and code-back obligations. The goal is to prevent you from getting declined at one life insurer because you say you have some sort of uninsurable illness and then turning around and lying about not having that illness to another life insurer the next day. These sort of conflicting answers get flagged for manual review, someone will need to talk to the applicant and figure out why they gave conflicting info to multiple insurers and what the truth really is.
- claims and fraud investigations: many, many people lie to try and get insurance they aren't qualified for or to take out insurance on someone they aren't supposed to. Claims investigations start by asking "is the insured really dead" but then try to answer the questions like "did the insured know this policy was taken out on them", "were the responses the insured gave during underwriting truthful", etc. These investigations are extremely time consuming and often involve combing public records, calling doctors, interviewing family, and more. You'd probably be shocked how common it is for former-spouses to try and take out insurance policies without the other knowing during divorces. Some level of this investigation is happening in the first couple of years a policy is in force, too, as insurers can rescind the policy and refund the premiums if they determine it was obtained under false pretenses
- reinsurance: even the biggest insurers typically pool and share some amount of risk so that a bad claims year can't take down an entire carrier. reinsurance treaties are complex things to negotiate and maintain, and have lots of reporting obligations and collaboration between the reinsurer and the actuaries to validate the risks are what everyone thinks they are
The customer-facing part of a term life company is really just the tip of the iceberg. Small companies are certainly better at doing this with tech than bigger incumbents (thats a big part of the reason we exist at Wysh), and a narrow product focus really helps, but there's still some pretty significant levels of human expertise involved to keep it all running.
The important detail there is doing it without the knowledge of the (former) spouse.
You need both an insurable interest and consent of the insured in order to buy an insurance policy on someone else’s life.
Couples separating and holding policies on each other is pretty common and carriers have some specific rules to follow to make sure there’s appropriate mutual consent for policy changes etc
Most life insurance policies have exclusions. For example, many won't pay out if the death is a suicide within the policy's first years (the suicide clause). So the conditions of the death must be assessed against the insurance policy before payout.
Of course! One can die by suicide or as a result of drug abuse, preexisting conditions, and all that. Otherwise somebody discovering or suspecting they have an incurable disease could take out a policy after finding out.
> that the only way to provide dispute resolution and customer service to 1B people with only 100 employees is by depriving them of any chance to interact with a human, and forcing all interaction with the company to go through AI. That, to me, is deeply disturbing, and very very difficult to justify.
I don't know. Given the human beings I've interacted with in customer support, and the number of times I've had to escalate because they were quite simply "intelligence-challenged" who couldn't even understand my issues, I'm not sure this is a bad thing.
In my limited experience with AI agents, they've been far more helpful and far faster, they actually seem to understand the issue immediately, and then either give me the solution (i.e. the obscure fact I needed in a support PDF that no regular rep would probably ever have known) or escalate me immediately to the actual right person who can help.
And regular humans will stonewall you anyway, if that's corporate policy. And then you go to the courts.
While I get the vibes, and have had experience of human customer support being very weird on a few occasions, replacing mediocre humans with mediocre AI isn't a win for customers getting actual solutions.
And right now, the LLMs aren't really that smart, they're making up for low intelligence by being superhumanly fast and able to hold a lot of context at once. While this is better than every response being from a randomly selected customer support agent (as I've experienced), and when they don't even bother reading their own previous replies when the randomiser puts the same person in the chain more than once, it's not great.
LLM customer support can seem like a customer win to start with, when the AI is friendlier etc., but either the AI is just being more polite about the fixed corporate policy, or the LLM is making stuff up when it talks to you.
I think there's an interesting implication here: that the actually good (for the customer) support experience is a real human who has access to a RAG where they can look up company documents/policies/procedures, but still be able to use their human brain to make judgement calls (and, of course, they have to be willing to, y'know, read the notes left by the previous rep).
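A sketch of the retrieval half of that setup, using plain TF-IDF rather than any particular vector database (the policy snippets are invented):

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # The rep's knowledge base: policy and procedure snippets.
    docs = [
        "Claims for accidental death require a certified death certificate and form AD-12.",
        "Premium refunds are issued within 30 days of policy rescission.",
        "Suicide within the first two policy years is excluded from coverage.",
    ]

    vectorizer = TfidfVectorizer()
    doc_vectors = vectorizer.fit_transform(docs)

    def lookup(question: str, top_k: int = 1) -> list[str]:
        """Return the most relevant snippets; the human rep applies the judgement."""
        q = vectorizer.transform([question])
        scores = cosine_similarity(q, doc_vectors)[0]
        ranked = scores.argsort()[::-1][:top_k]
        return [docs[i] for i in ranked]

    print(lookup("what paperwork do I need for an accidental death claim?"))

The machine does the looking up; the judgement call, and the reading of the previous rep's notes, stays with the person.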
Nothing new or revolutionary, just the usual race to the cost bottom with corresponding quality bottom.
The author ignores the fact that in any normal market there are insurance products at various prices, yet somehow not all people flock to the cheapest one - on the contrary (at least where I live). Higher fees can mean, e.g., a less stressful experience when dealing with the insurer.
Ethical issues of putting people out of a job? Please. This mindset has to be called out because it directly causes suffering via creating a societal permission structure for politicians to protect interest groups with protectionist trade policy and internal pork barreling policy.
Economic productivity putting people out of jobs is both good and necessary and it is unethical to work against it.
I think the commenter was definitely somewhat glib in their statement, but I don't think the case is as clear cut as you think.
The way I've come to think of the current moment in history is that capitalism allocates resources via markets, and we use this system because in many situations it's highly efficient. But governments allocate resources democratically exactly because we do not always want to allocate resources efficiently with respect to making money.
Whether it "makes sense" or not, most people believe there is more to life than the efficient allocation of resources and thus it might be a reasonable opinion that making 100,000 people suddenly unemployed is bad. I doubt seriously that the OP believes having 100,000 people working indefinitely when the labor can be done more efficiently by machines is good. I think most reasonable people want to see the transition handled more smoothly than a pure market capitalism would do it.
One might argue that the government allocating some resources is more efficient than the market doing so purely because specific outcomes are desired that the invisible hand is not motivated or incentivized to provide. If the goal is to keep people healthy, efficiency is based on how successful that is, not on the monetary cost. Few people seem to understand it this way, though.
In most cases government employees simply aren't prescient enough to allocate resources efficiently. Like in theory maybe central planning could be more efficient if everything worked correctly, but in practice it never works efficiently at scale. Much of the resources simply end up wasted.
If one looks to "government employees", as individuals, then yes, they aren't prescient enough to allocate resources efficiently. But comparing the free market to government employees is not an apples to apples comparison, because individuals don't allocate resources efficiently either in a free market; the "market" as a whole is what optimizes for efficiency.
And I think there is a distinction in different kinds of efficiency that can be optimized for, not just monetary cost. If we desire clean, paved, safe roads, that can be used by all equally for efficient movement of goods, because we recognize that as a prereq for a strong economy, we can not rely on the free market to deliver that, much less optimize for it. It can be more efficient, in terms of actually delivering the desired goal vs not delivering it at all (or delivering a grossly bastardized version of it) to pool our resources and explicitly work towards making something available rather than hoping that the free market will deliver it.
The free market did not deliver on reducing congestion in New York (in fact, one might say that over the decades, the free market is what made it worse), but the congestion pricing program has, and has resulted in a bunch of valuable/desirable knock-on effects.
I do not think that a centrally planned economy is workable; but collectively being deliberate about building the things we need/want, and taking a longer view, can result in significant efficiencies.
The free market ends up simply wasting resources in its drive to discover where efficiencies lie and how to take advantage of them.
> it directly causes suffering via creating a societal permission structure for politicians to protect interest groups with protectionist trade policy and internal pork barreling policy
What part of that is suffering, if it enables 100k constituents to put food on the table?
We could employ 100k people to dig holes and then fill them back in; should we?
We shouldn't employ people in economically un-viable ways just because they need income. We can just give them money directly, or redirect them to other work, or a combination of the two.
> We could employ 100k people to dig holes and then fill them back in; should we?
If that is what's necessary to provide a social safety net, then maybe so. See the works progress administration for an example of this.
> We can just give them money directly, or redirect them to other work
Ideally yes, but that isn't happening, hence the first option.
We may be straying here, though: this discussion didn't start out with someone saying what someone else should or shouldn't do. We were discussing the ethical and economic consequences of an idea.
The problem is that it's a misallocation of human capital which slows progress for all of society. We should be providing social safety nets for people, not fake jobs.
> We should be providing social safety nets for people, not fake jobs.
I agree with you (except in classifying the genuine effort of my fellow people to be "fake jobs" just because a computer can do some of the work) and believe making a resilient, trustworthy, proven system for the former is a prerequisite to withdrawing the latter, to avoid suffering.
Unfortunately for us, the barrier to the former is ideological in nature and imposed by the elite few in power now, before any matters of capital allocation (human or financial) come into play.
Nobody has classified genuine effort as fake. But what good is genuine effort when it can be done much more easily without it? There's no shame whatsoever in this. At least, I don't think we should add any to the situation.
> Nobody has classified genuine effort as fake. But what good is genuine effort when it can be done much more easily without it?
This was previously stated: the good being done is 100,000 people can feed their families. What good is going without that? You'll enrich some private equity dudes and make a lot more people unemployed and a lot more families unhappy.
No, but claims processing is already highly automated across much of the insurance industry and the level of automation will only increase in the future.
There's a huge assumption in your comment -- that having 100,000 employees necessarily guarantees (or even makes likely) that you will have some human to help you.
More likely, those 100,000 humans are mostly working on sales and marketing, and the few allocated to support are all incentivized to avoid you, and to send you canned answers. A reasonably decent AI would be better at customer support than most companies give, since it'll have the same rules and policies to operate with, but will most likely be able to speak and write coherently in the language I speak.
There's a huge assumption in your comment -- that you know how insurance works. "Most" probably aren't working in sales and marketing; I'd heavily dispute anything above 50% and I feel like 33% might be pushing it? I don't want to get overconfident here, but this claim feels off-base.
Insurance isn't like a widget. People have actual legal rights that insurers must service. This involves processing clerks, adjusters, examiners, underwriters, etc. Which then requires actual humans, because AI with the pinpoint accuracy needed for these legally binding, high-stakes decisions isn't here yet.
E.g., issuing and continuing disability policies: Sifting through medical records, calling and emailing claimants and external doctors, constant follow-ups about their life and status. Sure, automate parts of it, but what happens when your AI:
a. incorrectly approves someone, then you need to kick them off the policy later?
b. incorrectly denies someone initial or continuing coverage?
Both scenarios almost guarantee legal action—multiple appeals, attorneys getting involved—especially when it's a denial of ongoing benefits.
And that's just scratching the surface. I get that many companies are bloated, and nobody loves insurance companies. No doubt, smarter regulations could probably trim headcount. But the idea that you could insure a billion people with just 100, or even 1000 (10x!), employees is just silly.
> There's a huge assumption in your comment -- that having 100,000 employees necessarily guarantees (or even makes likely) that you will have some human to help you.
That's not an assumption.
I know that I, and many others, have been able to get a human on the phone every time we needed one. Regardless of the number of those humans actually working claims, in the current system, it is "enough".
I also know that it's impossible to give that level of service when you have 1 employee for every 10 million customers.
That's really all that you need in order to make the judgement that you're not going to get a human.
Side-note: I did a quick search, and found that Allstate has 23k reps that actually handle claims and 55k employees total, so almost half of their workforce does claims and disputes. They also have 10% market share of the US's ~340 million people, so that's, at most, roughly 1 rep per 1,500 customers. That's much better odds than 1 for every 10 million.
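The same arithmetic, spelled out (Allstate figures as quoted above, Meanwhile's from its stated goal):

    allstate_reps       = 23_000
    allstate_customers  = 340_000_000 * 0.10     # ~10% US market share
    meanwhile_staff     = 100
    meanwhile_customers = 1_000_000_000

    print(allstate_customers / allstate_reps)    # ~1,478 customers per claims rep
    print(meanwhile_customers / meanwhile_staff) # 10,000,000 customers per employee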
> A reasonably decent AI
And there's the problem - that AI doesn't exist. You're speculating about a scenario that simply hasn't been realized in the real world, and every single person that I've talked to who has interacted with an AI-based "support representative" has had a bad experience.
> the only way to provide dispute resolution and customer service to 1B people is by depriving them of any chance to interact with a human, and forcing all interaction with the company to go through AI.
The Catholic church has 1B "customers" and seems to be doing ok with human-to-human interaction without the need (or desire) for AI. They do so via ~ 500K priests and another 4M lay ministers
I didn't read "the only way..." as having the condition of 100 or less employees. In fact the 100 employees is mentioned in an earlier sentence that explicitly says they are using AI to accomplish such a low employee count. The comment I was replying to seems to imply AI was the only way to serve 1B people, without regard for the number of employees.
It absolutely does not imply that. You read it wrong. It's very clear from the context that I'm talking about serving 1B people with only 100 employees.
Wanted to point to the startup the author seems to be running, which is to sell insurance somehow tied to Bitcoin: https://meanwhile.bm/
For the record, that strikes me as seriously improper. Life insurance is a heavily regulated offering intended to provide security to families. It is the opposite of bitcoin, which is a highly speculative investment asset. Those two things should not be mixed.
Also, the fact that the disclosure seems to limit sales to being only occurring in Bermuda seems intentional. I suspect that this product would be highly illegal in most if not all US states, so they must offer this only for sale in Bermuda to avoid that issue.
I think it's actually tax avoidance disguised as life insurance:
> You can borrow Bitcoin against your policy, and the borrowed BTC adopts a cost basis at the time of the loan. So if BTC were to 10x after you fund your policy, you could borrow a Bitcoin from Meanwhile at the 10x higher cost basis—meaning you could sell that BTC immediately and not owe any capital gains tax on that 10x of appreciation
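Worked through with invented numbers, the mechanism described in that quote looks like this:

    btc_price_at_funding = 10_000    # you fund the policy; say BTC trades here
    btc_price_at_loan    = 100_000   # BTC has since 10x'd

    # You borrow 1 BTC against the policy. Per the pitch, the borrowed coin's
    # cost basis is set at the time of the loan:
    cost_basis = btc_price_at_loan

    sale_proceeds = 100_000          # sell the borrowed BTC immediately
    taxable_gain = sale_proceeds - cost_basis
    print(taxable_gain)              # 0 -- the 10x appreciation is never taxed as a gain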
My wife made a McKinsey consultant cry… she hired McKinsey for some internal project. One person on the project was a recent Harvard grad. They were in a meeting going over the deliverables along with the McKinsey partner on the project and in the meeting my wife said something to the effect that their work wasn’t up to McKinsey standards.
The junior guy started crying in the meeting. Like just blubbering. My wife still feels bad for it but still…
Weird thing, instead of firing him McKinsey kept him and stipulated that he can only be in meetings when the partner is present.
I don't care if you went to an Ivy League school and graduated at the top of your class, I really don't get WTF someone whose life experience has been almost exclusively in school really knows about running a business.
Get at least a few years work experience and call me. Or alternatively, start your own dang business if you are really that smart.
The whole business is nonsensical. The point of a consultant is they have a lot of experience in a specific domain, a recent Harvard grad is useless. From what I've heard, tons of their consultants are young people with minimal real industry experience
You pay for one or two people with real experience and 4 reasonably new hires whose job it is to answer questions posed by the senior team and to build documentation.
You want the senior people focusing on the problems, strategy, and comms and not data aggregation and power point formatting.
Half the time it doesn't actually matter who the consultant is, the business is just looking for an arbiter to provide a second opinion or justify a decision.
How does this not vindicate their viewpoint? Do you really need a team of ivy grads to make power points or inexperienced people to give unqualified answers?
Modern consulting seems like one of the better deaths inflicted by GenAI. The entire industry is a means to commit corporate espionage legally.
They can do something more useful with that education.
The Partner is the consultant. The 'recent grad' is just extra low-cost apprenticeship for the partner. The customer is (ridiculously over-) paying for the Partner's time and tolerating the apprentices that come along for the ride.
They have a brutal up or out culture. The idea is that the recent grads are grunts who are ground into the dirt. They actively hope that most of them quit and the few who don't get promoted into positions where they do have the experience, or really, one very specific type of experience that consultancy firms select for. Similar setup to Big Law.
> The point of a consultant is they have a lot of experience in a specific domain
This is your mistake. The point of a consultant is to tell the business to do what the business was already planning to do anyway. This way the consultant takes the risk/blame of the decision. It's similar to the classic "no one was ever fired for buying IBM": "I did what the McKinsey consultant told me" is CYA. The last piece is that since everyone is in on the game, when a decision leads to bad outcomes they don't blame the consultant, but something they could not have foreseen.
There's a cycle to one's relationship with Mckinsey and the big accounting firms. You start with a lot of attention from the partner, who over time shifts you to more experienced assistants, who over time shift you to new hires. Then you scream at them about the shit quality of their work, and you get the partner's attention again for a year or two.
You don't seem to understand how consulting works.
The person making the recommendations isn't just out of school. They've been at the firm for years, and do have a ton of experience.
The recent grads are there for all of the grunt work -- collecting massive amounts of data and synthesizing it. You don't need years of business experience for that, but getting into a top college and writing lots of research papers in college is actually the perfect background for that.
Having worked at Mck, what I could very well imagine happened behind the scenes here was:
1. This BA/Asc was on <4 hours of sleep, maybe many days in a row
2. They walked into that meeting thinking they had completed exactly what the client (your wife) wanted
And after the meeting (this I feel more confident about, as it happens a lot)
1. A conversation happened to see if the BA/Asc wanted to stay on the project
2. They said yes, and the leadership decided that the best way to make this person feel safe was to always have a more experienced person in the room to deal with hiccups (in this case, the perception of low quality work)
Genuinely funny. Had to once interface with a small team from Deloitte on a project, and pushed hard during an early meeting for them to outline the problems and scope. Just complete incompetence... I didn't make anyone cry, but definitely squirm a lot. Just asking questions about their understanding, process to close gap of understanding, and project management plans were enough to make clear to the main executive stakeholder on our end that this was going to be a trainwreck. They were fired shortly after.
That's what I thought. Having the partner present seems to be the right way of handling this. The company is responsible for employees' well-being and shouldn't let a client bully them.
We weren't in the meeting, but if the person cried, there's a possibility that the person was actually bullied. If the work sucks, it can be discussed outside of the meeting, with the person's manager for instance. The fact that the person works for McKinsey and is an inexperienced Ivy League graduate doesn't make it ok to be mean.
> Meanwhile: to break into a highly-regulated, commoditized market like insurance, you need both a truly differentiated product that incumbents can't easily replicate and an associated distribution strategy that leverages their blind spots.
Having worked in highly regulated industries, I’ve learned that the best way to disrupt incumbents is by creating a product that assumes more business risk than is typically accepted. Large, regulated companies are extremely risk-averse—so if you can take on that risk in a smart, innovative way, you’ll win.
What if you take that risk by putting "crypto" in it? I think it might work out for our founder here but I am not so optimistic about the results for any of the poor schmucks suckered into this scheme.
This reads like a linkedin post and I'm only commenting because I'd like to hear more about the 2nd type of big-org problems he faced that he felt weren't fixable, and why - but instead got a pitch to his new startup, which I guess should've been expected from the title. Just hoped for more substance.
Oh McKinsey had a name for that program ("Leap"). I once worked at a "Telco Enterprise Startup" in Berlin founded by them.
They essentially lied about any anticipated KPI potentials and let their "tech" people put together a 15k EUR/month (before public release) platform on AWS which was such a pile of mess, it made the second year's CTO start from scratch. After some heavy arguments because of their poor performance, McKinsey agreed to let some "non-technical" people work there for a couple of months for free. All arguments you'd had with the McKinsey "Engineers" felt like talking to AWS Sales, they had barely any technical insights but a catalog of "pre-made solutions" to choose from.
Looking at the home page of Meanwhile only made me think of how life insurance is such a different thing than, say, a mortgage. With life insurance, counterparty risk matters. You don't care about your mortgage counterparty. I'm not going to buy life insurance from an insurer with Youtube videos of Anthony Pompliano on their home page. Know your enemy.
The engineer in me immediately looks for ways to map out how tax avoidance via crypto trading on life insurance funds via a Bermuda company can possibly go wrong. Insurance has a nice long-term cash flow that has proved very sweet for Berkshire Hathaway, and investment on top of that gets perks for the insurer. However crypto, which has liquidity issues and is heavily scammed/stolen, would benefit far more than the users of the business. The holdings would stay for decades, allowing arbitrage of the main company with user investments. If there is a leak or a collapse of the crypto, the customers won't know it until they can't get their funds back, but since AI is handling the claims, they may never even find out the real reason they can't get their money back. And since it's life insurance, the buyer might never find out, while their descendants or loved ones may not know how to deal with it or be plenty confused by the lack of customer service. Very novel scheme.
My experience working at a consulting company was that we were a software development agency with change management. It can work well. However as a consultancy, the incentive is also to continually develop our integration into your organization so that you continue to need us.
> I learned deeper truths about where startups can win and compete.
Now that I'm working at a big organization (a Fortune 500 company), I can relate. I'm by far the most innovative person in my team and I'm being held down because I'm not doing my role (as I'm not a dev but a data analyst at the moment).
If I were doing my role, however, then we wouldn't be innovating, and the C-suite wants us to innovate with AI. I'm the only one in my department that can create actual AI automations. And the IT department has basically been stripped down by upper management.
If anyone wants an actual dev building AI automations and think how we can disrupt with the state of the art, my email is in my profile.
Valuable article, it's rare to see a glimpse into McKinsey in normal human language.
The fact that the company has become a sort of pseudo-VC (mentorship but not financing) for small teams within megacorps is interesting. I wonder why large corps find it so difficult to innovate. I think that they become somewhat "load-bearing" in society and the lines between the company and the market begin to blur. Any change the company makes causes a misalignment because they shaped the market to fit themselves.
Thought "know your enemy" would refer to the investigative book "When McKinsey Comes to Town" (McKinsey's major involvement in lobbying for tobacco, the opioid epidemic, and many more crimes left mostly unpunished).
The insurance business is mostly about hoarding and investing money so you can actually pay when you have to.
Unless you can solve that part of the problem as well as the big players do, you will run into problems at some point; using extreme value theory, you can even estimate when.
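For instance, a peaks-over-threshold estimate in the spirit of extreme value theory might look like this (all figures synthetic, and a real actuary would do far more):

    import numpy as np
    from scipy.stats import genpareto, lognorm

    rng = np.random.default_rng(42)

    # Synthetic annual aggregate claims, heavy-tailed on purpose.
    claims = lognorm(s=1.2, scale=50e6).rvs(size=200, random_state=rng)

    # Peaks-over-threshold: fit a generalized Pareto to exceedances over u.
    u = np.quantile(claims, 0.90)
    excess = claims[claims > u] - u
    shape, loc, scale = genpareto.fit(excess, floc=0.0)

    # P(annual claims exceed reserves), for reserves above the threshold.
    reserves = 500e6
    p_exceed_u = (claims > u).mean()
    p_ruin = p_exceed_u * genpareto.sf(reserves - u, shape, loc=0.0, scale=scale)
    print(f"estimated P(claims > reserves) per year: {p_ruin:.4f}")
    print(f"implied expected years until trouble: {1.0 / max(p_ruin, 1e-12):.0f}")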
In my home country (Norway), I've met plenty of startup founders that come from MBB consulting. Actually a pretty "normal" path here, compared to jumping straight into entrepreneurship out of college. But that also has something to do with how risk-averse investors here are, compared to the US (no one here is going to give inexperienced college kids a bunch of money, unless they've proven themselves to be bona fide serious people) - and the fact that consultants actively get to see market needs in real time, and in positions where other external people might not.
Key claim: "disruption" is impossible for BigCompany. SmallCo can do it, but only if they both (1) have something technically hard to replicate, and (2) target a marketing niche that is an irremediable blind spot for BigCompany. Since his venture now is life insurance, Geico is likely the comparable case in point.
I really think every founder (and startup worker) needs to take seriously the marketing side of the business, and not just believe that new technology will win.
(While I, too, am allergic to bitcoin scams, given increasing levels of political corruption monkeying with markets, rates, and regulation, I can also see it as an enticing alternative for those looking to get long-term investments off the dollar. For insurance, the main question is, will the money be there and be made available? Having seen even highly-regulated pensions fail (without federal insurance recourse in the case of religious hospital behemoths), I can see how technical guarantees independent of regulation or law could be compelling.)
The key phrase is LIFE Insurance, not HEALTH Insurance!
They are vastly different markets.
You don’t deny claims for life insurance as companies would do for health insurance. It’s a very different set of circumstances to have to deny life insurance.
> Our vision at Meanwhile is to build the world's largest life insurer as measured by customer count, annual premiums sold, and total assets under management. We aim to serve a billion people, using digital money to reach policyholders and automation/AI to serve them profitably. We plan to do with 100 people what Allianz and others do with 100,000.
So 3 years at McKinsey taught OP the corporate BS. That paragraph doesn't say anything useful.
There are different metrics that people use to say they're the biggest.
Some of them, off the top of my head, are number of customers, number of active policies, premium amount, assets under management, time to claim resolution, etc. He's talking to business people who understand the insurance market.
Yes it does, it's just dressed up in corporate speak:
> Our vision at Meanwhile is to build the world's largest life insurer as measured by customer count, annual premiums sold, and total assets under management.
We think we can be bigger (more customers, more sales, more money) than all existing players.
> We aim to serve a billion people, using digital money to reach policyholders and automation/AI to serve them profitably.
We're looking to eclipse the population of any one country and we're going to use something like Bitcoin to side-step national currencies (and maybe also to avoid existing regulatory structure, not clear from the ambiguous language).
> We plan to do with 100 people what Allianz and others do with 100,000.
We believe we can automate or use AI to eliminate the need for people to actually support these billion customers.
All three of those are very bold statements/goals.
Unless they are planning on sending AI-powered robots to attend court cases and prepare submissions, they're going to need to hire or retain at least 100 lawyers for an insurance company serving that many customers.
> And though when we started our business in 2023 (ChatGPT wasn’t out yet), you could begin to feel that something like that was possible in a way it wasn’t before.
I was confused by this, as ChatGPT launched in Nov 2022 and had tens of millions of users by the end of 2022.
Not responding about the article, but I remember interviewing with McKinsey as a graduating PhD. I had just passed their test and was going through the case study interview and I got paired with a PhD in physics from MIT. I think the study was something about cognac sales and I just got disgusted with the waste of training and talent, and after I got home from the airport that evening I pulled out of the interview process even though I had no other employment options at the time.
Somewhat ironically, over the past 20 years I’ve come to reject PhD-type career tracks after seeing how much PhD overproduction there is and how my older colleagues only had a BS or MS. These days, I yearn to leave my Big Tech job to start a “boring” business. Right now I’m taking Accounting 101 at a local university to understand business financials better.
My understanding was that the "enemy" was McKinsey, a firm that has a reputation to me as being an expensive consulting firm filled with MBA types who frequently are hired by companies.
My understanding of this reputation is: This often happens at the detriment of either product quality or employee satisfaction. It's debatable if they actually have a reputation of providing value. I think short term? Maybe, albeit expensive. Long term? I'd say no.
> And though when we started our business in 2023 (ChatGPT wasn’t out yet), you could begin to feel that something like that was possible in a way it wasn’t before.
The author is not consistent. He mentions that in 25 cases, the firm hiring McKinsey did not know the answer beforehand. Yet Leap is based on firms already knowing the answer. The reason McKinsey is hired is to avoid internal conflicts over which manager takes the reins. I doubt McKinsey is providing solutions to these industries (as in, introducing a product that was not already pitched internally by someone; in fact, in most cases, a manager will pitch the solution, and McKinsey's job is purely finding the right managers to leave this internal "startup" to). Should that be the case, I would love to be proven wrong. However, every consultant I have met is no engineer or tech leader. They are merely consultants, restructuring the answers in ways that avoid conflicts within established giants. Most of them are Ivy League graduates that never worked in the technical field (got hired at Bain or McKinsey fresh out of school). Often we would make up stacks to demonstrate how ignorant they can be of technology. Managers and business people love McKinsey. As an engineer, I have not met any tech founder or technical engineer that esteemed the field (just listen to Steve Jobs' opinion on consulting). I attribute the mess that Google is in under Sundar to McKinsey (not even mentioning the opioid crisis, where their hands are stained too). The redeeming factor is that the author describes them as the enemy and is at least honest about his reasons for joining (stability and an established resume name).
"As an engineer," you've probably yet to realize "technical cofounder" is p97 a polite way to say "second-class citizen." You get more equity than a "founding engineer," I hope. So there's that.
Here's a tip to every reader. If your founders or co-founders are not technical (and never have been) and are pitching stacks using buzzwords, run. Equity is worthless if the startup goes under
Here's another. Exit and product are totally orthogonal.
I would rather work for someone honest than for a bullshit artist. But I wouldn't necessarily decline to work for the world's best bullshit artist. Just that you want to be very sure you know who at the table is the sucker.
Non-technical folks (business/marketing/artistic/bullshit types) can be founders of tech companies, but there needs to be significant tech presence at the top. I think as a general rule, at least a third of the equity needs to be devoted to technical folks if the company wants to succeed.
Ideally you get someone who's good at both, or at least competent at one and really good at the other, such as Jobs or Gates.
The problem is non-technical folks tend to hire other non-technical folks for leadership (MBAs recruit other MBAs). What ends up happening is the leadership structure shifts from one dominated by technically-savvy people to a culture of business. The best example is Apple (not a startup anymore, but I think the point still stands). Under Jobs, Forstall, Ive and other teams all prioritized product. Currently, Apple's leadership consists of Cook, Luca, Schiller, Eddy Cue, Williams, Ternus and Craig (they are the ones that effectively make the big decisions). Craig and Ternus are the only "engineers", and neither of them has been engineering-first. This is not a founders story, but I am trying to highlight how non-technical folks will put non-technical folks in the decision-making seat, which will harm your product in the long run. You can be the best engineer out there. If you report to someone who holds the reins and is not technical, you are facing a tough road. Of course, there will be exceptions. However, this is rather the norm, and in startup culture, where the odds are already stacked against you, I'd advise against it.
Actually, this is wrong depending on the company and what the specific situation is.
Look at DEC, a classic engineering company failure. DEC failed because they were led by engineers who didn't understand the market. It apparently was a great place to work, because they were so NIH that they built everything from scratch.
Then look at Intel, a company that is in the process of failing because they listened to their customers too much. None of their customers wanted GPUs, or mobile chips, or power savings - until they did. By that time Intel was already behind the curve.
Then look at Microsoft under Ballmer - a company that probably illustrates the point you're trying to make. But then they won with Nadella, luckily.
Apple is a bit different and a bad example because unlike other companies they attempt to define the future. Most companies aren't in a position to try, much less succeed, at this.
We'll send the appeals through Mechanical Turk.
Happy now?
On the points you brought up: why is stagnation for the sake of upholding an ethical position a bad thing?
And yes, by definition, worrying about ethical responsibility would lead to ethical issues. That's the whole point.
However, I believe there's a middle ground and endeavor to find it. Based on your response it doesn't appear as though you believe a middle ground exists.
> Completely separate from the potential ethical issues and economic implications of putting 100k people out of a job, […]
I’m pretty sure. Although, the original comment was basically putting that issue aside, so I’m not sure what there is to say about it.
Dealing with a machine is unlikely to be worse.
But it was already the case that they just arbitrarily do WTF ever they want, that outside a small set of actions that "bots" can perhaps handle fine they aren't going to do anything for you, and that the only way to get actual support for a real problem involves something being sent from a .gov email address or on frightening letterhead.
So... not really any different? You already basically have to threaten them (well, have someone scarier than you threaten them) to get any real support, this wouldn't be different.
And then they will add a low-cost arbitration clause, where disputes are also handled by AI. Free market goes brrr.
Also, I think you may have made a typo that negated the meaning of some of your comment (but I believe I can understand what you meant anyway).
That said, I suspect the founder is seriously overestimating the number of highly intelligent, competent people he can hire, and underestimating how much bureaucratic nonsense comes with insurance, but that's a problem he'll run into later down the road. Sometimes you have to hire three people at mediocre salaries because the sort of highly motivated, competent person you want can't be found for the role.
Respectfully, no it can't. From a Western perspective, specifically American, and from an average middle-class person's perspective, specifically American, it only appears to be fair.
However, LLMs are a codification of internet and written content, largely by English speakers for English speakers. There are <400m people in the US and ~8b in the world. The bias tilt is insane. At the margins, weird things happen that you would be otherwise oblivious to unless you yourself come from the margins.
AI is bias automation, and reflects the data it's trained on. The vast majority of training data is biased, even against different slices of Americans. The resulting AI will be biased.
Only true for pre-trained foundational models without any domain-specific augmentations. A good AI tool in this space would be fine-tuned or have other mechanisms that overshadow the pre-training from internet content.
At best they'll be using fine-tuned enterprise OpenAI / Anthropic models; more likely, a regular model with a custom prompt.
Although I would still agree that there would need to be a mechanism for escalation to a human.
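For what it's worth, the "regular model with a custom prompt" pattern described above is mundane enough to sketch. Everything here is illustrative: `call_model` is a hypothetical stand-in for whatever enterprise LLM endpoint an insurer would actually use, and the prompt text is made up.

```python
# A minimal sketch of the "regular model with a custom prompt" pattern.
# `call_model` is hypothetical; the provider-specific API call goes there.

SYSTEM_PROMPT = """\
You are a customer-support assistant for a life insurer.
- Answer only from the supplied policy documents.
- Never approve or deny a claim yourself; flag anything that looks like
  a coverage decision for human review.
"""

def call_model(messages: list[dict]) -> str:
    # Hypothetical stand-in for the actual LLM API call.
    return "stubbed model response"

def answer_customer(question: str, policy_context: str) -> str:
    messages = [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": f"{policy_context}\n\n{question}"},
    ]
    return call_model(messages)
```

The point being: there is very little "AI company" in this layer; the differentiation, if any, would have to be in the escalation mechanism and the domain data behind it.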
It's easy to describe a business process with written down rules, and those are easy to find in legal discovery. It's much easier to obfuscate with an AI model, because "nobody knows what it's actually doing - it's AI!".
Do you have actual knowledge of this? If not, the most obvious counterpoint is that the AI will need to give the reason or reasons for denial and record them for audit, just like a human or a rules-based system.
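If that's right, the audit trail could be as mundane as a structured log of reason codes per decision. A minimal sketch of what that might look like; all names here are illustrative, not any carrier's actual schema:

```python
# Every automated decision is captured as a structured record with
# explicit reason codes, one JSON line per decision.
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class ClaimDecision:
    claim_id: str
    outcome: str              # "approve" | "deny" | "refer_to_human"
    reason_codes: list[str]   # e.g. ["POLICY_LAPSED", "NON_DISCLOSURE"]
    model_version: str
    decided_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def log_decision(decision: ClaimDecision, sink) -> None:
    """Append one decision per line so auditors can replay every denial."""
    sink.write(json.dumps(asdict(decision)) + "\n")
```

A log like this is exactly what discovery would subpoena, which cuts against the "nobody knows what the AI is doing" defense.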
I am personally 7 for 8 in lifetime wins in my city's parking ticket appeals process. That doesn't mean that I think that 7 out of 8 tickets my city issues are incorrect.
I suppose whole life where there is a cash value and investments being managed might have a more ongoing service need, but I'm not familiar with that.
It seems a bit high to me, but I don't know anything about the industry. FWIW, around 170k people die per day.
https://news.ycombinator.com/item?id=43918053
This doesn’t establish any sort of mathematical bounds, but it gives an idea of the size of the problem. I suspect 100k employees is an over-estimate just because a lot of people are uninsured…
Allianz has ~150k employees but certainly they don't all work on the term life business in the USA, they do all kinds of other insurance stuff all over the world and have hundreds of different products.
For term life specifically, there still are some pretty significant back office teams that a customer probably never interacts with directly, though. A few that come to mind:
- underwriters: you won't be able to make a decision for all of your applicants based on the info they provide and the info you can pull from automated sources, so some number of humans are on the phone with your applicants asking clarifying questions, doing additional research, and making risk decisions. They're also routinely doing retrospective analysis that looks back on claims paid out to make sure the claims are reasonable and there's not some sort of gap in the underwriting approach that's leaving unknown risk on the table, plus audits of automated underwriting decisions to make sure the rules engines are correctly categorizing risks
- actuaries: every company has varying risk tolerance for both the policies they issue and the cash they hold/invest. These people are advising on how to take risks and working with underwriters and finance people to try and figure out the financial impact of various underwriting decisions: can a product remain viable if it is purchased by a heavier balance of smokers vs nonsmokers, etc
- accountants and finance: it's a capital-intensive business that requires large cash reserves and a sane investment strategy for that cash, often subject to tests by regulators or industry associations and all sorts of lengthy audits
- compliance: in the US, life insurance is individually regulated by each state. Many states join the ICC Compact and agree to all follow the same rules and have a single set of regulatory filings, but you still have plenty of other states to do filings with, analyze changing requirements from, maintain relationships with regulators, respond to regulatory complaints or investigations, etc
- industry reporting: most insurance carriers participate in information-sharing programs like the MIB (Medical Information Bureau) and these memberships come with various reporting and code-back obligations. The goal is to prevent you from getting declined at one life insurer because you say you have some sort of uninsurable illness and then turning around and lying about not having that illness to another life insurer the next day. These sort of conflicting answers get flagged for manual review, someone will need to talk to the applicant and figure out why they gave conflicting info to multiple insurers and what the truth really is.
- claims and fraud investigations: many, many people lie to try and get insurance they aren't qualified for or to take out insurance on someone they aren't supposed to. Claims investigations start by asking "is the insured really dead" but then try to answer questions like "did the insured know this policy was taken out on them", "were the responses the insured gave during underwriting truthful", etc. These investigations are extremely time-consuming and often involve combing public records, calling doctors, interviewing family, and more. You'd probably be shocked how common it is for former spouses to try to take out insurance policies without the other knowing during divorces. Some level of this investigation happens in the first couple of years a policy is in force, too, as insurers can rescind the policy and refund the premiums if they determine it was obtained under false pretenses
- reinsurance: even the biggest insurers typically pool and share some amount of risk so that a bad claims year can't take down an entire carrier. reinsurance treaties are complex things to negotiate and maintain, and have lots of reporting obligations and collaboration between the reinsurer and the actuaries to validate the risks are what everyone thinks they are
The customer-facing part of a term life company is really just the tip of the iceberg. Small companies are certainly better at doing this with tech than bigger incumbents (that's a big part of the reason we exist at Wysh), and a narrow product focus really helps, but there's still a pretty significant level of human expertise involved to keep it all running.
If they were receiving spousal support (“alimony”) or child support, this seems unsurprising and sensible.
You need both an insurable interest and consent of the insured in order to buy an insurance policy on someone else’s life.
Couples separating and holding policies on each other is pretty common and carriers have some specific rules to follow to make sure there’s appropriate mutual consent for policy changes etc
I don't know. Given the human beings I've interacted with in customer support, and the number of times I've had to escalate because they were, quite simply, "intelligence-challenged" and couldn't even understand my issues, I'm not sure this is a bad thing.
In my limited experience with AI agents, they've been far more helpful and far faster, they actually seem to understand the issue immediately, and then either give me the solution (i.e. the obscure fact I needed in a support PDF that no regular rep would probably ever have known) or escalate me immediately to the actual right person who can help.
And regular humans will stonewall you anyway, if that's corporate policy. And then you go to the courts.
And right now, the LLMs aren't really that smart; they're making up for low intelligence by being superhumanly fast and able to hold a lot of context at once. This is better than every response coming from a randomly selected customer support agent (as I've experienced) who doesn't even bother reading their own previous replies when the randomiser puts the same person in the chain more than once, but it's not great.
LLM customer support can seem like a customer win to start with, when the AI is friendlier etc., but either the AI is just being more polite about the fixed corporate policy, or the LLM is making stuff up when it talks to you.
No it's not, but that's not what I described. I described replacing mediocre humans with better AI for at least the first level of customer service.
The author ignores the fact that in any normal market there are insurance products at various price points, yet somehow not everyone flocks to the cheapest one; quite the contrary (at least where I live). Higher fees can mean, for example, a less stressful experience when dealing with the insurer.
Economic productivity putting people out of jobs is both good and necessary and it is unethical to work against it.
There should be social safety nets to ease people's transition. Not protectionism of unproductive jobs.
The way I've come to think of the current moment in history is that capitalism allocates resources via markets, and we use this system because in many situations it's highly efficient. But governments allocate resources democratically precisely because we do not always want to allocate resources efficiently with respect to making money.
Whether it "makes sense" or not, most people believe there is more to life than the efficient allocation of resources and thus it might be a reasonable opinion that making 100,000 people suddenly unemployed is bad. I doubt seriously that the OP believes having 100,000 people working indefinitely when the labor can be done more efficiently by machines is good. I think most reasonable people want to see the transition handled more smoothly than a pure market capitalism would do it.
And I think there is a distinction between different kinds of efficiency that can be optimized for, not just monetary cost. If we desire clean, paved, safe roads that can be used by all equally for the efficient movement of goods, because we recognize that as a prereq for a strong economy, we cannot rely on the free market to deliver that, much less optimize for it. It can be more efficient, in terms of actually delivering the desired goal versus not delivering it at all (or delivering a grossly bastardized version of it), to pool our resources and explicitly work towards making something available rather than hoping the free market will deliver it.
The free market did not deliver on reducing congestion in New York (in fact, one might say that over the decades, the free market is what made it worse), but the congestion pricing program has, and has resulted in a bunch of valuable/desirable knock-on effects.
I do not think that a centrally planned economy is workable; but collectively being deliberate about building the things we need/want, and taking a longer view, can result in significant efficiencies.
The free market ends up simply wasting resources in its drive to discover where efficiencies lie and how to take advantage of them.
What part of that is suffering, if it enables 100k constituents to put food on the table?
We shouldn't employ people in economically un-viable ways just because they need income. We can just give them money directly, or redirect them to other work, or a combination of the two.
If that is what's necessary to provide a social safety net, then maybe so. See the Works Progress Administration for an example of this.
> We can just give them money directly, or redirect them to other work
Ideally yes, but that isn't happening, hence the first option.
We may be straying here, though: this discussion didn't start out with someone saying what someone else should or shouldn't do. We were discussing the ethical and economic consequences of an idea.
I agree with you (except in classifying the genuine effort of my fellow people to be "fake jobs" just because a computer can do some of the work) and believe making a resilient, trustworthy, proven system for the former is a prerequisite to withdrawing the latter, to avoid suffering.
Unfortunately for us, the barrier to the former is ideological in nature and imposed by the elite few in power now, before any matters of capital allocation (human or financial) come into play.
This was previously stated: the good being done is 100,000 people can feed their families. What good is going without that? You'll enrich some private equity dudes and make a lot more people unemployed and a lot more families unhappy.
There's a huge assumption in your comment -- that having 100,000 employees necessarily guarantees (or even makes likely) that you will have some human to help you.
More likely, those 100,000 humans are mostly working on sales and marketing, and the few allocated to support are all incentivized to avoid you, and to send you canned answers. A reasonably decent AI would be better at customer support than most companies give, since it'll have the same rules and policies to operate with, but will most likely be able to speak and write coherently in the language I speak.
Insurance isn't like a widget. People have actual legal rights that insurers must service. This involves processing clerks, adjusters, examiners, underwriters, etc., which then requires actual humans, because AI with the pinpoint accuracy needed for these legally binding, high-stakes decisions isn't here yet.
E.g., issuing and continuing disability policies: Sifting through medical records, calling and emailing claimants and external doctors, constant follow-ups about their life and status. Sure, automate parts of it, but what happens when your AI:
a. incorrectly approves someone, then you need to kick them off the policy later?
b. incorrectly denies someone initial or continuing coverage?
Both scenarios almost guarantee legal action—multiple appeals, attorneys getting involved—especially when it's a denial of ongoing benefits.
And that's just scratching the surface. I get that many companies are bloated, and nobody loves insurance companies. No doubt, smarter regulations could probably trim headcount. But the idea that you could insure a billion people with just 100, or even 1000 (10x!), employees is just silly.
That's not an assumption.
I know that I, and many others, have been able to get a human on the phone every time we needed one. Regardless of the number of those humans actually working claims, in the current system, it is "enough".
I also know that it's impossible to give that level of service when you have 1 employee for every 10 million customers.
That's really all that you need in order to make the judgement that you're not going to get a human.
Side-note: I did a quick search and found that Allstate has 23k reps that actually handle claims and 55k employees total, so almost half of their workforce does claims and disputes. They also have 10% market share of the US's ~340 million people, so that's, at most, 1 rep per ~1,500 customers. Much better odds than 1 for every 10 million.
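A quick back-of-the-envelope on those figures (numbers as quoted above, not independently verified):

```python
# Allstate, per the comment: 23k claims reps, ~10% share of ~340M people.
us_population = 340_000_000
market_share = 0.10
claims_reps = 23_000

customers = us_population * market_share      # 34,000,000 customers
print(customers / claims_reps)                # ~1478 customers per rep

# Versus the proposed 100-person insurer serving a billion people:
print(1_000_000_000 / 100)                    # 10,000,000 customers per employee
```

That's roughly a four-orders-of-magnitude difference in staffing per customer.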
> A reasonably decent AI
And there's the problem - that AI doesn't exist. You're speculating about a scenario that simply hasn't been realized in the real world, and every single person that I've talked to who has interacted with an AI-based "support representative" has had a bad experience.
https://worldpopulationreview.com/countries/deaths-per-day
So 100,000 employees actually puts it surprisingly close to one case handled per day per employee.
Of course, a ton of people don’t have life insurance. And also, a lot of deaths are pretty straightforward.
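Making the parent's arithmetic explicit (the 170k/day figure is the worldwide all-cause number cited above, so this is an upper bound on claims):

```python
# Rough sanity check of the "one case per day per employee" estimate.
deaths_per_day = 170_000
employees = 100_000
print(deaths_per_day / employees)  # 1.7 potential cases per employee per day
```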
The Catholic church has 1B "customers" and seems to be doing OK with human-to-human interaction, without the need (or desire) for AI. They do so via ~500K priests and another 4M lay ministers.
The comparison to the Church doesn't seem very useful; their business model is pretty different.
For the record, that strikes me as seriously improper. Life insurance is a heavily regulated offering intended to provide security to families. It is the opposite of bitcoin, which is a highly speculative investment asset. Those two things should not be mixed.
Also, the fact that the disclosure seems to limit sales to Bermuda looks intentional. I suspect this product would be highly illegal in most if not all US states, so they must offer it only in Bermuda to avoid that issue.
> You can borrow Bitcoin against your policy, and the borrowed BTC adopts a cost basis at the time of the loan. So if BTC were to 10x after you fund your policy, you could borrow a Bitcoin from Meanwhile at the 10x higher cost basis—meaning you could sell that BTC immediately and not owe any capital gains tax on that 10x of appreciation
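Working the quoted mechanics through with made-up numbers makes the claim clearer. This just restates the quote's claimed basis treatment, it is not tax advice, and whether the treatment actually holds up is exactly what's being questioned:

```python
# Toy numbers for the quoted loan-basis claim.
btc_price_at_funding = 50_000      # you fund the policy at this price
btc_price_later = 500_000          # BTC has 10x'd

# You borrow 1 BTC against the policy. Per the quote, the borrowed coin
# takes its cost basis at the time of the loan:
loan_basis = btc_price_later

# Selling the borrowed BTC immediately:
sale_proceeds = btc_price_later
taxable_gain = sale_proceeds - loan_basis
print(taxable_gain)  # 0: the 10x appreciation is never a realized gain

# You still owe 1 BTC back to the insurer, of course.
```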
https://world.org/
You can take the founder out of a consultancy, but you can't take the consultancy out of the founder.
The junior guy started crying in the meeting. Like, just blubbering. My wife still feels bad about it, but still…
Weird thing: instead of firing him, McKinsey kept him and stipulated that he could only be in meetings when the partner was present.
Get at least a few years work experience and call me. Or alternatively, start your own dang business if you are really that smart.
You want the senior people focusing on the problems, strategy, and comms and not data aggregation and power point formatting.
Half the time it doesn't actually matter who the consultant is, the business is just looking for an arbiter to provide a second opinion or justify a decision.
Modern consulting seems like one of the better deaths inflicted by GenAI. The entire industry is a means to commit corporate espionage legally.
They can do something more useful with that education.
This is your mistake. The point of a consultant is to tell the business to do what the business was already planning to do anyway. This way the consultant takes the risk/blame of the decision. It's similar to the classic "no one was ever fired for buying IBM": "I did what the McKinsey consultant told me" is CYA. The last piece is that since everyone is in on the game, when a decision leads to bad outcomes they don't blame the consultant, but something they could not have foreseen.
The person making the recommendations isn't just out of school. They've been at the firm for years, and do have a ton of experience.
The recent grads are there for all of the grunt work -- collecting massive amounts of data and synthesizing it. You don't need years of business experience for that, but getting into a top college and writing lots of research papers in college is actually the perfect background for that.
1. This BA/Asc was on <4 hours of sleep, maybe many days in a row
2. They walked into that meeting thinking they had completed exactly what the client (your wife) wanted
And after the meeting (this I feel more confident about, as it happens a lot)
1. A conversation happened to see if the BA/Asc wanted to stay on the project
2. They said yes, and the leadership decided that the best way to make this person feel safe was to always have a more experienced person in the room to deal with hiccups (in this case, the perception of low quality work)
Isn't that... good? What else would you expect
Why would they fire him after a single incident?
Sounds like McKinsey is a more compassionate organization than you, and that's saying something :)
Saying the work sucks isn't bullying, unless you didn't know you were incompetent.
Having worked in highly regulated industries, I’ve learned that the best way to disrupt incumbents is by creating a product that assumes more business risk than is typically accepted. Large, regulated companies are extremely risk-averse—so if you can take on that risk in a smart, innovative way, you’ll win.
They essentially lied about anticipated KPIs and let their "tech" people put together a 15k EUR/month (before public release) platform on AWS, which was such a mess that the second year's CTO started from scratch. After some heavy arguments over their poor performance, McKinsey agreed to let some "non-technical" people work there for a couple of months for free. Every argument you had with the McKinsey "engineers" felt like talking to AWS Sales: they had barely any technical insight, just a catalog of "pre-made solutions" to choose from.
Very much a symbiotic vs parasitic relationship.
Now that I'm working at a big organization (a Fortune 500 company), I can relate. I'm by far the most innovative person in my team and I'm being held down because I'm not doing my role (as I'm not a dev but a data analyst at the moment).
If I did stick to my role, however, then we wouldn't be innovating, and the C-suite wants us to innovate with AI. I'm the only one in my department who can create actual AI automations. And the IT department has basically been stripped bare by upper management.
If anyone wants an actual dev building AI automations and think how we can disrupt with the state of the art, my email is in my profile.
The fact that the company has become a sort of pseudo-VC (mentorship but not financing) for small teams within megacorps is interesting. I wonder why large corps find it so difficult to innovate. I think that they become somewhat "load-bearing" in society and the lines between the company and the market begin to blur. Any change the company makes causes a misalignment because they shaped the market to fit themselves.
Unless you can solve that part of the problem as well as the big players do, you will run into problems at some point; using extreme value theory you can even estimate when.
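For the curious, a toy sketch of that extreme-value idea: fit a generalized extreme value (GEV) distribution to annual maximum losses and read off a 1-in-100-year level. The data here is synthetic; a real insurer would use its own loss history.

```python
# Fit a GEV to (synthetic) annual maximum losses and estimate how bad a
# 1-in-100-year year looks, using scipy's genextreme distribution.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(0)
annual_max_losses = rng.gumbel(loc=10.0, scale=2.0, size=40)  # 40 years, in $M

shape, loc, scale = genextreme.fit(annual_max_losses)

# 100-year return level: the loss exceeded with probability 1/100 per year.
return_level_100y = genextreme.isf(1 / 100, shape, loc, scale)
print(f"Estimated 1-in-100-year annual max loss: ${return_level_100y:.1f}M")
```

(Whether 40 years of data pins down the tail well is another question; the point is only that the tail risk is estimable in principle, so "at some point" can be given a number.)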
I really think every founder (and startup worker) needs to take seriously the marketing side of the business, and not just believe that new technology will win.
(While I, too, am allergic to bitcoin scams, given increasing levels of political corruption monkeying with markets, rates, and regulation, I can also see it as an enticing alternative for those looking to get long-term investments off the dollar. For insurance, the main question is, will the money be there and be made available? Having seen even highly-regulated pensions fail (without federal insurance recourse in the case of religious hospital behemoths), I can see how technical guarantees independent of regulation or law could be compelling.)
The key phrase is LIFE Insurance, not HEALTH Insurance!
They are vastly different markets.
You don’t deny claims for life insurance as companies would do for health insurance. It’s a very different set of circumstances to have to deny life insurance.
So 3 years at McKinsey taught OP the corporate BS. That paragraph doesn't say anything useful.
Some of them, off the top of my head: number of customers, number of active policies, premium amounts, assets under management, time to claim resolution, etc. He's talking to business people who understand the insurance market.
> Our vision at Meanwhile is to build the world's largest life insurer as measured by customer count, annual premiums sold, and total assets under management.
We think we can be bigger (more customers, more sales, more money) than all existing players.
> We aim to serve a billion people, using digital money to reach policyholders and automation/AI to serve them profitably.
We're looking to eclipse the population of any one country and we're going to use something like Bitcoin to side-step national currencies (and maybe also to avoid existing regulatory structure, not clear from the ambiguous language).
> We plan to do with 100 people what Allianz and others do with 100,000.
We believe we can automate or use AI to eliminate the need for people to actually support these billion customers.
All three of those are very bold statements/goals.
> And though when we started our business in 2023 (ChatGPT wasn’t out yet), you could begin to feel that something like that was possible in a way it wasn’t before.
Perhaps a typo in the year?
Somewhat ironically, over the past 20 years I’ve come to reject PhD-type career tracks after seeing how much PhD overproduction there is and how my older colleagues only had a BS or MS. These days, I yearn to leave my Big Tech job to start a “boring” business. Right now I’m taking Accounting 101 at a local university to understand business financials better.
My understanding of this reputation is: this often happens to the detriment of either product quality or employee satisfaction. It's debatable whether they actually have a reputation for providing value. Short term? Maybe, albeit expensive. Long term? I'd say no.
one nitpick:
> And though when we started our business in 2023 (ChatGPT wasn’t out yet), you could begin to feel that something like that was possible in a way it wasn’t before.
ChatGPT launched in late 2022...
I would rather work for someone honest than for a bullshit artist. But I wouldn't necessarily decline to work for the world's best bullshit artist. Just that you want to be very sure you know who at the table is the sucker.
Ideally you get someone who's good at both, or at least competent at one and really good at the other, such as Jobs or Gates.