Cloud providers like AWS, GCP, and Azure should offer local emulators for development. This would encourage developers to utilize their services more.
I currently work with several AWS serverless stacks that are challenging or even impossible to integration test locally. While LocalStack provides a decent solution, it seems like a service that AWS should offer to enhance the developer experience. They’d also be in the best position to keep it current.
An official local emulator sounds nice until AWS has to explain why S3, IAM, or Kinesis behaves a little differently on your laptop. The minute it's blessed, people will treat every mismatch as an AWS bug, not a dev-time compromise.
Microsoft used to, with the Azure Stack Development Kit: the ASDK was a single-node "sandbox" meant to emulate the entire Azure cloud locally. They may have something similar now, but pared back.
Totally agree that AI coding makes this even more important. We are working on a coding agent-first cloud, and a large part of that is ensuring everything runs locally so folks can let their coding agents define the infra and test it all.
Without the infrastructure behind it to make it make sense, cloud platforms just seem like convoluted ways of storing data and launching applications/VMs to me.
The only functional use of a tool like this to me would be to learn how to use AWS so that I can work for people who want me to use AWS. Would that not be to Amazon's benefit?
It's a fair point, but only if you ignore that the overwhelming revenue drivers for these services are large corps who are already locked in.
DevX doesn't matter at all once you're there.
The myopia among us "online people" is assuming the number of voices here and elsewhere correlates to revenue.
If you want to use that for unit testing, then I think it would be better to mock the calls to AWS services. That way you test only your implementation, in an environment you control.
If you want to use that for local development, then I think it would be better to provision a test environment (using Terraform or any other IaC tool). That way you don't run the risk of a bug slipping into prod because the emulator has a different behaviour than the real service.
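A minimal sketch of that mocking approach, assuming the client is injected rather than created inside the function (the function name, bucket, and key below are made up for illustration):

```python
from unittest.mock import MagicMock

# Code under test: takes the client as a parameter instead of creating it,
# so tests never touch AWS. upload_report and the key are hypothetical.
def upload_report(s3_client, bucket: str, data: bytes) -> str:
    key = "reports/latest.bin"
    s3_client.put_object(Bucket=bucket, Key=key, Body=data)
    return key

def test_upload_report():
    fake_s3 = MagicMock()  # stands in for boto3.client("s3")
    key = upload_report(fake_s3, "my-bucket", b"\x00")
    # Verify our implementation made exactly the call we expect
    fake_s3.put_object.assert_called_once_with(
        Bucket="my-bucket", Key="reports/latest.bin", Body=b"\x00"
    )
    assert key == "reports/latest.bin"

test_upload_report()
```

The test exercises only your implementation; the real service's behaviour is deliberately out of scope, which is exactly the trade-off being argued for here.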
A few notes about "local AWS" (or "local cloud") based on other comments and my own XP:
- I'm not sure this kind of product is really a foot in the door to create new customers. Someone not willing to create an actual account because they have no money, or they just don't want to put their card details in, is not someone who's going to become a six-figures-a-year customer, which is the level needed to be noticed by those providers.
- The free tier of AWS is actually quite generous. For my own needs I spend less than $10/year total spread around dozens of accounts.
- If one wants to learn AWS, they MUST learn that there are no hard spend limits, and the only way to actually learn it, is to be bitten by it as early as possible. It's better to overspend $5 at the beginning of the journey than to overspend $5k when going to prod.
- The main interest of local cloud is actually making things easier and iterating faster, because you don't focus on the whole security layer. Since everything is local, you focus on using the services, period. Meanwhile, if you wanted to rely on actual dev accounts, you'd first need to make sure that everything is secure. With local cloud you can skip all this. But then, if you decide to go live, you have to pay down this security debt, and more often than not that breaks things that "work on my computer".
- Localstack has the actual support of AWS; that's why they have so many features and are able to follow the releases of the services. I doubt this FOSS alternative will have it.
Security is the entire reason I want tools like this, specifically for emulating IAM: if you've got a hard organisational "least privilege" mandate, you start with virtually nothing allowed and have to enable permissions for the explicit set of API calls you're using. You're not doing `Allow: *`, but you're also not using AWS-managed roles. Combine that with the fact that (certainly with Terraform) there's no mapping between "I need to manage this resource" and "these are the permissions needed to do so", and every time you do something new in your infrastructure you're into a game of permissions whack-a-mole, where the deploy/fix/deploy cycle can easily take a multiple of the time it took to develop the feature you want to deploy, because one trip round the loop is a full attempted deployment. Whereas if there's an accurate local emulator, not just of the feature but of the permissions attached to it, you can shortcut the slow bit.
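For readers who haven't lived this: the whack-a-mole comes from hand-enumerating statements like the sketch below (the actions and bucket ARN are illustrative, not from any real policy):

```python
# Illustrative policy shapes only; the action list and bucket ARN are made up.
WILDCARD = {"Effect": "Allow", "Action": "*", "Resource": "*"}  # forbidden under a least-privilege mandate

LEAST_PRIVILEGE = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            # Every API call the deploy touches must be enumerated by hand;
            # miss one and the next deploy attempt fails with AccessDenied.
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "arn:aws:s3:::example-bucket/*",
        }
    ],
}
```

Each AccessDenied costs you a full deploy round-trip against real AWS, which is the slow bit an accurate IAM emulator would shortcut.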
Localstack does have IAM emulation as part of the paid product. I'm intrigued to see how well this does at the same thing.
When you're running hundreds of integration test suites per day in CI pipelines, the free tier is irrelevant. You need fast, deterministic, isolated environments that spin up and tear down in seconds, not real AWS calls that introduce network latency, eventual consistency flakiness, rate limits, and costs that compound with every merge request.
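The usual CI pattern can be sketched as building every SDK client against a configurable endpoint so tests hit the emulator instead of real AWS. The environment variable name and port 4566 (LocalStack's default edge port) are assumptions:

```python
import os

def client_kwargs(service: str) -> dict:
    """Keyword arguments for boto3.client() that route calls to a local emulator."""
    endpoint = os.environ.get("LOCAL_AWS_ENDPOINT", "http://localhost:4566")
    return {
        "service_name": service,
        "endpoint_url": endpoint,        # the emulator, not real AWS
        "region_name": "us-east-1",
        "aws_access_key_id": "test",     # emulators accept dummy credentials
        "aws_secret_access_key": "test",
    }

# Usage (with boto3 installed): s3 = boto3.client(**client_kwargs("s3"))
```

Tests then run with zero network egress, no rate limits, and no bill attached to each merge request.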
It'd be great to just use AWS, but in practice it doesn't happen. Even if billing doesn't bite, rate limits plus the lack of any notion of namespacing will hit you very quickly in CI. It's also not practical to give every dev an AWS account; I did it with 200 people and it was OK, but it always caused management pain. The free tier also doesn't cover organizations.
> they MUST learn that there are no hard spend limits, and the only way to actually learn it, is to be bitten by it as early as possible
This is a bizarre take. "The best way to learn fire safety is to get burned." You can understand AWS billing without treating surprise charges as a rite of passage.
The main use case for local emulators is unit testing. Maybe even some integration testing, especially for stuff like VPC setup that often can't be done without global side effects.
Security for dev accounts is not a big deal, just give each developer an individual account and set up billing alerts.
> LocalStack's community edition sunset in March 2026 — requiring auth tokens, dropping CI support, and freezing security updates. Floci is the no-strings-attached alternative.
This project would be comical if it takes off. In Romanian this name means "a small pile of hair", but informally it's only used as a synonym for pubic hair.
Looking at the features this seems to be an awesome project, but the commit history (even on the develop branch) shows almost nothing.
No pull requests, no real issues; it smells like it was auto-generated, which is disappointing. It also makes it harder to trust: if you're going to test with "real data", how do we know it won't be sent elsewhere?
I don't understand why you'd be making this comment when the commit history shows this whole project is a week old.
> how do we know it won't be sent elsewhere?
In the past, open source meant you trusted, in theory, that someone else would notice and report these things. These days, though, you can just load up your LLM of choice and ask it to do a security audit. There are some unreliable ways to cheat this, and audits aren't magical, but it would be pretty hard to subvert this kind of audit.
It is usual for a new project to start small, and slowly add new features. Instead this project seems to arrive "fully formed".
There is no "this is the core, then we add S3, then we add RDS, then we add ..." history to view, and that seems both unnatural and surprising. Over half the commits are messing around with GitHub Actions and documentation.
Local AWS emulators are one of those tools where the value is inversely proportional to how much you trust your staging environment. If your staging account perfectly mirrors prod, you don't need a local emulator. But nobody's staging perfectly mirrors prod, so you end up needing something like this for the fast feedback loop on IAM policies, Step Functions state machines, and anything involving SQS/SNS fanout where the iteration cycle against real AWS is measured in minutes per attempt. The question is always parity — how closely does the emulation match real AWS behavior at the edges? LocalStack has been chasing that for years and still hits gaps. Curious how Floci handles the services where AWS's own behavior is underdocumented.
The point of tools like this is for development, not staging. By “development” I don’t just mean developers writing code, but any unit tests that require behavioural testing that cannot easily be mocked too.
So by the time you’re ready to push to staging you should be past the point of wanting to emulate AWS and instead pushing to UAT/test/staging (whatever your naming convention) AWS accounts.
Ideally you would have multiple non-production environments in AWS and if your teams are well staffed then your dedicated Cloud Platform / DevOps team should be locking these non-prod environments from developers in the same way as they do to production too.
Bonus points if you can spin up ephemeral environments automatically for feature branches via CI/CD. But that’s often impractical / not pragmatic for your average cloud-based project.
Although I love LocalStack and am grateful for what they have done, I always thought that an open community-driven solution would be much more suitable and would open a lot of doors for AWS engineers to contribute back. I’m certain that it’s in their best interest to do so (especially as many of their popular products have local versions).
It’s a no-brainer to me as AI adoption continues to increase: local-first integration testing is a must and teams that are equipped to do so will be ahead of everyone else
100% this. Especially with agentic workflows actually mutating state now, local testing is the only safe way to see what happens when a model hallucinates a table drop, without burning an actual staging database.
Cool, I've tried LocalStack before and can't wait to give this a try.
Anyway, does anyone know if there's something similar for GCP? So far https://github.com/goccy/bigquery-emulator has helped me a lot in emulating BigQuery behaviour, but I can't find an emulator for the whole GCP environment.
Getting Java to run is a base requirement for running most software written in Java.
However, there is a dedicated Dockerfile for creating a native image (Java-speak for "binary") that shouldn't require a JVM. I haven't tested running the binary myself, so it's possible there are dependencies I'm not aware of, but I'm pretty sure you can just grab the binary out of the container image and run it locally if you want to.
It'll produce a Linux binary of course; if you're on macOS or Windows you'd have to create a native image for those platforms manually.
Isn’t a “local emulator of cloud services” kind of the perfect project to be vibe-coded? Extremely well-documented surface to implement, very easy to test automatically and prove it matches the spec, and if some things end up suboptimal performance-wise, that's totally fine, because the project will not be used in a tight loop anyway: it just needs to be faster than the network hop plus the time it takes the cloud to actually persist things. It can do everything in RAM and doesn't need to scale.
So I’m shocked cloud providers haven’t just done this themselves, given how feasible it is with the right harness
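The "prove it matches the spec" part can be sketched as a parity harness: run the same operation against the real service and the emulator, then compare normalized responses. The helper and the client names in the usage comment are hypothetical:

```python
# Generic parity check; real_call/emulated_call are any zero-argument callables,
# e.g. lambdas wrapping the same method on a real and an emulated client.
def parity_check(real_call, emulated_call, normalize=lambda r: r):
    real, emulated = normalize(real_call()), normalize(emulated_call())
    return real == emulated, {"real": real, "emulated": emulated}

# Hypothetical usage with two boto3-style clients:
# ok, diff = parity_check(
#     lambda: real_s3.list_buckets(),
#     lambda: local_s3.list_buckets(),
#     normalize=lambda r: sorted(b["Name"] for b in r["Buckets"]),
# )
```

The hard part, as other comments note, is the edges where AWS's own behaviour is underdocumented and there is no spec to normalize against.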
Not necessarily. Would you respond the same if the previous person said, "Was this built using an IDE" or "What qualifications do you have to write this software"?
Shit code can be written with AI. Good code can also be written with AI. The question was only really asked to confirm biases.
As someone who has worked in projects with hundreds of seemingly trivial dependencies which still manage to produce a steady stream of security notices, "What qualifications do you have to write this software" seems like an entirely reasonable, far too seldom asked question to me.
I don't automatically dismiss AI-written code, but when it's obvious it was barely reviewed and sloppily committed, with broken links 404ing or files missing from git, then it is slop.
Using an LLM as a tool and guiding it with care is different from tossing it a one-sentence prompt to copy LocalStack and expecting the bot to rewrite it for you, then pushing a thousand files in one go with typos in half the commit messages.
Longevity of products comes from the effort and care put into them. If you barely invest enough to even look at the output, you get the graveyard of "Show HN" slop: a temporary project that fades away quickly.
There are no real code commits; the commits are all trying to fix CI.
The release page (changelog) links to invalid, wrong, useless, or otherwise unrelated code changes.
It doesn't clearly state that it was AI-written, and it tries to hide the CLAUDE.md file.
The feature table is clearly not reviewed: for example, "Native binary" = "Yes" while LocalStack is "No". There is no "native" binary; it's a packaged JVM app, so LocalStack is just as "native". And "Security updates: Yes" is entirely unproven.
I'll have a much harder time convincing my company to try out such a tool if it's AI slop than when there's a group of people behind it.
I'll happily use it for personal development stuff if I ever decide to try cloud stuff in my free time, but it's hardly an alternative to established projects like LocalStack for serious business needs.
Not that any of it should matter to the people behind this project of course, they can run and make it in whatever way they want. They stand to lose nothing if I can't convince my boss and they probably shouldn't care.
I run several Docker services on EC2 and testing locally before deploying has always been painful. This looks promising for catching config issues before they hit production. Does it support EC2 + RDS together in local mode?
At that speed you can treat it as disposable: fresh instance per test run, no shared state, no flaky tests from leftover S3 objects. That was never practical with LocalStack's cold start.
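A small sketch of what "disposable" buys you in practice: unique resource names per run, so even a failed teardown can't poison the next run (the prefix is arbitrary):

```python
import uuid

def fresh_name(prefix: str = "citest") -> str:
    # S3 bucket names must be lowercase; uuid hex already is.
    return f"{prefix}-{uuid.uuid4().hex[:12]}"

# e.g. create fresh_name() buckets against the emulator in setup, then throw
# the whole emulator instance away instead of deleting objects one by one.
```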
Also, LocalStack, and presumably this, are good for validating your logic before you throw in roles, networking, and everything else that can be an issue on AWS.
Confirm it runs in this, and 99% of the time the issue when you deploy is something in the AWS config, not your logic.
Exactly, especially when people are starting out and don't have a clear understanding of the inner workings of the system. Jobs are getting harder to find nowadays, and if you make one mistake while learning, you either pay or the learning stops.
> it seems like a service that AWS should offer to enhance the developer experience
AWS don't want that support nightmare.
Great to see LocalStack offset a bit thanks to ... AI-driven shift-left infrastructure tooling? This is a great trend.
> Would that not be to Amazon's benefit?
It could encourage more development and adoption and lead to being a net-positive for the revenue.
> assuming number of voices here and elsewhere correlate to revenue
It does not.
> Security for dev accounts is not a big deal, just give each developer an individual account and set up billing alerts
If your only focus is spending, yes.
Otherwise, a "not a big deal" dev account can quickly become the door to your whole org for hackers.
RDS databases, DynamoDB, and S3? Much less so.
That's my point: I'm not the one setting it up and using it, it's the devs, and I'm not expecting them to know how to navigate a cloud provider securely.
So it's either setting up the dev account with all the required guardrails in place, or using "local cloud" on their computers.
https://news.ycombinator.com/item?id=47420619
https://github.com/robotocore/robotocore
> by the time you’re ready to push to staging you should be past the point of wanting to emulate AWS
But you can’t have every dev tweaking staging at the same time as they work. How can you debug things when the ground is shifting beneath you?
Ideally every dev has their own AWS account to play with, but that can be cost prohibitive.
A good middle ground is where 95% of work is done locally using emulators and staging is used for the remaining 5%.
One of the first things I do when building a new component is create a docker compose environment for it.
Mentions CLAUDE.md and didn't even bother deleting it.
Whether their concerns are driven by curiosity, ethics, philosophy, or something else entirely is really immaterial to the question itself.
The commits are sloppy and careless and the commit messages are worthless and zero-effort (and often wrong): https://github.com/hectorvent/floci/commit/1ebaa6205c2e1aa9f...