Could you genericise the PostgreSQL requirement and provide a storage interface we could plug into? I think I have a use for this in Polykey (https://GitHub.com/MatrixAI/Polykey), but we use RocksDB (a transactional embedded key-value DB).
That's definitely worth considering! The core algorithms can work with any data store. That said, we're focused on Postgres right now because of its incredible support and popularity.
Hi there, I think I might have found a typo in your example class in the GitHub README. In the class's `workflow` method, shouldn't we be `await`-ing those steps?
The main use case is building reliable programs: for example, orchestrating long-running workflows, running cron jobs, and coordinating AI agents with a human in the loop.
DBOS makes external asynchronous API calls reliable and crashproof, without needing to rely on an external orchestration service.
How do you persist execution state? Does it hook into the Python interpreter to capture referenced variables/data structures etc, so they are available when the state needs to be restored?
No hooks into the interpreter needed--DBOS records in Postgres:
- Which workflows are executing
- What their inputs were
- Which steps have completed
- What their outputs were
Here's a reference for the Postgres tables DBOS uses to manage that state: https://docs.dbos.dev/explanations/system-tables
About workflow recovery: if I'm running multiple instances of my app that uses DBOS and they all crash, how do you divide the work of retrying pending workflows?
Each workflow is tagged with the ID of the executor that runs it. You can instruct each new executor to handle a subset of the pending workflows. This is done automatically on DBOS Cloud. Here's the self-hosting guide: https://docs.dbos.dev/typescript/tutorials/development/self-...
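For self-hosting, here's a minimal sketch of reassigning that work on startup. This assumes the recovery call described in the self-hosting guide (`DBOS.recoverPendingWorkflows`); treat the exact name and signature as an assumption, and the executor IDs are illustrative:

```typescript
import { DBOS } from "@dbos-inc/dbos-sdk";

async function main() {
  await DBOS.launch();
  // Assumption: claim the pending workflows tagged with the crashed
  // executors' IDs so this instance resumes them from their last
  // completed step.
  await DBOS.recoverPendingWorkflows(["executor-a", "executor-b"]);
}

main().catch(console.error);
```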
I was originally looking at the docs to see if there was any information on multi-instance (horizontally scaled) apps. Is this supported? If so, how does that work?
Yeah, DBOS Cloud automatically (horizontally) scales your apps. For self-hosting, you can spin up multiple instances and connect them to the same Postgres database. For fan-out patterns, you can leverage DBOS Queues. This works because DBOS uses Postgres for coordination, rate limiting, and concurrency control. For example, you can enqueue tasks that are processed by multiple instances; DBOS makes sure each task is dequeued by exactly one instance. Docs for Queues and Parallelism: https://docs.dbos.dev/typescript/tutorials/queue-tutorial
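For a concrete picture, here's a minimal sketch of the fan-out pattern, assuming the `WorkflowQueue` API from the queue tutorial above; the queue name and task body are illustrative:

```typescript
import { DBOS, WorkflowQueue } from "@dbos-inc/dbos-sdk";

// Every instance connected to the same Postgres database shares this queue.
const taskQueue = new WorkflowQueue("task_queue", { concurrency: 10 });

export class Tasks {
  @DBOS.workflow()
  static async processTask(taskId: number) {
    // Real work (steps, transactions) would go here.
    DBOS.logger.info(`Processed task ${taskId}`);
  }
}

// Enqueue from any instance; Postgres coordination guarantees each task
// is dequeued and executed by exactly one instance.
export async function fanOut(taskIds: number[]) {
  for (const id of taskIds) {
    // Resolves once the task is enqueued (returning a workflow handle);
    // it does not wait for the task to finish.
    await DBOS.startWorkflow(Tasks, { queueName: taskQueue.name }).processTask(id);
  }
}
```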
> What’s unique about DBOS’s take on durable execution (compared to, say, Temporal) is that it’s implemented in a lightweight library that’s totally backed by Postgres. All you have to do to use DBOS is “npm install” it and annotate your program with decorators. The decorators store your program’s execution state in Postgres as it runs and recover it if it crashes. There are no other dependencies you have to manage, no separate workflow server–just your program and Postgres.
this is good until the postgres server fills up with load and you need to scale up / fan out work to a bunch of workers? how do you handle that?
(disclosure: former temporal employee, but no hate meant, i'm all for making more good orchestration choices)
That's a really good question! Because DBOS is backed by Postgres, it scales as well as Postgres does, so 10K+ steps per second with a large database server. That's good for most workloads. Past that, you can split your workload into multiple services or shard it. Past that, you've probably outscaled any Postgres-based solution (very few services need this scale).
The big advantages of using Postgres are:
1. Simpler architecturally, as there are no external dependencies.
2. You have complete control over your execution state, as it's all in tables on your Postgres server (docs for those tables: https://docs.dbos.dev/explanations/system-tables#system-tabl...)
Unaffiliated with DBOS but I agree that Postgres will scale much further than most startups will ever need! Even Meta still runs MySQL under the hood (albeit with a very thick layer of custom ORM).
Do you consider “durability” to include idempotency? How can you guarantee that without requiring the developer to specify a (verifiable) rollback procedure for each “step”? If Step 1 inserts a new purchase into my local DB, and Step 2 calls the Stripe API to “create a new purchase,” what if Step 2 fails (even after retries, e.g. maybe my code is using the wrong URL or Stripe banned me)? Maybe you haven’t “committed” the transaction yet, but I’ve got a row in my database saying a purchase exists. Should something clean this up? Is it my responsibility to make sure that row includes something like a “transaction ID” provided by DBOS?
It just seems that the “durability” guarantees get less reliable as you add more dependencies on external systems. Or at least, the reliability is subject to the interpretation of whichever application code interacts with the result of these workflows (e.g. the shipping service must know to ignore rows in the local purchase DB if they’re not linked to a committed DBOS transaction).
Yes, if your workflow interacts with multiple external systems and you need it to fully back out and clean up after itself when a step fails, you'll need backup steps--this is basically the saga pattern.
Where DBOS helps is in ensuring that the entire workflow, including all backup steps, always runs. So if your service is interrupted and that causes the Stripe call to fail, then upon restart your program will automatically retry the Stripe call and, if that doesn't work, back out and run the step that closes out the failed purchase.
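To make the Stripe scenario concrete, here's a rough sketch of that saga shape. The table, the Stripe call, and the cleanup step are illustrative placeholders (not a real integration), and it assumes a Knex-backed DBOS transaction via `DBOS.knexClient` as in the ORM guides:

```typescript
import { DBOS } from "@dbos-inc/dbos-sdk";

export class Checkout {
  // Step 1: the local insert commits atomically with DBOS's bookkeeping.
  @DBOS.transaction()
  static async insertPurchase(purchaseId: string) {
    await DBOS.knexClient("purchases").insert({ id: purchaseId, status: "pending" });
  }

  // Step 2: the external call; throw on failure so the workflow compensates.
  @DBOS.step()
  static async createStripePurchase(purchaseId: string) {
    // Call the Stripe API here (placeholder).
  }

  // Backup step: close out the failed purchase so downstream services
  // (e.g. shipping) know to ignore the row.
  @DBOS.transaction()
  static async markPurchaseFailed(purchaseId: string) {
    await DBOS.knexClient("purchases").where({ id: purchaseId }).update({ status: "failed" });
  }

  // If the process dies anywhere in here, recovery resumes the workflow,
  // so either the Stripe call eventually succeeds or the backup step runs.
  @DBOS.workflow()
  static async purchase(purchaseId: string) {
    await Checkout.insertPurchase(purchaseId);
    try {
      await Checkout.createStripePurchase(purchaseId);
    } catch (err) {
      await Checkout.markPurchaseFailed(purchaseId);
      throw err;
    }
  }
}
```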
What are the limits on retroaction? Can retroactive changes revise history?
For example, if I change the code / transactions in a step, how do you reconcile which state to prepare for which transactions? You'd presumably need to reconcile deleted and duplicated calls to the DB.
Generally we recommend against retroaction--the assumed model is that every workflow finishes on the code version it started on. This is managed automatically in our hosted version (DBOS Cloud) and there's an API for self-hosting: https://docs.dbos.dev/typescript/tutorials/development/self-...
That said, we know sometimes you have to do surgery on a long-running workflow, and we're looking at adding better tooling for it. It's completely doable because all the state is stored in Postgres tables (https://docs.dbos.dev/explanations/system-tables).
I see the example for running a distributed task queue. The docs aren't so clear, though, on running a distributed workflow, apart from the comment about using a VM ID and the admin API.
We use spot instances for most things to keep costs down and job queues to link steps. Can you provide an example of a distributed workflow setup?
Got it! What specifically are you looking for? If you launch multiple DBOS instances connected to the same Postgres database, they'll automatically form a distributed task queue, dividing new work as it arrives on the queue. If you're looking for a lightweight deployment environment, we also have a hosted solution (DBOS Cloud).
What is the determinism constraint? I noticed it mentioned several times in blog posts, but one of the use-cases mentioned here is for use with LLMs, which produce non-deterministic outputs.
Great question! A workflow should be deterministic: if called multiple times with the same inputs, it should invoke the same steps with the same inputs in the same order. But steps don't have to be deterministic--they can invoke LLMs, third-party APIs, or any other operation. Docs page on determinism: https://docs.dbos.dev/typescript/tutorials/workflow-tutorial...
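Here's a minimal sketch of that split (class and method names are illustrative): all the nondeterminism lives inside the step, while the workflow itself stays deterministic:

```typescript
import { DBOS } from "@dbos-inc/dbos-sdk";

export class Roll {
  // Steps may be nondeterministic. DBOS checkpoints each step's output,
  // so a recovered workflow reuses the recorded roll instead of re-rolling.
  @DBOS.step()
  static async rollDie(): Promise<number> {
    return Math.floor(Math.random() * 6) + 1;
  }

  // The workflow is deterministic: same inputs, same steps, same order.
  @DBOS.workflow()
  static async play(rounds: number): Promise<number> {
    let total = 0;
    for (let i = 0; i < rounds; i++) {
      total += await Roll.rollDie();
    }
    return total;
  }
}
```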
DBOS always uses transactions to perform database operations.
If you're writing a function that performs database operations, you can use the @DBOS.transaction() decorator to wrap the function so that DBOS's bookkeeping records commit in the same transaction as your operation.
If you're using an ORM for those database operations, there are guides for several:
- Drizzle (we're also a sponsor of Drizzle): https://docs.dbos.dev/typescript/tutorials/orms/using-drizzl...
- Knex: https://docs.dbos.dev/typescript/tutorials/orms/using-knex
- Prisma: https://docs.dbos.dev/typescript/tutorials/orms/using-prisma
More ORM support is on the way.
However, if you're interfacing with a third-party API, then that wouldn't be part of a database transaction (you'll use @DBOS.step instead). The reason is that you don't want to hold database locks when you're not performing database operations.
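For code, here's the bare minimum code example for a workflow (a sketch using the decorator API described above; the step bodies are placeholders):

```typescript
import { DBOS } from "@dbos-inc/dbos-sdk";

export class Example {
  @DBOS.step()
  static async stepOne() {
    DBOS.logger.info("Step one completed!");
  }

  @DBOS.step()
  static async stepTwo() {
    DBOS.logger.info("Step two completed!");
  }

  // Each step's completion is recorded in Postgres; if the program is
  // interrupted, the workflow resumes from the first incomplete step.
  @DBOS.workflow()
  static async workflow() {
    await Example.stepOne();
    await Example.stepTwo();
  }
}
```

The steps can be any TypeScript function. Then we have a bunch more examples in our docs: https://docs.dbos.dev/. Or if you want to try it yourself, download a template: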
Yeah, the arguments and return values of steps have to be serializable to JSON.
Also, what happens with versioning? What if I want to deploy new code?
For versioning, each workflow is tagged with the code version that ran it, and we recommend recovering workflows on an executor running the same code version as the one the workflow started on. Docs for self-hosting: https://docs.dbos.dev/typescript/tutorials/development/self-.... In our hosted service (DBOS Cloud) this is all done automatically.
Loved the Supabase coverage from a month ago showing under the hood what DBOS stores and how the data flows through it. That's what made DBOS click for me; before that it felt very abstract.
https://supabase.com/blog/durable-workflows-in-postgres-dbos
https://news.ycombinator.com/item?id=42379974
Is it possible to mix typescript and python steps?
Did you do any literature research on Smalltalk?