Here’s a demo: https://youtu.be/luoMApPeglo
Two years ago, I was hired at a Stanford lab to run models for my labmates. Some post-doc would ask me to run a set of 1-5 models in sequence with tens of thousands of inputs, and I would email them back the results after setting up the workflow on the university cluster.
At some point, it became unreasonable for all of an organization's computational biology work to go through an undergrad, so we built Tamarind as a single place for all molecular AI tools, usable at massive scale with no technical background needed. Today, we're used by many of the top 20 pharma companies, dozens of biotechs, and tens of thousands of scientists.
When we started getting adoption at big pharma companies, we found the same problem there. I know directors of data science whose job is half running scripts for other people.
Lots of companies have also deprecated their internally built solutions to switch over; dealing with GPU infra and onboarding Docker containers is not a very exciting problem when the company you work for is trying to cure cancer.
Unlike non-specialized inference providers, we build both a programmatic interface for developers and a scientist-friendly web app, since most of our users are non-technical. Some of them used to extract proteins from animal blood before replacing that process with generating proteins via AI on Tamarind.
Besides grinding out container images for each of the models we serve, we've designed a standardized schema for describing each model's data formats, so every tool's inputs and outputs can be shared in a consistent way. We've also built a custom scheduler and queue optimized for horizontal scaling (each inference call takes minutes to hours and runs on one GPU at a time), splitting jobs across CPUs and GPUs to minimize turnaround time.
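For a flavor of what that schema buys you, here's a minimal sketch of a per-model spec; the field names are illustrative, not our actual schema:

```python
from dataclasses import dataclass, field

# Illustrative only: field names are hypothetical, not our actual schema.
@dataclass
class ModelSpec:
    name: str                   # e.g. "alphafold2"
    inputs: dict                # input field -> type ("protein_sequence", ...)
    outputs: dict               # output field -> type ("pdb", "csv", ...)
    needs_gpu: bool = True      # whether inference requires a GPU
    cpu_stages: list = field(default_factory=list)  # stages split onto CPU pools

# One spec per model lets the same web form, API, and scheduler drive
# any tool without model-specific glue code.
alphafold = ModelSpec(
    name="alphafold2",
    inputs={"sequence": "protein_sequence"},
    outputs={"structure": "pdb"},
    cpu_stages=["msa"],
)
```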
As we've grown to handle a substantial portion of the biopharma R&D AI demand on behalf of our customers, we've expanded beyond just offering a library of open source protocols.
A common use case we saw early on was the need to connect multiple models into pipelines, with reproducible, consistent protocols that can replace physical experiments. Once we became the place to build internal tools for computational science, our users started asking if they could onboard their own models to the platform.
From there, we now support fine-tuning, building UIs for arbitrary docker containers, connecting to wet lab data sources and more!
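To make the pipeline idea concrete, here's a rough sketch of chaining two models, where one job's outputs feed the next. The client, method names, and model ids are hypothetical, not our actual API:

```python
# Hypothetical client sketch: method names and model ids are illustrative,
# not Tamarind's actual API.
class Client:
    def submit(self, model: str, inputs: dict) -> str:
        """Queue a job; returns a job id."""
        raise NotImplementedError

    def wait(self, job_id: str) -> dict:
        """Block until the job finishes; returns its outputs."""
        raise NotImplementedError

def design_then_fold(client: Client, backbone_pdb: str) -> dict:
    # Step 1: design sequences for a given backbone (e.g. ProteinMPNN).
    design = client.wait(client.submit("proteinmpnn", {"structure": backbone_pdb}))
    # Step 2: fold the top designed sequence to validate the structure.
    return client.wait(client.submit("alphafold2", {"sequence": design["sequences"][0]}))
```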
Reach out to me at deniz[at]tamarind.bio if you're interested in our work; we're hiring! Check out our product at https://app.tamarind.bio and let us know if you have any feedback on how we can better support the way the biotech industry uses AI today.
My first instinct was to use Slurm or AWS Batch, but we started having problems once we tried to go multi-cloud. We're also optimizing for onboarding an arbitrary codebase as fast as possible, so building a custom structure natively compatible with our containers (which are now built automatically from Linux machines with the relevant models deployed) has been helpful.
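As a toy illustration of the CPU/GPU split (a sketch of the idea, not our production scheduler): CPU stages like MSA generation and GPU inference live on separate worker pools, and finishing a CPU stage enqueues the GPU stage.

```python
import queue
import threading

# Toy sketch of the CPU/GPU split; not our production scheduler.
cpu_jobs: queue.Queue = queue.Queue()   # e.g. MSA generation, input parsing
gpu_jobs: queue.Queue = queue.Queue()   # single-GPU inference calls

def worker(jobs: queue.Queue) -> None:
    while True:
        job = jobs.get()   # blocks until work is available
        job()              # a real job runs minutes to hours
        jobs.task_done()

# Pools scale independently: many cheap CPU workers, one worker per GPU.
for pool, n in [(cpu_jobs, 8), (gpu_jobs, 2)]:
    for _ in range(n):
        threading.Thread(target=worker, args=(pool,), daemon=True).start()

# A job's CPU preprocessing runs first and enqueues its GPU stage on
# completion, so GPUs never sit idle waiting on CPU-bound work.
cpu_jobs.put(lambda: gpu_jobs.put(lambda: print("inference done")))
cpu_jobs.join()
gpu_jobs.join()
```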
What were the biggest challenges in getting major pharma companies on board? In what ways do you think it was the same as or different from previous generations of YC companies (like Benchling)?
Some of the same problems exist: large enterprises don't want to process their un-patented, future billion-dollar drug through a startup, because leaking data could destroy 10,000 times the value of the product being bought.
Pharma companies are especially not used to buying products as opposed to research services, and there are historical issues with the industry being underserved by high-quality software, so building custom things internally is kind of a habit.
But I think the biggest unlock was just that the tools are actually working as of a few years ago.
If you look at the recent research on ML/AI applications in biology, most of the work has not provided any tangible benefit for improving the drug discovery pipeline (e.g. clinical trial efficiency, drugs with low ADR/high efficacy).
The only areas showing real benefit have been off-the-shelf LLMs for streamlining informatic work, and protein folding/binding research. But protein structure work is arguably a tiny fraction of the overall cost of bringing a drug to market, and the space is massively oversaturated right now with dozens of startups chasing the same solved problem post-AlphaFold.
Meanwhile, the actual bottlenecks—predicting in vivo efficacy, understanding complex disease mechanisms, navigating clinical trials—remain basically untouched by current ML approaches. The capital seems to be flowing to technically tractable problems rather than commercially important ones.
Maybe you can elaborate on what you're seeing? But from where I'm sitting, most VCs funding bio startups seem to be extrapolating from AI success in other domains without understanding where the real value creation opportunities are in drug discovery and development.
So both things can be true: the more important bottlenecks remain, but progress on discovery work has been very exciting.
Runs vary significantly between models/protocols: some generative models can take several hours, while others run in a few seconds. We have tools that screen against databases if the goal is to find an existing molecule to act against the target, but often people will import an existing starting point and modify it, or design completely novel molecules on the platform.
We do let people onboard their own models too. Those users just see a separate tab for their org, which is where all the scripts, Docker images, and notebooks their developers built interfaces for live on Tamarind.
I would say the primary concerns were:
dependency issues: consuming a model takes more than model weights (Multiple Sequence Alignment generation needs to be split out into its own always-on server, and so on), so it's more convenient if the inputs and outputs are hardened interfaces, with each model kept in its own environment
Our general finding in BioML is that the models are not at all standardized, especially compared to the image diffusion world, for example, so treating each one with its own (often weird) dependencies helped us get more tools out quicker.
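Concretely, the wrapper around each model can stay very thin: the container bundles the model's own dependencies, and only a files-in/files-out contract is shared. A rough sketch (the image name and output filename are hypothetical):

```python
import json
import subprocess
from pathlib import Path

def run_model(image: str, job_dir: Path) -> dict:
    """Run one model inside its own container. Only the mounted job
    directory and a JSON in/out contract are shared across models."""
    subprocess.run(
        ["docker", "run", "--rm", "--gpus", "all",
         "-v", f"{job_dir}:/job",   # hardened interface: files in, files out
         image],                    # each image carries its own dependencies
        check=True,
    )
    return json.loads((job_dir / "output.json").read_text())

# Hypothetical usage; the image tag is illustrative:
# result = run_model("registry.example.com/alphafold2:latest", Path("/jobs/1234"))
```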
We actually did have this available early on. Our rationale for structuring it differently now is that there's a lot of diversity in how people use us: in some cases, a twenty-person biotech will consume more inference than an org of several hundred people. Each tool has very different compute requirements, and people may not be sure which model exactly they'll be using. Basically, we weren't able to let people calculate usage, annual commitment, and integration/security requirements all in one place.
We do have a free tier, which tends to give a decent estimate of usage hours, and a form you can fill out so we can get back to you with a more precise price.
I think most large companies have similar expectations around security requirements, so once those are resolved most IT teams are on your side. We occasionally do some specific things like allowing our product to run in a VPC on the customer's cloud, but I imagine this is just what most enterprise-facing companies do.