We'll release an Upstash Workflows adapter soon! StepKit is ultimately just an in-code API that lets you define workflows in a backend-agnostic way. We want you to define workflows that can run on Upstash, Inngest, Cloudflare... really anywhere!
Vercel Workflow Kit takes a very different approach: no step IDs (which makes its workflows worse at handling code changes), a compilation step, and stronger opinions about backends ("worlds", as they call them). Its magic admittedly makes it a little easier to get started, but that same magic causes problems once you want a mature product.
Cloudflare Workflows are actually complementary to StepKit! We'll soon release an adapter that lets you define StepKit workflows that run as Cloudflare Workflows. We have a POC in `packages/cloudflare` in our repo.
Inngest engineer here! For a little extra context, the `@stepkit/core` package is basically just an API for defining a workflow. There isn't much to it because we don't want to be overly opinionated about backend implementations!
The `@stepkit/sdk-tools` package is a set of tools for building your own StepKit SDK. The vast majority of what's in there is optional, but it's highly valuable if you want to avoid reinventing the wheel when building your own SDK.
To be fair, this is a better example of booleans being a poor fit for modeling many problems. And it's solvable without even addressing either issue (e.g., the way this is modeled in the real world, with multiple affirmatives).
I find it a bit infuriating that the official docs for pattern matching are the PEPs. Maybe that will change with the next change to the pattern-matching syntax, and I do think PEPs and language docs each serve a separate purpose, but on the other hand it's nice that the PEP is good (and current) enough to work as usage documentation.
I use Cursor and I get a lot of benefit from its autocomplete suggestions, but its composer is horrible, so I never use it. The dream of telling AI to make changes on its own hasn't arrived.
Looks like you're being downvoted, but you're right. A tiny fraction of high school students would actually care about these classes -- high school me wouldn't have.
IIRC, the lifecycle hook only prevents destruction of the resource if it needs to be replaced (e.g., a change to an immutable field). If you outright delete the resource declaration from the code, then it's destroyed. I may be misremembering, though.
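For reference, here's a minimal sketch of the lifecycle block being discussed (the resource type and names are just placeholders):

```hcl
# Placeholder resource; any resource type works the same way.
resource "aws_s3_bucket" "critical" {
  bucket = "example-critical-data"

  lifecycle {
    # Causes `terraform apply` to error out on any plan that would destroy
    # this resource, including a replacement forced by changing an
    # immutable field. It does NOT help if the whole resource block (and
    # this lifecycle block with it) is removed from the configuration.
    prevent_destroy = true
  }
}
```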
I find this statement to be technically correct, but practically untrue. Having worked in large Terraform deployments using TFE, I've seen how easy it is for a resource to get deleted by mistake.
Terraform's provider model is fundamentally broken. You cannot spin up a k8s cluster and then use the Kubernetes provider to configure it in the same workspace. You need a different workspace that imports the outputs. The net result was that we had something like five workspaces which really should have been one or two.
A seemingly inconsequential change in one of the predecessor workspaces could absolutely wreck resources in the downstream workspaces.
In such a scenario it's very easy to trigger a delete-and-replace, and for larger changes you have to inspect the plan very, very carefully. The other pain point was that I found most of my colleagues going "IDK, this is what worked in non-prod" while plans were actively destroying and recreating things; as long as the plan looked like it would execute and create whatever little thing they were working on, the downstream consequences didn't matter (I realize this is not a shortcoming of the tool itself).
This sounds like an operational issue and/or a lack of expertise with Terraform. I use Terraform (self-hosted, I guess you'd call it?) to manage not only Kubernetes clusters but Helm deployments, just fine and without the issues you are describing. This is just my honest feedback: I see complaints a lot like this in consulting, where people expect Terraform to magically solve their terrible infrastructure and automation decisions. It can't, but it absolutely provides you with the tooling to avoid what I think you are describing.
It's fair to complain that Terraform requires weird, unintuitive areas of expertise and comes with a bit of a learning curve, but it's not really fair to complain that it should prevent bad practices and inexperience from causing the issues they typically do.
Terraform explicitly recommends in the Kubernetes provider documentation that the cluster creation itself and everything else related to Kubernetes should live in different states.
> The most reliable way to configure the Kubernetes provider is to ensure that the cluster itself and the Kubernetes provider resources can be managed with separate apply operations. Data-sources can be used to convey values between the two stages as needed.
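In practice, the second stage ends up looking something like the sketch below. This assumes the cluster stage exposes the endpoint, CA certificate, and a token as outputs; the backend settings and output names here are made up.

```hcl
# Stage 2: consume the cluster details that stage 1 published as outputs.
data "terraform_remote_state" "cluster" {
  backend = "s3"
  config = {
    bucket = "example-tf-state"           # placeholder backend config
    key    = "cluster/terraform.tfstate"
    region = "us-east-1"
  }
}

provider "kubernetes" {
  host                   = data.terraform_remote_state.cluster.outputs.endpoint
  cluster_ca_certificate = base64decode(data.terraform_remote_state.cluster.outputs.ca_cert)
  token                  = data.terraform_remote_state.cluster.outputs.token
}

# Kubernetes resources live here, applied separately from the apply
# that creates the cluster itself.
resource "kubernetes_namespace" "apps" {
  metadata {
    name = "apps"
  }
}
```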
I agree with you (this is something that OpenTofu is trying to fix), but the way I do k8s provisioning in Terraform is to have one module that brings up a cluster, another to print the cluster's Kubeconfig, then, finally, another to use the Kubeconfig to provision Kubernetes resources. It's not perfect but it gets the job done most of the time.
The Google Cloud Terraform provider includes, on Cloud SQL instances, a `deletion_protection` argument that defaults to true. While it's set, the provider will fail to apply any change that would destroy that instance until you first apply a change setting the argument to false.
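Something like this, if I remember the shape of it correctly (the instance name and settings are placeholders):

```hcl
resource "google_sql_database_instance" "main" {
  name             = "example-instance"
  database_version = "MYSQL_8_0"
  region           = "us-central1"

  settings {
    tier = "db-f1-micro"
  }

  # Defaults to true. While it's true, any plan that would destroy this
  # instance fails; you have to apply a change flipping it to false
  # before a destroy can go through.
  deletion_protection = true
}
```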
That's what I expected lifecycle.prevent_destroy to do when I first saw it, but indeed it does not.
I think the previous post is talking about a resource being removed from the configuration file, rather than a command-line invocation that explicitly deletes the resource. Of course, if it's removed from the config file, presumably the lifecycle configuration was as well!
Yeah, that's a legit challenge, and it would be great if there were a better built-in solution for it (I'm fairly sure you can protect against it with policy as code via Sentinel or OPA, but then you're also having to maintain a list of protected resources).
That said, the failure mode is also a bit more than "a badly reviewed PR". It's:
* reviewing and approving a PR that removes a resource
* approving a run that explicitly states how many resources are going to be destroyed, and lists them
* (or having your runs auto-approve)
I've long theorised the actual problem here is that in 99% of cases everything is fine, and so people develop a form of review fatigue and muscle memory for approving things without actually reviewing them critically.
This is not a Terraform problem. This is your problem. Theoretically, you should be able to recreate the resource with only some downtime or a few services affected. You should centralize/separate state and have stronger protections for it.