> AWS has a long-standing issue with the ECS agent randomly disconnecting, resulting in orphaned EC2 instances that can cause traffic or deployment degradation.
> We have attempted to solve this a few ways in the past, but there were still critical edge cases falling through.
> So we bit the bullet and developed a robust, full-featured ECS cluster management solution to solve this problem once and for all.
> It's currently in private preview. To get early access before we roll it out to everyone, contact support.
I found a place elsewhere in the Flightcontrol docs where they recommend ECS+EC2. While I'm not surprised to hear about issues with ECS+EC2, given the reported issues above I don't know that I'd recommend it in my docs. Fargate is a far better option for most use cases, at least in my experience, unless you need specialized instance types, e.g. for GPU workloads.
I paid, booked a flight, etc., just to get a 360° scan and give my fingerprints in order to be able to apply for a US visitor visa (which could still be rejected, and they would keep all your information either way).
And European visas work exactly the same way. The news here is that Americans are going to lose their privileged status, and be treated like the rest of us.
Recently moved some of our background jobs from Graphile Worker to DBOS. Really recommend it for the simplicity. Took me half an hour.
I evaluated Temporal, Trigger, Cloudflare Workflows (which I'd strongly recommend against), etc., and this was the easiest to adopt incrementally. Didn't need to change our infrastructure at all. Just plugged the worker in where I had Graphile Worker.
The hosted service's UX and frontend could use a lot of work, though you don't need it to use DBOS. OTEL support was there.
They inherit all the limitations of Durable Objects (DOs). For example, if you do anything that needs more than six TCP connections, every fetch request starts failing silently because there are no more TCP connections to go through. This was a deal breaker for us. Their suggested solution was to split our code into more workflows or DOs.
You are limited to 128 MB of RAM, which means everything has to be streamed. You will end up rewriting your code around this, because many Node libraries don't have streaming alternatives for the things that spike memory usage.
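To illustrate the kind of rewrite this forces, here's a minimal sketch of streaming an upstream response through a Worker instead of buffering it. The URL is hypothetical and it assumes the standard module-style fetch handler; it's just the shape of the change, not code from our project:

```ts
export default {
  async fetch(_request: Request): Promise<Response> {
    // Hypothetical large upstream payload.
    const upstream = await fetch("https://example.com/large-export.csv");

    // Buffering (`await upstream.arrayBuffer()`) can blow past the 128 MB
    // memory limit; passing the ReadableStream through keeps memory flat.
    return new Response(upstream.body, {
      status: upstream.status,
      headers: {
        "content-type":
          upstream.headers.get("content-type") ?? "application/octet-stream",
      },
    });
  },
};
```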
The observability tab was buggy. The lifecycle chart is hard to read when you're trying to figure out when things will be evicted. There are lots of small hidden limitations. Rate limits are very low for any mid-scale application. Full Node compatibility isn't there yet (it's a work in progress), so we needed to swap out some modules.
Overall, a gigantic waste of time unless you are doing something small-scale. Just go with Restate/Upstash + Lambdas/Cloud Run if you want a simpler experience that scales in a serverless manner.
We needed checkpoints in some of our jobs that wrap the AI agent, so we could reduce cost and increase reliability (the workflow resumes from a mid-point step instead of doing a complete restart).
We had already checkpointed the agent ourselves, but then figured it's better to have a generic abstraction for the other stuff we do.
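For a rough idea of what that generic abstraction looks like, here's a minimal sketch using DBOS's TypeScript decorators. It assumes the `@dbos-inc/dbos-sdk` decorator API (`@DBOS.workflow()` / `@DBOS.step()`) and that database config comes from dbos-config.yaml or the environment; the step names and agent calls are hypothetical placeholders, not our actual code:

```ts
import { DBOS } from "@dbos-inc/dbos-sdk";

// Hypothetical example: each step's result is checkpointed, so a crashed or
// redeployed workflow resumes from the last completed step instead of
// re-running (and re-paying for) earlier agent calls.
class AgentJob {
  @DBOS.step()
  static async planTasks(input: string): Promise<string[]> {
    // ...placeholder for an LLM call that breaks the input into tasks
    return [input];
  }

  @DBOS.step()
  static async runTask(task: string): Promise<string> {
    // ...placeholder for a single agent call
    return `done: ${task}`;
  }

  @DBOS.workflow()
  static async handleRequest(input: string): Promise<string[]> {
    const tasks = await AgentJob.planTasks(input);
    const results: string[] = [];
    for (const task of tasks) {
      results.push(await AgentJob.runTask(task));
    }
    return results;
  }
}

async function main() {
  await DBOS.launch();
  await AgentJob.handleRequest("summarize yesterday's tickets");
  await DBOS.shutdown();
}

main();
```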
Temporal required re-architecting some things; its TypeScript SDK and sandbox are a bit unintuitive, so it would have been one more thing for the team to grok, plus additional infrastructure to maintain. There was a latency trade-off too, which mattered in our case.
We didn't hit any issues with it, though. Temporal's observability and UI were better than DBOS's. It's just harder to do an incremental migration in an existing codebase.
Indeed, it escalated quite quickly, somewhat under the table in the first term, but the second term had barely started and it was already all out in the open. There was some discontent (especially around the cartoon depicting it that the Washington Post declined to publish, leading their cartoonist to leave and publish it elsewhere), but nothing near the outrage there should be.
I think providing examples and sample code is better than tying your API to the AI SDK.
Because AI providers iterate on their APIs so fast, many features arrive in the AI SDK weeks or months later (support for OpenAI computer use has been pending forever, for example).
I like the current API where you can wait for an event. Along the same lines, it would be great to have an API for streaming and receiving messages where everything else is handled by the user, so they could use the AI SDK and stream the end response manually.
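Roughly what I have in mind, sketched with the AI SDK's `streamText`; the `onMessage`/`send` hook is hypothetical, just a stand-in for whatever the library would expose:

```ts
import { streamText } from "ai";
import { openai } from "@ai-sdk/openai";

// Hypothetical hook: the library hands us the incoming message and a way to
// stream tokens back, and we bring our own model via the AI SDK.
async function onMessage(userMessage: string, send: (chunk: string) => void) {
  const result = streamText({
    model: openai("gpt-4o-mini"),
    prompt: userMessage,
  });

  // Stream the end response manually instead of the library doing it for us.
  for await (const chunk of result.textStream) {
    send(chunk);
  }
}
```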
Every day I check multiple different apps for ordering food or booking a taxi, looking for the cheapest price and availability. It's a pain, and I don't like the decision fatigue.
I thought, why not automate this and turn it into an app, so I built this MCP server.
I'm not using vision at all (it's mostly a fallback); instead I parse the accessibility tree into something the LLM can understand.
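As a rough illustration (not the actual implementation), the flattening step can be as simple as walking the tree and emitting indented role/label lines; the `AXNode` shape here is a hypothetical simplification of what a real accessibility API returns:

```ts
// Hypothetical node shape for an accessibility tree; real trees from a
// platform accessibility API have many more fields.
interface AXNode {
  role: string;          // "button", "textField", "staticText", ...
  label?: string;        // accessible name, if any
  value?: string;        // current value, e.g. text field contents
  children?: AXNode[];
}

// Flatten the tree into indented "role: label = value" lines, skipping
// purely structural nodes, so the LLM sees a compact, readable outline.
function describeForLLM(node: AXNode, depth = 0): string[] {
  const lines: string[] = [];
  const text = [node.label, node.value].filter(Boolean).join(" = ");
  if (text || node.role !== "group") {
    lines.push(`${"  ".repeat(depth)}${node.role}${text ? `: ${text}` : ""}`);
  }
  for (const child of node.children ?? []) {
    lines.push(...describeForLLM(child, depth + 1));
  }
  return lines;
}

// Example usage:
// console.log(describeForLLM(rootNode).join("\n"));
```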
Lots of issues with slow AWS provisioning or missing APIs on the AWS side, so it would take hours to delete the resources they created.