TheP1000's comments | Hacker News

I just implemented Incident Manager org-wide :(

I knew the service was rough, but it had the right building blocks plus Cfn/CDK support, and it has been working well.

My lack of trust in AWS is increasing; this feels like a Google move.


The API Gateway timeout increase has been nice.


It was always there, but it required much more effort to get done (document your use case and traffic levels, then work with your TAM to get the limit raised).


I don't see that in this post.

I just started working with a vendor who has a service behind API Gateway. It is a bit slow(!) and times out at 30 seconds. I've since modified my requests to fetch the dataset in chunks, to keep each request under the timeout.
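
Something like this sketch (endpoint, params, and page size are made up, not the vendor's actual API):

    // Hypothetical chunking sketch: each request stays small enough to
    // finish well under the 30-second gateway timeout.
    const PAGE_SIZE = 500; // assumed page size

    async function fetchAll(ids: string[]): Promise<unknown[]> {
      const results: unknown[] = [];
      for (let i = 0; i < ids.length; i += PAGE_SIZE) {
        const chunk = ids.slice(i, i + PAGE_SIZE);
        const res = await fetch(
          `https://vendor.example/api/items?ids=${chunk.join(',')}`,
        );
        if (!res.ok) throw new Error(`Request failed: ${res.status}`);
        results.push(...((await res.json()) as unknown[]));
      }
      return results;
    }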

Has this changed? Is 30 secs the new or the old timeout?



Is this really xenophobia?

China does not provide equal access to its markets, so why should we provide it to ours?

And this specifically pertains to a loophole that would be closed for everyone...


That's the point. It's not. But it was painted as such because it made for great (cough cough) "journalism".


For anything where we inject secrets via env vars (which is really only supported by ECS, maybe EKS?), it is easy to add a Lambda that kicks off a nightly ECS restart. It's easier if you are already using the AWS CDK for tooling.
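
One way to do the restart Lambda with the v3 SDK (cluster and service names are placeholders), triggered nightly by an EventBridge schedule:

    // Forcing a new deployment rolls all tasks, which re-reads the
    // secrets injected via env vars.
    import { ECSClient, UpdateServiceCommand } from '@aws-sdk/client-ecs';

    const ecs = new ECSClient({});

    export const handler = async (): Promise<void> => {
      await ecs.send(new UpdateServiceCommand({
        cluster: 'my-cluster',    // placeholder
        service: 'my-service',    // placeholder
        forceNewDeployment: true, // restart without a task-def change
      }));
    };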

The purist in me thinks restarts are a hack, but the pragmatist has been around long enough to embrace the simplicity.

Adding another dependency/moving piece that AWS could drop support for, or that could just break, also steers me away from this.

For Lambda, processes should be getting swapped out fast enough, and you normally load secrets only during a cold start. I could see some argument around improving cold-start performance, but that would need testing.
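
E.g., the usual pattern is to load once at module init, so it runs on the cold start only (secret name is a placeholder):

    // Module scope executes once per cold start, not per invocation.
    import {
      SecretsManagerClient,
      GetSecretValueCommand,
    } from '@aws-sdk/client-secrets-manager';

    const sm = new SecretsManagerClient({});
    const secret = (await sm.send(
      new GetSecretValueCommand({ SecretId: 'my-app/db-password' }), // placeholder
    )).SecretString;

    export const handler = async () => {
      // every warm invocation reuses `secret` without refetching
    };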

So, maybe this is to save a few cents?


Agreed. I would imagine the previous approach of forced upgrades ended up burning lots of customers in worse ways than just their pocketbook.


Is the above still an issue with HTTP/2/3?

edit: From the article:

> To work around the limitation you have to use HTTP/2 or HTTP/3, with which the browser will only open a single connection per domain and then use multiplexing to run all data through a single connection.


No. If you can enable TLS and HTTP/2 or 3, you are technically using only a single browser connection, onto which multiple logical connections can be multiplexed.

I think the article calls this out. There is still a limit on the number of logical connections, but it's an order of magnitude larger.
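
For reference, a minimal Node sketch of serving HTTP/2 over TLS, where each request becomes a stream multiplexed onto the one shared connection (cert paths are placeholders):

    import http2 from 'node:http2';
    import fs from 'node:fs';

    const server = http2.createSecureServer({
      key: fs.readFileSync('key.pem'),   // placeholder paths
      cert: fs.readFileSync('cert.pem'),
    });

    server.on('stream', (stream) => {
      // A stream on the existing connection, not a new socket.
      stream.respond({ ':status': 200, 'content-type': 'text/plain' });
      stream.end('hello over a multiplexed stream');
    });

    server.listen(8443);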


As others have mentioned, moving to per-tenant databases can really simplify things at scale and doesn't leave a massive amount of engineering complexity and debt in its wake.

I feel sorry for the team managing this 5 years from now.


Moving to a per-tenant database sounds like even more work to me than moving to shards. Moving to per-tenant means rewriting _everything_ - moving to shards has you rewriting a lot less.


What do you mean by "rewriting everything"? Or maybe your definition of "per-tenant database" is different from mine. In our product, it's just a small layer which routes requests to the target organization's DB. When an organization is created, we create a new DB. Most of the application code has no idea there are different DBs under the hood.

There are logical DBs (folders/files on a single physical server), and there are a few physical servers. We're currently at the stage where the first physical server hosts most of the smaller organizations, and the other physical servers are usually dedicated servers for larger clients with high loads.


If you didn't write your application with multi-tenancy in mind from the start, I would expect you would need to review almost every line of code that touches a database to make a transition like this.


In our code, the only DB-related piece that is aware of multi-tenant databases is the getDBConnection(accountId) function. Once you have the connection, you execute the exact same SQL queries as before. The function is hidden deep inside the framework/infrastructure layer, so application code is completely unaware of it.
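
Roughly like this sketch (the directory lookup is simplified, and the names are assumptions, not our actual code):

    import { Pool } from 'pg';

    // accountId -> connection details, e.g. loaded from a central directory
    const tenantDirectory = new Map<string, { host: string; database: string }>();
    const pools = new Map<string, Pool>();

    function getDBConnection(accountId: string): Pool {
      let pool = pools.get(accountId);
      if (!pool) {
        const target = tenantDirectory.get(accountId);
        if (!target) throw new Error(`Unknown account: ${accountId}`);
        pool = new Pool({ host: target.host, database: target.database });
        pools.set(accountId, pool);
      }
      // Callers run the exact same SQL as before, just on this pool.
      return pool;
    }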


This is possible to do, but it's a lot of engineering. You can provide the experience of a single DB while each tenant is placed in their own dedicated Postgres compute. This lets the application stay the same while tenants are moved to independent computes (you can even move only a few tenants and leave the rest on a shared Postgres compute).


Exactly this. But at this point, I don't even want to give them advice; I don't really like their service. I like Lunacy more.


Yes, GoDaddy does this.


If you want to leverage cheap spot, use the us-east-2 (Ohio) region. Prices are typically half of what you see in us-east-1.

Also, it really helps to analyze at the AZ level. Certain AZs lack instance types or have very low spot availability, and, contrary to recommended best practice, reducing the number of AZs can sometimes be beneficial (I am looking at you, eu-central-1a).

While the lowest-price strategy sounds nice, it can be really messy in terms of spot interruption rate. It is much better to set a max price and choose capacity-optimized with as many instance types as possible.
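
With EC2 Fleet that strategy looks something like this sketch (launch template name, prices, and instance types are placeholders):

    import { EC2Client, CreateFleetCommand } from '@aws-sdk/client-ec2';

    const ec2 = new EC2Client({ region: 'us-east-2' });

    await ec2.send(new CreateFleetCommand({
      Type: 'request',
      SpotOptions: {
        // Far fewer interruptions than the lowest-price strategy.
        AllocationStrategy: 'capacity-optimized',
      },
      LaunchTemplateConfigs: [{
        LaunchTemplateSpecification: {
          LaunchTemplateName: 'my-template', // placeholder
          Version: '$Latest',
        },
        // Broad instance diversity gives the allocator deep pools to pick from.
        Overrides: [
          { InstanceType: 'm5.large',  MaxPrice: '0.05' },
          { InstanceType: 'm5a.large', MaxPrice: '0.05' },
          { InstanceType: 'm6i.large', MaxPrice: '0.05' },
        ],
      }],
      TargetCapacitySpecification: {
        TotalTargetCapacity: 10,
        DefaultTargetCapacityType: 'spot',
      },
    }));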


> eu-central-1a

FYI, AZ names are not universal. Your eu-central-1a might be someone else's eu-central-1b.


This actually depends on the region. Amazon stopped randomizing AZ names in new regions quite a while ago, while also offering AZ IDs as a guaranteed identifier in all regions.
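
You can map names to the stable AZ IDs yourself, e.g.:

    import { EC2Client, DescribeAvailabilityZonesCommand } from '@aws-sdk/client-ec2';

    const ec2 = new EC2Client({ region: 'eu-central-1' });
    const { AvailabilityZones } = await ec2.send(
      new DescribeAvailabilityZonesCommand({}),
    );

    for (const az of AvailabilityZones ?? []) {
      // ZoneName (eu-central-1a) can differ per account; ZoneId (e.g. euc1-az2) does not.
      console.log(`${az.ZoneName} -> ${az.ZoneId}`);
    }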


The automatic region thing is problematic for many companies.

I would much rather be able to explicitly choose this and know that customers' data is where I told them it would be.


From the blog:

> ... we know that data locality is important for a good deal of compliance use cases. Jurisdictional restrictions will allow developers to set a jurisdiction like the ‘EU’ that would prevent data from leaving the jurisdiction.

It's coming: we understand the importance of data locality & residency requirements.
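
Purely illustrative, but assuming this lands in Durable Objects, setting a jurisdiction might look something like this (binding name and exact API are illustrative only, not a commitment):

    export default {
      async fetch(request: Request, env: { COUNTER: DurableObjectNamespace }) {
        // Ask for an object whose data must stay within the EU jurisdiction.
        const id = env.COUNTER.newUniqueId({ jurisdiction: 'eu' });
        return env.COUNTER.get(id).fetch(request);
      },
    };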

(I work at CF)

