thewisenerd's comments | Hacker News

discussed a couple of days ago: https://news.ycombinator.com/item?id=46191993

AWS introduces Graviton5 - the company's most powerful and efficient CPU (14 comments)


the custom error page is configurable at the domain (zone) level,

which sometimes gets annoying because branding can differ across subdomains.

https://developers.cloudflare.com/rules/custom-errors/edit-e...


> Error Pages do not apply to responses with an HTTP status code of 500, 501, 503, or 505. These exceptions help avoid issues with specific API endpoints and other web applications. You can still customize responses for these status codes using Custom Error Rules.

From that page ;)


previously discussed here: https://news.ycombinator.com/item?id=46064571

Migrating the main Zig repository from GitHub to Codeberg - 883 comments


Didn't know about Codeberg and can't even access it... Is it https://codeberg.org/ ??

That is correct. It is down quite a bit. https://status.codeberg.org/status/codeberg

92% uptime? What do you do the other 8% of the time? Do you just invoke git push in a loop and leave your computer on?

You keep working since Git is decentralized.

You can also run a Forgejo instance (the software that powers Codeberg) locally - it is just a single binary that takes a minute to set up - and set up a local mirror of your Codeberg repo with code, issues, wiki, etc., so you still have access to them while Codeberg is down (though you'll have to update them manually later).
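
for the "update them manually" part: Forgejo keeps Gitea's API, so you can trigger a pull-mirror sync from a script. a rough sketch (the mirror-sync endpoint and token header are from the Gitea docs as i remember them, so treat both as assumptions):

    # trigger a pull-mirror sync on a local Forgejo instance;
    # assumes the repo is already configured as a pull mirror of
    # its Codeberg counterpart, and OWNER/REPO/TOKEN are yours
    import urllib.request

    OWNER, REPO = "me", "myrepo"  # hypothetical names
    TOKEN = "..."                 # a Forgejo API token

    req = urllib.request.Request(
        f"http://localhost:3000/api/v1/repos/{OWNER}/{REPO}/mirror-sync",
        method="POST",
        headers={"Authorization": f"token {TOKEN}"},
    )
    with urllib.request.urlopen(req) as resp:
        print(resp.status)  # 200 -> sync job queued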


I hope Codeberg is able to scale up to this surge in interest, but

> it is just a single binary that takes a minute to set up - and set up a local mirror of your Codeberg repo with code, issues, wiki, etc., so you still have access to them

is really cool! Having a local mirror also presumably gives you the means to build tools on top - to group, navigate, and view issues however works best for you - which could make that side of the workflow much easier.

> you'll have to update them manually later

What does the manually part mean here? Just that you'll have to remember to do a `forgejo fetch` (or whatever equivalent) to sync it up?


As discussed elsewhere in this thread: They're under DDoS, and have been very public about this fact.

_if_ you're using ubuntu,

there's the CVE tracker you can use to ~argue~ establish that the versions you're using either aren't affected or have been patched.

https://ubuntu.com/security/cves

https://ubuntu.com/security/CVE-2023-28531
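
the tracker also has a JSON API if you want to script the check; a rough sketch (the cves.json endpoint, its params, and the response shape are from memory, so treat them as assumptions):

    # list recent CVEs that Ubuntu tracks for a package
    import json
    import urllib.request

    url = "https://ubuntu.com/security/cves.json?package=openssh&limit=5"
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)

    # assumed response shape: {"cves": [{"id": ..., "priority": ...}]}
    for cve in data.get("cves", []):
        print(cve.get("id"), cve.get("priority"))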


that said, we've also had the same auditor ask us to remove the openssh version from the banner you get when you telnet to port 22 (which, per RFC 4253, is not possible)
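
you can see why with a bare socket read - per RFC 4253 (section 4.2), the server must send its identification string before key exchange even begins, so there's nothing to "remove". a minimal sketch:

    # read the SSH identification string (RFC 4253, section 4.2);
    # the server has to send it first, which is why the version
    # banner can't be hidden (only the text after "SSH-2.0-" could
    # change, and openssh compiles its version string in)
    import socket

    with socket.create_connection(("example.com", 22), timeout=5) as s:
        banner = s.recv(256).decode("ascii", errors="replace").strip()
        print(banner)  # e.g. SSH-2.0-OpenSSH_9.6p1 Ubuntu-3ubuntu13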

so ymmv


there's this video essay on what makes dua lipa's podcasts good: https://www.youtube.com/watch?v=QN1rULxGHCA


given the image in the post is specifically of the azure portal, the following is a very real notification message from the same:

Deleting load balancer '[object Object]'


Please don't get me started on Azure!


i see the complaints around URL length limits and i raise you..

storing the entire state in the hash component of the URL

http://example.com/foo#abc

since the fragment is entirely client-side (it's never sent to the server), you can pretty much bypass all of the limits.

one place i've seen this used is the azure portal.. (payload | gzip | b64) make of that what you will.
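
the pipeline itself is tiny; a minimal sketch of the round trip in Python (in the browser it'd be the same shape, just with CompressionStream and base64 in JS):

    # round-trip client-side state through (json | gzip | b64),
    # the kind of payload that ends up in the URL fragment
    import base64
    import gzip
    import json

    state = {"filters": ["a", "b"], "page": 3}

    packed = base64.urlsafe_b64encode(
        gzip.compress(json.dumps(state).encode())
    ).decode()
    url = f"http://example.com/foo#{packed}"

    # everything after '#' never reaches the server
    fragment = url.split("#", 1)[1]
    restored = json.loads(
        gzip.decompress(base64.urlsafe_b64decode(fragment))
    )
    assert restored == state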


Except you hit limits when trying to share that URL. E.g., try pasting a URL longer than 4096 bytes into Signal or WhatsApp: they won't render it as clickable.


they recently had an incident with Front Door reachability; wonder if it's back.

QNBQ-5W8


i guess that depends on what you mean by url-safe

uuidv7 (-) and nanoid (_-) have special characters which urlencode to themselves.

none are small enough that you'd want someone reading them out over the phone; but from a character-legibility standpoint, ulid makes more sense.
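
easy to check which characters survive percent-encoding untouched:

    # '-' and '_' are in the RFC 3986 "unreserved" set, so they
    # percent-encode to themselves; anything outside it is escaped
    from urllib.parse import quote

    for ch in "-_~.+/=":
        print(repr(ch), "->", quote(ch, safe=""))
    # '-' -> -    '_' -> _    '~' -> ~    '.' -> .
    # '+' -> %2B  '/' -> %2F  '=' -> %3D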


mirroring all the comments about this _needing_ to be an extension..

in theory, one should be able to extract the "rule" definitions [1] and run them against a plain conn str, instead of this _needing_ to be an extension (rough sketch below the footnote).

in practice though, query-plan analysis and missing-index detection are the bigger use-case, since it's bad queries that take down the db.. and i see no rules here to help with that.

[1] https://github.com/pmpetit/pglinter/blob/9a0c427fac14840a7d6...
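
to illustrate: a rule-style check like "tables without a primary key" needs nothing more than a catalog query over a conn str. a rough sketch with psycopg (the rule and query here are mine for illustration, not lifted from pglinter's definitions):

    # run one lint-style rule over a plain connection string:
    # list ordinary tables that have no primary key
    import psycopg  # pip install psycopg

    SQL = """
    select n.nspname, c.relname
    from pg_class c
    join pg_namespace n on n.oid = c.relnamespace
    where c.relkind = 'r'
      and n.nspname not in ('pg_catalog', 'information_schema')
      and not exists (
        select 1 from pg_constraint x
        where x.conrelid = c.oid and x.contype = 'p'
      )
    """

    # "postgresql://localhost/mydb" is a placeholder conn str
    with psycopg.connect("postgresql://localhost/mydb") as conn:
        for schema, table in conn.execute(SQL):
            print(f"no primary key: {schema}.{table}")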

