Sophistifunk's comments

The idea that what's needed is for these alternative platforms to switch to "free with ads" is amazingly short-sighted and disheartening. Everything bad YouTube does is driven by this business model. Switching to it might make a few people rich at the top of these alternative platforms, but it won't make anything better for any user or creator.


Race cars have heaps of safety systems not present in road cars. They don't have ABS and traction control because those don't actually increase safety on track with a professional driver. SRS airbags likewise offer no additional safety when you're in a six-point harness and wearing a helmet and neck brace.


The wasted time and money in construction comes entirely from two places: a small percentage of crooked builders (and their local council mates), and the bureaucracy that is trying to protect the citizens from same. Big brother puts in a lot of hoop-jumping standards and supposed checks and balances that end up creating massive delays and costs for the consumer, but the actual standards (while usually quite sensible) are easily sidestepped by the crooked builders. So the war continues, and the overhead constantly increases under the usual expansion-only government regulation ratchet.

None of these things are susceptible to "AI" and other such automation. We have had prefab construction for decades.


I very much enjoy reading and writing TS code. What I don't enjoy is the npm ecosystem (and accompanying mindset), and what I can't stand is trying to configure the damn thing. I've been doing this since TSC was first released, and just the other day I wasted hours trying to make a simple ts-node command line program work with file-extension-free imports and no weird disagreements between the ts-node runner and the language server used by the editor.

And then gave up in disgust.

Look, I'm no genius, not by a long shot. But I am both competent and experienced. If I can't make these things work just by messing with it and googling around, it's too damned hard.


I encountered this trying out PureScript. Looks like a good language, but I gave up after a couple of trips through npm, bower, yarn...


Fully agree. Try bun.


Or deno.


Sounds like a Zig-shaped hole to me ;-)


They complain that Go is too low-level for their needs. Zig, with its explicit allocators, is definitely even lower-level.

Rust seems low-level too, but it isn't the same. It allows building powerful high-level interfaces that hide the complexity from you. E.g., RAII eliminates the need for an explicit `defer` that can be forgotten.
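
To make the RAII point concrete, here's a minimal std-only sketch of mine (TempFile, write_scratch and the file name are invented for illustration, not anything from the thread):

    use std::fs::File;
    use std::io::{self, Write};

    // A guard type: the cleanup lives in Drop, so callers can't forget it.
    struct TempFile {
        file: File,
        path: std::path::PathBuf,
    }

    impl Drop for TempFile {
        fn drop(&mut self) {
            // Best-effort cleanup; runs on every exit path (early returns,
            // panics) with no `defer` needed at the call site.
            let _ = std::fs::remove_file(&self.path);
        }
    }

    fn write_scratch(data: &[u8]) -> io::Result<()> {
        let path = std::path::PathBuf::from("scratch.tmp");
        let mut tmp = TempFile { file: File::create(&path)?, path };
        tmp.file.write_all(data)?; // an early `?` return still triggers Drop
        Ok(())
    }

    fn main() -> io::Result<()> {
        write_scratch(b"hello")
    }

The cleanup is attached to the value rather than to the call site, which is exactly what removes the "forgot the defer" failure mode.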


True, but I think the "low-level" complaint against Go in the article was just referring to all the stupid repetitive ceremony required for error handling, which Zig mostly skips over.


Fair enough. That's what they seem to be saying.

But then I want to chime in and argue that the repetitive syntax isn't even close to being the main problem with Go: https://home.expurple.me/posts/go-did-not-get-error-handling...


So, while on that subject: does Zig get error handling right?


It does seem to: https://pedropark99.github.io/zig-book/Chapters/09-error-han...

However, errors do not seem to be commonly wrapped, tagged, or contextualized the way they are in Rust. That trade-off weighs lower verbosity as more important than heavily structured error handling, which is definitely an interesting approach.
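
For contrast, here's a rough std-only Rust sketch of that wrapping/contextualizing style (ConfigError, load_config and "app.toml" are made-up names for illustration, not from the linked book):

    use std::fmt;
    use std::fs;
    use std::io;

    // A domain error that wraps the underlying io::Error and tags it with context.
    #[derive(Debug)]
    enum ConfigError {
        Read { path: String, source: io::Error },
    }

    impl fmt::Display for ConfigError {
        fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
            match self {
                ConfigError::Read { path, source } => {
                    write!(f, "failed to read config at {path}: {source}")
                }
            }
        }
    }

    impl std::error::Error for ConfigError {}

    fn load_config(path: &str) -> Result<String, ConfigError> {
        // The low-level error travels inside the higher-level one.
        fs::read_to_string(path).map_err(|source| ConfigError::Read {
            path: path.to_string(),
            source,
        })
    }

    fn main() {
        match load_config("app.toml") {
            Ok(contents) => println!("{} bytes of config", contents.len()),
            Err(e) => eprintln!("error: {e}"),
        }
    }

Crates like thiserror or anyhow cut the boilerplate down, but the shape is the same: the caller gets one error value that carries both the added context and the original cause.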


Idk, I'm not familiar enough with Zig to say


I think it does.


When are we done adding everything into the browser API?


Hopefully never.

Unless you loved IE6 of course, which was when Microsoft declared the web browser to be 'complete'.


When somebody creates something better.


I'm much more interested in going the other direction, in order to get a TV without all the crapware.


It's very interesting to me that we have to keep telling people this, but it hasn't become part of the "hive folk knowledge" we all seem to develop. I think DB vendors have been sleeping on an opportunity to encourage better practices.


DB vendors haven't done enough to offer ID generation as a core part of their system. Ideally "what ID do I use for this object" shouldn't even be a consideration, because of course the database should handle it. It is the system of record after all. Yet your options are pretty much limited to UUID or a basic incremental counter that fails to meet any real world production constraints.


Basic incremental counters work for most real-world production constraints. Most people are not going to create tables with 4.2 billion rows, even counting failed inserts. If you are doing that, you're at one extreme or the other: either you very much know what you are doing, or you very much do not; I have seen both in production.


What if I have multiple partitions? Replication? What if I don't want business data to be exposed due to strictly incremental counters? What if I want unique IDs across different tables?


Partitions should not impact the use of an INT PK, except that you’ll need to include the partition key in the PK, e.g. (id, created_at) if partitioning by datetime. The displayed ordering without an explicit ORDER BY may not make sense, but to be fair, there are never any guarantees about implicit order.

Replication should be fine, unless you mean active-active, in which case I suggest (a) not doing that, or (b) using interleaved chunks or a coordinator node that hands them out.

Business data exposure can be avoided (if it’s actually a problem, and not just a theoretical one) in a variety of ways; two of the most common are:

* Don’t use the id in the slug.

* Have an iid column that’s random and exposed, while keeping the integer as the PK.

If you need unique IDs across tables, then I question your use of an RDBMS, because you aren’t really making use of the relational aspect.


I could not have said it better myself. I would also add that I keep the slug as an entirely separate column (or "user-visible id") that the user can change. I've had too many systems do things like "invoice id is auto-generated", only to have a customer come back and say "the invoice id has to be this or the auditors will scream!" Don't expose the internals of your database to your users and you won't have a bad time.


A hundred inserts per second is going to hit 2^32 within a year and a half. I've seen that volume repeatedly. A colleague has seen this limit blow up prod. Do you really want to spend time on a project you're sure can never succeed to this extent?
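
(For the arithmetic: 100 inserts/second is about 8.64 million rows per day, and 2^32 / 8.64 million ≈ 497 days, i.e. roughly 16 months.)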


Yep, seen it many times, and I can't tell you how many thousands of times I have seen tables with UUIDv4 primary keys for "future safety" that have 12 rows.

Converting int to bigint is not a big project; I have done it more times than I can count, just like any other database evolution. I have also had customers say "oh, we'll just drop and recreate the table every year because it's just some trash data" or "oh wait, we didn't mean to create 100 rows every second every day, that's a bug and it's costing us a lot of money for something we don't care about."

There's no one size fits all in databases, but most people who don't know better don't need to design a database for scale... because their choices won't work in the long term anyway.


That table is a self-limiting problem. It's true that failure is likely, but optimizing for that outcome never adds value, and it would probably make me quit.


Because we keep telling people the opposite in academic settings, where the pkey is usually some actual data field(s) you expect to be unique. There's some point to teaching 3NF this way, but it shouldn't be taken literally.


Boeing haven't exactly been covering themselves with glory with their non-NASA work over the last decade or two, either. But I'd still go with "both" :)


An ARM computer running a BSD and X stack. What's interesting about it that makes it a "flying car", as opposed to everybody else's 140 characters?

