The System Shock 1 remaster in particular is excellent. It's not just a graphical improvement: they improved the controls, the inventory interface, and a host of other legacies from DOS-era gaming that did not age well.
It was well received after launch, yes. There was plenty of screeching about "not being true to the original", "this looks ugly", and "this looks wrong" on the way there, though. Just like for this.
Especially for the first remake.
This is why I say the screeching right now based on a trailer means nothing. "Fans" always get mightily offended if someone touches their childhood favorites.
You don't have to be a fan of the game to see the trailer and think it looks bad. I'm certainly not a Deus Ex fan. I played it just once, years and years ago. Yet when they showed the trailer during State of Play I was shocked at how bad it was. It's clearly just some texture upscaling and an update to the lighting. The original looks ugly and this remaster manages to look uglier.
An example of a remaster done right was Halo 1 (which is actually quite an old remaster at this point). They threw a new graphics engine on top but also remodeled and retextured everything. That's what I expect out of a proper remaster.
While I enjoyed CE:A, it had issues. The big one was the loss of bump-mapped textures, which made everything look flat; the bump maps only show up when you use a flashlight.
Yeah, I've been trying it recently and I'm not entirely convinced I want to keep using it.
My biggest annoyance at the moment (and this may be me missing something), is that I have two directories: "thing" and "thing-api". I'm doing work in "thing" much more often than in the "thing-api", but whenever I run "z thing", it takes me to "thing-api" first, and I have to "z thing" again to get to where I wanted to go. It ends up being more effort than if I'd just tab-completed or history searched a plain cd command.
Perhaps helpful: there's also a `zi` command, which prompts you with a list of all matches before changing directories. Personally there are only a few directories where I need it, and I just memorize using zi instead of z for those.
However, I agree z should ideally have some syntax like `thing$` to denote a full directory name instead.
> Yeah, I've been trying it recently and I'm not entirely convinced I want to keep using it.
> My biggest annoyance at the moment (and this may be me missing something), is that I have two directories: "thing" and "thing-api". I'm doing work in "thing" much more often than in the "thing-api", but whenever I run "z thing", it takes me to "thing-api" first, and I have to "z thing" again to get to where I wanted to go. It ends up being more effort than if I'd just tab-completed or history searched a plain cd command.
AFAIK the z command does take frequency into account (or was it most recent visit?). However, to avoid going into thing-api instead of thing, I believe you just type thing/ (i.e. add the slash) and z will take you to thing. (That obviously doesn't work with tab completion, though.)
I found that after some time I have gotten so used to z (which I aliased to cd) that I wouldn't want to live without it.
The aha moment for me was typing a space after the characters I'm searching for, then hitting tab. You then get the list of options ranked (and a nice view showing the contents of each folder).
I wrote a shell keybinding that presents me with the candidates using fzf (in rank order). This way I can see which one it will go to and pick the "correct" one if need be. It's blazing fast.
The "Optimized Tarball Extraction" confuses me a bit. It begins by illustrating how other package managers have to repeatedly copy the received, compressed data into larger and larger buffers (not mentioning anything about the buffer where the decompressed data goes), and then says that:
> Bun takes a different approach by buffering the entire tarball before decompressing.
But it seems to sidestep _how_ it does this any differently than the "bad" snippet the section opened with (presumably it checks the Content-Length header when it's fetching the tarball or something, and can assume the size it gets from there is correct). All it says about this is:
> Once Bun has the complete tarball in memory it can read the last 4 bytes of the gzip format.
Then it explains how it can pre-allocate a buffer for the decompressed data, but we never saw how this buffer allocation happens in the "bad" example!
> These bytes are special since [they] store the uncompressed size of the file! Instead of having to guess how large the uncompressed file will be, Bun can pre-allocate memory to eliminate buffer resizing entirely
Presumably the saving is in the slow package managers having to expand _both_ of the buffers involved, while Bun preallocates at least one of them?
I think my actual issue is that the "most package managers do something like this" example code snippet at the start of [1] doesn't seem to quite make sense - or doesn't match what I guess would actually happen in the decompress-in-a-loop scenario?
As in, it appears to illustrate building up a buffer holding the compressed data that's being received (since the "// ... decompress from buffer ..." comment at the end suggests what we're receiving in `chunk` is compressed), but I guess the problem with the decompress-as-the-data-arrives approach in reality is having to re-allocate the buffer for the decompressed data?
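For what it's worth, the ISIZE part is easy to demonstrate on its own. Here's a minimal sketch using plain zlib from Node/Bun; this is just an illustration of the idea, not Bun's actual installer code:

```ts
import { gzipSync, gunzipSync } from "node:zlib";

// The last 4 bytes of a gzip stream are the ISIZE footer: the uncompressed
// length modulo 2^32, stored little-endian.
function gzipUncompressedSize(gz: Buffer): number {
  return gz.readUInt32LE(gz.length - 4);
}

const original = Buffer.from("npm package contents ".repeat(1000));
const gz = gzipSync(original);

// With the whole stream buffered, the output size is known before inflating,
// so one exact-sized allocation can replace the grow-and-copy loop.
console.log(gzipUncompressedSize(gz)); // 21000
console.log(gunzipSync(gz).length);    // 21000
```

(Node's gunzipSync still allocates its own output here; the point is only where the size comes from. A lower-level inflate, like the native one a package manager would use, can write straight into a buffer pre-sized from ISIZE. And since ISIZE is stored mod 2^32, it's only a hint for archives over 4 GiB.)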
Or alternatively (assuming that's true) he fired the people who thought about what they commit and kept those whose commit logs look like: "push feature WiP", "fix", "more fixes", "push", "maybe this works?"...
Ironically, those may have been the staff with the most institutional knowledge. Seeing people argue, here of all places, that LOC or commit frequency == institutional knowledge is … unexpected. New hires committing “whitespace cleanup” != institutional knowledge.
Someone had to actually write all that code and it inevitably shows up in the stats. People who work on the code most tend to know it the most. Although people in non-coding roles sometimes prefer to deny it.
Sure, there had to be some frequent but low-impact committers. But implying that people with the lowest amount of code contribution must have more impact is ridiculous.
I mean, a staff engineer who stopped committing a couple of years ago? Yeah, could be burnout, or could be some major contribution that's not in the stats. OTOH, an IC in their second year in the position who hasn't pushed a single line? Nah, the institutional knowledge is safe without them.
> The POST in the README is going to send the params in the request body "url form encoded" like a form in a web page.
Is there a different POST request in the README, or are you saying that this example is going to send the "user" and "password" params in the request body?
That seems really surprising to me - how would you then send a POST request that includes query string parameters? The documentation on form parameters [1] suggests there's an explicit syntax for sending form-encoded request parameters.
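Not the tool from the README, but for illustration, here's the distinction in plain fetch (the endpoint and values are made up):

```ts
const params = new URLSearchParams({ user: "alice", password: "hunter2" });

// 1) Form-encoded in the request body, i.e. "url form encoded" like a form
//    in a web page:
await fetch("https://example.com/login", {
  method: "POST",
  headers: { "Content-Type": "application/x-www-form-urlencoded" },
  body: params.toString(), // user=alice&password=hunter2 travels in the body
});

// 2) As query string parameters: still a POST, with the body left empty or
//    free for something else:
await fetch(`https://example.com/login?${params.toString()}`, {
  method: "POST",
});
```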
I think the parent was agreeing with you. If the “local” SSDs _weren’t_ actually local, then presumably they wouldn’t need to be ephemeral since they could be connected over the network to whichever host your instance was launched on.
Exactly! And now feature X and the feature flag that governs it are in your code base forever.
In my opinion this all gets back to the way we build product and the expectations we have for our product managers. I have no doubt that their jobs are difficult in many ways, but the lack of actual focus on the product, specifically as it relates to customer sentiment, always strikes me as lazy, especially when that data collection is basically passed off to the engineers.