
I had no idea Thief, one of my favourite games, was built with an ECS-like architecture. Two articles with more interesting details about Thief (I especially love its "temporal" CSG world model):

https://nothings.org/gamedev/thief_rendering.html

https://www.gamedeveloper.com/design/postmortem-i-thief-the-...

I skipped a fair chunk of the middle of this video as I really wanted to get to the Sketchpad discussion (starting around 1:10), which I found very valuable.

I think Casey was fairly balanced, and emphasized near the end of the talk that some of the things under the OOP umbrella aren't necessarily bad, just overused. For example, actors communicating with message passing could be a great way to model distributed systems. Just not, maybe, a game or editor. Along similar lines, I love this old post "reconstructing" OOP ideas with a much simpler take similar to what Casey advocates for:

https://gamedev.net/blogs/entry/2265481-oop-is-dead-long-liv...

But I of course enjoyed him calling out the absolutely dire state of OOP education/tutorials. I satirized this on my own blog ages ago:

https://crabmusket.net/how-i-learned-oop/

In that post I referenced Sandi Metz as an antidote to awful OOP education. I may just have to include Casey as well.


This is rhetorically fascinating.

All the concrete examples in this post refer to companies trying to prolong the problem they benefit from, but the summary at the top says "For example, the Shirky principle means that a government agency that's meant to address a certain societal issue..." They took a bunch of examples about companies and used them to imagine a problem of government.

Kelly does the same in his blog post, where he opines, without citation, that unions "inadvertently perpetuate the continuation of the problem (management) they are the solution to because as long as unions exists, companies feel they need management to offset them". Which to me is very amusing, but it's written in a style that encourages you to take it completely seriously.

Even the use of "institutions", which at least to me implies government more than the private sector, is not technically wrong, but is, I would argue, subtly misleading.

Hmmm.


As a small business owner, I'm actually keen on the benefits to other businesses that antitrust enforcement and pro-competition enforcement can have.

As a really specific example in the case of Apple, I really hope the DMA causes wider availability of browser choice on iOS so that we as a business that ships a web app can offer our customers features like notifications and other PWA benefits. Our customers are somewhat willing to switch browsers to get the best experience when using our app. But switching to Android? Not a reasonable ask from us.

Most consumers also have jobs, right? Making their lives better and easier at work, and increasing competition to give their employers more opportunity to thrive, is just as important as making their groceries cheaper.


Something that I love about A Pattern Language, and which I have seen few imitators attempt, is that it addresses a vast range of scales.

The book begins audaciously by asserting that the world should be divided into political regions of population no greater than 10 million, because "regions will not come to balance until each one is small and autonomous enough to be an independent sphere of culture".

https://patternlanguage.cc/Patterns/Independent-Regions-(1)

After this anarchist utopia, the patterns proceed to describe the distribution of towns and cities, responses to large geographic features like valleys versus hilltops, the layout of towns, suburbs, streets, sites, buildings, and rooms, all the way down to types of chairs and the half inch of trim that joins walls of different materials.

It features detours into aged care ("old people everywhere"), political economy ("self-governing workshops and offices"), culture ("dancing in the street").

Maybe the closest software work I've seen with this scope is the "blue book" on Domain-Driven Design, but even that doesn't come close to addressing the political economy of software, or the act of writing code.

And that's probably fine, but it really does put A Pattern Language in a class of its own.


The surrogate key uniquely identifies a row in your database, which is an entity just as real and significant as the car or the employee or what-have-you. Don't confuse the two!

I agree with you that having a surrogate key isn't going to save you from the reasons why natural keys can be difficult. The complexity has to go somewhere. But not having a unique identifier for each row is going to make things extra difficult.
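
To make the distinction concrete, here's a hypothetical sketch (all names invented) of a row that carries both kinds of key:

    // The surrogate key identifies the *row*; the natural key (the VIN)
    // identifies the *car*. They can change independently: a mistyped VIN
    // can be corrected without touching anything that references the row.
    interface CarRow {
      id: number;    // surrogate: unique, meaningless, owned by the database
      vin: string;   // natural: unique per car, owned by the outside world
      make: string;
      model: string;
    }

    interface ServiceRecordRow {
      id: number;
      carId: number; // references CarRow.id rather than the VIN, so fixing
                     // a bad VIN is a one-row update, not a cascade
      servicedAt: string;
    }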


> There was some pain in that realization. So many of my utopian dreams—what if we could live in a society where everyone can get the food, the housing, the healthcare, the opportunities for growth that they deserve—come from a place of wishing that we could live in a world where people are cared for.

I'd like to offer some comfort to the author on this score. Food, housing, healthcare broadly... while these are all aspects of being "cared for" by society, they aren't all care in the individual sense you describe. The food system is different from homecooked meals; the housing economy is different from the handsome breakfast nook your family DIYed into your home. We can build systems which scale and make it possible and economical for individual care to happen.


I think most responses to his posts are missing that this is the most important part:

> In practice, the only thing that makes web experiences good is caring about the user experience — specifically, the experience of folks at the margins. Technologies come and go, but what always makes the difference is giving a toss about the user.

> In less vulgar terms, the struggle is to convince managers and tech leads that they need to start with user needs. Or as Public Digital puts it, "design for user needs, not organisational convenience"

This is the most important thing. The entire article series is making this point in great detail, and also being very angry at orgs (especially public services!) that don't get this.

I think this message does get a bit buried under invective against React. Surprise surprise, if you do start with user needs, then sometimes that does lead to you building a React-based SPA. But I think Alex, from his perspective working with organisations, is looking at it from the other direction: for any given React SPA, it is unlikely that React was chosen because of user needs.


I have a map of Brugge (Bruges) from this tool printed off on my wall. It's a great concept!

> In one of his podcasts, Ezra Klein said that he thinks the “message” of generative AI (in the McLuhan sense) is this: “You are derivative.” In other words: all your creativity, all your “craft,” all of that intense emotional spark inside of you that drives you to dance, to sing, to paint, to write, or to code, can be replicated by the robot equivalent of 1,000 monkeys typing at 1,000 typewriters. Even if it’s true, it’s a pretty dim view of humanity and a miserable message to keep pounding into your brain during 8 hours of daily software development.

I think this is a fantastic point well summarised. I see people coming out of the woodwork here on HN, especially when copyright is discussed in relation to LLMs, to say that there's no difference between human creativity and what LLMs do. (And therefore of course training LLMs on everything is fair use.) I'm not here to argue against that point of view, just to illustrate what this "message" means.

I feel fairly similar to Nolan and to this day haven't really started using LLMs in a major way in my work.

I do occasionally use one when I might have previously gone to Stack Overflow. Today I asked it a mildly tricky TypeScript generic wrangling question that ended up using the Extract helper type.
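
For anyone unfamiliar, here's a minimal sketch of the kind of thing Extract does (invented types, not the actual code from my question):

    // Extract<T, U> keeps only the members of the union T that are
    // assignable to U. Here it narrows a tagged union of events so a
    // generic handler sees exactly the variant matching its tag.
    type AppEvent =
      | { type: "click"; x: number; y: number }
      | { type: "keypress"; key: string }
      | { type: "focus" };

    function on<K extends AppEvent["type"]>(
      type: K,
      handler: (event: Extract<AppEvent, { type: K }>) => void
    ): void {
      // registration elided
    }

    on("click", (e) => console.log(e.x, e.y)); // e is the click variant only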

However, I'm also feeling the joy of coding isn't quite what it used to be as I move along in my career. I really feel great about finding the right architecture for a problem, or optimising something that used to be a roadblock for users until it's hardly noticeable. But so much work can just be making another form, another database table, etc. And I am always teetering back and forth between "just write the easy code (or get an AI to generate it!)" and "you haven't found the right architecture that makes this trivial".


First, it's great to see new stuff happening in the solar space, and helping people make use of the incentives that are already there. I wish you success!

> This domain is a good fit for automation and LLMs—not to generate text, but to (1) structure unstructured documents, (2) interact with legacy government websites where there’s no API, and (3) deal with repetitive bureaucratic language.

This isn't a criticism of what you're doing, but a more general gripe/musing about the wider software and AI ecosystem. I've seen this in my own work too. I feel very unhappy that we are using complex, nondeterministic, power-hungry "intelligent" machines to solve the problem of... unstructured data. Instead of... structuring the data.

I know you can't solve that problem. But nevertheless, wouldn't it be better for society as a whole if "we" agreed to make data accessible in machine-readable ways that don't require human-like agents to piece together the mess?
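
To be concrete about what "machine-readable" could mean here: if agencies published records in some shape like this (fields invented for illustration), nobody would need a language model to reverse-engineer the data out of PDFs and legacy web forms:

    // A hypothetical structured record for a solar incentive program.
    interface IncentiveRecord {
      programId: string;
      jurisdiction: string;           // e.g. a state or utility district
      maxRebateUsd: number;
      eligibleTechnologies: string[]; // e.g. ["rooftop-solar", "home-battery"]
      applicationUrl: string;
      updatedAt: string;              // ISO 8601 date
    }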

This is a writ-large version of the joke about writing an email in bullet points, inflating it to paragraphs using an LLM, then the receiver summarising the paragraphs back to bullet points using an LLM.


> combined with the "let it crash" ethos

I see this phrase around a lot and I wish I could understand it better, having not worked with Erlang and only a teeny tiny bit with Elixir.

If I ship a feature that has a type error on some code path and it errors in production, I've now shipped a bug to my customer who was relying on that code path.

How is "let it crash" helpful to my customer who now needs to wait for the issue to be noticed, resolved, a fix deployed, etc.?


I agree this is a concern, but it frustrates me that tech companies won't give us reasonable options.

- "Scan photos I upload" yes/no. No batch processing needed, only affects photos from now on.

- "Delete all scans (15,101)" if you are privacy conscious

- "Scan all missing photos (1,226)" can only be done 3x per year

"But users are dummies who cannot understand anything!" Not with that attitude they can't.


I applaud your desire to write better commit messages and not be lazy. Not every commit deserves the attention, but being able to turn on "I am definitely going to leave a precise record for the next person to see this diff" is a great skill to have.

However, I feel like your approach here is a little backwards. By getting the AI to come up with the commit messages, you're actually removing the chance for the human, you, to practise and improve.

I'm a real fan of Kahneman's "thinking fast" and "thinking slow" paradigm. By asking the human to review and approve the commit message, you're allowing them to "think fast", instead of doing the challenging, deliberative "thinking slow" of actually writing what you mean.

While getting the LLM to ask you questions about what you did and why is better than just one-shotting the commit message from the diff, it still lets you reply "reactively" and instinctually, using your "fast" gut thinking, instead of engaging the slower attentive processes required to write from scratch.

Now there are a couple of other posters here critiquing the commit messages in this repo's history. I think that's fair, but by your own admission you are learning, and this is a small and new project! Probably most commits should be along the lines of "getting a thing working", not essays about the intricacies of character encoding:

https://dhwthompson.com/2019/my-favourite-git-commit

But the commits we can see are already demonstrating some of the pitfalls of LLM-generated language.

From a recent commit,

"This update enhances user interaction by explicitly addressing scenarios with large diffs, directing users towards feasible actions and maintaining workflow continuity."

This comes after a detailed breakdown of the diff. It is too vague to stand alone without the preceding detail (e.g. the 40k character limit), but it also doesn't explain that detail. Why 40k characters? Why any limit at all? Words like "enhances" and "feasible" are filler; be concrete instead. Something like "reject diffs over 40k characters and tell the user to split the commit" would say more in fewer words (assuming that's what the change actually does).

This Wikipedia article has fantastic advice about the ways LLM writing fails, with more examples along the lines of what I've just pointed out:

https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing

Writing well is hard, never "effortless" as your readme advertises. Sadly, good results have to come from hundreds of hours of hard and uncomfortable work. Truth is rare and precious and difficult to come by, and even when we glimpse it, turning it into words is a whole nother story. I hope you can continue to develop this tool to help you learn and train your own writing, rather than avoid it.

