This is an initiative I want to support, but after reading both stories, I think you're making the mistake of having a good-faith argument with bad-faith actors: comparing approaches as if you were chasing the same objective from different principles.
DOGE is not trying to find efficiency. DOGE is trying to funnel money from the people to the powerful. DOGE is actively part of a project to destroy the government. DOGE does not give a damn.
I don't think they are trying to have a good-faith argument with DOGE -- I think they are trying to appeal to the hopefully-still-extant, sane, slight majority of Americans.
Why this illusion, and how, you may ask? I can tell you why and how! It's because people are far more likely to notice the data points they dislike (https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...). Not only that, but the dislikeables make a stronger impression when you do notice them. This is why people with strong feelings on a topic always feel like the site is going to the dogs—they're unintentionally blotting out the other data.
---
Edit: here are more recent examples in case helpful (what can I say, it's a hobby)
I've got just as many pointing the other way, of course, but the valuable examples point opposite to the illusion. That is, if I were replying to one of these (^^^^) commenters, I'd point them to your post instead!
It's not an illusion. I've been on HN across various accounts for over a decade. There is a hard liberal-progressive tilt to comments and post flagging which has been documented and remarked upon by others on sites where such comments don't get flagged and removed. Your own bias is also quite obvious, as is that of pg, who is now known on Twitter as a block-happy liberal lolcow. But thanks for posting all that, I'm glad Bolsheviks seething at Mensheviks haven't escaped your attention.
I wouldn't call this documentation in the sense that I thought you meant the word.
These are examples of users complaining about HN in the way many users always complain about HN. You'll find countless such comments making general claims about how the site is "all X" or "has become Y" or "never allows any Z".
The problem is that these perceptions are not reliable—people base them on datapoints that they happen to notice because they dislike them so much [1], and from there they jump to general conclusions.
I'm willing to believe you mean what you say and are posting in good faith, but please read this comment and others on its link and tell me with a straight face there's no strong lib-left bias in HN's community moderation:
You and others on the right, as a group in 2025, only argue for freedom of speech when it suits you. Yes, the left does this too, and it's just as despicable.
I wouldn't bother engaging with this kind of comment. Hacker News is not the place for back-and-forth personal mud-slinging. Just flag it and move on; we should try to keep things civil.
As the article points out, they are arguing in court against The New York Times that publicly available data is fair game.
The questions I am keenly waiting to observe the answer to (because surely Sam's words are lies): how hard is OpenAI willing to double down on their contradictory positions? What mental gymnastics will they use? What power will back them up, how, and how far will that go?
Their way of squaring this circle has always been to whine about "AI safety". (the cultish doomsday shit, not actual harms from AI)
Sam Altman will proclaim that he alone is qualified to build AI and that everyone else should be tied down by regulation.
And it should always be said that this is, of course, utterly ridiculous. Sam Altman literally got fired over this, has an extensive reputation as a shitweasel, and OpenAI's constant flouting and breaking of rules and social norms indicates they CANNOT be trusted.
When large sums of money are involved the techbros will burn everything down, go scorched earth no matter what the consequences, to keep what they believe they're entitled to.
This was also visible in earlier versions of Claude and ChatGPT, when the supervisor kicks in after the answer begins generating. Censoring different content, naturally.
I'm VERY interested in your project. Not playing it, I mean, but the techniques and tech stack. That sounds entirely out of my reach even with an AI, and I've written game engines in C++ before. Networking, synchronisation problems, etc. are really, really hard.

What process did you use with the AIs? Any prompting insights (context, agentic prompts, etc.)?

What tech stack did you use that you found AIs were familiar enough with? I've found them woefully misinformed about most libraries and technologies I've tried them with in game development, often confidently mixing out-of-date and new information.
I'm using Cursor's composer agent mode with Sonnet 3.5 (I don't use OpenAI on principle, snakes). It does a great job of finding the relevant code without overloading its context window.
I experimented today with Aider (to get R1 involved) and had less success, but it might be that I don't have the workflow down.
I've found Cursor can handle a .NET C# back-end using highly standard code structures very well. SignalR for networking.
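For readers who haven't used SignalR: a minimal hub for this kind of multiplayer server might look like the sketch below. This is my own illustration of the library's standard Hub/Groups/Clients API, not the commenter's actual code; the class, method, and event names are all hypothetical.

```csharp
using Microsoft.AspNetCore.SignalR;
using System.Threading.Tasks;

// Hypothetical hub sketch; names are illustrative, not from the project.
public class MatchHub : Hub
{
    // Put this connection into a per-match group so broadcasts stay scoped.
    public async Task JoinMatch(string matchId)
    {
        await Groups.AddToGroupAsync(Context.ConnectionId, matchId);
    }

    // Relay a player's command to everyone else in the same match.
    public async Task SubmitCommand(string matchId, string commandJson)
    {
        await Clients.OthersInGroup(matchId).SendAsync("CommandReceived", commandJson);
    }
}
```

The appeal for AI-assisted work is that this shape is so standardized: hubs, groups, and client proxies appear in nearly all SignalR documentation and samples, so the model has seen thousands of near-identical examples.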
I've created servers and very basic HTML visualization for three projects - a fairly simple autobattler (took a day), a web-based beat-em-up (2 days), and now a bit more ambitiously my dream RTS-MMO (3rd weekend running).
I started with concise MVP specifications, including requirements for future scaling, and from these worked with the AI to make dot-point architectural documents. Once we had those down I moved step by step, developing elements and tests simultaneously, then having the agent automatically run the tests and debug. The test-driven debugging is the part that saved the most frustration: the initial implementation was almost always broken, but left to its own devices (tabbing in and typing "continue" when hitting Cursor's 25-tool-call limit, sometimes for hours), the agent let the tests guide its bug fixing and, amazingly, got there fairly consistently. Occasionally it will still go off the rails and start modifying the tests to pass, or inventing unwanted functionality.
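To make that test-first loop concrete, here is the kind of small test-plus-implementation pair the workflow produces (an xUnit-style sketch of my own; none of these names are from the actual project). In practice the tests are written first, and the agent iterates on the implementation until they pass:

```csharp
using System;
using Xunit;

// Hypothetical game logic; in the workflow described above, the agent
// would be asked to implement this until the tests below go green.
public static class Economy
{
    // Per-tick resource collection, capped by storage capacity.
    public static int Collect(int current, int perTick, int capacity)
        => Math.Min(capacity, current + perTick);
}

public class EconomyTests
{
    [Fact]
    public void Collection_adds_per_tick_income()
        => Assert.Equal(30, Economy.Collect(20, 10, 100));

    [Fact]
    public void Collection_is_capped_by_storage()
        => Assert.Equal(100, Economy.Collect(95, 10, 100));
}
```

The tests double as a guardrail: when the agent "goes off the rails", a failing assertion is what pulls it back, which is also why its occasional habit of editing the tests themselves is the failure mode to watch for.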
The code is as standard as possible, with the servers all organized identically API -> Application -> Domain <- Infrastructure, and well separated between client/server. Getting basic HTML representations wasn't an issue, but it does begin to struggle and requires a lot more direction when it comes to client-side code that expands beyond initial visualization. I had a lot more success with Monogame C# than Phaser or other web formats (e.g. I quickly gave up on SFML, same issues you were having).
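A minimal sketch of what that API -> Application -> Domain <- Infrastructure separation can look like in C# (all type names here are my own illustration, not the project's): the Domain layer holds pure game rules with no framework dependencies, the Application layer orchestrates them behind interfaces, and Infrastructure implements those interfaces for the API layer to wire together.

```csharp
using System;

// Domain: pure game rules, no framework or I/O dependencies.
public class Unit
{
    public int Health { get; private set; }
    public Unit(int health) => Health = health;
    public void TakeDamage(int amount) => Health = Math.Max(0, Health - amount);
}

// Port implemented by Infrastructure (database, in-memory store, etc.).
public interface IUnitRepository
{
    Unit Get(Guid id);
}

// Application: orchestrates domain objects; this is what the API layer
// (controllers or SignalR hubs) calls into.
public class CombatService
{
    private readonly IUnitRepository _units;
    public CombatService(IUnitRepository units) => _units = units;

    public int Attack(Guid targetId, int damage)
    {
        var target = _units.Get(targetId);
        target.TakeDamage(damage);
        return target.Health;
    }
}
```

Keeping every server organized identically this way plays to the AI's strengths: the agent always knows where a given piece of logic belongs, and the seams (the repository interface here) are where tests can substitute fakes.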
I'm a professional game developer but without formal CS/programming training, so I'm aware of my requirements but not always how to implement them cleanly. I understand the code it writes which feels vital when it occasionally rolls a critical miss, but these projects would have taken me months without AI.