It's wild that a president can say, "I don't like Elon anymore, so out of retaliation, I'm canceling all his government contracts," and ~40% of the country doesn't see that as corruption in any way, shape, or form.
Government contracts should not be based on whether or not the president likes the CEO, or on whether the CEO says enough nice things about the president.
If you're willing to cancel contracts not based on merit, it follows that you're likely willing to grant contracts not based on merit too, based on nepotism instead.
This is literally the path that led the USSR to ruin. If anyone says anything you don't like, their funding is gone, even if it shoots the country in the foot. If people kiss your ass enough, they get contracts, even if it's clear they're just spending the money on hookers and coke and yachts and not delivering on promises, and it shoots the country in the head.
I think the real white collar bloodbath is that the end of ZIRP was the end of infinite software job postings and the start of layoffs. It's easy to point to AI now, but that seems like a canard for the huge thing that already happened.
In terms of magnitude the effect is just enormous and still being felt: software job postings never recovered to pre-2020 levels, and may never. (With pre-pandemic postings indexed to 100, software now sits at 61.)
For another point of comparison, construction and nursing job postings are higher than they were pre-pandemic (about 120 and 116 respectively, against the same pre-pandemic index of 100), while banking jobs still hover around 100.
I feel like this is almost going to become lost history because the AI hype is so self-insistent. People a decade from now will think Elon slashed Twitter's employee count by 90% because of some AI initiative, and not because he simply thought he could run a lot leaner. We're on year 3-4 of a lot of other companies wondering the same thing. Maybe AI will play into that eventually. But so far companies have needed no such crutch for reducing headcount.
I think this article is pretty spot on — it articulates something I’ve come to appreciate about LLM-assisted coding over the past few months.
I started out very sceptical. When Claude Code landed, I got completely seduced — borderline addicted, slot machine-style — by what initially felt like a superpower. Then I actually read the code. It was shockingly bad. I swung back hard to my earlier scepticism, probably even more entrenched than before.
Then something shifted. I started experimenting. I stopped giving it orders and began using it more like a virtual rubber duck. That made a huge difference.
It’s still absolute rubbish if you just let it run wild, which is why I think “vibe coding” is basically just “vibe debt” — because it just doesn’t do what most (possibly uninformed) people think it does.
But if you treat it as a collaborator — more like an idiot savant with a massive brain but no instinct or nous — or better yet, as a mech suit [0] that needs firm control — then something interesting happens.
I’m now at a point where working with Claude Code is not just productive, it actually produces pretty good code, with the right guidance. I’ve got tests, lots of them. I’ve also developed a way of getting Claude to document intent as we go, which helps me, any future human reader, and, crucially, the model itself when revisiting old code.
What fascinates me is how negative these comments are — how many people seem closed off to the possibility that this could be a net positive for software engineers rather than some kind of doomsday.
Did Photoshop kill graphic artists? Did film kill theatre? Not really. Things changed, sure. Was it “better”? There’s no counterfactual, so who knows? But change was inevitable.
What’s clear is this tech is here now, and complaining about it feels a bit like mourning the loss of punch cards when terminals showed up.
There are some misunderstandings in the comments that seem to stem from not having read the section, so I thought it was worth referencing the actual text [0]. It's quite short and easy to read.
The most important bits:
* Subsection (a) requires amortizing "Specified research or experimental expenditures" over 5 years (paragraph (2)) instead of deducting them (paragraph (1))
* Paragraph (c)(3) is a Special Rule that requires that all software development expenses be counted as a "research or experimental expenditure".
That's it. All software development expenses must be treated as research and experimental expenses, and no research and experimental expense can be deducted instead of amortized. Ergo, all software development expenses must be amortized over 5 years.
I strongly recommend reading the section before forming an opinion. It really is quite unambiguous and is unambiguously bad for anyone who builds software and especially for companies that aren't yet thoroughly established in their space (i.e. startups).
Also note that this makes Software a special case of R&D. It's the only form of R&D that Section 174 requires you to categorize as such and therefore amortize.
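To make the cash impact concrete, here's a simplified sketch. The numbers are invented for illustration, and I'm assuming a flat 21% rate with no other income or deductions; the section starts amortization at the midpoint of year one, so only 1/5 × 1/2 = 10% of the spend is deductible in that first year:

```python
# Simplified year-one comparison: deduct software spend immediately
# (old rule) vs. amortize over 5 years starting at the year's midpoint.
RATE = 0.21  # assumed flat corporate rate

def year_one_tax(revenue: float, software_spend: float, amortize: bool) -> float:
    # Under amortization, only 10% of the spend is deductible in year one.
    deduction = software_spend * 0.10 if amortize else software_spend
    return (revenue - deduction) * RATE

# A hypothetical startup with $1.5M revenue paying $1M in developer salaries:
old_rule = year_one_tax(1_500_000, 1_000_000, amortize=False)  # tax on $500k of real profit
new_rule = year_one_tax(1_500_000, 1_000_000, amortize=True)   # tax on $1.4M of paper "profit"
```

In this toy example the year-one tax bill nearly triples (from $105k to $294k) even though the company's actual cash position is unchanged, which is exactly why this hits startups hardest.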
This article does not touch on the thing which worries me the most with respect to LLMs: the dependence.
Unless you can run the LLM locally, on a computer you own, you are now completely dependent on a remote centralized system to do your work. Whoever controls that system can arbitrarily raise the prices, subtly manipulate the outputs, store and do anything they want with the inputs, or even suddenly cease to operate. And since, according to this article, only the latest and greatest LLM is acceptable (and I've seen that exact same argument six months ago), running locally is not viable (I've seen, in a recent discussion, someone mention a home server with something like 384G of RAM just to run one LLM locally).
To those of us who like Free Software because of the freedom it gives us, this is a severe regression.
I am so incredibly excited for WebRTC broadcasting. I wrote up some reasons in the Broadcast Box[0] README and the OBS PR [1]
Now that GStreamer, OBS and FFmpeg all have WHIP support we finally have a ubiquitous protocol for video broadcasting for all platforms (Mobile, Web, Embedded, Broadcasting Software etc...)
I have been working on Open Source + WebRTC Broadcasting for years now. This is a huge milestone :)
One thing that I find truly amazing is just the simple fact that you can now be fuzzy with the input you give a computer, and get something meaningful in return. Like, as someone who grew up learning to code in the 90s it always seemed like science fiction that we'd get to a point where you could give a computer some vague human level instructions and get it more or less do what you want.
I've got to say, some of the comments here are pretty funny.
> "The sideloading restriction is easily solved by installing GrapheneOS"
> "Unless they block ADB, I wouldn't say it's accurate to claim they're 'blocking sideloading'."
Not to pick on these folks but it's like we on HN have forgotten that ordinary people use phones too. For some of us, it's not a limitation as long as we can solder a JTAG debugger to some test pads on the PCB and flash our own firmware, but for most users that's just about as possible as replacing the OS.
> There's a culture of indifference, an embrace of mediocrity. I don't think it's new, but I do think perhaps AI has given the lazy and prideless an even lower energy route to... I'm not sure. What is the goal?
I think pride in work has declined a lot (at least in the US) because so many large employers have shown that they aren't even willing to pretend to care about their employees. It's difficult to take pride in work done for an employer that you aren't proud of, or actively dislike.
I’m an AI skeptic. I’m probably wrong. This article makes me feel kinda wrong. But I desperately want to be right.
Why? Because if I’m not right then I am convinced that AI is going to be a force for evil. It will power scams on an unimaginable scale. It will destabilize labor at a speed that will make the Industrial Revolution seem like a gentle breeze. It will concentrate immense power and wealth in the hands of people who I don’t trust. And it will do all of this while consuming truly shocking amounts of energy.
Not only do I think these things will happen, I think the Altmans of the world would eagerly agree that they will happen. They just think it will be interesting / profitable for them. It won’t be for us.
And we, the engineers, are in a unique position. Unlike people in any other industry, we can affect the trajectory of AI. My skepticism (and unwillingness to aid in the advancement of AI) might slow things down a billionth of a percent. Maybe if there are more of me, things will slow down enough that we can find some sort of effective safeguards on this stuff before it’s out of hand.
This matches my experience. I actually think a fair amount of the value I get from LLM assistants is having a reasonably intelligent rubber duck to talk to. Now the duck can occasionally disagree and sometimes even offer refinements.
I think the big question everyone wants to skip to, right past this conversation, is: will this continue to be true two years from now? I don’t know how to answer that question.
I worked at two different $10B+ market cap companies during ZIRP. I recall that in most meetings, over half of the knowledge workers attending were superfluous. I mean, we hired someone on my team to attend cross-functional meetings because our calendars were literally too full to attend them ourselves. Why could we do that? Because the company was growing, and hiring someone to attend meetings wasn't going to hurt the skyrocketing stock. Plus, hiring someone gave my VP more headcount and therefore more clout. The market only valued company growth, not efficiency. But the market always capitulates to value (over time). When that happens, all those superfluous hires will get axed. Both companies have since laid off 10K+. AI was the scapegoat. But really, a lot of the knowledge worker jobs it "replaces" weren't providing real value anyway.
For the young players: this is what the “hacker” in “Hacker News” stands for. This is 101, and it's very simply explained, which makes it a great step-by-step example of a typical journey. Hackaday is full of these if you want more.
The author is clearly curious and goes in already knowing a lot.
The work-behind-the-work is looking up data sheets for the chips involved; desoldering them without damaging them (and, in the case of the memory, resoldering with hookup wire, hoping its access is slow enough to work fine over the length of the wire); following hunches; trying things; and knowing for next time that you could use a pinhole camera or something of the sort when drilling shallow holes, looking through for tamper traces to avoid in further drilling, if so desired.
As others have mentioned, it would be interesting if the author had stuck with it and gotten past the tamper checks to see if the device would work as normal. Oh well!
It took me a few days to build the library with AI.
I estimate it would have taken a few weeks, maybe months to write by hand.
That said, this is a pretty ideal use case: implementing a well-known standard on a well-known platform with a clear API spec.
In my attempts to make changes to the Workers Runtime itself using AI, I've generally not felt like it saved much time. Though, people who don't know the codebase as well as I do have reported it helped them a lot.
I have found AI incredibly useful when I jump into other people's complex codebases, that I'm not familiar with. I now feel like I'm comfortable doing that, since AI can help me find my way around very quickly, whereas previously I generally shied away from jumping in and would instead try to get someone on the team to make whatever change I needed.
I was just kvetching about this to my partner over breakfast. Not exactly, but a parallel observation, that a lot of people are just kind of shit at their jobs.
The utility tech who turned my tiny gas leak into a larger gas leak and left.
The buildings around me that take the better part of a decade to build (really? A parking garage takes six years?)
Cops who have decided it's their job to do as little as possible.
Where I live, it seems like half the streets don't have street signs (this isn't a backwater where you'd expect this, it's Boston).
I made the acquaintance of a city worker who is very proud, to her non-professional friends, that she takes home a full salary for about two hours of work per day following up with contractors, then heads to the gym and makes social plans.
There's a culture of indifference, an embrace of mediocrity. I don't think it's new, but I do think perhaps AI has given the lazy and prideless an even lower energy route to... I'm not sure. What is the goal?
It turns out that when elections are fought on the basis of identity (race, religion, etc.), corruption is actually considered a benefit! This is because the loyalists interpret it as "we" are winning and "they" are losing.
I witnessed this up close in India where parties openly exist to benefit certain constituencies based on caste, language, religion and so on.
It is horrifying to see this attitude take root in my adopted land.
> Meanwhile, software developers spot code fragments seemingly lifted from public repositories on Github and lose their shit. What about the licensing? If you’re a lawyer, I defer. But if you’re a software developer playing this card? Cut me a little slack as I ask you to shove this concern up your ass. No profession has demonstrated more contempt for intellectual property.
This kind of guilt-by-association play might be the most common fallacy in internet discourse. None of us are allowed to express outrage at the bulk export of GitHub repos with zero regard for their copyleft status because some members of the software engineering community are large-scale pirates? How is that a reasonable argument to make?
The most obvious problem with this is it's a faulty generalization. Many of us aren't building large-scale piracy sites of any sort. Many of us aren't bulk downloading media of any kind. The author has no clue whether the individual humans making the IP argument against AI are engaged in piracy, so this is an extremely weak way to reject that line of argument.
The second huge problem with this argument is that it assumes that support for IP rights is a blanket yes/no question, which it's obviously not. I can believe fervently that SciHub is a public good and Elsevier is evil and at the same time believe that copyleft licenses placed by a collective of developers on their work should be respected and GitHub was evil to steal their code. Indeed, these two ideas will probably occur together more often than not because they're both founded in the idea that IP law should be used to protect individuals from corporations rather than the other way around.
The author has some valid points, but dismissing this entire class of arguments so flippantly is intellectually lazy.
For a variety of reasons, I wanted some notoriety when I was younger. I wanted to be “the guy who’d done that thing.”
I became a lot happier with myself when I stopped chasing that and just decided to post the things that I like and the projects I wanted to do. These days I like to think of my website as part of the “old, good internet”: No ads, no demands, just whatever I like and wanted to write.
It’s worth recognizing that that comfort came around/after I was making decent enough money that I wasn’t also trying to figure out a side hustle. It feels to me like “do the things you like” is a luxury of someone who isn’t anxious about paying all their bills.
I like the way Jeff signed off the article, pointing out that whilst the video has been pulled for (allegedly) promoting copyright infringement, YouTube, via Gemini, is (allegedly) slurping the content of Jeff's videos for the purposes of training their AI models.
Seems ironic that their AI models are getting their detection of "Dangerous or Harmful Content" wrong. Maybe they just need to infringe more copyright in order to better detect copyright infringement?
Hah, yes! Whereas most of my developer friends have long ago moved to off-the-shelf Hugo or Jekyll templates for their personal sites, I stubbornly maintain my blog with entirely bespoke CSS and a backend only a parent could love.
For me, the joy is not in having a website, the joy is in building the website. Why would I want to hand off the joyful part?
It's like maintaining a classic car. You can buy a reliable decent looking car, but that's not fun. If your goal is just to get somewhere, sure, but my goal is to have fun.
I work on websites all day where I get less and less say in the design and functionality. Why would I not want total control over my own?
This is the problem I had with all the content removal around Covid. It never ends with the one topic we might not even be unhappy to see removed.
From another comment: "Looks like some L-whateverthefuck just got the task to go through YT's backlog and cut down on the mention/promotion of alternative video platforms/self-hosted video serving software."
This is exactly what YT did with Covid related content.
Here in the UK, Ofcom held their second day-long livestreamed seminar on their implementation of the Online Safety Act on Wednesday this week. This time it was about keeping children "safe", including with "effective age assurance".
Ofcom refused to give any specific guidance on how platforms should implement the regime they want to see. They said this is on the basis that if they give specific advice, it may restrict their ability to take enforcement action later.
So it's up to the platforms to interpret the extremely complex and vaguely defined requirements and impose a regime which Ofcom will find acceptable. It was clear from the Q&A that some pretty big platforms are really struggling with it.
The inevitable outcome is that platforms will err on the side of caution, bearing in mind the potential penalties.
Many will say, this is good, children should be protected. The second part of that is true. But the way this is being done won't protect children in my opinion. It will result in many more topic areas falling below the censorship threshold.
This is a misunderstanding. Local network devices are protected from random websites by CORS, and have been for many years. It's not perfect, but it's generally quite effective.
The issue is that CORS gates access only on the consent of the target server. It must return headers that opt into receiving requests from the website.
This proposal aims to tighten that, so that even if the website and the network device both actively want to communicate, the user's permission is also explicitly requested. Historically we assumed server & website agreement was sufficient, but Facebook's recent tricks where websites secretly talked to apps on your phone have broken that assumption - websites might be in cahoots with local network servers to work against your interests.
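Concretely, the existing opt-in works via an extra preflight exchange. This is a sketch using the header names from Chrome's Private Network Access draft; the origin and the idea of a local device host are placeholders for illustration:

```python
# The browser's preflight asks the local device for permission with
# Access-Control-Request-Private-Network; the device must answer with
# Access-Control-Allow-Private-Network to opt in. Header names are from
# Chrome's Private Network Access draft.
def preflight_allows(origin: str, response_headers: dict) -> bool:
    # Both the origin grant and the private-network grant must be present,
    # otherwise the browser blocks the real request.
    return (
        response_headers.get("Access-Control-Allow-Origin") in (origin, "*")
        and response_headers.get("Access-Control-Allow-Private-Network") == "true"
    )
```

A device that stays silent is protected; a device that opts in is not, even when opting in is against the user's interest. That is exactly the gap a user permission prompt would close.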
Machine translation and speech recognition. The state of the art for these is a multi-modal language model. I'm hearing impaired veering on deaf, and I use this technology all day every day. I wanted to watch an old TV series from the 1980s. There are no subtitles available. So I fed the show into a language model (Whisper) and now I have passable subtitles that allow me to watch the show.
Am I the only one who remembers when that was the stuff of science fiction? It was not so long ago an open question if machines would ever be able to transcribe speech in a useful way. How quickly we become numb to the magic.
"Proof-of-Work CAPTCHA with password cracking functionality"
The "work" is "to use the distributed power of webusers’ computers" to "obtain suspects’ passwords in order to access encrypted evidence" and "support law enforcement activities".
Funny how that isn't mentioned anywhere in the linked site.
One thing that really bothered me, which the author glossed over (perhaps they don't care, given the tone of the article), is where they said:
> Does an intern cost $20/month? Because that’s what Cursor.ai costs.
> Part of being a senior developer is making less-able coders productive, be they fleshly or algebraic.
But do you know what another part of being a senior developer is? Not just making them more productive, but also guiding the junior developers into becoming better, independent, self-tasking, senior coders. And that feedback loop doesn't exist here.
We're robbing ourselves of good future developers, because we aren't even thinking about the fact that the junior devs are actively learning from the small tasks we give them.
Will AI completely replace devs before we all retire? Maybe. Maybe not.
But long before that, we'll feel the absence of the future coders who aren't being hired and trained, because a senior dev who'd rather pay $20/month for an LLM doesn't understand that junior devs becoming senior devs is an important pipeline. That's going to become a major loss and brain drain domestically.
This idea appears every once in a while, as it’s obviously a major issue in modern life.
The interesting thing though is how the solution is always location-agnostic. By that I mean it’s never really about a specific cafe or restaurant or soccer field, it’s always an app or service that organizes people to show up in various places.
I bring this up because if you look at places that had lively social activities a few decades or a century ago, they were almost always a specific place.
The neighborhood cafe where locals can stop by at any time and see other locals. The bar that everyone stops by after work twice a week. These are stationary physical locations that don’t require pre-planning, schedules, apps, or anything else.
(I worked at a different processing company, which I am not speaking for.)
We're struggling to find the motive or intended outcome by the attacker(s).
The highest likelihood for me is that they're doing card/credential testing. They have either stolen or purchased a large number of stolen credentials. Those credentials are worth more individually if they are known to function. They can use any business on the Internet which sells anything and would tell someone "Sorry, can't sell you that because I couldn't charge your account/card/etc. Do you have another one?" to quickly winnow their set of credentials into a pile of ones which haven't been canceled yet and another pile. Another variation of this attack is their list is "literally just enumerate all the cards possible in a range and try to sift down to the cards that actually exist."
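That enumerate-a-range variant is cheaper than it sounds, because card numbers carry a Luhn check digit, so an attacker (or, usefully, a defender's input validation) can discard roughly 90% of a number range without ever touching the network. A quick sketch of the checksum:

```python
def luhn_valid(number: str) -> bool:
    """Check the Luhn checksum that major card numbers carry."""
    total = 0
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:     # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9     # equivalent to summing the two digits
        total += d
    return total % 10 == 0
```

Rejecting Luhn-invalid input at the form level costs nothing and forces the attacker's enumeration to be ten times noisier per live card found.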
After sifting through to find the more valuable cards, they either sell the list on to another attacker, at a higher price than the mixed working-and-not-working list would fetch, or pass it to a colleague who will attempt to hit the cards/creds for actual money.
Digital items are useful because people selling them have high margins and have lower defenses against fraud as a result. Cheap things, especially cheap things where they can pick their price, are useful because it is less likely to trigger the attention of the card holder or their bank. (This is one reason charities get abused very frequently, because they will often happily accept a $1 or lower donation, even one which is worth less than their lowest possible payment processing cost.) The bad guys don't want to be noticed because the real theft is in the future, by them or (more likely) by someone they sell this newly-more-valuable card information onto.
This hit the company I used to run back in the day, also on Paypal, and was quite frustrating. I solved it by adding a few heuristics and giving any user matching them the product for free, with the usual message they'd get on a successful sale. This quickly spoils your website for the purpose they're trying to use it for, and the professional engineering team employed to abuse you experiences thirty seconds of confusion and regret before moving on to the next site on their list. Back in the day, the bad guys were extremely bad at making their browser instance even try to look like a normal user in terms of e.g. pattern of data access prior to attempting to buy a thing.
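A minimal sketch of that fake-success approach; the signal names and thresholds here are invented for illustration, not the rules I actually used, and a real system would weigh many more signals (IP reputation, email age, navigation history before checkout, etc.):

```python
# Hypothetical card-testing heuristics; names and thresholds invented.
def looks_like_card_testing(order: dict) -> bool:
    return (
        order["amount_cents"] <= 100                 # minimum-price probe
        or order["attempts_from_ip_last_hour"] > 5   # rapid-fire retries
        or order["time_on_site_seconds"] < 3         # no human browsed first
    )

def handle_checkout(order: dict) -> dict:
    if looks_like_card_testing(order):
        # Never charge the card, but return the ordinary success page so
        # the attacker can't tell whether the credential was validated.
        return {"status": "success", "charged": False}
    return {"status": "success", "charged": True}
```

The key design choice is that both branches look identical from the outside: the attacker's tooling gets no negative signal to learn from, which is what spoils the site for testing.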
Hope some of that is useful. Best of luck and skill. You can eventually pierce through to Paypal's attention here and they may have options available contingent on you being under card/credential testing attack, or they might not. I was not successful in doing so back in the day prior to solving the problem for myself.
Would also recommend building monitoring so you know this is happening in the future before the disputes roll in. Note that those disputes might be from them or from the legitimate users depending on exactly what credentials they have stolen, and in the case they are from legitimate users, you may not have caught all of the fraudulent charges yet. (Mentioning because you said "all of the charges" were disputed.) If I were you I'd try to cast a wider net and pre-emptively refund or review things in the wider net, both because the right thing to do and also because you may be able to head off more disputes later as e.g. people get their monthly statements.
> One of Bill Atkinson’s amazing feats (which we are so accustomed to nowadays that we rarely marvel at it) was to allow the windows on a screen to overlap so that the “top” one clipped into the ones “below” it. Atkinson made it possible to move these windows around, just like shuffling papers on a desk, with those below becoming visible or hidden as you moved the top ones. Of course, on a computer screen there are no layers of pixels underneath the pixels that you see, so there are no windows actually lurking underneath the ones that appear to be on top. To create the illusion of overlapping windows requires complex coding that involves what are called “regions.” Atkinson pushed himself to make this trick work because he thought he had seen this capability during his visit to Xerox PARC. In fact the folks at PARC had never accomplished it, and they later told him they were amazed that he had done so. “I got a feeling for the empowering aspect of naïveté”, Atkinson said. “Because I didn’t know it couldn’t be done, I was enabled to do it.” He was working so hard that one morning, in a daze, he drove his Corvette into a parked truck and nearly killed himself. Jobs immediately drove to the hospital to see him. “We were pretty worried about you”, he said when Atkinson regained consciousness. Atkinson gave him a pained smile and replied, “Don’t worry, I still remember regions.”