We've stacked up a lot of evidence that human language is much too loose and mushy to be formalized in a meaningful way.
Lossy might also be a way of putting it, like a bad compression algorithm. Written language carries far less information than spoken and nonverbal cues.
Is there a reason Apple can't focus on system improvements instead of constantly overhauling their UI so thoroughly every couple of years? I don't disagree that the OS UI needs to be revamped periodically, but it seems they do it too often.
> constantly overhauling their UI so thoroughly every couple of years
Of the big 3 companies putting out major OSes (Apple, Microsoft, Google), Apple is the best at sticking to tried-and-true designs. It's certainly gotten worse over the past couple of years (like the Photos app redesign they immediately changed again in iOS 26), and I hope their new design lead can pull things back to somewhere sensible, but compared to Android or Windows it's not even close. I used Android for the better part of a decade, and every year they'd completely redesign the notification shade and the settings app, switch the SMS app out for Hangouts, then put you back on Messages, then rename it, then change the logo/branding, then redesign it again, etc. It was endless change for no reason; it felt like a constant beta.
If you look at the basic iPhone apps - Messages, Settings, Notes - prior to Liquid Glass they'd stayed pretty much exactly as they were when Jobs showed them off at the iPhone reveal 19 years ago.
The bosses who are actually in charge likely don’t even use their Macs (and when they do, they only use the web browser) and only care about how the OS is going to look in the demo, not how it works.
Pretty sure Tim Cook has said before that he does most of his work on the iPad; it doesn't seem like an unreasonable guess that too many in the C-suite do the same and don't use Macs enough.
It's damned if you do, damned if you don't. Apple could release a new system with zero UI updates and tons of internal improvements and people would call it 'old' and 'dated' and complain about a 'lack of innovation'.
It's a bit like adding new emojis in an OS release. There have been reports that new emojis are one of the drivers for getting people to upgrade. No one cares about a zero-day security flaw, but everyone wants that new kiss emoji.
> Apple could release a new system with zero UI updates and tons of internal improvements and people would call it 'old' and 'dated' and complain about a 'lack of innovation'.
Apple has released incremental upgrades to macOS for years, and I've never heard this criticism of them. On the contrary, I often hear people missing the Leopard design, and when the UI has changed I've heard pushback (e.g., when System Preferences was renamed and redesigned as System Settings). On macOS people care about the apps and interactions, not whether the buttons got a new look.
> There have been reports that new emojis are one of the drivers for getting people to upgrade. No one cares about a zero-day security flaw, but everyone wants that new kiss emoji.
I agree with this. New emojis are new functionality; you can now express something you couldn't. A fix for a zero-day security flaw brings no new functionality. Equally, updates to apps and interactions bring new functionality. A re-skin of the OS doesn't.
I agree, but I'm pointing out why UI changes happen. Apple could certainly do UI changes as part of a cleanup release, though. Basically, start with getting back to the consistency the article points out.
But having worked with users, I've seen firsthand how tons of internal improvements get ignored while one small UI change makes 'everything seem new'.
Don't you feel that the circumstances are similar, though? There was pressure (expectation and competition) for new features. There were rapid changes in the UI and UX. But also bugs. And IIRC Mac OS X upgrades were still paid upgrades.
It was a brave move to spend a major release without adding features. And people were grateful for it, once it happened.
I'm all for them spending a major release on bug fixing. I've been on their side with a much smaller project and seen what users say, though.
The analogy I use is that no one thinks about plumbing until it's not working. I could stand up and tell people we have the best plumbing ever, it's been improved, it's less likely to break, etc., and as long as it works at a surface level it seems the vast majority of users don't care. We actually save little UI tweaks/fixes to point to when doing major behind-the-scenes upgrades so users 'see' we're doing something. It's silly, but /shrug.
Yeah, I thought the same thing too. It takes a few moments. A UX suggestion for the author: if a user wants it off, it doesn't need to be graceful, just quick.
Wow, it turns off the snowflakes, but the ones already falling stay there and keep falling. I thought it only changed the background color to yellow, so I assumed it doesn't actually turn off the snowflakes and wanted to come here and comment on how ridiculous that is on an article about bad icons; then I saw in the comments that it does turn off the snow effect... eventually (with something like a 10-second delay).
It's hilarious that it's a great article about clutter, and yet the post sits on a theme so badly cluttered that it would have been funny... if I could have read the article...
Flagging a submission just because they added a little holiday whimsy to their personal website and you don't like it? What a sad, sorry, hate-filled way to go through life.
Software doesn’t have an incremental cost. The cost of one unit of software is the same as the cost of 1 trillion units of software. That’s not the same as real world items.
Still, the cost of production is not zero, and the problem space is huge with no silver bullets, especially once you factor in things like business, scalability, regulatory, and efficiency requirements.
It doesn't matter if software has no marginal cost of production. Production cost has nothing to do with what you can charge for something. What you can charge for something, minus what it cost you to make it, is of course your profit and will help you decide if the effort is worthwhile, but it doesn't change the fact that the input cost has nothing to do with the output price.
> Production cost has nothing to do with what you can charge for something.
On the contrary, production cost has a great deal to do with what you can charge for something. In a perfectly efficient market, someone who charges more than what an item costs will sooner or later get outcompeted by someone who charges less. I think it's fair to say that production cost isn't the only factor in what you can charge, but to say "it has nothing to do with the price" is going way too far.
It does matter though. It is definitely a factor. There are lots of other factors, and different types of businesses with different margins, but they are all definitely tracking and optimizing those figures.
Consumers know that software can be reproduced cheaply and carpentry cannot.
Everything you are referring to is the CI/CD pipeline. GitHub Actions, GitLab Runners, Argo CD: they can all do some sort of GitOps. Those dependencies existed before GitOps anyway, so nothing new is being added.
This is interesting. Particularly the notifications flow. I run a simpler setup with webssh on my iPhone over WG back to my LAN and manage Claude that way. It’s fine, and can handle disconnects (with some big cons). I can run code-server via browser on my iPad and can get all the same benefits mosh provides.
One thing to note: the VM seems like an absolute waste of money. If you are using Tailscale, you might as well connect back to VMs running on bare metal at home. Save yourself some coin.
$2000 will get you 30-50 tokens/s at perfectly usable quantization levels (Q4-Q5) from any of the top 5 open-weights MoE models. That's not half bad and will only get better!
If you are running lightweight models like DeepSeek 32B, sure. But anything more and it'll drop. Also, costs for RAM and AI-adjacent hardware have risen a lot in the last month. It's definitely not $2k for a rig that does 50 tokens a second.
Could you explain how? I can't seem to figure it out.
DeepSeek-V3.2-Exp has 37B active parameters, GLM-4.7 and Kimi K2 have 32B active parameters.
Let's say we are dealing with Q4_K_S quantization for roughly half the size; we still need to move 16 GB 30 times per second, which requires a memory bandwidth of 480 GB/s, or maybe half that if speculative decoding works really well.
Anything GPU-based won't work at that speed, because PCIe 5 provides only 64 GB/s and $2000 cannot buy enough VRAM (~256 GB) for the full model.
That leaves CPU-based systems with high memory bandwidth. DDR5 would work (somewhere around 300 GB/s with 8x DDR5-4800 modules), but that would cost about twice as much for just the RAM alone, disregarding the rest of the system.
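A quick sketch of that arithmetic in Python, assuming ~4.5 bits per weight for a Q4_K_S-style quant and one full read of the active weights per generated token (both are my rough assumptions, not benchmarks):

    # Back-of-the-envelope bandwidth check.
    def required_bandwidth_gbs(active_params_billions, bits_per_weight, tokens_per_s):
        """GB/s needed to stream the active weights once per generated token."""
        gb_per_token = active_params_billions * bits_per_weight / 8  # 1e9 params * bits -> GB
        return gb_per_token * tokens_per_s

    print(required_bandwidth_gbs(37, 4.5, 30))  # ~37B active (DeepSeek-V3.2-Exp): ~624 GB/s
    print(required_bandwidth_gbs(32, 4.5, 30))  # ~32B active (Kimi K2 / GLM):     ~540 GB/s
    print(16 * 30)                              # the rounder 16 GB figure above:  480 GB/s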
Can you get enough memory bandwidth out of DDR4 somehow?
Look up AMD Strix Halo mini-PCs such as GMKtec's EVO-X2. I got the one with 128GB of unified RAM (~100GB usable as VRAM) last year for 1900€ excl. VAT; it runs like a beast, especially for SOTA/near-SOTA MoE models.