That's not exactly easy. I doubt on-device training will become much of a thing. But on-device inference is desirable in all sorts of distributed use cases. We're still a long way off from reliable internet everywhere. Especially when you want to start pushing large quantities of sensor data down the pipe.
I can't even get reliable internet on my phone in the centre of London.
> Capacity is constantly being hit by very large population growth.
Is this really true in the UK? Electricity production in the UK peaked in 2005. It's down 20% on that today. The issue here is that in 2005 electricity was primarily produced in large power stations reasonably close to where it was consumed, while in 2025 it's increasingly produced in locations far from population centres. The actual ability of the grid to deliver power to the last mile isn't really a problem. The problem is that most of the houses are in the South, and increasingly large amounts of generation are in the North.
I suspect it’s less a nostalgia for a less corporate time, and more a nostalgia for an earlier stage of the product lifecycle. Pretty much every technology follows a similar path - after an initial version proves the market, there’s an explosion of manufacturers and designs all trying something new, before eventually the product matures and settles on a single design.
The nostalgia is for a time when a new product could genuinely surprise you.
The world is rapidly homogenising. You see it with “air space” interior design - coffee shops have the same aesthetic in every major city in the world. You see it in local fashions. You see it as a tourist - travel anywhere in the world and the chances are you’ll find the same kind of shop selling the same kind of trinket. Made in China with a subtly different graphic on it to represent the country you’re in.
This has been happening ever since trade routes were established across Eurasia (Silk Road) or the Americas were discovered. It only keeps accelerating as movement and trade becomes easier.
If pockets of humanity could isolate themselves from the rest, we could get diversity growing again; that one Sentinel island might be our only hope.
Even on Earth, the only reason humans exist is because the “local maximum” of the dinosaurs was wiped out by a meteor. Perhaps comparably intelligent dinosaurs would have eventually evolved - but it’s not a given!
Dinosaurs existed for some 200 million years with no detectable signs of technology development[0]. Presumably, the steady state did not produce a scenario in which the intelligence niche would develop without some other less catastrophic global change event.
Intelligence evolved at least three times on Earth - dinosaurs (leading to corvids, though raptors were already intelligent), mammals, and cephalopods (e.g. the octopus).
I suspect that any evolutionary environment will eventually create enough variety and instability that some generalists emerge, creating a reward for intelligence. The rise in intelligence from early water-bound life to later forms was likely all driven by more complex and diverse environments.
Maybe they didn't produce an intelligent species simply because they didn't have the luck of living in the unprecedented time in the history of Earth with both high atmospheric O2 and very low atmospheric CO2 - the window we enjoyed for a while, before we started burning fossil fuels by the gigaton. See https://www.qeios.com/read/IKNUZU
It took several environment-changing events to get our unique kind of intelligence: mammals had to thrive in place of the dinosaurs, and then Africa needed to be split by the Rift, creating the dry savannah.
This forced some apes to climb down the trees and depend on a diet of scavenging for meat, which happened to both increase brain size AND require improved intellect to survive, forcing the evolution of our hypertrophied symbolic brain.
Had this not happened, however, other intelligent species could have filled the niche. There's no shortage of other intelligent species on our planet - not just other mammals, but octopuses and some birds. And then there's hive intelligence, which could equally have been pushed to evolve into a highly capable problem-solving organism.
It's in the example, isn't it? The example is logging "No space for 5 seconds". It's just a helpful diagnostic that subtly turned into data loss.
Maybe it's a bit contrived, but it's also the kind of code you'd sprinkle through your system in response to "nothing seems to be happening and I don't know why".
It's definitely a bit contrived, but to me it's also emblematic of the issues with async Rust.
The note on mpsc::Sender::send losing the message on drop [1] was actually added by me [2], after I wrote the Oxide RFD on cancellations [3] that this talk is a distilled form of. So even the great folks on the Tokio project hadn't documented this particular landmine.
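For anyone who hasn't watched the talk, here's a minimal sketch of the landmine (not the exact code from the talk - the channel size and messages are made up):

    use std::time::Duration;
    use tokio::sync::mpsc;
    use tokio::time::timeout;

    #[tokio::main]
    async fn main() {
        let (tx, mut rx) = mpsc::channel::<String>(1);

        // Fill the channel so the next send has to wait for capacity.
        tx.send("first".to_string()).await.unwrap();

        let msg = "second".to_string();
        // Intended as a harmless diagnostic: complain if we've been blocked
        // on a full channel for 5 seconds. But when the timeout fires, the
        // send future is dropped -- and `msg` was moved into it, so it's lost.
        match timeout(Duration::from_secs(5), tx.send(msg)).await {
            Ok(Ok(())) => {}
            Ok(Err(_)) => eprintln!("receiver gone"),
            Err(_) => eprintln!("No space for 5 seconds"),
        }

        drop(tx);
        while let Some(m) = rx.recv().await {
            println!("received: {m}"); // only "first" ever arrives
        }
    }

The cancel-safe way around it is to reserve() a permit first and only hand the value over once you have one (or to use send_timeout, which returns the value in the error when it times out), so the message is never moved into a future that might be dropped.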
There's hiding complexity, and then there's creating fake reality for people.
As it is, panels are gonna produce variable power depending on the weather. Putting interoperability with third-party panels aside, to get the simplicity of "max 2 panels in series", they'd have to either cap the max power on the panel/generator link and dump the excess, or set the limit based on the worst case a customer is likely to encounter. I.e. they're either gonna waste power or gouge their customers for extra hardware. Neither of those makes sense for an ecological product sold to a price-conscious customer base :).
The problem is that the same hardware sees higher voltages if you're in Alaska than if you're in Florida.
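Rough, made-up numbers to show the size of the effect (a typical crystalline panel's open-circuit voltage rises as it gets colder, on the order of 0.3% per degree C below the 25 C test conditions):

    // Back-of-the-envelope string-voltage check; the panel figures are
    // illustrative, not taken from any specific product.
    fn voc_at(voc_stc: f64, coeff_pct_per_c: f64, cell_temp_c: f64) -> f64 {
        // Open-circuit voltage scales roughly linearly with temperature.
        voc_stc * (1.0 + coeff_pct_per_c / 100.0 * (cell_temp_c - 25.0))
    }

    fn main() {
        let voc_stc = 40.0; // volts per panel at standard test conditions
        let coeff = -0.30;  // %/degC, typical for crystalline silicon
        let panels_in_series = 2.0;

        for (place, coldest_morning_c) in [("Florida", 0.0), ("Alaska", -40.0)] {
            let v = voc_at(voc_stc, coeff, coldest_morning_c) * panels_in_series;
            println!("{place}: two-panel string Voc ~ {v:.0} V");
        }
        // Prints roughly 86 V for Florida and 96 V for Alaska -- the same
        // two panels need about 12% more voltage headroom in the cold.
    }

So a voltage limit that's comfortable in one climate can be right at the edge in another, with exactly the same panels and wiring.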
Substantially so.
"Wasting" those 5~10% during severe winter conditions isn't worth splurging on the voltage converter.
Though selling units that suggest not running a few-hundred-volt string before paralleling does then sound bad, as such a string doesn't need separate fuses rated for many volts DC.
I do think commit messages should give some reference to what they're changing.
However, in more than a decade of software development, I don't think I've ever got much use out of commit messages. The only reason I'd even look at them is if git blame showed that a specific commit introduced a bug, or made some confusing code. And normally the commit message has no relevant information - even if it's informative, it doesn't discuss the one line that I care about. Perhaps the only exception would be one-line changes - say, a change to a single configuration value alongside a message saying "Change X to n for y reason".
Comments can be a bit better - but they have the nasty habit of becoming stale.
> However, in more than a decade of software development, I don't think I've ever got much use out of commit messages.
> normally the commit message has no relevant information
Maybe that's why you've never got much use out of them?
If your colleagues, or you yourself, wrote better commit messages, they could have been of use.
An easily readable history is most important in open-source projects, where ideally random people should be able to easily verify what changed.
But it can also be very useful in proprietary projects, for example to quickly find the reason for some piece of code, or to facilitate future external security reviews (in the very rare cases where they're performed).
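A made-up example of the kind of message that pays off later (the change and the numbers are invented, just to show the shape):

    Increase payment-worker HTTP timeout from 5s to 30s

    The provider's sandbox regularly takes 10-20s to respond, which made
    the integration tests flaky. 30s matches the worst case documented in
    their API guide, so retries should now only trigger on real outages.

Six months later, git blame on that one config line tells you exactly why the value is what it is.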