"This is computer science. There aren't restrictions." - It's literally what many people forget when they let themselves constraint by frameworks and claim this and that isn't possible. Very refreshing to see how liberating it is if you go one level deeper.
cnlohr is in a league of his own; this guy is bordering on genius.
Dissatisfied with the state of the vendor SDK for CH32V003 microcontrollers (ultra-cheap RISC-V MCUs), he created his own [1], which is a pleasure to use. He also has a header-only RISC-V emulator that runs Linux (and Doom!) [2], and hacked an ESP32 to emit valid LoRa frames through clever use of aliasing [3].
I think this problem is at least in part due to the hypothesis testing concept itself. Classical hypothesis testing is asymmetric: there is a "null" hypothesis, which is typically the uninteresting/useless case, and an "alternative" hypothesis, which is the one you would like to be true. Critically, you cannot determine if the data _supports_ the null hypothesis, only if the data _rejects_ it (and supports the alternative). A so-called "null result" occurs when data is not sufficient to reject the null hypothesis; then you can't tell if you actually have a useful finding (for example, that there is no major difference between species A and B) or a failed experiment (data was so bad/noisy that we cannot conclude anything). And so you end up with the unfortunate situation where you either succeed in proving your favorite hypothesis and get your degree / promotion / tenure, or you have nothing.
This happens because hypothesis testing conflates effect size (how big is the difference between A and B) with uncertainty about that effect size (significance/reproducibility). Confidence intervals are more useful IMHO, as they help untangle these two aspects, for example by showing that the difference between A and B is small _and_ reproducible. Bayesian analysis is also a major improvement, as it allows examining both the "null" and "alternative" hypotheses on equal terms, as well as reasoning about our prior beliefs / biases. Unfortunately, many areas of science are still stuck with statistical methods from the early 1900s.
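A minimal numerical sketch of the confidence-interval point, with synthetic data and numbers chosen purely for illustration: at a large sample size a bare p-value only tells you the difference is "significant", while the interval also tells you it is tiny.

    # Synthetic example (illustrative numbers): a real but tiny difference between two groups.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    a = rng.normal(loc=10.00, scale=1.0, size=n)   # group A
    b = rng.normal(loc=10.02, scale=1.0, size=n)   # group B: true difference is only 0.02

    diff = b.mean() - a.mean()
    se = np.sqrt(a.var(ddof=1) / n + b.var(ddof=1) / n)   # standard error of the difference
    ci = (diff - 1.96 * se, diff + 1.96 * se)             # approximate 95% confidence interval

    print(f"difference = {diff:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f}), z = {diff / se:.1f}")
    # The interval is narrow and excludes zero: the effect is reproducible *and* clearly small.
    # A bare "p < 0.05" would have reported only the first half of that statement.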
Reading this I couldn't help thinking of the epicycle theory of planetary motion. Under the holy assumption that planets had to move in perfect circles, people invented increasingly complicated circles-on-top-of-circles models in order to explain all observed trajectories. Then Kepler came along and said "hey, it's an ellipse!"
Seems like that’d be the default assumption - I pay zero point one bucks per kWh, so of course it’s an amount. What’s more interesting is kW/h: it feels like a rate, but it’s more like an acceleration.
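To make the "it's an amount" point concrete (illustrative numbers, apart from the $0.10/kWh price above): you buy kWh the way you buy liters of fuel, so the bill is just amount times price.

    # kWh is a quantity of energy; the bill is quantity x unit price. Numbers are made up.
    power_kw = 2.0          # a 2 kW space heater
    hours = 3.0             # left running for three hours
    price_per_kwh = 0.10    # "zero point one bucks per kWh"

    energy_kwh = power_kw * hours        # 6 kWh consumed
    cost = energy_kwh * price_per_kwh    # $0.60
    print(f"{energy_kwh:.0f} kWh -> ${cost:.2f}")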
Like he said, it would measure the rate of change between two kilowatt levels. You might ask questions like "how quickly can this power plant adjust the amount of power it's producing?", and the answer would take the form of an amount of power over an amount of time.
This is one of the most salient ways in which different types of power plants differ; it's something grid operators are very concerned with.
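And a hypothetical example of what a kW/h-shaped number looks like in practice (the figures below are made up for illustration): a ramp rate is a change in power output divided by the time it takes, i.e. power per time.

    # Ramp rate = (change in power output) / time. Hypothetical plant, made-up numbers.
    start_output_mw = 200.0   # output before the ramp, in MW
    end_output_mw = 500.0     # output after the ramp, in MW
    ramp_time_h = 0.5         # it takes half an hour to get there

    ramp_rate_mw_per_h = (end_output_mw - start_output_mw) / ramp_time_h
    print(f"ramp rate: {ramp_rate_mw_per_h:.0f} MW/h")   # 600 MW/h -- the "acceleration"-like unit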
The yoctomole is a wonderful unit, for example when you want just slightly more than half a cup of coffee!
But the mole as a "counting unit" is different from OP's idea of assigning units to things like network requests. The mole is just shorthand for a number, like a dozen or a score. We don't have different kinds of "moles" for, say, carbon atoms and water molecules. Or coffee.
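For the curious, the arithmetic behind the yoctomole joke, using the exact SI value of Avogadro's constant: a yoctomole of anything comes out to roughly 0.6 of a single counted entity.

    # A mole is just a count: 6.02214076e23 entities (exact SI value since 2019).
    AVOGADRO = 6.02214076e23     # entities per mole
    yoctomole = 1e-24            # one yoctomole, in moles
    print(yoctomole * AVOGADRO)  # ~0.602 -- slightly more than half of one entity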
Just a reminder that "70% higher risk" is a relative value describing the _fractional_ increase in risk. In absolute terms, the probability of developing Parkinson's (prevalence) was 0.33% in the group exposed to TCE and 0.21% in the non-exposed group. So you might also say TCE increases the risk of Parkinson's by 0.12 percentage points.
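A quick illustration of the two ways of expressing the same numbers (the prevalences are the rounded figures quoted above; the study's headline 70% is presumably an adjusted estimate, so it won't match a naive ratio of these rounded values exactly):

    # Relative vs. absolute risk, using the rounded prevalences from the comment above.
    p_exposed = 0.0033     # Parkinson's prevalence in the TCE-exposed group
    p_control = 0.0021     # prevalence in the non-exposed group

    absolute_increase = p_exposed - p_control                 # 0.0012 -> 0.12 percentage points
    relative_increase = (p_exposed - p_control) / p_control   # ~0.57 -> "~57% higher" from these rounded figures

    print(f"absolute: {absolute_increase * 100:.2f} percentage points")
    print(f"relative: {relative_increase:.0%} higher")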