> When it was Toyota's fictitious problem, it came down to a bunch of old people.
Yeah, right. Blame it on "old people". Certainly not the code base with 10,000 global variables:
The design review found things like:

> Other egregious deviations from standard practice included the number of global variables in the system. (A variable is a location in memory that has a number in it. A global variable is one that any piece of software anywhere in the system can get to, and read or write.) The academic standard is zero. Toyota had more than 10,000 global variables.
The excessively (and negatively) complicated add-ons described in the thread are not inherent to ICE cars. Take any car from the '80s or '90s and its behavior is predictable and simple. It's only in the '00s that manufacturers started making things too complex and, as a consequence, fragile and unpredictable.
I dunno, even the simplest of many-cylinder Otto-cycle ICEs is still a Rube Goldberg machine. Especially in aggregate with the transmission needed to make the relatively narrow rpm range usable, the fuel tank/system, the exhaust, and the emissions/catalyst hardware, the parts are all over the place.
Per the title, four vendors were tested. Samsung was already mentioned as a non-loser, so it can't be one of the two losers (or else the title would be wrong and the SSDs would be from 3 vendors at most).
I didn't pay careful attention to the wording of the submitted title. I may have been confused because of the wording of the actual tweet: "I tested a random selection of four NVMe SSDs from four vendors."
The word "random" meant to me that Samsung drives could have been selected twice. But, yes, then there wouldn't be four distinct vendors.
Unstated but implied by you is that there are only two (major) Korean vendors to choose from.
So if Samsung is a Korean winner, then Hynix must be the Korean loser. Which is now clear to me.
Is it possible there's a third (minor) Korean player? Could I possibly still have a chance? :)
> Is it possible there's a third (minor) Korean player? Could I possibly still have a chance? :)
Well supposedly Zalman (another Korean company) makes SSDs, but I don't think I've ever seen one in the wild. Their specialty is heatsinks and fans, last I checked.
> Too bad programmer laziness won and most current hardware doesn't support this.
There were discussions around this a few years back when Regehr brought up the subject. One of the issues commonly raised is that if you want to handle (or force handling of) overflow, traps are pretty shit: you have to update the trap handler before each instruction that can overflow, since a global interrupt handler won't help you, it would just be a slower overflow flag (at which point you might as well use an overflow flag). Traps are fine if you can set up a single trap handler and then run through the entire program, but that's not how high-level languages deal with these issues.
32-bit x86 had INTO, and compilers didn't bother using it.
Modern programming language exception handler implementations use tables with entries describing the code at each call site, instead of costly setjmp()/longjmp() calls. I think you could do something similar with trap sites, but the tables would probably be larger.
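For flavor, here's a toy sketch of that table-driven idea in Rust. Everything in it (the `Entry` layout, the fake program-counter ranges, the handler functions) is invented for illustration; real unwinders use compiler-emitted tables, not hand-built ones:

```rust
// Toy model of table-driven trap dispatch, analogous to how zero-cost
// exception handling maps call sites to landing pads.
use std::cmp::Ordering;

type Handler = fn();

struct Entry {
    start: usize, // simulated program-counter range covered by this entry
    end: usize,
    handler: Handler,
}

fn on_overflow_a() {
    println!("overflow handled at site A");
}

fn on_overflow_b() {
    println!("overflow handled at site B");
}

// Look up the handler for a trapping "pc" with a binary search,
// so there's no per-site setup cost on the happy path.
fn dispatch(table: &[Entry], pc: usize) {
    let hit = table.binary_search_by(|e| {
        if pc < e.start {
            Ordering::Greater // entry lies after the trap site
        } else if pc >= e.end {
            Ordering::Less // entry lies before the trap site
        } else {
            Ordering::Equal
        }
    });
    match hit {
        Ok(i) => (table[i].handler)(),
        Err(_) => panic!("unhandled trap at pc {pc:#x}"),
    }
}

fn main() {
    let table = [
        Entry { start: 0x1000, end: 0x1004, handler: on_overflow_a },
        Entry { start: 0x2000, end: 0x2008, handler: on_overflow_b },
    ];
    dispatch(&table, 0x1002); // pretend an add at 0x1002 just trapped
}
```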
BTW, the Mill architecture handles exceptions from integer code the way floating point handles NaN: by setting a metadata flag called NaR (Not a Result). It gets carried through calculations just like NaN does, turning every result into NaR when used as an operand... up until it's used as an operand to an actual trapping instruction, such as a store. And of course you can also test for NaR instead of trapping.
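Here's a rough way to picture NaR in Rust. This is purely a model; the Mill does it with a hardware metadata bit, not an enum:

```rust
// Hypothetical model of NaR ("Not a Result") propagation.
#[derive(Clone, Copy, Debug, PartialEq)]
enum Val {
    Num(i64),
    NaR, // poisoned by an earlier overflow
}

impl Val {
    // Arithmetic propagates NaR, just as FP arithmetic propagates NaN.
    fn add(self, other: Val) -> Val {
        match (self, other) {
            (Val::Num(a), Val::Num(b)) => match a.checked_add(b) {
                Some(s) => Val::Num(s),
                None => Val::NaR,
            },
            _ => Val::NaR,
        }
    }

    // A trapping use site, e.g. a store: only here does NaR fault.
    fn store(self) -> i64 {
        match self {
            Val::Num(v) => v,
            Val::NaR => panic!("NaR reached a trapping instruction"),
        }
    }
}

fn main() {
    let x = Val::Num(i64::MAX).add(Val::Num(1)); // overflows -> NaR
    let y = x.add(Val::Num(42)); // NaR carried through the calculation
    assert_eq!(y, Val::NaR); // can also just test for NaR...
    // ...whereas y.store() would trap here.
}
```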
My guess would be a pipelining issue where `INTO` isn't treated as a `Jcc`, but as an `INT` (mainly because it is an interrupt). Agner Fog's instruction tables[0] show (for the Pentium 4) `Jcc` takes one uOP with a throughput of 2-4. `INTO`, OTOH, when not taken uses four uOPs with a throughput of 18! Zen 3 is much better with a throughput of 2, but that's still worse than `JO raiseINTO`.
It's more complicated than what shows up in microbenchmarks like that. Since when you do it, it's after pretty much every add, you end up polluting your branch predictor with jo instructions everywhere, and that can lead to worse overall perf.
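To make the tradeoff concrete, here's a sketch in Rust of what "a branch after every add" looks like at the source level. On x86, `checked_add` typically lowers to an add plus a conditional branch on the overflow flag, while the wrapping version leaves the compiler free to reorder or vectorize:

```rust
// Sketch: per-add overflow checking vs. explicit wrapping.
fn sum_checked(xs: &[i32]) -> Option<i32> {
    let mut acc: i32 = 0;
    for &x in xs {
        // Each iteration is an add followed by a branch on the overflow
        // flag; one more branch for the predictor to keep track of.
        acc = acc.checked_add(x)?;
    }
    Some(acc)
}

fn sum_wrapping(xs: &[i32]) -> i32 {
    // No per-add branch; the compiler can vectorize this freely.
    xs.iter().fold(0i32, |a, &x| a.wrapping_add(x))
}

fn main() {
    assert_eq!(sum_checked(&[1, 2, 3]), Some(6));
    assert_eq!(sum_checked(&[i32::MAX, 1]), None);
    assert_eq!(sum_wrapping(&[i32::MAX, 1]), i32::MIN);
}
```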
Modulo wraparound is just as much a feature in some situations as it is a bug in others. And signed vs unsigned are just different views on the same bag of bits (assuming two's complement numbers), most operations on two's complement numbers are 'sign agnostic' - I guess from a hardware designer's pov, that's the whole point :)
The question is rather: was it really a good idea to bake 'signedness' into the type system? ;)
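For instance, here's the "same bag of bits" point made concrete in Rust terms (a trivial sketch, nothing controversial):

```rust
fn main() {
    let a: i32 = -1;
    let b: i32 = 7;

    // The same 32-bit add serves both interpretations:
    let signed = a.wrapping_add(b); // 6
    let unsigned = (a as u32).wrapping_add(b as u32); // same bits, reinterpreted
    assert_eq!(signed as u32, unsigned); // identical bit patterns

    // Where signedness *does* matter: comparison (also division and widening).
    assert!(a < b); // signed view: -1 < 7
    assert!((a as u32) > (b as u32)); // unsigned view: 0xFFFF_FFFF > 7
}
```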
That's why Rust has separate operations for wrapping and non-wrapping arithmetic. When wrapping matters (e.g. you're writing a hash function), you make it explicit you want wrapping. Otherwise arithmetic can check for overflow (and does by default in debug builds).
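A quick sketch of the menu Rust gives you:

```rust
fn main() {
    let x: u32 = u32::MAX;

    // Explicit opt-in to modular arithmetic, e.g. inside a hash function:
    assert_eq!(x.wrapping_add(1), 0);

    // Explicit checking when overflow is a logic error:
    assert_eq!(x.checked_add(1), None);

    // Saturating and overflowing variants round out the menu:
    assert_eq!(x.saturating_add(1), u32::MAX);
    assert_eq!(x.overflowing_add(1), (0, true));

    // Plain `x + 1` panics in debug builds and wraps in release builds
    // (unless overflow-checks are explicitly enabled there too).
}
```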
> Modulo wraparound is just as much a feature in some situations as it is a bug in others.
That's an extremely disingenuous line of reasoning: the situations where it's a feature are a microscopic fraction of total code. Most code is neither interested in nor built to handle modular arithmetic, and most of the code that is interested in modular arithmetic needs custom moduli (e.g. hashmaps), in which case the register-sized modulus is useless.
That leaves a few cryptographic routines built specifically to leverage hardware modular arithmetic, which could trivially opt in, because the developers of those specific routines know very well what they want.
> signed vs unsigned are just different views on the same bag of bits […] The question is rather: was it really a good idea to bake 'signedness' into the type system? ;)
The entire point of a type system is to interpret bytes in different ways, so that’s no different from asking whether it’s really a good idea to have a type system.
As to your final question, Java removed signedness from the type system (by making everything signed). It’s a pain in the ass.
> Java removed signedness from the type system (by making everything signed)
That's not removing signedness. Removing signedness would be treating integers as sign-less "bags of bits", and just map a signed or unsigned 'view' over those bits when actually needed (for instance when converting to a human-readable string). Essentially Schroedinger's Cat integers.
There'd need to be a handful of 'signed operations' (for instance widening with sign extension vs filling with zero bits, or arithmetic vs logical right-shift), but most operations would be "sign-agnostic". It would be a more explicit way of doing integer math, closer to assembly, but it would also be a lot less confusing.
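A sketch of those few sign-specific operations, using Rust's types to pick the 'view' (the bit pattern 0xFFFF_FFF0 is just an arbitrary example):

```rust
fn main() {
    let bits: u32 = 0xFFFF_FFF0; // same bag of bits as -16i32

    // Widening: zero-fill vs sign-extension depends on the view.
    assert_eq!(bits as u64, 0x0000_0000_FFFF_FFF0); // zero-extend
    assert_eq!((bits as i32) as i64, -16); // sign-extend

    // Right shift: logical vs arithmetic.
    assert_eq!(bits >> 4, 0x0FFF_FFFF); // logical (u32)
    assert_eq!((bits as i32) >> 4, -1); // arithmetic (i32)
}
```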
Modulo wraparound is convenient in non-trivial expressions involving addition, subtraction and multiplication because it will always give a correct in-range result if one exists. "Checking for overflow" in such cases is necessarily more complex than a simple check per operation; it must be designed case by case.
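A small Rust example of that property, with values chosen so the intermediate sum overflows but the final result is in range:

```rust
fn main() {
    let a: u32 = u32::MAX - 5;
    let b: u32 = 10;
    let c: u32 = 10;

    // a + b overflows, but (a + b) - c has an in-range answer.
    assert_eq!(a.checked_add(b), None); // a naive per-op check rejects it
    assert_eq!(a.wrapping_add(b).wrapping_sub(c), u32::MAX - 5); // correct

    // A per-operation overflow check would have to be redesigned case by
    // case (e.g. reassociate to a - c + b) to accept this expression.
}
```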
Overflow checks can be very expensive without hardware support. Even on platforms with lightweight support (e.g. x86 'INTO'), you're replacing one of the fastest instructions out there -- think of how many execution units can handle a basic add -- with a sequence of two dependent instructions.
The vast majority of the cost is missed optimization, due to having to preserve partial state in case of an overflow error. The checks themselves are trivially predicted, and that's when the compiler can't optimize them out entirely.
I usually do read the linked content but I agree with GP poster that comments are often more informative.
Yes there is sometimes an echo chamber here, but it's only for limited topics. It very much has a Silicon Valley feel to it, but @dang and I have gone around on this and he assures us that the readership and comments have broad geographic representation.[1] It's a worldwide echo chamber. :)
Fortunately the echo chamber doesn't exist for most submissions. Most of the discussion on HN is on non-polarizing topics.
profootballtalk.com works great if you don't want to vote or comment
macrumors.com: great functionality
nitter.net happily takes the place of twitter.com
drudgereport.com works great, and I rarely turn on JS when I go to the sites he links to; usually the text on target sites is there, if not as pretty as it could be
individual subreddits (e.g. old.reddit.com/r/Portland/) are quite good w/o JS. But the "old." is probably important.
I admit that there are lots of sites that don't work, e.g. /r/IdiotsInCars/ doesn't work because reddit uses JS for video. For so many sites the text is there but images and videos aren't. Also need to turn off "page style" for some recalcitrant sites.
In conclusion, contrary to your JS experience, I'd say that I spend over 90% of my time browsing w/o JS and am happy with my experience. Things are lightning fast and I see few or no ads. I don't need an ad blocker since 99% of ads just don't happen w/o JS.
> In conclusion, contrary to your JS experience, I'd say that I spend over 90% of my time browsing w/o JS and am happy with my experience. Things are lightning fast and I see few or no ads. I don't need an ad blocker since 99% of ads just don't happen w/o JS.
Well, you still have lots of tracking stuff loaded probably, unless you got something extra for blocking trackers. A tracking pixel does not need JS. A font loading from CSS does not need JS. Personally I dislike those too, so I would still recommend using a blocker for those.
> Well, you still have lots of tracking stuff loaded probably, unless you got something extra for blocking trackers.
Yes I'm sure I have that stuff loaded. But I don't care because it's quite ephemeral:
I exit Firefox multiple times a day, there's really no performance cost to doing that after every group of websites. E.g. if, while reading HN, I look up something on Wikipedia, or I search with Bing or Google, everything goes away together.
In my settings: delete cookies and site data when Firefox is closed
In my settings: clear history when Firefox closes, everything goes except browsing and download history
No suggestions except for bookmarks.
So when I restart Firefox to then browse reddit it starts with a clean slate.
Comcast insisted I purchase a DOCSIS 3 modem quite a while ago. Once downloads are at 100 Mbps+, does it really matter if I repeatedly re-download a few items to cache?
The only noticeable downside is when I switch to Safari to view something that needs JS, I then see ads for clothing that my wife and daughters might be interested in. I presume this is due to fallback to tracking via IP address. Of course I always clear history and empty caches in Safari.
Obviously this doesn't work for someone who wants to or needs to keep 100 browser windows open at once, for months at a time. But that's not me. I don't think that way, never have.
Edit: just had to add that sites like Wikipedia are better w/o JS (unless you edit?). I don't see those annoying week-long pleas for money. Do they still do those?
> Obviously this doesn't work for someone who wants to or needs to keep 100 browser windows open at once, for months at a time. But that's not me. I don't think that way, never have.
Caught me. Tab hoarder here : )
> I don't see those annoying week-long pleas for money. Do they still do those?
They still do those. At least, I've seen them within the past year.
That's the thing here: morals are flexible for most people if there's a decent paycheck on the other side. It's another reason why politics is so corrupt these days. There are plenty of ways to avoid direct corruption: "campaign funds", board of directors seats, and lucrative corporate positions after a political career. Most recently there's Nick Clegg, a former UK politician who will now be paid $15 million a year, plus probably bonuses and stock as well, to represent Facebook.
> I really hate to even say this, but "Crazy Vlad" nearby is what true evil is about.
I would say that systematic evil, evil that is a consequence of technology or reality, will always surpass individual evil. It's like comparing the horrors of slavery as an instituted system to one really evil, sadistic slave owner. Or how 90% of the Native Americans were killed by viruses rather than by the evil of greedy conquerors. Perhaps you could argue that Putin is a manifestation of an evil system as well, but I'd think that if he were replaced by a good person tomorrow, the world would be a radically better place.
Google is clearly working hard, as a powerful institution, to perpetuate the system.
No, that has turned to shit (for me, anyway). It used to be fine; now it presents a captcha when JS is off. Okay, so I switch from Firefox to Safari (where I leave JS on), and it still presents a captcha. I'd rather use the original site with JS than solve captchas.
That has been my consistent recent experience for a multitude of those.
> or a Tor-like service
I've never used Tor, but aren't there a lot of complaints of repetitive captchas when using it?
> randomizes one's IP address and browser fingerprint
I haven't followed this closely, but didn't Apple make claims that they would soon have an opt-in service that did something like this?
I'm curious: what evidence do you have that he is any better? Case in point, the article being discussed here: awarding his brother the top NYPD job, and awarding a plum job to his good friend who retired in 2014 because of a federal corruption scandal. And those two were just his first two weeks in office.
That's quite a sad statement if you really believe "all that matters" is being just slightly less incompetent than the most incompetent. And even on that, the jury is still out, as he's only been in office six weeks.
> "all that matters" is being just slightly less incompetent
Many are upset about that reality. But I've been observing politics for many decades and I haven't seen anything better. NY doesn't do recalls (or didn't when I lived there, maybe it changed). California does allow them but that seems to result in an altogether different clown show.
Needs 2006 in the title. Needs stupid JavaScript just to read it. Plays stupid JavaScript games.
But it's actually decent content. It's an overview of how Babbage's Analytical Engine was intended to function. The implementation problem was that what Babbage wanted was way way way over-designed. E.g. 50 decimal digit arithmetic.
They started a trillion dollar company because they were smart enough to write a program that actually worked, that turned 1970s hardware into something that was useful to a lot of people.
[1] https://news.ycombinator.com/item?id=19613055 That original link is now broken. I don't think Toyota paid to scrub it, probably just decay.