I'm tempted to exclude the Axiom of Choice (AC) from any math I do, and instead include the Axiom of Determinacy (AD) [1] (which contradicts AC), so that all subsets of R^n are measurable [2] (thus precluding the Banach–Tarski paradox), and the Axiom of Dependent Choice (DC), which is weaker than AC but sufficient to develop most of real analysis. Like, I don't really care if not all vector spaces have a basis; it's enough for me that all interesting vector spaces do (I think).
But then we have this (from [3]):
> For each of the following statements, there is some model of ZF¬C where it is true:
> - In some model, there is a set that can be partitioned into strictly more equivalence classes than the original set has elements, and a function whose domain is strictly smaller than its range. In fact, this is the case in all known models.
So it's really pick your poison - either one of these:
- you can take apart a ball and put the pieces back together into two balls
- there exists a function whose range is larger than its domain
Interestingly enough, the “V” in RISC-V also stands for “vector” [1]:
> and so we named it RISC-V. As one of our goals in defining RISC-V was to support research in data-parallel architectures, the Roman numeral ‘V’ also conveniently served as an acronymic pun for “Vector.”
I always thought it was a shame the ASCII table is rarely shown in columns (or rows) of 32, as it makes a lot of this quite obvious. E.g., http://pastebin.com/cdaga5i1
It becomes immediately obvious why, e.g., ^[ becomes escape. Or that a letter's code is just 40h plus its ordinal position in the alphabet (or 60h for lower-case). Or that we shift between upper and lower case with a single bit.
esr's rendering of the table - forcing it into eight columns of 16 to fit hexadecimal (splitting on the low 4 bits), rather than four columns of 32 (splitting on the low 5 bits) - makes the relationship between ^I and tab, or ^[ and escape, nearly invisible.
It's like making the periodic table 16 elements wide because we're partial to hex, and then wondering why no-one can spot the relationships anymore.
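To make that concrete, here's a small Python sketch (mine, not from the linked pastebin; standard library only) that checks the relationships described above - the caret/control characters, the 40h/60h letter offsets, the single case bit - and prints the table as four columns of 32:

```python
def ctrl(ch: str) -> int:
    """Caret notation: ^X is X with the high bits cleared (X & 0x1F)."""
    return ord(ch) & 0x1F

assert ctrl('[') == 0x1B          # ^[ is ESC
assert ctrl('I') == 0x09          # ^I is TAB
assert ctrl('M') == 0x0D          # ^M is CR

# A letter's code is its 1-based position in the alphabet plus 40h (or 60h).
assert ord('A') == 0x40 + 1
assert ord('a') == 0x60 + 1

# Upper and lower case differ only in one bit (0x20).
assert ord('A') ^ 0x20 == ord('a')
assert chr(ord('q') & ~0x20) == 'Q'

# Print the table as four columns of 32, which makes all of this visible.
for row in range(32):
    print("  ".join(f"{row + 32*col:02x} {chr(row + 32*col)!r:>6}" for col in range(4)))
```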
Imagine the demand: have only founders and visionaries who code. That won't work and doesn't make sense.
It is perfectly fine to simply do your job; however, this "doing" has shifted. It is not about slavishly following specs, it is about giving feedback in complex situations. No need to be a prodigy, just give sufficient feedback.
And BTW: Camille Fournier wrote one of the best books on professional development as a software engineer. It is called "The Manager's Path: A Guide for Tech Leaders Navigating Growth and Change". This is golden.
Reminds me of Sean Parent's recounting[1] of how his son's beads and string toy could represent a tree data structure with stateless traversal and some other interesting properties, like:
> ...to erase elements from some point in my model to some other point in my model, what I do is grab those two locations on the string, and I pick the whole thing up, and the beads that fall off are the ones that get erased ... (48:41)
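One loose way to read that analogy in code (my own toy sketch with made-up names like Bead and erase_between, not Parent's actual data structure): with a singly linked chain of beads, "grabbing two spots and lifting" is a single splice, and everything between the grips falls off.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Bead:
    value: int
    next: "Optional[Bead]" = None

def erase_between(left: Bead, right: Optional[Bead]) -> None:
    """Drop every bead strictly between `left` and `right` by relinking once."""
    left.next = right

# Build a string of beads 0..5.
beads = [Bead(i) for i in range(6)]
for a, b in zip(beads, beads[1:]):
    a.next = b

erase_between(beads[1], beads[4])   # beads 2 and 3 fall off

node, out = beads[0], []
while node:
    out.append(node.value)
    node = node.next
print(out)  # [0, 1, 4, 5]
```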
First, you have to distinguish between intelligence gathering, which includes "spies", and covert action, which actively tries to do something to the enemy. There's an argument for keeping those separate, best articulated by Gen. Reinhard Gehlen in his memoirs.
In the US, only the CIA really does both. DIA, DNI, NSA, NGA, and the intelligence services of the military branches are mostly pure intel gathering.
On the active side are also the various special forces units, which the US has gathered up under the JSOC. They sometimes do intel gathering, but long-term covert is rare. Anybody in the field on the active side will probably become known to the enemy sooner or later.
There are historic successes in both areas, but not a huge number of them. The US put a huge amount of effort into assessing USSR military capabilities, and mostly got it wrong. Poor intel led to the "bomber gap" and "missile gap", and missed the USSR's A-bomb and H-bomb, as well as the breakup of the USSR. The USSR put a lot of effort into spying on atomic weapons, with some success (succeeded with A-bomb, failed with H-bomb), got spies into the State Department (useless, according to KGB archives released at the end of the cold war) and struggled to find out how the US made reliable jet engines, without much success.
There's a history of special operations, written for that community, for which I don't have a reference right now. It goes over the major special operations of the 20th century. This was written pre-9/11. For each they ask "Was it a success?" and "Was it worth it?" The only operations for which both answers are "yes" were Eben-Emael[1] and Entebbe.[2] Only Eben-Emael changed the course of a war.
Except for the parts in SMM (often considered "ring -2", below virtualization) that are installed by UEFI to implement stuff like Authenticated Variables (the thing that makes UEFI Secure Boot work and requires a rather complete crypto library for it). See https://firmware.intel.com/sites/default/files/resources/A_T...
What killed SBlaster was not motherboards getting built-in sound.
It was movie studios convincing Microsoft to ban from Windows the best features of SBlaster cards (those features are STILL banned, by the way) because they were afraid of people using them to pirate movies.
It is a reaaaaally long story, but to make it short: MS radically changed how Windows drivers worked, and what a driver could do, during the transition to DVDs. That killed sound cards (no more advanced 3D sound rendering of the kind Sound Blaster could do) and CRTs (no more funky resolutions, no more overlay mode, and other fancy CRT features), and it initiated the slow death of VGA (that also happened for other reasons, but this contributed a lot).
Alternatively, the old-school variation (works since at least Windows 3.11): [alt] + [space] to open the upper-left corner menu, [m] to move, then [arrow keys] to move the window around.
Not sure if anyone in here subscribes to The Signal Path [1] on YouTube, but he does tons of teardowns (and repairs!) of RF/signal processing hardware new and old. You'll see precisely what jaquesm is talking about. The marquee specs might not look so great on paper, but you'll find absurd levels of craft and attention to detail in some of that older equipment.
Glancing at the posts, I have a question: do they defend against the pathological cases that require O(n) passes, and therefore at least naively have O(n^2) or worse behavior? Let's say that a jump can be encoded with either an 8-bit signed offset in 2 bytes, or a 32-bit signed offset in 5 bytes. (Supposedly x86 has a 16-bit offset version, but that uses the same opcode as the 32-bit offset, and seems unsupported in 64-bit mode.) Consider the following:
L0: jmp M0
L1: jmp M1
L2: jmp M2
...
L25: jmp M25
M0: jmp N0
M1: jmp N1
...
M25: jmp N25
N0: jmp O0
...
...
Z0: jmp LAST
Z1: jmp LAST
...
Z25: jmp LAST
[more than 128 bytes of unrelated stuff]
LAST:
If all the jmps are encoded with 5 bytes, then the offset from e.g. L0 to M0 is 130 bytes, which is over the limit of 127 you can get from an 8-bit signed offset. The optimistic approach will discover in the first pass that all the Z jmps have to be 5 bytes. Pass 2, assuming it goes left to right, will see that everything up to Y24 still has offset ≤ 127 (even "Y24: jmp Z24" could be 127 bytes or 130 depending on how Y25's jump is encoded), but will find that "Y25: jmp Z25" has an offset of 130 and must be 5 bytes. Pass 3 will discover that Y24 needs to be 5 bytes too, and discover nothing else. Pass 4 will discover the sole fact that Y23 needs to be 5 bytes, and so on. If the total number of jmps here is n, then it will take close to n passes to fix up all the jmps. (If the passes go from right to left, or even alternate the direction or randomize... I think it's still possible to construct an example where pessimizing Y10 kicks Y20 over the edge, which kicks Y9 over the edge, which triggers pessimizing Y21, etc.)
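For what it's worth, here's a rough Python sketch of the optimistic fixed-point loop that counts passes. It's my own simplified two-tier variant of the construction above (not any real assembler's algorithm, and the item format, labels, and SHORT/LONG sizes are my own simplification); it measures offsets x86-style, relative to the end of the jump, so the exact cutoffs differ slightly from the numbers above, but the pass count still grows roughly linearly with the number of jumps:

```python
SHORT, LONG = 2, 5

def relax(items, labels):
    """items: list of ('jmp', label) or ('pad', nbytes). Returns the pass count."""
    size = [SHORT if kind == 'jmp' else arg for kind, arg in items]
    passes = 0
    while True:
        passes += 1
        # Recompute addresses from the current size estimates (once per pass).
        addr, pos = [], 0
        for s in size:
            addr.append(pos)
            pos += s
        end = pos
        changed = False
        for i, (kind, arg) in enumerate(items):
            if kind != 'jmp' or size[i] == LONG:
                continue
            target = end if arg == 'END' else addr[labels[arg]]
            offset = target - (addr[i] + size[i])
            if not -128 <= offset <= 127:   # doesn't fit in a signed byte
                size[i] = LONG
                changed = True
        if not changed:
            return passes

# Two tiers of jumps: each Y_i jumps to Z_i, each Z_i jumps over the padding
# to END. Widening the Z's pushes Y_26 over the edge, which pushes Y_25, ...
n = 27
items, labels = [], {}
for i in range(n):
    items.append(('jmp', f'Z{i}'))            # Y_i
for i in range(n):
    labels[f'Z{i}'] = len(items)
    items.append(('jmp', 'END'))              # Z_i
items.append(('pad', 128))                    # unrelated bytes before END

print(relax(items, labels))   # ~29 passes for the 54 jumps in this input
```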
This isn't a flaw unique to the optimistic approach, by the way—I could construct a similar version where all the L, M, etc. chains go from 0 to 62, and the "more than 128 bytes of unrelated stuff" becomes "0 bytes of unrelated stuff"; then, in the pessimistic approach, once the Zs have settled down, the subsequent passes will discover, one at a time, that Y62 only has a 126-byte offset and can be encoded with 2 bytes, then that Y61 can be similarly encoded, and so on.
I'm sure this doesn't actually happen in practice, unless you're compiling malicious code off the internet (in which case there are likely worse vulnerabilities inside the compiler)... but I'm curious if assemblers have nonetheless seriously addressed the problem.
One of the posts talks about NUMBER_OF_PASSES; this suggests, but I don't think the post explicitly states, that the defense mechanism is "arbitrarily limiting the number of passes". Do you know if fasm and others actually do this? (Looks like nasm has a command-line switch to turn off multi-pass branch offset optimization.)
The article says "by Evelyn Lamb February 21, 2019"
..but at the bottom says "This article was originally published in our “The Absurd” issue in June, 2017."
..but John Baez's article "Rectified Truncated Icosahedron" (i.e. the name of Kaplan's solid) is dated April 1, 2016. https://blogs.ams.org/visualinsight/2016/04/01/rectified_tru... .. Lamb somehow omits the solid's name. Baez refers to Johnson, Kaplan, McNeill and a lot of other people/sites as well. And links to them! Both have blogged for the AMS.
Unless Lamb is Baez, this seems oddly like plagiarism. But I guess there must be some decent explanation.
edit: Although maybe that's ridiculous - the articles are different: she's got some quotes from each of the main players - even Baez - including tidbits about other near misses (only the Simpsons-Fermat one was new to me, so those didn't seem to be the meat of the article).
You can quote me, as I've gotten down and dirty with the Xbox ( https://github.com/monocasa/xbvm ), and have written board support packages for CE for work.
The OG Xbox kernel is an extremely heavily stripped down and modified Win2k kernel. No support for user mode, multiple address spaces, or more than one running process. No win32 in the kernel. USB, sound, and the vast majority of the graphics driver are statically linked into the process executable.
Unitron made Apple clones (I think the Apple II clones were licensed, but they ran into trouble when they tried to set up a joint venture for the Mac clone and decided to just copy it).
Gradiente and Sharp made MSX clones (they were licensed).
At the beginning of the '90s our economy opened, and our then President cited our slow computers as one of the main reasons for doing so (just PR talk, but it is interesting to note).
Are memristors real? This paper[0], in addition to a handful that have appeared on HN in the last few months, asserts that "the 2008 memristor is not the 1971 postulate and neither of them is fundamental" and moreover that "[t]he ideal memristor is an unphysical active device and any physically realizable memristor is a nonlinear composition of resistors with active hysteresis."
I'm confused by something like this being an open question, and even more confused about why this debate isn't lighting up the electrical/computer engineering community as far as I can tell.
1) There is this pseudo-scientific methodology called NHST (null hypothesis significance testing), devised pretty much by accident in the 1940s-50s, that tricks people into thinking they have discovered something interesting.
2) It is so much easier to generate these "significant" results than to study a phenomenon and come up with a model capable of making a precise prediction, then have multiple people/groups carefully collect data to check that prediction.
3) People willing to do the NHST thing can publish papers orders of magnitude faster than people not willing to.
4) Number of publications is the primary metric of success in academia.
5) People willing to do the NHST thing slowly take over the field as those who want to do real science get pushed out, retire, or leave for greener pastures.
6) Eventually the field is filled with people generating worthless papers that cannot be replicated in principle and often contradict each other.
- a) Because all that matters is you got a significant result published in a peer reviewed journal
- b) The discovery has already been made, the statistics said so, why would anyone need to replicate it?
And to preempt a common confusion: this has nothing to do with Bayesian vs frequentist - testing a strawman null hypothesis can be done using Bayesian math just as well as frequentist.
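A toy illustration of points 1-3 (my own sketch using numpy/scipy; the study counts and sample sizes are arbitrary, and no real data is involved): run enough null-hypothesis tests on pure noise and "significant" findings show up right on schedule, with no model or real effect anywhere in sight.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n_studies, n_per_group = 200, 30

false_discoveries = 0
for _ in range(n_studies):
    # Both "treatment" and "control" are draws from the same distribution:
    # the null hypothesis is true by construction.
    a = rng.normal(size=n_per_group)
    b = rng.normal(size=n_per_group)
    if ttest_ind(a, b).pvalue < 0.05:
        false_discoveries += 1

# Expect roughly 5% of studies to be "publishable" despite there being
# nothing to discover.
print(f"{false_discoveries} 'significant' results out of {n_studies} null studies")
```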
Some decent intros to this problem (yes, it was pointed out long ago but there is no stopping the need to publish apparently):
https://youtu.be/Gv2I7qTux7g
C but with the problems fixed.