You have to first prove that p is not undecidable. That step is not taken in many contrapositive-based proofs in mathematics. "First prove that p is not undecidable" is not a step in Analysis I by Terence Tao, for instance.
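For concreteness, here is the asymmetry sketched in Lean (this is the standard constructive reading, not a claim about how Tao presents it): contraposing p → q needs no side conditions at all, while the reverse direction is exactly where decidability of q would formally enter--classical mathematics gets it for free from excluded middle, which is why no textbook lists it as a step.

    -- Forward contraposition: constructive, no decidability needed.
    theorem contrapose {p q : Prop} (h : p → q) : ¬q → ¬p :=
      fun hnq hp => hnq (h hp)

    -- Reverse contraposition: this is where decidability shows up.
    theorem contrapose_rev {p q : Prop} [Decidable q]
        (h : ¬q → ¬p) : p → q :=
      fun hp => if hq : q then hq else absurd hp (h hq)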
uh... dude, you're calculating taxes wrong. Income tax is never a flat 30% or whatever; it's always staggered into brackets. That marginal figure looks to me like the WORST or maximum rate that can be paid, not the lowest.
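To make the bracket arithmetic concrete, here's a minimal sketch in Python with made-up brackets (the numbers are purely illustrative, not any real country's rates):

    # Marginal (staggered) income tax: each rate applies only to the
    # slice of income inside its bracket. Brackets here are invented.
    BRACKETS = [(20_000, 0.10), (40_000, 0.20), (float("inf"), 0.30)]

    def tax(income: float) -> float:
        owed, lower = 0.0, 0.0
        for upper, rate in BRACKETS:
            if income <= lower:
                break
            owed += (min(income, upper) - lower) * rate
            lower = upper
        return owed

    # tax(50_000) = 2_000 + 4_000 + 3_000 = 9_000: an effective rate
    # of 18%, even though the top marginal rate hit is 30%.
    print(tax(50_000))

The 30% only ever applies to the last slice of income, which is why quoting it as a flat rate overstates the bill.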
Dublin rates don't make sense; are people working two jobs to sustain life?
The article you linked is from a FOIA request. The CIA's In-Q-Tel was an investor in the Keyhole product, and Google has continued to receive exclusive contracts for Google Maps.
The tweets note that Sergey proposed the name Bird mode, not the CIA, although the article suggests he (and Google) was working closely with the CIA and US intelligence agencies. The CIA also declined to provide details on why/how they sold the In-Q-Tel-funded product to Google.
Which is all to say: you should question what Google is doing with Maps and the massive amount of data it's mining every moment it's on, rather than dismissing such statements as baseless conspiracy theories.
Abstraction is useful when the resulting code is logically simpler, much like, say, category theory makes certain results in algebraic topology seem obvious.
Perhaps you simply mean a different thing when you say "abstraction".
A colleague of mine who gets confused by "all the symbols" said that they appreciated the verbose stream notation that I build up throughout the article. I would recommend giving the article an earnest try up through page 5. After that, jump to page 17 and read Section 5C on Set Operations. See if the standard concepts of union, intersection, difference, etc., mixed with the verbose stream notation, give you insight and, ultimately, the confidence to keep tackling the work.
I'm not sure how what you're proposing is any better than sed. I said "math envy" since you resorted to algebraic ring theory when there was a much simpler solution. xD
There are many Turing-complete languages. There are many stream-based languages. Moreover, there are many Turing-complete stream-based languages. These are all instances of the presented stream ring algebra. I wanted to understand the foundation of such languages so as to have the algebraic toolkit to generate more instances.
Superoptimization is slow, and it's generally considered feasible only for shortish bits of code. A block of about 30 instructions takes around 10 minutes to superoptimize even on state-of-the-art stochastic superoptimizers.
You also need correct semantics for the instructions. Go read the PCMPISTRI instruction and try to write out the SMT formulas for it so you can prove equivalence. Floating point is in an even worse state, since most interesting floating-point transformations are only legal if you're willing to relax bit-equivalence requirements.
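For a taste of what "write out the SMT formulas" means in the trivial case, here's an equivalence check using Z3's Python bindings (pip install z3-solver) for the rewrite x*2 -> x<<1 on 32-bit values; now imagine doing the same for PCMPISTRI's string-matching semantics:

    # Prove x*2 == x<<1 for every 32-bit bitvector x. A superoptimizer
    # needs a formula like this for *each* instruction it may emit.
    from z3 import BitVec, prove

    x = BitVec("x", 32)
    prove(x * 2 == x << 1)  # prints "proved"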
Superoptimization is not mature enough to become part of the regular compiler flow. What is happening instead is that people are using superoptimizers to find missing optimizations in the compiler--John Regehr is basically doing this for LLVM's InstCombine pass with Souper.
Lol, my comment was restricted to SIMD, which definitely does not take 10 minutes.
Secondly, no mainstream compiler actually compiles code to the PCMPISTRI instruction; presumably it was meant to be used directly in assembly. I'm not sure why you're bringing this obscure instruction into a discussion of superoptimizers.
I personally have a paranoid fantasy where NSA/GCHQ introduced this instruction to speed up password cracking. :D
You're citing the classic superoptimization paper, though it's actually quite old at this point. The goal of superoptimization has always been to try to compile to the no-real-C-equivalent instructions in an ISA, which includes the weird vector instructions such as PCMPISTRI. (I believe Strata attempted to match some of these instructions, and I think they managed to get formulas for about half of the weird immediate forms--these are the instructions that caused them to match "half" an instruction.)
In any practical vectorized tight inner loop, the block you're trying to optimize is inherently going to be large. Superoptimization is exponential in the size of the block being optimized, which limits its utility. That was my entire point: it becomes unacceptably expensive far too quickly to be used in compilers. (Some of the code I'm looking at right now has hundreds of instructions in a single basic block, which is definitely not atypical for a compiler.)
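To see where the exponential comes from, here's a toy brute-force superoptimizer in Python over an invented four-op ISA (test-based equivalence instead of a real SMT check, so it's a sketch, not the real thing):

    from itertools import product

    # Invented one-register ISA: each instruction is a unary function.
    OPS = {
        "inc": lambda x: x + 1,
        "dec": lambda x: x - 1,
        "dbl": lambda x: x * 2,
        "neg": lambda x: -x,
    }

    def run(program, x):
        for op in program:
            x = OPS[op](x)
        return x

    def superoptimize(target, max_len, tests=range(-4, 5)):
        # len(OPS)**k candidates at length k: exponential in block size.
        for k in range(1, max_len + 1):
            for prog in product(OPS, repeat=k):
                if all(run(prog, x) == target(x) for x in tests):
                    return prog
        return None

    # Shortest sequence computing 2*x + 2: finds ("inc", "dbl").
    print(superoptimize(lambda x: 2 * x + 2, max_len=3))

Even with just four ops, a length-30 block means 4^30 candidate programs; that's the wall you hit.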