I've switched to DDG and I've hardly looked back. Google's search has seriously declined in quality. Most search operators [1] are no longer supported. Even the ones directly in the "tools" menu don't work.
For example, search "Nothing Can Stop Google. DuckDuckGo Is Trying Anyways site:medium.com" and set a custom date range to sometime last year. You'll see results claiming this blog post was published on 2018-October-31, or whichever date you prefer, because I assume they just fuzzily fit the post date -into- that range. You can make Google tell you this blog post is 2+ years old.
The Google.com I found useful in the early 00's even had document qualifiers, so I could search for strings but filter to just PDFs, or HTML, or JPGs. Now I have to pay for these features via a Google App Engine private search instance. It feels like having somebody spit in your face: features that were free quietly became pay-to-play with zero warning.
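If memory serves, those qualifiers looked something like this (example queries are hypothetical, not tested against current Google):

```
"red-black tree" filetype:pdf        restrict results to PDF documents
"conference schedule" filetype:html  restrict to plain HTML pages
site:example.com "install guide"     restrict to a single host
```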
I find it to be an awesome way of finding a document that's actually a few-pages-long list of random document titles, one of which happens to be the one you're searching for.
For high-throughput compute and cache correctness, here are the primers I can give. None of these are required. Strong language, CS, and interpersonal skills, the ability to work well within a team, and a commitment to quality (via unit and integration testing, and skill working with linters) can get you a lot further.
The resources I'm linking are supplementary to the above, and you'll likely encounter them in the wild. They'll help you build a base of knowledge and give you terminology to search for and work with.
- If you plan on working with Linux, this is an excellent reference: http://man7.org/linux/man-pages/dir_section_2.html Remember there is no magic in Linux: everything eventually has to go through a system call. So if you learn the system calls, you can learn how things work :)
This should give you a good primer on concurrency and DBs. For networking, a basic TLA+ certification class will likely be 99% review, but where it isn't, it will offer great insight.
C89 only requires that static values be initialized.
A modern C standard (section 6.7.8(10)) requires that static values be initialized, but what value they are initialized to is _technically_ indeterminate.
There is the guideline that integers must be zero and pointers must be NULL. But if a static storage-class object doesn't consist purely of integers, pointers, or (fixed-size) structures, arrays, and unions whose elements can be recursively reduced to integers or pointers, then the standard says the initialized value is indeterminate.
While relatively straightforward, there are a few gotchas.
The only reason I'd seriously consider upgrading is if they fixed the typing speed. The input latency on iPhone is just _horrible_, and I know after another iOS update it'll be just as bad.
I taught myself to touch type at ~80WPM with my thumbs, and now I have this bad habit of typing a full Google search query, then just staring at my phone for 90 seconds until all the text magically fills in and my phone shakes with haptic feedback that should've occurred nearly two minutes ago.
It is such a cheap experience I had to turn haptic feedback off, because it's not even remotely synced to touch inputs.
10 years ago there was less file system integration, user-land virus scanning, kernel-level virus scanning, OS hooks, OS-compatibility redirection, and 32/64-bit compatibility checking.
Most of this was added during the NT 6.0 era, which began ~12 years ago. Vista was the first OS using NT 6.0, and Vista was VERY much not in vogue ~12 years ago. In fact it was avoided like the plague as of 2008 (unless you were using 64-bit and had >=4GiB of RAM).
So many were still using Windows XP 32-bit (NT 5.1) or the NT 5.2 kernel. Even those with >=4GiB of RAM were on Windows XP 64-bit (NT 5.2), as Vista had a ton of driver problems.
I'd really like to thank Cyan for their contributions. `zstd` and `lz4` are great. I'm pretty much exclusively using `zstd` for my tarball needs these days, as it beats the pants off `gzip`, and for plain-text code (most of what I compress) it performs amazingly. (Shameless self-promotion:) I wrote my own tar clone to make use of it [1].
It is nice to have disk IO be the limiting factor on decompression even when you are using NVMe drives.
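For anyone who wants to try the same workflow without a custom tar clone, a minimal sketch (assumes GNU tar 1.31+ with native `--zstd` support and `zstd` on PATH; file names are made up):

```shell
# Build a small tree of plain text, the kind of input zstd shines on.
mkdir -p demo out
echo "plain text compresses very well" > demo/notes.txt

tar --zstd -cf demo.tar.zst demo/            # create a zstd tarball
tar --zstd -xf demo.tar.zst -C out           # extract it again

# Older tars without --zstd can pipe through any compressor instead:
tar -I 'zstd -19' -cf demo-max.tar.zst demo/ # higher compression level
```

Decompression speed is largely level-independent, so `-19` costs compression time but not extraction time.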
For those of you who don't remember, Xenix was Microsoft's UNIX, which it marketed prior to releasing DOS. Originally the idea was that Xenix was the multi-user OS and DOS was the single-user one.