I've read five of the seven papers on the list. The two I haven't read are Cerf and Kahn's, and Berners-Lee's.
Turing's paper on computability was particularly hard to follow, for me, because he used gothic-font upper-case characters to name all sorts of objects, and all those characters looked pretty much the same to me! I had to use auxiliary materials to make my way through the paper. Today, I would recommend reading it alongside Charles Petzold's easy-to-follow book on the paper: https://www.amazon.com/Annotated-Turing-Through-Historic-Com...
Cook's paper on NP-completeness was also hard to follow (again, for me). As with Turing's paper, I had to use auxiliary materials to make my way through it. Today, I would recommend instead reading an introductory book on computational complexity that works through Cook's proof.
Shannon's paper is a work of art, clearly articulated and beautifully written. It's just not casual reading, to put it mildly.
Brin and Page's paper, and Codd's paper, are not hard to follow, at least as I remember them, but understanding Brin and Page's work requires some knowledge of Linear Algebra.
Where does the Brin-and-Page paper require linear algebra? It mentions "eigenvector" once, in a tangential remark. The "simple iterative algorithm" is how you find the fixed point of any contraction mapping, linear or not. Knowing that it is also an eigenvector is just a distraction -- you aren't going to use Gaussian elimination, not if you know what is good for you.
It doesn't require linear algebra to understand the paper or how the algorithm works, but it does require linear algebra to understand why the algorithm works. In general, since the induced 1-norm of a stochastic matrix S is exactly 1, not strictly less than 1, the mapping x↦Sx is NOT a contraction. Neither convergence of the power method nor uniqueness of the fixed point is guaranteed. (If there are multiple fixed points, there are multiple inconsistent rankings.)
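To make that concrete, here is a minimal sketch (plain NumPy, my own illustration, not from the paper): a column-stochastic matrix for two pages that link only to themselves. Every probability vector is a fixed point of x↦Sx, so power iteration just returns whatever you started with:

    import numpy as np

    # Two pages that link only to themselves: S is column-stochastic,
    # but x -> Sx fixes *every* probability vector, so there is no
    # unique ranking for power iteration to converge to.
    S = np.array([[1.0, 0.0],
                  [0.0, 1.0]])

    for x in (np.array([1.0, 0.0]), np.array([0.25, 0.75])):
        for _ in range(100):
            x = S @ x              # the iterate never moves
        print(x)                   # two inconsistent "rankings"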
In the paper, the significance of the so-called "damping factor" is not clear. However, with linear algebra, we know that the damping factor makes the stochastic matrix positive rather than merely nonnegative. Hence the Perron eigenvalue is "simple" (i.e., of multiplicity one), the Perron vector is unique, and the power iteration must converge to it.
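A hedged sketch of that claim (a tiny three-page web with numbers I made up): with 0 < d < 1, the damped matrix G = dS + (1-d)/n * J is entrywise positive, so by Perron-Frobenius the eigenvalue 1 is simple and power iteration settles on the unique Perron vector from any starting distribution:

    import numpy as np

    d = 0.85                           # damping factor
    # Column j holds the normalized outlinks of page j (column-stochastic).
    S = np.array([[0.0, 0.5, 1.0],
                  [0.5, 0.0, 0.0],
                  [0.5, 0.5, 0.0]])
    n = S.shape[0]
    G = d * S + (1 - d) / n * np.ones((n, n))   # entrywise positive

    x = np.full(n, 1.0 / n)            # start uniform; any start works
    for _ in range(50):
        x = G @ x                      # power iteration
    print(x, x.sum())                  # unique PageRank vector, sums to 1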
That's more easily explained using fixed-point theory: the damping factor turns the mapping into an actual contraction (on the space of probability distributions). Not to mention that it has a simple common-sense justification (you don't want to get stuck in a subnetwork that only links to itself, or drown out a subnetwork that has more outlinks than inlinks).
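A quick numerical check of the contraction claim (same toy matrix as in the sketch above, again my own illustration): for probability vectors x and y, Gx - Gy = dS(x - y) because the rank-one damping term cancels, so 1-norm distances shrink by at least the factor d every step:

    import numpy as np

    rng = np.random.default_rng(0)
    d = 0.85
    S = np.array([[0.0, 0.5, 1.0],
                  [0.5, 0.0, 0.0],
                  [0.5, 0.5, 0.0]])
    n = S.shape[0]
    G = d * S + (1 - d) / n * np.ones((n, n))

    x = rng.random(n); x /= x.sum()    # two random probability vectors
    y = rng.random(n); y /= y.sum()
    for _ in range(5):
        ratio = np.abs(G @ x - G @ y).sum() / np.abs(x - y).sum()
        print(ratio)                   # never exceeds d = 0.85
        x, y = G @ x, G @ y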
There is probably some gain from understanding the algorithm specifically as a Markov chain iteration (if nothing else, it provides a great example of one), but I think it's perfectly possible -- and easier -- to understand it as a fixed-point iteration on a compact space. And I am someone who does algebra for a living and normally explains everything algebraically whenever possible...
Yes, but the paper blathers on and on in prose about history and related work and goals and future work. It might only mention eigenvector once in a tangential remark, but that remark is like 80% of the algorithm content of the paper.
Thank you for sharing this on HN.