I had the same impression. He says things that are demonstrably untrue, which suggests a lack of domain knowledge. But, on paper anyway, he has the domain knowledge.
I agree with the idea that the right time to decide how to handle AI is before it becomes extremely powerful. But he uses so much hyperbole, and what seems to me to be intentionally inflammatory language aimed at creating fear.
It makes me want to take the cynical view that he's trying to cash in on fear to build a (valuable) following.
I generally agree that tech monopolies are potentially bad for, not just national security, but general human wellbeing.
> The tech sector is out of alignment with law enforcement
In this particular case, I'd say that's a good thing. Law enforcement, and certain three letter agencies, have a terrible track record here. If big tech is making it a little harder for them to bypass basic privacy rights, I'm ok with that.
I don't think it's counterproductive at all. There was a period, when the appendix was (absurdly) considered vestigial, when surgeons would remove it as a side quest if they happened to have the area opened for some other purpose.
That was a terrible idea, but one that was supported by science at the time. There are practical reasons to be skeptical about scientific assumptions.
Science becomes less wrong faster if we allow history to remind us that a lot of what we believe will likely turn out to be wrong.
I agree, and also it's easy to forget how silly "Web 2.0" was
Not the technology itself, that was great, and was already named.
When Web 2.0 arrived as a buzzword the web was already an interactive, dynamic platform with databases, server and client side scripting, user generated content and social networking. All of the technologies involved were in fairly widespread use.
For a minute the term Web 2.0 might have been a way to recognize how cool the natural evolution of the web was, but it was quickly co-opted as cutting-edge-sounding investor bait. Same general principle as Web 3.0, but with useful technology.
Side note: The Web 2.0 hype was mostly a 2000s thing.
The carbon footprint you refer to applies only to proof-of-work blockchains. Unfortunately three of the largest systems use it (Bitcoin, Ethereum, and Dogecoin), but an alternative, proof-of-stake, already exists. Nearly every new cryptocurrency uses proof-of-stake, and Ethereum plans to switch to it. The market seems to approve as well, as some of the big gainers in market cap use this system.
There are of course a variety of factors, including the popularity of the site the page is published on. The signals related to the site are often as important as the content on the page itself. Even different parts of the same site can lend varying weight to something published in that section.
Engagement, as measured in clicks and time spent on page, plays a big part.
But you're right, to a degree, as frequently updated pages can rank higher in many areas. A newly published page has been recently updated.
A lot depends on the (algorithmically perceived) topic too. Where news is concerned, you're completely right, algos are always going to favor newer content unless your search terms specify otherwise.
PageRank, in its original form, is long dead. Inbound-link-related signals are much more complex and contextual now, and other types of signals get more weight.
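For context, that original form of PageRank was just a power iteration over the link graph. Here's a minimal sketch; the toy graph and the 0.85 damping factor are illustrative assumptions, not anything today's ranking systems actually use:

```python
def pagerank(links, damping=0.85, iters=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start with uniform rank
    for _ in range(iters):
        # every page gets a baseline (1 - d) / n "random surfer" share
        new = {p: (1 - damping) / n for p in pages}
        for p, outs in links.items():
            if outs:
                # a page splits its rank evenly among its outbound links
                share = rank[p] / len(outs)
                for q in outs:
                    new[q] += damping * share
            else:
                # dangling page: distribute its rank evenly to all pages
                for q in pages:
                    new[q] += damping * rank[p] / n
        rank = new
    return rank

# hypothetical 3-page web: c has the most inbound link weight
graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
print(pagerank(graph))
```

The point of the sketch is how purely structural it is: rank depends only on who links to whom, with no notion of content, context, or engagement, which is exactly why the original form stopped being enough.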