I agree with geius that I'd rather have some indication of what I've blocked. I like the Murdoch blocker (https://chrome.google.com/webstore/detail/moepiacmhnmbiilhpo...) for that reason - it tells you why it has blocked you from a site and gives you the option to whitelist it.
I know that, on the face of it, sounds dumb, but there are some sites that annoy _everyone_.
experts-exchange was the first site I blocked, for example. Anyone have a list of those annoying sites that pop up when you're trying to search for some obscure error message, etc.? I'd love to be a bit proactive.
Background: I'd had an account since April 2007, had reviews removed, and last month had my account deleted without notice or warning. I emailed asking for a reason, but have heard nothing.
Yelp uses its "rank" in Google, Yahoo, et al. to bribe and extort small businesses, to prevent facts from being public knowledge, and to censor citizen reporting.
If a small business doesn't have its own website, Yelp is usually the first thing that pops up in a search; it knows this and milks it for everything it can.
What I did: go to Stack Overflow, pick the most popular question of the month, and search for it on Google. Lots of 'mirrors' will appear (probably not on the first page), and you can use your judgement from there.
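If you want to semi-automate that, here's a rough sketch in Python (needs the requests package). The Stack Exchange API endpoint and parameters are real; the 30-day window, the -site: trick, and the hand-built Google URL are just my own choices, so treat it as an illustration rather than a tool:

    # Rough sketch: grab a top-voted Stack Overflow question from the last
    # 30 days and print a Google query for hunting down its scraper mirrors.
    import html
    import time
    import urllib.parse
    import requests

    month_ago = int(time.time()) - 30 * 24 * 3600
    resp = requests.get(
        "https://api.stackexchange.com/2.3/questions",
        params={
            "order": "desc",
            "sort": "votes",
            "site": "stackoverflow",
            "fromdate": month_ago,
            "pagesize": 1,
        },
    )
    # Titles come back HTML-escaped, so unescape before quoting.
    title = html.unescape(resp.json()["items"][0]["title"])

    # Quote the title and exclude Stack Overflow itself; what's left
    # (usually a page or two into the results) is mostly the mirrors.
    query = f'"{title}" -site:stackoverflow.com'
    print("https://www.google.com/search?q=" + urllib.parse.quote_plus(query))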
bigresource.com - copies content from elsewhere and injects ads
daniweb.com - terrible quality discussion
ehow.com - for when you want a step above Yahoo Answers
w3fools is one of those stupid sites that exists solely as a protest against the existence of another site.
It's written by the kind of people who used to recommend "validating" your HTML because then it would be valid. Yay!
The pedantry against w3schools is unhelpful, misguided, and at times downright wrong.
For example:
w3schools: "URLs cannot contain spaces. URL encoding normally replaces a space with a + sign."
w3fools: "That's not true. An URL can use spaces. Nothing defines that a space is replaced with a + sign"
Actually: "The encoding used by default is based on a very early version of the general URI percent-encoding rules, with a number of modifications such as newline normalization and replacing spaces with "+" instead of "%20". The MIME type of data encoded this way is application/x-www-form-urlencoded, and it is currently defined (still in a very outdated manner) in the HTML and XForms specifications. In addition, the CGI specification contains rules for how web servers decode data of this type and make it available to applications."
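For what it's worth, the two encodings are easy to see side by side using Python's standard library: a space becomes %20 under plain percent-encoding but + under form (application/x-www-form-urlencoded) encoding, which is exactly the distinction both sites are fumbling around:

    from urllib.parse import quote, quote_plus, urlencode

    # Percent-encoding, as used in URL paths: space -> %20
    print(quote("hello world"))             # hello%20world

    # Form encoding (application/x-www-form-urlencoded): space -> +
    print(quote_plus("hello world"))        # hello+world
    print(urlencode({"q": "hello world"}))  # q=hello+world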
Many (most) of these things are judgement calls and/or historical anomalies. W3Fools is wrong to say w3schools is a problem - it is merely insufficient on its own. They would be better off generating a better version rather than complaining.
While I agree that the W3Fools list of mistakes on the W3Schools site is heavily diluted with nitpicks, I think W3Fools makes a valid point that W3Schools is inaccurate and out-of-date to the point of being misleading. In my personal experience there has been more than one instance in which something I had "learned" on W3Schools before I knew better led to mistakes I didn't realize until I started regularly consulting other sites instead.
I also don't see what you mean that it's a "stupid site that exists solely as a protest against the existence of another site". If a hypothetical site did exist that was as bad as W3Fools claims W3Schools is (setting aside that you don't seem to think W3Schools is that bad), don't you think it's nice, or at least not stupid, to have a site you can point people to that explains what's wrong with the hypothetical bad site and what alternatives there are?
Validating your HTML is like linting your JS. I agree that it'd be dumb to do it for its own sake, and in fact I know HTML and JS well enough, and have enough quirky personal preferences, that I don't do it. But I believe it is good practice, especially for beginners, because invalid HTML, like JS that doesn't pass lint, can be a sign of bad practices or mistakes.
"If you don't like a site that appears in your search results, you can block all the pages within that site. Then you won't see any of those pages when you're signed in and searching on Google. If you change your mind, you can unblock the site later.
Sites will be blocked only for you, but Google may use everyone's blocking information to improve the ranking of search results overall.
You may block up to 500 sites."
This is a bit off topic, but do you know a better resource than this w3schools page on JavaScript and the DOM? [1] I'm just learning now, and it's really frustrating me that I can't find a better resource.
There was a Chinese knock-off (counterfeit) goods operation spamming Craigslist heavily a year or so back, hosting a bunch of domains out of a Fremont datacenter.
Too many domains to list individually. It's cases like this that make aggregate blocking/banning approaches far more valuable.
The problem with individual blocklists is that they have to be maintained, and over time they suffer from bitrot. And life is hard and I want a pony.
Whether or not it's legitimate, the number of people that don't find a site helpful, and block it, is probably correlated with the chance that a random Google searcher won't find it helpful.
Does anyone know if there's a parallel way to block sites from their Shopping search? If not, this would be a wonderful feature.
There are a few "low price" sites that keep popping up in my results, but never actually offer goods for sale at these teaser prices. I'd love to never be tempted by them again.
It doesn't seem to work. I've blocked w3schools, and they're still the 2nd result for "doctype" (perhaps it's a US-only feature? Google keeps redirecting me to the .co.uk search).
I agree - I used to hate About.com, but some of its subsites have decent articles. I've found some good recipe ideas on the Greek Food subsite, for example.
I don't want Google to block any sites. I don't need or want a personalized internet filter bubble. What I want Google to do is learn to recognize spammy or irrelevant sites and rank them lower.
This will give their "almighty algorithm" a great input for recognizing "spammy" sites.
Of course, as soon as it starts working, we'll get various religious or politically motivated groups blocking sites they don't agree with, and blackhat SEOs selling packages of botnetted or mechanical turk-ed "blocks" of your competitor's sites...
When a site gets blocked by Google, it doesn't disappear from the internet, but all of a sudden it disappears for hundreds of millions of people. Poof. Gone.
If a site gets ranked lower, it won't get as much traffic, but at least it's still findable. Even spammy sites deserve a chance to turn their shit around (although it almost never happens).
If I want to block a site for just me, fine. There are browser extensions that do that. However, when Google or Bing or DuckDuckGo lets you do it and then uses that data in ranking sites for other people, there's a Big Filter Problem.
Ranking something lower, especially if it gets knocked off the first page, has the same effect in kind, if not degree, as blocking.
I'm not sure which part of this you think contributes to the filter bubble effect (which I'm skeptical about in the first place). If I block a site, it's because I don't want to see results from it, and I was never going to visit it anyway.
How is using blocking data as part of the algorithm any bigger a filter problem than any other part of the algorithm? Google uses many, many search signals and this is just one of them. And I imagine a pretty good one.
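As a toy illustration only (the signal names and weights below are completely made up, not anything Google has published), a block rate would just be one more weighted term in a score alongside everything else:

    # Toy sketch with invented signals and weights; nothing here reflects
    # Google's actual ranking algorithm.
    def score(page):
        return (
            2.0 * page["relevance"]        # how well the content matches the query
            + 1.0 * page["link_quality"]   # inbound-link signal
            + 0.5 * page["freshness"]
            - 1.5 * page["block_rate"]     # fraction of users who blocked the site
        )

    pages = [
        {"url": "useful.example",  "relevance": 0.9, "link_quality": 0.6,
         "freshness": 0.4, "block_rate": 0.01},
        {"url": "scraper.example", "relevance": 0.9, "link_quality": 0.7,
         "freshness": 0.5, "block_rate": 0.40},
    ]
    for p in sorted(pages, key=score, reverse=True):
        print(p["url"], round(score(p), 2))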