I find it really disconcerting how many people on Hacker News want a corporate entity to decide what information is deemed acceptable to know. Do you even know where that word came from? The irony is almost overwhelming. But I digress...
The idea that banning certain information will somehow result in it disappearing has been shown repeatedly to not work. A cursory reading of history should make this clear. You cannot turn the entire world into West Coast USA by banning everyone that disagrees with you. You’ll only create further echo chambers, both on your own platform and on the (new) platforms inevitably created by the exiled.
It's not about banning disagreement. It's not even about opinions. It's about combating weaponized misinformation.
It's strange for Americans to be on the business end of a propaganda weapon, as we're more used to using one against our enemies. Technology has finally made it feasible for malicious actors to cheaply and effectively manipulate people with lies, here in the US.
If you don't believe it's a weapon, you're wrong. If you don't believe weaponized misinformation is dangerous, look at the regimes it's toppled. And if you think the "American spirit," or somesuch nonsense, makes us immune, I hope we won't have to find out.
You are assuming that the regular media is the default, the truth, and that propaganda didn’t exist prior to 2016. Read Manufacturing Consent for a brief overview of this history of widespread weaponization of information by innumerable actors.
Weaponized misinformation has existed as long as human speech has existed. What is new is the democratization of it, hence the establishment’s hostility.
This is the kind of "both-sides-ism" that gives centrists a bad reputation. You're basically saying: "everyone uses media to achieve their goals, therefore all sides are equally bad." No. Using misinformation to intentionally disrupt a democratic election is demonstrably dangerous to society.
Say what you will about the "establishment," but traditional media still bear some semblance of the truth. Now we're seeing that anyone with enough money and Twitter followers can completely ignore the truth. We have to draw the line somewhere.
I’m not a centrist and none of my points are making any centrist arguments.
Using misinformation to intentionally disrupt a democratic election is demonstrably dangerous to society.
And you think this is the first time in the history of American democracy that actors used misinformation to influence an election? Again, this is not a new phenomenon whatsoever. All that’s new is that more people have access to the technology, which makes the previous gatekeepers nervous.
I hope this is not an argument for giving powerful weaponized mass disinformation tools to more people, including foreign governments and other actors actively seeking to polarize society to the point of inciting conflict and dismantling democracy.
Yes, removing this type of speech from YouTube may be an imperfect / stopgap solution.
I’m not arguing for that, but it might actually be an indirect solution. By forcing everyone to deal with it, skepticism will grow and new solutions toward treating information and the media will arise.
As an example, I think a similar situation will happen with deepfakes. Once it becomes widely known that it’s easy to fake video, people will become more skeptical of it.
What is the difference between "forcing everyone to deal with it" and what we've been doing for the last four years? Or, alternatively, why do we think we'll get a different result by doing the same thing over again?
I mean, it seems to me like we already have scepticism in spades. There's two sides, and if one of them said the sky was blue the other would be sure it were green.
It looks to me like what is needed is detached, discerning thinking, which is not the same thing. What reason do we have to think that people will start to do better about it and it will take care of itself?
I'm not opposed to your view, I just don't think I'm on the same page as you, and I'd like to hear more.
One leaves control of speech up to the individual speaker, while the other leaves it in the hands of an oligarchy of people who think they know better than everyone else because they have more money. There's a degree of arrogance behind the thinking: "I'm smart enough to discern fact from fiction, but those less intelligent than I must need my help, or else they might believe a falsehood."
Everyone ought to be allowed to speak, but networks ought to shut down organized campaigns to spread disinformation, and networks that exist purely to spread hate and misinformation ought to be challenged by the people lied about on such platforms. It's not an ideal solution, but an ideal one is probably not possible to implement or live through.
Almost entirely because of misinformation, we just spent four years with a complete idiot holding the keys to nuclear weapons that, if used, will almost certainly lead to, at minimum, crashing global civilization as we know it. His mishandling will ultimately be responsible for hundreds of thousands of excess deaths.
We can probably make individual expression possible without allowing organized disinfo campaigns to continue unchecked.
Acting like this is an unsolvable problem is an intellectual disservice.
You’re missing a key point — journalistic integrity. It’s something that can be measured and validated.
Second is experience in discerning fact from fiction — not everybody has it. An example is that guy who believed that the earth was flat, so he jury-rigged a rocket and shot himself straight up in the air. He then died from his injuries. Elementary-school mathematics taught us that he wasn't going to be successful, yet he chose to reject even the most foundational verified truths.
Third are the mathematical concepts of probability versus possibility. People who choose to believe things that are untrue tend to lean on the idea of "possibility" without considering its likelihood ("probability").
And there are people who question things simply for the purpose of questioning them. This is where fear, uncertainty, and doubt come from. These are things that hold us back rather than pushing us forward. This is also known as "conspiracy".
But there is a dramatically smaller group which aims to _seek truth_. These are the people who don't just have an opinion, but are willing to put in the effort to evaluate evidence and allow their opinions to be changed by what can be proven. They recognize that it's easy to want to look for patterns which support their existing biases, and they do their best to guard against that. They also recognize that psychology has taught us that humans like to look for patterns in data which suggest cause and effect, even if none exists. An example of this is when someone attributes finding a $20 bill on the ground to the will of God, when in reality it's simply coincidence.
When you see hoof prints, you should think horses, not zebras. The simplest conclusion is usually (has the highest probability of being) right. One simply has to look at Trump’s track record with the truth to realize that he’s probably just lying.
And finally, the word “censorship”. This is an accusatory word used by people who think it relates to their non-existent “freedom of speech”. The first amendment applies to the agreement between the US government and its citizens. It does not apply to your relationship with YouTube. YouTube is a corporation that has an entirely separate agreement with its users, which does not include freedom of speech. Therefore, it’s not censorship — by definition. You’re not being censored, if you’ve broken the rules of that agreement. And in the end, it’s YouTube’s decision since it’s their platform you’re using.
"And finally, the word “censorship”. This is an accusatory word used by people who think it relates to their non-existent “freedom of speech”. The first amendment applies to the agreement between the US government and its citizens. "
The US Constitution restricts the Government from infringing on rights we, as citizens, already have.
The Constitution does NOT grant us rights. The whole freaking point of our revolution vs., say the French Revolution, is the PEOPLE have the rights and government is restricted - NOT THE OTHER WAY AROUND.
YouTube's unequal and arbitrary application of rules is EDITORIAL CONTROL. As such they fall FAR more on the side of a publisher than platform and that's the part of 230 that needs SERIOUS overhaul.
Then again maybe not. Their latest antics have pushed even more people off onto other platforms; and ultimately that will be the best correction. De-aggregation is the best antidote.
> And finally, the word “censorship”. This is an accusatory word used by people who think it relates to their non-existent “freedom of speech”. The first amendment applies to the agreement between the US government and its citizens. It does not apply to your relationship with YouTube.
> YouTube is a corporation that has an entirely separate agreement with its users, which does not include freedom of speech. Therefore, it’s not censorship — by definition.
The first amendment and freedom of speech are not synonyms, quite obviously because there can be freedom of speech outside of the jurisdiction of the US government; and similarly, Youtube is a trans-national corporation with more than just US citizens using it.
Even if it were wholly within US jurisdiction and Youtube only served US citizens, the first amendment is still not a synonym for freedom of speech. The word censor comes from a Latin word for a governmental position in ancient Rome but it does not follow that all censorship is therefore only possible or enacted by government. Not only is there no supporting logic for that, it's not evident in practice. Companies like Twitter[1], Facebook[2] and Google[3] remove things from their platforms for political convenience (or, as Anand Giridharadas points out[4], for any convenience):
> When you look at the ways in which the winners of our age give back, help out, make a difference, they are often designed to protect the system - above all - that the winners stand on top of.
As ever, I suggest getting a copy of On Liberty[0] by J.S. Mill, where he goes over both the tyranny of government and the tyranny of non-governmental actors, with regards to speech.
But that won't work. It's well-known that misinformation propaganda works even if people know it's propaganda. People won't magically build up some kind of immunity and suddenly trust the scientific method - they will simply grow sceptical of all kinds of news sources and retreat further into their own bubble. If employed by a state, this is often the goal of that propaganda.
> ... they will simply grow sceptical of all kinds of news sources and retreat further into their own bubble.
This is a central theme of the book This Is Not Propaganda: Adventures in the War Against Reality by Peter Pomerantsev, which I found thought-provoking.
I've just been looking for a decent review. This is half-decent:
However, there are a lot of people who don't trust certain types of 'scientists' because they've observed that those people don't seem to use the scientific method, even though they claim they do. That's a very reasonable and correct form of skepticism.
The lazy assumption that anyone who claims to be scientific actually is has been exploited by demagogues throughout history. Karl Marx was famously keen on presenting his own views as 'scientific' although there is of course nothing scientific about Marxism. Lysenko justified his beliefs on the grounds that they were scientific, and all sceptics of this claim were suppressed (i.e. jailed).
The way to tackle scepticism about the scientific method is to recognise that it's not scepticism of a methodology but of an academic sub-culture.
The most prominent examples of modern anti-science in the US, such as in climate change, abortion, and epidemiology, do not find their roots in well-justified scepticism of particular scientists. Climate deniers don't care what science says, or how reliable peer-reviewed journals are, and have no interest in the scientific method: they base their opposition on ideological grounds: "I don't believe climate change to be true, therefore anyone who does is a liar." They've made their decision because they have financial (or religious, or other) interests that make the truth uncomfortable for them. Why bother trying to understand the truth when you can just deny it?
You can't argue rationally with someone like that. They aren't skeptics, they're devout believers.
As someone who has rationally argued against epidemiology based on finding a constant stream of severe errors in their papers, I can assure you that epidemiology is almost entirely pseudo-scientific. I don't think I've ever encountered a field as disastrous as this one (though I've never really looked at climatology).
Epidemiology has so many massive cultural and methodological issues you could write an entire book about it, maybe one day I will. It is absolutely de rigueur in this field to make predictions without ever bothering to go back after an epidemic played out to study how well those predictions matched reality. They cherry pick data at an absurd rate: if they can get a more dramatic paper by using data that's 8 months old and based on a sample size of 7 people when they could use data published last week and which has a sample size of hundreds of thousands, they'll happily pick the former every time. Nobody in that field will notice or care, and they'll throw in a misleading citation or two to disguise what they're doing. There is no code quality control: these people happily publish papers based on models filled with memory corruption errors that can't even replicate their own output.
As for ideology, the fact that you can't explain what climatology sceptics believe should give you pause for thought. You appear to be projecting: a blind devout belief in "scientists" (they aren't really) is itself an ideology. Rather than engaging with the concerns of people who are pointing out problems in the published literature, you're just blowing them off as irrational.
> if they can get a more dramatic paper by using data that's 8 months old and based on a sample size of 7 people when they could use data published last week and which has a sample size of hundreds of thousands, they'll happily pick the former every time.
Heck, more than one. Try "Determining the optimal strategy for reopening schools, the impact of test and trace interventions, and the risk of occurrence of a second COVID-19 epidemic wave in the UK: a modelling study"
The model takes its IFR data from Verity et al (see table 2), which is about 4x too high because it's taken from Wuhan evacuee data in January. The Verity paper that calculated this had access to a bigger and better dataset, the Diamond Princess cruise ship which they acknowledged they had looked at, and which yielded a lower IFR, but they didn't use it:
"The Verity et al. CFR estimates were derived primarily from Chinese data, which reflected non-random testing ... When Verity et al. was prepared, the final death toll was not known. The data available only ran to 5 March 2020, at which point 7 passengers had died."
There's an analysis of the school re-opening paper that explains all this here. It also claims the paper used values for k (over-dispersion) from February, when there were hardly any cases on which to calculate that: https://lockdownsceptics.org/schools-paper/
Here's another paper that used an IFR of 1% in May:
Epidemiologists know IFR rates fall over time because they always do. Yet they continued to use the earliest calculated value they could find, based on a tiny dataset that wasn't even the best available, because it gives the highest value and that lets them make the most dramatic results. This is a systematically untrustworthy field, they literally do not care about their own reliability at all.
In the U.S. we did actually have this system prior to TV news and especially prior to radio.
Newspapers were the preferred news media then and they were numerous, sensationalist and partisan (literally often party owned). Misinformation was abundant and often inflammatory.
I'm not implying we should want to return to those times. (Personally, I think this is largely about trade-offs.) Just that this is not the weird time. 1950-2000 was the weird time, when there was a majority, or at least a plurality, consuming most of their information from a very small and concentrated media sphere.
Half of people have a below average IQ. More like 65% of older adults are both below average thinkers and above average voters.
Why would you think skepticism would be a suitable response to misinformation? The only mathematically viable way to combat such misinformation is to disenfranchise people too stupid to be in charge of anything that affects other people's lives.
Why is getting elected proof of intelligence? We literally elected the guy that convinced an overwhelming number of uneducated people that he was going to build a wall to keep the illegals from stealing their jobs.
Intelligence is not a prerequisite for being elected, and in many cases intelligence is a disadvantage. Most people are stupid, and they would rather see one of their own in positions of power. Dumb people can more easily connect with certain areas of the electorate.
Examples of this phenomenon are so copious and obvious that I shan't point them out.
This rhetoric makes no sense to me. People are already subject to incomprehensible levels of propaganda from every angle and have been for over a century. When and where was there a perfect democracy?
For traditional publishing, there is at least one person who is ultimately responsible for what is published. That person is, in the vast majority of cases, subject to the same jurisdiction.
On YouTube, Facebook, Instagram, etc. I can push my agenda worldwide. Russia, China, Iran, and other non-democracies can make insane amounts of propaganda for very cheap to reach millions of people. The engagement algorithms even help the propagandists reach their target audience effectively. The algorithms then help them widen the audience automatically through shares by their supporters and further engagement.
I don't see how the Chinese and Iranian governments need to worry about YouTube being blocked when they want to spread propaganda in the US and Europe.
> foreign governments and other actors actively seeking to polarize society
> What are some better alternatives?
I have no idea if it's mathematically possible but I'd be optimistic about the potential of a (hypothetical) privacy preserving web of trust metric. It would be really nice to have at least some limited indication of how the person behind the account fits into the world at large. Right now you can't reliably determine (arbitrary examples) country or even continent of residence, paid posts by an organized campaign (PR, propaganda, etc) versus organic occurrences, etc.
Of course, Facebook is the ever present counterexample where people proudly attach their full legal name to hate filled streams. But at least I personally know for certain that they're local people who actually exist and aren't being paid for their posts! Silver linings and all that.
But web of trust is still something you have to know and consciously decide to use. What if a majority of people would simply ignore it because it conflicts with the opinion formed in their own bubble - or if they simply don't know about it or don't know how to use it?
Not if it's built into the communication platform you happen to be using. Just one or a few basic indicators to give you even the slightest bit of information about who wrote what you're reading. Just a simple "p = 0.03 US resident" or an aggregate trust score based on a combination of social graph connectivity and spam reports or something. Sure, people could intentionally ignore it, but right now there's no indicator to be had even if you want it!
To be clear, I'm not talking about present day clunky GPG web of trust with key signing parties and all that. I'm talking about a hypothetical (ie as yet nonexistent) magical web of trust that somehow doesn't destroy your privacy in the process of being used. (It's not as crazy as it sounds - we already have zero knowledge proofs, blinded encryption, and various other privacy preserving cryptographic schemes.)
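To make the "aggregate trust score" idea above concrete, here is a minimal sketch of the non-private part only: scoring an account by its social-graph proximity to a set of already-trusted accounts, discounted by spam reports. All names, weights, and thresholds here are invented for illustration; a real scheme would have to compute something like this under a privacy-preserving protocol (zero-knowledge proofs, blinded computation) rather than over a raw, visible graph.

```python
from collections import deque

def trust_score(graph, seeds, account, max_hops=3, spam_reports=0):
    """Score in [0, 1]: how close `account` sits to already-trusted
    `seeds` in the follow graph, discounted by spam reports.
    Hypothetical metric for illustration only."""
    if account in seeds:
        proximity = 1.0
    else:
        # Breadth-first search outward from the account until we hit a seed.
        seen = {account}
        frontier = deque([(account, 0)])
        proximity = 0.0
        while frontier:
            node, hops = frontier.popleft()
            if hops >= max_hops:
                continue
            for neighbour in graph.get(node, ()):
                if neighbour in seeds:
                    # Nearer seeds yield more trust: 1 hop -> 1.0, 2 hops -> 0.5, ...
                    proximity = 1.0 / (hops + 1)
                    frontier.clear()
                    break
                if neighbour not in seen:
                    seen.add(neighbour)
                    frontier.append((neighbour, hops + 1))
    # Each spam report halves the score (arbitrary invented penalty).
    return proximity * (0.5 ** spam_reports)

# Toy follow graph: alice follows bob, bob follows carol (a trusted seed).
graph = {
    "alice": ["bob"],
    "bob": ["carol"],
    "carol": [],
    "mallory": [],   # no path to any trusted account
}
print(trust_score(graph, {"carol"}, "alice"))    # reachable seed at 2 hops
print(trust_score(graph, {"carol"}, "mallory"))  # unreachable: score 0.0
```

The hard part, of course, is not this arithmetic but doing it without revealing the graph — which is exactly the open problem being hand-waved at.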
I don’t like the word bothsidesism because it implies that in reality only one side is the bad actor. That is a massive implication considering how casually it’s thrown around. Moreover, it’s used like an accusation. Accusing people of acknowledging that things probably aren’t black and white. Full on groupthink.
How are you objectively going to document misinformation? How are you even going to assign weights?
Voter fraud is 100 points for Trump, the Hunter story is 50 points for the Dems, etc..?
The thing is that if you ask side X they’ll say it’s 80/20 for side Y and vice versa. Neither should be so confident that they’re right. And pointing that out is now called bothsidesism. The lengths people go to to avoid cognitive dissonance is beyond ridiculous.
The Democrats ran maybe the biggest misinformation campaign in my lifetime following the 2016 election. Massive turmoil and investigations later, it was all found to be false. What is this about “sides” again?
Assessing Russian Activities and Intentions in Recent US Elections
ICA 2017-01D 6 January 2017
Key Judgments
Russian efforts to influence the 2016 US presidential election represent the most recent expression of Moscow’s longstanding desire to undermine the US-led liberal democratic order, but these activities demonstrated a significant escalation in directness, level of activity, and scope of effort compared to previous operations.
We assess Russian President Vladimir Putin ordered an influence campaign in 2016 aimed at the US presidential election. Russia’s goals were to undermine public faith in the US democratic process, denigrate Secretary Clinton, and harm her electability and potential presidency. We further assess Putin and the Russian Government developed a clear preference for President-elect Trump. We have high confidence in these judgments. ...
No American on the Trump side was indicted for anything actually related to "Russiagate", but for unrelated procedural crimes discovered during the investigation. (You know the saying that everyone wittingly or unwittingly commits three crimes a day?) Several Russian agents were indicted for Russiagate-related things; Putin will turn his goons over to US custody any day now!
Then we have people like Carter Page, whose name was raked over the coals for years because a FBI lawyer intentionally altered evidence (https://www.nytimes.com/2019/12/09/us/politics/fbi-ig-report...) showing that far from being a Russian asset, Page had for years briefed the CIA every time he met with suspicious Russians. (Got to love how the Times describes said altering evidence as a "serious error".) You want an actual Russiagate-related indictment and guilty plea? Kevin Clinesmith, said FBI lawyer, is your man.
Mueller found Russia attacked the 2016 election and the Senate Intel Committee report found that Trump’s campaign manager was feeding internal campaign data to a Russian Intel Officer. Roger Stone was caught and convicted for lying to Congress about communications with Julian Assange, and the whole lot of them including Trump lied to the American people about hundreds of contacts with Russian nationals (some of whom turned out to be spies and Intel officers).
"traditional media still bear some semblance of the truth."
Seriously? Four years of "RUSSIA RUSSIA RUSSIA" and it turns out that the same people knew before they started peddling that narrative that it was (and still is) at its core a smear campaign cooked up by the Clintons.
Hunter Biden - completely buried before the election, now all of the sudden that the election is over the media is "breaking" this important story?
I could go on but I'm sure if it didn't fit your preconceived notions of what the "truth" is you'd be too busy drawing your lines.
> Hunter Biden - completely buried before the election, now all of the sudden that the election is over the media is "breaking" this important story?
The allegations were well covered before the election. The new story that they are covering now is the announcement by DoJ of the ongoing investigation. The media didn't suppress that before the election, Bill Barr did, doing a surprising simulation (given his past conduct) of professionalism in observing historical Justice norms around elections on that point.
>Weaponized misinformation has existed as long as human speech has existed. What is new is the democratization of it, hence the establishment’s hostility.
Beautifully put.
Better get ahead of it lest the proles gain class consciousness and use it against us.
I'm sure you mean that sarcastically, but you shouldn't. It's fun to complain about "the establishment," until it isn't there anymore. In the current case, "the establishment" means American democracy, which is built on people's faith in the institution of elections. If you erode that faith, democracy no longer works. That may be fine with you, but it's not fine with me.
May I ask your opinion on the below? In your opinion, is this disinformation? Something that should be suppressed and censored to keep the establishment glowing?
Then why base it on faith? Make it impossible to vote multiple times or places, or as someone else. Invalidate registration for, and immediately flag when, a dead person tries to vote [6]. Make it so you can instantaneously see that your vote was entered, and confirm it, to help with anomaly detection. I think this is what everyone wants, and I think it would appease many worries. Unfortunately, this would almost certainly require voter ID, which has mixed popularity, and a reporting system, which requires tech, which the government's lowest-bidder mentality appears to be wholly incapable of delivering.
Additionally, harden these electronic voting machines, which have had constant press about their vulnerabilities for the last two decades [1][2][3][4]. Open-source the software stack. Even simple, human steps, like having each person put a provided penny into the machine so counts can be correlated, would be an improvement and would add a real-world aspect that people could personally understand, over a counter in memory that has no traceability.
The most frustrating part of all of this is the perspective. Rather than silencing talk about fraud and hoping it will go away, we should tackle the reality of it, ask why people distrust the voting system, and find real-world solutions to increase that trust. The fact that every election before this one ended with sensational headlines about examples of voter fraud, but this year it's journalistic silence and social taboo, probably doesn't help (as insignificant as those previous stories were).
Make that voting computers, and abolish them. If even HN readers can't get this right, there is no hope of electoral reform in the USA to raise it to an acceptable democratic level:
ODIHR election observers must not be blocked anymore.¹ Elections are to be held on a Sunday to not disenfranchise the working population. Votes are always counted and tallied manually.² Postal votes that arrive after the election date must be invalidated. Amend the constitution to require elections to be universal, direct, free, equal, secret, transparent and effective.³
If anyone is against these reforms, well, I'm certain Taibbi will not hesitate to call you too un-American, wrong and other things.
----
¹ The USA is a member state.
² This must be done so that the proper following of the procedure can be verified by anyone. Voting computers cannot be verified by just anyone. –– Voting computers, being general-purpose computers, are inherently easy to manipulate. Cracking just one model enables widespread fraud. Counting manually, however, means that widespread voting fraud requires undermining each location separately, which is much harder to pull off and also multiplies the risk of detection.
I understand this perspective, and mostly agree, but I think that having a deep level of verification, traceability, and limits could really help with trust. I don't think that having a box of votes disappear into a, from our perspective, somewhat mysterious process filled with rooms full of volunteers counting things in relative secrecy could provide as much confidence in the process. I'm not sure how that could be achieved without some digital system. Maybe a required mix of physical and digital systems?
For any system, I think some form of adversarial counting is required to know that someone "on your side" had a chance to count your vote. This could be implemented by having each party of interest select the counters, along with an "unbiased" automated system producing digital results. The votes could be randomized between them. Some portion is then fed back through to help catch whether any of the intentionally biased counters tried to tilt the system, to keep everyone honest. Regardless of the system, I think the minimum requirements for the counting process should be enough to obtain some level of trust, and be at a federal level, since we all have to live with the results.
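The adversarial-counting idea above can be sketched in a few lines: every ballot is read independently by one counter per interested party, and any ballot where the readings disagree gets flagged for a hand audit. Everything here — party names, the counter interface, the honest/biased readers — is invented purely to illustrate the mechanism, not a proposal for real election machinery.

```python
def count_ballots(ballots, counters):
    """Tally ballots with one counter per party.

    `counters` maps a party name to a function that reads a ballot and
    returns the choice it claims to see. Ballots where the counters
    disagree are flagged for audit, so a biased counter exposes itself.
    """
    tallies = {party: {} for party in counters}
    disputed = []
    for i, ballot in enumerate(ballots):
        readings = {party: read(ballot) for party, read in counters.items()}
        if len(set(readings.values())) > 1:
            disputed.append(i)  # counters disagree: audit this ballot by hand
        for party, choice in readings.items():
            tallies[party][choice] = tallies[party].get(choice, 0) + 1
    return tallies, disputed

honest = lambda b: b          # reads the ballot exactly as cast
biased = lambda b: "A"        # a cheating counter that always reports "A"

ballots = ["A", "B", "A", "A", "B"]

# Two honest counters agree everywhere: no disputes.
tallies, disputed = count_ballots(ballots, {"party_x": honest, "party_y": honest})

# A biased counter is caught on every ballot it misreads.
_, caught = count_ballots(ballots, {"party_x": honest, "party_y": biased})
print(caught)  # indices of the "B" ballots the cheater flipped
```

The point, matching the comment above, is that cheating only works if every party's counter cheats in the same direction at once, which is exactly what mutual distrust makes hard.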
Sure, but you also need to pair it with free Gov issued IDs for all individuals. The Voter fraud canard is a red herring when electoral fraud is an order of magnitude worse.
Other kind of media having issues does not disprove the comment you're responding to. You're also assuming some opinions at the beginning which we haven't seen in the parent's comment. Things don't have to be black or white - it doesn't have to be democratisation vs establishment. There's way more nuance here. Including platform protecting its existence in case they fail so badly at filtering content that they get regulated by... establishment.
You have invoked the fallacy fallacy. Unfortunately, special pleading is only a fallacy if something isn't actually different from the thing it's being compared to. So I disagree fundamentally that it's special pleading.
The hostility is not just from establishment, it's from normal people like me, GP, and millions of other Americans that feel like we're taking crazy pills because of all the stupid shit people now inexplicably believe due to weaponized misinformation at a scale never before seen.
"Weaponized information" can mean everything from partisan hyperbole to observations that don't mesh with one's biases.
Addicts everywhere repeat the phrase, "this time is different". What makes it special pleading is the lack of evidence or even a reasoned argument supporting this assertion.
You are cordially invited to qualify why "this time is different" and how speech or as you put it "weaponized misinformation", is equivalent to violence.
Defend the assertion with a reasoned argument. Otherwise, it is simply special pleading.
Well I will not pretend that I am the one to make a good case for it. I'm not going to spend a lot of time trying to debate you about it.
I never said it was equivalent to violence. However, if it leads to violence, speech is absolutely dangerous. See murder cases where people are convinced to kill others under false pretenses. Or they are just talked into doing it by a charismatic individual. This sort of speech is actually already illegal. I'm sure it's not difficult for you to imagine speech that leads slightly indirectly to violence. That isn't necessarily the type of speech we are discussing here, I just thought I'd point out the extreme case where we already acknowledge the power of speech.
As you have heard said and I'm sure will disagree with, the internet has lowered the bar to attaining a widespread audience. It used to be that you would have to be eloquent enough and thoughtful enough to throw your ideas into a coherent book, show, essay, or whatever. And there would be people vetting your material for quality before deciding it should go before an audience.
Obviously, that doesn't stop all the bullshit (and actually leads to other bullshit), but having a higher barrier to information exchange meant that content was much more likely to be vetted by one or more people who had some critical thinking skills. As a publisher, you couldn't just put out anything because it would cost you something. Now, any half-baked idea you have, you can put out there, and similarly minded people who maybe formed the same sorts of half-baked ideas or are just incapable of vetting the truth for themselves can find each other and form large groups of people who all now believe the same bullshit. This was always possible with word of mouth, but the internet is a much more effective medium for bullshit and allows it to be much more viral and widespread in nature.
The vast majority of widely available "information" out there is no longer authored by experts, thanks to the internet. It's authored by people who mistake their layperson knowledge for expert knowledge, or believe that their "common sense" is enough to tell them how the world works. The audience can now choose what "information" to consume and doesn't have to believe the unpleasant things experts say or understand the complicated stuff if they don't want to. They can just pick and choose whatever simple or comfortable truths they desire.
As for evidence, most hard social and political problems are impossible to agree on. That problem is exacerbated by the type of misinformation we're talking about here, because it's even harder to agree on what the facts are. For example, somehow we are still arguing about global warming despite mountains of evidence. The same thing happened with smoking and lung cancer. I don't have any evidence to share with you, but I'm sure that if I did you'd find a way to dismiss it with your own.
I'm confident that you have heard similar reasoning before.
The bar has been lowered for entry. The bar has continued to lower throughout history. To advance this point, you would need to establish a demarcation point where "this time is different".
Above you suggest that credentialed gatekeepers somehow did a better job of protecting the public from misinformation which results in dangerous violence. A cursory examination of history debunks this.
The Pulitzer prize (a coveted credential) is named for the originator of "yellow journalism". Half truths, mistruths and outright fabrications have always been with us. Look no further than the media's uncritical promotion of Iraqi WMD as a prelude to war. Experts were trotted out to tell us that Saddam had yellow cake etc. Hans Blix was silenced.
If communication can be equated to violence, then the sources of record trumpeting false justifications for war must be the most dangerous and violent of them all. Yet, these are the gatekeepers you suggest are more responsible and accountable. There has been no accountability for this propaganda. Many would suggest that the same methods continue today.
Youtube's censorship is a natural continuation of establishment gatekeeping. A cynic might even suggest that gatekeeping is the purpose of walled gardens.
More of the same, this time is not different. It fits perfectly with historical precedent.
You argue that the scale of the problem does not make it different ("more of the same"). I disagree. I could not draw you a line in a gradient of white to black to demonstrate exactly where light-grey becomes dark-grey. That does not mean that there is no distinction between the two hues or that it cannot be demonstrated.
Also, you keep talking about equating it to violence, which I never said.
Manufacturing an argument the other party didn't make ("You are assuming that the regular media is the default, the truth"), and choosing to rebut that strawman rather than their actual key point ("It's about combating weaponized misinformation"), is disingenuous in the extreme.
The weapons now are bigger, faster, have greater reach, higher fidelity, multimedia appeal, vastly more intelligence and data, can be tuned in realtime, and delivered with precision to highly selected audiences completely under the public radar.
All of which "Weaponized misinformation has existed as long as human speech has existed" utterly ignores.
People have been killing each other and themselves for the supremacy of fairy tales since the beginning of time, but please tell me how getting someone to believe a conspiracy theory and vote "badly" is somehow a society-breaking new invention.
I agree - you can't trust the mainstream media anymore. Always verify their reports by yourself. They are careful about directly lying - but twisting and omitting information is something they are geniuses at.
Calling the opinion of someone you disagree with “weaponized disinformation” doesn’t mean it’s anything but someone else’s opinion. It is chilling that you don’t seem to know the difference between a free Press and an immediate threat to your life.
Facts are not opinions. More and more, people are distrustful of large media organizations, which at least historically have a commitment to doing what they can to report true facts. If you believe that those organizations are untrustworthy, that's your opinion that you have a right to. However, you don't have a right to hold up another set of facts as valid without evidence. The truth doesn't care about your opinion.
They're also not handed down from God. A "fact" is socially constructed, and while the simple-minded among us can afford to pretend that a "fact" is handed down on clay tablets by a deity, they can only do so because they're downstream of a complex and decentralized system for determining what we consider high-confidence enough to communicate as "truth" to those in need of it (eg children). This process includes dissent and "disinformation", and it's baked into everything from our media environment to the scientific method.
> More and more, people are distrustful of large media organizations, which at least historically have a commitment to doing what they can to report true facts
Lol. Forget picking up a history book, all you need to do is read the news for a week as critically as you would read a blog or tweet and you can find plenty of examples of agenda- or simply ignorance-driven inaccuracies in the most mainstream and well-regarded publications. It's ludicrous to claim that media as a whole "does what they can to report facts" and that that inoculates them from being challenged. They've certainly got standards that the average person doesn't, but those standards are barely even commensurate with the responsibility they have, and the divine right to Facts that the ignorant blindly imbue them with. If you're reading any source uncritically and assuming it's Fact, you're doing it wrong.
> However, you don't have a right to hold up another set of facts as valid without evidence.
Another loaded assumption which doesn't hold up on its own. Nobody cares about the case where a wild assertion is made with no attempt at evidence. ~Everyone has "evidence" for their claim. The quality of the evidence varies wildly, both in its internal consistency and in how it comports with external models of the world. What's determining your worldview is what you implicitly dismiss or accept as evidence, based on your own biases and unconscious reasoning (given that your proposal is tantamount to turning off conscious reasoning about the truth). It should be obvious why this is far inferior for anyone with a brain to reading different sources, weighting them by credibility, and synthesizing your own model. You don't need to be particularly smart to be a journalist, and layered on top of that are all kinds of perverse incentives that don't point towards "truth". The notion that you should blindly outsource the process of a final determination of your model of the world is beyond insane.
Was it "fact" when the media universally reported that the pandemic was to be ignored[1], and wrote articles mocking companies for distancing? Was it fact when they universally and confidently rejected masks as useful? Should I have been protected from the "disinformation" that sounded the alarm early, leading me to look into the science and determine that both of these narratives were dead wrong[2], or at the very least suffered from a cultural inability to reason under uncertainty?
The key insight here is that the interaction is two-way: if enough heterodox ideas are on firm footing and contradict your external model of the world, you should be digging into your external model and considering its weaknesses and what to tweak.
The pandemic is an incredibly bizarre time to choose to tear down society's truth-generation system. At every single stage, I've been at least partially at odds with public health advice due to 1) noticing flaws in public health reasoning, 2) exposing myself to contrary narratives/analysis that your definition would call "made-up facts", 3) reading the relevant science, 4) shifting my understanding of the risks. Again and again and again, "the Truth" has caught up to me weeks or even months later, while my friends have blindly been following the narrative off a cliff and exposing themselves to more risk _and_ more restrictions.
Again, I'm aware that most people are not remotely capable of this, and I don't begrudge them that any more than my basketball teammates begrudge me being too short to dunk or block as well as they do. But the idea that we should tear down the process by which we arrive at truth because having to use their brains makes the black-and-white crowd uneasy is infuriating. By all means, continue to uncritically swallow media narratives; you'll be wrong a lot more often, but at least you won't have to use any neurons. But it's insane to decide that nobody else should have access to analysis that isn't pre-approved just because you don't want a reminder that being an adult in a complicated world is difficult.
[1] "Get a grippe, people", as the Washington Post so memorably phrased it in just one example.
[2] This isn't just theoretical: I told the older folks in my extended family to stock up on non-perishables while The Truth was still saying "forget the pandemic myth, go to the parade today!", which meant my parents and other high-risk family members (some in areas hit very hard in the first wave) were able to ride out the risky first month of the pandemic without leaving their house. The reason everyone listened to me without question is because I have a track record of being right when the media is obviously wrong, due to, you guessed it, my exposure to what you'd term "disinformation" and ability to avoid uncritically swallowing whatever Vox tells me is true.
I think you don't know what the weapon is. The weapon is wielded by people who know better: by Trump, by Giuliani, by Hannity. They know the truth, but are cynically using their platform to manipulate the population. The consequences are destructive.
The opinions are not the weapon. The opinions are the wound, caused by the weapon.
Your point raises a couple of questions. First, do you think it is only your outgroup (Fox News, Republican politicians, etc) that would use misinformation? Democrats and other media outlets don't stand to gain from its use, or are too moral to resort to it? Second, the precedent this sets is that the government can state one set of facts to be true, and force compliance to that truth from media. Do you not see this as a potential problem? Even if you believe that your group would never abuse that power, do you think that they will remain in power eternally? Is it not possible for this "defense" against "weaponized misinformation" to be used against you?
> First, do you think it is only your outgroup (Fox News, Republican politicians, etc) that would use misinformation?
Can you point to any evidence anyone on the Democratic side of the fence has questioned the legitimacy of the last election even though control of the Senate may still easily fall into the hands of the group that prevented legal court nominations by an elected president while, under the very same conditions, rushed through the nomination to a SCOTUS seat of a judge that has barely three years of experience in that function?
The evidence that you request does not match the quote you pulled out. And even if the Democrats did want to further solidify their victory, that would be an extraordinarily foolish way of doing it, as any doubt cast on the legitimacy of the election furthers Trump's agenda, not their own.
> If you don't believe it's a weapon, you're wrong. If you don't believe weaponized misinformation is dangerous, look at the regimes it's toppled. And if you think the "American spirit," or somesuch nonsense, makes us immune, I hope we won't have to find out.
I don't see anyone here arguing those points, so I think this is a bit of a strawman.
> It's not about banning disagreement. It's not even about opinions. It's about combating weaponized misinformation.
The argument, as I see it, is that "weaponized misinformation" and "disagreement" are not cleanly separable categories, and than anyone with the power to ban the former will almost certainly use it to ban the latter.
The marketplace of ideas works only insofar as everyone involved operates with good faith. Weaponized misinformation is the opposite of that: expressing what appears to be an opinion, but is actually a manipulation tactic. Look in particular for arguments that appeal to emotion (especially anger), nationalism, and in-group/out-group. The fascists understood, and still understand, these strategies.
If you can't tell the difference between a genuine opinion and a misinformation tactic, you are ill prepared to participate in today's media environment.
That is frankly naive. Many earnest people repeat dangerous lies, and wicked people are more than willing to use the truth, especially if it is an uncomfortable truth that is important but overlooked by polite society.
> If you can't tell the difference between a genuine opinion and a misinformation tactic, you are ill prepared to participate in today's media environment.
On the contrary, I can do a much better job at the scale I operate at than YouTube can do at the scale it operates at.
Furthermore, I take the very liberal view that I have both the right and the responsibility to make up my own mind about others' speech. I hold in low regard those who try to take that from me.
> That is frankly naive. Many earnest people repeat dangerous lies, and wicked people are more than willing to use the truth, especially if it is an uncomfortable truth that is important but overlooked by polite society.
Correct. That's why I said the difference is intent. You have not contradicted that.
> On the contrary, I can do a much better job at the scale I operate at than YouTube can do at the scale it operates at.
Congratulations. I'm not concerned with weaponized disinformation in your social circle.
> Furthermore, I take the very liberal view that I have both the right and the responsibility to make up my own mind about others' speech. I hold in low regard those who try to take that from me.
Good for you. If everyone were as intelligent and enlightened as you are, then the world would operate a lot more smoothly.
> Correct. That's why I said the difference is intent. You have not contradicted that.
You are mistaken. The earnest falsehood can cause great damage, and the truth - even when spoken by someone of ill intent - is still important. Your standard of intent is simply not sufficient.
So called "hateful" people are sometimes necessary to break the stranglehold bad ideas have on society. Socrates was executed for "corrupting the youth".
> Congratulations. I'm not concerned with weaponized disinformation in your social circle.
The point here is that YouTube (or any tech platform) is not in a position to implement any reasonable standard at scale. Even if your standard of intent were reasonable (it's not) intent doesn't just vary between people, intent can vary within a person over the course of a single conversation! And a person's intent can also be mixed. There is no way any of these platforms can capture that level of nuance in human social interaction.
> Good for you. If everyone were as intelligent and enlightened as you are, then the world would operate a lot more smoothly.
True, but beside the point.
I want to be very clear about this. If you value democracy in any way, you must support the right for everyone to make up their own mind. That is because, unless you have that right, you do not have a democracy. All you have is the oligarchy of the gatekeepers of acceptable opinion.
> The earnest falsehood can cause great damage, and the truth - even when spoken by someone of ill intent - is still important.
I didn't say that earnest falsehoods can't cause damage, I merely said that it's not a weapon. Someone who is weaponizing information to achieve their own ends definitely knows what they're doing. And obviously, speaking the truth, regardless of intent, is not a case of misinformation.
To draw an analogy: I'm not saying we should outlaw guns. I'm saying we should outlaw murder. The problem is not the spread of opinions; the problem is the intentional use of insincerely held opinions to cause damage.
I support the right for everyone to make up their own mind, which is exactly why I oppose weaponized misinformation. I don't even know what it would mean to oppose the right for everyone to make up their own mind. Reminds me of the caps from the Tripod Trilogy.
The power of those with great wealth to influence others should not be underestimated. We're not immune. If you want to live in a society where everyone makes up their own mind, don't let them join a cult.
I’m probably older than you and heard this exact same argument used by the Indian government to censor television (we had exactly 1 channel of government run TV growing up).
Are you listening to yourself? You sound like a religious zealot hundreds of years ago arguing to ban the technology of books because they might say things you disagree with. ‘Ban Martin Luther, he is trying to destroy the Catholic Church’.
It remains a mystery why so many Americans are so incredibly ignorant of history. Is it because school systems became overrun by propaganda because of teachers' unions?
1. There is a quantifiable difference between banning dissenting opinions and banning intentional weaponized misinformation. No ban will be perfect, but a failure to fight back against misinformation is the same as giving power to anyone with enough money and enough Twitter followers. The loudest voice controls the most minds.
2. We're not in the middle ages anymore. Back then, a malicious individual seeking to spread misinformation was limited in their options. The target audience wasn't literate, so the best you could do was stand shouting on the street corner. These days, unlike any previous time in history, we have fast, effective, cheap methods that adapt to the user's preferences to tell them exactly what they want to hear. We know from plenty of historical examples of both wartime and peacetime propaganda that lies are effective. How can you be so ignorant of history?
In short, it's not Martin Luther I'm fighting against, but rather the Catholic Church.
> Of registered voters surveyed, 53% said the 2016 election between Trump and Democrat Hillary Clinton was free and fair and 33% said it was not.... 39% of Democratic or Democratic-leaning voters said it was. Half of Democratic or Democratic-leaning voters said it wasn’t free and fair.
I’m not trying to create an equivalence between what Clinton said and what Trump is saying. I’m talking about the beliefs of the voting public, which are clear in the polls. Half of the people who lost in 2016 thought the election wasn’t fair, but democracy didn’t end. What’s the danger here that justifies censorship?
Very different situations though. Hillary conceded. And no one I personally knew really thought that Trump would not become president. Most people believed there was Russian interference, but most didn’t believe they were literally changing votes. The Senate agreed with that assessment.
While I’m no fan of Trump and vote mostly Democratic I do think Trump won in 2016 and I don’t know anyone who doesn’t. I have a much smaller social circle of Republicans and they ALL believe Trump won this year.
2. Broadcasting the hearings in the accused states (MI, PA, GA, WI, AZ, etc)
3. Actually hearing the cases at both the state and federal level, and then throwing them out on the facts presented, instead of refusing to hear them on "lack of standing" to begin with?
Win in the court of transparency. Make the crackpots look like crackpots, loudly and publicly, through a process of transparency that then becomes impossible to challenge or cast doubt upon, except for the most hardened of partisans.
So you're clearly not actually following the proceedings of any of this anyway, so what difference would it make?
Testimony has been broadcast, court rulings are public, the reasoning and legal procedure is all happening in the open.
You're demanding transparency because it suits whatever narrative you have in your head, yet demonstrating you won't put any effort into following the process which is already public.
Thanks for the link. This seems to prove the point though? The vast majority of official cases were dismissed on procedural issues (standing, filing in the "wrong" court, etc etc). You'll need to click through to each of the sources of why each case was dismissed, but the majority of linked cases seem like no decision was made on the facts presented, if they were allowed to be presented to begin with.
So if a crime is committed, but a judge refuses to hear it, did the crime not happen?
A dismissed case means that a lawsuit is closed with no finding of guilt and no conviction for the defendant in a criminal case by a court of law. Even though the defendant was not convicted, a dismissed case does not prove that the defendant is factually innocent for the crime for which he or she was arrested.[0]
Huh? You're innocent until you've been proven guilty. While the case is ongoing, you're still innocent. You aren't proven to be not guilty, sure, but you're still innocent.
Like I said - if you're not given due process, did the crime not occur? Or did you not get justice because of a corrupt institution? The discussion is about getting the evidence judged on its merits (or lack thereof), which has not happened in this case, and looks unlikely to happen at this rate.
You provide facts while you're applying for the court to take the case.
Not having standing means you haven't argued that there is a case, since you haven't been wronged. These are transparent already - there are documents where the crackpots have shown themselves to be crackpots already. Just because you don't want to read doesn't mean it has to be presented in your format of choice.
They could also take this case to Russian courts or Canadian ones, but what exactly are you expecting them to do? Courts have a limited scope of what they rule on.
> Actually hearing the cases at both the state and federal level, and then throwing them out on the facts presented, instead of refusing to hear them on "lack of standing" to begin with?
Standing is a threshold issue without which there is no legally cognizable case to hear, but I suppose if you are backing a campaign against democracy, being against the rule of law as well isn’t surprising.
It's not anyone’s fault but Trump's that the main party that actually has standing to bring a case challenging the results as improperly reached (the candidate who claims to be improperly defeated) keeps voluntarily dismissing the ones they initiate rather than trying to get them to trial, almost as if they are filing cases as a PR strategy with no intent to actually litigate them.
There’s refusal because it’s not sincere. If we did all that and Biden still won then it would just lead to the next conspiracy that the judges didn’t listen to the facts really. And the signature matching wasn’t stringent enough. And then they would argue that they didn’t let everyone required speak at the proceedings. Next they need to actually interview everyone who did a Mail in ballot and if they don’t show up then the vote doesn’t count. Etc...
And having standing is pretty important in something like this.
As I said, I’m not comparing the candidates and judging their behavior. I’m comparing what the public believed: “Two out of three Democrats also claim Russia tampered with vote tallies on Election Day to help the President – something for which there has been no credible evidence.” The point is to understand what’s the effect of people believing the election was not fair.
> It's strange for Americans to be on the business side of a propaganda weapon, as we're more used to using against our enemies.
Funny that no top replies pounced on that starting axiom. The USA has been widely known as a massive producer of propaganda, aimed first of all at its own citizens, even today. From the military cult, the pledge at schools, the commie witch hunts, and the globalisation hammering, and of course all your religions and sub-religions, you all have been living, and keep living, inside daily propaganda.
Yes. The difference was, back then, the US government had a monopoly on propaganda in the US. I'm not saying that's great, but at least it's stable. Now we're in an age where propaganda literally goes to the highest bidder.
Propaganda is a weapon, and I'd rather my own government hold it, than unknown entities.
What happens when the "weaponized misinformation" ends up being true?
From the article: Hunter Biden was announcing that his “tax affairs” were under investigation....That news was denounced as Russian disinformation by virtually everyone in “reputable” media, who often dismissed the story with an aristocratic snort....That tale was not Russian disinformation, however, and Biden’s announcement this week strongly suggests Twitter and Facebook suppressed a real story of legitimate public interest just before a presidential election.
So Trump’s campaign manager shuffling internal campaign polling data to a Russian Intel Officer (the Senate Intel Committee’s characterization, not mine), while Russian Intelligence is busy waging a directed psyops campaign against the American public, and then lying about it to the FBI is not evidence of collusion? Trump’s close associate Roger Stone had advanced knowledge of hacked DNC materials from Wikileaks and told Trump, but not the FBI, then lied about it to Congress. Not evidence of collusion? Trump’s son, son in law, and campaign manager met with a now convicted Russian spy in Trump’s house, who laid out terms for delivering materials damaging to Clinton - relaxed sanctions. Trump lied about this meeting’s existence to the American public. Not evidence of collusion?
To summarize:
Trump knew Russia was trying to help him. He knew they wanted to offer dirt on Clinton in exchange for better relations. The hacked information was indeed delivered, meanwhile Trump’s campaign was funneling internal data to Russian Intel, which was attacking the election. And surprise surprise, the Republican platform that year was softer on Russia than it ever had been in the last 40 years. Suddenly Republicans don’t care about Russian annexation of Crimea? For no reason at all?
No evidence? Really? Let me guess, you only read the Bill Barr memo, which a Federal Judge has noted seemed to be intentionally misleading of the Mueller Report. I guess you also haven’t read the Senate Intel report, which lays out 900 pages of evidence related to Russian interference of the 2016 election, and the Trump campaign’s entangled involvement with it.
Maybe you don’t believe they colluded, but let’s remember that the original, official Trump campaign line was that they had zero contacts and zero deals with Russia. This turned out to be a complete lie.
To the extent that you think those things constitute evidence (and some of them are clearly wrong/lacking context/misinformation), you should also believe that these things constitute evidence of voter fraud:
- Jurisdictions with more votes than voters
- Dead people voting
- Individual alive people who claim to not have voted, being counted in the voter roll
- Literal videos of ballot counters marking ballots
- Thousands of "mail-in" ballots that have literally one mark, only for president, all in sequence of each other, without even being creased
- Thousands of claims of harassment and blocking of poll watchers
- Thousands of votes being flipped by the polling machines, all in the direction of Trump -> Biden
- Statistical analysis of the unlikeliness of the authenticity of the ballots in the contended jurisdictions, all of which stopped counting in the dead of the night, when they subsequently received mass piles of fresh ballots that leaned unrealistically heavily in one direction
- Non-enforcement of the only way to sort-of authenticate ballots: signature matching
- Not necessarily "evidence," but the unlikeliness of republicans winning literally every single hotly-contended house seat, yet losing support for the president
Nah. It's about whether or not we are okay with giant corporate tech companies deciding what is and what isn't "weaponized misinformation". I'm fully up to speed on the history of the United States using information as a weapon abroad, but what makes me uncomfortable about this situation is that tech folks seem to have dropped our usual skepticism about that strategy, and are somehow now okay with this happening domestically. Maybe I'm being naive, and this skepticism was never really justified, or is easily overridden by feelings about "making the world a better place". Whatever. It's still going to be weird to me to watch my friends who work in tech, who in literally any other circumstance would be freaking out at the prospect of tech companies deciding what you get to see on these gigaplatforms, jettison their skepticism in order to "protect democracy".
>And if you think the "American spirit," or somesuch nonsense, makes us immune, I hope we won't have to find out.
Not sure this invocation makes the point you think it does. Seems to me the "American spirit" you're invoking in the negative is one built on a mythos of telling totalitarians, be they government or corporate, to go fuck themselves. Recent history gives one good reason to be skeptical, given the corporatism and financialism rampant in the US economy. But that's still the mythos. I've still got a little Faith left.
This is a really underrated point. This whole site is really "tech industry news" with some hacking sprinkled in, but its name at least is based on the idea of the hacker ethos. That whole thing about decentralization, distrusting authority, information being free? A curiosity for how things work and logical investigation rather than letting someone else do your thinking for you? Disinformation is a huge problem and I understand the impulse to grab for the easiest available solution, but this is fool's gold and anathema to what hackers and Western democracies claim are the foundations of their belief system.
I really can’t think of anything more alien to the hacker ethos than, “A group of unknown people at a technology corporation should be the ultimate authority on what I’m allowed to say, read, or share with my friends.”
Disinformation is indeed a difficult problem, but there are enough smart people in the world to figure it out without resorting to authoritarianism. We can do better.
What happens next is your platform either gets zero traction, or it gets ridiculed as an alt-right conspiracy-theorist shithole, which is basically a self-fulfilling prophecy.
Don’t forget plants/trolls. People who don’t like a platform will go there and act like the worst human alive just to get it banned or looked down upon. How many news stories now manufacture proof by quoting “one user said.. and another said..”, and therefore we have proof that this platform is or allows xyz. It’s effective.
I'm pretty sure being critical of efforts to censor information is part of the hacker ethos ("information should be free").
I think this is actually why the Twitter approach ("some or all of this tweet is disputed") is a good way to go. You're providing more information—links to people who disagree—rather than hiding information.
Misinformation is more akin to me saying that I want gcc to ship my malware. No one is saying you can’t ship malware, but gcc doesn’t have to be your vehicle.
This is a classic problem of "who decides?" Unlike malware, lots of things initially written off as misinformation turn out to be true. I'm not going to argue that YouTube doesn't have a legal right to do what they're doing, but assuming that they're specially knowledgeable about truth and falsity strikes me as an incredible act of hubris.
All of that said, I recognize that disinformation and misinformation is a serious and tricky problem. I think the American right partially has themselves to blame for throwing so much mud at the wall. I understand YouTube not wanting to be hijacked and used as a political vehicle. But at the end of the day they function as a commons, and I think society is worse off when they censor content like this.
They also prevent sending links to known phishing sites. They prevent the spread of malware.
Yes, this Trump-idiocy is malware.
Yes, but this moderation needs oversight in the open: a list of banned stuff, with explanations.
Also, not surprisingly, alternative platforms quickly sprang up to serve and host that audience. Though they might eventually get kicked off Cloudflare, and so on.
HN is fairly equivocal on the concept of walled gardens as a safety measure, despite the lack of accountability. There's some consensus that it's perfectly fine for e.g. Apple to prevent you from installing something outside the app store, or to put up major hurdles to installing anything unsigned on MacOS, because Grandma and Grandpa can't be trusted not to do something silly and get pwned. "It's not a hacker device!" is the usual refrain.
Yet the consequences of having your brain be pwned are so much worse. There are people who believe the Earth is flat and that vaccines cause autism; some of these beliefs can cost lives. Maybe YouTube shouldn't be a "hacker platform" either, but a place where people can watch videos without fear of being led down a rabbit hole.
At some point you have to trust adults and allow them to make mistakes. I get protecting children from bad information, but adults should be trusted with their own lives. We should trust them to have a basic level of comprehension and logic by a certain age. If they don’t, we need to revamp the education system, not cater to the lowest common denominator.
It's the same problem as the "both sides" approach to anything [0] ("my ignorance is just as good as your knowledge").
This inherently just moves the problem to decide who's informed (or equivalently what's the required level of "informedness").
Many jurisdictions routinely suspend people's voting rights. Due process and all. Of course when it comes to giving them back after they've served their sentence the process somehow slows down. So I'm fully aware of the downsides of this.
I'm not advocating for doing that to any concrete group of people, I'm advocating for working on this problem. It's not the first time this has come up in history, nor the last.
The long term solution is education. Yes. Is there even a short-term solution? Maybe not. However I'm interested in the details of best arguments for and against.
And I'm not convinced at all that just because someone is older than X years they now have to be "trusted". After all we should protect elderly people from bad information too, they seem to live their second childhood.
> They're private platforms. You can send those links via many other routes ...
That is a complete non sequitur. You say it's not about freedom of speech. Someone responds that, in fact, blatant censorship is occurring. You don't even attempt to refute this point, instead falling back to pointing out that the censorship isn't illegal!
Censorship reduces freedom to speak. That statement remains true whether or not the speech happens to be legally protected, and regardless of how widespread the censorship might be.
Removing spam could be considered a form of censorship. It is removing the speech of others.
Generally anti-spam measures facilitate rather than inhibit freedom of speech. A sufficiently popular internet forum without spam controls would quickly become mostly unusable.
In this case, doesn't censorship enable freedom to speak?
These aren't singular global quantities. Such censorship reduces spammers' freedom to speak in order to preserve that of the other participants. Spamming closely resembles a tragedy of the commons (overuse of the system to solicit sales) and anti-spam an associated regulatory action.
The problem with such an analogy is that spam is inherently off topic - approximately none of the other participants actually want to see it. That's fundamentally different from this case. Whether you deem it misinformation or political speech, many of the participants clearly do want to see it. In fact, they want to see it so much that such information is consistently selected by the automated algorithms that are designed specifically to maximize engagement metrics.
We should be careful not to conflate wanting to see something with clicks. By that metric, spam about free bitcoins has more interested participants than much of the political speech in question.
It's not a non sequitur. Freedom of speech is not the same thing as a (nonexistent) right to post whatever you want on a private platform regardless of the consequences for others or for the platform itself.
I never said it's not censorship. You can post links on a number of competing services (or start your own), so statements like

“A group of unknown people at a technology corporation should be the ultimate authority on what I’m allowed to say, read, or share with my friends.”

overstate the case.
> Freedom of speech is not the same thing as a (nonexistent) right to post whatever you want on a private platform
Again with a non sequitur - I never claimed that it was. I said:
> > Censorship reduces freedom to speak. That statement remains true whether or not the speech happens to be legally protected
It's really hard to have a good faith discussion about the pros and cons of a nuanced issue when one of the parties repeatedly fails to make good faith interpretations of claims which appear to challenge their worldview.
Don't worry, you're free to speak your mind so long as you don't actually try to communicate with anyone. Please take care not to express your opinions outside of the officially designated free speech zones!
Don't be ridiculous. There are thousands of competing communications providers. If you want to share content that harms society or harms the platforms themselves then you might just have to do it outside of Facebook or Twitter.
> It's about freedom of Reach, not freedom of speech.
What a snappy cliche. If you prohibit certain people from using the printing press but allow others to do so, then in practice you are limiting their freedom to speak relative to other people. To imply otherwise is either disingenuous or profoundly misinformed.
Everyone has access to the modern equivalent of a printing press. Anyone can buy a domain name and a VPS and "print" as many leaflets as they want.
Publishing on YouTube is more like, well, publishing. There's a middleman. They own their own press, they have a reputation and an audience, they bring the eyeballs, they make the money and they give you a cut. It has never been censorship for a publisher to decline to publish something.
YouTube, Twitter, Facebook, Reddit, etc (and to a lesser extent search engines) are the modern equivalent of the printing press in terms of the effect they've had on how we communicate. A domain and VPS are simply not a viable substitute for access to mainstream social networks; to claim otherwise is disingenuous.
They are not at all similar to publishing. There's no editor. There's no approval process for the typical use case, only a retroactive removal process. They don't have an audience in the traditional sense of people paying someone to curate information for them but rather depend on network effects to maintain a monopoly on their segment of the market. To that end, they have more in common with a dating app than they do with the New York Times. The presence of advertising revenue is the only legitimate similarity I see to a traditional publishing model.
In spite of your claim that YouTube isn't infrastructure, it appears to me to have far more commonalities than differences with it. That it isn't (yet) regulated as such is merely a legal peculiarity from my perspective.
(And the above doesn't even begin to consider the effects that dumping VC and megacorp funded free product has had on the market. Good luck starting a competing platform when there's no viable way to operate a subscription model and your direct competitor has a monopoly on the relevant advertising market.)
The person I responded to did, in fact, directly imply this. Recall that I had compared the impact of modern mainstream social media to that of the printing press historically. Directly ignoring my central point clearly places your comment in bad faith.
"Freedom of reach" is nothing more than a thinly veiled attack on (cultural, not legal) freedom of speech (and liberalism more generally) for the reasons I've already articulated in this and nearby threads.
Original person you responded to here - I did not, in fact, imply this. My thesis is that your analogy is broken. We agree that having a domain and a VPS is a poor substitute for a voice on a major social network; likewise, owning your own printing press is no substitute for, say, a regular column in a popular newspaper. It's incorrect to frame it as forbidding access to technology, when what it really is is a middleman refusing to do business with you. We can debate about the precise nature of the middleman, but the presence or absence thereof is the defining feature. You CAN publish without Facebook. You CAN'T publish (paper) without a printing press.
It also bears noting that the gap in access to publishing technology has radically narrowed - it is WAY easier and cheaper to buy a domain and a VPS and publish your thoughts to the entire world without any content middleman, than it was to procure your own physical press and set up an operation to print even thousands of leaflets, let alone publish something with global reach. You have access to - pretty much - all the same technology that Facebook does.
No. Those are the very definition of publishers, not communications providers masquerading as publishers when it's politically convenient for them (recall the dance around Section 230 protections).
If a local newspaper ever somehow became the central point of communication for a significant fraction of the population, posting nearly everything they received by default with very little to no curation, then it would be reasonable to reexamine the expectations placed upon them by society.
I think the idea is making things harder to access.
The failed idea behind making drugs illegal is that they'd be harder to access and fewer people would do them. There's evidence this didn't work.
I think the idea here is the same: if the big platforms ban certain types of egregious and harmful misinformation (the kind that gets recommended to people susceptible to conspiracies), these companies are hoping fewer people overall will watch this content, and, probably more importantly, that they won't have their brands tarnished if anything bad comes of it.
Will it work? Who knows. They're companies. Why can't they do what they please as long as it's within the law? Does the law require YouTube to keep up all content? YouTube has a Terms of Service and has censored lots of content for a long time. Why is there such a big distinction between beheadings, child porn, anti-vax, hate speech, and election misinformation? These companies already have a history of censoring "political" content for foreign countries.
I don't disagree, but the difference with drugs is that people specifically seek them out, and there is a lot of effort involved in making that happen across the whole supply chain.
The problems with social media are largely around removing all friction from the process, and adding a huge dose of virality and highly specific targeting. The former means people get a vast amount of information with minimal effort, and the latter enables targeted propaganda which then gets magnified in echo chamber bubbles by the algo.
The individual elements all existed before, but the combination creates pretty unique effects. I'm also skeptical these companies can effectively address the problem since these properties are what has fueled their growth.
Some other people actively seek out suppressed information, even in face of higher barriers. Otherwise, we would still live in an ossified world of aristocracy and Church; the secular and republican ideas were once banned and persecuted by the powers-that-were. But they failed, all over Europe.
You are right that bans increase the friction of the process, but they also act as evolutionary pressure. After some time, the best dissenters will get really good at conveying information against all odds. A good example is Iran, where pretty much everyone can find out what is going on despite all the efforts of the Islamic Republic to crack down on unauthorized news.
Take QAnon. I don't think it naturally spreads as virulently as drugs do. But when you add Facebook into the mix, people can disseminate this information instantly to people who just don't understand how to vet sources. The comparison to drugs just doesn't work.
It's not about making information disappear, it's about recognizing that humans are very susceptible to mass manipulation (see cults, mass hysteria), and it is becoming an increasingly powerful strategy and weapon in our age of rapidly growing new communications technologies. How to combat this is debatable for sure - I don't think this Youtube stuff is the most effective solution - but I don't think we can just apply old principles to new problems without first acknowledging the real challenges it's meant to address.
Your argument is unlikely to have anything to do with the reason for YouTube's decision. They're thinking about their brand. Do they want to be branded as the company that lets anyone post videos about the election being stolen without any evidence? They get to choose between "yes" and "no" and the choices lead to very different brands.
The same point comes up again and again when porn is removed from [website]. "How dare they remove porn! I don't see anything wrong with it!" Sometimes it's even looking down their noses at the obviously primitive American culture that is the problem. Of course porn won't disappear from the world, and of course people that like porn will be disappointed, but it won't be that company's brand.
The reality is if YouTube was committed to free speech this wouldn't even come up at all. People would 'brand' YouTube this way about as much as they do Chrome for letting you browse websites claiming the election was stolen, or web search for letting you find it, or your ISP for letting you download the HTML. Nobody would give a crap, they'd just argue about other platforms where the owners were engaging in censorship.
Political censorship isn't just morally wrong, it's bad for business and bad for your brand too. The moment you allow that camel to put its nose under the tent, you're inviting in a whole world of pain. Suddenly every zealot out there will be demanding you silence the people who disagree with them, because they know that maybe if they make enough noise you will. They'll attack your brand hard because they know if they do you might cave. A hard core libertarian "everyone is served on principle" attitude keeps the fighting at bay because people know it's not worth putting in the energy.
Well, on the flip-side the previous thread from the announcement blog received the most comments I have ever seen on a HN post. About 3000? And most of those were displeased reactions.
A lot of people are talking past each other in these posts or arguing against straw men. In my view, many of the seemingly "pro-censorship" posts are hacker-spirits, just arguing from the point: "look, this is a natural outcome of unrestricted, powerful BigCos." I don't think a monolith like YouTube was ever an acceptable solution for the Hacker. We should build technical, distributed systems for sharing videos that can't be controlled as easily, and then fight for the right to use them at the ballot box.
The culture has changed around here in the last 4 years to favor centralized regulation of communication, which is a dizzying reversal of what it was about every year before that.
> I find it really disconcerting how many people on Hacker News want a corporate entity to decide what information is deemed acceptable to know.
It's up to a corporate entity to decide what should be on its platform, though. The issue in the case of YouTube (and other social platforms) is that a few players get most of the cake, so their decisions have wide effect. It is this last point that should be addressed, in my opinion.
It's a free market. YouTube can host videos it wants to or not. Likewise you can get a $5 server and host what videos you want to or not. That's different from "decide what information is deemed acceptable to know."
The weirdest thing about this whole discussion is the number of people that want to impose more regulation on private entities, but who are trying to couch this as some kind of anti-censorship crusade.
The problem is YouTube's algorithm -- recommendations. If we accept the algorithm, then there is no middle ground: either you ban this content from the platform entirely, or watch misinformation spread wildly.
But there could be a middle ground in which the platform changes how its algorithm works. I think neither YouTube nor its detractors are considering this as a serious possibility, for some reason.
They don't consider alternatives, because the algorithm is optimized for view counts. Sensationalism has always brought more views than "good" content. Youtube would need to forgo a lot of ad revenue to change the algorithm.
The disconcerting part is that right here in this thread the Americans are still arguing as if this is a battle with two teams, one of which wins, the other loses.
So here's the thing.
The US [social media] corporations have to lose, simply because we already found out years ago that ethics doesn't scale. UNLESS they solve this problem, they simply OUGHT not to exist at that scale.
The American people are of course inevitably coming out as the losers too, because of the gigantic inequalities, and the corporate capitalist culture grazing on the weak.
What's more disconcerting to me is sectarian violence fomented by propaganda, even to the point of the breakup of a nuclear-armed state, the US. Preventing that is of the highest priority.
“You’ll only create further echo chambers, both on your own platform and on the (new) platforms inevitably created by the exiled.”
In case anyone is wondering, this isn’t just hypothetical. The website full30.com was created specifically because YouTube decided that talking about guns wasn’t advertiser-friendly. There weren’t that many gun-related videos that YouTube actually removed, but they started to de-monetize just about any video that focused on guns. So the affected creators didn’t stop making videos, they just made their own content delivery solution.
So it’s not just a hypothetical, it has already happened and the other echo chambers are out there with a warm embrace for the affected content creators. The only thing YouTube is effectively doing is making its bubble more insular, so that Californians can act more surprised the next time the country elects someone like Trump.
A couple of months ago I watched a video by Wranglestar where he spoke about his "ammo dispersion units" in an attempt to keep the video from being demonetized or banned.
Censorship by definition includes not only banning and deletion, but “suppression,” which fits denying monetization to particular content on a site otherwise designed for it.
For example, it wouldn’t get around the First Amendment for the government to say “we’re not banning this book, you just can’t charge money for it.” (I understand that Google as a private actor is permitted to engage in censorship while the government is not. My point is that demonetization is clearly censorship, because if the government did it, it would be illegal.)
Demonetization is not censorship, imho. YouTube is supported by its advertisers, and those advertisers don't want to be affiliated with guns. It makes sense that YouTube won't pay you for videos it can't monetize; it's just passing the loss on to the video producer.
Beyond demonetization, YouTube did categorically ban videos demonstrating how to install and/or fabricate an autosear. In practice, this also extends to purely educational videos explaining how an autosear works.
So, the central point (or one of them, at least) of Manufacturing Consent is that news/media organisations are shaped by what advertisers desire, such that the reporting is mostly friendly to a capitalistic worldview.
This is 100% analogous to what Google are doing with respect to demonetising youtube videos, and troubling for exactly the same reasons.
Removing one perverse incentive at a time is still progress, but I agree that, without punishment for misinforming the public (and it's delicate to draw lines there), the problem won't go away.
I think you should extend this thought a little bit further.
If tech companies start deciding that certain gun content (to use the example in the parent) shouldn't be allowed, that will of course have an effect on the general population's opinion about guns and gun rights given their market control of online video. To me it's fairly obvious that this channel of media can be used to control what people think is "normal" or "allowed".
At the risk of Godwinizing the thread: one of the first oppressive operations against Jews in the Reich was the campaign "Kauft nicht bei den Juden" (Do not buy from Jews).
It did not carry the weight of the law; you COULD still go to a Jewish shop and buy something, but the message was that society disapproves, the state disapproves, and you are a dirty being for doing so.
The result was the severing of an important daily interaction between Jews and non-Jews. Once such casual contacts were severed, it was easier to convince society that all Jews were evil, without people having doubts like "but ironmonger Katz is such a nice person!" They did not know ironmonger Katz anymore, only a caricature of his people.
This is not the same thing. The campaign you refer to was created to harm a large group. Demonetization targets only the content makers but not their audience.
What, do content makers make videos purely for themselves? That's the whole point of monetization.
As rayiner said, the government can't get around the First Amendment's guarantee of freedom of speech by only allowing the seller of a work to do so for free. Demonetization of certain topics on a blanket basis means that content creators have less incentive to make videos that they would have made otherwise, harming their potential audience.
It's similar to any other corporate/institutionalised group - imagine the "<College_name> Anarchists", or "<College_name> Communists", or so on. "Hacker" News, run by one of the most institutionalized Venture Capital companies out there, is simply marketing at this point.
Luckily, the quality of discussion for tech matters, and the players in the industry that participate, still make this place one of the best ones on the internet for technical discussion. But don't expect the political/historical/cultural/non-technical discussions to be any more insightful than any other group of people having opinions on those subjects.
Yeah, it's kind of weird. The technical (and many other discussions) are often quite insightful here. It's always fascinating to get into some obscure topic and have a world-renowned subject-matter-expert suddenly drop in to give their two cents or clarify what they meant in some quoted piece.
I'm quickly learning to just steer clear of the political discussions though. They're usually a very jarring reminder that intelligence in one or several areas does not preclude intellectual dishonesty and intentional ignorance in another.
This mirrors my experience perfectly. Conduct on HN is uncommonly civil (for the internet) across the board - props to Dang and whoever else moderates things. Insightful discussion, however, is almost entirely limited to the extremely technical submissions. From biochemistry to compilers to machine learning, they consistently attract participation at a truly impressive caliber.
Sometimes I wonder what HN would be like if it were somehow possible to preemptively block the majority of the "fluff" articles that make it to the front page. I guess there's no way to automate such a determination though, and even if there were any such action would probably anger the majority of the user base.
> I find it really disconcerting how many people on Hacker News want a corporate entity to decide what information is deemed acceptable to know. Do you even know where that word came from? The irony is almost overwhelming. But I digress...
Many on HN are the ones implementing these features, and therefore have a bit of power over the rest of us. Further, judging from cancel culture, the left seems to want to force their opinions on others that have different ones. So this makes pretty clear sense to me.
Actually, in the age of ML it more or less does. You wire up the model, specify the metrics to optimize for, and then feed it lots of data. The algorithm figures out the details of how to achieve the specified goal on its own. Have a look at (https://cs.stanford.edu/people/karpathy/convnetjs).
I suspect you're being rhetorical, but the algorithm and specific metrics to use are selected by the developer. The data is entirely user generated - it's the result of collecting the metrics over some period of time. The trained model is the result of feeding the collected data into the chosen algorithm.
The point is that the algorithm is, for all practical purposes, tuning itself. The developer has essentially selected a black box to feed the data into, told it what to optimize for, and given it the ability to wiggle a bunch of unlabeled knobs. Which knobs it should tweak and in precisely what way is never specified by the developer. Instead of "show the following things to the following users", the developer just says "maximize number of videos viewed per visit" and the algorithm tweaks whatever parameters have been made available to it until it finds something that works.
Unfortunately, "something that works" is often not what we might have liked. ML is a bit like a Djinn, fulfilling wishes in an unpredictable and borderline malicious manner.
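To make the "unlabeled knobs" point concrete, here's a toy sketch (everything in it is hypothetical: the three made-up content features, the simulated click-through, and the hill-climbing loop stand in for a real platform's vastly more complex system). The developer supplies only a metric; the optimizer discovers on its own that shifting weight onto sensational content raises it:

```python
import random

random.seed(0)

# Hypothetical content features: (accuracy, production_quality, outrage).
VIDEOS = [
    (0.9, 0.8, 0.1),  # careful, factual
    (0.7, 0.5, 0.4),
    (0.2, 0.3, 0.9),  # sensational
]

def normalize(w):
    """Keep the knob settings on a fixed budget (they sum to 1)."""
    total = sum(w)
    return [x / total for x in w]

def engagement(weights):
    """Simulated metric: in this toy world, outrage drives clicks."""
    score = 0.0
    for accuracy, quality, outrage in VIDEOS:
        rank = (weights[0] * accuracy
                + weights[1] * quality
                + weights[2] * outrage)
        clicks = 0.2 * quality + 0.8 * outrage  # simulated click-through
        score += rank * clicks
    return score

# Black-box hill climbing: wiggle the knobs at random and keep whatever
# raises the metric. The developer never says "promote outrage".
weights = normalize([1.0, 1.0, 1.0])
for _ in range(5000):
    candidate = [max(x + random.uniform(-0.05, 0.05), 1e-6) for x in weights]
    candidate = normalize(candidate)
    if engagement(candidate) > engagement(weights):
        weights = candidate

# The optimizer ends up pouring weight into the "outrage" knob, not
# because anyone asked for that, but because that's what the metric
# rewarded.
print([round(w, 2) for w in weights])
```

Swap in a different metric and the same loop will chase something else entirely, which is the Djinn-like quality: the wish is granted literally, with no regard for intent.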
In YouTube's case it may, in part. If you give it the wrong objective, like "make people look at more videos", it will optimize its suggestions for exactly that.
It would be useful to give it a different objective, say, to "suggest content that disproves what you just saw using verifiable facts", but that'd be too hard without curating lists of such proofs and sources.
> The idea that banning certain information will somehow result in it disappearing has been shown repeatedly to not work.
The goal is not to make it disappear, it's to make the knowledge have a social cost. Normal people won't touch it, and they provide the 80M votes you need to win control of the government.
> The idea that banning certain information will somehow result in it disappearing has been shown repeatedly to not work. A cursory reading of history should make this clear. You cannot turn the entire world into West Coast USA by banning everyone that disagrees with you. You’ll only create further echo chambers, both on your own platform and on the (new) platforms inevitably created by the exiled.
And in the process you’ll create a system that will eventually alienate enough people in the middle to undermine you. Banning discussion of this—instead of letting the courts do their work—will alienate the half of the country that voted for Trump. It’ll alienate traditional liberals for whom free speech (not in the narrow legal sense but the larger social sense) is a core value. And eventually the system you’ve created will do something to overreach. The desire to censor the other side won’t end here. And at that point, the majority of people will find themselves on the other side and you’ll have a problem.
Traditional liberal values were a good thing and we should hesitate to abandon them. Liberals defended Nazis marching through American cities in the name of free speech. If we could handle that, we can handle baseless claims of election fraud that are being swiftly dealt with by the courts.
Speaking as an American who didn't vote for Trump, either time, you've certainly alienated me. You'd (collectively) best hope you're making more friends than enemies.
>Texas GOP chair says 'law-abiding states' should 'form a union' after SCOTUS rejects election suit
I feel there may be a real point to that. Is he being a bit emotional with his post? Sure. Should that be enough to write off a possible threat to democracy? No. I mean if that alienates you, I expect what the TX GOP chair did will REALLY make you passionate
Trump support is more about disillusionment and distrust of career politicians and frustration at the steady decline of blue-collar prosperity than it is about actually believing his mouth words.
>The idea that banning certain information will somehow result in it disappearing has been shown repeatedly to not work
Maybe it didn't work in the old days.
But now maybe the CEO of Cloudflare will wake up one morning, decide he doesn't like your politics and block your DNS.
Now you might wake up one morning and find that Mailchimp have terminated your account because they don't like your politics (Molyneux).
Now your social media Twitter clone might be banned from the Apple/Google store because they don't like what people are posting on it (Gab). As if you're a publisher... which you're not under Section 230...
Now your GoFundMe gets pulled if whichever leftist is running GoFundMe decides they don't like you.
Now the payment companies won't process your payments (too many examples to name).
Now, despite being the fourth largest newspaper in the country, your twitter account will be disabled indefinitely for posting "hacked" "russian disinformation" that turns out to be neither hacked nor disinformation. And we find out after the election that the Attorney General of Delaware has been investigating this matter for over a year.
Now some dipshit moderator on Hacker News will shadowban you permanently on a whim.
Your google docs will be blocked for violating the terms of service.
Youtube will demonetize you.
There is nowhere to hide from the dystopia that every fucking retarded left-winger here is so eager to embrace.
It's like being in the middle of the red scare, but everyone says they're ok with telephone companies listening to the bad people's conversations and disconnecting their phone lines if they mention communism. Because, you know, they're "private companies" or something. The only cold comfort is that it will surely be used against them one day.
You have an incredibly well put message here, but then you went and tacked on those last two paragraphs that border on uncivil, are polarizing, and (IMO) fail to add anything meaningful.
>Because, you know, they're "private companies" or something.
Wait so are you for them being able to do whatever they want, where you can be banned from their service? Or do you want them under the arm of the government where they could be regulated? I think you're just mad at the culture. Trump isn't gonna fix that
Those aren't mutually exclusive. The government can simultaneously regulate specific behaviors of large entities where there is reason for concern while otherwise largely leaving them to do whatever they want.
I don't think GP is necessarily suggesting that Trump will address their objections or that they personally support him. Rather, I read it as suggesting that much of his support may in fact be due to backlash against such cultural trends on the left.
These types of articles are always insane to me. YouTube is a private company. They're pretty much allowed to moderate the content on their platform in an arbitrary way. I would argue that this is a good thing.
We can't have it both ways, but we want it both ways. We want a non-government-controlled way to communicate, but we want to mandate that they're liable to uphold the same First Amendment protections that the government has to provide. That's insane, that's not a good standard to pursue, and we should instead weigh the pros and cons of either reinstating the Fairness Doctrine or creating a government-run social media website where information can be freely shared and subjected uniformly to First Amendment protections.
The only time Taibbi brought up government intervention in the article was to oppose it. Specifically he criticized a new bill that would "require that political ads or content produced by foreign governments be marked by disclaimers, and that companies should remove any such content appearing without disclaimers. It would also expand language in the Foreign Agents Registration Act (FARA) requiring that any content intended to influence U.S. citizens politically be reported to the Department of Justice."
Saying something is bad is not the same thing as saying something should be illegal.
I think there needs to be updated thought leadership on what qualifies as "common carrier" and what doesn't.
Whether gov't or private, if a platform is primarily promoted as an open communication channel then I think that some, if not all, aspects of "common carrier" laws should apply.
Social media platforms would not have the trust and would not have been adopted as quickly if they started off censoring as much as they do now. In that regard, they presented themselves as open and impartial, then shifted once they achieved sufficient critical mass.
YouTube censors content all the time; pornography and pirated content are just the most obvious examples. Those who want such censored content need only look to other sites to find it.
In my opinion, YouTube should (and does) have content standards, and be extremely free in how they determine them. People certainly have a right to publish and consume conspiracy theories, but I don't understand why YouTube has anything close to an obligation to host or promote them.
Why is it insane to criticize the decisions of a corporation? Activists of all stripes regularly criticize corporate behavior that is completely legal and within their rights as a private company. Is there some substantial difference between arguing a corporation should go out of its way to accommodate the spirit of environmental conservation, and going out of its way to accommodate the spirit of open discussion? You shouldn't presume such criticism is lobbying for a change to the law (the original article does not).
This reflexive “YouTube can ban whatever they want” is just a way for people to stop thinking about the issue to avoid cognitive dissonance. Aside from a few hard-headed libertarians, no one who says that believes it in any context other than the censoring of conservatives.
right, in principle there is nothing wrong (imo, at least) with a private company deciding what can/cannot be said on its online properties. we might, as individuals, pressure a company to change its moderation policy, but we should not seek to outlaw moderation.
of course, this doesn't work so well in practice when all the viable platforms are controlled by a few huge companies that are more or less ideologically aligned. in my view, this is more of an antitrust issue than a freedom of speech issue. moderation on social media would be a non-issue if there were a reasonable range of platforms to choose from. hateful and/or misleading content would also be far less impactful if uploading a single video to YouTube didn't have the potential of reaching an enormous audience.
I don't disagree with this. It is a monopoly problem. However, what this article suggests and a lot of other solutions suggest is patching the symptom and not getting to the root of the problem. We've lost our stomach for stimulating competitive marketplaces and forgotten that a sufficiently competitive market will settle on an optimal outcome. That does mean regulation is required and it does mean that breakups are required. That's treating the root cause and not just the symptom.
So solutions should be focused on lowering the power of YouTube by making it easy for competitors to challenge it, but it seems to me the opposite is being proposed: with every new regulation that companies like YouTube have to follow, the harder that regulation is to comply with, the harder it is for competitors to get into the market (so these regulations essentially build a moat around YouTube). That doesn't seem to me like the right goal here.
Why can't the phone company listen to private conversations? What does being paid have to do with their enforcement of their own property rights? Couldn't they just say in the EULA "we will listen to your calls and disconnect you if you say something we don't like"?
There are laws against eavesdropping on private electronic communications.
> What does being paid have to do with their enforcement of their own property rights?
Because paying for a service establishes certain rights and expectations that patronizing a free/ad-supported website does not. In other words, payment creates a contract.
> Couldn't they just say in the EULA "we will listen to your calls and disconnect you if you say something we don't like"?
I suppose in theory, though it would be difficult to show that all parties to the call agreed to such a EULA.
> There are laws against eavesdropping on private electronic communications.
Likewise, people are discussing reform of the laws that govern the behavior of giant media properties. Surely you agree those laws (against eavesdropping) were passed for a reason and are not arbitrary?
> Because paying for a service establishes certain rights and expectations that patronizing a free/ad-supported website does not. In other words, payment creates a contract.
People are not basing their criticisms on the violation of a contract but on the public good of the free exchange of information.
> I suppose in theory, though it would be difficult to show that all parties to the call agreed to such a EULA.
"By continuing to stay on the line, you agree to the end user license agreement available in full online or by mail upon request."
> Surely you agree those laws (against eavesdropping) were passed for a reason and are not arbitrary?
I'm puzzled by the question. My position on wiretapping laws is beside the point.
> People are not basing their criticisms on the violation of a contract but on the public good of the free exchange of information.
Again, this is puzzling because it doesn't seem related to the topic at hand. You asked a direct question (why is paying different?) and I gave a direct answer (because contracts).
> "By continuing to stay on the line, you agree to the end user license agreement available in full online or by mail upon request."
Doesn't cover minors, who can't consent to binding contracts, and I doubt it would be popular with customers anyway. Can you imagine having that play every time you make or receive a phone call?
> I'm puzzled by the question. My position on wiretapping laws is beside the point.
No, it isn't. Spartan-S63 seems to be advocating [0] that YouTube can moderate their platform however they like. souprock suggests [1] that the same logic allows the phone company to disconnect anyone who criticizes the phone company. You say [2] that doesn't apply because that would require breaking the law. I'm observing that the law you refer to places limits on the phone company, which is what people are suggesting with respect to YouTube. Section 230 reforms might limit the ability of YouTube to do what they want with their platform, just like the laws governing telecom utilities were written in order to balance the interests of the public and the consumer against the interest of the telecom company to do what they want with their property.
> Again, this is puzzling because it doesn't seem related to the topic at hand. You asked a direct question (why is paying different?) and I gave a direct answer (because contracts).
I'm asking why you think paying or not is relevant to the issue of Section 230 reform. People aren't advocating for the reform of Section 230 because their contracts were violated. They're advocating for the reform of Section 230 because they believe it's harmful for companies like YouTube and Facebook to exert this much influence over the discourse without any liability. Payment or lack of a contract is immaterial.
> Doesn't cover minors, who can't consent to binding contracts, and I doubt it would be popular with customers anyway. Can you imagine having that play every time you make or receive a phone call?
Yet GDPR exists and there are popup windows on many webpages because of the legality of the tracking measures employed by advertising companies.
> These types of articles are always insane to me. YouTube is a private company. They're pretty much allowed to moderate the content on their platform in an arbitrary way.
Now post “YouTube is a private company and I support their right to ban BLM”
There is a difference between "allowed to" and "ought to". It's a question of ethics, not law.
In my opinion it feels immoral for someone who has this much control over human interaction to act in a way that undermines free discourse so blatantly. To see an American company ignore the value of free speech is disheartening to me. YouTube is so popular and I wish they took a different stance on these issues.
Isn't that part of what's critical to have a free market? Sure, there are market inefficiencies when you get to the level that trusts and monopolies exist, don't get me wrong.
Rather than patch the symptoms of the problem, attack the root cause: If companies are acting anticompetitively, stimulate competition in the marketplace. If companies are allowed to make arbitrary decisions (which they should in a free-enough market), seek alternatives, such as government enterprises.
If folks don't like payment processors denying payment or ISPs denying service, seek forming a government enterprise that _can't_ deny service to any citizen.
In a theoretical world, sure. In reality it is government actors that are twisting the arms of tech companies to remove unapproved speech. So I'm not so sure a government-controlled ISP/payment processor/video platform (something like China?) would really fix the situation.
As long as it isn't due to a protected class reason, I don't immediately see the problem. If you break your ISPs TOS, or you commit fraud, it makes sense that those private companies would not want you as a customer.
If this is your argument then you should be against laws that prevent discrimination against a protected class. If the markets are efficient enough to account for free speech issues, i.e. someone can just start another facebook/youtube, the markets should be efficient enough to correct themselves for companies that decide to discriminate against protected classes, i.e. someone can just start another facebook/youtube.
That's not a reasonable analogy. a) Youtube's decision (wise or not) is not arbitrary. b) Neither video creators nor viewers are "customers" of Youtube.
Even more insane is the idea that YouTube are "blocking content" and therefore treading on people's "freedom" to publish whatever they want.
Before platforms like YouTube existed, the only way to get anything seen or heard by a wide audience was to go through a big TV or radio network, which of course filtered what it would or would not air. Just because you filmed a video about a crackpot theory DOES NOT give you the right to air it to millions, and never in our history has a person been able to do so.
If you want to stand in Times Square holding a sign about your theory, you're free to do so. If you want to buy your own TV channel and air it, you're free to do so. If you DEMAND some TV network run it, you're gunna have a bad time.
>These types of articles are always insane to me. YouTube is a private company. They're pretty much allowed to moderate the content on their platform in an arbitrary way. I would argue that this is a good thing.
They are allowed to, but they should be criticized for being hostile to the concept of free speech. Especially since nobody believes YouTube endorses what is posted on Youtube videos.
Separately, their monopoly should be broken up. We can't let a company dictate all political discourse in the country.
Can you call something a private company when it is the de facto monopolistic standard? What other video social media sites even remotely come close to competing?
When that happens, you're either basically a utility for the public, or a monopoly that needs to be broken up. This whole genre of "the First Amendment protects companies even though they control the market" reasoning is just absurd. It's exactly like defending the railroads for not wanting to ship black people because they don't have rails for "colored folk" and, since they're a "private company," they can do what they want.
If I were a foreign actor with malign intent towards the US, I would be very pleased with how things are playing out.
1. Corrosion of formerly stalwart American Institutions
2. The majority of Americans being habituated to mistrust their fellow American
3. The ability for a few entities with concentrated power to harness and control the nervous systems of millions of Americans
> If I were a foreign actor with malign intent towards the US, I would be very pleased with how things are playing out.
Then you wouldn't have thought enough about it. Look at what happened when the USSR collapsed: the world became less stable, and nuclear tech and scientists found their way into unsavory countries.
There were some civil wars or frozen conflicts that are still a pain today (Transnistria, breakaway republics in Georgia, the whole Ukraine thing, and more recently Azerbaijan vs. Armenia).
When the enemy has nukes you want him to be a known quantity, stable and predictable.
To anyone reading these threads who is disconcerted by all the apologists, rationalizations, and blindness: Hello, this is what modern totalitarianism looks like. The whole point is to shake the foundations of your principles, and make you doubt yourself.
Corporations that control 90% of the internet, and therefore public discourse, are hiding behind the technicality of being private entities, and therefore can enforce whatever rules they want. This can't really be disputed, but it's obviously unethical, and obviously used in bad faith. They are no longer just companies, but quasi-states.
Some people saw this coming in the early 2010's, but no one really cared. We saw people's indifference to being tracked by these companies and governments with Snowden's leaks. We saw the transformation of Google, YouTube, Facebook, and Twitter from supposedly "open platforms" to editorialized publishers, and arbiters of truth.
This is exactly what we all signed up for, without caring about the consequences. This is only going to get worse.
Some people have called for a decentralized internet, where there is no overlord, but truly open platforms. That's the modern counter-culture, or modern activism. Decentralization won't solve all our problems, but it would make it easier to work on them in good faith.
Things I'd like to encourage, from a pragmatic view:
- Stop using the corporate internet.
- We must hear people out, even if they're wrong.
- We need to remove the addictive and exploitative systems currently ingrained in social media design.
- Move from a perspective of "us versus them" into a mindset of compassion for others who have valid grievances that are not being addressed.
So, I'm going to pull out good old Harry Frankfurt here and his "On Bullshit".
If the one you debate is debating in good faith, you are right.
If the one you debate is bullshitting, you are wrong.
"Bullshitting" is not caring about true or false, but simply arguing for an effect.
If someone argues in good faith but is "wrong", hear them out. In particular, it is easy to dismiss someone who proposes horrible solutions to problems. Just realize, the problem may be very real, even if their proposed solution is horrifying.
On top of that, people will propose straight up mad solutions, if they feel nobody cares about their problems.
Just recognize that we are talking about two very different things here; bullshitting and being "wrong". And they need to be handled differently.
So this "good faith" requirement seems like a huge gaping loophole, just a wonderfully easy way to dismiss and even censor a ton of people you merely disagree with. Whether or not they are arguing in good faith is so subjective and therefore the question of who you should hear out becomes the usual popularity contest: if you are sufficiently unpopular, it will be easy to convince people to dismiss someone as arguing in bad faith if they already dislike that person.
One example might convince you that this "good faith" rule is too restrictive if you agree with me on one premise, that ~90% of politicians regularly argue in bad faith. Yet most of us who agree with this do not as a rule claim that we will never hear them out.
I think it's encouraging that there's a way to categorize what's happening, because it suggests an alternative route, or at least a way to engage with it without feeling crazy.
These are strange times, and a lot of smart people seem to be voting in favor of their own defeat. A tough question would be to ask why, and I think it's a combination of factors: an atmosphere of insecurity and wanting a strong figure to end it, naive idealism, wish fulfillment, cultural alignment, etc.
Smart people can find ways to rationalize bad things. There's no sense in getting caught in the quagmire of trying to convince someone on the internet that we're headed down the wrong path. Instead, we can actually create an alternative that's just plain better.
So, maybe it's not all so bad - but more of a call to action.
Well, I didn't disagree with you. I was just hoping you had a solution. I don't have one.
For me I keep hoping people will have an epiphany: "Could my contempt for half of the country be at least part of the problem?"
It's always "other group" who is instigating Bad, and "our group" who is fighting for Good. To many it's a literal bare-knuckled fight against "evil", not realizing or not caring that their own tactics and outlook are themselves destabilizing. The media is just feeding into it. I don't see any easy way out.
In one post you might have everyone absolutely berating Facebook for allowing the platform to be used to spread misinformation, in another you have everyone berating Google for trying to disallow it. :/ Is there anyone discussing what to do about this trade off and not just yelling? How do you protect people from weaponized information, should you, can you? It seems like there should be consequences to lying and spreading invalid information as fact, but who should do it and how?
The problem is that the two sides have fundamentally incompatible moral stances that stem from different premises. There's no long-term consensus to be had here: we can allow "a little" free speech and "a little" hate speech, but neither side will accept that as a permanent state of affairs; at best, it would be a truce.
Though I too am surprised that YouTube is setting such a precedent for fine-grained moderation, we cannot pretend as if this is an "infringement of our rights." YouTube is running a for-profit website, you don't have the "right" to a spot on their platform.
YouTube has made the careful calculation that the people who are turned off by this ban are far outweighed by public support, and will reduce their contribution to toxic behavior and instability in the country.
> you don't have the "right" to a spot on their platform
Who are you quoting? Those words, or that sentiment, don't appear in the article. If you're quoting a comment then reply to that comment. And if you didn't read the article then consider reading it next time.
You don't have a "right" to a spot on their platform but you sure as hell have a right to complain about it and speak up against it when appropriate - which is what this piece is attempting to do.
There seems to be a lot of "there seems to be" talk in the comments. I have yet to see one comment actually calling this move a violation of the first amendment.
I don't have to support it, I'm acknowledging that YouTube is allowed to do it. Framing it as being "against our rights" is fruitless. The article author claims that this will inflame tensions, but that's unlikely.
I really doubt there's any careful calculation involved with this. YouTube is run by Susan Wojcicki who's a woman at Google and thus untouchable. She's repeatedly imposed her own left wing views on it despite this obviously attracting severe and negative attention from the ruling parties at the time, despite there being no obvious business case for it and despite many of her mandates making no sense and being unenforceable. "Anything that disagrees with the WHO" was an especially idiotic one because the WHO routinely disagrees with itself.
Basically, the reason this thread is about YouTube and not Google is because Google now has weak and basically absent CEO leadership, so it's split into fiefdoms where different moral standards apply. Chrome is content neutral, Gmail too, Android mostly but a bit less so and YouTube is flat out "if you aren't like Ms Wojcicki you're immoral and should be banned". A strong CEO would sort it out so things are at least consistent but Pichai is useless, he seems to spend his time doing whatever he thinks will get him shouted at the least.
A lot of discussion about whether YouTube censorship is valid or not, carries an implicit assumption that the marketplace of ideas is up to the task of sorting fact from fiction.
It isn't, at least not right now, because of technologies like YouTube's recommendation engine. When everyone thinks that virality is a signal of truthiness, an algorithm which amplifies outrage to generate virality undermines the marketplace of ideas.
YouTube isn't going to address the source of the problem because they would go out of business, or at least have a few very bad quarters. What they can do however, is pretend to moderate the marketplace and do it poorly enough that we stay focused on one another; instead of on their algorithm that literally converts outrage into money.
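The mechanism described above, that a ranker optimizing purely for engagement will surface outrage-heavy content, can be illustrated with a toy sketch. To be clear, this is not YouTube's actual algorithm; the field names and weights are entirely hypothetical, chosen only to show how "engagement" as an objective rewards content that provokes clicks, comments, and shares:

```python
# Toy illustration (NOT YouTube's real recommender): rank videos by a
# simple engagement score. Because outrage reliably drives comments and
# shares, an outrage-heavy video wins even against a more popular-by-
# clicks calm video. All numbers and weights are made up.

def engagement_score(video):
    return (
        1.0 * video["clicks"]
        + 2.0 * video["comments"]  # arguing in the comments counts as engagement
        + 3.0 * video["shares"]
    )

videos = [
    {"title": "calm explainer", "clicks": 100, "comments": 5, "shares": 2},
    {"title": "outrage bait", "clicks": 120, "comments": 80, "shares": 40},
]

# Recommend highest-engagement first.
ranked = sorted(videos, key=engagement_score, reverse=True)
print([v["title"] for v in ranked])  # "outrage bait" ranks first
```

Note that nothing in the objective refers to accuracy or truth; the score is agnostic to content, which is exactly the complaint in the comment above.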
>When everyone thinks that virality is a signal of truthiness, an algorithm which amplifies outrage to generate virality undermines the marketplace of ideas.
Nailed it!
>YouTube isn't going to address the source of the problem because they would go out of business
In a way, we (the people) are getting exactly what we asked for... "Democracy is the theory that the common people know what they want, and deserve to get it good and hard." H. L. Mencken
I have encountered few people who have said “we shouldn’t have free speech in society”. Perhaps your argument needs more nuance to reflect the complexity of this issue.
I have seen fifteen or so such top- or second-level comments on this and the previous thread. It was about one apologist to every two critics there, and one to four here. It is by no means a rare perception here.
yeah but i wonder... the main argument here is that youtube is private and it can do whatever it wants.
but if it's true for youtube, why can't it be true for a hosting company or a service provider? can't they just shut down my server or my ip? they are private after all...
is there a legally safe way to speak freely on the internet? or is our freedom of speech limited only to photocopies?
> It is amazing to me that there are so many people, smart people even, that are adamant that we shouldn't have free speech in society.
This can be attributed to nothing more than personal political bias. There is no way these people would be arguing for censorship if the censorship favored the other side.
It also makes me want to move. There is no advantage to being an American without the freedom of speech and press.
Nowhere on earth. If freedom of speech and press are over here, and it certainly looks like they are, then why not move to China or Singapore? You could have the same level of censorship, but a more effective political system and more opportunity to grow wealth.
Freedom of speech in the broadest sense (as in "the freedom of an individual or a community to articulate their opinions and ideas without fear of retaliation, censorship, or sanction") isn't binary, and cannot be absolute. There are some forms of expression that we almost universally agree shouldn't be allowed: incitement to violence, "shouting fire in a crowded theater", depictions of sex with minors, etc. are all unprotected from restriction.
In that sense it's like the concept of freedom in general. It's slippery, difficult to define, and has fuzzy boundaries. We're not always going to agree on the extent of each freedom. Where does your freedom end and mine begin?
In this particular case, how big does a company need to be before we start forcing it to host content that it doesn't want to host? When is it ok to force someone to provide a universal platform... to tie their hands so they can't moderate as they see fit?
Do I have a right to stand on your stage and shout whatever I want at your audience?
You have a right to say whatever you want to whomever you want without fear of government retaliation, as long as it isn't "hate speech" in a fairly narrow legal sense. This is what "having free speech in society" has always meant.
The government will also punish people who commit crimes against you merely because they dislike your words, which frees you to say things you might otherwise keep to yourself.
What you don't have a right to do is amplify your message using the massively powerful social-media megaphones which are created and owned by tech companies. If you want to amplify a message that these companies find strongly objectionable, you'll have to build your own megaphone, or find one owned by someone with different views from Google/Facebook/Twitter.
That said, I wonder if it's sufficient to reduce the amount of amplification rather than outright censoring things the company disagrees with. For example, YouTube could let conspiracy videos be hosted but leave it up to viewers to discover them rather than recommending them in search or "related videos". It seems likely that censoring will have unintended consequences.
> You have a right to say whatever you want to whomever you want without fear of government retaliation, as long as it isn't "hate speech" in a fairly narrow legal sense. This is what "having free speech in society" has always meant
Wrong twice.
First of all, the United States doesn't have laws against hate speech. The US Supreme Court has said hate speech is protected by the first amendment. So, you're protected from government retaliation, even if it's hate speech.
Second of all, that's NOT what "having free speech in society" means. It's what the first amendment means. In American culture, writing op-eds for a privately owned newspaper is a classic example of free speech, going all the way back to revolutionary times (and actually predating the constitution). In the past (before the internet), prominent newspapers would brag that they would publish editorials from any significant public figure, even if they detested that public figure, because free speech was a good thing. That's why the NYTimes has published so many editorials from dictators.
> the United States doesn't have laws against hate speech
Interesting, thanks for enlightening me! Canada does have laws against hate speech and I'd mistakenly thought the US did as well. In any case, speech intended to incite lawbreaking, whether a hate crime or otherwise, is not protected under the first amendment. If you tell your Twitter followers to go burn down all the Jewish-owned business or whatever, and they do, your speech is illegal. So there are at least some limits.
And I agree that freedom of speech goes beyond the first amendment. But the "significant public figure" part of that is important: not just anyone could get published in the New York Times; you already had to have reach and notoriety. They wouldn't have published articles by some random basement crackpot moon conspiracist, because that person wouldn't have had a chance to get famous in the first place. They probably also don't publish opinion pieces with easily disproven counterfactual statements. YouTube's algorithm, by default, makes no distinction: any idiot can go viral and get tons of reach on a video loaded with outright lies. YouTube has provided a platform that makes the spread of misinformation much easier, and in my opinion it's their responsibility to take action to correct that.
> In any case, speech intended to incite lawbreaking,
The standard is "imminent lawless action." You can in fact advocate lawbreaking, including violent lawbreaking (e.g. people advocating punching all Nazis are protected).
Only if they did so or were likely to do so right away.
I think the standard could change, considering that when that standard was set, such advocacy couldn't be effectively spread without a trail of publishers and broadcasters that would be vulnerable to direct retaliatory violence.
The article you are commenting on did not propose any new laws, nor did I in my comment. Did you mean to reply to somebody else or to comment on another article?
I don't think the government should require tech companies like Google to support free speech. I think Google should support free speech because it's the right thing to do. I believe the author of this article feels similarly, just as the editors of major American newspapers did from about 1600 to 2005.
What happens when all megaphones are either government- or large-corporation-owned, and whenever you try to speak on your own, the nearest megaphone just yells over whatever you have to say?
Was that ever not the case? Was there a time when any person with any message could reach the whole country with their message?
The scarce resource here isn't freedom of speech, it's people's attention. As always, you can say whatever you want, and as always, it's hard to get people to listen to you. Before the internet, you basically needed an already-famous person to take an interest in you in order to get noticed. Now a message can go viral organically, without needing to involve any journalists or talk-show hosts, but the platforms are free to stop something from going viral if they don't like it. Facebook giveth and Facebook taketh away.
But to answer your question, I think breaking up the platforms would be a better solution than treating them as de-facto public services and trying to regulate them as such. Or appropriate them and make them into actual public services :)
It's a matter of relative power to spread the message, not absolute. So it's not a question of whether any single random person could reach the whole country, but rather how many people that single random person can reach, and how much those who control the means of mass distribution of information can use that control to reduce the reach of a single person.
It's a very good argument for 1950s-style anti-monopoly legislation, and as someone who defends platforms' rights to have some degree of moderation, it follows that we should have as many platforms as needed to satisfy demand.
Absolutely. I don't think that private censorship per se is problematic - the problem, rather, is that it amounts to monopoly abuse in the present configuration of the market.
(Although I'd prefer an economic system that makes monopolies fundamentally impossible or very unlikely - i.e. one that constrains the ability to concentrate capital.)
Google/Facebook/Twitter have become monopolistic de-facto utilities. The "you'll have to build your own megaphone" argument is no longer valid, especially when they're attempting to impact election results while cashing in on governmental protections that minimize their risk.
It used to be that the tech companies were champions of free speech, the ideal. That changed when generation woke entered their workforce. There has been a generational change brought on by changed values in American universities. That's what's really happening.
> We asked whether people believe that citizens should be able to make public statements that are offensive to minority groups, or whether the government should be able to prevent people from saying these things. Four-in-ten Millennials say the government should be able to prevent people publicly making statements that are offensive to minority groups, while 58% said such speech is OK.
Even though a larger share of Millennials favor allowing offensive speech against minorities, the 40% who oppose it is striking given that only around a quarter of Gen Xers (27%) and Boomers (24%) and roughly one-in-ten Silents (12%) say the government should be able to prevent such speech.
What a retconning farce this is. 40 years ago you could not publicly endorse recreational drug use, atheism, or interracial marriage without a substantial block of society chastising you as an incorrigible heathen.
Values change, what people find acceptable and what they do not find acceptable changes.
>What a retconning farce this is. 40 years ago you could not publicly endorse recreational drug use, atheism, or interracial marriage without a substantial block of society chastising you as an incorrigible heathen.
Imagine if the value that changed was that we treated people with different opinions with respect instead of changing the opinions that get you run out on a rail.
I'm not 100% sure either. I think maybe GP is suggesting that rather than being stalwart champions of free speech as you implied, boomers are just more racist and want to be free to say racist things.
Fortunately for all of us, it's not the general population of millennials or boomers who decide what speech the government gets to prevent; it's experienced justices, who tend to take the constitution much more seriously than the average person.
The statistic that 40% of millennials want laws preventing statements offensive to minorities is kind of baffling to me. Do you think they just don't understand the constitution, or do they want it rewritten to allow this kind of law?
That's an interesting definition of capitalism that excludes the part where the capital was accumulated. Probably that's why the libertarians say "free market capitalism" in order to exclude the mixed-market and autocratic versions.
> "What you don't have a right to do is amplify your message using the massively powerful social-media megaphones which are created and owned by tech companies."
What makes anyone think that other groups won't be able to create their own social-media megaphones? The tech is well understood already, and they're highly motivated and well funded. It's been tried a couple of times already, and eventually they're going to succeed. And if the last two US presidential election results are any indication, they might grow as big as (or bigger than!) any of the other social-media giants. What will we do then?
Thank god for all of the people at Youtube working on censoring misinformation. They are so much smarter than me and obviously better than me at figuring out what is true and what isn't! And isn't it great that they always act selflessly in favor of the truth and never in the narrow interest of Google? I don't know what I would do if I didn't have them there to protect me from the dangerous misinformation. I am so helpless on my own.
I'll take the bait in the spirit of casual Friday-night internet debating, recognizing your tone as purely sarcastic. I would ask then...
What if there was in fact an organization comprised of people smarter than you and obviously better than you at figuring out what is true or not - and I would go further - one that can tell you how to make better/more optimal decisions than you would have made without them?
Would you then cede your decision making to them if it were demonstrably better?
Would you argue that you should be free to make measurably worse or less optimized decisions for yourself - on the grounds that the value of you being unbiased is more important than you making a better decision?
This is probably my favorite topic where optimization and normative economics converge, so I'm curious the replies here.
> Would you then cede your decision making to them if it were demonstrably better?
There are, and I use them constantly in the form of domain experts. So does everyone. For example, I would never assume a plumber is wrong and I am right about pipes, though I may get a 2nd opinion.
> Would you argue that you should be free to make measurably worse or less optimized decisions for yourself - on the grounds that the value of you being unbiased is more important than you making a better decision?
Yes, and this is the case in most areas. There is nothing preventing me from representing myself in court, for example. I think this would be analogous to Youtube adding content filters to block different categories of potentially undesirable content, including their filter for misinformation, and allowing the user to opt in or out. If Youtube did this I would not object, even if the default was to turn the filters on.
Should you be free to make (acknowledged) sub-optimal decisions for yourself that adversely affect others?
Anticipating the "no - if it's illegal" argument, how would that be philosophically consistent with the freedom to choose, free from having a group of people who are "smarter and better" choose for you?
This is the challenge of externalities and market pricing.
>Should you be free to make (acknowledged) sub-optimal decisions for yourself that adversely affect others?
Yes, this is what personal freedom is about. Optimal-ness of decisions does not matter.
There are some things we agreed as a society are not only suboptimal, but harmful enough to warrant restricting some of that freedom. Note the "enough": we hold (or at least used to hold) freedom above some of the consequences that may result from its existence. Ideally this standard of "enough" should be pretty high.
The argument against censorship is that censoring what people discuss is harmful both to that freedom (fundamental, according to how we base our society) and to society as a whole, sowing dissent among the groups that are singled out.
EDIT: I also think your assumption is wrong, in that it is dubious whether there are people whose opinions are simply better. Just look at the history of science, where most of the (now recognized) big breakthroughs were first dismissed as absolute stupidity by the established experts.
>Anticipating the "no - if it's illegal" argument, how would that be philosophically consistent with the freedom to choose, free from having a group of people who are "smarter and better" choose for you?
As for this, society is hardly philosophically consistent. At some point a consensus has to be reached. Here practicality forces us to abandon philosophical perfection in order to preserve social order.
To be clear, you're claiming that the virtue of personal freedom is making choices that have adverse effects on others? I'm not familiar with that being a part of any philosophy.
>To be clear you're claiming that the virtue of personal freedom is making choices that have adverse effects on others
Yes, I am saying personal freedom allows for potential adverse side effects. (Edit: Or why would you be allowed to drive a car, own a gun, insult someone?)
> I would never assume a plumber is wrong and I am right about pipes
Hahah, great example. I want an explanation of everything -- from plumbers to doctors. I wouldn't start out with the assumption I'm right and they are wrong, but how do I tell a trustworthy one from an untrustworthy one? I got ripped off by an auto shop who sold me a bunch of pointless work because they were having a slow week -- it's not an academic question.
> What if there was in fact an organization comprised of people smarter than you and obviously better than you at figuring out what is true or not - and I would go further - one that can tell you how to make better/more optimal decisions than you would have made without them?
No, because our interests would probably not converge on many things.
The kind of orgs you describe already exist, but they're not benevolent, they don't act for the good of humanity.
I’m not going to answer your exact question and instead respond with an alternative: Nobody blames AT&T/Verizon when an absurd scammer calls you, or a grandpa wires money to a Nigerian scammer because of a phone call.
Think long and hard about why you blame Facebook/YouTube for the same. You’ll inevitably coalesce around the idea that these “platforms” did it to themselves by arbitrarily banning legal content.
I blame the carriers, of which AT&T/Verizon are major players, for allowing spoofing of numbers on their networks, which lets these bad actors flourish.
> What if there was in fact an organization comprised of people smarter than you and obviously better than you at figuring out what is true or not - and I would go further - one that can tell you how to make better/more optimal decisions than you would have made without them?
> Would you then cede your decision making to them if it were demonstrably better?
No. Nobody should ever have to cede their decision-making authority. If this organization is so smart and right, they should have no trouble finding a way to convince people without coercion.
Even if they are smarter than me, are they perfect? Is it impossible for them to be wrong? Of course not. That's why we need multiple sources. Nobody is right about everything all the time. All of this talk about "misinformation" ignores the possibility that you might one day be a source of misinformation, so it is vitally important that people be able to disagree with you.
I think OP's question was whether one would voluntarily defer to some groups' decisions. Speaking about TFA, Youtube isn't coercing anyone, technically. There's no negative externality (punishment) being implemented, only the deprivation of a service that they provide. One might argue that since Youtube is effectively a monopoly, it's different. The counterargument is that people who want a different platform are free to go elsewhere. This appears to be happening with Parler as an alternative to Reddit, for instance. To bring things full-circle: "remember, the players are not your customers -- they are your new boss." https://news.ycombinator.com/item?id=25376714
If they were truly right, you could weigh the evidence they present and you would be convinced it is correct because they are so smart and compelling. Then you would make the decision that their evidence suggests. There would be no need to cede any authority to them, they should be able to convince you on the merits of their arguments.
True. It needs to be traded against the cost of doing all that research for yourself, though. It all depends on how much you enjoy researching a subject vs. how important the conclusion is to you vs. how much you trust others to get it right.
"Would you then cede your decision making to them if it were demonstrably better?"
I would definitely consult them.
Who would be responsible for the consequences of a decision of theirs were it to turn out worse than if I had decided? For instance, what if they decided to purchase for me, using the money that I ceded to them, a house that unbeknownst to them turned out to have unhealthy air - whether from mold or radon or asbestos - and it had a deleterious effect on my health? Would they be financially responsible for - correction, invested in - my rehabilitation? Would they sit late at night by my bedside in the hospital, agonizing over my fate along with my family?
I think resistance to the idea of ceding control comes from the recognition that we alone are ultimately responsible for, and invested in, our own fates.
Gandhi said, “Freedom is not worth having if it does not include the freedom to make mistakes.”
Humans value freedom too much. Statistically speaking, we would rather feel like we are in control, even if we have objectively worse outcomes that way. Just think about air travel vs. driving: one is much more likely to die in an automobile accident per mile traveled, but air accidents are more terrifying because there's nothing one can do about it as it is happening. Humans are wired to be optimistic in the narrow sense of control systems theory: we believe we can control everything. Hence childhood beliefs in telekinesis, but also the reason children blame themselves for their parents' divorces.
Then what? Allowing false information to fester is wrong, censorship is wrong. The way forward is to ban recommendation algorithms and go back to personal (as in, from other humans you know) recommendation systems.
> Allowing false information to fester is wrong...
That sounds good, but it is not true. If the CEO of YouTube were a committed Hindu, the public would demand that he overlook Buddhist/Muslim/Christian/Atheist content on his platform. Tolerance of people we truly believe to be wrong is a key value that needs to be preserved.
The sort of people who talk about "false information" in their policies tend to be running more authoritarian style countries - they don't actually know what is true and false, but they do know what they want to hear to promote stasis in leadership.
What about people who earnestly believe that genocide is justifiable? What should YouTube do with that content? Or people advocating euthanasia of mentally handicapped individuals? I'm not saying I know a good line, but there clearly is one.
The algorithm here really is the perpetrator. The insane things I've had recommended to me by Youtube after watching rather boring philosophy videos or a bit of history is amazing. One of their recommendations was literally full on nutter ranting at a camera - and it came up on a video of a guy who builds and tests medieval longbows!
Sort of a paradox, personally. I think the algorithm has gotten better, though not perfect, about recommending fewer batshit insane videos to me. And I appreciate it.
But things like the Hunter censorship absolutely cross a line and are enraging. Banning (even loosely defined) hate speech is not the same thing as being the arbiter of truth.
I'm inclined to agree, but that is not the direction things seem to be headed. Dorsey called for increasingly personalized algorithms before congress a few weeks ago, while Zuckerberg basically implied FB would continue to acquiesce to congressional requests for censorship.
People want Youtube to be free and open...and remove things that aren't true.
They want Facebook to open its social graph so other social networks can compete...and take privacy seriously so there isn't another Cambridge Analytica.
I definitely see how YT content policy is a fraught subject, but it's unbelievably naïve to worry that it risks radicalizing a group of people already giving life to a 17-state effort to nullify the popular vote before an artificially packed Supreme Court. I'm willing to err on the side of not placating a faction of society that is already sold on fascism as our best path forward.
So what happens when "false" information "conspiracy theories" turn out to be true? How should it be handled when people in the position to censor information choose to do so in bad faith? (e.g. they lied about it being fake, or they weren't sure it was fake but they chose to censor because it aligned with their political beliefs) And what if the censorship impacts were time sensitive? (e.g. they negligently censored sensitive election-related information immediately before an election, and then decided to report it as legitimate news shortly after the election - after the censorship goal was accomplished)
"Oops we made a mistake" is not even close to an acceptable response. I say we treat free speech more like civil rights, harsh penalties and all.
From the article: Hunter Biden was announcing that his “tax affairs” were under investigation....That news was denounced as Russian disinformation by virtually everyone in “reputable” media, who often dismissed the story with an aristocratic snort....That tale was not Russian disinformation, however, and Biden’s announcement this week strongly suggests Twitter and Facebook suppressed a real story of legitimate public interest just before a presidential election.
“If you want a population of people to stop thinking an election was stolen from them, it’s hard to think of a worse method than ordering a news blackout after it’s just been demonstrated that the last major blackout was a fraud.“
This might seem extreme, but I think we should allow people to make up their own minds about what is truth and what is not. There are very clear cases for censorship (direct threats of violence, for example), but questioning the outcome of an election should not be one of them. Somehow we’ve allowed corporations to be the arbiters of truth in this nation, and to control the reins of communication.
Questioning the outcome of an election is universally agreed to be perfectly acceptable. This is why we have a wide variety of opportunities for public participation and observation.
Filing dozens of outlandish lawsuits is certainly a way to demonstrate fealty to the administration, but as we're seeing, it's not particularly productive.
Funding propaganda efforts to exploit people's ignorance in pursuit of undermining public confidence in the electoral process is shitty behavior at best and sedition at worst.
Conflating all of these and describing them all as 'questioning the outcome' is unlikely to change anyone's mind about any of it, but it does serve to highlight a profound lack of understanding about how elections work.
In my youth we had the FCC fairness doctrine to prevent corporate media from descending into polemics, but that rule was removed in the 80's, and then the FCC abdicated oversight of internet content in general, so I'm not sure how any of this is supposed to be fixed at this point.
How would you suggest a company like YT can "allow people to make up their own minds" about an issue if they do not "provide a platform for views" of that issue that they "don't share"?
Unless you are just making the point that YT censorship is more acceptable than government censorship (which I think everyone agrees with), I'm not sure how you can have your cake and eat it too here.
Right. The actor matters a huge amount. We all agree on that.
But if you run a forum and say you will not allow any view to be expressed therein that you don't agree with, you aren't in fact "allowing people to make up their own minds"---unless by "allowing people" you just mean not actively harming them for expressing views in other places, or something equally wild.
Another way of saying it is that no one would say you are "tolerant" if you only "tolerate" behavior you agree with.
> But if you run a forum and say you will not allow any view to be expressed therein that you don't agree with, you aren't in fact "allowing people to make up their own minds"
I think you're missing their argument. YouTube censoring doesn't prevent information from being published elsewhere.
Yeah, I've definitely grasped their argument, which became clear once they clarified that "allowing people to make up their minds" doesn't mean "allowing content the owners disagree with" but rather "we won't try to criminalize this speech on other platforms." After all, if I were a publisher who refused to publish books with a certain viewpoint, in what other way could I say I (as opposed to the government, say, or the publishing industry as a whole) am "allowing people to make up their minds"?
(Sidenote: I do think YouTube and other corporate-run forums like that would welcome some regulation in the area, for two reasons: (1) they'd no longer be blamed for how they decide difficult content questions, and (2) it might make it more difficult for small startups to disrupt this space by increasing the legal barriers to entry.)
A lot of gymnastics in the comments here trying to justify the Youtube decision. Take a step back: If things are free-and-fair, then let transparency prove your point. As Taibbi correctly points out in his article, bending over backwards to STOP transparency and debate is simply fuel to the fire of the people who believe otherwise than you do.
You're assuming a hypothetical "informed viewer," who is able to consume multiple streams of information and make a rational choice. That's not what we're dealing with. The worst kinds of propaganda make their point with emotion, repetition, and invective. Most people can't tell the difference between that and real information.
Thus, sources that seek to manipulate people will win against those that are fair and honest. The question is whether we as a society are just going to accept a significant proportion of our population being manipulated into accepting counterfactuals from whoever has the most Twitter followers.
No I don't. Your point about "propaganda making their point with emotion, repetition, and invective" is a great descriptor of ALL media for the past several years, from all sides. If established elites are determining which media outlets are "correct" and which ones are "propaganda", then the disenfranchised, minorities, subjugated, and others who are outside this elite group have no way to amplify their voices without permission from the entrenched.
Then, when those groups realise they don't have legitimate means of voice, they either choose to exit (migration), or armed action.
This is the point Taibbi is making - don't back people into a corner.
You're right. I've been foolishly ignoring the subjugated minority of white, male, wealthy, Christian Trump supporters. Let's give a voice to this oft-neglected underclass.
It's great that you feel empowered and have been successful in this country. Lots of minorities haven't been, and for reasons outside of their control, based in historical inequalities.[0]
So do these minorities you reference have your and other elites' permission to post on Youtube? Or will they also get censored?
Noam Chomsky on Anti-Americanism: "In the Soviet Union, people calling out crimes of the state against its own people were called anti-Russian: Sakharov, Solzhenitsyn. That's a sign of totalitarianism. America is the only other country that does that. Imagine calling someone anti-Italian -- they would be laughed at." (paraphrasing)
Every time I hear "anti-American", I'm reminded of this
I don't think that saying anti-American is the same as saying un-American.
Plenty of people say un-Australian, and I imagine other English-speaking countries say the same thing (e.g. un-Canadian). It's a way of trying to portray a certain set of values as intrinsic to a particular national identity. In the context of free speech, it makes sense to frame something which some might consider to be a restriction on free speech to be un-American, given that the American constitution explicitly protects free speech. Calling it anti-American would be a different sentiment.
I hate politics in general... that said, this is such a dumb precedent imo, and anyone cheering it on is, I feel, short-sighted.
There was some quote about how unfortunately when you defend free speech, you're mostly sticking up for the worst of the worst.
Same thing here.
Personally, I hate how hard it is to find the really crazy conspiracy stuff these days on youtube. I remember you could fall down some rabbit holes and I loved that. I didn't believe a word of it... but all the same, it was fascinating to see these far out takes.
Russia has every right to spread disinformation in our country via an administration which was never brought to proper trial because witnesses were silenced. It’s a free country!
This is all the result of confusing 1st amendment rights with the right to access the audience that gathers at a particular URL.
What Taibbi is asking for is that the guy who tells you that drinking rat poison is good for you should be allowed an audience and that even putting a warning alongside the video would be an infringement on his rights.
Looking at the comments here I have to conclude that HN is no longer on board with critical thinking, much less common sense.
People are making normative arguments, not legal arguments. We are aware that this is legal, just like it used to be legal to discriminate in hiring practices against black people.
Taibbi is suggesting that if you can't trust the people not to post error and lies, how can you trust the oligarchs and officials? Especially if your access to alternative perspectives is limited. Read for yourself:
>> Cutting down the public’s ability to flip out removes one of the only real checks on the most dangerous kind of fake news, the official lie. Imagine if these mechanisms had been in place in the past. Would we disallow published claims that the Missile Gap was a fake? That the Gulf of Tonkin incident was staged? How about Watergate, a wild theory about cheating in a presidential election that was universally disbelieved by “reputable” news agencies, until it wasn’t? It’s not hard to imagine a future where authorities would ask tech platforms to quell “conspiracy theories” about everything from poisoned water systems to war crimes.
Those with platforms have always had the opportunity to lie to large groups, but extending that ability to every single person seems like an EXTREMELY BAD "solution."
Historically there's been a burden of proof for wild claims because it's been hard to get a huge mass audience. And people with those audiences were reluctant to repeat whatever wild bullshit was proposed to them if they couldn't vet it themselves.
If you didn't have your own credibility, you had to convince those who did to run your stuff. The cost of this is that it's slower to break things, and some stuff gets missed.
Unmoderated internet platforms with algorithmic jumps between otherwise-unconnected publishers let you borrow and hijack other people's credibility and platforms.
Why those platforms shouldn't be allowed to have editorial control - given that maintaining a certain reputation will still be critical for their long-term success - is beyond me and seems to have obvious un-American problems (infringement on their own private rights).
The trade-off being desired also seems fundamentally bad. More people being misled more quickly seems like a worse situation than slower breaking of news and the ability to suppress some stories, given that we were still able to break those stories you mention in the past. (Of course, I don't know what else might have been more widely reported in the past... I'm having to rely on a "we didn't feel like we were living in a totalitarian dystopia in the 60s-through-80s" assumption.)
The key word is "allowed". YouTube should be allowed to do everything they have the right to do. They have the right to stop providing all free services (unless they have contractual obligations). They have the right to ban all creators whose names start with "K". They have the right to add a 10-second delay to all page loads. They have the right to put Goatse on their homepage.
However, if they do any of the above things, the rest of us have the right to be disappointed, to think YouTube sucks, and to tell everyone else about it.
So, if they demonstrate that they have no respect for the principle of freedom of speech, we have the right to call them cowardly, un-American, probably unfair in their implementation, counterproductive even assuming their goals, etc.
> However, if they do any of the above things, the rest of us have the right to be disappointed, to think YouTube sucks, and to tell everyone else about it.
> So, if they demonstrate that they have no respect for the principle of freedom of speech, we have the right to call them cowardly, un-American, probably unfair in their implementation, counterproductive even assuming their goals, etc.
They aren't just saying it sucks; people and politicians are calling for a repeal of Section 230 of the CDA in a knee-jerk reaction.
They want to fundamentally shift the liability for user-created content online, effectively ensuring that hosting any speech becomes a massive liability for those without billions of dollars to comb through user uploads for illegal content.
As a business owner, I don't want to be raided by the FBI in the middle of the night and then go to prison because someone thought it would be funny to upload illegal content to my servers.
I am not a fan of repealing section 230. I think it's actually a pretty inspired piece of law for its time.
But its original purpose was to remove civil liability for platforms making an imperfect but good-faith attempt to remove illegal content.
The farther we move away from the original motivating case, the less clear it is to me that Internet companies need or deserve the protection afforded to them under the auspices of section 230.
Well, Taibbi hasn't mentioned Section 230; I see only two other comments mentioning it. Also, I skimmed an article that says most people don't understand Section 230 (or the context around it—it provides immunity for certain things, and therefore you have to understand "immunity from what?"), so I would hesitate to say too much about it. It's entirely likely that there are some prominent partisans who claim to be in favor of free speech but don't have a principled stance on the subject (e.g. think flag-burning should be illegal), or who are as ignorant as I am on section 230 and less averse to recklessly advocating for political measures they don't understand.
At any rate, as I doubt you'll be surprised to hear, I am also not in favor of business owners getting raided by the FBI because users uploaded illegal content. That sounds like a mechanism for crushing small websites who can't afford their own legal department, thereby protecting large websites against competition.
You reference a totalitarian dystopia, and yet you are salivating for widespread censorship to be applied. The great thing about the internet is the freedom of communication, which broke the monopoly of mainstream media. If people like you have your way, the internet will be as censored as cable TV used to be, in your blessed utopia of the 60s to 80s. Were you alive back then? Have you heard of the Vietnam War? It's not an exaggeration to say your ignorance and stupidity is staggering.
> Those with platforms have always had the opportunity to lie to large groups, but extending that ability to every single person seems like an EXTREMELY BAD "solution."
Why is it bad? Before, only a few people could lie to everyone and keep the majority in the dark, because the majority lacked access to information that would expose the lies they were told. Now everyone's voice is amplified, and the people who used to have this power are upset because people believe things they don't want them to believe.
> Historically there's been a burden of proof for wild claims because it's been hard to get a huge mass audience.
This seems like a non sequitur. Historically it's been hard for most people to spread wild claims because they didn't have a platform. What burden of proof are you referring to?
> And people with those audiences were reluctant to repeat whatever wild bullshit was proposed to them if they couldn't vet it themselves.
Isn't it more likely that they were reluctant to repeat stories unless it benefitted them? Yellow journalism predates the internet by almost 100 years.
> Unmoderated internet platforms with algorithmic jumps between otherwise-unconnected publishers let you borrow and hijack other people's credibility and platforms.
Perhaps. I'm not sure that I could hijack the credibility of (for example) Dr. Fauci by retweeting him. It's more likely that he could voluntarily lend me his credibility by retweeting me.
> Why those platforms shouldn't be allowed to have editorial control - given that maintaining a certain reputation will still be critical for their long-term success - is beyond me and seems to have obvious un-American problems (infringement on their own private rights).
The argument is that they have become large and commonly used enough that they are akin to a public utility. This is an open question and I certainly don't have the answer. Think of it as if the interstate highway system was owned by Procter & Gamble and they began to limit access to the interstate for carriers who delivered their competitors' products, or refused to allow left-handed redheads to access the interstate. A lot of people would say that in that case it would be an appropriate use of the government's regulatory power to nationalize or break up the "P&G Interstate" for the public good. Other people would say that it was within their rights as property owners to decide who they sold roadway access to. You'd have a situation where people's interpretations of fundamental rights conflicted because of technological advancement.
> The trade-off being desired also seems fundamentally bad. More people being misled more quickly seems like a worse situation than slower breaking of news and the ability to suppress some stories, given that we were still able to break those stories you mention in the past. (Of course, I don't know what else might have been more widely reported in the past... I'm having to rely on a "we didn't feel like we were living in a totalitarian dystopia in the 60s-through-80s" assumption.)
Consider that Manufacturing Consent was published in 1988.
> The argument is that they have become large and commonly used enough that they are akin to a public utility.
While this is an interesting conversation, you don't even have to go this far. You can just argue that rebutting bad ideas is more effective than censoring them and a good video hosting platform should value open discourse, and so YouTube should try to be as content-neutral as possible. If you convince enough YouTube users that open discourse is more important than censoring perceived falsehoods, then it might make more sense for YouTube to commit itself to open discourse.
Should you be forced to publish and host things that you think are terrible lies? If so, why? If not, why should YouTube?
Not making a comment on the validity of their claims, just trying to understand what you are saying - a private organization should be legally compelled to spend money to host ideas that they think are harmful? How does that work in practical terms?
> Should you be forced to publish and host things that you think are terrible lies?
Should I be forced to pay for public schools if I don't have kids or disagree with what they are being taught? Should I be forced to pay for roads if they will be used to support activities I disapprove of? What if I passionately and sincerely disagree with what people are using the roadway access to facilitate? Should I be forced to subsidize activities that I reasonably believe are harmful to the environment? What if I can produce peer-reviewed academic evidence supporting my point of view? Should I be forced to hire a qualified individual at my company if I have an opening if I don't like their religious beliefs? What if their religious beliefs involve arranged marriage or female genital mutilation?
Hopefully the list of rhetorical questions serves the purpose of highlighting the fact that our society and our system of government already compels people to support things they oppose, including terrible lies, harm to people and the environment, and various forms of abuse. Yet we haven't chosen to abandon this form of government for anarchy. For this reason I am unmotivated by absolutist private property arguments when applied to this issue. Thank you for your excellent question as it draws attention to a central part of the issue.
> Not making a comment on the validity of their claims, just trying to understand what you are saying - a private organization should be legally compelled to spend money to host ideas that they think are harmful? How does that work in practical terms?
I don't have the answers here, I'm participating in the conversation with the aim of moving it forward. How does it work in practical terms to require that companies hire and serve people even when the owner doesn't like their religion, ethnicity, or sexual orientation? In theory we write laws on the basis of balancing the values we hold dear. In practice lobbyists donate money to lawmakers for influence and lawmakers compromise with each other on things that they think are going to be practical to enforce and get them re-elected or some other form of benefit. Perhaps not in that order.
Government schools are pretty bad despite the ever-increasing funding afforded to them, and the teachers' unions wield enormous power. If we are going to fund anything, it should be the students directly (via vouchers or whatever), not the systems.
I can hardly imagine a government being held to account by its citizens if it also takes for itself the role of educating their children.
Thank you. This is exactly the basis upon which we justify the proposal to force YouTube to host content that their shareholders would prefer not to host. Or the alternative formulation, this is the basis on which we justify prohibiting YouTube from censoring political speech that their shareholders disagree with.
I think it's entirely reasonable for YouTube to be compensated if the polity concludes that it maintains a useful public service by providing a place for people to share videos, and it adheres to reasonable, objective guidelines on what content to permit and remove.
> A misinformed populace acts to destroy democracy.
Who is to be trusted with the authority to decide what counts as misinformation? Is it possible that a gatekeeper of information would find it easier to misinform the populace than a prominent person who had to contend with other dissenting voices?
It feels to me like both sides in this debate want to argue "of course it's patently obvious that I'm right". But services like YouTube and Facebook have no real historical precedents in terms of audience reach, and it's hard to think of a more rock-and-a-hard-place situation than choosing between "force a private company to treat their property like a public square no matter the cost to society" and "allow a private company to dictate what's allowed in a de facto public square."
So, yes, of course it's possible a gatekeeper would find it easier to misinform the populace. Over the long term, it's almost guaranteed. Yet it's also possible -- in fact, one can argue the probability is essentially 1.0 -- that refusing to have any gates will also misinform the populace. If we're looking for a blanket rule that will cover all possible situations in this new information reality, we're probably looking in vain.
The burden of proof lies with those who want to deny me rights, not those of us who want to maintain them. FB and YouTube have a right to remove content they don't want to host, just like me. If you want to take away my rights, you'd better come up with something really, really good.
That's not something legally recognized, though. You can't claim precedent when there is nothing equivalent. An education is one thing; it's something that's been recognized in court.
Now, if you want to claim that public sites like YouTube should be forced to host content they don't want to, from people they don't want to do business with, you'll have to explain how that equates with your established rights in the US.
More to the point: you'll have to explain why you want to take away my rights.
We're having a discussion about what the law should be, not what the law is.
> You can't say you base this on something when there is nothing equivalent.
The above discussion demonstrates that our society already limits property rights in the service of an informed populace, which is relevant to the issue at hand.
> An education is one thing. It's something that's been recognized in court.
Courts have recognized ballot fraud in the past as well; it's not some outlandish theory. It's why people are supposed to maintain chain of custody of the ballots.
> you'll have to explain how that equates with your established rights in the US?
Corporations are chartered by the government in exchange for certain privileges and limitations. We already infringe on property rights because an informed population is a public good. QED there is a precedent for requiring a public corporation to host certain kinds of content. For example, foods are required to meet labeling requirements.
> More to the point: You'll have to explain why you want to take away my rights?
The right to remove content is potentially dangerous to democracy. See recent events, where a bunch of Trump supporters are ever more convinced that an election was stolen, because people keep claiming there was no evidence while YouTube has a policy stating it won't allow anyone to post the evidence the Biden camp says doesn't exist.
> my rights?
As a GOOG shareholder, you exchanged certain rights for other privileges when the government chartered your corporation and allowed it to be traded on the public market while limiting your liability for criminal acts that might be performed by Google employees. In a sense, you are taking the benefit of profit while being legally protected from liability for the actions that might generate that profit. That arrangement comes at a cost of increased regulation by the government.
No, that would be the basis for the US to stand up its own video publishing platform. Forcing YouTube to host content it doesn't wish to is "un-American".
Is forcing Jeff Bezos to hire qualified black people and women un-American?
> that would be the basis for the US to stand up its own video publishing platform.
Why not just nationalize YouTube? As a corporation, Google already enjoys special privileges enshrined in law. Let's not pretend to be private-property absolutists when the government already limits the liability of shareholders for the actions that bring them profit.
Black people and women are humans with equal rights. I believe this is basic humanist morality, but also they are a protected class before the law that can't be discriminated against in many cases. A false video is just a false video. Being a liar is not a protected class.
Why we don't nationalize YouTube is irrelevant to the conversation; the end result would be the same. The government would then be providing a self-publishing platform.
A vast majority of the people who are too stupid to realize these questionable YouTube videos are hoaxes (and who therefore need to be protected from them) went to public school.
A vast majority of the people who immediately recognize these YouTube videos are fake also went to public school. Something like ninety percent of America goes to public school; you can cite public-school attendees as a vast majority for almost any American activity and probably be right.
This is a turning point for YouTube. YouTube was a medium of expression. In that sense your question is like "Why should the paper company allow ideas to be written on their paper that they disagree with?"
Of course, YouTube also takes an editorial role. As both a platform and a publisher, we should allow them to stop recommending things that don't align with them politically.
By taking the more extreme step of taking down videos, they are abandoning their self-professed purpose of being a platform for digital expression.
There are several people in my life who I completely disagree with politically. At the same time, I'm grateful that I can hear their perspective. YouTube is no longer supporting this dynamic.
I'm already seeing folks signing up for social networks that have less censorship; I think alternative video hosting platforms may grow too.
This doesn't even get into the really early viral videos like the charcoal briquettes lit with liquid oxygen, or the massive viewership of the comet Shoemaker-Levy 9 impact way back in the day.
Probably because it's dangerous to democracy to silence people who question the integrity of elections. It sends the impression that there is something to hide, and people who suspect the election was stolen will take this as confirmation of that suspicion. Since they won't be allowed to use the platform to raise their suspicions, no one will be able to respond to them and allay those suspicions. Then, as time goes on, confirmation bias will lead them to become entrenched in this belief. Eventually this belief (that elections have been stolen) will lead to a perceived loss of legitimacy for the government, and consequently public servants will find it more difficult and dangerous to do their jobs.
> Probably because it's dangerous to democracy to silence people
YouTube is not silencing anyone. These people are free to espouse these views any way they please, just not on YouTube's private property. The New York Times is not a danger to democracy because they refuse to publish my article on why the earth is flat.
> YouTube is not silencing anyone. These people are free to espouse these views any way they please, just not on YouTube's private property.
It's dangerous to democracy for oligarchs to have this much control over the discourse.
Well, I could simply point to the entirety of Taibbi's article. Or even just the headline and sub-headline: "[It's] Un-American, Wrong, and Will Backfire. Silicon Valley couldn't have designed a better way to further radicalize Trump voters." Many of the points I might make, Taibbi did so in his article.
For the sake of novelty, I'll make a different point: I see one way in which YouTube may have promoted democracy: by making their odiousness more clear (and in a public, "everyone knows that everyone knows" way), they may have encouraged quicker production or adoption of alternative platforms. This seems unlikely, because websites like them have done lots of crappy stuff before, without usually causing much effect; but it is possible that this may be seen as enough of a "They've declared war on the entire right wing" to motivate a significant migration. Two partisan platforms is better than one, for democracy and just for competition. (Better yet would be either a platform that has made some kind of enforceable and very-painful-to-break commitment to neutrality, or some kind of decentralized system that no single company or party can decide to censor. We may get there eventually.)
"Corporationism is above socialism and above liberalism. A new synthesis is created. It is a symptomatic fact that the decadence of capitalism coincides with the decadence of socialism. ... Corporative solutions can be applied anywhere"
> This is all the result of confusing 1st amendment rights with the right to access the audience that gathers at a particular URL.
No one is confused. The article mentions neither the First Amendment nor the Constitution. It only calls the matter "un-American." This phrase is clearly a value statement, not some neutral and dispassionate assessment.
The line you draw to separate the government from the private sector is quite useful in other contexts, but does not affect the matter before us today. After all, in a democracy (which the Constitution aspires to be) the ideal of free speech itself must proceed from shared cultural values; if those values change enough, we might also expect the First Amendment to be repealed anyway.
Of course YouTube is not required to represent these purportedly American values; just as surely, it may be criticized as "un-American" by those promulgating such values. A much more interesting argument would assess the extent to which the values in question are, in fact, American; whether YouTube's choices are more representative (surely there's a case for this) or simply more desirable (I note many here agree with them); and whether American (or "American") values can coexist with these YouTube values.
The words "free speech" and "1st amendment" are never used in the article. "Free press" is used only in passing. As far as I can tell, your mention of the 1st amendment is a complete non sequitur, and you are the only one doing the confusing.
The question is whether what YouTube is doing is good for society, and Matt Taibbi says no. He's criticizing YouTube for doing something he thinks is bad, since public criticism is sometimes effective at altering corporate actions. It's pretty annoying that so many people try to sidestep this topic by saying that the first amendment does not prevent private censorship. Pretty much everyone agrees with that.
And denigrating people you disagree with is unnecessarily inflammatory.
Is it legal in the US to encourage people to jump in front of a train?
If the answer is no, then telling people to drink rat poison is illegal. The legal system has the power to punish the uploader, and YouTube can remove it just as it does with copyright infringements.
If the answer is yes, then I would suggest changing the law. Pushing people to drink rat poison sounds awfully close to harassment, which is just the kind of speech the 1st amendment does not protect.
Should we censor them? Or perhaps just put up a warning sign that not everything on the Internet - like the physical world - is worth believing? Where do the guardrails end and censorship begin?
If you have a law and a politician seems to be breaking it, then the usual answer is to have a prosecutor look into it.
Politicians also hold what is called a position of trust, so in addition to legal enforcement there could also be a vote at that political level to remove said politician.
In addition, I would imagine that encouraging people to drink water poisoned with lead can put that politician in legal liability if anyone got sick.
I will add that the US has an odd legal system which potentially gives the president permission to ignore laws. The best way to fix that is to remove that exclusive protection, and if it's a constitutional problem, fix the constitution.
“A Massachusetts woman [Michelle Carter] who sent her suicidal boyfriend a barrage of text messages urging him to kill himself was jailed Monday on an involuntary manslaughter conviction nearly five years after he died in a truck filled with toxic gas.”
She appealed to the Massachusetts Supreme Court, and the conviction was upheld (unanimously).
> What Taibbi is asking for is that the guy who tells you that drinking rat poison is good for you should be allowed an audience
He should; it's the basis of democracy. Sure, rat poison is a bad example, but things are much less clear-cut if we speak about other topics, including COVID-19. So putting up warnings about these problems (potentially more than just a subtle note at the side) is reasonable; removing them is much less so (as long as they don't, e.g., tell people to go out on the street and kill all politicians or similar nonsense).
I mean, you start with removing the 100% clear BS misinformation.
Then you remove misinformation which contains some truth (it's still misinformation).
And before you notice, unpleasant opinions are removed too (like, e.g., the claim that the CIA had similarities to a terrorist organization in the past: factually right but misleading).
EDIT: Oh, and if you tell people to drink rat poison and they do, you should be held responsible for tricking people into killing themselves. Still, that's not the same as censoring the content.
------------
An example from Germany is a very mixed movement called Querdenker. It clearly contains right-wing radicals, Nazis, COVID deniers, QAnon believers, etc. BUT a not-small part of the core movement are none of these: people who believe COVID is real and believe it's bad, but also believe that the decisions made by the German government do more harm than good. Yet they get frequently denounced and lumped in with all the problematic groups above, which has all kinds of problematic consequences (including making these people more susceptible to manipulation by those problematic groups). Removing (some of) these people's information sources, instead of adding information that enlightens them about what is wrong with those sources, only makes things worse. Furthermore, you can't censor them away: that will just push them to other platforms, and if you continue to censor again and again you will end up with a censorship system no less powerful than China's, which I believe no one would want in a democracy. So I believe we will have to learn how to handle such information _without_ censoring what we believe is wrong or misleading.
Do you support the election that happened on November 3rd, 2020? Then you must support its results, whether you like them or not.
THAT is the basis of Democracy: the peaceful transfer of power based on the votes. If you do not respect the votes, then Democracy falls apart. You literally cannot have a Democracy without votes (while we had a Democracy through WW2 despite the "Office of Censorship").
We have a very, very large group of people who are now using free speech to destroy our trust in the election. We are now left with a decision: Free Speech vs Election.
My gut says that elections are more important than free speech. Historically, we have had times without any free speech whatsoever (WWII / Office of Censorship). It's a luxury we can do without in times of crisis.
We cannot afford to lose faith in the election process. Period.
And yet the solution to a negative can still itself be the cause of equal or greater negatives. Hence why I was asking if you believe it is a NET gain. Because the Government cracking down on speech, even obvious lies, will erode some people's trust in the system as well.
> And yet the solution to a negative can still itself be the cause of equal or greater negatives.
Cracking down on anti-election rhetoric is an obvious net positive, and needs no further explanation.
> Because the Government cracking down on speech, even obvious lies, will erode some people's trust in the system as well.
Too late for that. Tens of millions of Americans have lost faith in the 2020 election. The only concern from my perspective is to stop the obvious bleeding: we must stop the false anti-election rhetoric before it poisons the minds of even more.
The lies are winning right now. Be it masks, election fraud, mail-in ballots, or whatever. My sister's father-in-law believes that COVID is a hoax, I have coworkers who don't think masks help, and my mom thinks Obama is a Muslim born in Kenya. I've seen enough lies, and I've lost faith that these people can have their vision cleared with the truth. My sister thinks the vaccine may hurt more than it may help and is avoiding it.
It's clear that misinformation is running amok and nothing is stopping it. The naive "debate with them" approach goes literally nowhere. Have you ever tried?
> Cracking down on anti-election rhetoric is an obvious net positive, and needs no further explanation.
Yes, it does. Cracking down on speech would further enrage all those people, while also pissing off a great deal more. I think riots are a more likely outcome of your solution than anything positive.
And you are just happy to assert "riots" and leave the discussion off at that?
I mean, I can say "boogieman" too and leave the discussion, pretending to have made a point. It will encourage socialism, or it will destroy freedom, or it will encourage censorship! (Etc. Etc). Come on, just stating a boogieman doesn't help anyone's point of view.
No offense to you personally, but a one sentence assertion is not an argument I'll take seriously. Your contribution to this discussion so far has been less than mine. How about you elaborate your points a bit more?
------
But whatever, I'll mirror you, so you see how stupid this gets if you just do one sentence assertions of boogiemen.
Your point of view will destroy democracy.
Ball is in your court now. Figure out a way to elevate the discussion. I'm not doing the heavy lifting unless I see some work from you as well.
"Oh and if you tell people to drink rat poison and they do you should be held responsible for tricking people into killing themselves,"
So in your mind Google should assist people in tricking others into drinking rat poison by boosting the signal of their videos by hosting them on one of the world's most popular web sites, essentially making Google employees an accessory to murder, because the "basis of democracy" is assisting bad actors in tricking people into doing things like drinking rat poison.
That's certainly an interesting take.
In my view, if you host someone's web site where they trick people into drinking rat poison, that makes you morally culpable. If you have human decency, you try to avoid doing things like that.
>And before you notice unpleasant opinions are removed to (like e.g. the CIA having had similarities to a terrorist organization in the past, factual right but misguiding).
This is a slippery slope, and I'm not sure that it's justified theoretically. It's not as though this is precedent-setting - all platforms have excluded swathes of content for a very long time.
> What Taibbi is asking for is that the guy who tells you that drinking rat poison is good for you
No. That is not what Taibbi is saying. He articulated his point extremely well, he doesn't need your help with convoluted interpretations. You twisted his words sideways and upside down, then built irrelevant conclusions on top of your own nonsense, and diverted the discussion from the PROBLEM the original article was written about to your own misguided statement which bears no resemblance to Matt's argument.
> HN is no longer on board critical thinking
Indeed, otherwise your comment wouldn't be on top.
I must say though that there is a difference between putting a disclaimer next to a video to provide facts/context vs outright banning it from being posted.
Yes, it's not directly an issue of the first amendment, but it's still a topic worth discussing, since sites like YouTube, Twitter, and Facebook constitute a lot of where discourse takes place these days.
I'm guessing there's a lot more deliberate troll farming hitting HN, and threads like this (and how far up this incoherent rant of a post went on the list) lend credence to that.
In case you didn't know, warfarin, a widely prescribed blood-thinning medication, is also used as a rat poison.
A guy telling people they should drink genuinely harmful poison is of course bad, but by analogy I would say many of us are concerned that YouTube's current trajectory would result in banning important information about the health benefits of Warfarin in their crude attempt to stop the people from taking the bad sorts of rat poison.
Not a single word in the article hints towards the 1st amendment. I suggest reading the article before acting all high-and-mighty about your "common sense."
People constantly conflate "freedom of speech" with the protection granted by the first amendment.
That goes right up to and including Randall Munroe, who said, "The right to free speech means the government can't arrest you for what you say."
That's not freedom of speech, that's the first amendment protections of freedom of speech... and it doesn't even adequately describe the protections. There are other things the government isn't allowed to do besides arrest you. They can't force you to say something, either. They can't prohibit certain opinions from being expressed in government owned or regulated channels.
And freedom of speech can be infringed on or limited by other entities than the government.
A better description would be "principle that supports the freedom of an individual or a community to articulate their opinions and ideas without fear of retaliation, censorship, or sanction." This means that if you're forced into silence by threat of retaliation from, say, a religious sect or political group in your area, it's still infringing on your freedom of speech even if they aren't part of the government.
Where is the line though? If I disagree with what you say am I allowed to call you an idiot? Am I allowed to no longer associate with you? Those could both be considered retaliation or sanction.
And am I required to publish your words even though I disagree with them? That would seem to infringe my freedom of speech.
You HAVE to make those determinations on a case-by-case basis.
Like all freedoms, the freedom of speech is slippery, difficult to define, and has fuzzy boundaries. It can never be absolute, because that would immediately raise contradictions. Where does your freedom end and mine begin?
In this particular case, how big does a company need to be before we start forcing it to host content that it doesn't want to host? When is it ok to force someone to provide a universal platform... to tie their hands so they can't moderate as they see fit?
Do I have a right to stand on your stage and shout whatever I want at your audience? Do you have a right to buy every stage and then exclude and censor people like me?
I'm struggling to understand the "1st amendment doesn't apply to private companies" argument. Are you saying that Google, under its own volition, independently of the Democratic Party, decided it was its ethical or patriotic duty to take this action? This is the company that built the platform that amplifies crazy ideas (many crazier than this) to hundreds of millions of people for profit. It's also well known that there's a revolving door between Google and Democratic administrations and campaigns. Or are you acknowledging that Google and the DNC have a symbiotic relationship, and that this is a valid loophole the government can use to censor people?
You bring up the Democratic Party as if everyone who disagrees with you must be part of the exact opposition, and you come across as a conspiracy theorist because of it. It really shouldn't be surprising that, of two possible conclusions, people could land on opposite sides without being brainwashed. It's almost as if the binomial distribution exists for something like this.
You can't just say "It's well known" and then throw out some new conspiracy theory that I've never seen before.
You're not even to the "most people are familiar with your theory" part yet, much less the "I can act like everyone else thinks this is true, but won't admit it" part. Build up your brand a little first.
> Looking at the comments here I have to conclude that HN is no longer on board critical thinking much less common sense.
The book burners are running the show. And the complaints you see on hackernews are just people with no power or influence complaining about the censorship you support. Seems like a decision was already made by people much more powerful than us, and this is the path we are on. Free speech and freedom of the press are over. America is done. Nothing special about this place anymore.
I think this opinion is short sighted, because it assumes that we can trust Youtube (or other governing body) to rightfully and honestly decide which content is worth deplatforming. Isn't this idea central to the concept of freedom of speech?
I think the conversation is just a little more sophisticated than you imagine it to be.
And you can be generally in favor of free speech (!= the First Amendment) without accepting its more extreme interpretations. I mean, I'm guessing that you personally are too if you are like most people. For example, most of us would think that YT shouldn't censor either of two people saying (respectively) "Trump was elected in 2016 as a result of Russian disinformation" and "Trump was not elected in 2016 as a result of Russian disinformation."
The interesting question here is, "What speech should a privately-owned forum allow that its owners disagree with?"
Also, this guy is not the same author some might have known a few years ago; he himself has been relegated to the regressive left Glenn Greenwald corner of the internet. Take his narrative with the proper grain of salt.
Why do you consider them part of the "regressive left"? Is it because you think they've behaved unethically, or are they hopelessly biased, or do you just disagree with them, or some other reason?
First, I looked up the term "regressive left" and you might want to know your use of the term doesn't match its general usage, which might make it confusing.
Secondly, I think you're doing yourself a disservice if you dismiss these voices and are critical of them but not of the mainstream pro-Democrat press. Matt Taibbi, for example, clearly dislikes Trump but also thinks the Democrats aren't responding to him well[1], which is an interesting perspective to understand if you are anti-Trump. If you are a liberal who fails to incorporate or understand even the criticisms of people who are close to being your allies, you may be less equipped to properly combat Trumpism.
Basically, it sounds to me like you are advocating ignoring these people because you disagree with them, which doesn't strike me as the best way to rigorously improve one's political perspective.
I remember the days on forums when people screaming "YOU'RE VIOLATING THE FIRST AMENDMENT!!!11" after the admin banned them were treated as the laughable jokes they were. Now there are supposedly serious political thinkers subscribing to this same idea: that the Constitution forces private parties to do business with you against their will. It's a serious lack of civic knowledge.
It's worth reading Barry Goldwater's opposition [0] to the Civil Rights Act of 1964, despite him claiming to be "unalterably opposed to discrimination or segregation on the basis of race, color, or creed, or on any other basis". His stance was exactly what you describe: that government did not have a right to force private parties to conduct business against their will.
There's a good-faith debate to be had about positive rights vs. negative rights; or the potential backlash from forcing individuals to do the right thing; or the usual right-libertarian arguments about the sanctity of property rights. But I'll bet dollars to donuts, that the vast majority of those cheering for tech media giants booting out those with verboten views, based on private property rights, would also be horrified at the idea of even questioning the CRA under the same logic.
You are aware of the difference between "what you are" and "what you say", are you not? The CRA prohibits discrimination on the basis of "what you are". So it's not really the same thing at all.
I'm not claiming it's a fully apt comparison; rather, that if one supports a principle of "it is entirely out of scope for government to force private businesses to transact against their will", with no other qualifiers, that would necessarily preclude the CRA as well.
It's a different stance to say "the government is allowed to force transactions where one party is unwilling, but only where the unwillingness is related to identity rather than actions". (Though even that distinction can blur: a religious person banned for sharing "my faith teaches that life begins at conception" could hardly be blamed for interpreting the act as being based on their protected-class religious identity, rather than their speech as such.)
Looking at your comment, I have to conclude that you think you are smarter than 95% of the population and that they need to be treated like children.
You aren't dumb enough to drink rat poison just because a video tells you to, and therefore don't need this protection, but others apparently are. No sir, the little people are too dumb, and need to be protected, guided, and fed a diet of information curated by people like you. The paternalism is as disgusting as it is arrogant.
Your kind of thinking is what created the War on Drugs. And you aren't even self-aware enough to realize it.
The First Amendment only bars the government from infringing on free speech. It doesn't say anything about private corporations. There are many practices by US companies concerning free speech that would be illegal in European countries.
Many European countries recognize freedom of speech as a basic right that everyone needs to respect: neither the government, companies, nor individuals can infringe on it.
However, these freedom of speech laws often exclude certain kinds of speech and governments have some power in defining these exclusions.
This distinction is what makes censorship such a big issue in the US. When censorship is dictated by an elected government, the people have some power over it. When censorship is dictated by a private company the size of Google, people have very little power over it.
The US constitution seems to be mainly about limiting the power of government, not about protecting people and society. As a result, companies have a lot more power and you get unaccountable decisions about what should be censored and what not.
Don't get me wrong, I think censorship is necessary, but you have to admit that there is a real danger when companies are in charge of it.
I have a few past comments on HN about how electronic voting was a recipe for civil unrest for the precise reasons that are playing out right now and that Taibbi gives examples of.
Google, Facebook and Apple are becoming less American companies and more a kind of post-national version of the British East India Company: they operate mainly as the arms of an empire, and instead of opium, they traffic in attention. Their interests are inseparable from those of the permanent state, and this year has been the moment when they have taken on that role in earnest.
The recent bans are tripling down on the wrong side of the issue, and what's most frustrating to me is that in over 100 years never have there been so many who were at once so wealthy and so wrong, and I can't see what the trade is.
I have a few past comments pointing out that whether electronic voting is seen by the mainstream as a valid reason to distrust American election results depends entirely on which party holds the presidency. The most vivid demonstration of this was Wired running two articles within months of each other, one before Trump won insisting that rigging an election that way was impossible and there were endless safeguards protecting against it, and one after he won claiming there were no safeguards and electronic voting machine hacks could easily rig an election, each backed by copious expert opinions, despite no new evidence appearing between the two. More subtly, remember that before Trump, the last time electronic voting security was a major mainstream issue, complete with Defcon events and so on, was when Bush won in 2004. It suddenly returned to prominence with Trump's 2016 victory, and now it looks like it's becoming a non-issue again.
The Youtube issue seems analogous in that it's ok as long as "our guys" prevail. My bias is that it's hard to treat claims without empirical predictive power as meaningful.
It's strange for me because I can see the rift between people as a tangible thing, where micro-classes have used the internet to become alien to one another, to where neighbours have developed completely different ontologies that are irreconcilably foreign. I think the problem is even those of us looking to bridge it keep looking for ways to repair it instead of forward to the ways in which it eventually heals.
Youtube is doing this because they think they're going to win. I'd just like to take the other side of that bet.
The author theorizes that this ban will inflame tensions, but I don't believe that's true. Look at Reddit's ban on toxic subreddits. They found that banning toxic subreddits reduced the toxicity in the website as a whole, and that "post-ban, hate speech by the same users was reduced by as much as 80-90 percent." [1]
Reddit is just a closed system of communities; YouTube is one such community in the larger system of social media. I think YouTube made the right calculation that this will in fact _reduce_ tension and slow the momentum of this destabilizing movement.
Sure, it reduced tensions on Reddit, but what about tensions everywhere else? The author doesn't care about how toxic Youtube is; he cares about how toxic the US is. Pushing these radicals onto radical websites only further radicalizes them. That should be an axiomatic fact, yet nobody seems to understand it.
> Pushing these radicals onto radical websites only further radicalizes them. That should be an axiomatic fact, yet nobody seems to understand it.
First off, that's not an axiomatic fact, that's the definition of an empirical question. Secondly, even if it was true, who cares? You can't enlighten every crazy person on the planet, it's a waste of time. The goal of effective policy is to isolate radical elements to a point where they can't target mainstream communities of people who hold malleable beliefs. That is effective prevention.
I also see absolutely no evidence that the thesis of further radicalisation is even true. Here in Germany we have taken fairly strong preventative measures to tackle hate speech and conspiracies online. It seems to have mostly worked; we have, I think, one of the least polarized populations right now, broadly speaking. Insofar as there are fringe elements in their dedicated spaces, because they are in dedicated spaces, they can be monitored.
edit: another important point I forgot to mention: a lot of radical activity today uses commercial platforms for revenue to fund their cause, nowhere more evident than in the US, where that sort of political activity seems to be a legitimate business. Depriving extremists of those revenue streams alone probably does significant damage to their cause.
You're not engaging with the article's main point, that this censorship and exclusion is not evenly applied, and that it is rather conspicuously applied along political lines. There are many non scientific, ideology driven ideas that have been bullying their way into every facet of communication these days. Labelling people who resist this trend as fringe radicals who should be segregated and monitored is frankly disgusting.
This is flat-earth conspiracy theory level falsehood. And it’s a falsehood about an extremely politically and socially important issue that if widely accepted could have massive repercussions. Is YouTube going to censor it?
This is hyperbole and AOC would be the first to acknowledge it. If "the world will literally end in 12 years" were to take root, then it would be a massive problem for society, not just YouTube.
“Proven false” clearly isn’t the standard. Nobody has proven the election fraud claims false. (Or the Hunter Biden e-mails story.) Like AOC’s misinformation about climate change, the conclusions are just extremely unlikely based on the facts we know. That’s been a high enough bar for censorship.
>and that it is rather conspicuously applied along political lines. There are many non scientific, ideology driven ideas that have been bullying their way into every facet of communication these days.
Yes, but not all of them are equally dangerous, so there actually is no reason to assume they ought to be treated evenly. There are ideologies that lead to people shooting up a restaurant or beheading non-believers in the streets, and then there are people who are just annoyingly woke, or, to take the example from the comment below, overly cautious about climate change.
There's a difference between something that is merely non-scientific or silly and something that gets people killed or threatens democratic society or the peaceful transition of power. That false equivalence seems prominent in US discourse when it comes to identifying and dealing with radicalism. The numbers on domestic terrorism in the US by political orientation make that very clear as well.
Isolating fringe radicals who explicitly aim to topple democratic systems is vastly preferable, in my opinion, to the actions of those fringe radicals to intimidate voters, threaten elected officials and poll workers with violence, and openly foment civil war that could bring untold misery to millions. It's not all that different from how the lectures of Anwar al-Awlaki were removed from YouTube back in 2017. Not every idea, no matter how pernicious, needs to be given room to flourish.
> The goal of effective policy is to isolate radical elements to a point where they can't target mainstream communities of people who hold malleable beliefs. That is effective prevention.
Let's say I think Google is a threat to individual rights. Are they within their rights to prevent people with "malleable beliefs" (how condescending) from hearing that message? Is that what you want? A corporate-controlled Internet and society? If you think they are free to go elsewhere, you must know that the natural direction of unfettered capitalism is for the winners to accumulate more and more into bigger conglomerates. That is an extremely dangerous game, courting a fascist society in the original meaning of the term.
As opposed to? Would government-controlled or government-moderated media be devoid of agenda? I also don't see how capitalism is to blame; we are already seeing new social media platforms emerge pandering to the other side of the partisan spectrum. At the end of the day, YouTube and their ilk have executive decision-making power over the content they surface. I don't believe they should be beholden to some 'higher' journalistic ideals around objectivity. As a consumer, if I'm not happy with their policies (extending well beyond content moderation), I'm obliged to go elsewhere. Whether this drives further polarisation or results in the creation of fringe or radicalised groups is actually beside the point.
Everybody understands that. But reach is taken from them, and that's an extremely important effect. It is far more important than the radicalisation, because if these people seek out new platforms to radicalize themselves, they would likely have done so in any case. The danger of radicalizing a bigger part of the population is what is at stake here.
3. Someone who is not radicalized getting radicalized
#1 is bound to happen, #2 has no chance of happening, so at this point, preventing #3 is the way to go. Keeping these already radicalized people on Youtube achieves nothing; on the other hand, it gives them a huge platform to spread misinformation. Newsmax and OANN don't have anywhere near the reach Youtube does, especially outside the existing bubble.
Honestly, if the alternative to bans is allowing the current trends of radicalization to continue, almost any outcome is preferable to what we've seen happen over the last 5-10 years.
Unprovable/unknowable and subjective analysis. Youtube banning people doesn't radicalize them; radicalizing content delivered to millions of people on Youtube radicalizes them.
A point so simple it feels silly to point it out, but here we are.
Most content is "delivered" in the form of suggested videos from within youtube itself and other social graphs that seem unwilling to concede that they might be part of the problem itself.
The rate of spread is a problem of YouTube's own devising based on clicks, views and likes.
Remove this gamification and you'll see less engagement, hence less scale of radicalization.
YouTube labeling people "radicals" is a deliberately polarising move designed to do little more than tar and feather people into a specific box. Just like segregation, just like Protestants, just like the German Jews... Etc. It's very unflattering.
Radicalization doesn't just happen; it has to make deep intuitive sense to that person based on their own prejudice. You can't convince a normal person to kill people by showing them a bunch of videos, but you can convince people not to believe Hanlon's razor, and thereby deepen their hatred through false attribution of malice where none exists.
Antivaxx and other conspiracy theories weren't anywhere near as big anywhere else on the internet either; you're trying to assign causation where there is none. Youtube didn't start banning antivaxx content until long after it was already massive and out of control.
"Overall, we rate Newsmax Right Biased and Questionable based on the promotion of conspiracy theories and pseudoscience as well as numerous failed fact checks. (M. Huitsing 10/8/2017) Updated (11/12/2020)"
I used to strongly believe this, but now I think we've moved beyond public discourse and into things that are not protected speech. The president and others are inciting violence against people doing their jobs, and those megaphoning those claims and calls to action deserve to be shut down and deplatformed before someone is killed.
Regardless of the morality, I suspect that it will prove to be quite efficacious. Banning this kind of nonsense from mainstream platforms will certainly not reduce the toxicity of the existing radicals, but it will curtail their ability to spread their message and recruit.
These movements did not materialise out of a vacuum. Their growth and replenishment of strength is largely contingent on permissive mainstream platforms on which to set up shop.
I think platforms like Youtube and Facebook have shown the dangers of democratised mass-broadcasting. There were dreams at the dawn of the internet that enhanced access to information would create a better informed citizenry, but rather it has just exacerbated our worst traits as a society.
I don't know what the solution is really. The current situation is untenable in America, but how does one unpick it without resorting to totalitarianism? I'm somewhat partial to ending anonymity on social media posts, so at least people must stand publicly by their positions and can't create sock puppets to inflate their message.
Pseudo anonymity is probably fine, but web-of-trust solutions to help quarantine armies of sock puppets and manipulators would be really useful. Can't imagine why Twitter/FB won't implement. (Ok. I can imagine)
Doesn’t Facebook already have a quasi-WOT? People on Facebook generally see the posts of their friends and those they follow, plus ads. That doesn’t stop fake news from rippling through the social graph.
I can't see any benefit to even pseudo-anonymous political discourse on open mainstream mass-broadcast social media in a free society. If your message has a reach on the order of hundreds of thousands of people, then you should be able to stand by what you say.
If it gets to the point where we have to fear the government retaliating against us, then that government would probably have already shut down or blocked the popular social networks.
The red scare didn't end with the advent of anonymity, but rather it was ended by the checks and balances within the government belatedly stepping up to the plate.
Joe McCarthy was censured by Congress and fell out of political favour. Furthermore, the Supreme Court handed down a number of decisions that greatly reduced the scope of the government's ability to penalise supporters of communism, such as Yates v. United States, Scales v. United States, and United States v. Robel. This is how such matters should be tackled in a free society, rather than by coming up with elaborate mechanisms to hide our views from the government.
> but rather it was ended by the checks and balances within the government belatedly stepping up to the plate.
Ok, and regardless of that, anonymity still helps people avoid consequences for their speech. Which is a good thing, given that the government has in the past done things like that.
> This is how such matters should be tackled in a free society
Well that is cold comfort to the people who suffered real harm, during the red scare. While other people were trying to solve that problem "the right way", lots of people still suffered harm in the mean time.
Things don't always get solved immediately. Sometimes, problems just exist. And regardless of whether things have been solved "the right way" or not, other solutions, such as anonymity, still help.
None of this stuff is exclusive. Feel free to try and stop government consequences some other way. But regardless of that, anonymity still helps some people avoid some consequences for their speech, which is good.
> elaborate mechanisms
None of this is elaborate. Lots of people have pretty effective ability to post speech anonymously.
Instead, it would be you who would have to engage in large elaborate schemes to prevent people from being anonymous.
The default, easy, non-elaborate solution is what we have now. Which is that there are many platforms where anonymity is easy for most people, most of the time. And that the government would have to take pretty extreme actions to remove that from people, like court orders that don't happen very much right now for most people.
> Ok, and regardless of that, anonymity still helps people avoid consequences for their speech. Which is a good thing, given that the government has in the past done things like that.
Perhaps, but there is now firm legal precedent that greatly limits their ability to do it again.
> None of this stuff is exclusive. Feel free to try and stop government consequences some other way. But regardless of that, anonymity still helps some people avoid some consequences for their speech, which is good.
Some people should not be able to avoid the consequences of their speech. For instance, people that harass, slander, and threaten others online. Additionally, people that use their speech to obtain money under false pretenses shouldn't be able to hide behind a veil of anonymity. Police officers that enforce the law by day and joke about how much they hate black people and jews while squirreled away behind their keyboard at night should have to own their opinions in the public square.
> Lots of people have pretty effective ability to post speech anonymously.
It's possible with enough tech savviness, but tactics like VPNs are more frail than most people believe.
> but there is now firm legal precedent that greatly limits their ability to do it again.
There are also the situations of highly targeted minorities that would now be unable to protect themselves, because you've made anonymity illegal. These people really do need anonymity to protect themselves.
And no amount of laws can protect someone from a hate mob attacking them, that doesn't care about the consequences. One of the only ways for many of these people to protect themselves, is to be anonymous.
> Some people should not be able to avoid the consequences of their speech.
I have given pretty good examples where people absolutely should be given immunity from consequences. Specifically people should be immune from consequences if they are engaging in government criticism.
The examples that I gave are great reasons as for why anonymity is important for people.
> It's possible with enough tech savviness
Actually, it is possible with basically no amount of tech savviness. All you have to do is click the create new account button, on twitter, and you have just been given a pretty large amount of anonymity.
In order to have your twitter account de-anonymized, the government has to take pretty extreme actions, such as court orders.
Even if a certain anonymous situation is not 100% foolproof, it is still important that it is available, as it can help most people, most of the time.
It turns out, that simply being able to create an anonymous twitter account, is pretty anonymous, most of the time, for most people.
Also, the original point that I was countering was this "pseudo-anonymous political discourse on open mainstream mass-broadcast social media".
So I am talking about specifically why anonymity, in the form of having the ability to do what people do now, which is create anonymous twitter accounts, and engage in political discourse.
> There are also the situations of highly targeted minorities that would now be unable to protect themselves because you've made anonymity illegal. These people really do need anonymity to protect themselves.
To reiterate, I was specifically talking about circumstances where people broadcast to mass audiences. If people of a certain minority want to congregate together in a private space to discuss things important to them as though they would in their own house, then they should have every right to do so in private.
> And no amount of laws can protect someone from a hate mob attacking them, that doesn't care about the consequences. One of the only ways for many of these people to protect themselves, is to be anonymous.
I don't know what world you live in, but I haven't seen a hate mob running down my street for a while.
> Actually, it is possible with basically no amount of tech savviness. All you have to do is click the create new account button, on twitter, and you have just been given a pretty large amount of anonymity.
> In order to have your twitter account de-anonymized, the government has to take pretty extreme actions, such as court orders
That won't offer you much protection at all from the government. So you seem to have a lot of trust in warrants being issued justly, but also a great fear that the government will turn against you.
It's not that hard to get a court order if the government has probable cause that you've committed a crime. There have been 55,000 warrants for US Facebook accounts since the start of 2020. This is not including national security requests, which Facebook is not legally allowed to disclose.
> Also, the original point that I was countering was this "pseudo-anonymous political discourse on open mainstream mass-broadcast social media".
> So I am talking about specifically why anonymity, in the form of having the ability to do what people do now, which is create anonymous twitter accounts, and engage in political discourse.
I'm glad we're back on the same page. So, would you say that the culture of anonymous political discourse on social media has resulted in constructive political discourse? Is there nothing you'd change about it?
> To reiterate, I was specifically talking about circumstances where people broadcast to mass audiences
Ok, and I think that it is important that highly targeted minorities are free to broadcast to mass audiences without having to worry about being targeted or harassed. Anyone who is fighting for civil rights could become the target of mass hate and harassment, for both themselves and their friends and family, if the hate mob decides to target them.
If these groups did not have these protections, then only the privileged would be able to express themselves to a wide audience. Other people, who are more frequently targeted, would no longer have a voice.
> they should have every right to do so in private.
They should have the right to do that in public, as well, to a mass audience, while also being able to take actions to protect themselves from harassment, by being anonymous.
> but I haven't seen a hate mob running down my street for a while.
Women and minorities face a lot of harassment. It is not about a "physical" mob. There are more ways that mobs can target and harass people, than just some mob on the street engaging in physical violence.
I am not sure what to tell you, if you are not aware of all the threats and harassment that these people can get, from randos.
> So you seem to have a lot of trust in warrants being issued justly
Being forced to obtain a warrant is another protection. That is much better than if some random government official just had access to that information and could retaliate against you without anyone being able to find out about it.
> also a great fear that the government will turn against you.
It is not just about some totalitarian regime. It is about the in-between stuff: things like a no-name bureaucrat having your information and using their power to target you, without there being an easy way for them to get caught.
Warrants and paper trails make it more difficult for government officials to abuse the information that they would otherwise have access to if it were all available without a warrant.
> Is there nothing you'd change about it?
Yes, I would provide more ways for people to be even more anonymous, and provide stronger methods for people to protect themselves from retaliation, whether that be by random government employees, or random people on the internet.
More and more, I realize that I've been fed a warped view of the past. Here in the USA, history book authors and history instructors are almost exclusively on one side of the political spectrum. The people that they glorify and vilify are chosen with extreme partisan bias.
Joe McCarthy wasn't the terrible person he is made out to be. Information discovered later, long after his censure, proves that he was correct in many of the cases where he was thought wrong.
Today's lies and censorship will become the supposed "facts" that are taught to future generations. All sorts of falsehoods will be used to create graded test questions.
No, they showed the danger of popularizing and amplifying the most virulent messages, which usually are hatred and paranoia. If it were just a randomly uploaded youtube video that showed up on the front page, it would be much less likely to be picked up.
Instead, we have the possibility of the recommendation engine recommending videos en masse.
Doxxing and harassment are already huge problems on the internet; a zero-anonymity policy would be terrible for marginalized groups because it would make spokespeople and activists extremely vulnerable. Imagine that whenever you said anything on the internet, every Trump supporter with a phonebook could find out where you lived. Then imagine you're (eg) trans.
Beside, plenty of far-right reactionaries are entirely unafraid of attaching their names to their beliefs. Just look at the stuff people say on Facebook. And consider that you don't need to publish your name in order to listen to a far-right ideologue -- only to be one. And it doesn't take many ideologues to reach a large audience.
I don't know that there is a solution, outside of "encourage platforms to effectively deplatform bad actors." This has been successful in the past in dealing with movements which recruit heavily through social media, like ISIS — the difference is that social media companies have feared partisan backlash from conservatives if they apply these strategies against the far right.
This risks suppressing free speech, but I think this entire issue is a result of us wanting to treat social media companies as perfectly neutral universal forums. The notion of unbiased platforms needs to be abandoned, and social media monoliths need to be split into distributed, federated platforms, like email or Mastodon. This way, platforms can moderate to their heart's content: excessive moderation of any particular instance is less of an issue because users can always move to a different, better instance, but people who were justly banned will be isolated from the general public in their own servers.
This model will have challenges, foreseen and unforeseen, but it's got to be better than the current media oligopoly. At the very least, it's progress.
To be clear, my comments were limited to mass-broadcast over social media. If people want to form small clubs within which to share lived experiences without fear of societal retribution, then I'm fine with that.
> make spokespeople and activists extremely vulnerable
As you've demonstrated, anonymity is an imperfect system of protecting such spokespeople, so they should have greater legal protections. Reducing the spectrum of internet anonymity would actually support this goal as it'd simplify applying existing legal protections such as restraining orders within the internet domain.
> Beside, plenty of far-right reactionaries are entirely unafraid of attaching their names to their beliefs. Just look at the stuff people say on Facebook.
Perhaps, but far-right ideologues must receive social promotion through 'likes', 'shares', and other such mechanisms in order to spread their message. Why is it necessary for these acts to be anonymous?
I sincerely don't understand how further decentralisation or federation could help resolve the problem. It just seems to me like you'd be making the moderation problem harder as you'd be shifting the burden from a few large companies to many small companies.
You're advocating to remove any ability to communicate anonymously online with the intent to shame reactionaries into not saying reactionary things. What you don't understand is that the reactionaries think they're right, and they're proud of their beliefs. They flaunt them; they print them on hats and wear them around. They have no shame.
Meanwhile, the people who are actually threatened by this are the people reactionaries are victimizing — closeted gay and trans people, for instance. They lose their ability to find supporting communities online, because as soon as they engage with a forum like that, such as a relevant subreddit, their participation is broadcast to everyone they know in real life. It's easy to say "they should have greater legal protection," but they already have legal protection. It's already illegal to murder trans people for being trans, yet trans people are nonetheless subject to a massive number of hate crimes. Isolating these people and making it impossible for them to reach out when they're in abusive home situations is a terrible policy.
And this is just one example of a group for whom the loss of online anonymity would be hugely damaging. You've massively underestimated the harm which being de-anonymized online can cause (that's why "doxxing" someone, or releasing their personal details online, is such a serious threat to their safety).
> I think platforms like Youtube and Facebook have shown the dangers of democratised mass-broadcasting.
Does anyone know of any good research on the early days of radio? I have a suspicion we had a similar problem before regulation due to the overcrowded airwaves.
If you know of any good books that cover this era let me know please.
Maybe because it's not an axiomatic fact. There have been niche radical internet communities for ages. It was only when the radical communities came into the mainstream ones (Reddit, Twitter, and Facebook) that they exploded.
I think the idea is more like this: you have two clubhouses, one that has 10 super radical people and one that has 190 people who are less radical. If you take the most radical 20 people from the 190 and force them to go across the street and spend all of their time with the super radical people, those 20 people probably will become somewhat more radical. Now, the 10 people possibly become slightly less radical as maybe do the remaining 170, which might balance things out a bit? But the people who are saying this makes people more radical are concentrating on the idea of taking people on the verge of being super radical and saying that they now only get to hang out with each other and the people who were thrown out of the mainstream long ago... it isn't that that group will "grow" past the people handed to it, but that the people given to it will become radicalized (which might be more dangerous than accepting everyone into the same system and attempting to have them all adjust a bit towards the mean).
> Sure, it reduced tensions on reddit, but what about tensions everywhere else? The author doesn't care about how toxic Youtube is, he cares how toxic the US is. Pushing these radicals onto radical websites only further radicalizes them. That should be an axiomatic fact, yet nobody seems to understand it.
Oh everyone understands it, but there's a trade-off you're not acknowledging:
1. Letting these people stay on twitter, YouTube, or reddit gives them an audience to recruit. The people in these toxic groups then may be less radical on average, but there will be more of them overall.
There also will still be highly radicalized members, and the sympathy of the less radical gives them support that could allow them to do more damage.
2. Quarantining them on Gab or Parler or whatever denies them the recruiting opportunities. The dead-enders who migrate will become more radicalized, but their numbers will be smaller, and perhaps decline over time, due to the lack of new blood.
Who are "them" and "these people" here? Republican voters? The Republican leadership?
Even if the plan is to marginalise Republicans by selectively suppressing their views on YouTube, it seems questionable whether that will produce the same result as banning "fatpeoplehate" and "coontown" from Reddit. Tensions are likely to increase.
> Who are "them" and "these people" here? Republican voters? The Republican leadership?
I had in mind white supremacists, QAnon, Boogaloo Bois, etc. I don't think the people making false and baseless claims about the election being stolen from Trump can easily be dealt with this way, because too many high-status people have joined that bandwagon.
I think it's an important consideration what happens to other platforms but I don't agree it's obvious that the net effect of the bans will be negative like you are implying. I think it's pretty disingenuous to say that the problem is that people just "don't understand" your argument.
I believe the US has gotten more toxic recently, in part because more and more liars and charlatans have technology-enabled bullhorns. They flood feeds with garbage content that takes sober-headed folks too long to wade through. It's not all political, but it is happening on both the left and the right.
In other words, no, what you claim doesn't seem axiomatic at all.
This doesn't really accord with any of the research and literature on the topic that I've seen, from the initial studies of forum bbs community lifecycles to more modern online marketing community analysis.
Do you have any quantitative analysis to cite for this or is it just your feeling?
The article on HN about the ban got something like 3 /thousand/ comments, very high for HN. I don't think these people are out of the mainstream, I think a big nerve was hit.
> Pushing these radicals onto radical websites only further radicalizes them.
Alienation. Yes, this will create some self-feeding echo chamber elsewhere far away and out of sight. I am fine with that. The only people bothered by that are thought police who cry that somebody online might have a disgusting opinion.
I agree. This may be a business decision to keep the media (and regulators/lawmakers of certain political stripes) happy, and it may indeed lower tensions on reddit in the short term, but it may not result in lower tensions as a whole.
It's interesting. My question is getting a number of both downvotes and upvotes and for the moment is even. But no replies. So I'll go on.
In my view, 'zealotry' crosses over to 'radicalism' when the threshold of violence is crossed. We generally are concerned about radicals in religion or politics because we fear they will or have committed violence in the name of their cause.
Yet... there are far (far) more instances of violence on the record against Americans who support Trump than by them. Up to and including murder. In fact, there is so much political violence against them that congress created a list of violent political acts and entered the list into the congressional record[0].
Who is committing this violence and why?
Here is about 10 minutes of research:
- One of the murders, Lee Keltner, a hatter in Colorado:
Regarding the 7-page listing of more than 200 acts of political violence, including attempted murder, against conservative politicians: I searched for the same document produced during the Obama administration and could not find one. From that I infer that during his administration, neither conservative nor liberal politicians saw political violence as enough of a concern to list it and enter it into the permanent record of the United States Congress.
Political and religious violence and oppression was one of the primary motivators for the United States to seek independence from England. We are going backwards.
Any congressperson is free to enter something into the congressional record. That alone should hold no sway. As for this list, it includes tweets comparing Trump to Hitler as acts of "violence", and the FBI disagrees with your top-line conclusion that Trump's supporters are victims more often than assailants.
That only works if the people who are banned are fringe minority nobodies. We're talking about banning the President of the United States, who, although he lost, just won 5 million more votes than Obama did in 2008. This ends with further destruction of trust in elites, fracturing of the media ecosystem, and opportunistic competitors arising.
The idea that gatekeepers need to withhold information from people so they believe the correct things is insane and has no limiting principle. We’ve already tumbled well down the slippery slope. Just a couple of years ago, the censorship was just going to be about “flat earth”-type stuff. Now we’re censoring stuff that’s still the subject of active litigation. They’re probably correct about it, but the first application of a terrible idea is usually workable. The Ministry of Truth probably started out to keep misinformation from spreading about vaccination, or something innocuous like that.
>Just a couple of years ago, the censorship was just going to be about “flat earth” type stuff. Now we’re censoring stuff that’s still the subject of active litigation.
Is litigation supposed to be an indicator of legitimacy or something? Anyone can file a lawsuit for any reason. In addition, most (all?) of the lawsuits have been dismissed without trial, which suggests they were rubbish to begin with.
> We’re talking about banning the President of the United States, who although he lost just won 5 million more votes than Obama in 2008.
No, it's a ban on baseless claims of massive electoral fraud. Coverage of the President is still allowed.
I have absolutely no idea why you reference Obama in this.
Yes, there's still active litigation. But there is yet to be any notable evidence of anything approaching serious, let alone widespread, wholesale electoral fraud.
> We’re talking about banning the President of the United States
Well, yes, we're talking about the President of the United States spreading FUD about a national election, just because he lost. That's how far America has fallen.
> further destruction of trust in elites, fracturing the media ecosystem, and opportunistic competitors arising
All of this has already happened - keeping Trump and his supporters in "mainstream sites" did nothing to restore trust in elites, or mend the rift in media, so I'm not sure why it would suddenly work now that he's finally being shown the door.
> All of this has already happened - keeping Trump and his supporters in "mainstream sites" did nothing to restore trust in elites, or mend the rift in media, so I'm not sure why it would suddenly work now that he's finally being shown the door.
The media and elites just spent four years burning their credibility in an incinerator. (Which I think is a tremendously bad thing for America.) Imagine if, instead of spinning out a yarn about Trump being a Russian asset for two years and having to massively walk it back, they had kept their powder dry and responsibly reported on the bad things actually contained in the Mueller report. Do you think that maybe folks would be less willing to believe the outrageous things Trump is saying now?
The writing is on the wall: eventually there will be a bifurcation in Big Tech, and Twitter and YouTube will become walled gardens for the left of center only, largely irrelevant to conservatives, with the likes of Parler and BitChute taking in the exodus. It will turn into a battle for centrists' eyeballs, to bring them to one side's platform or have them straddle both.
The analogy of Reddit banning toxic subs only holds if everyone is ACTUALLY on reddit; most simply left. There is already talk among conservatives of alternative Reddits and Facebooks, alternative Netflix/Hollywood companies, even an alternative Fox News. Many will say "but those will be subject to the same problems as early Twitter and Reddit: how to excise the toxic elements and keep the real discussion." I would say that can EASILY be done without going down the censorship rabbit hole by simply knowing where to stop. Get rid of the illegal stuff, the spam and pornography, and leave the rest regardless of how distasteful it is. It's an Overton Window problem; you just need slightly looser boundary conditions on discourse. Stop pretending to be platforms and admit you are, to an extent, publishers. Give users the ability to filter what they don't want to see and pledge to never bias your algorithms. The cries of the left will be largely irrelevant; the response will simply be "go back to Twitter if you don't like it". Once the public loses faith and trust in your company or institution because you sacrificed credibility in the name of censoring one side's information, there won't be any coming back from that loss of face. I'm not saying this is a good or healthy thing for public discourse, but I do think it will come to pass.
Participants in controversial discourse have long said that the system wants to silence them -- banning such discourse actually makes that true.
In the past, they said they were censored and there was a conspiracy against them, and we could all say, no there isn't, you are active on major online platforms and nobody is suppressing you. Now we can't say that anymore. Not only has the suppression they falsely alleged actually started to happen, companies are even proud to publicly announce that they are part of it.
This is a total backfire because it makes it even harder to change people's minds by confirming what conspiracy theorists suspected all along. Online platforms have fallen into the trap of making one of the main assertions of online far right nonsense actually become true. It is a massive own goal.
>This is a total backfire because it makes it even harder to change people's minds by confirming what conspiracy theorists suspected all along.
The question shouldn't be "will doing x help the antagonists", it's "will doing x result in a worse outcome for the antagonists". If they gain from either choice, we shouldn't be paralyzed in our response.
I’m married to someone who is caught up in this right-wing cult (it is a cult) and every time a video is removed, it simply convinces her even more that it’s true. If it wasn’t true, why would they hide it? Why do they not want you to see it? BECAUSE IT MUST BE TRUE.
This is just the information equivalent of Ruby Ridge - a self-fulfilling conspiracy theory.
TL;DR:
> Look at Reddit's ban on toxic subreddits. They found that banning toxic subreddits reduced the toxicity in the website as a whole, and that "post-ban, hate speech by the same users was reduced by as much as 80-90 percent."
No, they didn't
----
> They found that banning toxic subreddits reduced the toxicity in the website as a whole, and that "post-ban, hate speech by the same users was reduced by as much as 80-90 percent."
This only holds for the creative (and frankly, dishonest) definition of hate speech that the study[0] used.
> First, we automatically extract terms which are unique to the two subreddits that were banned due to hate speech and harassment. The resulting term list includes a number of words that indicate hate speech, as well as some other terms that appear to be specific to the Reddit context. We then qualitatively filter these lists, obtaining a high precision hate lexicon. These lexicons are publicly available to the community as a resource.
Not only did they start from a corpus of _words unique to the banned subreddits_, they "qualitatively" filtered it down, with no description of the procedure used. Anyone who's spent any time on smaller subs knows that they, like any online community, develop their own lexicons, up to and including jargon[1]. Everyone who's ever gawked at hate subs like /r/coontown or /r/shitredditsays knows that this tendency is on overdrive there. A reduction in the vocabulary _unique_ to the banned subreddit means that a ban refugee from /r/fatpeoplehate could be just as active on /r/HamplanetHateMail (a non-banned sub mentioned in the study) but would be counted as vastly reducing her hate speech, as long as she switched from /r/fatpeoplehate's "landwhale" to /r/hamplanethatemail's preferred "lard-ass" (I made these examples up, as I've had little exposure to the fat-activist/fat-hate corners of the Internet in particular).
What this study actually shows is: "users from banned subreddits don't go to other subreddits and communicate, under the same username, in the banned subreddit's hyper-specific jargon". Which is to say, "when a community is broken up and dispersed among other communities, the _specific_ inside jargon of that community becomes a lot less common, with no reference to whether the substance of the comments has changed".
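To make the objection concrete, here's a toy sketch of the unique-term lexicon approach as the quoted passage describes it (the data and slur words are entirely hypothetical, in the spirit of the made-up examples above, and the filtering step is omitted):

```python
# Hypothetical sketch of a unique-term "hate lexicon": terms that appear in the
# banned community's posts but nowhere in a baseline corpus. A user who migrates
# and swaps community jargon scores zero against it, regardless of sentiment.

def unique_lexicon(banned_posts, baseline_posts):
    """Terms appearing in the banned community's posts but not in the baseline."""
    banned_terms = {w for post in banned_posts for w in post.lower().split()}
    baseline_terms = {w for post in baseline_posts for w in post.lower().split()}
    return banned_terms - baseline_terms

def lexicon_hits(posts, lexicon):
    """Count words across posts that appear in the lexicon."""
    return sum(1 for post in posts for w in post.lower().split() if w in lexicon)

banned = ["landwhale at the gym", "another landwhale rant"]
baseline = ["i saw a cat at the gym", "another rant about traffic"]
lexicon = unique_lexicon(banned, baseline)  # {'landwhale'}

# Same user, same sentiment, but using the new community's synonym:
post_ban = ["lard-ass at the gym"]
print(lexicon_hits(banned, lexicon))    # 2
print(lexicon_hits(post_ban, lexicon))  # 0
```

The post-ban comment carries identical hostility, yet the metric records an apparent 100% reduction in "hate speech" simply because the community-specific term changed.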
I mean, duh.
This would perhaps be excusable in 2015 or something, but one should really know better by now than to uncritically cite popular coverage of sociology studies, particularly social-justice-aligned ones, without at least skimming the paper. Studies on hate speech are particularly easy to check, given that (as this study puts it) "there is no universally accepted definition of the phrase"; at a minimum, seeing what they actually define as hate speech is core to understanding the findings. In this case, the authors start with a standard definition from the European Court of Human Rights and torture out a metric that's convoluted, qualitative (in their own words!), and full of holes that the authors don't even pretend not to fill with their own biases. (Seriously, I recommend reading Section 2.2.)
Incels and other such creeps aren't popping up in other places or real life, though. Spend a week in a rural place near you and you'll find that people there legitimately think that "big tech" (who they know is insanely powerful and believe is biased against folks like them) attempts to a) censor people and b) censor claims about election fraud, specifically with regard to "their president" winning. Think about how that might come off if you were a Trump supporter. You can disagree with that viewpoint, but you have to admit you can see what the many people who totally buy it would think about those actions. Those are people who aren't going to stop talking with each other either, regardless of what we in "big tech" think.
Bingo. But to be fair Big Tech didn’t cause this problem and it can’t solve this problem. There has been a general breakdown of trust in elites and media institutions over the past 30 years. It’s just spilling over to tech now.
> The author theorizes that this ban will inflame tensions, but I don't believe that's true. Look at Reddit's ban on toxic subreddits. They found that banning toxic subreddits reduced the toxicity in the website as a whole, and that "post-ban, hate speech by the same users was reduced by as much as 80-90 percent." [1]
Your analogy only really works if you ban half the population from the US. Sure if you banned democrats or republicans from the country, then you could reduce tensions. But if you censor one side and keep them around, it'll only inflame tensions.
> I think YouTube made the right calculation that this will in fact _reduce_ tension and slow the momentum of this destabilizing movement.
Does it seem to you that tensions are reducing? If youtube did this after the 2016 election, do you think it would have reduced tensions?
I see this argument all the time. Sweeping opinions you don't like under the rug doesn't make them go away. People will still hold the same opinions they will just do so where you can't hear them. I prefer the racist I can see. This pushes them underground and radicalizes them.
You mentioned reddit. Do you remember what voat became when reddit closed down subs? That hate didn't go away it just moved and concentrated itself.
Sweeping opinions you don't like under the rug doesn't make them go away
I think we need to look at that carefully: especially, how much of YouTube hate consists of well-considered, articulated positions and how much is people just "swept into the propaganda." That's a complicated question, but I think video tends to be the worst medium for well-considered opinions and the one most likely to be a carrier of irrational-emotional, propagandistic messages. Its real-time stream of images and words makes it hard to consider the logic of an argument and easy to get swept up in the emotions of a claim, as well as to think that, say, an image of a scientist gives legitimacy to a claim, etc. I think this quality was demonstrated back in the days of TV as a mass medium.
Exactly, this just seems like a great way of accelerating polarization. Make sure that everyone who has at least one opposing view is driven away to a place where they will rapidly be exposed to every opposing view.
The idea is not to sweep them under the rug, but rather to stop the uptake of new people. A lot of people go to reddit for funny pictures and end up being exposed to some very radical ideas. Those same people might never be drawn to a more fringe site. Banning posters doesn't stop them from holding their views, but it seems like it may stop those views from spreading. It's a gamble, but I think it's worked in reddit's case.
You are trying to intentionally muddy the waters here. Vaccines, masks, and who won the election are not "opinions you don't like". People have actually died due to the anti-vax movement, which contributed to a 50% increase in measles deaths in 2019. These are hard facts, and these decisions are a matter of life or death.
No one is blocking opinions. Trump has failed to prove election fraud in over 50 cases so far, and the safe harbor deadline has now passed, which is why YouTube has put this rule in place. It is now confirmed that Biden has won the election, and anyone claiming otherwise is intentionally lying. If they had proof, they would've shown it in court, where there are real consequences, not on social media, where anything goes.
> If they had proof, they would've shown it in court, where there are real consequences, not on social media where anything goes.
This is a great example of why censorship is such a disaster.
The Texas case was dismissed today because Texas "has not demonstrated a judicially cognizable interest in the manner in which another State conducts its elections". So regardless of whether there was anything to complain about, Texas isn't allowed to complain about it. (This is basically what everybody predicted, but it means the Supreme Court will never even hear the case.)
The case in Pennsylvania was dismissed because they were complaining that Pennsylvania had changed its election procedures in violation of its own constitution (in ways alleged to facilitate fraud), which appears to be true, but the court dismissed the case for a procedural reason again, in this case for being brought too late.
Several other cases were dismissed for similar reasons, i.e. lack of standing or timing. The only one I'm aware of where they're even being allowed to make their case is the one in Wisconsin, which is still pending.
Does it concern you at all that you apparently didn't know that?
I did know that, thank you for blindly assuming I didn't.
First off, the reason why so many of these lawsuits are dismissed for procedural mistakes is because they are taken by awful lawyers, since real election lawyers don't want to touch these with a 10 foot pole.
Secondly, again, while they claim fraud on social media and on TV, the lawyers actually don't claim fraud in court, as that's much harder to prove, and affidavits are just hearsay when it comes to proving fraud. The reality is that so far, across 50+ lawsuits, they haven't managed to show any real proof of anything.
You explained the two SC ones but still fail to explain how the other 50 or more failed, or why none of them have been able to present any real evidence. Why is it that they all have lack of standing? Because you can't overturn an entire election based on an affidavit of someone claiming they saw trucks full of ballots, with zero other evidence. And rightly so.
As for the Texas case, what a coincidence that such a baseless suit happen to have been brought by the Texas AG, who is coincidentally under investigation by the FBI, and is seeking a pardon from the President.
I'm honestly curious what YOU think the reason is. Is it just pure bad luck that all 50+ lawsuits just failed in the most miserable way? Is the court system broken? If they have real solid evidence (and no, someone signing an affidavit is not it), then where is it? Why have we not seen any yet?
But go ahead and assume it's because of censorship if it makes you happy.
EDIT: I'd also add that many of these cases are dismissed early because even the defense can show that, even if everything the plaintiff claims is true, it still isn't enough. In most of these cases, Trump's team is trying to use irregularities with 1-2 ballots to throw out hundreds of thousands of ballots, and that's just not how it works. That's why those lawsuits are being dismissed without even being heard: even if they prove their point, they cannot overturn the election.
> I did know that, thank you for blindly assuming I didn't.
It seems odd then that you're asking them to present evidence in court when that is what they've been trying to do and haven't been able to as yet.
> First off, the reason why so many of these lawsuits are dismissed for procedural mistakes is because they are taken by awful lawyers, since real election lawyers don't want to touch these with a 10 foot pole.
Their lawyers claim their families have been threatened. I would believe that it could be hard to retain effective counsel in that environment.
Mistakes are also somewhat more likely than usual given the time constraints.
> Secondly, again, while they claim fraud on social media and on TV, the lawyers actually don't claim fraud in court
They weren't claiming fraud in the Pennsylvania case, because they had such a strong argument that the state had violated their own constitution. They were claiming fraud in some of the other cases.
> as that's much harder to prove, and affidavits are just hearsay when it comes to proving fraud.
Hearsay means claiming you heard someone else say something. Affidavits can contain hearsay but that doesn't automatically mean that everything in an affidavit is hearsay.
Also, there are hearsay exceptions, like statements against the interest of the speaker. If someone is admitting to backdating ballots or telling others to do it, which I expect is illegal, it's not immediately obvious why that wouldn't qualify.
> You explained the two SC ones but still fail to explain how the other 50 or more failed, or why none of them have been able to present any real evidence. Why is it that they all have lack of standing?
Most of the others weren't actually filed by the campaign. Which is one reason they may lack standing.
> Because you can't overturn an entire election based on an affidavit of someone claiming they saw trucks full of ballots, with zero other evidence.
But that's the thing. If you believed them (it was more than one person claiming to see that), you could. Maybe you don't, without more, but that's why there should be a court case. Where you then do discovery and subpoena the others who were there to see what they have to say about it etc. They got shut down before they had a chance to do that.
> I'm honestly curious what YOU think the reason is. Is it just pure bad luck that all 50+ lawsuits just failed in the most miserable way? Is the court system broken?
My impression is that the courts don't want to be involved in the controversy and are willing to take any excuse to get the case out of their courtroom. Can you even imagine what would happen if they actually proved their case?
> In most of these cases, Trump's team is trying to use irregularity with a 1-2 ballots to throw out hundreds of thousands of ballots, and that's just not how it works. That's why all those lawsuits are being dismissed without even being heard because even if they prove their point, they cannot overturn the election.
The argument there is that they have statistical evidence of fraud. Which is weak, because the election was so weird with coronavirus and mail-in ballots that you would practically expect statistical anomalies. But if the fraud the statistics say is there, is there, then you should be seeing duplicate ballots and the like. And so then they present evidence of duplicate ballots and the like, to bolster the case that the statistical evidence of fraud is actually fraud.
And there is also another issue with all of this. A lot of these cases are separately claiming that there were 10,000 ballots here, 4000 there, 6000 somewhere else. Well, they were dismissed, because none of those are enough to change the outcome... unless you add some of them together. Which is why it seems wrong to dismiss a case for not affecting enough ballots on its own, before you know the outcome for any of the others. But that seems to be what happened.
> If they have real solid evidence (and no, someone signing an affidavit is not it), then where is it? Why have we not seen any yet?
I don't know what they have. The media coverage of all of this has been nakedly partisan propaganda from both sides. I was kind of hoping they'd make it into court somewhere so I could actually see them present their case, and see the other side defend against it, instead of just having all these depositions and affidavits that only present one side of the story and which I don't actually have time to read because they're a zillion pages long.
I'm not intentionally muddying anything; he mentioned reddit. That was a ban of hate speech specifically, not anything to do with masks, elections, or anything you mentioned.
I agree with others about radicalization and slowing the spread of misinformation, but it's a fine line to walk, and throwing around reddit's bullshit like it will work for real-life scenarios isn't going to help anything.
Deplatforming legitimizes and empowers fringe groups and that's not something we need to be doing.
As for your rants about Trump etc., I agree; they just had no basis in my or the parent's arguments.
My bad, I was arguing about the Youtube ban. As for reddit, they kept T_D around, but subs like that regularly leak out and brigade other subs, which is why they were banned. Not for the information they share. Also, many of the comment sections devolved into doxxing and threats, which is also against global rules. Reddit generally doesn't give a shit about how true or false the opinions in a sub are, as long as they don't break global rules and leak into other subs.
So no, they were not banned for "sweeping opinions you don't agree with" either.
This in no way compares to what reddit is doing. There is nothing “toxic” or hateful about what they have already been applying this ban to. They have already been removing videos that are not partisan and not fake news. Videos that are merely intellectually curious or that examine various issues related to the election, even ones that ultimately do not conclude that there was fraud, are being removed. It’s fucking ridiculous.
Banning topics frequently cuts losses on trends that are dying anyway. Nobody wants to see ghost-town subreddits. It's good for the website's health to keep the place looking alive and clean.
Separating the effectiveness of bans from these trendy moments is difficult. Applying a ban to political discussions around 2015-2016 would likely have left the toxicity intact.
I see these reddit bans referenced every now and then as evidence that censorship works. I think it's important to emphasize that these bans are not evidence that shutting down communities changed participants' minds about anything. Further, the paper states that "The empirical work in this paper suggests that when narrowly applied to small, specific groups, banning deviant hate groups can work to reduce and contain the behavior." Trump supporters are not a small group of people, and they don't have a single 'meeting point' on the internet that can be banned to break them up.
Let's just be careful about comparing strategies used to target niche internet hate groups with tens of millions of Trump supporters.
I suspect there could be a frequency vs. intensity trade-off there. Fewer total radicalizations, but of the ones that remain, they may intensify on average. The "radicalization expected value" so to speak might go up even if the frequency goes down. (This obviously isn't to suggest banning is never a solution, but trade-offs do have to be considered.)
They're not questioning an election, they're making baseless accusations of fraud.
Conspiracy theories like Sharpiegate, hacked Dominion voting machines and Georgia election workers counting suitcases full of illegal ballots have been debunked. Trump's legal team has filed over 50 lawsuits to date, none of which have exposed any evidence of fraud or criminal conspiracy. The legitimate questioning is being done and the answer is, to an ever increasing degree of certainty, that Biden has won, and won legitimately.
QAnon nuts claiming the election is a coup d'état by Red China or a satanic cabal of Democratic pedophiles are not engaged in honest, good-faith attempts at validating the integrity of the electoral process.
I'm not here to deny elections, but I don't think much of it was debunked, including the claim that the machines went live on the net when Dominion said they didn't. Witnesses reported the packet traffic going to international servers.
With regards to "debunking" on sites like Politifact, you need to be careful. For instance, they say the Smartmatic technology was not used in states that are being challenged, when that's just not true. The tech is the same but runs under different banners and is all owned by Dominion. They also focus on whether Hugo Chavez's family owns Smartmatic as the thing to debunk. Obviously that's not the case, nor the main convincing argument being made. It's that the technology was used to game elections in Venezuela, and that adopted tech is used in elections around the world now (why else would it be acquired unless they wanted the tech?). There are no code audits because Dominion claims IP protection. Does that mean election rigging is happening? Not necessarily. But people have every right to ask that question, and you should ask it too, no matter what side you're on.
Partisan source? Maybe. But unfortunately the fact checkers have lost all credibility over the last few months by "debunking" things like, in the context of this article, Hunter's laptop.
The Federalist itself has spread a lot of fake news over the years, so you may want a better source to debunk the Republicans in the state who literally ran the election and who are saying it wasn't fraudulent.
Maybe you'll accept the credibility of the Republican lawmakers and election officials in Georgia who also consider those accusations of fraud to be baseless[0].
Nobody is talking about QAnon. This isn’t even about baseless accusations of fraud or claims that the election was stolen. This makes pointing out irregularities and sincere attempts to understand them bannable too. They already removed a video for this. This is a ban on even good faith questioning of anything to do with the election.
The issue with your argument is that you assume there’s some universal definition of “toxic” with which everyone should agree. That simply isn’t the case, especially with the content YouTube has decided to block.
They got rid of things like the_donald but if you dare look at comments sections you see things like cheering the death of people who spoke out against masks or lockdowns.
They got rid of the right-leaning toxicity.
They specialize in, profit from, and are ok with left-leaning toxicity.
That isn’t even the main argument. The main argument is that there is a concerted effort to let government take away the fundamental right of free speech.
That is morally bad. But besides that, it can have the effect of hiding actual truths. The author then considers thought experiments of what would have happened to the Watergate affair and other proven true scandals if there wasn’t free speech.
And of course they use a measure of toxic which means "is a republican".
Go on the front page of reddit and count how many posts there are from subs like /r/insanepeoplefacebook, /r/publicfreakout, /r/idiotsincars, /r/trashy, and the like; right now it's at 15/25. On most days I've done this it's close to 20/25. Today there are 5 posts about the TX appeal not being heard by the SC, which has pushed some of the usual garbage off the front page.
Reddit has become a more mean-spirited place the more subs have been banned, and every time the increase in vitriol is used as an excuse to run the next set of bans. People cheer this on until they get banned and then act surprised it happened.
I've heard the same thing about 2008, 2011, 2014, 2016, 2018 and 2019. It's not different this time either. Bad decisions from the top are the reason why the site is getting worse.
If people disagree with YouTube's normative choices they can (a) wage a boycott, (b) create a new platform that follows the desired norms, (c) attempt political action to regulate private companies.
I don't think (a) is very effective due to the practicality of network effects, and also video hosting is quite expensive. A distributed system may solve the money issues of (b), but the legal ramifications of child exploitation will chill any unmoderated distributed system. For (c), because of the First Amendment, I'm not sure what political regulation is possible (see Citizens United). The government probably can't regulate YouTube's free speech (hosting videos) even if that limits other people's speech (on their platform).
I think one solution would be a distributed but not private peer-to-peer video hosting system. Perhaps like a public IPFS? Speech would be protected against government intervention, but without privacy it could be easily policed. I don't think a peer-to-peer system will gain any traction if it is possible to host illicit content that the government could punish you over, so that's why I raise those points.
Well, the problem is that there are all these intertwined concepts so that for the most part we have:¹
* We want individuals to be able to express themselves so long as they are not explicitly deceiving people or hurting them
* We believe groups of individuals should have the freedoms individuals do
* We believe that individuals whose sole purpose is to act to provide a service to individuals expressing shouldn't be liable for the expression under some conditions. e.g. if I let you rent my sound equipment, I shouldn't be liable because you use it to call for violence against some dude and likewise for online platforms provided they take some reasonable measures
* We believe that picking and choosing what people can use your platform to express is expression in itself, but sometimes you are obligated to suppress some expression
So YouTube is one of these platforms. They should be allowed to pick and choose what people can use them to express because that's a freedom one person has, and therefore that they have as a group of people who individually have it. So the defence rests on it being expression.
However, the defence for them not being liable for expression on their platform is that they're "just the platform". By choosing not to enforce on other things and choosing to enforce on these things they're not "just the platform" - the expression they permit is their expression.
It's a bit hazy, but it feels to me like you forfeit some of your "just a platform" defence when you exercise your "it's my right to expression" defence. Morally, of course, not legally. IANAL and this isn't a court so who gives a damn about the law on its own.
¹ If you don't hold these beliefs, then you're not in the audience for the comment. Skip safely.
From a for-profit perspective, YT should have flagged the banned videos as UNTRUSTED! instead of removing them, and benefited both from supporting free speech and from outraging everyone “far” and vocal into epileptic ad seizures.
I for one am glad YouTube is doing this. I've heard so many times from friends and such: "I watched this doctor on youtube and he says..."
People don't fact check and yes, I agree people should be free to post what they want, but come on. The only people really fighting against this are the ones benefiting from the misinformation being spread. If they wanted to really argue why it's a bad thing they should start discrediting people instead of using it to their advantage.
The problem is that YouTube, Twitter, Facebook et al. want to have the protections of social networking platforms (e.g. not being accountable for the content), while at the same time being able to autonomously remove/censor content outside the existing legal system. You can’t have your cake and eat it too. (Well, apparently you can in this day and age, but you shouldn’t be able to in a fair society.)
Youtube is a private company and can therefore do what it wants with its property - the Youtube platform.
However, it may be a bad decision to act as a Ministry of Truth while a number of court cases are still ongoing.
Especially since the evidence presented in court proves electoral fraud - even if it is not entirely clear whether it is relevant enough to impact the outcome of the election.
YouTube is a technology company, and deciding what is and what isn't acceptable speech is not their prerogative - that's what constitutions and law courts are for.
Once they start down this censorship road there's no turning back, until either all their users leave and they're forced to shut up shop, or they end up as a de facto department of governments.
for folks with a strong opinion on this, how do you square your opinion on youtube censorship with your opinion on net neutrality? youtube censorship is okay but isp censorship is bad? i have multiple isps available to me without any change to the quality of my life, but no comparable alternative to youtube.
What does it mean to be a “comparable” alternative to YouTube?
There are dozens of online video platforms[0], plus other places like Patreon and OnlyFans, and decentralised systems like PeerTube. If none of them work for you, you can get a VPS and start hosting videos on your own web site.
The difference between YouTube censorship and ISP censorship is the amount of capital you need to circumvent it. If ISPs start filtering traffic, you need tens-to-hundreds of millions of dollars to build hardware infrastructure to run a competing ISP. If YouTube starts filtering videos, you need $5 to rent a server to host those videos instead.
Boy, technologists. We used to see a gap in the market and fill it with superior products. Now we see a gap in the market and complain that the market leader isn’t owning that space any more.
Perhaps people should refrain from using the term "un-American" for a while. That term didn't age well during the last 4 years. It doesn't mean anything right now.
There is one positive side: this will further erode trust in social media, with youtube becoming an official part of conspiracies. People will adapt; they'll find new venues.
"How do you respond to something so blatantly dishonest? A number of media contacts have told me that Lally shopped this hit piece around to multiple publications, but it was rejected for this very reason."
It's worth noting that the women who worked for the Exiled are still around, and when this came out they confirmed that the events depicted in the Exiled book were satirical.
This was an attempt to hijack the #metoo movement without anyone actually making an accusation.
Every online forum has moderation policies. The "law and order" brigade doesn't seem to like "order" when it is imposed on them. And if that order is to stop posting misinformation in the form of crazy conspiracy theories that encourage stochastic terrorism, or racist violence, etc then online forums are well within their rights to moderate that content.
Taibbi doesn't even talk about the biggest problem with these conspiracy videos: reinforcement bubbles. If you watch one, you will be shown more, and more and more, deep down the rabbit hole. I'm pretty sure if tech companies changed their algorithm to de-rank subsequent video suggestions of MOAR conspiracy theories, he'd again claim that they were putting their finger on the scale and that this would enrage Trump voters.
The thing is, social media is creating reinforcing but non-intersecting world view bubbles through AI suggestion mechanisms in their feeds, and unless that problem is fixed, you can't just allow an infinite amount of this shitty propaganda to be posted, because it will have damaging effects, and none of the viewers will see any counter-claims, and by the time they finally get served a counter-claim, they'll be so inside the bubble, they won't watch it. Instead, they'll go watch a reaction to it, that confirms to their existing opinion and soothes any triggering. There there now, your worldview is safe.
As if any of those are even close to competitors... there are 2 billion youtube users (80% of internet users).
Posting this reply here since the site says I'm posting too fast, at a lightspeed rate of 3 comments for the day:
I just don't agree with you that there is a "healthy marketplace"; by all metrics youtube has a monopoly on video streaming. For instance, Vimeo has 1.5 million subscribers... compared to 2,000 million for youtube. That's 0.075%.
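For what it's worth, that back-of-envelope arithmetic can be sketched as follows (using the figures quoted above, which I haven't independently verified):

```python
# Market-share arithmetic from the figures quoted in the comment above.
# Both user counts are the commenter's rough numbers, not verified data.
vimeo_subscribers = 1_500_000        # "1.5 million subscribers"
youtube_users = 2_000_000_000        # "2000 million" (2 billion) users

# Vimeo's size relative to YouTube, as a percentage
share_pct = vimeo_subscribers / youtube_users * 100

print(f"Vimeo is about {share_pct:.3f}% the size of YouTube")  # → 0.075%
```

So the "0.075%" figure follows directly from the two numbers, whatever one thinks of comparing paid subscribers to total users.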
You can upload videos to them and share them freely. Doing so draws views away from YouTube that they may have otherwise enjoyed, and may expose more users to the competition.
Demanding that YouTube be regulated to be forced to carry content when a healthy marketplace of competitors exists is hilariously un-American.
It really does seem out of line for much of the media to have dismissed or delayed accurate information about Hunter Biden prior to the election. It seems pretty clear this effort was designed to avoid upsetting the election.
When Trump got elected I had to listen to my friends freak out about a "Hitler" "Fascist" crazy-man. I silently rolled my eyes.
Then when he lost the election, and lied about it, and confused more than a third of the country in the process, I realized I was wrong.
I had wanted to believe (though never based on any evidence), that more information was always better, and that people are smarter than TV assumes. I wanted to believe that a pre-selected, pre-digested set of channels was ludicrous, and removing the gatekeepers would force a radically more efficient system. I wanted to believe that because I "knew" I personally didn't need such pabulum.
Perhaps we'll grow into it. Personally, I'd rather not risk it right now.
The main problem is that debunking conspiracy theories takes far more effort than spreading them. Not sure if YT has the right balance of rejecting things outright, but I do think there is a line. For example, most people wouldn't criticize YT for taking down holocaust denial theories so they should at least do that much.
However, they should probably avoid being the arbiter of truth except where truly necessary. I voted for Biden, but I'm not sure if we have hit the threshold quite yet where you should stop discourse on fraud. This should be done after the EC votes, but right now feels a little early.
Surely the conservatives will believe in the freedom of private enterprise.
YouTube can do whatever they want on their platform. For example, they can ban model train enthusiasts for no reason at all.
Love it or hate it, there's nothing un-American about a private company doing whatever they want inside their area of control. That's kind of the definition of America.
If you reject the idea that people are intelligent enough to think about politics for themselves then you reject the idea that they're intelligent enough to govern themselves.
Democracy and free speech are intrinsically linked.
After all even “democracy” is nuanced here in the states. Obvious examples from our past include the wholesale disenfranchisement of women until 100 years ago, Jim Crow era laws, poll taxes, and more. Did we have democracy then? Do we have democracy now?
Did we have free speech when broadcasters in the USA had to abide by the Fairness Doctrine? https://en.m.wikipedia.org/wiki/FCC_fairness_doctrine. Our system has its faults. But one can’t say that we ever had unfettered democracy or free speech, ever.
At what point is speech considered abuse? If you make the analogy between speech and network traffic, we intentionally censor “abusive” traffic to save our networks from collapse. Could one make the argument that entities with an interest in destabilizing our democracy should not be able to inject propaganda into our discourse because “free speech”? I think it’s a valid argument to have and a nuance that we have to contend with.
There are no absolutes. Trivializing this with a snappy trope of “democracy can’t survive without free speech” is disingenuous IMO.
I wouldn't call Singapore a democracy. It's an authoritarian one-party state that happens to hold elections. These elections are always won by the same party.
The democracy index rates Singapore at 6.02/10. About 0.03 away from being a "hybrid regime" and currently rated a "flawed democracy." It's ranked as worse than places such as Ukraine (currently in a civil war), Thailand (run by a military dictatorship), etc.
Having been stuck in the lockdown they imposed earlier this year, which was widely unpopular with the city's residents and which led to the biggest support for the opposition in the country's history, I would not at all call Singapore a democracy.
Interesting that Singapore also is the 4th least corrupt country per the Corruption Perceptions Index - one of the biggest discrepancies with the Democracy Index.
> Could one make the argument that entities with an interest in destabilizing our democracy should not be able to inject propaganda into our discourse because “free speech”?
One could also argue that censoring one side of politics deepens divisions and drives people to political extremes. I would rephrase "democracy can’t survive without free speech" as "democracy can’t survive without free political speech".
Firstly, the op like myself would have been against all those disasters of democracy as well.
Secondly, you're using things we deemed reprehensible enough to stop doing, outlaw, or repeal as an excuse to now do something in kind.
Op seems to have learned from those mistakes, why are you trying to use them to justify making new ones?
> Why is it that we have to ignore nuances in these arguments?
I very much agree. Why can't you understand that just because we've never had perfectly free speech, that maybe it's still worth fighting for most of the time. And that a few counter examples do not therefore equate to free speech being something unworthy of protecting.
Making analogies requires that the two ideas come from the same class of ideas. In what way is traffic and speech in the same class?
The reason this becomes an absolute is that, unlike with traffic, the decision about which speech is abusive is subjective and itself easy to abuse. When there's no way to talk about the abuse (a situation the abuser would create), it will continue until the abuser's ability to censor speech is removed. In the real world this ability is often removed as the result of some serious violence, and so most people don't like to create conditions that could lead to such a situation.
People sometimes say that speech in a democracy is a nicer alternative to violence.
You're new, so you might wonder why you're being downvoted. Hacker News has some guidelines, and while they're far from perfect, they do help maintain above-average online conversation. Please have a read (it's pretty short!): https://news.ycombinator.com/newsguidelines.html
Key point related to your comment:
> Be kind. Don't be snarky. Have curious conversation; don't cross-examine. Please don't fulminate. Please don't sneer, including at the rest of the community.
People are rational most of the time, and act based on the well of knowledge they have at the time. When you choose to promote certain content because its controversial nature drives engagement that benefits you, you are poisoning the well of knowledge for each individual. If you do so in a feedback loop, factoring in what they were interested in before, you accelerate your earnings as well as the poisoning.
Human brains are hackable, more so if you can control their inputs. Youtube, Facebook, Twitter, they all found the attack vector.
So democracy and truth are more related to each other than democracy and speech. We tend to think that speech is truth, but it is not, and people over time have voted themselves out into dictatorships over and over.
Sometimes I see the argument made that people are hackable by lies, therefore censorship is needed as an antidote. But I don't buy it, because 1) lying is equally available to all, but censorship is available only to the powerful 2) the powerful are as flawed as anyone else, except with power 3) historically the powerful who dabbled in censorship have also lied a lot, and censorship has helped them lie more easily. So the "censorship as antidote to lies" theory is just total bunk from every angle.
I was just covering one side of the equation. I don't like censorship, and as they say: you should legislate as if your opponent is the one that will apply the law. You can apply censorship to take care of one "good" reason, but eventually censorship will be applied for a "bad" reason.
What I wanted to focus on is that it is not true that free speech is the solution to all democratic problems, it's not an absolute good. We limit speech already, you can't advocate for the killing of other people, races, etc...
Also, about free speech, if you can say what you want but there are very large entities controlling who gets to hear what you say, then in a way censorship might be already happening. In this case non-arousing messages are being suppressed because they don't make money, so we all feel like everybody is either pro-this or against-this, and they're all really mad and or dangerous.
> We limit speech already, you can't advocate for the killing of other people, races, etc...
Actually, not that I support such behavior but (at least in the US) you can generally advocate for it. People usually don't (thankfully) so I don't have any examples immediately to hand, but my understanding is that the legal test is "imminent lawless action". (https://en.wikipedia.org/wiki/Brandenburg_v._Ohio)
> free speech is the solution to all democratic problems, it's not an absolute good
Rather than an absolute good, I would argue that it ought to be viewed as an absolute right. I would also argue that, whether used for good or ill, free political speech is a functional necessity of any democracy. (Necessary but not sufficient and all that.)
> in a way censorship might be already happening. In this case non-arousing messages are being suppressed
Agreed, but it's a separate issue and I've no idea what anyone is actually supposed to do about it.
Yes, every censor claims to be doing it for the common good. The Catholic Church had its Index of Prohibited Books supposedly to protect innocent souls from heresy, and likewise in various Communist countries past and present the censorship is there supposedly to stop the spread of counterrevolutionary propaganda. The modern Western world, in theory based on Enlightenment ideals, traditionally rejected such arguments, because they believed the answer to bad information was good information. While I am not a fan of "populist" movements that reject evidence, I frankly fear the efforts to suppress them more.
> People are rational most of the times, and act based on the well of knowledge they have at the time.
I'm of the opinion that people are not rational most of the time, but operate on habit and reason by use of heuristics. My evidence for this is that rational thought is costly and in psychology they teach this (that people aren't usually rational, merely motivated and economical with their attention).
> Human brains are hackable, more so if you can control their inputs. Youtube, Facebook, Twitter, they all found the attack vector.
This is a good reason why gatekeeping information is so dangerous.
> So democracy and truth are more related to each other than democracy and speech. We tend to think that speech is truth, but it is not, and people over time have voted themselves out into dictatorships over and over.
Unfortunately there is no test for truth. However if you control people's sources of information you can keep them from figuring out that they have been lied to.
How are we to ensure that the gatekeepers of information are honest and wise?
There has to be a good reason why psychological warfare is used, including by the United States in periods like the Vietnam War. The expectation is that statements written on paper or recited through loudspeakers or similar influence people's thoughts, which influences what actions they will take. Those actions are supposed to be beneficial to the forces scattering them from a helicopter or posting them on walls. They're trying to get you to do something regardless of what your preexisting thoughts are beforehand.
Now tear down the pieces of propaganda and burn them. Is this censorship?
I think human brains are sufficiently able to be hacked by the right information. With certain kinds of information, those people could end up causing material harm to someone who is completely uninvolved. That is the ultimate danger that I believe YouTube and the like are trying to avoid, and if true it would be an admittance of a sort that some kinds of information are dangerous to some kinds of people. The name "psychological warfare" implies that communication can be weaponized, in a sense. Certain people do not change their ways regardless of what you try to tell them.
I think the issue is centered not around people trying to have a reasoned debate, but around those who have been lost to debate more or less permanently. Previously, you couldn't scatter this kind of information about on the scale we're seeing today, nor recommend similar content using very effective algorithms. The more millions of people the content reaches, the more latent believers it will reach and end up converting, even if the conversion rate is a fraction of a percent. And many of these people are very vocal.
In a sense, the content takedown seems like an attempt to counteract the fact that the algorithms are too effective of a medium for misinformation. I don't think we would have seen this kind of scale with books or flyers.
> I think the issue is centered not around people trying to have a reasoned debate, but those who have been lost to debate more or less permanently. Previously, you couldn't scatter this kind of information about on the scale we're seeing today
you and I couldn't. Oligarchs and their governments could and did. The difference is that now their position is threatened by the democratization of information warfare.
We've asked you before not to do religious flamewar on HN. If you keep doing it we're going to have to ban you.
It also looks like you've been using HN primarily for ideological battle, which is exactly what the site is not for, and which destroys what it is for (thoughtful conversation on intellectually curious topics).
This is mostly true but the three religions you mention aren’t parallel in this respect. Apostasy in Islam is (typically) defined as someone who leaves Islam after being born into a Muslim family or previously accepting Islam: https://en.wikipedia.org/wiki/Apostasy_in_Islam. So it’s not a race or ethnicity, but in a theological and practical sense people are “born Muslim.” Put differently, separation of ethnic group, language, and religion are less well-defined in many parts of the world than it is among people of European descent.
Hah, people correcting racism on a technicality but not at all addressing the bigotry issue is a personal pet peeve of mine. Anti-intellectual, in my humble opinion.
> If you reject the idea that people are intelligent enough to think about politics for themselves then you reject the idea that they're intelligent enough to govern themselves.
I mean, the American founders already rejected a lot of ideas about the people governing themselves, because they knew from history that the people often can't check their worst impulses.
> Democracy and free speech are intrinsically linked.
Yeah, but what kinds of free speech work and what kinds don't? I used to be a free speech fundamentalist, but now I'm getting more skeptical of that position. Sort of like how radical Athenian-style direct democracy doesn't work, maybe radically giving everyone near-unchecked access to nationwide broadcasting doesn't work, either.
Now, I'm not advocating for some kind of universal censorship apparatus, but maybe it would be best if broadcast speech at least passed through some kind of editor (and I mean an editor, making editorial decisions, not a censor) to build more judgement into the system.
As software engineers, we understand the value of code reviews. That kind of thing shouldn't be anathema.
Given that the Alien and Sedition Acts took less than 10 years after the passing of the Bill of Rights, and were passed by many of the same people working on the original Constitution, I would dare say that it wasn't all that unanimous.
But, and perhaps more importantly, relatively few people at the time could vote - in 1789 it was about 6% of the population as a whole. At the federal level, there was basically no protection of the right to vote at all - there was that bit about states having to have a "republican form of government", but it wasn't codified, and the broad understanding was that property requirements, poll taxes, etc. were all compatible with it. So it was a very different environment overall.
(I'm not disagreeing with your basic premise - I support it, actually. But the US is not a good historical model for it. I don't think there's any country that really is, or ever was. Those of us who believe that democracy is dead without radical free speech have to figure out how to make it all work in a way that is, at least, tolerable to those who disagree with us.)
It's a common way to dismiss Founding stuff, but I'm a fan of the Frederick Douglass view that the Constitution is a "glorious liberty document."
So what if the writers didn't always practice it? If it's been unequally applied, there's no stronger case than holding people to account under the law of the land.
> This is literally the foundation of American govt structure namely the respect for unalienable rights of individuals to express themselves.
> It’s the first line in our enumerated rights.
> This isn’t “oh we should rethink things” this is a matter of natural rights and precedes anything we’ve constructed.
I never said anyone shouldn't have the right to express themselves. Do you think it's in the Constitution that you have the unalienable right to post any lie you want to YouTube? I don't remember that being in there.
IMHO, lies are toxic to free speech, and it's in everyone's interest (and moral obligation) to do their best to limit their spread.
IMHO you're not interested in free speech unless you're willing to defend speech you find repugnant.
You don't need to protect speech everyone finds unobjectionable.
You need to protect speech that is almost universally opposed.
Minorities need protection, majorities don't, and this is the basis of the American notion of freedom - insofar as the individual is the smallest minority.
> Do you think it's in the Constitution that you have the unalienable right post any lie you want to YouTube? I don't remember that being in there.
Clearly it isn't and I believe in freedom of association completely too. If a business wants to not serve anyone for any reason, I recognize that - clearly law has developed to qualify this.
This notion of freedom of association has no bearing on my ability to criticize Youtube for doing something I find wrong.
I can simultaneously believe that what Youtube is doing is wrong and they're completely allowed to do it.
> IMHO you're not interested in free speech unless you're willing to defend speech you find repugnant.
Defending speech I find repugnant is not the same thing as defending a nonexistent right to post lies on Youtube.
Host the lies on your own website. Create flyers and pass them out. Talk to people in public settings. Write to the newspaper (they might not print your lies, but you're free to try to get them to). Hold conferences. Do whatever you want. As long as the lies don't violate a few well-established exceptions: https://en.wikipedia.org/wiki/Freedom_of_speech_in_the_Unite...
But the right to have your lies hosted for free (and given a valuable, far-reaching audience) on a private platform, no matter the consequences to others or to the platform itself? It doesn't exist.
The whole point is the sanctimonious, heavy-handed nature of a platform that otherwise likes to act like they're concerned about the public good.
I recognize they have a right to do it, but I have a right to make noise about it and criticize them.
We got so many pop-ups, accounts suspended, etc. around the Hunter Biden stuff, and what do you know - it turns out there was substance to the story.
This never happened around the Russian collusion stuff, and I suspect most of the city lib elite that inhabits tech companies wouldn't find that sanctimonious.
> I recognize they have a right to do it, but I have a right to make noise about it and criticize them.
Nobody suggested otherwise, and your statement that "you're not interested in free speech unless you're willing to defend speech you find repugnant" is clearly a sidestep. You can support free speech without supporting the nonexistent (and unrelated) right to post lies on Youtube.
You're not really offering any solutions with these platitudes, because if we take your statement to its conclusion, heck, it seems we'll just have to live with a third of the population being lulled into supporting potentially violent and seditious people who want to subvert democracy itself. Oh, I guess we'll just have to let them, or allow the situation to devolve into a civil war.
> IMHO you're not interested in free speech unless you're willing to defend speech you find repugnant.
I am extremely confident that I can find examples of speech you wouldn't defend. How do you feel about false advertisement? Sharing of confidential military secrets?
Things aren't made illegal by the universe. Humans decide what the difference is. You simply think that false advertisement is bad but other forms of speech aren't and that is reflected in the law. We chose that some forms of speech aren't protected.
Since you aren't willing to defend that speech and fight to make it legal, are you not interested in free speech?
You realize how liable to abuse the idea of “truth” is in a tyrannical regime? Or even just fallible and misguided people?
Like, eugenicists in the 1920s would have a whole host of horrid “truths” they thought were scientifically supported at the time. Imagine the hurdles opposition would have in trying to voice their “factual truth.”
Of course it is abusable. The state has done all sorts of tremendously evil things when wielding its power. "Free speech" has never stopped the crushing of legitimate protest. This is not a surprise to leftist activists, who have been consistently violently attacked by the state for more than a century. I find these arguments so disingenuous when people throw them out while also doing little to nothing to curb existing state abuse. I've never seen right wing "free speech absolutists" at trans-rights protests or fighting against state interference in left-coded academic work.
General restrictions on state power don't work. Capitalism crushes people. When that state power turns on the majority, the majority complains that their voices are silenced and then they go right back to ignoring the continued problems of the minority.
Perhaps the solution is to approach each situation on a case by case process. Protest and agitate when the state harms people and celebrate when the state protects people. Make harm reduction the goal.
In software people often point to how focusing on metrics can make you fail to achieve your true goal. Similarly, we should focus on the thing that free speech is intending to achieve rather than free speech itself.
Freedoms enumerated in the bill of rights were chosen as a reflection of morality. You’re arguing utilitarianism about what’s a moral issue of the inherent right of people to express themselves.
Yet you oppose certain kinds of speech outright by virtue of the fact that the population has decided via democratic legislation that they should be crimes, presumably because of the harm that they cause.
I don't believe that the intention of the framers was that free speech was a good in-and-of itself but that making it difficult (but not impossible) to restrict speech has positive outcomes. At the very least, that is my read of things.
> You don't need to protect speech everyone finds unobjectionable.
I wasn't talking about objectionable opinions or perspectives. I was talking about lies and falsehoods, which are becoming alarmingly widespread. Even now those things don't get full First Amendment protection, which is why there can be laws against fraud and defamation, and the Supreme Court has even said "false statements of fact" do not enjoy protection [1].
You've recited many slogans, but it's important to consider what free speech is supposed to achieve, which, IMHO, is to allow progress towards truth. It's also important to keep other considerations in mind, and balance them. If you put too much focus on too small an area, you can easily find yourself becoming a Paperclip Maximizer.
Do note that US v Alvarez did hold that false statements are not a sufficient basis to restrict speech--there needs to be something else to justify restriction of speech.
Or, a better quote from the syllabus:
> Although the Court has frequently said or implied that false factual statements enjoy little First Amendment protection, see, e.g., Gertz v. Robert Welch, Inc., 418 U. S. 323, 340, those statements cannot be read to mean “no protection at all.”
That's true, but we're actually talking about YouTube here, not the government. So if false factual statements enjoy little First Amendment protection, then there's even less grounds to whine about a private party choosing not to disseminate them.
It's just a made up concept though. It would be physically impossible for YouTube to ban anything if it was a real thing.
That there's a conversation about YouTube censoring people says that it is perfectly possible for free speech to not exist. It's not a physical law of the universe
>Sort of like how radical Athenian-style direct democracy doesn't work, maybe radically giving everyone near-unchecked access to nationwide broadcasting doesn't work, either.
The difference is that direct democracy doesn't scale. Switzerland has a population of 8 million; the US has a population of 330 million. Also, I'm willing to bet that Switzerland is more homogeneous than the US. And size wise, Switzerland is only larger than 9 US states.
What I'm saying is that the US is huge. And diverse. It's a scaling issue.
> The difference is that direct democracy doesn't scale.
I hear that every time Switzerland is mentioned. I've yet to see a strong argument to suggest why it wouldn't scale in this day and age.
What I think doesn't scale is centralized systems like France, where all money flows to Paris and rarely any back. Switzerland, on the other hand, is extremely federated, to the point that each canton, often smaller than an average city, has its own school system. Most tax money stays at the local level and doesn't go to Bern. This independence IMHO is the critical factor for scaling that would work for much larger countries.
As to the homogeneity: the US has one language any natively born American speaks. Switzerland has 3, with a considerable political divide between the French- and German-speaking areas. Finally, Switzerland has 20% foreigners vs 10% in the US. So again I would very much challenge you there.
Yeah, the last time the US states had that level of autonomy was 1860 [1]. After 1865, the Federal government consolidated power, and more so after the 1940s [2]. It's just that in the US, most calls for more decentralization (aka "states' rights") were considered a "dog whistle" for pro-slavery sentiment (or at the least, pro-racist thoughts). I think it's crazy that the US President has the power that he has (and I have felt that way for the past few Presidents) and that Congress has shirked its "checks and balances" [3] - it's just been the most blatant for the last two Presidents.
It's funny you mention the school systems. They aren't centralized in the US. Heck, they aren't even necessarily centralized by the states! In the areas I've attended school, it's been by county. And even the one I graduated high school under was administratively treated as three separate school systems (because even at the county level, it was pretty large). It's one of the few areas left that haven't been completely taken over by the Federal government.
Yes, English is widely spoken in the US, but there are areas where it's a distant second. It may surprise you to learn that the US has no official language. I live in South Florida and it's not rare to encounter areas where only Spanish is spoken. There's an older dialect of German still spoken in parts of Pennsylvania, Ohio and Indiana. And there are areas on the West Coast where Vietnamese, Japanese and Mandarin (I think Mandarin is the most widely spoken Chinese language) are spoken.
I am surprised at the 20% (I found figures saying it was as high as 24%) in Switzerland, but I would still contend that the US is diverse. There are cultural differences between New England, the South, Texas, the Midwest, the Southwest and the West Coast that are probably just as deep as the French/German divide in Switzerland.
[1] Just prior to the US Civil War.
[2] Just after the Great Depression and World War II.
[3] And for all my voting lifetime, Congress has had lower polling numbers than any President, yet we keep electing the same yo-yos to office ("Congresscritter X is an idiot, but X is my idiot!").
Calling Switzerland homogeneous because everyone there is white is as racist as saying that China and Japan are the same because they are all yellow.
The US was already using the current system back when it was less diverse than Switzerland and had half the population of today's Switzerland. The only reason why we do not have direct democracy is because it's a lot harder to rob a country blind when everyone is involved in the decisions.
It's not a question of whether they are intelligent enough to do so. It is whether or not they can get the information that allows them to do so.
With the vast amount of information on social media, which is where more and more people are getting most of their information nowadays, and the way social media algorithms are designed to maximize engagement, and the way human cognitive biases work, the result is you almost never get a balanced view presenting arguments for all sides even if all sides offering information are acting in good faith.
Then add in all those who are not acting in good faith, who have a much easier time generating vast quantities of low quality information than the people arguing in good faith have for generating high quality information, and it is even worse.
There was a recent article in Scientific American that covered research into this. Here was the HN discussion of it [1].
You need expertise to interpret information and you need the right kind of expertise and even the right combinations of expertise, and even knowing what the right combination of expertise is.
It is a question of intelligence, information, expertise and meta-expertise, all of those things. Privileging any of those components of sense-making is myopia of different sorts depending on what claims people are actually wanting to forward.
For instance, if you want to look like a centrist while avoiding calling people dumb, you say they're low-information voters: change the news, make them listen to these experts, give person X more control or say - when the news, those experts, and that person all have problems of different sorts. It's not one-dimensional.
They're not banning all allegations of fraud. They're blocking a specific set of claims that have been shown to be false, for instance that Donald Trump won the 2020 US election.
Question: Why is this a threat to democracy, but previous censorship wasn't?
There's always been an expectation of truthfulness in communication: We don't allow people to go around claiming that Pepsi causes AIDS, or cures cancer. You're not allowed to claim you're collecting for a charity, and then keep all the money for yourself. I doubt you'd be terribly okay with me claiming that you're a murderer and asking how much that biases your stance.
(Genuinely trying to understand if there is something that makes this different from any other sort of fraudulent communication - I don't see people defending the other stuff under the guise of "Free Speech", and I'm not clear why this would be different)
We have a really strong presumption against what's called a prior restraint. It means you generally can't stop someone from publishing something, but you can sue them after.
This does two things. First, it makes it so that if someone feels strongly enough that they're right to be willing to risk the penalty, the public still gets the information. This is important. It's a check on the courts getting it wrong.
And the second is that in order to punish someone for speech, you have to prove it in court in a public adversarial proceeding where the person being punished has an opportunity to defend themselves and the claim is aired as part of the court record in a context where everybody knows that it's in dispute but the media and any interested party still gets to find out what it is.
This is obviously not what happens when you implement a general prohibition against all claims of election fraud with no meaningful opportunity for anyone to view the material being censored or make their case that it shouldn't be prior to its removal.
> (Genuinely trying to understand if there is something that makes this different from any other sort of fraudulent communication - I don't see people defending the other stuff under the guise of "Free Speech", and I'm not clear why this would be different)
People are not defending fraudulent communication. They are observing that the same factors that make audiences vulnerable to misinformation affect the censors and the process by which censors determine which information is harmful.
If we had a test for truth or sincerity we could use it to determine which information was false or put forth in bad faith. We don't, so any person who acts as an information gatekeeper could be one of those people who is lying or deceived and they would then be able to corrupt the discourse by censoring the information that exposes their lies or misunderstandings.
Fraud is a very specific (and hard to prove) term of art in US law.
There is no US law that would stop someone from making expressions that are later determined to be fraudulent.
Note, commercial speech (e.g., advertisement) is considered a less protected species of speech and may be subject to prior restraint in limited viewpoint neutral circumstances.
If you're a doctor and you claim this then those are grounds for having your license stripped and being held criminally liable if people actually take you up for advice.
For one thing, rules for thee and not for me is simply terrible, and the left argued that there was fraud last time.
But more importantly, I don't even think your charity example works. Like, you can be charged with fraud if you just put the money into your bank account, but if you collect for a charity and charge a 75% collecting and handling fee? Are you then collecting for a charity?
In this case it is more that somebody claims you are not collecting for the charity, and you show the bank statements that conclusively prove (according to yourself) that you were collecting for a charity. Then somebody starts looking into it and claims they have evidence that you play golf with the bank teller. And that their manager is your college buddy.
And now you try to suppress the (alleged) photos.
To be clear, I don't think there was widespread fraud or voter suppression in favor of Biden. But the key difference is that normally there are disinterested parties (in particular courts) that can rule on the evidence, in politics there are by definition no disinterested parties in the US. And because of the superpower status of the US, it is even worse outside it.
This kind of dogmatic statement is good for soliciting a response, but I think it's intellectual degradation.
There are infinitely many interpretations of "democracy" and "free speech" and it's hardly a simple yes/no question. We should be a free/democratic society as much as possible, and it's an ongoing (while not always successful) effort. We should discuss concrete steps.
On the contrary, I think this goes to the root of the matter. Can you trust people or not? That's a fundamental question that decides whether you can have real democracy, or only a sham. Because any kind of democracy where people are only allowed to make "safe" choices, decided on by someone else (e.g. the censors), is a sham. And that's true even if the censors themselves are directly or indirectly democratically appointed. Iran is a good example of such system taken to its logical conclusion.
I'm sorry, but trust isn't a binary value. I trust people in my life to be reliable and trustworthy to a certain extent and in certain areas. I don't trust my party-animal buddies to be on time for breakfast at 8 AM, and I don't trust my stressed-out workaholic friend to just chill on a Friday afternoon.
You cannot simplify trust to a single variable; you have to go case by case. And trust in the masses is partial; finding where it is and where it is not is critical.
But at the end of the day either you give everyone a vote or you don't.
A system where the most trusted people had 10 votes and least trusted people had 0.1 vote might make sense in a society of nerds voting on a software project, but not in general political application.
How would you manifest that partial trust when it comes to political power, though? The problem with that power is that splitting it up is a zero-sum game - you can give less to "the masses", but then somebody else is getting more. Who would that be, and why would you trust them?
Further, even if citizens ostensibly advocate the wrong belief, or are, say, persuaded by an adversary, that belief becomes the legitimate value of the population regardless of the reason, as democracy is purposed to obligate government to pursue the values of its citizens.
I’ve been to a country with a dictatorship with a faux democracy and no free speech. I asked the minister of journalism & communications whether he believes that free speech is possible in a democracy. He unabashedly said no, as there is no guarantee the “consensus” will be “correct”, especially given the influence of its adversaries.
There's no such thing as free speech. Speech is bought through marketing. The wealthier get more of it. This is inherently undemocratic, as constituents don't have equal political influence.
Dogmatic arguments suck. Is Android/Apple better?
We don't live in a democratic utopia. Instead, belief in democracy and associated values has provided maximal value creation and competitiveness generated by knowledge economies when compared to other current operating models.
That half the country is willing to dismantle what's responsible for their quality of life would be funny, if it wasn't the joy of dictators everywhere. They can point to the absurdity of mob rule as half of us bite the hand that feeds. Clearly, something is already broken.
The possibilities of targeting messaging via fb, youtube, twitter ad budgets go far beyond the freedom of speech the authors of the constitution had in mind, no?
The ability to deliver snugly tailored messaging to specific audiences is a completely new level of "talking". It subverts traditional democratic communication, where all participants realistically had a comparable and diverse data/information set. In contrast, given a sufficient targeted-advertising budget, you can very deliberately disseminate messages which shift the electorate's decisions just enough that you get the political climate and outcome you want.
That's no longer the free speech of the constitution. That's mass manipulation. Brain hacking it was called elsewhere in the discussion.
What can yt, twitter, fb do?
Banning certain content feels bad, but how can they nudge us with tweets, videos and posts that challenge us a little, the way the old news stand always carried the paper we normally wouldn't read? Until we did, once in a while. And of course the paper was not 95% letters to the editor, unredacted.
How can the fb, google AI help us to structure the flood of information in a more nuanced way? Instead of a blunt ban?
FTFY: If you reject the idea that a majority are intelligent enough to think about politics for themselves then you reject the idea that they're intelligent enough to elect a capable government.
So you are attempting to make the point that the UK is not an equivalent democracy?
The House of Lords cannot create bills nor stop them. They can look over the elected parliament's shoulders and whine, that's it. It is a vestigial remnant of the monarchy system that, like the Monarchs themselves, the UK keeps because it's cute.
While in the US, un-elected corporations do very much the same thing, except when they whine, they get their way.
I'm sure it does but in the UK, there is no Citizens United, where the Supreme Corporate Court of America (LLC) decided $$$ = speech, more $$$ = more speech, thus infinite $$$ to political campaigns = ALL GOOD HOMIES (This isn't even satire, this is actually their logic here).
Let's get real: it isn't the fact that corps lobby governments that is the problem, the problem is when they can give unlimited cash bribes, like they can in the US. But they cannot in the UK.
When you really start examining it, the US democrazy(sic) doesn't seem all that exceptional.
"Democracy and free speech are intrinsically linked."
Yes, really, because an important part of the democratic process is that the opposition can express their ideas, criticism and disagreement without fearing fines or prison.
Yes, in the UK, this verbal playing field is narrower than in the US, but still fairly wide. Corbynistas could advocate for nationalization and open borders without being jumped on by uniformed thugs.
Look to places like Russia and Turkey to see a severely constricted field, with the opposition risking their freedom by doubting the Dear Leader.
It's not highly restricted. It's a bit more restricted than in the USA and there are plenty of brits that think that's dumb, meaningless and just creates drama whilst simultaneously distracting the police from solving actual crimes.
If you think people are so smart, why should you care if people get banned from Youtube?
Clearly smart people will discover smart videos on any web site they are located on, and there's no reason for anyone to need Youtube to find an audience.
How would one determine whether information is false or not? It's easier to publish disinformation than it is to disprove it, and while some people are clearly better at sifting truth from fiction than others, I suspect everyone has beliefs that are based upon false information.
Moreover, this isn't censorship by the government, but by Google. Why should a private company be made to disseminate disinformation when it doesn't want to? A key part of freedom of speech is the freedom not to speak, and mandating that a company publish information it doesn't want to is surely just as bad as forcing it to censor information it does want to publish.
> Well, I think here you point out to one, really, of the basic defects of our system: that the individual citizen has very little possibility of having any influence - of making his opinion felt in the decision-making. And I think that, in itself, leads to a good deal of political lethargy and stupidity. It is true that one has to think first and then to act - but it's also true that if one has no possibility of acting, one's thinking kind of becomes empty and stupid.
-- Erich Fromm
He said that in 1958, in an interview with Mike Wallace. Since then the gap between rich and poor (and between rich and mega-rich) has kept exploding, media have been consolidated further, people have been uprooted even more, have been shit on and lied to even more, and have even less of a stake in the outcome. Just doing more of the same will never lead to different results, yet by now many use the current state of things to argue for exactly that.
By that logic it's not contradictory to be intolerant of people intolerant of intolerance. Or is it only valid logic with an even number of intolerances?
This logic doesn't work in the abstract; you've got to apply it to a concrete situation. Each situation is different; they are not all the same, to be judged in one swoop.
Each situation is morally relative and animosity is ineffective at resolving conflict, instead only deepening the divide. You cannot fight hate with hate and achieve love. We all must be more tolerant of people and their fallibility in order to reach common ground and discover that we are all much more alike than we are different. Love the sinner, hate the sin, but teach and forgive. MLK nailed it when he said that the means are the seeds from which the ends grow. You cannot hope to bear fruit when starting from a rotten seed. It's so easy to retaliate but true power is the confidence to de-escalate.
This country's government is based on the idea that people aren't intelligent (or interested) enough to think about politics for themselves. That's what a representative democracy is.
That's not why it's a representative democracy. The reason is that direct democracy was not logistically possible at the time, and also to support states' rights.
>Democracy and free speech are intrinsically linked.
Yes, but the only "free speech" that's realistically able to be implemented is freedom of speech without fear of the government putting you in jail for criticizing the government.
The other idealism of free speech implemented by commercial businesses is not possible because we (the collective "we") won't allow it to happen.
There is no broadcasting medium (including websites) in any country that doesn't have interference and pressure to remove/ban content via consumer boycotts, advertiser influence, subscribers, business' self-discretion, or decrees from government officials.
The above list of actors in society is the collective "we" that makes absolute free speech an unattainable goal. There is no business with an infinite bank account that can withstand all outside pressures to censor information. Apple has $200 billion in the bank and yet they bowed to China pressure to remove podcasts that supported Hong Kong.
If a successful/influential giant like Apple can't implement absolute free speech, what's the proposed alternative corporate structure that can do it? Nobody ever lays out a concrete plan that makes free speech possible.
[I don't mind the downvotes but I would really appreciate some replies with constructive comments explaining how commercial businesses can realistically implement free speech.]
>If you reject the idea that people are intelligent enough to think about politics for themselves then you reject the idea that they're intelligent enough to govern themselves.
Elegantly put. It seems you've triggered an avalanche of replies looking to stick asterisks or [#] footnotes and caveats to this statement to suit their biases of the day.
Bang some caveats in there and you put universal suffrage as a concept in question.
What does "Govern themselves" practically mean in this context?
Lets say that I'm an intelligent person that firmly believes in "rugged individualism" to the point where I'm an anarcho-georgist [1] and do not recognize the state as an authority for real property allocation based on a well reasoned moral philosophy.
Given the fact that there is no unclaimed territory where I could live without falling under a state's thumb, and hence would not be allowed to "govern myself", how would I then govern myself consistent with my political philosophy?
This is obviously a hyperbolic example, but it's IMO an important exercise to explicate how a large group of people with heterogeneous values, is supposed to act/behave when a prime mover state system is (in effect) demanding no values be higher than it.
We in fact already do that and nobody bats an eye.
People are not intimate with the details of most government bureaucracies, they are not expected to be, and they're not expected to grasp such details in a way that they can make informed decisions.
And frankly that is absolutely correct. We believe democracy to be the best system despite its flaws, and one of its flaws is that it's evident that most people have no freaking clue about how government works.
If you believe in speech for the purpose of government, then you have to reject outright lies designed with the intent of delegitimizing democracy.
Authoritarians always justify silencing speech for the greater good. You can reject lies with counters to those lies, or do it your way, and end democracy.
So when a lawyer or doctor is stripped of their professional license because they become professional bullshit peddlers, does that constitute silencing people or saving lives and the integrity of the justice system?
I don't have time to listen to a million bullshit peddlers with an agenda.
Then don't listen to them, the same way the rest do. Doesn't mean you should get to silence them.
A bullshit peddler named Dr. Atkins was the only guy in the 90's saying that eggs were good for you. The rest of the establishment physicians said eggs were bad for you, guacamole was bad for you, all fat was bad, eat more pasta.
They were all wrong, Atkins was right (when it comes to eggs.) But if Youtube had existed, people like you would have said he was dangerous and demanded he be removed. That's the problem:
How do you know the people you think are quacks are actually quacks? What about when the establishment is wrong? The Australian doctor who claimed ulcers were caused by bacteria was silenced, mocked, and ridiculed for decades. He was right, as we know now.
Yann LeCun was mocked and ridiculed for his neural networks. He was outright rejected from conferences, and his papers were ignored, despite having groundbreaking results that set new benchmarks, because the established CS folks had decided in the 80's that neural nets were garbage. They were wrong; he was right.
But it's cool, I trust you to make decisions for me on what I can hear.
There are a lot of things we do not know, even on seemingly simple topics such as nutrition.
Just watch the "a calorie is a calorie" crowd clashing with the "sugar is a chronic toxin" crowd. Not surprisingly, Coca-Cola pushes the former view. And in a curated YouTube world, who is going to get removed for spreading misinformation?
Eloquent, but... YouTube isn't an authoritarian (over anything except their own platform), and their policy, whatever it is, isn't going to "end democracy".
Obviously, YouTube has the right to do whatever they want with their servers. My point wasn't that YouTube's policy was the end of democracy, but that the ethos espoused by the parent commenter, if shared by a majority of the electorate and leaders, results in the end of democracy.
Democracy requires trusting people to not be complete morons. Unfortunately, the polarized nature of political discourse in the US has caused people to think that the 50% of the country that doesn't think like them is too stupid to be trusted with speech and videos.
Despite the fact that my brother has been sucked into the world of Alex Jones, I don't think most Americans are too stupid to resist dumb conspiracy theories. In decades past, this brother was a radical leftist, regularly arrested for trespassing in the late 80's at nuclear power plant protests, and later a radical in Greenpeace in the early 90's. This was long before the internet. He didn't believe the moon landing was real, and always believed in dumb ass conspiracy theories. (He's 11 years older than me, and by the time I was 11, I knew he was a moron.)
My other 4 siblings aren't like him.
His new shift was, honestly, not that weird. He happened to shift to the right, but it's a horseshoe as you know, and crazy is crazy.
There have been, and always will be, highly illogical people looking to indulge their desperate desire to have symmetrical, orderly views of the universe that aren't real. The big, disastrous outcomes must always have been carefully plotted, large, grand plans (like 9/11 being a false flag operation, which my brother also believed then). My brother is one of these people, but most people aren't like that.
This huge drive for censorship didn't occur when a large portion of the American left thought that 9/11 was an inside job, because the normal people on the left knew that it was a large, but still fringe, group within their own sphere. The problem is that the same normal people on the left don't know normal people on the right. Why? Because they don't know ANY people on the right. They are in bubbles now. So they are unable to draw the same conclusion that people like my brother are a fringe on the right. To them, voting for Donald Trump is evidence of a massively delusional person, rather than the grim reality that the vast majority of Trump's voters don't like him, and simply chose a lesser of two evils in their worldview, due to a corrupt, oligarchical class at the top of both major political parties resulting in highly polarized policy positions that force them to do this.
> This huge drive for censorship didn't occur when a large portion of the American left thought that 9/11 was an inside job, because the normal people on the left knew that it was a large, but still fringe, group within their own sphere. The problem is that the same normal people on the left don't know normal people on the right. Why? Because they don't know ANY people on the right.
I think to at least some degree, this is a difference between 2001 and 2020. Normal people on the left knew people on the right, and now they don't.
I thought people were intelligent enough, until some 75M voted for x. Now x is saying that it's a global conspiracy and bringing those 75M along in believing that lie.
YouTube can choose to keep or delete any content on their platform, regardless of truth or free speech. It's in the EULA. You want free speech? Go outside and yell. No one is stopping anyone from free speech. What Google is doing is cleaning up content on their service according to their EULA. If you disagree with the EULA, you can gladly go use another service.
I will add, targeting a specific genre of videos looks bad, sure. It's only that we are talking about that genre and not all the other topics YouTube has removed.
This is a non sequitur. Free speech is an ideal. We would like google to preserve free speech, even though they are not legally obligated to per the Constitution. If their EULA does not uphold that ideal, then they should change it, not compromise the ideal.
> This is a non sequitur. Free speech is an ideal. We would like google to preserve free speech, even though they are not legally obligated to per the Constitution. If their EULA does not uphold that ideal, then they should change it, not compromise the ideal.
I would like to uphold that ideal by putting a political bumper sticker on your car. Can you give me your address so I can send it to you?
If you don't have a car, I can send you a sign to put in your window or a t-shirt with a slogan for you to wear.
I appreciate your cooperation in upholding this important ideal.
I'm going to agree but offer a different focus. The agreement is in that YouTube can do this legally and that is not the problem.
But this is also about the practice of ideals. I watched a Chris Martinson video (PhD neurotoxicology, runs something of a prepping channel on YouTube because he doesn't see how the current US approach will resolve comfortably for people who trust the government) on Vimeo the other day because they'd yanked it from YouTube (it discussed the coronavirus). If it has reached the point where YouTube won't let a medical PhD talk about medical research, what use is this platform to me? Music videos, I suppose.
The reason free speech is an ideal happens to be because it works better - YouTube is less fit for purpose as an information distributor when it doesn't represent a full spectrum of opinions. I already know what the official government message is - they have a website. I read it on occasion.
> Free speech is an ideal. We would like google to preserve free speech
Is this self evident? Specifically google, or all and any private platforms? It's not obvious to me that every private entity should hold the ideal of free speech -- particularly a version of free speech that means something different than protection from persecution by the state.
But your desire for Google to host certain kinds of speech against their will doesn't supersede Google's own right to decide otherwise on their own platform.
They have their own free speech rights, as well as free association and the rights of free enterprise.
I generally agree with this but YouTube is a communication channel and has a different set of expectations. Additionally they promoted themselves as an open forum in the beginning.
Ideally, discrimination should not exist in something that is promoted as a public forum.
If YouTube only wanted to serve a certain type of genre or they had the same censorship policies from the beginning then that is one thing. But they promoted themselves as open and free speech and then once they achieved critical mass they changed their mind to push their own agenda.
That is dishonest and an abuse of public trust.
As a contrasting example, censoring porn, while still not free speech, would be fine for YouTube to continue to do since they had this stance from the beginning and people chose to join the platform knowing this.
>Additionally they promoted themselves as an open forum in the beginning.
Was this before or after Google? Because Google's interest in the platform is definitely maximizing revenue and competing with first party streaming services.
Yeah, and other mainstream media organisations - including the freaking New York Times - have successfully campaigned to get many of their remaining advertisers to pull out. To the point where, at least if the people campaigning for this were to be believed, many of the targeted Fox shows couldn't get ads at all.
If youtube wants to remove stuff they can, but they should start with removing the values they state on their about page. You can't say you give everyone a voice and then start blanket silencing people.
You seldom see the ideals of people who claim to believe in freedom of speech stated so clearly. A world where corporations control all the information people consume, but people can "go outside and yell" in frustration. This is the kind of freedom we must bring to Iran.
I don't even know where to start here. Common carrier laws? The post office? Court cases forcing shopping malls and company towns to allow freedom of speech? I guess these are all innovations by the Textualist/Originalists you despise.
I remember that company towns had to allow private speech, but I never heard that shopping malls had to. I always thought that any protest or disruption or unpopular speech in a mall would get you escorted out by security.
Ok what if your telephone or internet company decided the same thing? We will listen in to your private conversations and if you say something we don’t like we will terminate your contract.
Some services are so large and entrenched that they have become natural monopolies that need to be regulated. We place restrictions on what natural monopolies can do because of the effects on society as a whole.
The question that needs to be debated is if YouTube has reached the status of a natural monopoly or not. I think it probably has, but I am open to evidence on the other side.
I’m not seeing a parallel, here. Your example is mixing in a whole bunch of “invasion of privacy” which doesn’t exist in the real situation being discussed, and is going to vastly impact people’s emotional reaction.
Why not use the more natural example? “What if a television network decided the same thing? We will watch your publicly broadcast television show and if you say something we don’t like we will cancel your show.” It’s a lot closer to what’s actually happening in this case, right? And it doesn’t needlessly mix in the hot-button topic of privacy violations.
So what happens when you have a group phone call? Should the telephone company be able to listen in and censor those as it wishes?
The parallel is that phone companies in the past did listen in to people's conversations, but this was recognised as an abuse of monopoly power. Regulations were introduced to stop this sort of activity.
What we should be discussing is how to regulate natural monopolies like YouTube, not if they should be able to just do whatever they like. Like all regulation there will need to be a balance, but society has a vested interest in ensuring natural monopolies are not abusing their power.
Even if YouTube is not abusing its power, the fact it can is a concern.
For example, Holocaust denial is shitty, and something that anyone should be allowed to ban just because they don't want to be associated with it. The problem is that you have to draw the line somewhere between "The Jews faked the Holocaust and none of it ever happened" and "The exact number of deaths in the Holocaust will never be known, but it was in the millions". The first statement is clearly false; the second statement could be an attempt to downplay the magnitude of the event, but it is hard to say without context, and you may never know the intent of the person who said it.
> but it is hard to say without context and you may never know the intent of the person who said it.
So? There is a certain revulsion that a lot of engineers have for situational decision making, but that's how we've been doing the law for centuries. Virtually every line that matters is hard to draw. Yet we don't paralyze ourselves by refusing to act.
If corporations can't discriminate in hiring due to an American principle of anti-racism, then we can also prevent corporations from discriminating in speech on an American principle of free speech.
They have the right to do it, and it will make the world worse off. Taibbi has always had a great gut instinct for normal Americans and how they react to elite condescension, and he's right here.
My idiot brother got twice as radicalized when they kicked Alex Jones's crazy ass off YouTube.
It made people feel better and more morally superior, especially employees. It hurt my brother. Now when he watches that asshole, he's on a website where he is guaranteed to never get the other side.
If you support this, then you don't know anything about how humans react to censorship.
Consider a utilitarian point of view. This is not about grounding people who already went off the deep end, it's about reducing the number of people who get radicalized. I think from that perspective deplatforming radicals is both ethical and effective.
Obviously I would agree if you're talking about Alex Jones. A thing that worries me is when people are banned or videos are removed when the facts in question are in dispute.
I was a very big proponent of wearing masks at the beginning of the pandemic, back when the WHO and Dr. Fauci had advised against it - not because that advice was true, but to manipulate the public in order to prevent a run on N95 masks.
If I had stated in a video the truth - that the public was being manipulated to protect the mask supply - I would have been contradicting the WHO, and the video could have been banned. That's a problem.
Even before Jones was kicked off, people went to YouTube for insane conspiracy theories and ended up, due to the video recommendation system, still only seeing one side. It really didn't matter whether Jones was on there or his own platform, people watching his videos still only see his (and related) videos.
Sorry about your brother. These platforms are designed for feedback loops. That has nothing to do with free speech or a legal protection to spread libel.
People need to realize that ANY website a user submits content to is not a free speech venue.
A tweet isn’t free speech.
A post on Reddit isn’t free speech.
These are “articles and or content provided by the public for enhancements to our platform” or whatever EULA legalese they use.
Yeah, like I said, banning a bunch of videos from one genre right now looks bad and people have a right to be upset over it. It’s still YouTube’s legal right to choose which content they want to showcase on their platform.
They hold all the power here. Content creators are at their mercy for compensation, views, inclusion in the recommendation engine, etc., then cry foul when the company moves in another direction. Down-leg economics: specialized businesses built on a supply of value from another business. It's not a sustainable model for anyone on the 3rd rung.
An individual's understanding of the world is shaped exclusively by the signals produced by their 5 senses. Intelligence governs an individual's ability to interpret those signals and draw further conclusions from those interpreted signals, but if that individual is fed a constant stream of bad yet internally-consistent data, correct application of logic and reason will lead that individual to objectively incorrect conclusions. Further, human beings aren't perfectly rational machines capable of processing the utterly overwhelming amount of signals we are exposed to at any given instant; rather, we develop blinders to focus in on specific signals and we develop heuristics to reach decisions without processing all possible signals. I haven't read every whitepaper put out by every politician I've ever supported, nor have 99.9999+% of others, but we still support politicians because of our heuristics and blinders. I prefer forming my opinions from whitepapers, source documents, and a rigorous analysis of a candidate's behavior, but ultimately a heuristic I rely on is the opinion of other people or institutions that I trust, like the New York Times, the Economist, Ezra Klein, Matt Yglesias, Raj Chetty, etc.
I've decided that I don't like believing incorrect things, so I've thought a lot about epistemology and embraced a mixture of pragmatism and empiricism (where possible). I try to independently verify suspicious claims, but it took me a long time to become a competent data scientist, I'm willing to spend far more time searching for good data/signals and analyzing them than most other people, and I still can't be informed on everything which necessitates reliance on sub-optimal heuristics. Most people don't even attempt the rigor I aspire to, preferring a tribal epistemology, where they outsource opinion-forming to the thought-leaders of their tribe. If those tribal thought-leaders are telling people things like
* "Hillary Clinton is running a pedophilia ring out of the basement of this pizza restaurant" or
* "COVID-19 is a librul hoax so don't wear a mask or embrace any hard-won public health advice" or
* "Joe Biden's landslide victory isn't real, and even though we can't prove it, despite the fact that electoral systems are designed so that malfeasance would be easily detectable, you should reject the result of a DEMOCRATIC ELECTION, EVEN IN STATES WHERE THE OUTCOME WAS CONFIRMED BY A HAND RECOUNT"
you'll see behavior from individuals in that tribe that's consistent with the signals those individuals perceive, but the behavior won't produce the expected outcome, because the signals don't reflect reality. Instead, it will produce mayhem, death, destruction, pain, and loss. This is empirically undeniable.
TL,DR:
Democracy depends on the votes of people. People form opinions only from the information that passes through their senses and into their mind. Garbage in, garbage out. It's possible to solve problems through rational analysis of sufficiently accurate models of reality, but models that don't reflect reality can be extremely, devastatingly destructive. If the signal "Donald Trump actually won the election and it is being stolen from him" were an accurate description of reality, it would be rational to use violence to repel the usurpers trying to steal your country. But that signal is false, and spreading it is both eroding belief in democracy in the communities where enough members embrace it, and will lead to unjustified murders.
What exactly does a hand recount confirm? What is the point, when the accusation had nothing to do with counting? The accusation is that fraudulent ballots were mixed in with the legitimate ones. Recounting the contaminated pile tells us exactly what?
Each state has its own laws regarding elections and recounts, but in all the states I've looked at, voters have to sign either a document at their precinct if they're voting in person, or an envelope for their ballot if they're voting by mail. Auditors can check whether there are more votes in a precinct than in that precinct's voter registration database, whether the signed envelopes or sheets mismatch the ballots, whether the hash on an envelope fails to correspond to its ballot, whether there are votes from people not registered to that precinct, or multiple signed sheets/envelopes for a single registered voter. All of these checks make the "they just dumped more ballots in" theory unbelievable, as the amount of coordination and access needed to successfully execute that theoretical tampering without leaving plenty of evidence becomes realistically impossible.
It's fine to investigate a hypothesis if you're willing to accept the hypothesis is false. But if you continue asserting your hypothesis is true after many competent investigators, including investigators ideologically aligned with yourself and your desired outcome, investigate the system and find no evidence to support said hypothesis, instead finding evidence that repudiates your hypothesis, well, it's bad for democracy to continue pushing that hypothesis.
Witness affidavits suggest some counties in some states did not follow their own laws.
Now, unfortunately, there is no legal remedy, because in most cases the irregular ballots cannot be separated from the regular ballots once they are pooled together.
A lot of election laws define the rules, but few define the remedy if they are broken.
There have been a lot of signed witness affidavits, but nearly all of them have been thrown out in court as not credible, and none of them have led to victories in court.
That's what I find most amusing about "everyone should vote." Then they complain about the stupid people and policies that get voted in.
I might be okay with “every informed person should vote”. But that “informed” part can be heavily politically interpreted.
People have largely missed the fact that politics used to be the domain of informed groups fighting for power blocs. Now it's been "decentralized" and radicalized on Twitter and Reddit, and made into glorified reality-TV entertainment by the mass media (who are clearly serving a high-demand market of people who treat politics like sports teams).
I'd rather have far fewer people voting and have it go back to being a mostly boring, educated-people topic than the lowest-common-denominator mess it is today.
But I’m unabashedly elitist and understand that there are plenty of well funded groups who want dumb malleable voters and as many as possible.
Since you've defined free speech as being published by Youtube, a private publisher, I guess you think democracy and free speech didn't exist before Youtube?
Democracy is also intrinsically linked to freedom of association.
YouTube has chosen to deny associating itself with certain views, as is their right. It is no different than how Fox, CNN and others may choose what to broadcast.
Don't like that? Use BitChute or PeerTube or similar.
Well, we do reject this idea. That's why we have a republic rather than a direct democracy.
The same allegory of the cave justifications for why the average individual is too stupid to rule and shouldn't be allowed to are made today. Oh, you aren't well enough educated or experienced enough? You should not govern, and move aside for those who are educated or experienced...
We have a republic, because that's how it was organized more than two centuries ago, and changing it from within the system is extremely difficult. But many states have elements of direct democracy in form of public initiatives, and they generally tend to be the states that were settled and organized later (e.g. West Coast), and did so with benefit of hindsight, after seeing many decades of how the original system works.
So I'm not sure if it's even meaningful to say that we do reject the idea. The original designers did (mostly; they weren't a hivemind, either), but we aren't them; and they weren't really representative of their entire society, either.
I actually agree somewhat and hear me out. I think these people don't realize the juggernaut they are creating. I'm a political centrist but even I see it. Suppression of the message will only amplify it.
Prediction:
Trump will run again in 2024, no doubt. Consider the implications of that for a second. He already said he will be holding a major 2024 rally in DC during the Biden inauguration. Imagine the shadow that will cast on Biden Day 1, especially if half the population thinks the election was stolen and has no outlet for that opinion (not even Fox News). Couple that with the current infighting within the DNC between progressives and the corporate establishment elitist liberals they accuse Biden of being, and they are essentially giving Trump all the foundation he needs to become the underground anti-establishment anti-DC-swamp anti-deep-state no-lobbyist no-China-influence anti-MSM anti-big-tech real-middle-east-peace i-already-was-president anti-hunter-biden-corruption and even anti-fox-news candidate, which will go viral and make him even more of an outlier than he was in 2016. Trump will use all of this and more (Hunter Biden story suppression especially) as a real boogeyman to point to. The totalitarian dictator analogies will no longer hold any water when it turns out he steps down peacefully but not quietly, albeit with a huge legal fuss that fizzled out. He will aggrandize himself and play that to his base as him being a fighter to the bitter end. I don't think the GOP can primary anyone good enough to beat that version of Trump, and you will see a massive blue-collar vote swap from DNC to GOP.
The writing is on the wall: eventually there will be a bifurcation in Big Tech, and Twitter and Youtube will become walled gardens for the left of center only, largely irrelevant to conservatives, with things like Parler and BitChute and others absorbing the exodus. It will turn into a battle for centrists' eyeballs: come to our side's platform, or straddle both. The analogy of Reddit banning toxic subs only holds if everyone is ACTUALLY on Reddit; most simply left. There is already talk among conservatives of alternative Reddits and Facebooks, alternative Netflix/Hollywood companies, even an alternative Fox News. Many will say, "but those will be subject to the same problems as early Twitter and Reddit: how to excise the toxic elements and keep the real discussion." I would say that can EASILY be done without going down the censorship rabbit hole by simply knowing where to stop. Get rid of the illegal stuff, the spam, and the pornography, and leave the rest regardless of how distasteful it is. It's an Overton Window problem; you just need a slightly looser boundary condition on discourse. Stop pretending to be platforms and admit you are indeed publishers to an extent. Give users the ability to filter what they don't want to see, and pledge to never bias your algorithms. The cries of the left will be largely irrelevant; they will simply tell you to "go back to Twitter if you don't like it." Once the public loses faith and trust in your company or institution due to your sacrifices of credibility in the name of censoring one side's information, there won't be any coming back from that loss of face. I'm not saying this is a good or healthy thing for public discourse, but I do think it will come to pass.
It's questionable whether Biden will make it all the way through his term due to age and decline, and he has already said he will step down after one term. That means the DNC will likely roll with Kamala next (if they were smart they would re-primary her, but I don't think they have the will to; the backlash and collectivist guilt would be huge) - Kamala, who couldn't get 2% of her own party's primary vote in 2020, up against a juggernaut viral version of Trump holding mega rallies and creating the alternative media empire he has already dropped hints at.
I predict the headlines of the future will largely revolve around how the MSM and Big Tech blew it and need to find a way to reach out and repair their reputations and the public's faith in them - and will look back at this move as a huge mistake compounding others. The landscape will look totally different.
I welcome anyone's disagreement with my prediction. I will say it hinges on Trump pivoting correctly.
Much like Andrew Jackson after the election of 1824. After losing, he campaigned as a populist, accusing John Quincy Adams of stealing the election, and won in landslides in the next two elections.
First off, I don't think I will ever be convinced by someone calling something "un-American" or "American." That is almost always lazy thinking that tries to wrap up an emotional sentiment into some kind of conclusion. America and what it is to be American has changed and it will continue to change. We'd better hope that change is guided by reasoning about what is good or bad for this country instead of appeals to what is "American" or not.
Second, I'm rather disappointed by the defenses of free speech we see these days. They are flimsy, lack substance, and at times seem unwilling to actually argue for free speech. Is free speech actually good for anything? Is the only reason we attack attempts by people to encroach on it because it is "American"? I, for one, would like an argument for its value.
This piece is flimsy. The discussion is about YouTube removing election misinformation. Ok, controversial. I get it.
It brings up Hunter Biden's laptop as a case where
> That news was denounced as Russian disinformation by virtually everyone in “reputable” media, who often dismissed the story with an aristocratic snort, a la Christiane Amanpour
with those lovely scare quotes around "reputable". I'm not sure what to say here. Is it better for the media to run around screaming after every single lead? Even the ones that look flimsy to them? Is it better for the US media to be so willing to report and investigate anything and everything these days, when the media making a hubbub about anything is enough to have an effect?
Taibbi proceeds to ask us to indulge in a hypothetical, one where
> what would have happened if Facebook and Google had banned 9/11 Truth on the advice of intelligence officials in the Bush years
and the natural result of this is that
> it will start to make sense that Trump voters in Guy Fawkes masks are now roaming the continent like buffalo.
I'm sorry but this is incredibly lazy thinking. I'm going to stop quoting the piece because I'm tired and I have things to say. Things have changed. We've had Trump in office and he was most assuredly different than other presidents -- in good ways and bad. We've had the rise of the internet, of smartphones, of digital technology, and social media. Information flows freely. Nowadays we don't risk not having access to information, we risk being drowned in it. The "echos" in our echo chambers aren't some soft faint whispers we can't see beyond, they are roaring deluges that drown out everything else.
I'm sorry for the longer post, but the truth is that I hate these kinds of articles. They seem to just gawk at the problems we face today and do little to inform or provide perspective or argue. Taibbi in this case seems to think it's enough to post a tweet or some headlines; the reader will draw the right conclusions for themselves. It's obvious, after all. Isn't it? But then that's exactly the problem we face, where everything is obvious but somehow the other guy has come to a different conclusion, and it's all wrong!
Offer some damn arguments. Try to convince people. If free speech is worth fighting for, it isn't because it's some kind of "American" ideal; it's because it's a good thing, a worthwhile ideal to practice, a civic habit that improves our democracy and secures it for the future. People deal with information differently these days; that landscape has most assuredly changed. Is it any surprise that free speech will need to be fought for again?
I would only say that I FAR prefer an online landscape like the one we have right now, with loonies on the far right and left able to express themselves on all platforms, to a corporate-controlled, sanitized, heavily censored Internet. It's not even close.
If you look throughout history, for every anti-establishment movement that succeeds, there are many, many more that are curbed at various stages.
History has a selection bias that may suggest anti-establishment movements do well. They have succeeded plenty of times, but the probability of any individual movement succeeding is very low.
My prediction is that this will only make the establishment much stronger and they'll be able to curb even more dissidents way before they become a thing.
“In sum, the majority of Americans are generally unable to understand or value democratic culture, institutions, practices or citizenship in the manner required”.
“To the degree to which they are required to do so, they will interpret what is demanded of them in distorting and inadequate ways. As a result they will interact and communicate in ways that undermine the functioning of democratic institutions and the meaning of democratic practices and values.”
Amusingly, the author is clearly one of those who are "generally unable to understand or value democratic culture, institutions, practices or citizenship in the manner required". The one-sided blame makes it clear that he can't graciously accept defeat.
I think that life would be better if employers honoured their wage obligations instead of drawing up elaborate networks of hidden fees and transfers which put employees' hard earned wages back into the employer's pocket. Nobody really owns anything, they have everything they think they earned on layaway.
You make that sound like a bad thing? There are some who believe (as I do) that democracy leads to oligarchy. But then again, I've been reading a lot of Rothbard recently.
As for populism: in a democratic system it is a symptom of politicians and political parties not being seen by their citizens to be taking action on thorny subjects such as immigration, globalisation and law enforcement.
Rothbard ended up as an extreme paleocon, and his adherents, such as Hans-Hermann Hoppe, developed his line of thinking to its logical conclusion:
"In a covenant concluded among proprietor and community tenants for the purpose of protecting their private property, no such thing as a right to free (unlimited) speech exists, not even to unlimited speech on one's own tenant-property. One may say innumerable things and promote almost any idea under the sun, but naturally no one is permitted to advocate ideas contrary to the very purpose of the covenant of preserving and protecting private property, such as democracy and communism. There can be no tolerance toward democrats and communists in a libertarian social order. They will have to be physically separated and expelled from society. Likewise, in a covenant founded for the purpose of protecting family and kin, there can be no tolerance toward those habitually promoting lifestyles incompatible with this goal. They – the advocates of alternative, non-family and kin-centered lifestyles such as, for instance, individual hedonism, parasitism, nature-environment worship, homosexuality, or communism – will have to be physically removed from society, too, if one is to maintain a libertarian order."
(Then neo-reactionaries took it from there, ditching all the excuses to present this state of affairs as "libertarian", and correctly calling it the new feudalism.)
These conversations always go like this. It becomes tedious.
1) I don't care what Rothbard became in the end. It is completely irrelevant. Rothbard's critique of the state is an interesting perspective and some of them seem particularly apt when the leviathan of government is eroding people's rights because of COVID.
2) As for the Hoppe quote: the "neo-reactionaries" you mention can be counted on one hand. The activist left (which is why that quote is on Wikipedia in the first place) will take quotes and call someone alt-right based on one spicy paragraph in a book (which is exactly the trick you've tried here, and I'm not naive enough to fall for it).
I haven't read "Democracy the God that failed" (yet) and I will decide for myself once I've read the book. I very much doubt it is a new feudalism and I very much doubt you've read the book either.
From watching him speak, Hoppe's construction seems to be that, given the choice between a king and a politician, a king would be better. His rationale for this is sound IMO. The most important part of it (for me) is that a king will care about his legacy and a politician typically won't.
My own feeling is that I've never thought democracy effective or desirable. I've found the act of voting to be completely pointless, because there is nobody in the UK for me to vote for who represents my interest in decreasing the state.
The only time voting is effective is during referendums, when it is a single issue. Even then it isn't effective: the UK's politicians and press did everything they could to deny the referendum result (and are still doing so, btw).
I have heard arguments that the whole idea of democracy itself was perverted during the Enlightenment by those putting a Christian/individualist perspective on Athenian ideas. But I won't pretend to know the argument well enough to have an opinion either way on it.
Bear in mind that she's a self-described "poll challenger", which is a partisan position. We can reasonably infer that she is a strong supporter of Trump and therefore is not impartial.
That's a fair point, which is why challenged ballots get marked as "challenged" and not "rejected".
It is the duty of the observers to challenge ballots.
It is the duty of the election workers to record these challenges, which wasn't happening.
The only reason her challenge should have been denied is if she was shown to be challenging ballots indiscriminately or she was preventing the election workers from doing their jobs. To my knowledge, there has not been any evidence that this was the case.
It is not the Trump legal team's responsibility to prove that this woman (and the others like her) was impartial. Their only obligation is to provide a "preponderance of evidence" that supports their case. Given the large number of affidavits that their team has provided of a similar nature, it seems reasonable that the case should be allowed to be heard in a manner that takes both sides seriously.
> Given the large number of affidavits that their team has provided of a similar nature, it seems reasonable that the case should be allowed to be heard in a manner that takes both sides seriously.
Before presenting evidence, you have to present a legal argument for which the evidence would provide factual substantiation. The vast majority of the Trump- and Trump-allied-lawsuits have failed at that and been dismissed for that reason. The number of affidavits is immaterial if you don't have a legal argument for them to support which, if the facts were on your side, would support the remedy you are asking for.
Several of the lawsuits have been dismissed on the general basis that "election laws being broken is not evidence that fraud occurred".
It's logically a true statement, and I think it's a fair opinion to have.
But I do disagree with it: The laws regarding poll observers are designed to detect fraud. It is extremely hard to detect fraud when the poll observers aren't allowed to do their job, or when their challenges are ignored.
I get that the optics are bad: throwing out hundreds of thousands of ballots doesn't make anyone look good.
But what's the point of election laws if ultimately it doesn't matter when they are broken?
I'm not an expert in election laws, but I would be very curious if this standard of proving fraud has generally been applied for previous election cases in the same way.
> Several of the lawsuits have been dismissed on the general basis that "election laws being broken is not evidence that fraud occurred".
I do not believe that is correct. I have seen none for which that is the case.
Several have been dismissed because (1) process changes that they claimed violated election laws were well-known long before the election and, if they were illegal, had a viable remedy before the election, but the actions were not filed until after the election with no good cause for delay, and were thus barred by laches, (2) they made allegations which were incoherent in the light of the applicable election law and procedures, (3) because the remedy requested was factually impossible (e.g., cases asking for the ballot count to be halted because observers were allegedly denied access and until that access was restored that reached the court after ballot counting had completed, cases seeking delays in certification of results that has already occurred, etc.), (4) because the plaintiffs failed to present evidence of the specific allegation in the case, and (5) [what appears to be the single most common reason] because, after grabbing headlines by filing the suit, it was voluntarily withdrawn by the plaintiffs. This last seems particularly common with cases filed by Trump, though Trump-allied groups have also withdrawn a lot of suits.
Are you trying so hard to fraudulently overturn the results of the free and fair election that your comments get downvoted because they don't contribute to the discussion and violate the guidelines?
>Throwaway accounts are ok for sensitive information, but please don't create accounts routinely. HN is a community—users should have an identity that others can relate to.
Are you going to keep creating new accounts every time you get so much negative karma you can't post any more, as many times as Trump and the GOP have lost lawsuits trying to overturn the election, until you're at "prucomaclu50"?
>Trump And The GOP Have Now Lost More Than 50 Post-Election Lawsuits
>The Trump campaign and its Republican allies have officially lost or withdrawn more than 50 post-election lawsuits, and emerged victorious in only one, according to a tally kept by Democratic Party attorney Marc Elias, underscoring the extent to which President Donald Trump and the GOP’s efforts to challenge President-elect Joe Biden’s win in the courts have overwhelmingly failed to affect the election results.
>The 50-case milestone was reached Tuesday as a state court in Georgia dismissed a Republican-led lawsuit, and the count includes both cases that courts have struck down and that the GOP plaintiffs have chosen to withdraw, such as an Arizona lawsuit that the Trump campaign backed down from because it would not affect enough ballots to change the election outcome.
>The Trump campaign and GOP’s only win struck down an extended deadline the Pennsylvania secretary of state set for voters to cure mail-in ballots that were missing proof of identification, and likely only affected a small number of mail-in ballots.
>Among the Trump campaign’s more notable losses in court thus far are the campaign’s failed lawsuit attempting to overturn Pennsylvania’s election results, which a Trump-appointed appeals court judge said was “light on facts” and “[had] no merit,” and a Nevada court that found the campaign had “no credible or reliable evidence” proving voter fraud.
>Courts have also repeatedly struck down the campaign’s allegations claiming their election observers were not able to properly observe the vote counting process, and while one Pennsylvania court did grant the campaign a win by ordering that poll watchers can move closer to election workers, the Pennsylvania Supreme Court later overturned the ruling.
>In addition to the Trump campaign, GOP allies including state lawmakers, Republican Party officials and former Trump legal advisor Sidney Powell have also brought dozens of entirely unsuccessful lawsuits, and a lawsuit brought by Pennsylvania GOP lawmakers was rejected Tuesday by the U.S. Supreme Court.
>The legal campaign is expected to continue until the Electoral College meets on Dec. 14—or potentially until January—but a “safe harbor” deadline midnight Tuesday, which ensures certified results submitted by that date can’t be challenged by Congress, will make it harder for outstanding cases to succeed.
You might ask yourself why judges that Trump himself appointed are quashing his lawsuits. The answer is that the lawsuits fail to meet basic requirements of standing, evidence, remedy, etc.
Also timely filing (laches); many challenge procedures which were well known long before the election and were delayed for no good cause until after the election results came in.
We've already asked you to stop posting nationalistic flamebait: https://news.ycombinator.com/item?id=24191922. If you keep doing it we're going to have to ban you, so please stop.
Meh, I'm not impressed. Of the dozen or so sources I looked at, half of them are links to articles on right-leaning news sites or tweets parroting speech from Trump lawyers, with no links to primary documents. This isn't evidence.
One source was a video of election workers moving boxes around, with a conspiratorial narrative overlaid. This isn't evidence.
Several tweets showing "voting spikes" in favor of Biden as a result of regular counting operations. Nothing wrong here.
The remaining sources discuss "irregularities of expectation", yet none provide any plausible narrative of wrongdoing. The most plausible explanation is that Democrat-favored voting methods were more streamlined, reliable, and less contested this year than in previous years. Obviously this nets more votes for Democrats, but nothing practical is stopping a Republican from voting by mail, either. Also, I wouldn't be shocked if the 2020 election was actually more fair than in previous years, due to a large decrease in wrongfully-rejected ballots. I wouldn't be surprised if more Democratic votes were thrown away wrongfully in 2016 than fraudulent Democratic votes were accepted in 2020. If you're concerned about the legitimacy and fairness of U.S. elections, you can't talk about voter fraud while ignoring historical voter disenfranchisement. If someone thinks that mail-in voting was too easy this year, then their gripe is with the judicial systems that permitted this year's election rules.
Conclusion: The linked website doesn't seem to have very high standards of verification, as most items are obviously flimsy or easily debunkable.
Here's one example. It clearly wasn't decided on procedural grounds.
==========
Judge Matthew W. Brann dismissed the case with prejudice on November 21, citing "strained legal arguments without merit and speculative accusations," noting that "[i]n the United States of America, this cannot justify the disenfranchisement of a single voter, let alone all the voters of its sixth most populated state ... [o]ur people, laws and institutions demand more". He likened the Trump team argument to "Frankenstein's Monster", and characterized the requested remedy to disqualify nearly seven million votes as "unhinged from the underlying right being asserted."
Maybe procedural grounds was the wrong word but this is a good example of what I meant.
"I don't want to see the evidence that may disqualify votes because it will disqualify other votes." is basically the logic. It seems weird to me, but I'm not a lawyer, so I could be wrong and this could be a perfectly normal ruling from a judge.
I did see it but it was also flagged so I couldn't reply.
The thing about those affidavits, multiple people said the same thing about different polling places which makes it more believable in my eyes than if it was just one person for each allegation.
I do think there was fraud, I'm sure there is even some amount of fraud every election. The real question should be, was the fraud widespread and did it make a difference? I don't know but the fact that people can't even debate this is concerning to me.
Sounds like something a censorship advocate would say (or someone who dislikes the particular things being censored in the current climate). Will you still believe the same way when the pendulum swings and it's the left that is being censored?
"There is a cult of ignorance in the United States, and there always has been. The strain of anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that "my ignorance is just as good as your knowledge." -Isaac Asimov
or more to the point
"Reality has a well-known liberal bias" - Colbert
You're not entitled to be heard. That isn't a tenet of democracy or liberty. You're entitled to speak (as long as you don't infringe on others' rights).
So what is un-American about thinking that YouTube needn't allow conspiracies and crazy, unproven, way-out-there content?
These topics, to be charitable, are heavily debated in biology and sociology, at least to some degree - look up "nature nurture" on Google Scholar and you'll find as much scientific debate, and recently attempts to transcend the debate, as you could read in two lifetimes. I don't think most "liberals" are even opposed to nuclear and GMO.
My point was that liberals have their own biases and can be ignorant. For example, in the nurture vs. nature debate, the blank slate has been disproven for years. Everything is heritable, yet the left clings to the idea that all disparities are caused by culture because it fits their ideology: you can't fix nature with policy, but you can if it's nurture.
Because it increases the amount of pesticides used (or incorporates them directly into the plant where they can't be rinsed off) and pollutes the natural gene pool with artificial genes. The increased amounts of pesticide run off the field and mess up the rest of the ecosystem. Now you're seeing herbicide-resistant weeds come up in these fields.
>and pollutes the natural gene pool with artificial genes
As opposed to the natural gene pool being constantly polluted by spontaneous mutations? Or is that fine because of the naturalistic fallacy?
>The increased amounts of pesticide run off the field and mess up the rest of the ecosystem. Now you're seeing herbicide-resistant weeds come up in these fields.
AFAIK this is a non-issue except to the farmers. Resistance to whatever usually comes at a cost, so in the wild the superweeds will get out-competed by their non-resistant counterparts.
> Does this argument work without the "all pesticides are bad" assumption? I searched up bt gene's use in pesticides[1] and it looks pretty safe.
It's referring to herbicide-resistant GMO crops, where the pesticides are Roundup or Liberty. With regard to Bt, there's a big difference between topical application of Bt and GMO Bt corn (where the Bt is produced inside the plant, so bugs are poisoned by the plant itself).
> "all pesticides are bad" assumption
It's more of a heuristic where we assume that things that are bad for some biological organisms are bad for biological organisms generally, because molecules don't make decisions about what to react with; they react with anything that forms the right molecular bonds. Since the biosphere has a lot of biomolecular pathways, it's not possible to categorically test everything and catalog all the possible reactions; we just know we can limit the amount of unwanted reactions by using the minimum amount of pesticide.
> As opposed to the natural gene pool being constantly polluted by spontaneous mutations?
They aren't in opposition because mutations don't stop when you start polluting the gene pool with GMO products.
> Or is that fine because of the naturalistic fallacy?
It's an application of the precautionary principle: we act in the manner that creates the least possibility of harm, because the scope and extent of the effects are not known.
> AFAIK this is a non-issue except to the farmers.
Yes, it's a problem for the farmers, because they can no longer rely on the herbicide to control those competitor organisms, which was the entire point of using a selective herbicide/GMO crop system.
With regard to the ecosystem disruption due to pesticide runoff, it's an issue for anyone who is dependent on the biosphere, because of the harmful effects on biodiversity.
>With regard to Bt, there's a big difference between topical application of Bt and GMO Bt corn (where the Bt is produced inside the plant so bugs are poisoned by the plant).
Why would there be? The wikipedia article I linked mentioned ingestion testing. It's not like they tested bt as a regular spray on pesticide, found it was safe, then approved bt gmo corn without further testing.
>Since the biosphere has a lot of bimolecular pathways its not possible to categorically test everything and catalog all the possible reactions, we just know we can limit the amount of unwanted reactions by using the minimum amount of pesticide. [...] Its an application of the precautionary principle where we act in such a manner to create the least possibility of harm because the scope and extent of the effects are not known.
But the scope and extent of the effects are not unknown. Glyphosate and Bt have both been thoroughly studied. With your "precautionary principle" we'd all not be eating chocolate right now if it had first been used as a pesticide (chocolate kills dogs, so if dogs were somehow a pest, it could be used as a pesticide against them).
>They aren't in opposition because mutations don't stop when you start polluting the gene pool with GMO products.
So mutations are intrinsically bad? In other words, if there were some way to stop naturally occurring mutations globally, you'd support it?
>Yes its a problem for the farmers because they can no longer rely on the herbicide to control those competitor organisms, which was the entire point of using a selective herbicide / GMO crop system.
It's a problem, but the alternative (not using any GMO/herbicide) is worse, since the weeds would still be growing out of control. At least with GMO/herbicide you get a few decades of weed-free growth.
>With regard to the ecosystem disruption due to pesticide runoff, its an issue for anyone who is dependent on the biosphere because of the harmful effects on biodiversity.
This is totally orthogonal to the discussion of GMOs. If you're worried about pesticide/herbicide run-off from farms, then you'd want to regulate it directly, rather than regulating it by proxy by limiting access to technology that allows for greater pesticide/herbicide use. It's not as if the herbicide level tolerated by non-GMO crops is the same threshold at which ecosystem damage occurs. Not to mention, non-GMO crops can still have their herbicide resistance raised by selective breeding.
If you're concerned about the safety of glyphosate formulations you may be interested to learn that the testing and regulatory process appears to have been captured. [0] [1]
> Why would there be?
Are you asking why there would be a difference between a topical application that could be rinsed off and a substance that is produced by the plant and is inside the plant material itself?
> But the scope and extent of the effects are not known.
Indeed, this is why the precautionary principle is at play here.
> Glyphosate and bt have both been thoroughly studied.
They've been studied enough to suggest that they are potentially hazardous depending on the dose and context.
> With your "precautionary principle" we'd all be not eating chocolate right now if it was first used as a pesticide (chocolate kills dogs, so if dogs were somehow a pest, it can be used as a pesticide against them).
That's not a valid interpretation of the precautionary principle.
> So mutations are intrinsically bad? In other words, if there were some way to stop naturally occurring mutations globally, you'd support it?
I haven't suggested that; of course mutations are not "bad", they just are. I was responding to your placement of GE products in opposition to mutations. No, I wouldn't stop evolution if I could, but that's an interesting thought experiment; I wonder if a population with no mutations would have significantly lower cancer rates.
> It's a problem, but the alternative (not using any GMO/herbicide) is worst, since the weeds will still be growing out of control. At least if you used GMO/herbicide you have a few decades of weed-free growth.
There are other ways to control competitor organisms besides herbicide and there are people who use conventional herbicide that don't use GMOs (and so they don't use as much herbicide). Just so you know, most farmers are not letting weeds grow out of control on their fields :)
> This is totally orthogonal to the discussion of GMOs. If you're worried about pesticide/herbicide run-off from farms, then you'd want to regulate it directly, rather than trying to regulate it by proxy by limiting access to technology that allows for greater pesticide/herbicide use.
Are you suggesting that it would be better to place limits on how much / how often herbicide could be applied to a field? Perhaps so but I'm not convinced that the laws in this case could be finely tuned enough to achieve the effect without creating a rule that was either too strict or too lenient (or perhaps both at once) for a large percentage of farmers. Without a selective herbicide/crop system, farmers have to be careful about when and how much herbicide they apply, lest they kill their own crops. With a selective herbicide/crop system, they can just cover the field in herbicide, resulting in substantially greater application amounts. Your argument that application amounts could be specified by law leaves me unconvinced that the concern with the selective systems is misplaced.
> It's not like the threshold where herbicide levels tolerated by non-gmo crops is the same threshold that ecosystem damage occurs.
It's likely that ecosystem damage is unavoidable with agriculture, and so we prefer a minimalist approach rather than meeting some arbitrary threshold value (which could only be estimated on the basis of imperfect knowledge anyway).
> Not to mention, non-gmo crops can still have their herbicide resistance raised by selective breeding.
I know some farmers who would be very happy if the same level of herbicide resistance exhibited by GMO crops was available in a non-GMO product. I'm sure it's possible; perhaps we will see this someday.
This is an extraordinary step for YouTube to take.
But we live in extraordinary times; or at least, this is an extraordinary circumstance.
The president is unwilling to concede his loss. He baselessly claims the election was stolen, alleging massive fraud with no supporting evidence.
This claim of a stolen election is a form of government propaganda. It's a lie issued by government officials: the president, his administration, and his allies in federal and state offices. This propaganda is designed as cover for a naked power grab: either to keep the president in office, or to radicalize his base to support further restrictions on the franchise, which aids minority Republican power.
Censoring government propaganda is the right thing to do. Especially this most noxious type that is designed to attack and destroy democracy. The failure of the president and his allies to respect democracy means this responsibility must fall on other actors, such as YouTube. Or at the very least these actors should refrain from promulgating the government's destructive propaganda.
Taibbi's concern that this will lead to more radicalization is exactly that: a worry and a concern. Yet the uncritical spreading of these government lies is causing damage here and now, in real time. It's bizarre that Taibbi is so focused on theoretical downsides, yet seems to discount the damage that has been done and will multiply should the lie fester.
What about free speech?
First: free speech does not exist for itself. The purpose of free speech is truth. Free speech is a guarantor of discourse, of our ability to have free and open conversations that reveal some truth about our universe or about ourselves, and of our ability to agree or disagree about what the truth is, what is right and what is wrong, and what is true and what is false.
Propaganda and bald-faced lies are anathema to free speech. Trump's claims are baseless, self-serving lies. They shed no light, they are designed to mislead, and they hide the truth. The national discourse now centers on this big lie. In this context, lies are bad not just because they are lies, but because they narrow and constrain the discourse. I'm not talking about lies that come from misinformation or misunderstanding or mistake - the goal of free discourse is to correct these kinds of lies. Instead I'm speaking of the deliberate lie designed to achieve some goal, the knowing lie. If the goal of discourse is to uncover truth, the goal of the knowing lie is to preempt discourse, and by extension the truth itself.
Second: free speech has always been bridled. Consider holocaust denial and the flat earth hypothesis. Both have approximately the same amount of supporting evidence (none or close to zero). But only holocaust denial is banned from YouTube. Why? Because there is a nexus of holocaust denial, anti-semitic violence, white supremacist groups, and racism. Flat earthers are benign by comparison (unless they start blowing up NASA buildings). Holocaust denial is dangerous because it's part of a larger violent white nationalist movement.
Trump's lies about the election are of the same type as holocaust denial; not that they are white nationalist, but that they are part of a larger effort to subvert democracy. And they have already stoked death threats and calls to violence. Individuals and corporations (like YouTube) should actively rebut and censor these lies.
Did you read the article? There is nothing in the article in support of government oversight of private companies. Rather, the article argues that private companies should choose to do the right thing, and that consumers should take their business to companies who choose to do the right thing.
I would argue that YouTube is behaving _more american_, in that they're a private business that's doing whatever they want within their rights (taking down content they deem inappropriate).
The government isn't censoring YouTube. Any Trump supporter that believes that YouTube is over-moderating is free to create their own website from which they can spawn whatever garbage content they want.
Spoiler, though: They won't. It turns out the only reason most nutjobs on Facebook/YouTube/Reddit amounted to anything at all is because they were given a free megaphone and millions of listeners. I'm not sure why private businesses have to allow such behavior?
Until slander and libel get enforced on social media sites, which would solve most of these problems anyway (I think), why does anyone think these websites _have to_ cater to every single subculture? If I were at the helm of any reasonably sized social media company I certainly would not want my site attached with fascist propaganda.
The Palmer Report is probably an even better example of just how cynical and partisan this push by social media sites to purge claims of election fraud is than the article makes clear. Back when their anti-Trump articles started to spread across social media, I did some quick searching (y'know, just basic old-school internet literacy stuff when seeing sensational articles from a publication you haven't heard of spreading virally online) and concluded their main claim to fame seemed to be having their own Snopes tag due to the bullshit they'd published. No detectable presence on respected or independent sites beyond that, not even a Wikipedia entry. All the normal hallmarks of your classic fake news site, in the pre-2016 sense of the term.
In no time at all after they started saying what Trump's opponents wanted to hear, though, Twitter had given them a verified account, their Wikipedia entry was glowing and featured prominently when searching for them, and they'd been granted a veneer of respectability by big tech companies and were being spread by the personal but work-associated social media accounts of those who worked for big tech. There was no fearmongering about social media sites spreading disinformation that undermined democracy, even though the articles were strikingly similar to what Trump and his supporters are pushing now, down to the specific arguments even, and every bit as dubious.
It seems that there's a strong component to this argument that such moves inflame Trump supporters. The problem with this is that it has been established that a substantial portion of Trump's supporters will be inflamed regardless of whether the subject at hand corresponds to a real issue or a completely fabricated one.
Thus the people who should be concerned here are the ones who try to take reality into consideration when judging news, in which case the issue here is really no worse than the coverage and editorial lines that most news media currently hold. Looking from that perspective, YouTube's policy allows by far the broadest expression of opinions in an online property without having to go with less mainstream forums.
For Taibbi's argument to be taken seriously, he should address the problems that stem from a section of society being completely dissociated from reality.
“Cutting down the public’s ability to flip out removes one of the only real checks on the most dangerous kind of fake news, the official lie.” A perfect summary of the article.
The author mainly belittles the misinformation problem and doesn't seem to grasp that we're in real trouble here. The internet is being flooded with wrong information, and it is getting increasingly harder for people to tell truth from fiction. Something _has to_ be done; YT can't just sit and watch its platform be used to destroy democracy.
I really don't agree that this is "the latest salvo in the fight against 'domestic anti-democracy information'". Youtube's generally had a very permissive stance on information they don't think is accurate; they're taking this one, targeted action against a uniquely dangerous threat.
>One of the most critical to-do items for the American democracy movement over the next four years will be to more effectively counter domestic anti-democracy disinformation. If possible, it should be done on both the supply and demand sides. We can't ignore this issue any longer.
Hmmm...
I can think of some historical examples where governments used rhetoric oddly reminiscent of that quote, which comes from an ex-CIA officer who tried to run for president:
Germany during the 1930's
The Soviet Union during the 1920's... well, most of its existence, I guess
If you believe in science, then you believe that experts and authorities can be wrong, and likely are wrong about some things. You believe that it is important to question those beliefs. What youtube is doing is against science.
If you believe in liberty, then you know that more than one thing can be right at the same time. That there is more than one way to do things. What youtube is doing is against liberty.
If you believe in science, you also believe that things can be proven wrong, and that repeating wrong things can influence people especially if you have the power to amplify your voice with money.
If you believe in liberty, you believe in the liberty of a set of individuals (a company) to do what they damn well please with their platform.
But how can one know what is wrong and right without examining all information first? Liars and True Believers will get you every time, unless you are provided the opportunity to discover truth for yourself.
YouTube is now determining what content should be present on their platform using editorial criteria, which makes them a de facto publisher.
>If you believe in liberty, you believe in the liberty of a set of an individuals (a company) to do what they damn well please with their platform.
Not absolutely. Murder is technically an expression of liberty, but to kill is to remove another person's ability to express liberty, and is therefore a wrong use of it.
If Youtube wants it to be "their platform" and not part of the public commons, then they need to remove everything that isn't a YouTube Original. No UGC.
Science does not talk about right or wrong. It talks about things that are falsifiable.
If there is an idea that so far has not been falsified, it becomes the established scientific fact.
However, someone else may come up with a better idea. Do we prevent them from talking about it or collaborating with people? It may take some time and lots of communication and collaboration to get to the bottom of it. Years or even decades. Do we ban them from youtube in the interim?
Youtube can do what they please, but what they are doing is against liberty, against science, and it is un-american.
Are the things people are uploading false?
Then what's the problem.
Are these people trying to collaborate to solve the problem or are you using people who might be doing that as cover for a giant set of disinformation campaigns that are actively killing people right now?
The parent post took some linguistic shortcuts, but I'll mention for completeness (and in case you're genuinely confused) that "the existence of God" is not a concept the scientific method can or should be applied to, because it is not possible to falsify that claim. A better phrasing of the sentence would be "If there is a falsifiable idea that so far has not been falsified, it may be considered established scientific fact." Scientific progress occurs not just by discovering new tenets, but by disproving incorrect ones, and nobody's come up with an objective test for the presence or absence of the Divine.
Emotions flare because faith is a deeply personal issue, but from a lab protocol standpoint it's not a testable hypothesis, and thus not really worth arguing about.
If you believe in science you are likely a neo-religious fundamentalist. Science doesn't need to be believed. The process puts forward propositions. You either think the scientific method has purpose and helps us understand our reality or not.
Sometimes I wonder if flat earthers are fake news themselves.
You can stand on the shore of a body of water larger than 3 miles and watch boats sail over the horizon. No need for tools, geometry, or privilege, you can stand there and see it.
And even if they are real, their numbers are small even with the amount of propaganda they create.
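The boats-over-the-horizon observation above is easy to sanity-check with the standard horizon-distance approximation d ≈ √(2Rh). A minimal sketch (the function name and the 1.7 m eye-height figure are my own illustrative choices, not from the comment):

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius, in meters
METERS_PER_MILE = 1609.344

def horizon_distance_miles(eye_height_m: float) -> float:
    """Approximate distance to the horizon for an observer whose eyes
    are eye_height_m above the water, using d = sqrt(2 * R * h).
    The approximation is valid when h is tiny compared to R."""
    return math.sqrt(2 * EARTH_RADIUS_M * eye_height_m) / METERS_PER_MILE

# For a person standing at the shore with eyes ~1.7 m above the water,
# the horizon is roughly 2.9 miles away -- consistent with boat hulls
# starting to drop out of sight past about 3 miles.
print(round(horizon_distance_miles(1.7), 1))
```

So the "larger than 3 miles" figure in the comment checks out for a standing observer; climbing a dune or a pier pushes the horizon further out, which is why the effect is easiest to see right at the waterline.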
This is a solved problem: don't BAN conspiracy theories and dangerous memes, but FLAG them with a notice citing links to facts.
I've enjoyed the somewhat balanced weekly covid discussions of youtuber Dr Chris Martenson / PeakProsperity. He generally backs up his opinions with links to scientific studies and data, but has been censored by YouTube for controversially mentioning that "HCQ has some efficacy as a prophylactic before exposure to Covid". (He has also covered topics such as less severe Covid outcomes for those taking Vitamin D, for which there now seems to be a lot of evidence... yet why are we not seeing governments recommend and supply it more widely?)
Are we still able to rationally discuss on YouTube topics such as: did SARS-CoV-2/Covid originate somewhere other than the seafood market in Wuhan [there are a handful of data points suggesting earlier cases in distant locales]?
Are we allowed to make videos discussing the cause of the collapse of WTC building 7 on 9/11? Engineering professor Leroy Hulsey's structural study at the University of Alaska Fairbanks argues that the canonical explanation of fire damage leading to failure of a weak point followed by cascading collapse is 'unlikely'.
Will we be able to share videos that say fracking for oil produces so much extra methane as to render the proponents criminally liable for the acceleration of global warming ?
Will we be able to discuss on youtube.com whether google.com should pay more tax than they currently do? What body decides this?
The very same people complaining about this also complained when platforms did exactly what you suggested. Even attempting to "fact check" is considered censorship.
This is very, very, very far from a solved problem. It is not at all clear that flagging posts does anything to limit the spread of false information. Even when we know that the information we are reading is false, we can still end up believing it, especially if we are distracted or under time pressure [1]. Yesterday, the Lawfare podcast did an interview with a researcher studying misinformation, disinformation, and mal-information which I suggest people check out if they want to learn more about the issue [2].
> "Even when we know that the information we are reading is false, we can still end up believing it"
I'm not saying human perception is the 'solved problem' - if I know something is false, can I also believe it? I actually think the human mind is capable of believing both the proposition "X is true" and its logical negation "X is not true". E.g. "birds are dinosaurs" is both true and not true, imnsho.
I want people to believe things that are true, but this would require much wider access to higher quality science education, as well as access to good information, protected freedom of speech ...
My feeling is the Twitter model - where outrageous untruths are tagged with a notice - is a good start, and much better than the path of outright censorship that youtube seems to be choosing. I'd like to see a kind of wikipedia crowd-sourced model to fact-check social media - flag a post/comment with "controversial" / "established dogma" / "noncontroversial" / "conspiracy theory" : ] - similar to how we have pseudo-public reviews of books on Amazon and goodreads. Perhaps this can be gamed, but we still have wikipedia and much of it _is_ accurate, so there is hope.