We Can't Have Serious Discussions About Section 230 If People Misrepresent It (techdirt.com)
32 points by hn_acker on March 2, 2024 | 28 comments


I think the argument in favor of the Florida and Texas laws is that the various social media sites are less platforms and more common carriers.

I don't buy it, but it's not totally crazy. When I'm DM'ing someone, it's hard to say X isn't acting like a common carrier. It's the same tool, just used in a different way when it's a platform.

When it's a platform, the moderation may or may not be legal depending on what the Supreme Court says, but honestly, everyone wants that moderation. It's a pipe dream to think that people aren't going to get it one way or another.

Common carrier traffic is moderated (see email providers' spam filters), but everyone knows the rules about what gets allowed and what doesn't. If common carrier traffic were moderated like platform traffic, everyone would be horrified, and so it's not.


The title was too long. The original title was:

> We Can’t Have Serious Discussions About Section 230 If People Keep Misrepresenting It


> But the issue with these laws was whether or not it constricted the platforms’ ability to express themselves in the way in which they moderated. That is, the editorial decisions that were being made expressing “this is what type of community we enable” are a form of public expression that the Florida & Texas laws seek to stifle.

This is taking the platforms' arguments as true when it doesn't need to. Regardless of whether moderation is "speech", it's something Congress wanted them to do, because the alternative is that they don't moderate in order to avoid liability, and then everything is full of spam.

And "corporate moderation is protected speech" seems like a bad precedent. Even if you like it when it's Facebook, what happens when it's Comcast?


We have common carrier laws to cover cases like Comcast. Section 230 explicitly says social media platforms are not common carriers.

And the problem with Comcast is not necessarily that they will make speech decisions we don't like (an inherent part of free speech). The problem is that they're a monopoly and people don't have other options. So we should fix that problem instead of further limiting 1A rights.


> We have common carrier laws to cover cases like Comcast.

Not if you set a new precedent that corporate moderation is protected speech under the First Amendment and invalidate those laws.

> The problem is that they're a monopoly and people don't have other options.

But this is the same problem with large platforms. They have a dominant market position in a market with a strong network effect.


>> We have common carrier laws to cover cases like Comcast.

> Not if you set a new precedent that corporate moderation is protected speech under the First Amendment and invalidate those laws.

Why would treating a social media site's moderation as protected speech automatically make the social media site a common carrier?


> Why would treating a social media site's moderation as protected speech automatically make the social media site a common carrier?

First of all, you're asking the question backwards. If corporate moderation is protected speech then why wouldn't it be protected speech for common carriers?

The only plausible answer is that they have a dominant market position, but then so do the large social media platforms, which are the only things these laws apply to.


The dominant market position of someone who has last-mile cables to every household in an area is much stickier and more absolute than the dominant position of any social networking site.

Even if it’s hard to stop using e.g. Facebook, it’s easy to access 1000000 other sites too, and publish your own. That’s not the case with ISPs - for the most part you can have only one.


> Even if it’s hard to stop using e.g. Facebook, it’s easy to access 1000000 other sites too, and publish your own.

There aren't 1000000 major social media sites. Sites with fifty users in total are irrelevant and can't be used for the same purpose, and for the same reason most people can't feasibly start their own.

> That’s not the case with ISPs - for the most part you can have only one.

This is also not true. You can generally choose between the "cable company" and the "phone company" in addition to satellite providers like Starlink and various cellular networks. You have about as much choice in ISPs as you have in social media sites, and in both cases it's not much.


> The only plausible answer is that they have a dominant market position

Not necessarily. If the internet were as essential to average daily life today in the US as electricity (which I am not declaring true or false in this particular sentence), then it would be reasonable to treat ISPs but not websites as common carriers no matter how competitive the internet access market is.

Another answer that doesn't rely on the previous one would be that it is not reasonable to expect a social media site to deliver a user's speech as reliably as one would expect an ISP to deliver the speech. I got this idea from an excerpt on Wikipedia [1]:

> Cases have also established limitations to the common carrier designation. In a case concerning a hot air balloon, Grotheer v. Escape Adventures, Inc., the court affirmed a hot air balloon was not a common carrier, holding the key inquiry in determining whether or not a transporter can be classified as a common carrier is whether passengers expect the transportation to be safe because the operator is reasonably capable of controlling the risk of injury.

If you're referring to common carriers that transport physical goods or people (as cargo and passenger railroads do), then a different answer could be that users can sometimes take priority over physical property owned by someone else, whereas treating websites as common carriers would mean giving users priority over property that the owner also uses expressively (e.g. by setting terms of service and moderating) to signal what kind of website it is. In other words, outweighing property rights alone is easier than outweighing property rights and speech rights combined.

Moreover, I'm guessing that whether the public associates an ISP with third-party speech depends on whether the speech is protected. Suppose that a social media site is notorious for users who speak in favor of copyright infringement (copyright infringement being unprotected speech). The public might associate the ISPs those users use with copyright infringement (anecdote: Reddit, Frontier, and copyright infringement [2]). But what about protected speech? A social media site is notorious for users who speak in favor of hate crimes. Will the public be comparably likely to associate the ISPs those users use with hate speech?

But nothing I wrote above is really an answer, since I've been speculating for the most part. I'm not very knowledgeable about the legal theory behind a common carrier designation, so here is something closer to an answer from Mike Masnick [3]:

> As you look over time, you’ll notice a few important common traits in all historical common carriers:

> 1. Delivering something (people, cargo, data) from point A to point B

> 2. Offering a commoditized service (often involving a natural monopoly provider)

> In some ways, point (2) is a function of point (1). The delivery from point A to point B is the key point here. Railroads, telegraphs, telephone systems are all in that simple business — taking people, cargo, data (voice) from point A to point B — and then having no further ongoing relationship with you.

> That’s just not the case for social media. Social media, from the very beginning, was about hosting content that you put up. It’s not transient, it’s perpetual. That, alone, makes a huge difference, especially with regards to the 1st Amendment’s freedom of association. It’s one thing to say you have to transmit someone’s speech from here to there and then have no more to do with it, but it’s something else entirely to say “you must host this person’s speech forever.”

> Second, social media is, in no way, a commodified service. Facebook is a very different service from Twitter, as it is from YouTube, as it is from TikTok, as it is from Reddit. They’re not interchangeable, nor are they natural monopolies, in which massive capital outlays are required upfront to build redundant architecture. New social networks can be set up without having to install massive infrastructure, and they can be extremely differentiated from every other social network. That’s not true of traditional common carriers. Getting from New York to Boston by train is getting from New York to Boston by train.

[1] https://en.wikipedia.org/wiki/Common_carrier#General

[2] https://arstechnica.com/tech-policy/2024/02/reddit-beats-fil...

[3] https://www.techdirt.com/2022/02/25/why-it-makes-no-sense-to...


He's grasping at straws trying to distinguish them:

> It’s one thing to say you have to transmit someone’s speech from here to there and then have no more to do with it, but it’s something else entirely to say “you must host this person’s speech forever.”

This is exactly what we ask of phone companies. As long as the customer has an account, the phone company routes their calls, forever, or until they cease to be a phone company.

Also notice how disingenuous this is. What it's meant to evoke is the unreasonable notion that they're required to keep operating their service for your benefit for free forever, when what we're really talking about is a non-discrimination rule. If they're continuing to host everybody else's 15 year old posts, they can continue to host yours. If they want to shut down their site, or delete everything more than 10 years old from everybody, they can do that too. The issue is when they want to delete you and not somebody else, solely because they don't like your opinions, or because their algorithm screws over innocent people and they don't care to fix it or have any responsibility for it.

> Facebook is a very different service from Twitter, as it is from YouTube, as it is from TikTok, as it is from Reddit.

Air travel is a very different service from trains, as it is from cars, as it is from ships, as it is from spacecraft. What does that have to do with whether planes or trains or taxis are common carriers?

If anything this is the problem. If they were all completely fungible services and you actually could just start your own with no loss of utility, then it wouldn't matter what they do, because you would have so many viable alternatives. The reason it matters is that you can't feasibly reach your Facebook audience via TikTok, nor can you stand up your own Facebook instance if you have a dispute with Meta and use it to communicate with Meta's users.

Make it into that kind of fungible commodity market and then there is no problem with anyone moderating however they like -- because if you don't like someone's moderation you could swap them out without losing access to the people in the same network.


>> It’s one thing to say you have to transmit someone’s speech from here to there and then have no more to do with it, but it’s something else entirely to say “you must host this person’s speech forever.”

> This is exactly what we ask of phone companies. As long as the customer has an account, the phone company routes their calls, forever, or until they cease to be a phone company.

The phone company hosts a phone call for you and someone else. After you hang up, the phone call is over. Unless the contract promised as much, you have no expectation of later being able to ask the phone company to give you a recording of the call. The person on the other end has no expectation of later being able to ask the phone company to tell them what you said. The call is one and done. A social media post? Not so. You post, and as long as no automation removes it and no person manually removes it, the post stays and continues getting served.

As long as a customer has an active contract with the phone company, the phone company must continue to provide calls unless someone triggers an exit condition in the contract, in which case the phone company would be able to remove the account. But your average phone contract will not guarantee you a service for hosting an individual phone call forever.

Currently, any TOS you'll find from the big social media sites will allow the sites to remove any post or user for any reason or no reason at all. That's an exit condition that's always active. Everyone gets the same contract. Currently, the contract lasts 0 seconds, and possibly longer if the social media site decides to keep your posts up, doesn't decide to take them down, or forgets to take them down. If you turn certain social media companies into common carriers, then where would you draw the line relative to Facebook and a Mastodon instance? Twitter and a social media site for game discussions? Truth Social and Bluesky? Where does Hacker News fit into this? Would you turn all social media sites into common carriers, and how would you define a social media site? Keep in mind, a phone company doesn't stop being a common carrier if the phone company loses market share.

Mike Masnick's "fungibility" argument is unclear at best, but his point that transportation service is transient still stands [1]:

> Social media, from the very beginning, was about hosting content that you put up. It’s not transient, it’s perpetual. That, alone, makes a huge difference, especially with regards to the 1st Amendment’s freedom of association. It’s one thing to say you have to transmit someone’s speech from here to there and then have no more to do with it, but it’s something else entirely to say “you must host this person’s speech forever.”

...

> It is hosting content perpetually, not merely transporting data from one point to another in a transient fashion.

The average train company is a common carrier. You book a trip from station A to station B. You take the trip, and that's it. You don't get to take another trip with the same booking unless the contract said so.

Assume that ISPs are common carriers (which is not true because the FCC under Ajit Pai decided that ISPs are not common carriers; however, making ISPs common carriers is easier to justify than making social media websites common carriers). An ISP transmitting a website to you is not obligated to host the website, because transmitting bits doesn't functionally require storing them beforehand and because a large chunk of the responsibility of making sure that the website data to be transmitted exists right at this moment falls on the website owner.

Now assume that a particular social media website is a common carrier. You haven't made the case that social media sites must store posts. The social media site must allow anyone to sign up. The social media site must transmit any posts whose contents and audience have been specified, but it would be your responsibility to make sure that the social media site receives the contents to put in the posts every time someone wants to receive those posts. For example, you set a post to followers-only and send it. The post will be sent SMS-style to everyone who is currently following you. If you want followers to continually request a post, you are responsible for setting up an automated system to give the social media website the right post content i.e. you would have to host the posts to be transmitted. If the website offers to automatically read a passive folder of posts you want to repeatedly transmit, then the website has to offer that capability to everyone. However, the website can discriminate in deciding whose posts to host. The website might offer to host your posts, but doesn't have to do so for some amount of time unless the contract specifically says so, and doesn't have to make an identical offer to other users.
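
To make that transmit-only model concrete, here's a minimal sketch (hypothetical names throughout; this only illustrates the division of responsibility described above, not any real platform's API): the "carrier" stores only a pointer to the author's self-hosted content and fetches the post body fresh on every delivery, so the perpetual-hosting burden never shifts to the carrier.

    import urllib.request

    # Hypothetical transmit-only "carrier": it keeps a pointer to the
    # author's self-hosted content, never a copy of the content itself.
    class TransmitOnlyCarrier:
        def __init__(self):
            # username -> URL the author is responsible for keeping online
            self.pointers = {}

        def register(self, username, content_url):
            # Non-discrimination: anyone may register a pointer, but the
            # carrier does not store or retain the post body.
            self.pointers[username] = content_url

        def deliver(self, username):
            # Every delivery fetches the body fresh from the author's
            # server, SMS-style. If the author stops hosting, there is
            # simply nothing left to transmit.
            url = self.pointers.get(username)
            if url is None:
                return None
            try:
                with urllib.request.urlopen(url, timeout=5) as resp:
                    return resp.read().decode("utf-8", errors="replace")
            except OSError:
                return None  # author's server is down; hosting was their job

Hosting, the optional extra on top of this, would then be the layer where (as argued above) the site could still pick and choose.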

[1] https://www.techdirt.com/2022/02/25/why-it-makes-no-sense-to...


> The discussion was mostly about Thomas and Gorsuch’s confusion over 230

I find this distressing. If anyone should know what the law says, it should be a Supreme Court Justice. I know they have clerks and aides to help them with all that; knowing the whole law and all its variations and caveats is a monumental, almost inhuman task.

A Supreme Court justice shouldn’t mix up parts of a law. They should know what it says, and what it does not say.


If a case has reached the Supreme Court, it means that many judges have been confused along the way. And that says more about the law than about the judges.

It rather seems that the author has very strong convictions and bias.


I think if a case reaches the Supreme Court it really only means that the person prosecuting it is able to bankroll appeals far enough to get there - they didn’t get the answer they wanted. Beyond that, the court can either hear it out or push it back down.


There are restrictions on appeals, as far as I understand. You can’t just frivolously appeal anything until it reaches the Supreme Court.

And if a law is straightforward, it is a waste of money and time anyway.


They know what it says, Masnick is being disingenuous.

Section 230(c)(1) says that platforms aren't liable "as the publisher or speaker" for third party speech even if they moderate. The platforms are arguing that moderation is first party speech, implying that doesn't apply.

Masnick is presumably referring to 230(c)(2), which protects platforms against liability for moderation decisions. But the title of that subsection is "Civil liability" and parts of these laws impose administrative/criminal penalties. Which is presumably why the platforms are trying to make a First Amendment argument to begin with instead of just relying on Section 230.


> They know what it says, Masnick is being disingenuous.

> Section 230(c)(1) says that platforms aren't liable "as the publisher or speaker" for third party speech even if they moderate. The platforms are arguing that moderation is first party speech, implying that doesn't apply.

You are mixing up two kinds of speech. In the article, Mike Masnick is arguing that the user's post is third-party speech, no matter how much the platform boosts or downranks the post. The platform's decision to boost, downrank, or remove the post is first-party speech. The platform's terms of service are first-party speech.

The Florida and Texas laws unnecessarily restrict the platforms' moderation decisions, which are first-party speech protected by the First Amendment, on the basis of the contents of third-party speech, i.e. holding the platforms liable as the publisher for third-party speech, i.e. violating Section 230.

Strict scrutiny requires that a law serving some state interest use the method least restrictive of constitutional rights in achieving that interest [1]. Allowing social media sites to be held liable for user posts would fail strict scrutiny. There already is an available remedy for users' third-party speech which does not restrict platforms' first-party right to moderate: putting liability only on the users who authored the speech.

[1] https://en.wikipedia.org/wiki/Strict_scrutiny#Applicability


> The Florida and Texas laws unnecessarily restrict the platforms' moderation decisions, which are first-party speech protected by the First Amendment, on the basis of the contents of third-party speech, i.e. holding the platforms liable as the publisher for third-party speech, i.e. violating Section 230.

They aren't incurring liability for hosting third party speech. Even if they left 100% of everything up they wouldn't be violating these laws. What the laws are trying to do is impose penalties for the first party act of censorship, whereas what you get from 230(c)(1) is protection for the first party act of not removing third party speech.

> There already is an available remedy for users' third-party speech which does not restrict platforms' first-party right to moderate: putting liability only on the users who authored the speech.

But there isn't a remedy for the third-party speaker whose posts are capriciously removed from a dominant platform, which is what the laws are trying to address.


> What the laws are trying to do is impose penalties for the first party act of censorship

Which is the part of the lawsuit which the First Amendment addresses already. The First Amendment prohibits Congress from passing laws restricting speech [1]:

> Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the government for a redress of grievances.

It does not require that Congress pass a law promoting speech, but it can restrict Congress from removing such a law afterward. Through incorporation under the Fourteenth Amendment, the First Amendment's restrictions on Congress apply to the federal government as a whole and to state governments [2]. The First Amendment does not restrict voluntary moderation decisions by websites. The First Amendment generally restricts speech compelled by governments (including in lawsuits, but usually not when a contract is involved), since freedom of speech includes freedom of when to speak and when not to speak. The First Amendment also includes freedom to associate.

That Section 230(c)(2)(A) mentions liability for removing objectionable content does nothing to lessen the First Amendment's protections. Section 230(c)(2)(A) reads [3]:

> (c) Protection for “Good Samaritan” blocking and screening of offensive material

> (2) Civil liability

> No provider or user of an interactive computer service shall be held liable on account of—

> (A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or

The key idea is: "that the provider or user considers to be... objectionable". "Objectionable" can mean that the provider doesn't want to keep hosting, for any reason. Social media sites can make voluntary, unilateral decisions to remove speech. The First Amendment already protects that; Section 230 just offers an early dismissal opportunity should anyone sue over a social media site's voluntary, unilateral "censorship".

[1] https://www.law.cornell.edu/constitution/first_amendment

[2] https://en.wikipedia.org/wiki/Everson_v._Board_of_Education

[3] https://www.law.cornell.edu/uscode/text/47/230


> Which is the part of the lawsuit which the First Amendment addresses already.

But then why is anybody confused why the Justices are pointing out that Section 230(c)(1) doesn't have anything to say about first party speech?

> The First Amendment does not restrict voluntary moderation decisions by websites.

The First Amendment has limited applicability to commercial activity. For example, the law can prohibit phone companies from denying phone service to customers on the basis of their political views, even if the phone company disagrees with them.

> The key idea is: "that the provider or user considers to be... objectionable".

The key idea is the heading of that section, which says "Civil liability", implying that it doesn't apply to government-imposed penalties.

Let's consider a different context here. Suppose you have a state law imposing penalties on companies that discriminate on the basis of various protected classes. Some site starts banning everyone in one of those protected classes because they find them "objectionable". The state AG wants to prosecute them for it. Is that law unenforceable as a result of Section 230 or the First Amendment?


> But then why is anybody confused why the Justices are pointing out that Section 230(c)(1) doesn't have anything to say about first party speech?

I'm not confused, but I can guess why someone would be. There's historical context to the oral arguments over the Texas and Florida "anti-censorship" laws. Google is a member of NetChoice. In Gonzalez v. Google LLC [1], Google argued in the lower courts that Section 230(c)(1) made Google immune to liability for YouTube's automated recommendations of terrorist content, because the motivation for the liability lay in the content of the user-uploaded videos, i.e. third-party speech. The Supreme Court didn't involve Section 230 in the ruling, but did discuss Google's Section 230 argument in the oral arguments.

The Texas and Florida laws in the case would not make platforms liable for third-party speech. But in the oral arguments for the Texas and Florida cases (Moody v. NetChoice and NetChoice v. Paxton), Justices Alito, Thomas, and Gorsuch mistakenly thought that Google's past argument about liability for users' speech (not the platform's speech) contradicted NetChoice's current argument about the First Amendment right to moderate (the platform's speech) and the Section 230(c)(2)(A) right to remove anything the site considers "objectionable". For example, here's Justice Alito in NetChoice v. Paxton [2]:

> Either it's your message or it's not your message. I don't understand how it can be both. It's your -- it's your message when you want to escape state regulation, but it's not your message when you want to escape liability under state tort law.

Justice Alito was conflating "message" meaning the content of the posts (authored by the users) and "message" meaning the moderation actions of the social media site (performed by the company, modifying the expression of what kind of website the website is). I wrote more about the distinction in a different comment ( https://news.ycombinator.com/item?id=39576076 ); that comment pertains to both the Justice's confusion and the main topic of the article.

The main topic of the featured article is not the Florida and Texas court cases. The article is about the misunderstandings that Steven Brill has about what Section 230 does, whom Section 230 applies to, and whether Section 230 is necessary in the context of liability for harmful user posts.

> The First Amendment has limited applicability to commercial activity. For example, the law can prohibit phone companies from denying phone service to customers on the basis of their political views, even if the phone company disagrees with them.

The Florida and Texas laws have to pass strict scrutiny [3]. Intermediate scrutiny doesn't apply [4], and the laws would fail intermediate scrutiny anyway:

> Intermediate scrutiny applies to regulation that does not directly target speech but has a substantial impact on a particular message. It applies to time, place, and manner restrictions on speech, for example, with the additional requirement of "adequate alternative channels of communication."

...

> Intermediate scrutiny also applies to regulation of commercial speech, as long as the state interests in regulating relate to fair bargaining. Regulations for other reasons, such as protection of children, are subject to strict scrutiny.

Banning all post removals is not a "time, place, and manner" restriction. It does not relate to "fair bargaining" either. Here is strict scrutiny [3]:

> In U.S. constitutional law, when a law infringes upon a fundamental constitutional right, the court may apply the strict scrutiny standard. Strict scrutiny holds the challenged law as presumptively invalid unless the government can demonstrate that the law or regulation is necessary to achieve a "compelling state interest". The government must also demonstrate that the law is "narrowly tailored" to achieve that compelling purpose, and that it uses the "least restrictive means" to achieve that purpose. Failure to meet this standard will result in striking the law as unconstitutional.

The least restrictive means (least restrictive of moderation rights) for the government to prevent censorship is not forcing one platform to allow all speech, but waiting for multiple websites with different speech rules to show up (Truth Social, AT Protocol instances, ActivityPub instances, and personal websites) so that the sum of all platforms will allow all speech.

> The key idea is the heading of that section, which says "Civil liability", implying that it doesn't apply to government-imposed penalties.

How so? Courts impose penalties. That, and the bills are enforced by lawsuits, which are civil.

[1] https://en.wikipedia.org/wiki/Gonzalez_v._Google_LLC

[2] https://www.supremecourt.gov/oral_arguments/argument_transcr...

[3] https://en.wikipedia.org/wiki/Strict_scrutiny

[4] https://en.wikipedia.org/wiki/Intermediate_scrutiny#Free_spe...


"The blame should always go to the party who violated the law in creating the content."

This, this, a thousand times this.


Who is responsible for the output of generative LLMs? Is it the person running the model, the person or group or company which trained the model, the creator of the initial parameters, the person who regurgitates what the model produced?


Precedent in the US is very much on the side of the person, not the method.

Whether it's the Xerox photocopier, the Sony Betamax, or (so far) generative AI, judges have ruled that the fault in using technology to infringe copyright doesn't lie in the technology, but in the way the technology is used by a person.

Generative AI isn't (yet) a person, so it is neither capable of creating copyrightable works nor of infringing copyright in itself.

I wrote a research paper on it, if you're interested.

https://www.zipbangwow.com/intellectual-impropriety/


The one who posted it?


I feel like a better solution is to have Congress define where speech is protected online. "Publicly available social media websites count as public spaces". Otherwise we go down this route where "Comcast's free speech is being infringed by not allowing it to censor its users"


This comment is a modified version of a comment I made elsewhere after resolving my own previous confusion (https://news.ycombinator.com/item?id=39575873).

There are two different kinds of speech at issue in the Supreme Court case.

1. The first kind of speech is the posts themselves, the contents of which were written by users. For the social media site, this first kind is third-party speech. For the users, this first kind is first-party speech.

2. The second kind of speech is what the social media website does with the post (boosts, downranks, deletes, marks with tags, bans the user of, etc.). For the company, this second kind is first-party speech. (Less relevantly, the second kind of speech is also anything a worker at the social media company writes on the site in official representation of the social media company's views.)

Section 230 declares that the social media site cannot be held liable for the first kind of speech. Social media sites can still be held liable for the second kind of speech. Holding the social media company liable for boosting a harmful but First-Amendment-protected post would violate the social media company's First Amendment right to moderate. The right to moderate comes from the First Amendment, not from Section 230. My rule of thumb (not always applicable) is that harm which wouldn't have happened if the post content had been different is actually harm caused by the first kind of speech, and therefore liability should rest solely on the author of the post, even if the social media site boosted the post.

Suppose that I make a post about eating disorders on social media. (Posts discussing eating disorders are protected speech.) The social media site boosts my post. Some kid sees it and later develops an eating disorder (correlation, with the question of causation to be decided in court). The kid's parent (or caretaker) sues the social media site and argues that the social media site should be liable because the social media site boosted the post.

Scenario 1. If Section 230 didn't exist, then the social media company would have to go through the entire court process. The social media site argues that "First, social media websites have a First Amendment right to moderate. Second, our moderators could not be expected to foresee that a mere discussion of eating disorders would cause more harm than help. Third, the liability should fall on the user who posted the speech." (Depending on the scenario there might also be a fourth argument such as "Our algorithm made the recommendation. Since we didn't knowingly boost a harmful post then we cannot be held liable for the post." This is the "knowledge" issue that applies to just about any third-party liability case.) The social media company loses a lot of money, but the court rules that the social media company was not liable for the post.

Scenario 2. Since Section 230 does exist, then the social media company can argue that "This lawsuit attempts to hold someone online liable for distributing speech made by someone else. Section 230 says that this kind of third-party liability cannot exist [except for the exceptions: federal crimes, 'intellectual property', and electronic privacy]." (In this scenario, the "knowledge" issue is irrelevant and doesn't need to be brought up in court.) The court declares that the social media company cannot be held liable for the post, and dismisses the case early.

Either way, the social media company would not be liable. But Section 230 is still necessary to prevent social media companies from being overwhelmed with having to go through entire court cases, especially if either party brings up the "knowledge" issue. Regardless, the parent could sue me, since I authored the post. I would be the only appropriate party, if any, for the parent to sue.

(There's no guarantee that the court would actually find meaningful harm or liability from the particular post. And obviously, there could be no third-party liability on the social media website if the court in either scenario were to find no first-party liability on me.)



