
1Password does not support passkey export/import between providers. It works on multiple OSes, but that's not what ProtonPass is talking about here: they're talking about moving passkeys between services, i.e., moving passkeys from 1Password into a different password manager. 1Password does not have a way to do that.


It's not clear to me that "platform-authenticator-bound" passkeys are a real thing. I've actually talked to someone behind the upcoming portability standard, and their standard allows for arbitrary export (encrypted, but in a form where the user can control the keys).

I did not get the implication from them that passkeys are meant to be "platform-bound"; I think that's a security standard that's mostly made up. The impression they gave me was that user control over export to arbitrary clients is a strongly supported use case in the upcoming spec.

Even Apple keys are not platform-bound today in the way you suggest (i.e., bound to your Apple ID) -- Apple passkeys can be shared with other users via AirDrop; they're not bound to a singular Apple ID.

In practice, while passkeys are a meaningful reduction in phishing risk, they can be compromised, leaked, or phished, even in their current state -- and certainly will be phishable once portability standards land. That doesn't make them useless! But it does mean that we should stop pretending that the current lack of export is a pure security measure. The entire promise of portability is the promise that companies like Apple are not trying to create platform-bound keys.

We're at least meant to believe that these companies want passkeys to be truly portable between platforms. If "platform-authenticator-bound" is a thing, I suspect you are not explaining it correctly? Or maybe I'm misunderstanding what you mean -- but passkeys are not (intended to be) restricted to individual platforms.


I don't think this subthread is about the US, but just as a sidenote on the US, Project 2025 directly calls for porn to be made illegal and for attacking Internet providers and companies like Cloudflare that enable its access (https://static.project2025.org/2025_MandateForLeadership_FOR...):

> Pornography, manifested today in the omnipresent propagation of transgender ideology and sexualization of children, for instance, is not a political Gordian knot inextricably binding up disparate claims about free speech, property rights, sexual liberation, and child welfare. It has no claim to First Amendment protection. Its purveyors are child predators and misogynistic exploiters of women. Their product is as addictive as any illicit drug and as psychologically destructive as any crime. Pornography should be outlawed. The people who produce and distribute it should be imprisoned. Educators and public librarians who purvey it should be classed as registered sex offenders. And telecommunications and technology firms that facilitate its spread should be shuttered.

Even in a Conservative-majority court this kind of ban would likely be ruled unconstitutional very quickly, but we should be clear when talking about this kind of thing where it is that a substantial number of Conservative foundations would like to go (https://www.project2025.org/about/advisory-board/). There is a non-trivial Conservative movement to ban porn.


> Good thing these laws specifically ban them from storing that information that they don't want to store.

Well... they ban companies. They don't ban government agencies from retaining that data if they get access to it. But set that aside and assume for a second that they do.

The problem is that the laws kind of contradict themselves: "don't collect information that compromises privacy" and also "do this in a way that basically could only work if you collect information that compromises privacy."

Most of the bills I've read ban companies from saving this data, but require them to collect it -- presumably in some way that's auditable? They also provide no real mechanisms for guaranteeing that data won't be stored or will be transmitted securely between services. In Louisiana's bill (https://legis.la.gov/legis/ViewDocument.aspx?d=1289498) there is no penalty for retaining data other than that users can sue for "damages" -- but historically, proving damages from retained data tends to be difficult.

Of course the laws do not clarify how age verification is supposed to work under these restrictions, just that documents will be verified somehow: Texas's "example" of age verification explicitly refers to digital identification as information "stored on a digital network" (https://capitol.texas.gov/tlodocs/88R/billtext/pdf/HB01181F....). But sure, also make sure you don't store anything! /s

Presumably this all means that the information should be stored and transmitted until identification is done, then immediately deleted, leaving no record of that identification other than that it happened (which hopefully will be sufficient evidence if the government ever accuses you of using a faulty verification method). But eventual deletion doesn't mean the information isn't still getting temporarily stored, or that it's not passing between multiple hands, any of which could have rogue employees or leaks, or could be storing "anonymized" data that later turns out to be identifiable.

----

One might argue that since the majority of these laws are proposed and backed by anti-porn organizations, the contradictions may actually be advantageous to the goals of the backers -- if the law is functionally impossible to comply with and companies are forced to leave the state, well... that's exactly what these groups want anyway. Texas's AG is up-front about being pleased that porn companies are blocking the state. And despite the regular claim that this isn't about restricting porn generally, the vast majority of these bills have ties to religiously conservative groups whose public position is that porn should be banned for everyone.

But that's all beside the point: the point is it's just outright false to say that these laws don't require at least the transmission and collection of this data -- they just tack on "don't store it for too long, delete it afterwards." But that doesn't really mean anything: regular identity transmission over the web carries security and privacy risks even if companies could be trusted to reliably delete this data -- which in many cases, they can't. Advocates of these bills ignore that security researchers have an issue with collection and transmission of sensitive data in addition to storage, and so advocates point to narrow, non-specific language about long-term retention as if that solves all of the issues. It doesn't.

And to the extent that these laws include any real teeth like actual fines for data leakage, they don't really explain how companies can safely avoid those fines while still proving that their users aren't minors. It's a no-win situation.

Note as well, I'm leaving off the accusation I've seen online that this identification would need to be provided per-login/access, because I don't see any language in the bills that would suggest that to me. But of course if that were true, there would be obvious security risks from users providing that information repeatedly as part of regular access.

----

> That's a social media law, not a porn law

The law in question explicitly exempts art-sharing sites unless they specifically allow porn (https://le.utah.gov/xcode/Title13/Chapter63/C13-63_202305032...):

> (H) a professional creative network for showcasing and discovering artistic content, if the content is required to be non-pornographic;

Notably, the exemption still applies to sites hosting other content harmful to minors -- gore, hate speech, etc. So your creative gallery site doesn't count as social media and isn't subject to these child-restrictions if you allow Nazi emblems or violent imagery; it's only a problem if you allow porn.

I will concede on this: reading the exact language of "change or bypass restrictions on access" as a VPN ban is a straightforward misreading of the bill -- bad on the article for that. But I think it's being a little coy to act like there's no overlap between Chapter 63 and SB 287. Chapter 63 clearly views pornographic content differently than it views other content that is similarly harmful to minors.

SB 287 includes language that seems (to me) to explicitly protect VPN and network providers from liability, but it is not clear whether that kind of language will continue to appear in future bills: the majority of these bills so far have been largely copy-paste templates of each other, and in other Internet restriction debates states have expressed interest in going after actors that they deem to be "enabling" illegal actions. Notably, the copy-paste template that most states have been using doesn't protect sites that allow VPN access; it just protects the VPNs themselves. It's not clear to me from the text of the bills that states wouldn't view a porn site's refusal to block VPN connections as a violation of the law if they were ever interested in pushing enforcement past state lines -- which, again, states have expressed interest in doing in other Internet speech debates.

The article's example of VPN restrictions is misleading and misrepresents the bills it's talking about -- but the general concern that VPN restrictions might come in the future (most likely through targeting companies that do not block VPNs from accessing their services) is a real concern, just poorly presented in this article.


Why would they need to be auditable? That's not in any of the laws I've read, and in fact as you note, the law makes that impossible. If the law contradicts a requirement that you made up, and does not itself contain that requirement, then why would you presume that it has that requirement?

There's literally no reason to have e.g. a knowledge-based auth or signed ID request hit disk. It doesn't need to be saved for a "short time". It doesn't need to be saved on permanent storage at all.

"Let's assume for a moment that the law says the opposite of what it actually says". But it doesn't.

It's easy for the government to investigate whether you check IDs: open the site and see if you request ID information. Present fake info and see if you accept it. Just like they do in person.


> If the law contradicts a requirement that you made up, and does not itself contain that requirement, then why would you presume that it has that requirement?

First off, it's always good to be clear that there are multiple laws here, even if many of them are templates of each other; there's no single "the law". Secondly, this is hiding behind ambiguity in many of these laws' language; it's easy to claim that a law doesn't specifically require that companies retain information about their efforts, but I guarantee you that in any court case about this, requests for that information would come up.

It is painfully naive to assume that any company would feel safe implementing a legally required system that does not provide them with any evidence to prove that their system works or has worked in the past. The ambiguity about what many of these bills mean when they call for a "reasonable method" of identity verification is exactly the kind of contradicting language that I'm talking about above. "We didn't ask you to do X, we just put you in a situation where not doing X would be extremely dangerous."

I would argue that a State going to a company and saying, "do something 'reasonable'" with no legal guarantee or precedent about what will and won't be reasonable, and then additionally adding restrictions that make it practically impossible for any existing ID verification system online that I'm aware of to fit that requirement -- I would argue that is tantamount to an attempt to ban porn. It's a system that can't really be safely complied with. Of course companies being able to provide documentation and evidence of their prior verifications is a practical requirement for them operating in that kind of environment.

> There's literally no reason to have e.g. a knowledge based auth or signed id request hit disk. It doesn't need to be saved for a "short time". It doesn't need to be saved on permanent storage at all.

I don't see any indication in the laws I've read that this would be sufficient; where are you getting this idea from? In fact (I'll remind you), Texas's law explicitly refers to digital identification as something that gets stored and accessed as proof of identity. The bill's own language does not support the idea that identification would be completely transient and instantaneous.

So it is completely reasonable for critics to question these requirements given that nothing in the law would prevent the government from making a case that completely transient identification is insufficient. And even if it was sufficient, from a purely technical perspective it is not clear to me how this magically transient identification would work. Information transmitted between parties gets stored, that's how this stuff works -- what ID verification system are you imagining that can happen instantaneously without referencing any stored information and without any information leaving RAM? I'm not aware of one.

> "Let's assume for a moment that the law says the opposite of what it actually says". But it doesn't.

What? Every single law I referenced requires the transmission of this data and explicitly suggests sharing it with 3rd-party verification services. That's not me reading into the laws, it's just fact.

> It's easy for the government to investigate whether you check IDs: open the site and see if you request ID information. Present fake info and see if you accept it.

What system for instant ID verification that does not rely on storing or accessing stored, indexed information about an identity works like this? How do you propose that sites detect fake info without referencing that info against stored identifying information? Because advocates for these laws keep on saying this is easy and then describing systems that as far as I can tell, do not exist.

-----

I'm accommodating a little bit of a rabbit hole above, but I do need to loop back around to the more relevant point:

> There's literally no reason to have e.g. a knowledge based auth or signed id request hit disk.

Regular, consistent transmission and collection of ID information online presents security risks that are unique to remote identity verification and that are not present in physical spaces like shops and stores. Even if there existed a system that allowed this verification to happen entirely in RAM, that would not address the security points that professionals have raised. And even that magical system would necessarily require storing that information in more places -- on user phones and browsers in an easily transmissible format. It would necessarily require users to become more comfortable sharing information online that they should not be comfortable sharing online.

I'll repeat the same point I made in my previous comment:

> Advocates of these bills ignore that security researchers have an issue with collection and transmission of sensitive data in addition to storage, and so advocates point to narrow, non-specific language about long-term retention as if that solves all of the issues. It doesn't.

Pointing to retention as the only security risk in these laws misrepresents the concerns of security professionals. Ambiguous language that is inadequately explained or elaborated on within bills and that (theoretically) addresses one part of security researchers' concerns is not sufficient to dismiss their overall concerns. Regular uploading and transmitting of ID information to 3rd-parties over the Internet is more dangerous than showing your ID in a liquor store; transmission of that data necessarily requires copying that data, putting it in the hands of multiple parties, verifying their trustworthiness, and interacting with extremely complicated systems that have larger attack surfaces than a cashier looking at your face.

It's just not accurate to act like they're the same.


Sure, there are multiple laws. The ones I've read all seem similar enough to me on the points people bring up.

Identity verification is not that mysterious. If these sites are afraid to do it themselves, there are turnkey vendors for that, which e.g. banks or DocuSign use. All the laws I've read say sites can use third-party verification services. The Utah law specifically mentions

> verification through an independent, third-party age verification service that compares the personal information entered by the individual who is seeking access to the material that is available from a commercially available database, or aggregate of databases, that is regularly used by government agencies and businesses for the purpose of age and identity verification;

i.e., KBA (knowledge-based authentication), which is already a thing. These companies already know facts about everyone. You claim you're person X. They ask you to tell them a fact they already know. They check your answer against their database. They don't need to store anything you tell them. I'm sure they can tweak their service to only tell the requesting site you are over 18 and not keep any records. These services know how to deal with a highly regulated environment.
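To make the flow concrete, here's a minimal sketch in Python. Every name, record field, and challenge here is invented for illustration; real KBA vendors typically use multiple-choice questions drawn from credit-header databases:

    # Hypothetical KBA check: the vendor already holds the records and
    # returns only a yes/no answer; nothing submitted needs to be persisted.
    from datetime import date

    RECORDS = {  # data the vendor already holds (illustrative only)
        "jane doe": {"dob": date(1990, 4, 2), "prior_street": "elm st"},
    }

    def kba_over_18(name: str, answer: str) -> bool:
        record = RECORDS.get(name.lower())
        if record is None or record["prior_street"] != answer.lower():
            return False  # failed the knowledge challenge
        today = date.today()
        age = today.year - record["dob"].year - (
            (today.month, today.day) < (record["dob"].month, record["dob"].day)
        )
        return age >= 18

    print(kba_over_18("Jane Doe", "Elm St"))  # True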

The Utah law also allows the user to present a "data file from a state agency or an authorized agent of a state agency that contains all of the data elements visible on the face and back of a license or identification card and displays the current status of the license or identification card."

No need for the site to save anything. Just check the signature and age.
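Sketched out, that check could look like the following -- the JSON format and field names are invented, and Ed25519 is just an example signature scheme, since the actual state data-file format isn't specified:

    # Hypothetical check of a state-signed "data file": verify the agency's
    # signature, then check the age. Nothing needs to be written to disk.
    import json
    from datetime import date

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey, Ed25519PublicKey,
    )

    # Stand-in for the state agency's published verification key
    agency_key = Ed25519PrivateKey.generate()
    AGENCY_PUBLIC = agency_key.public_key()

    # The signed data file the user presents (simplified to two fields)
    payload = json.dumps({"dob": "1990-04-02", "status": "valid"}).encode()
    signature = agency_key.sign(payload)

    def verify_over_18(payload: bytes, sig: bytes, pub: Ed25519PublicKey) -> bool:
        try:
            pub.verify(sig, payload)  # raises if forged or tampered with
        except InvalidSignature:
            return False
        dob = date.fromisoformat(json.loads(payload)["dob"])
        today = date.today()
        age = today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))
        return age >= 18

    print(verify_over_18(payload, signature, AGENCY_PUBLIC))  # True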

I don't see what makes porn sites unique, security-wise, vs. any other e-commerce business that requires customers to identify themselves. Typically those actually store and sell your info.

Also many grocery stores do scan IDs when you hand them to the cashier. Who knows what they're doing with that info. Wouldn't surprise me if they retain and sell it.


> Sure, there are multiple laws. The ones I've read all seem similar enough to me on the points people bring up.

The laws are template laws, but do occasionally differ in important ways. You've mentioned before that Texas includes a financial penalty for retaining user IDs beyond verification. You didn't mention that Texas is pretty much the only state that does this, and the majority of the other bills only allow for suing for harm and attorney's fees. Harm can be difficult to prove for information retention, and these provisions rely on individual action for enforcement.

You mention later in this comment that Utah includes provisions for ID-only verification. You don't mention that Utah is (as far as I can tell) one of the only states that offers this kind of detail, most merely mentioning that "government identification" could be used for verification.

These things matter. When we treat these bills as a single unit, we run the risk of building a composite bill that theoretically addresses every concern, even though that composite bill doesn't actually exist anywhere.

----

> Identity verification is not that mysterious.

Agreed. Do you believe that the security professionals who are intimately familiar with identity verification services and who know how the current services work are just... lying? Like, what do you think is happening here? This is not something complicated where there are a bunch of debates about how ID verification can work, we know how the ID verification services today work. And security professionals are saying there's a security risk.

Does the Texas AG know something that they don't? Is there some secret new ID verification system that only lawmakers know about? Like you say, this isn't that mysterious, ID verification online exposes users to privacy and security risks. It's straightforward, this is a known risk.

The fact is, there are no identity verification services I'm aware of that I think are secure enough to use for this level of transaction -- and every 3rd-party ID service I'm aware of works by retaining and accessing stored information about users.

The people talking about the security risks know how existing identity verification services work. They're not that complicated. They work by collecting and transmitting and cross-referencing personally identifying data, and that process is vulnerable to attack and data misuse.

----

> i.e. KBA, which is already a thing. These companies already know facts about everyone. You claim you're person X. They ask you to tell them a fact they already know. They check your answer against their database. They don't need to store anything you tell them.

Okay, are you listening to yourself?

> They check your answer against their database.

So personally identifying information is collected and stored. And that information is linked to requests to access potentially compromising or embarrassing material on a level of granularity where those requests, if intercepted, can be used to link personal identities back to those requests. By your own admission.

I don't know, you're agreeing with me and then saying "see, that means that data doesn't have to be stored." No, you just described data getting stored and held by a 3rd party (notably, a set of 3rd parties that have historically had awful security and have regularly been irresponsible with those databases) and then cross-referenced with individual access requests in a way that necessarily identifies to these data brokers which individuals are interacting with which companies.

Sure, those services don't need to store your newly uploaded ID -- they already have it! But what comfort is that? They still have the ID either way. You are describing a system that can only exist by hoovering up and retaining huge amounts of data on individuals, and you're advocating that this system should be expanded.

And while we're on this subject, none of the laws I've read ban retaining records of this access or selling information about which individuals' identities are verified, even though that could be compromising or personal information. More PII and data is created during this process than just the ID you transmit, and I don't think a single law that I've read addresses that fact. But sure, the data broker that already has your ID won't store the image you sent them. That'll be a huge comfort to Texas users when those sites get hacked and leak access information about which users had their IDs verified for which services.

What you're describing is not a privacy-respecting system.

----

> The Utah law also allows the user to present a "data file from a state agency or an authorized agent of a state agency that contains all of the data elements visible on the face and back of a license or identification card and displays the current status of the license or identification card."

I avoided pushing this point too hard before, but reminder that there is no requirement in any of the laws I've read for state agents or authorized agents of the state to delete records of that request or to avoid linking those requests to individual services. The laws as written do not block government agencies from using this information to build detailed records of who accesses which services.

> No need for the site to save anything. Just check the signature and age.

This would not pass a check for fake IDs. Nor would it prevent shared IDs. The laws I've read provide no guarantee that a system that was trivially bypassed would be sufficient to ward off State action. Again with the ambiguity about what "reasonable" means, which is a major problem in these bills. "Don't violate privacy, but it has to work." Well, if all you're doing is OCR on a license and you're not cross-referencing that data or storing information about attempts, that is not a system that is hard to bypass.

Also as I mentioned above, there isn't just one law. Other laws do not go into this level of detail about what kinds of IDs are accepted or how they could be verified. Great that Utah does (although Utah's example is not sufficient to address concerns) -- that just leaves all of the other bills.

> I don't see what makes porn sites unique vs. any other e-commerce business that requires customers to identify themselves wrt. security.

Multiple things:

A) not all porn sites are e-commerce businesses, and not all platforms affected by these bills are porn sites. These bills are not typically restricted to commercial transactions -- merely accessing commercial sites requires verification, even without a business relationship.

B) e-commerce businesses with traditional verification requirements typically do not allow for anonymous usage in the first place. Many of them have extensive "know your customer" rules and are not concerned with protecting the privacy of their users -- quite the opposite, many of them are required to retain information about their users.

C) Security-wise they're not that different, and the criticism of these bills directly extends from knowledge about the security risks and bad practices of many of those e-commerce sites. Whether or not you understand the security implications, I promise you the organizations and security experts that are pushing back on these bills already understand that Flowroute exists.

Note that the theoretical instant, private identification that you seem to be proposing sites will implement doesn't exist for the companies that are relying on this verification today. Once again, I'm left pointing out that you're describing a happy-path scenario that isn't the case for any online identification system I can find. As far as I can tell, these services all store data about their users' individual identities.

----

> Also many grocery stores do scan IDs when you hand them to the cashier. Who knows what they're doing with that info. Wouldn't surprise me if they retain and sell it.

Shouldn't you check up on that before advocating that Internet ID verification is fine because it's just like local verification? Me personally, before I compared digital ID verification to local ID verification, I might make sure that local verification isn't retaining and selling all of your data, because otherwise the comparison would look awful. Have you checked to see whether security professionals have also raised alarms about local storage of ID information? Because... they have, for the exact same reasons :)

Local ID verification ideally should not involve scanning an ID, and the fact that it sometimes does anyway is worrisome. It doesn't bode well for expanded digital ID verification.

If your point is "local verification doesn't require sending information to multiple parties across the Internet and yet companies still do it anyway, and we still don't know what's happening to your data in that scenario" then... I mean, you have to understand that's not something that is likely to make anybody feel more charitable to your argument, right? That's not something that makes online ID verification seem like a good idea.

----

Once again, I'll repeat:

- Texas's own language refers to these systems as storing user information.

- There are no ID verification systems that I'm aware of for online services that work without maintaining and storing information about users.

- Addressing long-term retention of submitted information is not sufficient to address the privacy and security concerns that researchers have brought up.

- None of the bills I've read are clear that an unverifiable zero-retention policy would be sufficient to avoid liability; this seems to be something you're just reading into the text as an assumption of good will.

What you're suggesting above about retention practices and the ability of ID verification services to do this without storing customer data isn't true -- but even if it was true (which it's not) it changes nothing. Regular transmission of this kind of information is dangerous, users should not be trained to submit this kind of information casually, especially not to sites that they don't have business relationships with. The transmission and collection of this information exposes users to risks to both privacy and security.


I don't think security professionals are lying. I think "security professional" is a meaningless descriptor like "thought leader" that one applies to themselves, and they shouldn't be given any specific credibility.

At the end of the day, I agree we should have stronger data protection and retention regulations, federally even. That's an orthogonal issue to whether adult services online should require some validation that the customer is an adult. It's not the first solution I'd reach for (I'd prefer requiring metadata to make client filtering easier), but the more I think about it, the more reasonable it seems. No one throws a fit when Instacart scans your ID for alcohol orders. Buying a gun online has even more stringent requirements where you need to go visit an FFL to pick up. Likewise in my area, marijuana is legal (modulo federal illegality), but delivery is not; you basically can't buy it online.

I don't see why porn is special here. The law banned distribution to minors long before the web existed. By default, sites (commercial ones at the very least) should be criminally liable for breaking the law if they distribute to minors, just like in-person stores are. They should be proposing systems that they believe are reasonable to meet their obligation, but they are not. Instead, they've gone from at least requiring credit cards to... absolutely nothing. They've frankly brought this on themselves.

The obvious elephant in the room to me is that none of this would even be controversial if sites hadn't moved to an ad-supported model. If you're paying for it, of course they need to know who you are for billing. Again, the more I think about it, the more reasonable it seems to me that if you're going to have that business model, then fine, but you need to at least do the checks you would've otherwise done during billing.

So perhaps the issue is

> sites that they don't have business relationships with

Is simply not a good model. If a business doesn't want to establish itself as credible to its customers such that they can trust it to professionally handle their information, then maybe they shouldn't be in an adult restricted industry where they need to handle that information. If they don't want to handle that information, perhaps they can propose a system where they don't need to (I've commented elsewhere on HN[0] about an oauth-like system where the government could provide age gate tokens without knowing who the token is being issued to or even if the age required is over 18 or over 21. It's not that complicated. Why do we have no one in these industries making such a proposal to lawmakers? They've had 30 years to do it.).

[0] https://news.ycombinator.com/item?id=39183486

https://news.ycombinator.com/item?id=39191568
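To sketch the shape of that token idea, here's a bare-bones example using textbook RSA blind signatures in Python. A real system would use something like RFC 9474 (blind RSA with PSS) or Privacy Pass rather than raw RSA, and all the names here are invented:

    # Hypothetical "blind age token": the issuer (government) signs a token
    # it never sees, so the spent token can't be linked back to the user.
    import hashlib
    import secrets

    from cryptography.hazmat.primitives.asymmetric import rsa

    # --- Issuer key setup ---
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    n = key.public_key().public_numbers().n
    e = key.public_key().public_numbers().e
    d = key.private_numbers().d

    # --- User: blind a random token before sending it for signing ---
    token = secrets.token_bytes(32)  # opaque "over 18" token
    m = int.from_bytes(hashlib.sha256(token).digest(), "big")
    r = secrets.randbelow(n - 2) + 2          # blinding factor (gcd(r, n) = 1
    blinded = (m * pow(r, e, n)) % n          # with overwhelming probability)

    # --- Issuer: verifies the user's age out-of-band, then signs blind ---
    blind_sig = pow(blinded, d, n)            # issuer never sees the token

    # --- User: unblind the signature ---
    sig = (blind_sig * pow(r, -1, n)) % n

    # --- Site: checks the issuer's signature, learns nothing about the user ---
    assert pow(sig, e, n) == m
    print("age token verified; unlinkable to the issuance request")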


> I don't think security professionals are lying. I think "security professional" is a meaningless descriptor like "thought leader" that one applies to themselves

You don't believe that people who study security for a living might know more about it? Certainly software security experts should be given more credibility about software security than politicians should be given. I'm not sure I've ever run into the view before that security research is a pseudoscience.

> At the end of the day, I agree we should have stronger data protection and retention regulations, federally even. That's an orthogonal issue to whether adult services online should require some validation that the customer is an adult.

In what way is that orthogonal? The lack of data protection and retention regulations is a big part of why this stuff is dangerous. This is a little silly, you agree that the existing standards and services are not sufficient, but you don't think that's relevant to whether or not their use should be massively expanded under the direction of the government?

Of course it's relevant.

----

> No one throws a fit when instacart scans your ID for alcohol orders. Buying a gun online has even more stringent requirements where you need to go visit an FFL to pick up.

I already talked about this, not all of these sites are transactional. Also, note that porn is tied into normal political and social speech in a way that it could never be fully transactional and commercial without restricting a large portion of that speech.

Also, people do throw a fit about data privacy and about at the very least improving security for ID verification. To your point:

> I don't see why porn is special here.

It's not. These debates happen in other areas too: attempts to clamp down on hate speech and propaganda, to restrict information flow across state lines, to track copyright violations, to access E2EE messaging, etc. What you're seeing is completely normal consumer advocacy for privacy, security, and free speech, but because the US is so conditioned to think that porn is some kind of special category, advocacy that probably wouldn't make you blink in other situations feels weird to you now. You may not be aware of debates in other areas of customer tracking, but even with that lack of awareness porn jumps out and you are aware of that specific debate... because everyone thinks porn is some special category.

Data scientists and security experts ruining a legislator's day by pointing out that the systems they imagine actually have huge security holes is normal. It only feels different to you because this time it's about porn.

----

> They should be proposing systems that they believe are reasonable to meet their obligation, but they are not.

So here's an interesting thing to research: they are. Every single one of these sites labels content in a way that can be intercepted and blocked at the router layer or by parental controls on devices. They all self-identify, even in areas where they're not legally required to.

If you think that porn companies are sitting around and doing nothing, you really have not done much research in this space. They have made plenty of proposals about how to make filtering easier, but states have largely ignored those proposals because:

A) they would require pushing companies like Apple to develop competent parental controls, and that doesn't poll as well among Conservative voters,

B) the majority of these laws have backing from explicitly anti-porn advocacy groups who do not want parental controls, they want to ban porn.

----

> The obvious elephant in the room to me is that none of this would even be controversial if sites hadn't moved to an ad-supported model.

I would advise doing more research on this, there are controversies about this kind of ID requirement even for purely transactional data because it does expose people to privacy risks. I will also note that pushing an entire category of speech to require a transactional relationship would very likely be a violation of the 1st Amendment.

----

You have a couple of accusations here that are just straight-up false. Pornhub specifically called out the lack of government-backed ID services as a partial reason for their opposition to these bills, and has lobbied for states to build such a service. More importantly, Pornhub already does what you describe as your preferred solution: "I'd prefer requiring metadata to make client filtering easier".

Pornhub is pushing out metadata today. There's a full-on standard for it and everything (https://www.rtalabel.org/).
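For the unfamiliar: the RTA label is a fixed string served in a meta tag and/or an HTTP "Rating" header that any filter can match on. A rough sketch of checking for it (hypothetical helper, arbitrary User-Agent):

    import urllib.request

    RTA_LABEL = "RTA-5042-1996-1400-1577-RTA"  # the published RTA string

    def is_rta_labeled(url: str) -> bool:
        req = urllib.request.Request(url, headers={"User-Agent": "filter-check"})
        with urllib.request.urlopen(req, timeout=10) as resp:
            if RTA_LABEL in (resp.headers.get("Rating") or ""):
                return True  # labeled via response header
            # fall back to scanning the start of the page for the meta tag
            return RTA_LABEL in resp.read(65536).decode("utf-8", "replace")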

What is actually happening is that Apple, Android, and routers don't provide sufficient parental controls to act on that metadata. But commercial porn sites are not in charge of what Apple and Android build. The argument that these commercial sites have made is not that they should be allowed to market harmful content to children, but that when states ignore a workable solution to a problem in favor of a less practical solution with greater security, privacy, and free speech implications -- that is not a good use of legislation.

If this is the first time you're hearing about this -- I mean, I'm not surprised, like I mentioned above porn is a weird category of protected speech and as a result coverage gets weird around it. And the lobbying groups behind these bills have worked very hard to act as if porn sites are simply throwing content online or even deliberately targeting children. So it's not unexpected that you largely get one side of the story. Texas doesn't advertise that it outright ignored calls to legislate better parental controls. Louisiana doesn't advertise that there exists a labeling standard in use today that they are actively ignoring. I don't expect someone just now looking at the text of these bills to know that.

But knowing that now, you should do some deeper research on this and figure out what the status quo actually is. Unlike buying alcohol, porn is very directly speech and has been affirmed by the Supreme Court to be speech on multiple occasions. Porn is not always directly transactional content, it gets mixed into normal speech -- and a proposal to get rid of an entire monetization category is extreme. But people feel comfortable proposing restrictions that wouldn't really fly in any other speech category, because they're conditioned to believe that porn is something special and that porn companies are just sitting around happily showing dicks to kids or something.

----

You've advocated elsewhere that router-level blocks are sufficient to handle blocking for VPNs, foreign sites, etc... What porn companies are (and have been) proposing is exactly what you want. Require routers to offer parental controls that can act on the metadata that porn companies will happily attach to the content they serve. Legislate that this metadata must be attached to pornographic content. This would not only be a more private and secure solution, it would also be more effective. It would do a better job of protecting kids than a random OCR check on a drivers license.

Now that you know that, do you find it at all odd that all of these states have completely ignored that proposal and are instead pushing a solution that has obvious privacy and security risks and that is observably pushing websites to block their states? Does knowing this information help you understand what I mean when I say that these laws are less about protecting kids and more about banning porn?


They're orthogonal issues because you can address one, the other, neither, or both totally independently. We can have very strict data protection laws and also have strict id checking for regulated industries.

I'm well aware of RTA labels. I've pointed them out on similar threads. They're also not ideal (given that they're basically "yes/no" which will necessarily lead to arguments about what should be classified), but like I said, I'm inclined to prefer that kind of approach. Something like mandate commercial sites and commercial browsers (which is every major one) to implement it or something like it, with criminal liability for commercial porn sites that fail to do so.

That said, not all sites do implement it. e.g. reddit and redgifs do not, and reddit also hosts forums specifically targeted at children. Those two sites are very high traffic and are completely negligent here. Also, content can't be blocked at the router level if it's using TLS, which of course almost all of these sites do (you could potentially do SNI sniffing against a host blacklist, but even that will go away with ECH). Perhaps the "evil bit" could be used for that purpose at the IP layer so it works with TLS.

Generally, the more I think about it, it does seem "reasonable" to just say businesses dealing in adult restricted materials are liable for determining their customer is an adult (to a standard that a reasonable person would believe), and websites are not an exception unless it was e.g. a defacement. Let them figure out how to do it, and if the government can collect evidence that they failed to do so, they can charge them with distribution to minors. The sites can come up with their own system according to their risk tolerance. Basically, just raise (or introduce) the bar for negligence.

Alcohol distributors don't seem to have a problem doing this. Perhaps porn distributors can ask them for help.


> They're orthogonal issues because you can address one [...] independently. We can have very strict data protection laws and also have strict id checking for regulated industries.

We cannot have secure ID checking without data protection laws. They're not orthogonal. This is the same conversation that comes up every time the government tries to mandate secure backdoors into encryption. You can't massively expand usage of an insecure technology and, when it's pointed out that the current technology is insecure, say "well, that's a separate issue, we don't have to worry about that right now." It's not a separate issue; you're massively expanding a technology that is currently insecure, just own it.

> I'm well aware of RTA labels.

Then why did you claim that porn industries weren't doing anything? I mean, I'm trying to be charitable here; it would be very reasonable for you not to be aware of those efforts -- most people aren't. But you're saying you were?

You're telling me that when you said:

> They should be proposing systems that they believe are reasonable to meet their obligation, but they are not. Instead, they've gone from at least requiring credit cards to... absolutely nothing.

You knew that this was false -- like literally just straight-up wrong? When you commented that porn industries had 30 years to propose government ID systems to avoid handling this data themselves and hadn't... you knew that porn industries had actually proposed and lobbied for government ID systems?

So why did you say otherwise?

> They're also not ideal (given that they're basically "yes/no"

Come on, this is obviously not an issue for you because if it was, you wouldn't be supporting the current bills, which all implement binary "yes/no" classifications. We could debate whether or not broad classifications that refuse to distinguish between types of porn are good or bad, but you are currently arguing in favor of a binary classification for the purposes of liability, so I don't think that discussion would be a good use of time. Obviously you're OK with binary classification for age-verification, so this is not a real objection.

> reddit and redgifs do not

It's not clear that Reddit is liable under all of the laws proposed. Reddit hasn't pulled out of any of these states or added ID checks. Your argument against the proposal of labeling is a site whose content isn't addressed under the proposed laws.

Also, if you don't like that Reddit doesn't currently use the unlegislated standard... legislate it. Pornhub isn't lobbying to block labeling laws. You can require Reddit to use a labeling standard.

> Also content can't be blocked at the router level if it's using TLS

My sibling in Christ, you proposed blocking sites and VPNs at the router level. This was your solution to foreign porn sites that aren't covered by these laws. Now suddenly that's not sufficient?

Regardless, we use per-page metadata all over the place on platforms like iOS and Android to enable functionality based on page contents -- from device support to PWA indicators. There is no reason why these platforms can't work those same indicators into content blocking tools. And the presence of headers on landing pages for sites like Pornhub can be used at the network level to block these sites entirely, which again... you proposed doing!

Blocking per-page content is just a bonus, the current bills don't address that concern. It's a mark of the superiority of labeling that it allows a level of granularity that current bills don't.

> Alcohol distributors don't seem to have a problem doing this.

I'll repeat, porn isn't always transactional and porn is rolled into normal political and social speech in a way that prevents making it purely transactional without limiting large categories of speech. It's not the same as alcohol.

Alcohol also isn't speech. Porn is.

> Generally the more I think about it, it does seem "reasonable" to just say businesses dealing in adult restricted materials are liable...

You're allowed to think it's reasonable. The problem is if you spread misinformation while defending that position. To summarize where this thread has gone, you've suggested:

- People shouldn't worry about data collection because the laws prevent it. This is false, many of the laws have limited liability and recourse for data collection, and most only target retention of ID information, not aggregate data collection about users' browsing habits. Additionally, none of the laws limit government collection of data.

- The laws are close enough to each other that they can be read interchangeably. This is false, although the laws are templates of each other they often differ on details, and the presence of a provision in one bill does not solve problems for other bills.

- Information does not need to be stored or collected to implement 3rd-party ID checks. This is false, there are no 3rd-party ID checking services that I'm aware of that do not collect and store information about users.

- Retention laws would solve the security problems. This is false and a misrepresentation of security professionals' criticism of the bills. Retention is one part of the security and privacy risk.

- Porn companies have not proposed any alternatives. This is false, they have -- both ID systems and labeling systems. What's wild about this one is that you're suggesting you knew that this was false when you said it, which is not something I would have suggested.

- Porn verification is identical to alcohol/gun verification. This is false, most porn consumption online is not via a transactional relationship.

----

Like I said, I don't care if you support the bills, that's fine. It's a free country, you can support whatever you want. Just don't spread misinformation while you're doing so.


You can of course have secure ID checking without data protection laws: the companies doing the check can just not store information about the check, regardless of whether they are required to delete it. As long as they are not required to retain it, which I have not seen anywhere, they certainly can choose not to. Here though, the laws I've looked at all specify that they must not retain it. They could have higher penalties, but they already explicitly forbid it.

Like I said several times and have said in other similar threads, I'm inclined to think RTA headers are a "better" approach. Currently they're not consistently implemented on either end (e.g. Firefox doesn't support them, sites I mentioned don't send them), but it'd be a quick win to mandate that in commercial contexts, which would include Firefox.

But you don't have to look far to find people who think the filtering problem is entirely intractable (they're in this thread). I think it's worth trying the metadata approach more with commercial mandates to implement something along those lines. I can see why people could argue that's been tried enough (filters have existed for over 20 years, and access for children is still easy), and they need something more. It's not clear that they're even wrong, though I'd like to see us try still. But the more I consider it, it really doesn't seem like that big of a deal to just do ID checks. Presumably you'd do it once to establish an account that's above the age limit. Not the end of the world.

Maybe I'm wrong about these sites' lobbying efforts. Maybe most of them have been posting on their front page big banners asking people to tell their representatives to support mandatory metadata processing/filter enablement laws. I sort of doubt it, but it could be. I do know that some major sites (e.g. reddit) don't implement the metadata or any other controls.

It's not clear what the "percent of content" in these laws means, but when I looked at dumps last year, reddit looked to be ~40% porn by posts (obviously not if you consider comments to be content for counting). It is (or was, as of last year, if dumps are accurate) pretty close to being more porn than not. Certainly for a discussion about how porn sites behave, they are a major porn site with millions of users, and they do exactly nothing to turn away minors (in fact they obviously target them) or segregate the site.

I pointed out elsewhere that routers can block common VPN protocols (e.g. ipsec or wireguard). Of course they can do almost nothing to block something going over TLS:443, and soon they won't be able to do SNI sniffing either. So network filtering of sites is not possible anymore unless they stop using TLS.

Anyway, my point about worrying about data collection and retention is that people should worry about it to the same extent they do with eBay or some small Shopify-based site. They should worry about it! But they shouldn't specifically worry about porn sites. And the laws here seem to all ban retention, which is good. Perhaps they could have higher penalties, but they do ban it. Generally e-commerce sites don't have retention regulations.

It's not clear to me how governments would get any records to retain, but sure they should disallow it.

3rd parties already store data that can be used for verification. I don't see KYC laws being undone anytime soon. There's no need to record any information about a verification occurring. I'm sure companies offering KYC services who are already used to operating in regulated environments can deal with not retaining submitted information.

I don't really understand your point about "transactional" relationships. If you have a business providing a service, they can follow relevant laws for their industry. If Total Wine decided to place an unmonitored "free beer" keg out front where children could get to it, they'd almost certainly end up in legal trouble.

Or perhaps a more direct analogy would be if you opened an adult theater with an automated ticket machine so no one checked who was coming in. Or a Redbox that took cash and rented adult movies with no checks. That business would never fly in person. Why is it different online?


> the companies doing the check can just not store information about the check, regardless of whether they are required to delete it.

Seriously? By that logic the bills themselves are orthogonal to porn, since sites can just institute ID requirements without being required to do it. There are no 3rd-party ID services that have good privacy handling or refuse to retain information. And in the absence of government-sponsored alternatives (which companies have asked for) this is a de-facto requirement to use 3rd-party ID services that put customer data at risk.

> Here though, the laws I've looked at all specify that they must not retain it. They could have higher penalties, but they already explicitly forbid it.

False. I already covered this:

> many of the laws have limited liability and recourse for data collection, and most only target retention of ID information, not aggregate data collection about users' browsing habits. Additionally, none of the laws limit government collection of data.

----

> I can see why people could argue that's been tried enough (filters have existed for over 20 years, and access for children is still easy)

People can argue a lot of stuff, that doesn't make any of it correct. If someone argues we've tried mandating labeling for online porn and legislating parental controls... we haven't. They're wrong. They can argue it if they want, but they're arguing fiction.

----

> Maybe I'm wrong about these sites' lobbying efforts. Maybe most of them have been posting on their front page big banners asking people to tell their representatives to support mandatory metadata processing/filter enablement laws.

Pornhub has in fact literally placed large banners in certain states lobbying about this topic and asking customers to go to their representatives and get involved. I've never seen a response from the company to any porn bill in which they don't put forward the idea of device-based filtering. They are constantly qualifying their responses with stuff like "of course, we also want to keep kids safe, so that's why we support local filtering and labeling laws".

It is strictly inaccurate to claim that these bills are the result of inaction from porn companies, or that porn companies have not proposed alternatives. You claimed that these companies had done nothing, but in reality they literally built a standard for the government. And pushed for government ID verification too, as an alternative to 3rd-party services! :)

----

> It's not clear what the "percent of content" in these laws means

That is, the laws are over-ambiguous and don't clarify liability to an acceptable degree. A lot of things aren't clear in these laws. It's not clear what "reasonable" means. It's not clear what "damages" are for data retention. It's not clear what "retention" means in these laws!

They're badly written laws.

> Certainly for a discussion about how porn sites behave, they [Reddit] are a major porn site with millions of users, and they do exactly nothing to turn away minors (in fact they obviously target them) or segregate the site.

Multiple of these laws have taken effect in states already. Reddit requires an ID in none of those states. No politician I'm aware of has talked about suing Reddit. If you have a problem with Reddit's handling of porn, these bills aren't doing anything about it.

Because of course they aren't, no AG is going to be so foolish as to try and force an ID requirement in order to view Reddit posts. But you know what would allow blocking porn on Reddit? Labeling requirements.

You want to know who else isn't covered by these laws? Non-commercial sites -- because placing these kinds of restrictions on non-commercial hobby sites would be far more likely to raise 1st Amendment questions (not that the laws as they exist don't already raise 1st Amendment questions). But you want to know how you could legislate filtering for smaller sites without raising those questions? Labeling requirements.

----

> I pointed out elsewhere that routers can block common VPN protocols (e.g. ipsec or wireguard). Of course they can do almost nothing to block something going over TLS:443

From https://news.ycombinator.com/item?id=39957264

> "But what about sites outside of US jurisdiction (e.g. Russia)?" Require ISPs to have a setting for customers to opt into blocking them.

Well tbf, Russian sites famously never use TLS ever ;)

Look, routers do block websites all the time, including encrypted ones. Sites can be blocked via IP, but the more direct way is to block using DNS. TLS doesn't stop 1.1.1.3 from working, and even once ECH comes in, any device that is capable of supporting ECH is also going to support setting custom DNS servers, including a local resolver managed by a router.
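As a rough illustration, using the dnspython package and assuming Cloudflare's family resolver still answers 0.0.0.0 for domains on its adult-content list:

    import dns.resolver  # pip install dnspython

    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = ["1.1.1.3"]  # Cloudflare "family" DNS (adult filter)

    def is_dns_blocked(domain: str) -> bool:
        try:
            answers = resolver.resolve(domain, "A")
        except dns.resolver.NXDOMAIN:
            return True  # some filtering resolvers answer NXDOMAIN instead
        return all(rr.address == "0.0.0.0" for rr in answers)

TLS never enters the picture: the lookup is blocked before a connection is ever made, which is why a router-managed resolver keeps working even once ECH hides the SNI.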

But maybe you don't want to use a router, fine. Maybe DNS is too hacky for you. That doesn't mean that iOS and Android devices can't also implement this kind of blocking.

----

> And the laws here seem to all ban retention, which is good. Perhaps they could have higher penalties, but they do ban it.

See above, this is false. I covered this already.

> It's not clear to me how governments would get any records to retain, but sure they should disallow it.

But they don't disallow it, do they? :) I'd like a lot of things in a theoretical version of the legislation, but unfortunately we're talking about the legislation that exists -- and the legislation that exists does not appear to bar indexing of consumer internet habits by the government (or by private businesses).

> Generally e-commerce sites don't have retention regulations.

Any site that accepts payments has de facto retention regulations, at the very least for taxes -- in practice at least. Given how ambiguous most of these laws are about enforcement and what "reasonable" means, there is a heavy incentive for sites to retain at least some user metadata even if they can't retain actual ID documents.

Also bear in mind that 3rd-party verification necessarily requires the collection and retention of information about every single person who can be verified through that service. Whether they retain the specific documents submitted or not, this is still an expansion of user surveillance -- and of course, the laws do not clearly ban collection of metadata and identifying information about user requests outside of the ID information itself. At least, I think the laws could plausibly be argued in court not to cover that information.

----

> I don't really understand your point about "transactional" relationships.

Not all porn is part of a transaction at all. Porn isn't always something you buy; it's not like alcohol. It's not a purely commercial product, and it's not always tied to accounts or orders. And forcing it into that category would hamper a lot of speech, because porn is intrinsically tied up with political and social speech. Particularly where user content is concerned, porn can be extremely political, and the history of porn/decency laws in the US demonstrates that concept over and over again. Porn cannot be reduced to a singular transaction in the vein of buying a beer -- not just because it may not involve an exchange of money, but also because porn is speech: it is communicative, it happens alongside and inside protected communication. Buying a beer is not like that.

Where the Internet is concerned it is actually a good thing that people can read Reddit and Twitter anonymously without making accounts and logging in. We don't want an Internet where every single site is a walled garden that requires a user account. It is a good thing that people can set up Mastodon servers that openly federate -- something that would be practically impossible if they required ID verification in order for anyone to view posts on the service. And again, it's not as simple as saying "well, but we'd only require it for porn." If you're requiring it for porn, you are requiring it for protected political speech. The implications are the same.

What people don't really acknowledge when talking about porn is that things can be inappropriate and harmful to children and also protected political and social speech that should not be restrained between adults. It cannot be reduced to a purely transactional "I would like to buy a smutty magazine" framework.

> Or a Redbox that took cash and rented adult movies with no checks.

As a sidenote, I strongly suspect that a Redbox that took cash and rented out R-rated movies would be legal in nearly every state. Did you know that it's not illegal for a parent to take a child to an R-rated movie, even one that contains sexual content? I wouldn't advise doing so, children shouldn't watch R-rated movies, that kind of content can be very harmful to them. But nobody will arrest you for it.

Did you know that compliance with movie ratings isn't legally mandated? Movie theaters actually have no legal obligation to keep children out of R-rated movies (and certainly no requirement to ask for IDs) -- the whole thing is a completely voluntary standard. Just a fun fact.

But to your broader question:

> Why is it different online?

Because mediums affect security risks and liabilities. Because it's online. Because asking for an ID to be uploaded before you look at a Reddit post has bigger security and privacy implications and as a result bigger speech implications than asking for an ID before you physically buy a beer from a liquor store. Because they're not the same thing.

There's a lot of stuff we do online that we don't do in physical spaces. In physical spaces I don't need to encrypt every single message I hand to someone else. On the Internet, we use TLS. Because mediums affect things. They always have affected things and they always will. And this is not new, newer mediums have been affecting how we write laws and regulate communication since the founding of this country.

----

We cycle back around to my previous point: you can think these laws are reasonable, it's a free country, you can think whatever you want. My problem is not whether or not you think the laws are reasonable, my problem is that you're spreading misinformation when talking about the laws.

I'm still waiting for an explanation of why you said that porn companies have done "absolutely nothing" and had proposed no standards when you apparently knew that was straight-up false and that porn companies had in fact proposed standards and advocated for them.

You're allowed to think that online IDs are no big deal; just don't say things that are provably untrue, that's all I'm asking.


Where exactly have they put forward a proposal? I checked on pornhub's FAQ and Press section, and on their parent company Aylo's site. I see nothing. What is the proposal they have? That we put some liability on sites and browser/OS vendors to implement RTA headers? Including non-commercial (e.g. FOSS) distributors? That seems like a much larger abridgement of speech, and without it, you could trivially work around the filter by e.g. running Konqueror off a flash drive.

As far as I know RTA headers date back to IE6, and have gone mostly unimplemented (neither Firefox nor Chromium have any parental control options in their settings). It's actually hard to find any information about it. I only know about it from trying to find out how the old IE parental controls dialog was supposed to work. It's almost entirely undocumented anywhere.

This article[0] claims pornhub is in favor of mandating age verification, but at the "device level" whatever that means (likely it doesn't mean anything).

So what's been proposed? To which lawmakers have they presented these proposals? Have they just gone completely ignored? Where are their press releases urging people get their solution passed?

You said they placed large banners in certain states. Why not in all states? Or are they only placing banners after they've already had regulation passed against them?

The ruling from the 00s was based on technology at the time, and considered what seemed to be the least invasive way to feasibly do it. At that time, you could actually run a network-level filter. Conceivably your ISP could do it for you. That is almost impossible now, and will be completely impossible soon (except very coarse filters like geo-bans or protocol filters).

So your remaining options are (1) do nothing, (2) put requirements on companies/customers (and use geo-network filters for sites outside jurisdiction), or (3) put requirements on end-software/device providers (and porn companies). There is of course precedent for (3) (the V-Chip), but it's not even clear that that's less onerous than (2). Especially since in the meantime an industry has developed that can make ID verification take a couple of seconds. I can answer some questions without presenting any documents to satisfy bank KYC regulations; maybe some of the wording is overly vague, but it seems existing commercial systems for ID verification would fit the intent for "commercially reasonable" systems. As far as I understand, laws are supposed to be vague in the sense of saying things like "reasonable" in order to allow "reasonable" to change over time.

Note also that this new crop of laws seems to all be about commercial services. Mastodon, etc. are not in scope (unless your Mastodon instance is a commercial porn site). They also have the now-standard exceptions for things with literary/artistic/political/educational value. The "porn is speech" issue is tautologically handled by saying the laws only apply to the non-speech variety.

[0] https://www.cityweekly.net/utah/utahs-elected-leaders-push-n...


> Where exactly have they put forward a proposal?

Literally a banner ad in front of Utah users, the exact thing you asked for (https://kslnewsradio.com/2003298/adult-website-pornhub-block...):

> "Please contact your representatives before it is too late and demand device-based verification solutions that make the internet safer while also respecting your privacy."

This is silly. The language here could not be clearer. You asked for a banner ad, they literally put a banner ad up saying, "contact your representatives and ask for device-based verification." RTA headers are a completed standard for content filtering. The fact that they're not widely implemented is because they've never been legislated.
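
For anyone who hasn't looked at it, the standard really is that simple. Here's a hedged sketch of what honoring the label could involve -- the label string is RTA's published value, but the header/meta-tag handling below is my own simplification, not a reference implementation:

  import requests

  # The published RTA label value; participating sites expose it as an
  # HTTP header or a <meta> tag so that filters can block the page.
  RTA_LABEL = "RTA-5042-1996-1400-1577-RTA"

  def is_rta_labeled(url):
      resp = requests.get(url, timeout=5)
      if RTA_LABEL in resp.headers.get("Rating", ""):
          return True
      # Crude fallback: look for the label anywhere in the returned HTML.
      return RTA_LABEL in resp.text

A browser or OS parental-control filter that honors the label just refuses to render anything that returns True.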

> So what's been proposed? To which lawmakers have they presented these proposals? Have they just gone completely ignored? Where are their press releases urging people get their solution passed?

Quite frankly, this was not hard to Google. From https://www.cnn.com/2023/06/07/tech/pornhub-age-verification...:

> The video’s release coincides with a previously unreported effort by Pornhub — and its private equity owners, Ethical Capital Partners (ECP) — to convince the world’s largest tech companies to intervene in the wider debate over age restrictions for digital porn and social media. [...] In recent weeks, ECP has lobbied Apple, Google and Microsoft to jointly develop a technological standard that might turn a user’s electronic device into the proof of age necessary to access restricted online content, according to Solomon Friedman, a partner at ECP. [...] One possible version of the idea, Friedman told CNN, would be for the tech companies to securely store a person’s age information on a device and for the operating system to provide websites requesting age verification with a yes-or-no answer on the owner’s behalf — allowing sites to block underage users without ever handling anyone’s personal information. [...] “We are willing to commit whatever resources are required to work proactively with those companies, with other technical service providers and as well with government,” Friedman said.

You are wrong. Porn companies are putting effort into this. You're moving goalposts for how much effort you think is fair, but what's wild is even with you moving those goalposts, you're still wrong.

What you're right about is that we haven't seen much progress in this area. Why not? Well, from the same article:

> But it is far from clear the effort is succeeding. Friedman declined to say how, or even if, the companies have responded to Pornhub’s communications. Microsoft declined to comment for this story; Apple and Google didn’t respond to requests for comment. [...] Friedman characterized the discussions as being in “early stages,” though his other remarks implied the talks may be largely one-sided.

So companies reach out to tech companies and encourage lawmakers to pass laws, they're ignored, and then you come along and say "well, they should have said something". They did. They have been saying something. You came along and argued that these companies have said nothing. That there's been complete silence. You're wrong.

----

> That we put some liability on sites and browser/OS vendors to implement RTA headers? Including non-commercial (e.g. FOSS) distributors? That seems like a much larger abridgement of speech

Well... it's not. It's not completely free of 1st Amendment concerns, but it certainly has fewer of them. Liability on the level of distributors is a much clearer 1st Amendment problem; we literally have Supreme Court precedent on the books saying that blocking distribution of porn can be unconstitutional. Congress passed laws about communication decency that got shut down -- that's why, despite your suggestion otherwise, there is no federal ban on distributing adult material to children. We tried it, and the Supreme Court ruled it unconstitutional (https://archive.nytimes.com/www.nytimes.com/library/cyber/we...). And all of those issues still exist for these bills.

This is something you should do more research on; I glossed over some of your earlier comments about "it's already illegal to give porn to kids", but since we're talking about 1st Amendment challenges, I should point out -- it's not federally illegal to give porn to kids, and attempts to make it illegal have been struck down before.

> and without it, you could trivially work around the filter by e.g. running Konqueror off a flash drive.

If I may quote a wise commenter: "It's not perfect, but that's a silly reason not to do it."

Part of parental controls on an iPhone, Windows, or Android device could be restriction of installation of 3rd-party software. And it's not clear to me that legislation of non-commercial software or platforms is even required here. All of the major browsers (Firefox included) are commercial browsers owned by commercial for-profit businesses. And that's before even asking whether they'd need to be regulated: like I said earlier, every one of these browsers already has controls for setting DNS settings through administrative policies.
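
For example, here's a rough sketch of pinning Firefox to a filtering resolver through its enterprise policy file. The DNSOverHTTPS/Enabled/ProviderURL/Locked keys come from Mozilla's published policy templates; the resolver URL is just an example, and the install path for the file varies by OS:

  import json

  # Firefox reads administrative policies from a "policies.json" file.
  # "Locked": True prevents the user from changing the setting in
  # about:preferences.
  policies = {
      "policies": {
          "DNSOverHTTPS": {
              "Enabled": True,
              "ProviderURL": "https://family.cloudflare-dns.com/dns-query",
              "Locked": True,
          }
      }
  }

  with open("policies.json", "w") as f:
      json.dump(policies, f, indent=2)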

If you're worried about non-commercial escape-hatches, bear in mind that the current bills you're championing only apply to commercial sites, which would not limit resharing, re-uploading, or access to sites that are operating outside of the US. I promise you that shady porn sites will still be available in Utah after this decision. You seem to believe that's an easy problem to solve, but you also seem to believe that it's an impossible problem to solve when we talk about filters, so who knows? You seem to believe a lot of incompatible things.

Nor do they limit VPNs and it is not that hard to find a non-commercial or free VPN or proxy. Every Google Fi Android phone ships with a free VPN that doesn't monitor what you're accessing. Apple has been pushing for mobile VPN support through iCloud as well, although their setup is a bit more limited and doesn't (yet) obscure your state. At some point your child will have a VPN available to them literally just because they have an iPhone and you buy iCloud.

Device-based filtering using parental controls isn't perfect, but it is a better solution. Because even ignoring the constitutional issues, the privacy issues, and the security issues -- I hate to tell you this, but there's porn on Mastodon. As you've pointed out, there's porn on Reddit. None of these laws target those sites, none of those sites have been sued.

The current laws being passed do not protect kids from porn. Objectively, factually -- we know this because the laws are in effect, and Reddit and Mastodon still serve porn in those states. This is not something that's debatable.

----

> but at the "device level" whatever that means (likely it doesn't mean anything).

Very, very obviously, it means age verification would be handled through parental controls on the device. This is not complicated.

Yes, a law would need to elaborate more on what parental controls were sufficient, but that's part of writing a law. You're confused about what "device-level" protections are, but have no issue with laws offering zero definition of what reasonable standards of verification are?

Nevertheless, if you're genuinely somehow confused, Pornhub itself clarifies what it means by this on its own blog (where once again, it encourages users to push for alternate ID solutions): https://www.pornhub.com/blog/age-verification-in-the-news

----

> The ruling from the 00s was based on technology at the time, and considered what seemed to be the least invasive way to feasibly do it.

Citation very much needed; rulings from the 00s are still established case law. Nobody at the Supreme Court has said, "hey y'all, just ignore those, that was back when we thought networking was easier."

Also... you can still block network requests. A general reminder that uBlock Origin, 1.1.1.3, browser-level malware blocks, and Piholes are all things that work today and are going to continue to work for the foreseeable future even with encrypted DNS lookups.

----

> You said they placed large banners in certain states. Why not in all states? Or are they only placing banners after they've already had regulation passed against them?

You're moving goalposts. The fact is, you claimed that porn companies had made zero efforts to propose alternatives. And that's not correct, they have proposed alternatives. You claimed that they had never come up with standards for labeling. That's wrong; they came up with standards all the way back with IE6.

But now you can move to saying that the problem is that they didn't do enough advocacy. Personally, I feel (and the constitution agrees with me) that when an alternate solution for furthering state interests exists that doesn't abridge free speech, the state is obligated to pursue it -- that's part of what strict scrutiny requires.

----

> They also have the now-standard exceptions for things with literary/artistic/political/educational value. The "porn is speech" issue is tautologically handled by saying the laws only apply to the non-speech variety.

God, please grant me the confidence of a HN commenter saying that speech distinctions are handled by a bill saying "don't infringe speech." There is no reliable test for where to draw that line, it's silly to let the government decide where that line is on a case-by-case basis, there have been multiple Supreme Court cases pointing out that the government drawing that line on a case-by-case basis is unconstitutional, and we have political leaders on the books saying that they want to use proposed Internet filtering laws to abridge LGBTQ+ rights.

You yourself aren't applying those qualifications when you think about this -- you've argued elsewhere that somewhere between 30-40% of Reddit content is porn. How much of that porn has artistic/literary/political/educational value? What percent of Reddit porn is and isn't speech? Of course, that's not an easy question to answer.


So they put up a banner after these laws went into effect, only in states affected. My original point was where were their banners during the last 20 years? Obviously people have felt there's an issue. They did not put forward their idea. Other people did (even if it's a bad one). The article you posted also claims

> In recent weeks, ECP has lobbied Apple, Google and Microsoft

i.e. they were not doing it until they found themselves being regulated.

Your quote indicates that device-based age verification is not filtering:

> One possible version of the idea, Friedman told CNN, would be for the tech companies to securely store a person’s age information on a device and for the operating system to provide websites requesting age verification with a yes-or-no answer on the owner’s behalf

How you get that information is not specified. The rest of the article implies the idea is your phone would store your government ID. What they're suggesting seems compatible with these laws. Their suggestion is even explicitly spelled out as acceptable in the Utah law. Utah seems to already have an app for the device side to handle the ID. This site seems to be a demo of how to query it?

https://mdoc-reader-external.uc.r.appspot.com/

Like now I really don't understand what they're suggesting. They seem to be happy with what's being asked of them (at least in Utah and Louisiana)? Maybe they're still upset with Texas (though where they lack an existing system, they provide stronger privacy liability for a third party), but what's the issue with Utah?

Why are they starting discussions with Apple and Google to build it? Shouldn't they be integrating with the wallet provider who already has?

Are they upset that the timeline for integration was too short or the id app was missing part of the implementation? Why don't they complain about that if so?

My read at this point is that this is more of a stalling tactic. They seem to suggest they're not even actually against mandatory age verification, because at this point it seems to have already been thought through and implemented in a privacy-friendly, standardized way by at least two of these states.

On the tangent, most (all?) states have obscenity laws about giving e.g. porn to kids. Movie ratings are not mandatory because they are not obscenity without artistic merit. An R rated movie will be safe. A porn movie likely not. The government doesn't decide the artistic merit question; a jury does (it is a question of fact, not law).

Arizona where I grew up has a law specifically covering vending machines like Redbox, and says that if you did want to make a porn Redbox, you'd need to have a way to ensure the customer is an adult (e.g. a membership card or token that you buy with an id check). As far as I know no one's challenged it.


> So they put up a banner after these laws went into effect, only in states affected. My original point was where were their banners during the last 20 years?

No, your original point was, and I am literally quoting you here:

> They should be proposing systems that they believe are reasonable to meet their obligation, but they are not. Instead, they've gone from at least requiring credit cards to... absolutely nothing. They've frankly brought this on themselves.

This is categorically false. Not only are they proposing alternatives, not only have they only pulled out of states that do not offer a government ID system (even though it has offered criticism, Pornhub has not pulled out of Louisiana), they also proposed systems well before this legislation took effect -- like you said, the RTA standard has been around for ages.

No, Pornhub has not preemptively lobbied for it to be legislated, but that is hardly unusual and hardly a cause for criticism; companies generally don't preemptively lobby for themselves to be legislated unless they're shooting for regulatory capture. Quite frankly, usually when companies lobby against regulation, they don't put forward alternatives. It's unusual that content companies are going this far out of their way to try and help solve the problem instead of just pointing out flaws with the government proposal.

----

> Like now I really don't understand what they're suggesting.

There are several ways of approaching this: one is to do age verification using a standardized system -- ideally that system would be standardized on a federal level. Where states have such a system, Pornhub hasn't pulled out. This is the least-good solution, but it is a solution that Pornhub in specific seems to be generally fine with.

A better way of approaching this is to do age verification using a standardized system that is purely device-bound -- ie, a system where a flag is set purely locally, possibly with one-time verification through a company like Apple or Google, and where requesting websites are sent no data other than a general "yes/no" byte alongside requests. This would be a considerably better system for privacy and security, and it is the ideal that Pornhub in particular is advocating for. One reason why this system would be better is because once verified, verification data would never need to be transmitted off-device at any point. It would also not run the same risks of training customers to upload ID information to arbitrary websites, which is a large phishing risk.
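
To illustrate the shape of that design -- and this is purely hypothetical, no such API exists today, every name here is made up -- the entire surface a website would ever see is something like:

  from datetime import date

  # Hypothetical sketch of the "yes/no byte" idea: the OS keeps the
  # verified birth date locally and answers nothing but a boolean.
  class DeviceAgeAttestor:
      def __init__(self, birth_date: date):
          self._birth_date = birth_date  # never leaves the device

      def is_over(self, years: int) -> bool:
          today = date.today()
          age = today.year - self._birth_date.year - (
              (today.month, today.day)
              < (self._birth_date.month, self._birth_date.day)
          )
          return age >= years

  attestor = DeviceAgeAttestor(date(1990, 5, 17))
  print(attestor.is_over(18))  # the requesting site sees only True/False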

Pornhub's stance on this is weaker than my own. I would prefer for this to be handled entirely through filtering. In practice, the vast majority of parents can easily enter an age into a device when creating an account, and then any standardized age verification system could pull from that parental control with no need to ever expose sensitive ID information to even Apple/Google/Microsoft. Or, even better, parents could be given the option to be more granular with their filters, relying on devices to filter specific content and pages based on their own determinations about what their children can and can't see.

Pornhub also advocates for filtering solutions, but is comfortable with verification/blocking if there are systems in place that make that secure and private.

I don't know the specifics of Utah's digital ID system, but given that Pornhub hasn't pulled out of Louisiana, I would guess the reason they have pulled out of Utah is because they believe that Utah's system isn't secure enough or comprehensive enough to meet their needs. I can only guess what the reason would be -- whether it's a lack of desktop support, or whether the app transmits more data than Pornhub would like to receive, or some other critique. Maybe they will eventually adopt that system in Utah.

But the biggest critique Pornhub has around these laws is a de facto requirement to use 3rd-party ID systems or to collect data themselves. Because they (very correctly) point out that 3rd-party ID systems have security risks, are generally run by shady companies, and generally teach users bad data and privacy habits. Again, their stance is less extreme than mine; Pornhub is only lobbying for a workable ID system. I would argue that these ID systems are inherently insecure, inherently raise 1st-Amendment questions, and as designed fundamentally do a worse job of protecting kids than labeling laws would. I would also argue that several of the states pushing these laws have directly proposed creating registries of trans and LGBTQ+ citizens, and that, like the 3rd-party verification industry, those governments themselves should not be trusted with touching ID verification data at all (again, I would note that none of the bills bar collection of data for these purposes).

But Pornhub is OK with those systems... if they exist and are (somewhat) secure and private. Pornhub has some other critiques that I think are pretty reasonable (and that have been spelled out in the articles that I've linked), including the fact that the enforcement mechanisms (lawsuits rather than direct regulatory action) generally leave smaller and less responsible porn sites untouched and make kids more likely to visit them. And we've already covered how these laws fail to protect kids from porn spread on general social media like Reddit and on non-commercial sites like Mastodon. But the most basic critique Pornhub has is that the 3rd-party ID verification ecosystem as it exists today makes it dangerous to do this kind of verification.

> Why are they starting discussions with Apple and Google to build it? Shouldn't they be integrating with the wallet provider who already has?

A general solution here built into platforms is obviously preferable to a state-by-state solution, particularly given how bad most states are at building secure software. It makes a ton of sense to work with Apple and Google directly on this -- governments themselves should be working directly with Apple and Google on this.

----

> My read at this point is that this is more an attempt at stalling tactic.

Okay, think through this for a second. This doesn't make sense. Pornhub is pulling out of these states. Pornhub does not win in any of these interactions; there's no benefit to Pornhub to stalling, every day they stall hurts their business.

Paypal "stalls" when I try to withdraw money because they get something out of it, they get continued interest on the money they hold. Apple "stalls" on app store regulation because they get something out of it, they get continued revenue from the app store while regulators go back and forth with them. Pornhub doesn't get any of that -- they get zero revenue from these states while this is being litigated.

This does not make sense as an analysis. If Pornhub thinks that they're going to need to go back to these states, they lose more money the longer they wait. Clearly there's something else here going on other than just greed.

----

> Movie ratings are not mandatory because they are not obscenity without artistic merit. An R rated movie will be safe. A porn movie likely not.

What percentage of Reddit porn doesn't have artistic merit? This is nonsensical, you're still looking at a situation where 50 Shades of Grey and Game of Thrones are legal to show to children. That content would rightly fall under NSFW classifications on most sites, and I think most adults would agree that content shouldn't be shown to minors. By any reasonable definition, 50 Shades of Grey and Game of Thrones contain pornographic content. But it's still legal, and you're arguing that this kind of content wouldn't be covered under these laws.

> As far as I know no one's challenged it.

This does not necessarily mean that if it was challenged, it would hold up. Most of the movie industry voluntarily restricts access beyond what the law requires. What we do know is that when these laws have been challenged, particularly on the federal level, and particularly where the Internet is concerned, they've been difficult to defend and have been struck down in high-profile cases (https://en.wikipedia.org/wiki/Communications_Decency_Act).

Regulations on technological capabilities are not free from constitutional risk, but they are far less likely to run into these problems.

Now, if your point here is that these filtering laws are only going to protect kids from X-rated full-on smut with no plot, and that artistic pornography won't be covered -- then these aren't effective laws. They're not protecting kids. Yes, we have obscenity laws in the United States, but if we're going to go in-depth on those laws, we have to start with the point that "porn" and "obscenity" are not the same thing legally speaking. Porn can be obscenity, but not all porn is classified that way. You draw a bright line between R-rated movies and X-rated movies, but it's not the government that makes that classification, it's a completely arbitrary industry-drawn line. Where content online is concerned, there is no easy test to determine whether a pornographic piece of art or video has artistic merit -- and in fact R-ratings are not based on artistic merit or social value, only on how graphic or disturbing the content is.

Yet the laws you're championing require making that distinction on such a large scale that we would be able to tell what percentage of a website consists of obscenity. It's not realistic, it can't be done without disregarding 1st Amendment concerns.

If you're trying to protect kids from porn, it is not enough to target obscenity -- there is plenty of 1st Amendment protected pornographic speech that should never be shown to children. Which is why filtering laws in these situations are preferable; because they dodge (some) 1st Amendment concerns while allowing parents agency to filter material that would not fall under obscenity law, but that is still probably not a great thing for kids to look at.


I suppose I conjugated my verbs poorly then; the poor agreement between "should be" and "have been" may have hinted at that, but conceded: I should have written that they "should have been".

Like I said, it's quite difficult to find information about this stuff. I don't even know if RTA is what IE used. It's not clear that anyone notable ever implemented it. I don't see it referenced on bugzilla.mozilla.org. Mozilla came up with their own proposal (Prefer: safe) in 2014 and actually submitted it to the IETF, and didn't reference the Rating header. Did anyone try to tell them about it? They had like a 30% market share at the time. I can't find any references to it on issues.chromium.org either. I don't see any discussions on chromium's developer mailing list archives. I don't see it on the Android archives. Did they bring it to a lawmaker? To any standards body? To anyone?

Did they even reach out to tech companies like they said?

The howto for android https://www.rtalabel.org/index.html?content=howtoandroid just says you need to agree to their terms, gives no instructions, and has... an ad for travel services. Is there even an android implementation? This seems to be representative of the effort here.

Anyway, my original point was that the whole discussion seems to be disingenuous. They say they want an on-device age verification, and they even said that specifically in response to Utah's law. But Utah explicitly allows that already.

The reporting sucks. They didn't link to the laws. Almost none of the articles about this even name the laws (e.g. SB 287) so you have to go searching for it. The reporters don't seem to bother to read the laws, even when they're only 2 pages long. That CNN article says Pornhub doesn't like Utah's law because they want on-device verification. Utah's law explicitly allows for that, and they already have a working system. It's in fact an ISO standard, and seems to have wide traction building among US states:

https://www.mdlconnection.com/implementation-tracker-map/

(Incidentally, that site seems to be exactly what it looks like when someone is actually advocating for a proposal)

Why don't the reporters ask for some clarification on what they don't like about the law? Or the system? On their face, their complaints seem to be silly.

It's also disingenuous to characterize KYC services as shady. Their main customers are banks, and they're going to undergo annual audits for SOC 2, ISO 27001, etc. because every bank requires that. Their entire business is legal compliance as a service. If the law says not to store your info, they won't.

Pornhub may not be used to people who think this way, but in the financial services sector where these vendors currently operate, compliance with the law is just an assumed baseline feature. It is entirely normal for customers to have their own security architects examine your architecture documents, have multi-month back-and-forths about how to ensure legal requirements will be met, and require annual third party audits and penetration tests of your system. A company I worked for had a system to help automate answering these kinds of questions because they come up constantly.

Service providers here also already have to deal with both retention requirements and non-retention requirements like CCPA, and figuring out which data has which requirements. Pornhub's use-case is less complicated.

They complain they don't want to store whatever info. But the laws don't say they need to, and in fact say they must not. If they need help, there are companies who sell exactly that service.

Why don't the reporters ask for clarification on what appear superficially to be contradictions?


> Like I said it's quite difficult to find information about this stuff.

Quite honestly, I don't think it is. I'm not an expert on this, I'm using the same search engines you're using. I'm able to find stuff online.

> I don't see it referenced on bugzilla.mozilla.org. Mozilla came up with their own proposal (Prefer: safe) in 2014 and actually submitted it to IETF, and didn't reference the Rating header. Did anyone try to tell them about it? They had like a 30% market share at the time. I can't find any references to it on issues.chromium.org either. I don't see any discussions on chromium's developer mailing list archives. I don't see it on the Android archives

This is a lot of critique that boils down to "browser makers and lawmakers didn't implement it." But porn companies are not in charge of browsers. I could ask the same question in the opposite direction -- lawmakers have literally entire teams of paid staff to research this stuff, they are literally required by law under strict scrutiny to research it... and like I said above, I'm able to find information when I search online. So why weren't they able to find anything?

I don't think this is an excuse; I don't think lawmakers need to be babied about looking for potential solutions to bills when strict scrutiny is in play. Strict scrutiny does not say that the government should be narrow and specific and research alternatives unless nobody sent them an official proposal on letter paper, in which case how were they to know, we can just do whatever, all rules are off. Strict scrutiny places an obligation on the government to do research.

----

> That CNN article says Pornhub doesn't like Utah's law because they want on-device verification. Utah's law explicitly allows for that, and they already have a working system. It's in fact an ISO standard, and seems to have wide traction building among US states:

Looking more at it, I will say that MDL looks reasonably interesting, there's stuff here that I like quite a bit. I will also say that it's not available on Windows, Mac, or Linux, and that it doesn't look like it will ever work via 3rd-party ROMs. But sure, other than that it looks promising. And maybe Pornhub will adopt it at some point, I do think this system looks like it would be an improvement over a lot of ID verification I'm forced to do for services with KYC rules. So I'm all for that.

I will also point out that it's not available in Texas. And we have talked about this, you can't treat these laws like they're some kind of composite whole where one state addressing a problem means the other states no longer have that problem. Okay, you think that Pornhub is being disingenuous about Utah? Fine. The original link at the top of this thread is about VPN usage surging in Texas, which does not implement an MDL standard.

----

> The reporting sucks. They didn't link to the laws. Almost none of the articles about this even name the laws (e.g. SB 287) so you have to go searching for it.

> [...] Why don't the reporters ask for clarification on what appear superficially to be contradictions?

This is not specific to these laws, all political reporting about bills has this problem. Every time that I want to find the original text of a bill that's being reported on by even mainstream sites, I have to search for it. Could it be better? Sure, I regularly advocate that reporters should link to bill text. Do reporters in most interviews tend to ask only softball questions (regardless of who they're interviewing)? Yes. Does that common problem get rid of criticisms of the bills? No, it doesn't.

----

> It's also disingenuous to characterize KYC services as shady. Their main customers are banks, and they're going to undergo annual audits for SOC 2, ISO 27001, etc. because every bank requires that.

I will 100% stand by my representation. Common KYC services are shady. Credit reporting services are shady. This entire information economy is shady; it doesn't matter if they're working with the government. We're only a few years out from Equifax (which is used for customer verification sometimes) leaking the financial information of nearly every single adult American in the US. But what, they work with banks? They work with the people who haven't learned how to do proper 2FA yet? They work with the people who retain massive amounts of customer information and offer credit cards that are privacy nightmares? I have bad news for you about bank privacy in the US. None of these companies have a good track record on this.

I fully stand behind my characterization of them: these services are shady and should not be expanded recklessly to other areas of our life. I think that's an easy conclusion to draw.

> Their entire business is legal compliance as a service. If the law says not to store your info, they wont.

3rd-party KYC services fundamentally can not work without storing your info. Like, by definition -- the requirement is literally know your customer. That involves... knowing them. And comparing pre-gathered information is still storing info. You can not do a "verify your identity by telling us something we already know" question without already knowing the answer to the question that you're asking.

> They complain they don't want to store whatever info. But the laws don't say they need to, and in fact say they must not.

We have been over this multiple times already: no, they do not. None of these laws ban storing metadata or linking identities to requests by these 3rd-party companies. There is nothing in these laws that clearly prevents a 3rd-party ID service from aggregating data about which users have accessed porn. None of these laws ban government storage of information (and once again, states have said that they want to have databases of LGBTQ+ citizens). The majority of these laws do not offer sufficient penalties to incentivize companies not to violate restrictions (user-brought lawsuits are not sufficient, data privacy laws get violated all the time). None of these laws clarify how long information can be retained and most don't clarify what damages a user would actually be entitled to if their information was leaked.

----

I do want to loop back around to:

> Anyway, my original point was that the whole discussion seems to be disingenuous.

These bills have problems. At their best, even if MDL turns out to be great and private -- they're still going to increase user propensity to fall for phishing attacks, they still use a selective enforcement mechanism that will let off the worst actors, they still have 1st Amendment concerns, they still don't really address the majority of porn online (I will remind you that Reddit demands verification in zero of the states that have passed this legislation), they still have insufficient protections against data retention. They still require distinguishing between obscenity and porn on a scale that is impossible to do without abridging 1st-Amendment speech, and they still hew closely to similar federal attempts to legislate porn that have been ruled unconstitutional.

And we're reaching the point where we're basically arguing over "has Pornhub done enough? Why haven't they looked at this standard? Why didn't the government look at this standard? What are everyone's intentions?"

I want to take a step back and say that even if Pornhub did absolutely nothing (which again, I would argue they did not), that doesn't change anything at all about the objections to these bills. And if we're talking about disingenuous, it feels disingenuous to have a conversation that's constantly bouncing between incompatible statements like "this protects kids", and "R rated movies like 50 Shades of Grey wouldn't be covered", and "Mastodon wouldn't be affected" -- and to have all of those problems and contradictions swept under the rug in favor of "but Pornhub was asking for it."

We can look at the laws as implemented today and look at their effects and we can say objectively and indisputably -- they are not working. A lot of porn is still available in those states. So what the heck is the rest of this conversation? You don't need much evaluation beyond: you passed the law and r/insert-depraved-porn-sub is still available in your state without age verification, so... the law didn't work.

I do still feel like you're looking at this through a lens that misrepresents what most lobbying effort and what most political reporting looks like on every issue. But you know what? It doesn't matter. You think that Pornhub should have gotten more involved, great, that's very idealistic. You want political reporting to get better, great, that's an effort I can get behind. It doesn't mean that these bills don't have 1st Amendment concerns, don't contradict themselves in talking about retention and data collection while advocating 3rd-party services that literally cannot operate without collecting data, it doesn't mean the bills aren't vague. And it doesn't mean the bills work. And I'm sorry if you don't like porn companies, but these are still bad laws. I'm sorry if you think that porn companies aren't playing nice, but you're still spreading misinformation about ID verification and 1st Amendment protections as they exist today.

What is the disingenuous thing here: litigating whether or not Pornhub cares enough about kids, or dismissing obvious problems with legislation and spreading misinformation about that legislation just because you don't feel an industry was proactive enough in preempting it? I'll loop around again to -- I don't even care if you support the laws; fine. But don't say things about the laws that are not true.


I was surprised the first time I learned that Python will actually refuse to compile:

  def fibonacci(n):
     return n
because it knows that the Fibonacci sequence doesn't work that way. It's a fantastic language.


> Treat an LLM as a confident smart person who isn’t an expert in anything, and doesn’t have access to resources to check their answer (unless you give it access).

Sure, but I don't want that person helping me with my taxes.

Like, the idea that I get an answer and then I need to do research to figure out if it's correct... The whole point -- literally the entire point -- is that I don't want to do the research. If I wanted to do the research, I wouldn't use the LLM. I would just do the research. And now I'm using the LLM and I have to do the research anyway?

If we're talking about brainstorming or getting topic overviews or helping you start research, sure, I could see that. An LLM could be useful there. But asking direct questions and getting back plausible-sounding incorrect answers? This is a domain where an LLM just shouldn't be used at all.

I've brought up this analogy in the past, but it's like people proposing an LLM as a calculator, and when it gets answers wrong they say, "well, humans get math problems wrong too." Yes, they do. Why do you think we started using calculators instead of humans?

There is a reason why I don't go on Reddit to ask for tax advice. It's cool that people can make a computer that simulates that same use-case, but that is not a use-case that was useful to simulate. If I have to read the tax forms anyway, then I might as well just read the tax forms.


The LLM can do the research itself; current LLMs often just don't do it by default (and require a bit of prompting).

LLMs also use calculators (and again, if you prompt ChatGPT it will use a calculator to generate the result).

These things are solvable, we are still in the 'Commodore-PET' phase of LLM development, i.e. impressive for today but lots more to do to make it more useful for most people.
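
For what it's worth, the calculator half of that is deterministic and boring -- here's a minimal sketch of the kind of tool an LLM can be made to call instead of generating arithmetic token-by-token (illustrative only, not any particular vendor's tool-call API):

  import ast
  import operator

  # Safely evaluate basic arithmetic by walking the parsed syntax tree,
  # so the answer is computed rather than predicted.
  OPS = {
      ast.Add: operator.add,
      ast.Sub: operator.sub,
      ast.Mult: operator.mul,
      ast.Div: operator.truediv,
  }

  def calculate(expr: str) -> float:
      def walk(node):
          if isinstance(node, ast.BinOp) and type(node.op) in OPS:
              return OPS[type(node.op)](walk(node.left), walk(node.right))
          if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
              return node.value
          raise ValueError("unsupported expression")
      return walk(ast.parse(expr, mode="eval").body)

  print(calculate("12 * (3 + 4)"))  # 84, every time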


The thing is, a confident smart person who isn’t an expert in anything, who DOES have access to resources to check their answers is still not someone I want doing my taxes.

The only way that an LLM doing my taxes for me is useful is if I trust it to the same level that I trust an expert; and that includes being able to very reliably know when that person is actually confident and when they're not. I haven't seen strong evidence that this is possible with the current LLM approach.

So if we're thinking of LLMs as a confident smart person who sometimes spouts bullcrap -- that's still just not a good fit for this kind of product category. It's a good fit for other tasks, just... not for these.

It's definitely not a good fit for putting in front of problems that are already pretty-well solved. Calculators replaced human calculation, it's a solved problem, you put the numbers into the calculator. Even natural-language input into the calculator is a reasonably solved problem, it's not a task that really requires a full LLM of the scale of GPT-4. So is there value in introducing a human-like source of error into the middle of that process again? I feel like the answer is 'no'.

> These things are solvable, we are still in the 'Commodore-PET' phase of LLM development

Sure, if we get to the point where an LLM can give such good tax information that there literally is no need to read the tax documents to confirm any of it, then great! But that's not even remotely where we are, and products like this straight-up shouldn't be launched until we're at that point.

This is sort of like selling me a scuba tank and saying, "it might not dispense oxygen so make sure you hold your breath the entire time", and when I say, "well what's the point of the tank then" the reply I get back is, "well eventually we're going to get it to reliably dispense oxygen!"

Okay, but the products are all launching now, I'm being asked to buy into this now, not eventually. The future doesn't change the fact that the current products that I'm supposed to get excited about are literally useless at a conceptual level -- that they do not even make sense given their caveats. I'm not complaining about AI chat bots or funny toys or even toy phone assistants, I'm complaining that companies like Intuit are literally launching tax assistants that give incorrect information and their response is, "well, double check what the bot says." Well that is not a tax assistant then, that's just work. The service that the chatbot is offering me is that I'll have to work more.

Fundamentally, there appears to be a misunderstanding from these companies about what a product is.


> If those communities and apps already respect their users, then do they need this author? What difference would they make?

What? Does software development just go away if you respect your users? Do people not want 3rd-party Mastodon or Matrix clients just because the orgs aren't running advertisements on their servers? Is the idea that I install Linux and everything just magically works because Linus isn't trying to sell me something?

Building services that respect users is still work. It's still a thing that people can help with and support, either directly or through 3rd-party tooling and extensions.


> Do people not want 3rd-party Mastodon or Matrix clients

No, there isn't a reasonably sized market for many 3rd party Mastodon or Matrix clients.


This is silly, of course there is. There's observably a larger market for 3rd-party Mastodon and Matrix clients than there is for a Nebula filtering tool.

Also, are you seriously going to argue this? Are you seriously going to double down on the idea that services that respect their users don't need development help or tooling support or developer ecosystems?

Have you heard of Blender? This is just straight-up silly, I don't know what else to say about it. FOSS projects and platforms have the same exact support and community needs as any proprietary service does.


> Also, are you seriously going to argue this? Are you seriously going to double down on the idea that services that respect their users don't need development help or tooling support or developer ecosystems?

No, I'm doubling down on challenging the author to be more audacious:

a) being honest with oneself and learning from failure (they call their project a "big success")

b) be more ambitious - building or contributing to yet another 3rd party Mastodon or Matrix client (both?) is just more noise in a crowded space going after a relatively small audience.


> No, I'm doubling down on challenging the author to be more audacious:

You've edited your comment since it was originally posted, but as a reminder, this is what I was responding to:

> If those communities and apps already respect their users, then do they need this author? What difference would they make?

Which is a pointedly ridiculous thing to say. No, you were not calling on the author to be more ambitious, this is silly.

---

> b) be more ambitious - building or contributing to yet another 3rd party Mastodon or Matrix client (both?) is just more noise in a crowded space going after a relatively small audience.

Donation of time and effort to make platforms more attractive that you are ethically aligned with is not just "noise in a crowded space".

As if FOSS tooling was a crowded space!! These projects are desperate for volunteers, and lack of developer time and resources is one of the biggest handicaps that these projects often face. It is unquestionably valuable for people to volunteer effort in this direction.

It might not be flashy, it might not give you a giant userbase. But some people have different metrics of success than that, and high-impact activities often aren't flashy. You bring up building new projects as an alternative to donating to existing efforts. Yeah, that sounds very attractive and flashy, but there are a half-dozen FOSS platform alternatives to every proprietary service and we don't actually need more of them. What we need is for people to focus on a couple that already exist and make them as good as possible. It's less sexy, but it matters more.


> As if FOSS tooling was a crowded space!! These projects are desperate for volunteers, and lack of developer time and resources is one of the biggest handicaps that these projects often face

thinking about eg the recent xz vuln, isn't this somewhat indicative of a licensing failure on the part of FOSS licenses? the point of licensing is to ensure the author is properly compensated for his work, and if the work is used by billion dollar companies while authors scrape by and beg for donations, it seems to me that the license isn't doing its job. the license should capture some value and give it to the author.

it's completely absurd that professional tooling is begging for volunteers. you don't see that in any other field.


That could be a longer conversation, but I think there's a subset of community-run projects that would be desperate for volunteers even if they were proprietary. In general most non-exploitative volunteer spaces are clamoring for volunteers even outside of software. Your local library is probably clamoring for volunteers (depending on the location). If you're in a rural area your school is probably clamoring for substitute teachers. And on and on.

I have opinions about the movement away from FOSS to source available licenses, but I think independent of all of that, smaller projects that fulfill important niches but that are not easily monetizable generally need help, and I don't think that would change if software licenses changed. Some projects could probably get better funding, but many would be in the same position.

I think in general there is more productive stuff to do in the world than there are people available to do it -- and I don't just mean in software, I mean everywhere. The world is held together by duct tape because a surprisingly small proportion of people volunteer to duct tape it together, and any effort that anyone expends towards helping them and making the world better instead of exclusively chasing whatever the next sexy high-visibility project is -- I think that's important, impactful work.


The point of FOSS licenses is keeping code free and open-source, not compensating anyone.


What does "reasonably sized market" mean in this context, and what does it imply about projects that lack them?

More specifically, why does there being some people that want those things mean something other than some people want those things? It almost sounds like you're trying to have a different conversation.


> What does "reasonably sized market" mean in this context, and what does it imply about projects that lack them?

Projects that lack a reasonably sized market are fine, but their impact will be confined. It is a tradeoff you choose to make.

> More specifically, why does there being some people that want those things mean something other than some people want those things? It almost sounds like you're trying to have a different conversation.

Because if you want to have an impact, you should choose where and how to spend your time. You should pour a different amount of effort and resources into doing something for 1 person vs. 800 people vs. 1,000,000.


Was impact something I missed that was already in this discussion?


For Matrix, there definitely is. Some of the third-party clients are just plain better than Element.


> For Matrix, there definitely is. Some of the third-party clients are just plain better than Element.

Which client do you use the majority of the time?


Not the original poster, but Cinny is better than Element


Nheko Reborn[0].

[0]: https://nheko-reborn.github.io/


Fluffychat


Anything missing from it that you wish it had?

Or something it has that you would prefer if it were gone?


Video calls are a big missing feature, that's one of the only reasons I have an alternative installed alongside it. From what I've heard, the devs have said that the reason why native audio/video calls aren't planned literally just comes down to complexity and the maintenance cost.

Matrix has been focusing more recently on trying to break some core functionality out into modules that can be imported -- encryption being the biggest example of this. Movement in that direction for video/audio would be a big deal. It's not just that Fluffychat needs help getting video working -- that would only leave them with another chunk of code to maintain. Ideally, they should be able to import a common library for this, but as far as I know, none exists that's usable for their needs.

Matrix video calls are a good example of this kind of ecosystem need in general -- Element had video calls before the spec actually supported video... because it used a Jitsi plugin. And that 3rd-party offering bought time for the spec authors to come up with more robust native support.

The problem is that 3rd-party clients offering the same support is still wildly complicated. There's a lot of room for improvement there.


i am pretty sure most people use third-party mastodon clients, if you're talking about apps. they didn't even have a first-party client until recently.


> Being "against" something is a valid goal in life, but then you really have to be very strategic about it. Being "for something" and pouring your energy into making something that people want or need seems more productive.

From the announcement:

> but I'd rather spend my energy on making the non-commercial web more attractive.

The author pretty explicitly states that they want to shift from an "against something" mentality (against disruptive content in proprietary apps, against the intended user-experience of those apps) to a "for something" mentality (building and supporting non-commercial services).

I genuinely do not see the complaint.

> With only 800 active users, letsblock.it obviously didn't have any measurable effect.

Unless the author was planning on never making a popular project, I don't think this is a good way of evaluating direction or effort. In either case, putting in a huge amount of effort to support 800 active users who might otherwise (at least partially) shift their attention to better services seems reasonable to question. If we take it that there is any value in improving experiences for a small number of people, then there is equal value in making it more pleasant for those people to use Libre services.

And of course that's even before asking about the opportunity cost. If an author can take the same amount of time they were devoting to this and instead build tools that make a Libre/Community service more attractive for 800 people, that's arguably a much higher-impact activity for the health and growth of that service than wasting that effort trying to make proprietary platforms palatable.

But again, if your point here is to focus in on a mission, starting with "nothing I build will have any impact on any of this" is just not really helpful at all.

> Supporting the "non-commercial web" seems too vague in my opinion.

A general mission statement/direction is often the first step towards narrowing down product ideas. I think making a decision in a direction (ie, pivoting from doing free UX enhancements for commercial companies towards saying, "I want to benefit services that don't feel exploitative") is a good place to start. Of course over time the author will probably narrow that focus, but this at least lays out a category that they can start looking into.


> > Supporting the "non-commercial web" seems too vague in my opinion.
>
> A general mission statement/direction is often the first step towards narrowing down product ideas. I think making a decision in a direction (ie, pivoting from doing free UX enhancements for commercial companies towards saying, "I want to benefit services that don't feel exploitative") is a good place to start. Of course over time the author will probably narrow that focus, but this at least lays out a category that they can start looking into.

This further emphasizes the lack of clarity and focus, and the suboptimal strategy. When I read your assessment of their previous strategy -- "doing free UX enhancements for commercial companies" while the person is against commerce -- I do not walk away with a sense that they have reconciled for themselves what is worth going after. So I predict that the non-specific "I want to support communities and applications that respect their users and value what we have to say" will likely also have no registrable impact.

I have more candid feedback for the author, and that is to take a clearer look at how they evaluate themselves. They say "launching letsblock.it and keeping it running for over two years is a big success in my book." Instead they should call it what it is - a failure - and learn from it. Failure is fine, failure is great, even. They would be more successful by dreaming bigger and with more focus.


> When I read your assessment of their previous strategy "doing free UX enhancements for commercial companies" while the person is against commerce, I do not walk away with a sense that they have reconciled for themselves what is worth going after [...]

> Instead they should call it what it is - a failure - and learn from it.

I'm going to be really blunt here, it sounds a lot less like your critique is that the author isn't clear about their goals or that the author doesn't know how to evaluate themselves -- and more like your critique is that the author's evaluation of themselves and their goals doesn't match yours.

Running any project with 800 users for 2 years as a hobbyist can be reasonably called a success. This reads a lot like how VC people will come into Mom and Pop shops and say, "this business is a failure, they just have years of loyal customers in a niche, what a disgrace! They obviously haven't thought enough about their product focus."


> Running any project with 800 users for 2 years as a hobbyist can be reasonably called a success.

That's not success, and certainly not a "big success" which is the author's self-reflection.

Unless the author's goal was to run something for 2 years, accumulate 800 users, and then shut it down.

> This reads a lot like how VC people will come into Mom and Pop shops and say

I think they set out to make a big impact. That means growth, but doesn't imply commercial success.


> That's not success, and certainly not a "big success" which is the author's self-reflection.

Again, the author is not obligated in any way to align themselves to your definition of success. And them disagreeing with your definition of success is not the same thing as them being confused or not having thought enough about what they want. It might just mean they disagree with you.

> I think they set out to make a big impact. That means growth

No, not necessarily. Growth can be a component of impact, but they are not synonymous, and many highly impactful projects never see a lot of attention or direct growth -- they enable other projects to succeed or fix some of the many diverse pain points that subsets of users for those projects have.


I wrote about this issue back in 2018 regarding specifically Chrome, and Firefox's settings are derivative of Chrome's: https://danshumway.com/blog/chrome-autoplay/

At the time, there was a lot of noise about the fact that autoplay settings were breaking parts of the web, and I do think that was a problem with Chrome's setup that never really got addressed. My article focused only on Chrome's changes.

Firefox's approach was (imo) better -- it didn't have as much of the weird AI-driven "figure out which domains users interact with" nonsense -- but Firefox's approach was still very clearly influenced by Chrome's, and I would argue that Chrome's approach was incorrect.

At the time, there were two concerns, and Chrome's approach was only intended to handle one of them:

1. Autoplay videos use a lot of bandwidth

2. Autoplay videos are disruptive (specifically, they make noise)

Chrome was worried about #2. They argued (and I agree with this) that these are two separate concerns that need to be tackled separately. But Chrome didn't really go all-in on solving problem #2; they had this weird hybrid approach where they were still trying to stop the data from being streamed, but only sometimes: if you muted the video it would still be streamed, but if you didn't, it wouldn't be...

So it became the worst of both worlds.
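
A minimal sketch of that hybrid behavior (the URL and setup here are invented; the promise-returning play() is the real web API):

    // Minimal sketch of the hybrid policy described above; the URL is invented.
    function tryAutoplay(muted: boolean) {
      const video = document.createElement("video");
      video.src = "https://example.com/clip.mp4";
      video.muted = muted;
      document.body.appendChild(video);
      // play() returns a promise that rejects with NotAllowedError when the
      // autoplay policy blocks playback.
      video.play().then(
        () => console.log(muted ? "muted: allowed, data still streams" : "unmuted: allowed"),
        (err) => console.warn("blocked:", err.name),
      );
    }

    tryAutoplay(true);  // typically allowed -- and bandwidth is still spent
    tryAutoplay(false); // typically blocked without prior "engagement"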

At the time I argued (and I still think this would be a better approach) that given that the goal was entirely about stopping audio, this should have all been handled through automatic tab muting, not through a change to the web APIs.

That's not to say that blocking large amounts of data or stopping the visual aspects of videos isn't important, but the approach Chrome went with (and that Firefox has subsequently inherited) kind of does nothing well. And I think we still see the effects of that today, even in browsers like Firefox that were admittedly a bit more sensible about not adopting some of Chrome's worst ideas.

Interactions were also a sticking point: Chrome interpreted even highlighting text as a signal that autoplay should be allowed -- which is obviously very easily abusable. I generally think that the user-gesture requirement for permissions is not great; it severely limits what users can do on a page while still signaling that they don't want to grant permission for a random action like autoplaying audio.
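
For illustration, here's roughly how pages game that requirement -- wait for any qualifying gesture, then unmute inside the handler. This is a sketch of the general pattern, not any particular site's code:

    // Sketch: pages just wait for the first qualifying gesture, then unmute.
    function unmuteOnFirstGesture(video: HTMLVideoElement) {
      const unlock = () => {
        video.muted = false;
        // Running inside a click handler counts as a user gesture, so
        // play() with sound will normally succeed here.
        video.play().catch(() => { /* still blocked; wait for another */ });
        document.removeEventListener("click", unlock);
      };
      document.addEventListener("click", unlock);
    }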

It's a tricky problem to solve, but I also think it's a problem that's harder to solve because of some of the previous baggage we've inherited from previous efforts to solve it.


I put some of my family members on Ubuntu. It worked about as well as Windows ever did, but it was hard to keep up-to-date, and every change I made to get them more recent software was a complication for future support. The end result was that they had a stable computer, but one that was difficult to make do new things.

The last straw was jumping through the hoops to get proper Vulkan support on their video card for a game running through Wine. I switched them over to EndeavourOS and it's been loads easier; a lot of stuff works out-of-the-box specifically because it's a rolling release.

I think people sometimes underestimate how important it can be to get OS/software patches immediately. Flatpak helps a little bit with that as well. Some of the difficulty with Ubuntu was resolving dependencies -- but even then, a lot of that difficulty came from trying to get bleeding-edge or self-compiled software working with dependencies that just weren't updated yet in the Debian repos.

I spent a while worrying about whether being on Arch would mean everything would be unstable, but I've had the opposite experience; in practice it just means I have to break fewer things to get software running.

