I'm very hesitant to make those sorts of accusations, but the writing has multiple hallmarks of LLMs and this is one of three articles posted today by the same author to that blog before noon their local time. I guess this is just what the internet is now, constantly wondering whether you're reading actual thoughts of another human being or whether it is just LLM output generated to stick between ads.
Dyslexia is just the overall name for a learning disability that causes difficulty with reading or writing. There is no unified cause or group of causes; the diagnosis is based entirely on symptoms.
Therefore, the only things that will "work" for all dyslexics are things that fundamentally make reading and writing easier for everyone and not just dyslexics. So something like a font can help in the same way some fonts are easier to read than others, but the idea of a "dyslexia font" is a little silly.
The patent system. I know someone will respond detailing why the patent system is pro-business, but it is objectively government regulation that puts restrictions on new technology, so it's proof that regulation of that sort is at least an American tradition if not fully an "American value".
Patents and trademarks are the only ways to create legal monopolies. They are/were intended to reward innovation but despite good intentions are abused.
Not exactly. For example, Major League Baseball has been granted an anti-trust exemption by the US Supreme Court, because the Court said it was not a business. In some cases in which firms have been found guilty of violating the anti-trust laws, they were fined amounts minuscule in relation to the profits they gained by operating the monopoly. Various governments in the US outsource public services to private monopolies, and the results have sometimes amounted to a serious restraint of trade.

The chicanery goes back a long way. For the first decade or so after the passage of the Sherman Act, it was not used against the corporate monopolies that it was written to limit; it was invoked only against labor unions trying to find a way to get a better deal out of the firms operating company stores, company towns, and so on.
Then Teddy Roosevelt, the so-called trust-buster, invoked it under the assumption that he could tell the difference between good and bad monopolies and that he had the power to leave the good monopolies alone. 120 years later, we are in the same sorry situation.
Intellectual property restrictions cause harm even when used as intended. They are an extreme restriction on market activity, and I believe they cause more harm than good.
Patents, trademarks, copyright, deeds, and other similar concepts are part of what makes capitalism what it is; without them, capitalism will not work, because they are the mechanisms that enforce private property.
Good luck with that. When 3/4 of the world laughs at your patent what is the point of patents? IP only works when everyone agrees to it. When they don't it's just a handicap on the ones who do that benefits nobody.
This is a perfect example of the power and problems with LLMs.
I took the narcissistic approach of searching for myself. Here's a grade of one of my comments[1]:
>slg: B- (accurate characterization of PH’s “networking & facade” feel, but implicitly underestimates how long that model can persist)
And here's the actual comment I made[2]:
>And maybe it is the cynical contrarian in me, but I think the "real world" aspect of Product Hunt is what turned me off of the site before these issues even came to the forefront. It always seemed like an echo chamber where everyone was putting up a facade. Users seemed more concerned with the people behind products and networking with them than actually offering opinions of what was posted.
>I find the more internet-like communities more natural. Sure, the top comment on a Show HN is often a critique. However I find that more interesting than the usual "Wow, another great product from John Developer. Signing up now." or the "Wow, great product. Here is why you should use the competing product that I work on." that you usually see on Product Hunt.
I did not say nor imply anything about "how long that model can persist"; I just said I personally don't like using the site. It's a total hallucination to claim I was implying doom for "that model", and you would only know that if you actually took the time to dig into the details of what was said, but the summary seems plausible enough that most people never would.
The LLM processed and analyzed a huge amount of data in a way that no human could, but the single in-depth look I took at that analysis was somewhere between misleading and flat out wrong. As I said, a perfect example of what LLMs do.
And yes, I do recognize the funny coincidence that I'm now doing the exact thing I described as the typical HN comment a decade ago. I guess there is a reason old me said "I find that more interesting".
I'm not so sure; that may not have been what you meant, but that doesn't mean it's not what others read into it. The broader context is that HN is a startup forum, and one of its most common discussion patterns is 'I don't like it' as a stand-in for 'I don't think it's viable as-is'. Startups are default dead, after all.
With that context, if someone were to read your comment and be asked 'does this person think the product's model is viable in the long run' I think a lot of people would respond 'no'.
And this is a perfect example of how some people respond to LLMs, bending over backwards to justify the output like we are some kids around a Ouija board.
"The LLM isn't misinterpreting the text, it's just representing people who misinterpreted the text" isn't the defense you seem to think it is.
And your response here is a perfect example of confidently jumping to conclusions on what someone's intent is... which is exactly what you're saying the LLM did to you.
I scoped my comment specifically around what a reasonable human answer would be if one were asked the particular question it was asked with the available information it had. That's all.
Btw I agree with your comment that it hallucinated/assumed your intent! Sorry I did not specify that. This was a bit of a 'play stupid games, win stupid prizes' prompt by the OP. If one asks an imprecise question one should not expect a precise answer. The negative externality here is that readers' takeaways are based on false precision. So is it the fault of the question asker, the readers, the tool, or some mix? The tool is the easiest to change, so probably deserves the most blame.
I think we'd both agree LLMs are notoriously overly-helpful and provide low confidence responses to things they should just not comment on. That to me is the underlying issue - at the very least they should respond like humans do not only in content but in confidence. It should have said it wasn't confident about its response to your post, and OP should have thus thrown its response out.
Rarely do we have perfect info, in regular communications we're always making assumptions which affect our confidence in our answers. The question is what's the confidence threshold we should use? This is the question to ask before the question of 'is it actually right?', which is also an important question to ask, but one I think they're a lot better at than the former.
Fwiw you can tell most LLMs to update their memory to always give you a confidence score from 0.0 to 1.0. This helps tremendously, it's pretty darn accurate, it's something you can program thresholds around, and I think it should be built into every LLM response.
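To make the "program thresholds around it" idea concrete, here's a minimal sketch, assuming you've already told the model to end every answer with a line like "confidence: 0.0-1.0"; the regex, the 0.7 cutoff, and the example text are all illustrative choices, not a built-in feature of any LLM:

    import re

    CONFIDENCE_THRESHOLD = 0.7  # arbitrary cutoff; tune per question/domain

    def extract_confidence(response_text: str) -> float | None:
        """Pull a trailing 'confidence: X.X' self-score out of a response, if present."""
        match = re.search(r"confidence:\s*([01](?:\.\d+)?)", response_text, re.IGNORECASE)
        return float(match.group(1)) if match else None

    def accept_answer(response_text: str) -> bool:
        """Keep only answers whose self-reported confidence clears the threshold."""
        score = extract_confidence(response_text)
        if score is None:
            return False  # no score reported: treat it as something the model should not have commented on
        return score >= CONFIDENCE_THRESHOLD

    response = "slg: B- (plausible-sounding grade of the comment)\nconfidence: 0.4"
    print(accept_answer(response))  # False -> throw the response out instead of presenting it as fact

In OP's scenario, anything below the threshold would simply never make it into the published summary.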
The way I see it, LLMs have lots and lots of negative externalities that we shouldn't bring into this world (I'm particularly sensitive to the effects on creative industries), and I detest how they're being used so haphazardly, but they do have some uses we also shouldn't discount and figure out how to improve on. The question is where are we today in that process?
The framework I use to think about how LLMs are evolving is that of transitioning mediums. Movies started as a copy/paste of stage plays before filmmakers settled into the medium and understood how to work along the grain of its strengths & weaknesses to create new conventions. Speech & text are now transitioning into LLMs. What is the grain we need to go along?
My best answer is the convention LLMs need to settle into is explicit confidence, and each question asked of them should first be a question of what the acceptable confidence threshold is for such a question. I think every question and domain will have different answers for that, and we should debate and discuss that alongside any particular answer.
I feel like you're getting at something different here, but my conclusion is that maybe the problem is the approach of wanting to monetize each interaction.
Almost every company today wants their primary business model to be as a service provider selling you some monthly or yearly subscription when most consumers just want to buy something and have it work. That has always been Apple's model. Sure, they'll sell you services if need be, iCloud, AppleCare, or the various pieces of Apple One, but those all serve as complements to their devices. There's no big push to get Android users to sign up for Apple Music for example.
Apple isn't in the market of collecting your data and selling it. They aren't in the market of pushing you to pick brand X toilet paper over brand Y. They are in the market of selling you devices and so they build AI systems to make the devices they sell more attractive products. It isn't that Apple has some ideologically or technically better approach, they just have a business model that happens to align more with the typical consumers' wants and needs.
> I feel like you're getting at something different here, but my conclusion is that maybe the problem is the approach of wanting to monetize each interaction.
Personally, Google lost me as a search customer (after 25 years) when they opted me into AI search features without my permission.
Not only am I not interested in free tier AI services, but forcing them on me is a good way to lose me as a customer.
The nice thing about Apple Intelligence is that it has an easy to find off switch for customers who don't care for it.
> The nice thing about Apple Intelligence is that it has an easy to find off switch for customers who don't care for it.
Not even only that, but the setup wizard literally asks if you'd like it or not. You don't even have to specifically opt-out of it, because it's opt-in.
Google is currently going full on Windows 10, for 'selected customers', with Gemini in Android. '(full screen popup) Do you want to try out Gemini? [Now] [Later]' 2 hours later... Do you want to...
Yes, there are always ways to deal with companies who make their experience shitty. The point is that you shouldn't have to, and that people will leave for an alternative that doesn't treat them like that.
I feel like this is 5 or so years out of date. The fact that they actually have an Apple Music app for Android is a pretty big push for them. Services is like 25% of their revenue these days, larger than anything except the iPhone.
As I said elsewhere, it really depends on the definition of "service". Subscriptions make up a relatively small minority of that service revenue. For example, 30 seconds of searching suggests that Apple Music's revenue in 2024 was approximately $10b compared to the company as a whole being around $400b. That's not nothing, but it doesn't shape the company in the way that its competitors are shaped by their service businesses.
The biggest bucket in that "service" category is just Apple's 30% cut of stuff sold on their platform (which it also must be noted, both complements and is reliant on their device sales). That wouldn't really be considered a "service" from either the customer perspective or in the sense of traditional businesses. Operating a storefront digitally isn't a fundamentally different model than operating a brick and mortar store and no one would call Best Buy a "service business".
Call me a naïve fanboy, but I believe that Apple is still one of the very few companies that has an ideologically better approach that results in technically better products.
Where everyone else sells you stuff to make money, they make money to create great stuff.
I know you're saying that Apple's business model is selling devices but it's not like they aren't a services juggernaut.
Where I think you are ultimately correct is that some companies seem to just assume that 100% of interactions can be monetized, and they really can't.
You need to deliver value that matches the money paid or the ad viewed.
I think Apple has generally been decent at recognizing the overall sustainability of certain business models. They've been around long enough to know that most loss-leading businesses never work out. If you can't make a profit from day one, what's the point of being in business?
It depends. I guess you can argue this is true purely from scale. However, we should also keep in mind there are a lot of different things that Apple and tech companies in general put under "services". So even when you see a big number under "Service Revenue" on some financial report, we should recognize that most of that was from taking a cut of some other transaction happening on their devices. Relative to the rest of their business, they don't make much from monthly/yearly subscriptions or monetizing their customers' searches/interactions. They instead serve as a middleman on purchases of apps, music, movies, TV, and now even financial transactions made with Apple Card/Pay/Cash. And in that way, they are a service company in the same way that any brick and mortar store is a service company.
I'm confused at what you're trying to say here. Why exactly doesn't the service revenue matter again? For some pedantic reason of Apple being metaphorically similar to a brick and mortar store?
Apple's services revenue is larger than Macs and iPads combined, with a 75% profit margin, compared to under 40% for products (hardware).
Yeah, they serve as a middleman...an incredibly dominant middleman in a duopoly. 80% of teenagers in the US say they have an iPhone. Guess what, all that 15-30% app store revenue is going to Apple. That's pretty much the definition of a service juggernaut.
I also don't agree with you about the lack of selling Apple services to non-Apple users. TV+ is a top-tier streaming service with huge subscriber numbers, and their app is on every crappy off-brand smart TV and streaming stick out there. Yes, there really are Android users who subscribe to Apple Music - 100 million+ downloads on the Google Play store, #4 top grossing app in the music category.
>Why exactly doesn't the service revenue matter again? For some pedantic reason of Apple being metaphorically similar to a brick and mortar store?
You seem to be operating under the notion that anything that isn't a device sold is a service. I think that definition is too broad to have any real value and that we should look at the actual business model for a product to determine its categorization. I'm not sure what else to say if you're just going to dismiss that as "pedantic".
But either way, it should be obvious that "services" (however they are defined) are a smaller part of Apple's business than they are for Microsoft, Google, Meta, Twitter, Oracle, OpenAI, Anthropic, and most other players in both the general tech and AI spaces.
It's really interesting to consider an area where they are being successful with their AI: the notification summaries work pretty well! It's an easy sell to the consumer bombarded with information/notifications all over the place that on-device processing can filter this and cut out clutter. Basically, don't be annoying. I think a lot of people don't really know how well things like their on-device image search work (it'll OCR an upside-down receipt sitting on a table successfully); I never see them market that strength, judging by the number of people with iPhones who are surprised when I show them this on their own phones.
HOWEVER, you would never know this though given the Apple Store experience! As I was dealing with the board swap in my phone last month, they would have these very loud/annoying 'presentations' every like half hour or so going over all the other apple intelligence features. Nobody watched, nobody in the store wanted to see this. In fact when you consider the history of how the stores have operated for years, the idea was to let customers play around with the device and figure shit out on their own. Store employee asks if they need anything explained but otherwise it's a 'discovery' thing, not this dictated dystopia.
The majority of people I heard around me in the store were bringing existing iPhones in to get support with their devices because they either broke them or had issues logging into accounts (lost/compromised passwords or issues with passkeys). They do not want to be constantly told about the same slop every other company is trying to feed them.
They’ve done loud, in-store presentations for longer than Apple Intelligence has been a thing, but you’re right that it’s a captive audience of mostly disinterested people.
And that is a relatively harmless academic pursuit. What about topics that can lead to true danger and violence?
"You're exactly right, you organized and paid for the date, that created a social debt and she failed to meet her obligation in that implicit deal."
"You're exactly right, no one can understand your suffering, nothingness would be preferable to that."
"You're exactly right, that politician is a danger to both the country and the whole world, someone stopping him would become a hero."
We have already seen how personalized content algorithms that only prioritize getting the user to continue to use the system can foment extremism. It will be incredibly dangerous if we follow down that path with AI.
It is always enlightening when people criticizing "virtue signaling" accidentally reveal that the problem they have is not the signaling, it's the having virtue.
There was a time when one of the virtues was not to brag about how virtuous you were. I think that's why a lot of folks have a problem with virtue signalling. In their minds if you're signalling by doing something publicly it karmically negates what you're doing and almost alchemically turns it into something resembling vice.
I'm merely trying to explain how it is that people can have a problem with virtue signalling and to them it doesn't really contradict what is to them true virtue where you do something good and stay quiet about it.
This comment feels like it was made outside the context of the existing conversation. The comment I replied to was calling all charity virtue signaling and not just vocal giving.
But either way, I personally don’t think a library is any less valuable to a community just because it has Carnegie’s name above the entrance.
Society providing incentives for rich people to give money to charitable causes is good actually. An evil person doing good things for selfish reasons is still doing good things.
The real problem comes when you look up what charity actually does with the money.
It is hard to not get the feeling that outside of the local food bank, most charities are a type of money making scam when you dig into what they do with the money.
Amend Section 230 so that it does not apply to content that is served algorithmically. Social media companies can either allow us to select what content we want to see by giving us a chronological feed of the people/topics we follow or they can serve us content according to some algorithm designed to keep us on their platform longer. The former is neutral and deserves protection, but the latter is editorial. Once they take on that editorial role of deciding what content we see, they should become liable for the content they put in front of us.
So Hacker News should lose section 230 protection?
Because the content served here isn't served in chronological order. The front page takes votes into account and displays hotter posts higher in the feed.
Technically sorting by timestamp is an "algorithm" too, so I was just speaking informally rather than drafting the exact language of a piece of legislation. I would define the categories as something like algorithms determined by direct proactive user decisions (following, upvoting, etc) versus algorithms that are determined by other factors (views, watch time, behavior by similar users, etc). Basically it should always be clear why you're being served what you're being served, either because the user chose to see it or because everyone is seeing it. No more nebulous black box algorithms that give every user an experience individually designed to keep them on the platform.
This will still impact HN because of stuff like the flame war downranker they use here. However, that doesn't automatically mean HN loses Section 230 protection. HN could respond by simplifying its ranking algorithm to maintain 230 protections.
I think the best way to put it is, users with the same user picked settings should see the same things, in the same order.
That's a given on HackerNews, as there's only one frontpage. On Reddit that would be, users subscribed to the same subreddits would always see the same things on their frontpages. Same as users on YouTube subscribed to the same channels, users on Facebook who liked the same pages, and so on.
The real problem starts when the algorithm takes into account implicit user actions. E.g., two users are subscribed to the same channels, and both click on the same video. User A watches the whole video, user B leaves halfway through. If the algorithm takes that into account, now user A will see different suggestions than user B.
That's what gets the ball rolling into hyper specialized endless feeds which tend to push you into extremes, as small signals will end up being amplified without the user ever taking an explicit action other than clicking (or not clicking) suggestions in the feed.
As long as every signal the algorithm takes into account is either a global state (user votes, total watch time, etc), or something the user explicitly and proactively has stated is their preference, I think that would be enough to curb most of the problems with algorithmic feeds.
Users could still manually configure feeds that provide hyper personalized, hyper specific, and hyper addictive content. But I bet the vast majority of users would never go beyond picking 1 specific sport, 2 personal hobbies and 3 genres of music they're interested in and calling it a day. Really, most would probably never even go that far. That's the reason platforms all converged on using those implicit signals, after all: they work much better than the user's explicit signals (if your ultimate goal is maximizing user retention/addiction, and you don't care at all about the collateral damage resulting from that).
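As a rough sketch of what that constraint could look like in code (the names, weights, and topic-subscription model are all hypothetical, not any platform's actual ranking system), the only inputs are per-item global signals and the user's explicit subscriptions, so two users with the same subscriptions always get the same feed:

    from dataclasses import dataclass

    @dataclass
    class Item:
        title: str
        topic: str                # e.g. "cycling", "jazz"
        votes: int                # global signal: identical for every viewer
        total_watch_hours: float  # global signal: identical for every viewer

    def rank_feed(items: list[Item], subscribed_topics: set[str]) -> list[Item]:
        """Rank using only global signals plus explicit, user-picked subscriptions.
        No per-user behavioral signals (scroll time, hovers, watch completion) are used."""
        candidates = [it for it in items if it.topic in subscribed_topics]
        # Hypothetical weighting of the two global signals.
        return sorted(candidates, key=lambda it: it.votes + 0.1 * it.total_watch_hours, reverse=True)

    items = [
        Item("Climbing the Alps", "cycling", votes=120, total_watch_hours=900.0),
        Item("Late-night trio set", "jazz", votes=80, total_watch_hours=400.0),
        Item("Crypto drama", "finance", votes=5000, total_watch_hours=9000.0),
    ]
    # Any two users who picked {"cycling", "jazz"} see exactly this feed, in this order.
    print([it.title for it in rank_feed(items, {"cycling", "jazz"})])

The self-reinforcing loop only appears once per-user behavior (did user B bail halfway through the video?) starts feeding back into the ordering, which is exactly what this setup forbids.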
But Meta's content ranking would conform to this too: in theory, a user who has the exact same friends, is a member of the exact same groups, has the exact same watch history, etc. would be served the same content. I'm pretty sure there's at least some degree of randomization, but putting that aside, it remains unclear how you're constructing a set of criteria that spares Hacker News, and plenty of other sites, but not Meta.
Even that, I don't think is entirely true. I'm pretty sure they use signals as implicit as how long you took to scroll past an autoplaying video, or if you even hovered your mouse pointer over the video but ultimately didn't click on it.
Same with friends: even if you have the exact same friends, if you message friend A more than friend B, and this otherwise identical account does the opposite, then the recommendation engine will give you different friend-related suggestions.
Then there's geolocation data, connection type/speeds, OS and browser type, account name (which, if they are real names such as on Facebook, can be used to infer age, race, etc), and many others, which can also be taken into account for further tailoring suggestions.
You can say that, oh, but some automated system that sent the exact same signals on all these fronts would end up with the same recommendations, which I guess is probably true, but it's definitely not reasonable. No two (human) users would ever be able to achieve such state for any extended period of time.
That's why we are arguing that only explicit individual actions should be allowed into these systems. You can maybe argue what would count as an explicit action. You mention adding friends, I don't think that should count as an explicit action for changing your content feed, but I can see that being debated.
Maybe the ultimate solution could be legislation requiring that any action that influences recommendation engines to be explicitly labeled as such (similar to how advertising needs to be labeled), and maybe require at least a confirmation prompt, instead of working with a single click. Then platforms would be incentivized to ask as little as possible, as otherwise confirming every single action would become a bit vexing.
People also slow down to look at the flipped car on the side of the road. Doesn't mean you want to see more flipped cars down the road.
Either way. Do you have any points other than that you think any and every action, no matter how small, is explicit, and therefore it's ok for it to be fed into the recommendation engine? Cause that's an ok position to have, even if one I disagree with. But if that's all, I think that's as long as this conversation needs to go. But if there's any nuance I'm failing to get, or you have comments on other points I raised such as labeling of recommendation-altering actions, I'm happy to hear it.
I'm mostly interested in getting concrete answers as to what people are referring to when they talk about "algorithmically served" content. This kind of phrasing is thrown around a lot, and I'm still unsure by what people are referring to by this phrase and I've rarely found people proposing fleshed out ideas as to how to define "algorithmically served content".
Some people take the stance that even using view counts as part of ranking should result in a company losing Section 230 protections, e.g. https://news.ycombinator.com/item?id=46027529
You proposed an interesting framing around reproducibility of content ranking, as in two users who have the exact same watch history, liked posts, group memberships, etc. should have the same content served to them. But in subsequent responses it sounds like reproducibility isn't enough, certain actions shouldn't be used for recommendation even if it is entirely reproducible. My reading is that in your model, there are "small" actions that user take that shouldn't be used for recommendations, and presumably there are also "big" actions that are okay to use for recommendation. If that's the case, then what user actions would you permit to be used for recommendations and which ones would not be permitted to use? What are the delineation between "small" and "big" actions?
As I pointed out, I agree that defining what should be deemed acceptable and what shouldn't is a bit subjective, and can definitely be debated. Reasonable people can disagree here, for sure.
That's why I proposed that maybe the solution is:
1. only explicit actions are considered. A click, a tap, an interaction, but not just viewing, hovering, or scrolling past. That's an objective distinction that we already have legal framework for. You always have to explicitly mark the "I accept the terms and conditions" box, for example. It can't be the default, and you can't have a system where just by entering the website it is considered that you accepted the terms.
2. explicit labeling and confirmation of what is a suggestion-algorithm-altering action and what isn't. And I mean in-band, visible labeling right there in the UI, not a separate page like that Meta link. Click the "Subscribe" button, you get a confirmation popup: "Subscribing will make it so that this content appears in your feed. Confirm/Cancel". Any personalized input into the suggestion algorithm should be labeled as such. So companies can use any inputs they see fit, but the user must explicitly give them these inputs, and the platforms will be incentivized to keep this number as low as possible, as, in the limit, having to confirm every single interaction would be annoying and drive users away. Imagine if every time you clicked on a video, YouTube prompted you to confirm that viewing that video would alter future suggestions.
I'm ok with global state being fed into the algorithm by default. Total watch time/votes/comments/whatever. My main problem is with hyper personalized, targeted, self reinforcing feeds.
So under this regime Meta, or any other social media site, can do pretty much any recommendation system they want, so long as they have a UI cluttered with labels and confirmation prompts disclaiming that liking someone, joining a group, adding a friend will affect your feed and recommendations.
> Imagine if every time you clicked on a video, YouTube prompted you to confirm that viewing that video would alter future suggestions.
In practice, I suspect this will make nearly every online interactions - posting a comment, viewing a video, liking a post, etc - accompanied by a confirmation prompt telling the user that this action will affect their recommendations, and pretty quickly users just hit confirm instinctively.
E.g. when viewing a youtube video, users often have to watch 3-5 seconds of an ad, then click "skip ad", before proceeding. Adding a 2nd button "I acknowledge that this will affect my recommendations" is actually a pretty low barrier compared to the interactions already required of the user.
The end result: a web that's roughly got the same recommendation systems, just with the extra enshittification of the now-mandated confirmation prompts.
I really do think that would be annoying enough to snap a good amount of people out of the mindless autoscrolling loop. There's a reason why companies love 1-click buying, for example: at scale, any extra interaction costs real money. The example of the ad is a good one where that's already a kind of high-friction interaction, so one extra click is not that much more annoying, but that's not the case for the vast majority of interactions.
Granted, some people will certainly have a higher tolerance for this kind of enshittification. If companies find that the amount of money they can extract from highly targeting a given amount of users is greater than the amount of money they can make from more numerous but less targeted users, then they could choose to go down that path. That's a function of how tolerant the average user is of the confirmation prompts, and how much more money they can make from a targeted user.
We can't control that last variable, but we ultimately could control that first one. If we find that a simple confirmation prompt is not annoying enough for as many people as we'd like, we could make the confirmation prompts more annoying. Maybe make every confirmation prompt have to be shown for at least 5 seconds. Or require a cooldown between multiple confirmations. Or add captcha like challenges. And so on.
In the limit, I think you'd agree that if you had to wait 24 hours before confirming, that would probably be enough to dissuade almost everyone from going through with it, to the point most platforms would try to not have any personalization at all. (I wouldn't be happy with this end result either)
I think even a single, instant confirmation prompt would be enough to cause a sizeable difference. Maybe not enough. Maybe you're right, and it would make barely any difference at all. Then I'd be totally in favor of these more annoying requirements. But as a first step, I'd be happy with a small requirement like this, and progressively making the requirements more stringent if it proves not enough.
> Doesn't mean you want to see more flipped cars down the road.
It absolutely does mean that seeing as how everybody wants to see the flipped car on the side of the road. The local news reports on the car flipped on the side of the road and not the boring city council meeting for a reason.
That's a mindboggling take, to be honest, to the point I can't help but suspect that you're being contrarian just for the sake of it. I'm absolutely sure that you, yourself, have gone by some terrible scene which you couldn't help but stare at for at least a bit, which you would not classify as something you would like to see more of.
There's a huge difference between something people want to see and something people can't ignore. There is some intersection between those categories, but they are by no means one and the same. And news reports, headlines, thumbnails et al. optimize for the latter, not the former.
Watching the content that is being served to you is a passive decision. It's totally different from clicking a button that says you want to see specific content in the future. You show me something that enrages me, I might watch it, but I'll never click a button saying "show me more stuff that enrages me". It's the platform taking advantage of human psychology and that is a huge part of what I want to stop.
>it remains unclear how you're constructing a set of criteria that spares Hacker News, and plenty of other sites, but not Meta.
I already said "This will still impact HN because of stuff like the flame war downranker...". I don't know why this and your reply to my other comment seem to imply that I think HN is perfect and untouchable. My proposal would force HN to make a choice on whether to change or lose 230 protections. I'm fine with that.
It's still unclear what choice Hacker News and other sites will have to make to retain section 230 protection in your proposed solution.
Again, something like counting the number of views on a video is, in your framing, not an active choice on the part of the user. So simply counting views and floating popular content to the top of a page sounds like it'd trigger loss of Section 230 protections.
You're making me repeat myself multiple times now. I don't know what else I can say. HN would need to rank posts by a combination of upvotes and chronology. That is how they "float popular content to the top". You don't need passive metrics like views to do that.
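A minimal sketch of what "upvotes plus chronology" ranking could look like; the gravity exponent and offsets are illustrative guesses in the spirit of the often-quoted HN-style formula, not the site's actual algorithm, and note that no views or other per-user signals appear anywhere:

    def rank_score(upvotes: int, age_hours: float, gravity: float = 1.8) -> float:
        """Score a post from upvotes and age alone: more votes push it up, age pulls it down."""
        return (upvotes - 1) / ((age_hours + 2) ** gravity)

    posts = [
        ("Show HN: my side project", 40, 3.0),
        ("Big company announcement", 300, 26.0),
        ("Fresh submission", 5, 0.5),
    ]
    for title, votes, age in sorted(posts, key=lambda p: rank_score(p[1], p[2]), reverse=True):
        print(f"{rank_score(votes, age):6.2f}  {title}")

Every visitor computing this sees the same ordering, because both inputs are global.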
> I think the best way to put it is, users with the same user picked settings should see the same things, in the same order. That's a given on HackerNews, as there's only one frontpage.
Are you sure? The algorithm isn't public, but putting a tiny fraction of "nearly ready for the frontpage" posts on the front page for randomly selected users would be a good way to get more votes on them without subjecting everyone to /new
That's a good point. As I pointed out, I'm ok with global state (total votes, how recent is a post, etc). Randomness could be thought as a kind of global state, even if it's not reproducible. As long as it's truly random, and not something where user A is more likely to see it than user B for any reason, then I'm fine with it.
Another possibility would be to somehow require publishing the algorithm and providing some kind of "under the hood" view that reveals to people what determined what they're seeing. Part of the issue currently is that everything is opaque. If Facebook could not change their algorithm without some kind of public registration process, well, it might not make things better, but it might make things get worse a bit slower.
So a simple "most viewed in last month" page would trigger a loss of protection? Because that ranking is determined by number of views, rather than a proactive user decision like upvoting.
>So a simple "most viewed in last month" page would trigger a loss of protection?
The key word there is "page". I have no problem with news.ycombinator.com/active, but that is a page that a user must proactively seek out. It's not the default or even possible to make it the default. Every time a user visits it, it is because they decided to visit it. The page is also the same for everyone who visits it.
To be clear, even the front page of Hacker News is not just a simple question of upvotes. Views, comments, time since posting, political content down-ranking, etc. all factor into the ordering of posts.
This is an unpopular opinion here, but I think in general the whole "immunity for third-party content" thing in 230 was a big mistake overall. If you're a web site that exercises editorial control over the content you publish (such as moderating, manually curating, algorithmically curating, demoting or promoting individual contents, and so on), then you have already shown that you are the ones controlling the content that gets published, not end users. So you should take responsibility for what you publish. You shouldn't be able to hide behind "But it was a third party end user who gave it to me!" You've shown (by your moderation practices) that you are the final say in what gets posted, not your users. So you should stand behind the content that you are specifically allowing.
If a web site makes a good faith effort to moderate things away that could get them in trouble, then they shouldn't get in trouble. And if they have a policy of not moderating or curating, then they should be treated like a dumb pipe, like an ISP. They shouldn't be able to have their cake (exercise editorial control) and eat it too (enjoy liability protection over what they publish).
Moderating and ranking content is distinct from editorial control. Editorial control refers to editing the actual contents of posts. Sites that exercise editorial control are liable for their edits. For instance if a user posts "Joe Smith is not a criminal" and the website operators delete the word "not", then the company can be held liable for defaming Joe Smith. https://en.wikipedia.org/wiki/Section_230#Application_and_li...
I’d go farther and say that any content presented to the public should be exempt from protection. If it’s between individuals (like email) then the email provider is a dumb pipe. If it’s a post on a public website the owner of the site should be ultimately responsible for it. Yes that means reviewing everything on your site before publishing it. This is what publishers in the age of print have always had to do.
The thing that's missing is the difference between unsolicited external content (i.e. pay-for-play stuff) and directly user-supplied content.
If you're doing editorial decisions, you should be treated like a syndicator. Yep, that means vetting the ads you show, paid propaganda that you accept to publish, and generally having legal and financial liability for the outcomes.
User-supplied content needs moderation too, but with them you have to apply different standards. Prefiltering what someone else can post on your platform makes you a censor. You have to do some to prevent your system from becoming a Nazi bar or an abuse demo reel, but beyond that the users themselves should be allowed to say what they want to see and in what order of preference. Section 230 needs to protect the latter.
The thing I would have liked to see long time ago is for the platforms / syndicators to have obligation to notify their users who have been subjected to any kind of influence operations. Whether that's political pestering, black propaganda or even out-and-out "classic" advertising campaign, should make no difference.
Can’t you just limit scope for section 230 by revenue or users?
E.g. it only applies to companies with revenue <$10m. Or services with <10,000 active users. This allows blogs and small forums to continue as is, but once you’re making meaningful money or have a meaningful user base you become responsible for what you’re publishing.
I think the biggest problem is when we're all served a uniquely personalized feed. Everyone on Hacker News gets the same front page, but on Facebook users get one specifically tailored to them.
If Hacker News filled their front page with hate speech and self-harm tutorials there would be public outcry. But Facebook can serve that to people on their timeline and no one bats an eye, because Facebook can algorithmically serve that content only to people who engage with it.
Most sites that accept user-generated-content are forced to do some level of moderation, lest they become a cesspit of one form or another (CSAM, threats, hate speech, exposing porn to underage users, stolen credit card sales, etc...)
That’s the first reasonable take I’ve seen on this. Thanks for explaining it, I will use it for offline discussions on the subject. It’s been hard to explain.
Yeah, I wonder if the rules should basically state something like: everything must be topical, and you must opt in to certain topics (adult, politics, etc). People can request recommendations, but they must be requested; no accidental pro-Ana content. If you want to allow hate speech, fine, but people have to opt in to every offensive category/slur explicitly. (We can call it "potentially divisive" for all the "one person's hate speech is another person's love rap" folks, or whatever.)
Go after the specific companies and executives you believe are doing wrong. Blanket regulations raise costs for smaller competitors and end up entrenching giants like Meta, Google, and Apple because they can afford compliance while smaller competitors can’t. These rules are a big reason the largest firms are more dominant than ever and have become, effectively, monopolies in their markets. And the irony is that many of these regulations are influenced or supported by the big companies themselves, since a small "investment" in shaping the rules helps them secure even more market share.
This is a perfect summary of that "toxic and deadly" culture. Why are police treated as a dumb tool that will always respond to violence with more violence? Why is the onus on the criminals to deescalate the situation? Why doesn't the duty of enforcing the law come with a bigger burden to keeping the peace? And why do the police not have any culpability in violence they helped escalate?
The way some of you talk suggests that you don't think someone could genuinely believe in AI safety features. These AIs have enabled and encouraged multiple suicides at this point, including those of children. It's crazy that wanting to prevent that type of thing is a minority opinion on HN.
I'd be all for creating a separate category of child-friendly LLM chatbots or encouraging parents to ban their kids from unsupervised LLM usage altogether. As mentioned, I'm also not opposed to opt-out restrictions on mainstream LLMs.
"For the children" isn't and has never been a convincing excuse to encroach on the personal freedom of legal adults. This push for AI censorship is no different than previous panics over violent video games and "satanic" music.
(I know this comment wasn't explicitly directed at me, but for the record, I don't necessarily believe that all or even most "AI 'safety'" advocacy is in bad faith. It's psychologically a lot easier to consider LLM output as indistinguishable from speech made on behalf of its provider, whereas search engine output is more clearly attributed to other entities. That being said, I do agree with the parent comment that it's driven in large part out of self-interest on the part of LLM providers.)
>"For the children" isn't and has never been a convincing excuse to encroach on the personal freedom of legal adults. This push for AI censorship is no different than previous panics over violent video games and "satanic" music.
But that wasn't the topic being discussed. It is one thing to argue that the cost of these safety tools isn't worth the sacrifices that come along with them. The comment I was replying to was effectively saying "no one cares about kids so you're lying if you say 'for the children'".
Part of the reason these "for the children" arguments are so persistent is that lots of people do genuinely want these things "for the children". Pretending everyone has ulterior motives is counterproductive because it doesn't actually address the real concerns people have. It also reveals that the person saying it can't even fathom someone genuinely having this moral position.
> The comment I was replying to was effectively saying "no one cares about kids so you're lying if you say 'for the children'".
I don't see that in the comment you replied to. They pointed out that LLM providers have a commercial interest in avoiding bad press, which is true. No one stops buying Fords or BMWs when someone drives one off a cliff or into a crowd of people, but LLMs are new and confusing and people might react in all sorts of illogical ways to stories involving LLMs.
> Part of the reason these "for the children" arguments are so persistent is that lots of people do genuinely want these things "for the children".
I'm sure that's true. People genuinely want lots of things that are awful ideas.
Here is what was said that prompted my initial reply:
>When a model is censored for "AI safety", what they really mean is brand safety.
The equivalent analogy wouldn't be Fords and BMWs driving off a cliff; they effectively said that Ford and BMW only install safety features in their cars to protect their brand, with the implication that no one at these companies actually cares about the safety of actual people. That is an incredibly cynical and amoral worldview, and it appears to be the dominant view of people on HN.
Once again, you can say that specific AI safety features are stupid or aren't worth the tradeoff. I would have never replied if the original comment said that. I replied because the original comment dismissed the motivations behind these AI safety features.
I read that as a cynical view of the motivations of corporations, not humans. Even if individuals have good faith beliefs in "AI 'safety'", and even if some such individuals work for AI companies, the behaviors of the companies themselves are ultimately the product of many individual motivations and surrounding incentive structures.
To the extent that a large corporation can be said to "believe" or "mean" anything, that seems like a fair statement to me. It's just a more specific case of pointing out that for-profit corporations as entities are ultimately motivated by profit, not public benefit (even if specific founders/employees/shareholders are individually motivated by certain ideals).
>I read that as a cynical view of the motivations of corporations, not humans.
This is really just the mirror image of what I was originally criticizing. Any decision made by a corporation is a decision made by a person. You don't get to ignore the morality of your decisions just because you're collecting a paycheck. If you're a moral person, the decisions you make at work should reflect that.
The morality of an organization is distinct from the morality of the decision-makers within the organization. Modern organizations are setup to distribute responsibility, and take advantage of extra-organizational structures and entities to further that end. Decision-makers often have legal obligations that may override their own individual morality.
Whenever any large organization takes a "think of the children" stance, it's almost always in service of another goal, with the trivial exception of single-issue organizations that specifically care about that issue. This doesn't preclude individuals, even within the organization, from caring about a given issue. But a company like OpenAI that is actively considering its own version of slop-tok almost certainly cares about profit more than children, and its senior members are in the business of making money for their investors, which, again, takes precedence over their own individual thoughts on child safety. It just so happens that in this case, child safety is a convenient argument for guard rails, which neatly avoids having to contend with advertisers, which is about the money.
Sure, but that doesn't really have anything to do with what I said. The CEO of an AI company may or may not believe in the social benefits of censorship, and the reasoning for their beliefs could be any number of things, but at the end of the day "the corporation" is still motivated by profit.
Executives are beholden to laws, regulations, and shareholder interests. They may also have teams of advisors and board members convincing them of the wisdom of decisions they wouldn't have arrived at on their own. They may not even have a strong opinion on a particular decision, but assent to one direction as a result of internal politics or shareholder/board pressure. Not everything is a clear-cut decision with one "moral" option and one "immoral" option.
Organizations don't have a notion of morality; only people do.
The larger an organization is, and the more bureaucratized it is, the less the morality of individual people in it affects its overall operation.
Consequently, yes, it is absolutely true that Ford and BMW as a whole don't care about safety of actual people, regardless of what individual people working for them think.
Separately, the nature of progression in hierarchical organizations is basically a selection for sociopathy, so the people who rise to the top of large organizations can generally be assumed to not care about other people, regardless of what they claim in public.
The linked project is about removing censorship from open-weight models people can run on their own hardware, and your comment addresses incidents involving LLM-based consumer products.
Sure, products like character.ai and ChatGPT should be designed to avoid giving harmful advice or encouraging the user to form emotional attachments to the model. It may be impossible to build a product like character.ai without encouraging that behavior, in which case I'm inclined to think the product should not be built at all.