Hacker News

HN comments don't scale by design. They are limited to the thread they're posted in and by the effort it takes to post manually (most of the time - I'm aware automation can briefly exceed this), and there is still moderation and banning, not to mention hidden anti-abuse mechanisms that may well be culling a lot of spam we never even realize was being posted.

Both HN itself and our interaction with it take into account the fact that the damage a malicious user can do will be limited. We'd most likely reconsider our use of HN if everyone got a "reply to all threads" button that auto-spammed their reply to every thread and made the website unusable.

Facebook "organic" content is the same. In most cases, organic content is limited in reach so that the potential damage caused by a malicious user is acceptable, although misaligned incentives sometimes prompt Facebook's algorithms to artificially boost the reach of organic content to increase engagement.

Facebook paid ads are different - they're basically an officially-endorsed spam machine: you pay money and you get to bypass the reach limits that organic content is subject to. This can cause a lot of damage and harm if used to spread malicious content, even more so when you can control targeting parameters to show it only to the most vulnerable and fly under everyone else's radar.

> How could a human review an ad with a name, without having contact details for that named individual?

If you're advertising a financial investment scheme it's reasonable to expect the company to have a real address, business registration, potentially some kind of license if the local laws require it, etc. If a financial investment company doesn't want to provide contact details, it's not an investment company, it's a scam.

> Or with a photo if they didn't recognise that person? Or if it was an ad about a person such as a news item?

Having a verified audit trail behind the ad would at least give the copyright holder a way to go after whoever originally posted it.

The idea isn't to be bulletproof, it's to make it both harder and riskier (legally) for malicious actors to operate. I believe it should be considered the "right thing" to do for a platform that allows ads to be pushed to potentially vulnerable people, but since it's Facebook and they have no concept of "the right thing", regulatory action is necessary.



How are copyright holders going to chase spun-up shell companies in some random country abroad where there is zero chance of pursuing them? That'd be the main reason why Facebook is being pursued here, because getting to the advertiser is hopeless.

OK, I'm a spammer. I say I'm not advertising financial investments, but then my ad is more or less a financial scam, or some grey area where people are lured to a site (which gets changed after approval), or it names some new crypto thing which the vetter has never heard of before. Or it uses the likeness of a celebrity specific to the target country that the vetter doesn't recognise. Does the celebrity want to be contacted for every submitted ad to answer "Is this you? Did you approve this?" Do you need vetters in every country, with knowledge of each industry - is this medical ad a scam? What about this one about naturopathy? Facebook might have the means to deal with something like this, but for anyone smaller hosting ads that wouldn't be the case. So then you'd need a way of segmenting legislation by scale.

Then it's whack-a-mole, with new scammers reappearing whenever you smack one down.

I can't see it happening without regulatory action either.


> How are copyright holders going to chase spun-up shell companies in some random country abroad where there is zero chance of pursuing them?

You still need to set up a shell company; that's an extra barrier to entry.

> my ad is more or less a financial scam

If your ad has anything to do with finance or is in a grey area it gets extra scrutiny.

> people are lured to a site (which gets changed after approval)

It's trivial for advertising companies to screen-scrape target websites and alert on, or suspend, ads whose landing pages change significantly after approval. Maybe liability would push them to do this and close this attack vector?
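A minimal sketch of what such a re-crawl check could look like. This is an illustration, not anything Facebook actually runs: `significant_change` and the 0.8 similarity threshold are assumptions, and real systems would also need fetching, rendering, and cloaking countermeasures.

```python
import difflib

# Hypothetical check: compare the landing-page text captured at ad-approval
# time against the text from a later re-crawl, and flag the ad for human
# review if too much of the page has changed. The threshold is an assumption.

def significant_change(approved_text: str, current_text: str,
                       threshold: float = 0.8) -> bool:
    """Return True if the page differs too much from the approved snapshot."""
    similarity = difflib.SequenceMatcher(
        None, approved_text.split(), current_text.split()).ratio()
    return similarity < threshold

# Example: a landing page swapped from product info to an investment pitch
approved = "Handmade ceramic mugs, shipped worldwide. Browse our catalogue."
swapped = "Guaranteed 300% returns! Deposit crypto now, celebrity endorsed."
print(significant_change(approved, approved))  # False: page unchanged
print(significant_change(approved, swapped))   # True: page was replaced
```

A real deployment would re-crawl on a schedule and from the same geographies the ad targets, since bait-and-switch sites often serve different content to the ad reviewer than to the victim.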

> it names some new crypto thing which the vetter has never heard of before.

In this case it gets escalated to a team that can do proper due diligence, and it only gets published if it ends up passing muster.

> uses the likeness of a celebrity specific to the target country that the vetter doesn't recognise

That's a different problem entirely, and much less serious than financial scams. And it might not be the only issue with the ad - if it's supplements or some other health-related BS, it's likely to fail on health-related rules anyway.

> Do you need to have vetters in every country and with knowledge of each industry - is this medical ad a scam?

Yes - why is this so outrageous?

> Facebook might have the means of dealing with something like this, but for anyone smaller hosting ads that wouldn't be the case. So then you'd need a way of segmenting legislation by scale.

Potentially - or maybe we don't. Maybe the ability to push harmful content in front of millions of people needs to be regulated closely, and if you can't do it responsibly, well, tough luck - what matters is the potential harm, not your headcount or turnover. Nobody has a right to be in business, after all, and it's clear the industry is not capable of self-regulation.

> Then it's whack-a-mole, with new scammers reappearing whenever you smack one down.

I don't doubt a few will always fall through the cracks, but the objective is to at least raise the barrier to entry and, hopefully, leave a better paper trail that makes it easier to catch up with offenders who do manage to slip through.



