“It’s like a telephone company admitting their services will be used to call in bomb threats, coordinate terrorist attacks, conduct verbal abuse,”
I agree it would be like a telephone company doing that, but a telephone company would never openly admit that to its entire company. I also doubt it would describe it as necessary for growth.
Suggesting that some of your users will kill people or commit violent acts of terrorism, and that this is okay and normal in the name of pursuing money and growth, is not a "pretty cliche startup mentality". No startup I've ever worked at, thankfully, has ever suggested such evil things and then called them normal. That's not what startups are.
There are so many indirect ways to enable harm that you’d probably be hard pressed to find a company that connects people that isn’t somehow causing harm.
You’ve never worked at a startup with a product that required a moderation team?
If your product requires a moderation team, you’re under the umbrella of enabling negative content.
To give an example, people working on video games with online play are enabling a toxic environment where verbal abuse often occurs.
Are they trying to be evil? No.
But in pursuit of money and growth they add chat for the social aspect, and that enables the abuse. More people than not get a positive experience.
The tacky startup aspect is the fact that you'd write it down in a memo. Everyone knows it; it's not some brilliant revelation to wave up and down and act all high and mighty about.