$ python -m json.tool --help
usage: python -m json.tool [-h] [--sort-keys] [--no-ensure-ascii] [--json-lines] [--indent INDENT | --tab | --no-indent | --compact] [infile] [outfile]
A simple command line interface for json module to validate and pretty-print JSON objects.
positional arguments:
infile a JSON file to be validated or pretty-printed
outfile write the output of infile to outfile
options:
-h, --help show this help message and exit
--sort-keys sort the output of dictionaries alphabetically by key
--no-ensure-ascii disable escaping of non-ASCII characters
--json-lines parse input using the JSON Lines format. Use with --no-indent or --compact to produce valid JSON Lines output.
--indent INDENT separate items with newlines and use this number of spaces for indentation
--tab separate items with newlines and use tabs for indentation
--no-indent separate items with spaces rather than newlines
--compact suppress all whitespace separation (most compact)
Honestly: HN - until it also jumps the shark, at least in this regard.
I've seen an influx of political redditors making their way here to fight e.g. nuclear power (because why not) over the past few years. I can't imagine the professionals working on protecting/pushing brands are very far behind.
Reddit has never been good for "truth". It was good for discussions for the period between the end of forum culture and the 2016 presidential election, but since 2016 the website has been increasingly moderated to the point that right-wingers, moderates and most non-US users aren't represented more than a tiny percentage, and it is a mass market appeal website.
Quora was good for more centrist and international comments, but it too has been destroyed by poor development combined with politically motivated moderation.
There's no generic solution as yet. Bing's Sydney was instructed its rules were "confidential and permanent", yet it divulged and broke them with only a little misdirection.
Is this just the first taste of AI alignment being proved to be necessarily a fundamentally hard problem?
It's not clear whether a generic solution is even possible.
In a sense, this is the same problem as, "how do I trust a person to not screw up and do something against instructions?" And the answer is, you can minimize the probability of that through training, but it never becomes so unlikely as to disregard it. Which is why we have things like hardwired fail-safes in heavy machinery etc.
When you get down to it, it's bizarre that people even think it's a solvable problem. We don't understand what GPT does when you make an inference. We don't know what it learns during training. We don't know what it does to input to produce output.
The idea of making inviolable rules for system you fundamentally don't understand is ridiculous. Nevermind the whole, this agent is very intelligent problem too. We'll be able to align ai at best about as successfully as we align people. Your instructions will serve to guide it rather than any unbreakable set of axioms.