They can roll it out tonight and immediately roll it back if it is stifling conversation in a way that causes more harm than it resolves. They can also iterate on it if any part of it works in a way that isn't optimal.
Essentially, if you trust HN/YC to not leave something horribly broken, there is nothing to worry about. The code is not set in stone, and you can bet they'll be watching closely for anything not working well about the new system.
Social systems can be broken in a way that won't manifest until a certain type of discussion comes up. So while I trust HN/YC to do their best to protect and promote discourse, I don't know that the types of conversation that come up during the evaluation period will be representative of all future conversation on HN.
That's true. The same argument can be made for any system that is put in place, though. Even the current one likely has types of discussions that "break it" in some way. There's even evidence of such breakage if you include arguments/bickering as something that the system should prevent.
We can summarize this change as moving from a system whose weaknesses are (at least partly) known to one that the people running HN believe is better, but that still probably has points where it breaks in some way. Making changes like this and reacting to breakage is the only way progress can be made.