The government will just infiltrate and control that third party by funding it. "You want us to raise your rates in the next round of appropriations? Here is what we expect to get 'fact checked'."
> Why is the government well suited to declare what is and isn't misinformation though?
Presumably because many of the lies and disinformation going around involve them to start with. When some nutjob starts posting about millions of American citizens being locked up in FEMA camps, or claims that a proposed healthcare bill calls for the formation of a government death panel, any respectable fact-checking org is going to end up asking the lawmakers and FEMA about it anyway, so the government certainly has a role here.
Ideally, each social media platform would have its own people catching and flagging the worst examples of disinformation, and that might also involve enlisting the services of both governments and vetted independent third parties.
In my limited experience, on social media where I don't see any official flags for misinformation, I've seen plenty of cases where it's other users stepping in and correcting outright lies and common misconceptions, complete with sources. That probably works better in some spaces than others, though.
That doesn't really work, though. You picked some examples of obviously false things that someone might say about the US government (one would hope, at least), and, sure, the US government is in a decent position to refute those claims.
But let's take something we now know to be true: the NSA collecting data on US citizens. Pre-Snowden, someone could post something asserting that the NSA is spying on us. The government, being the hypothetical arbiter of what is and isn't misinformation, would of course immediately label that as misinformation.
You can't trust the government to be honest here. Sometimes they will even lie for fairly good reasons. But I don't want them marking things as misinformation (or, worse, suppressing such information) when it's true. And they certainly will do that, sometimes.
That's why misinformation labels aren't really a problem. If Snowden posts that the NSA is spying, the government could flag the claim as misinformation, but flagging Snowden's post doesn't make it go away, and Snowden came with evidence the public could review. Once the public saw that the spying was happening, they'd know the government lied, and the next time they saw something flagged by the government as misinformation they'd be less willing to assume the flag was accurate. Eventually they might even start to assume that things flagged as "disinformation" by the government were more likely to be true than not.
As long as the fact checking is transparent, there will be an incentive for the fact checker to stay honest, and when they fail, we should adjust our understanding of what the flag actually means.
I've gone through that already with popular fact checkers like Snopes. I still consider it a valuable resource, but I've become aware that they allow bias to influence their findings and that they can't be trusted blindly. Really, no single source should be blindly trusted, and that's something we should be trying to let people know. But warning labels can still be helpful, and also revealing about how adversarial our government has become. A government that can't be trusted not to repeatedly mislead the public is one that should be voted out and replaced.
Shouldn't we farm that out to a third party?