Those metadata tags were just a simple textual description and a bunch of keywords with no reference to any controlled vocabulary. This is what made them so easy to abuse. Modern schema-based structured data is vastly different, and with a bit of human supervision (that's the "third party vetting") it's feasible to tell when the site is lying. (Of course, low-quality bulk content can also be given an accurate description. But this is good for users, who can then more easily filter out that sort of content.)
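As a rough illustration of the difference (the product values below are invented; only the schema.org vocabulary itself is real):

```python
# Contrast the old free-form <meta> keywords with schema.org structured
# data expressed as JSON-LD. The keyword string and product details are
# made up for illustration.
import json

# Old-style metadata: an uncontrolled bag of keywords, trivial to stuff.
old_meta = '<meta name="keywords" content="cheap, best, free, buy now">'

# Modern structured data: typed properties from a controlled vocabulary,
# which can be checked against the visible content of the page.
structured = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Widget",
    "offers": {
        "@type": "Offer",
        "price": "19.99",
        "priceCurrency": "USD",
    },
}

print(old_meta)
print(json.dumps(structured, indent=2))
```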
One could even let this vetting happen in decentralized fashion, by extending Web Annotation standards to allow for claims of the sort "this page/site includes accurate/inaccurate structured content."
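A minimal sketch of what such a claim might look like, built on the existing W3C Web Annotation data model. The `ex:structuredDataAccurate` property, the target URL, and the creator profile are hypothetical; only the annotation context, types, and the "assessing" motivation come from the current standard.

```python
# Hypothetical decentralized vetting claim expressed as a Web Annotation.
import json

annotation = {
    "@context": "http://www.w3.org/ns/anno.jsonld",
    "type": "Annotation",
    "motivation": "assessing",  # motivation defined in the spec
    "target": "https://example.com/some-page",
    "body": {
        "type": "TextualBody",
        "value": "This page's structured data accurately describes "
                 "its visible content.",
        # Invented machine-readable flag for the proposed extension:
        "ex:structuredDataAccurate": True,
    },
    "creator": "https://reviewer.example/profile",
}

print(json.dumps(annotation, indent=2))
```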
The thing is, "a bit of human supervision" is difficult at the scale of ten thousand Wikipedias. It pretty much needs to be done completely automatically.