Well, in this case the results are pretty good locally, but they have pretty obvious artefacts too, especially in the synthesised road videos; look at the trees, or even more so at the lane change in the linked video.
You say 'pretty obvious artifacts', but that's because you have some clue about the process, know what you're looking for, and are interested enough to look.
I tried pointing out some really bad artifacts to friends in a video we watched, and they just could not grasp it. They couldn't see what I was seeing because it didn't look out of place to them. It isn't for lack of intelligence; they just didn't care enough to understand. That pretty much describes vast swathes of the population.
Show a video made with the above technique to anyone with strongly held political/ideological beliefs and an inclination to accept 'alternative facts' over actual facts, and videos using these techniques will spread like wildfire and be almost impossible to refute!
I think a layman would be perfectly capable of understanding "computers can now create fake videos so real you can't spot them". Not that that claim is quite true yet.
Chances are there would be corroborating evidence to the contrary, since it's not often the President does or says anything without multiple witnesses and cameras being involved. In that case, it might be easier to impersonate them through their social media accounts.
The real danger here, if anything, is impersonating common citizens or lower ranking government officials.
He would say it's fake, but who would believe him?
How exactly can computer scientists explain deepfakes to laymen?