Not OP, but to me it sounds like p-hacking aka bad science as well: If you slice a dataset into enough subsamples you will very likely find random correlations. That’s the nature of these kinds of analyses and we should be sceptical of conclusions that are based on such analyses.
I think you and p51-remorse are discussing different parts of the article. They're saying the updated analysis is suspect because of the risk of false discoveries. I believe that's probably true in the usual way--if we study 20 subgroups with no actual effect, then we expect about one of them to show an effect at p < 0.05 just by chance. There's no mention of preregistration or anything like a Bonferroni correction to manage that risk.
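To put a rough number on that, here's a quick Stata sketch of my own (pure noise, nothing from the paper): run a t test across 20 arbitrary "subgroup" splits of random data and count how many clear p < 0.05. On average about one will, which is what a Bonferroni correction accounts for by dividing the threshold by the number of tests.

    * Toy simulation: 20 subgroup tests on data with no real effect.
    clear all
    set seed 2024
    local hits = 0
    forvalues g = 1/20 {
        quietly {
            clear
            set obs 200
            gen y = rnormal()               // outcome with no real effect
            gen treat = runiform() < 0.5    // arbitrary subgroup split
            ttest y, by(treat)              // two-sample t test
        }
        if r(p) < 0.05 local hits = `hits' + 1
    }
    display "subgroups significant at p < 0.05: `hits' of 20"
    display "Bonferroni-adjusted threshold per test: " 0.05/20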
You're saying the original analysis was wrong due to a coding error. I believe that's also true, but that's not what they were discussing. The variable names are inscrutable, but the article text also seems to imply that line (mis)codes divorce, not severe illness:
> People who left the study were actually miscoded as getting divorced.
So they actually found a correlation between severe illness and leaving the study. That's perhaps intuitive, if those people were too busy managing their illness to respond.
The blog post lists this as the original Stata code:
> replace event`i' = 1 if delta_mct`i' != 0 | spouse_delta_mct`i' != 0
and this as the correct one:
> replace event`i' = 1 if (delta_mct`i' != 0 | spouse_delta_mct`i' != 0) & delta_mct`i' != . & spouse_delta_mct`i' != .
It looks like the authors didn't properly handle missing values in Stata: a missing value (.) compares as larger than any number, so a condition like delta_mct != 0 is true even when the value is missing. As a result, people with missing health information were marked as "severely ill" instead of being excluded from the analysis.
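A minimal sketch of that gotcha, with toy variable names of my own rather than the study's:

    * Toy demo: how a plain != 0 test quietly treats missing data as an event.
    clear
    set obs 3
    gen delta = .
    replace delta = 0 in 1
    replace delta = 5 in 2
    * observation 3 stays missing (.)
    gen event_bad  = (delta != 0)                 // obs 3 gets 1: missing passes the test
    gen event_good = (delta != 0) & (delta != .)  // obs 3 gets 0: missing is excluded
    list delta event_bad event_good

The extra "!= ." terms in the corrected line are doing exactly what event_good does here.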
It's an unfortunate mistake, but it happens.