I still don't know how to convince myself that causal relationships exist at all, and that everything isn't just things like correlations, which would go into level 1 in this hierarchy of Pearl's.
Rain causes people to be wet. This implies that if we could, God-like, stop the rain, people wouldn't be wet. But if we were to prevent people from being wet (by buying them an umbrella), that wouldn't stop the rain. This is not something you can infer from correlations alone.
That's far too little logic to be a counterexample here. Rain correlates with a lot of things. If you were to stop all of those things, you could surely stop the rain as well. For instance, if rain is correlated with clouds, and you get rid of clouds, you'll get rid of rain. The fact is, there does not exist this thing you've called 'rain' that causes people to be wet and does nothing else relevant to your argument.
There's plenty of reason to believe that causal relationships are structural simplifications of correlative ones. We just don't have any great formalizations of them.
> If you were to stop all of those things, you could surely stop the rain as well.
By the way, this is not true in general. Suppose B…F are fair Boolean variables (each on with probability 0.5), and A is computed by a Boolean circuit: A = (sum(B,C,D,E,F) > 3) or (not B and not C and… not F). A is correlated with each of its causes B through F. However, stopping all of its causes causes A to turn on.
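Concretely, here's a quick simulation of that circuit (reading the '+' between the two clauses as a logical OR, which I take to be the intent; this is just a sketch):

```python
import random

random.seed(0)

def circuit(b, c, d, e, f):
    # A fires if more than 3 of its causes are on, OR if all of them are off.
    return int(sum([b, c, d, e, f]) > 3 or not any([b, c, d, e, f]))

# Sample the causes B..F as fair coins.
n = 100_000
samples = [[random.random() < 0.5 for _ in range(5)] for _ in range(n)]
a_vals = [circuit(*s) for s in samples]

# A is positively correlated with B (and, by symmetry, with C..F):
p_a_given_b1 = sum(a for a, s in zip(a_vals, samples) if s[0]) / sum(s[0] for s in samples)
p_a_given_b0 = sum(a for a, s in zip(a_vals, samples) if not s[0]) / sum(not s[0] for s in samples)
print(p_a_given_b1, p_a_given_b0)  # roughly 5/16 vs 2/16

# Yet "stopping" every cause (forcing them all off) turns A on:
print(circuit(False, False, False, False, False))  # 1
```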
> If you were to stop all of those things, you could surely stop the rain as well.
But you typically can't intervene everywhere. For example, you may not have any power over the gardener who normally turns the sprinkler on. The point of interventions (like turning a sprinkler on) is that you can reason about what would happen under actions actually available to you.
"B...F" are not all the causes of A. You forgot to include the relation "f(B...F)", whose existence is also strongly correlated with A, as per the entire justification for using that '=' symbol the way you have.
If you were to stop all of those things, would that not include stopping the causes of rain, if they exist? (Is that, for that matter, why you can say 'surely' in that sentence?) If we thought we knew the causes of rain, and happened to be correct (both that there are causes and about what they are), shouldn't we be able to predict what, short of stopping all of those things, would stop the rain, without ever having observed that happen?
IIRC Pearl has a chapter on "Where does causality come from?" and "Can there be causality without time?" I don't remember the conclusion unfortunately. IMO causality is a strict consequence of the time-based physics of our universe. The future was caused by the past. Animals evolved from primordial bacteria, not the other way around.
What I mean is that when we speak of causality, we may be speaking of a specific subset of correlative structure, and the features of those correlations that make them causal may match our usage of the term 'causal' far more strongly than the features of correlative relations that take no part in causal structure.
Like if I say "blue", you can very quickly simplify by assuming I'm talking about the color blue, rather than all the other things in the world that are also called 'blue', even though there's nothing that necessitates this simplification.
Causal models are more powerful than associational models because they support interventions: you can answer questions about what would happen if you were to "do(x)". This is different from asking what would happen if you were to observe "x".
Let's look at a concrete example. Suppose in the real world, the gardener leaves the sprinkler off and the ground is dry.
If you were to observe that the sprinkler is on, you would deduce that the ground is wet. If you were to observe the wet ground, you would "abduce" that the sprinkler is on.
However, the interventional question, "what if I were to turn on the sprinkler?", requires a causal model. In this case, the direction is critical. Turning on the sprinkler wets the ground; wetting the ground does not turn on the sprinkler!
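To make the see/do distinction concrete, here's a toy simulation (the prior probabilities are made up purely for illustration):

```python
import random

random.seed(1)

def sample(do_sprinkler=None):
    # Hypothetical priors: rain 30% of days, gardener runs the sprinkler 20%.
    rain = random.random() < 0.3
    sprinkler = (random.random() < 0.2) if do_sprinkler is None else do_sprinkler
    wet = rain or sprinkler  # the only mechanism wetting the ground
    return rain, sprinkler, wet

n = 100_000
obs = [sample() for _ in range(n)]

# Observing wet ground raises belief that the sprinkler is on (abduction):
p_sprinkler = sum(s for _, s, _ in obs) / n
p_sprinkler_given_wet = sum(s for _, s, w in obs if w) / sum(w for _, _, w in obs)
print(p_sprinkler, p_sprinkler_given_wet)  # roughly 0.20 vs 0.45

# Intervening, do(Sprinkler = on): the ground is then always wet...
intv = [sample(do_sprinkler=True) for _ in range(n)]
p_wet_do = sum(w for _, _, w in intv) / n
# ...but the intervention does nothing to the rain:
p_rain_do = sum(r for r, _, _ in intv) / n
print(p_wet_do, p_rain_do)  # 1.0, and still roughly 0.3
```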
Causal models also have better invariants: an unobserved common effect (say WetGround) leaves its parents (say Sprinkler and Rain) independent. This is not necessarily true in an associational model: if Rain is correlated with WetGround, and WetGround with Sprinkler, then even when WetGround is unobserved, Rain gives you information about Sprinkler. Since our model has no other elements (such as a common cause between Sprinkler and Rain), this transmission of information is simply wrong.
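You can check that invariant in a toy model, with Rain and Sprinkler as independent causes of WetGround (again, the probabilities here are invented):

```python
import random

random.seed(2)

# Rain and Sprinkler are independent causes of WetGround.
n = 200_000
data = []
for _ in range(n):
    rain = random.random() < 0.3
    sprinkler = random.random() < 0.2
    data.append((rain, sprinkler, rain or sprinkler))

def p_sprinkler(rows):
    return sum(s for _, s, _ in rows) / len(rows)

# With WetGround unobserved, Rain carries no information about Sprinkler:
marginal = p_sprinkler(data)
given_rain = p_sprinkler([row for row in data if row[0]])
print(marginal, given_rain)  # both roughly 0.20

# Observing WetGround *induces* dependence ("explaining away"):
wet = [row for row in data if row[2]]
print(p_sprinkler([r for r in wet if r[0]]))      # roughly 0.20
print(p_sprinkler([r for r in wet if not r[0]]))  # 1.0: no rain, so the sprinkler did it
```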
I couldn't tell if you were arguing against the utility of causal models or against the claim that the real world exhibits causal relationships. If it's the latter, then it is ultimately a leap of faith based on our intuitive understanding of the world. All scientific experimentation relies on the notion of cause and effect and on interventional reasoning.