It’s the opposite, really: even if you have very reliable parts, the overall system is more likely to fail than any individual part.
Say you have 10 independent critical components, each with a 0.99 probability of not failing. Then the probability of nothing in the system failing is 0.99^10 ≈ 0.90. So a collection of parts that each have a 1% chance of failure has roughly a 10% chance of some critical component failing overall.
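A quick way to sanity-check the arithmetic (a minimal Python sketch; the component count and reliability are just the numbers from above):

    # Reliability of a system of n independent critical components,
    # where each component survives with probability p.
    p = 0.99   # per-component probability of not failing
    n = 10     # number of critical components

    system_survives = p ** n            # all n must survive
    system_fails = 1 - system_survives  # at least one fails

    print(f"P(system survives) = {system_survives:.3f}")  # 0.904
    print(f"P(system fails)    = {system_fails:.3f}")     # 0.096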
Of course it’s more complicated than that, because in real life nothing is truly independent, and the failure of one component is often coupled to failures in others. That makes simple solutions like redundancy not necessarily helpful (and sometimes even detrimental).
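To see how coupling erodes the benefit of redundancy, here’s a toy Monte Carlo sketch. All the probabilities are made-up illustration values, not real figures:

    import random

    # Two redundant components, each failing 1% of the time on its own.
    # A shared "common cause" event (e.g. power loss) takes out both at once.
    TRIALS = 1_000_000
    P_INDEPENDENT = 0.01   # per-component independent failure probability
    P_COMMON      = 0.005  # probability of a common-cause event

    def system_fails(p_common):
        if random.random() < p_common:           # common cause kills both copies
            return True
        a = random.random() < P_INDEPENDENT      # copy A fails on its own
        b = random.random() < P_INDEPENDENT      # copy B fails on its own
        return a and b                           # system fails only if both fail

    for p_common in (0.0, P_COMMON):
        fails = sum(system_fails(p_common) for _ in range(TRIALS))
        print(f"p_common={p_common}: P(system failure) ~ {fails / TRIALS:.5f}")

With no common cause, the redundant pair fails about 0.0001 of the time (0.01^2); with even a small shared failure mode, failures jump to roughly 0.005, and the shared mode dominates so thoroughly that the redundancy barely helps.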
The problem is that any system built around cheap redundant parts will gradually degrade to its minimum viable state, because the humans who maintain it will eventually be lazy, greedy, or stupid enough to let it. So it also has to fail early by design, before reaching that state.
There is in software, but hardware is a chain of links, and the weakest link is the one that causes the chain to fail. Hence the focus on the reliability and quality of individual parts.
You make me think about how nuclear weapons tests are currently done: they are detonated underground, and the soil above the device collapses on top, containing the majority of the fallout. It's a reliable and simple system, which raises the question: could a nuclear power station be built that way? It wouldn't require the development of new technology.
Redundancy also adds a lot of complexity, since fail-over mechanisms aren't simple either. That added complexity then becomes yet another source of potential errors.
A friend of mine builds a component for a satellite system, and the FDIR (fault detection, isolation and recovery) mechanisms need to be chosen very carefully, as adding more fail-safes can actually make the system more error prone overall.
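That matches a simple back-of-the-envelope model: if the fail-over machinery itself misbehaves often enough, it costs more reliability than the backup adds. A toy sketch, with all numbers assumed purely for illustration:

    import random

    # The fail-over mechanism is itself a component that can fail.
    TRIALS = 1_000_000
    P_PRIMARY_FAILS   = 0.01   # primary component failure probability
    P_BACKUP_FAILS    = 0.01   # backup component failure probability
    P_FAILOVER_BROKEN = 0.02   # fail-over logic misbehaves (false trigger,
                               # missed detection, split-brain, ...)

    def with_failover():
        if random.random() < P_FAILOVER_BROKEN:
            return True        # the fail-over itself takes the system down
        if random.random() < P_PRIMARY_FAILS:
            # primary failed; we survive only if the backup works
            return random.random() < P_BACKUP_FAILS
        return False

    fails = sum(with_failover() for _ in range(TRIALS))
    print(f"with fail-over: P(failure) ~ {fails / TRIALS:.5f}")   # ~0.020
    print(f"primary alone:  P(failure) = {P_PRIMARY_FAILS:.5f}")  # 0.010

With these assumed numbers, the "redundant" system fails about twice as often as the unprotected primary, because the fail-over logic misbehaves more often than the primary fails in the first place.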
There's an interesting blog post from AWS on that topic [0]. It turns out adding more fallbacks and fail-safes is actually discouraged there.