Imagine being more eager to review a thousand lines of code (in which, of course, you can follow every code path) than to just pull up the GUI of a network monitor.
I've worked on projects that logged locally and only transmitted every ~60 days, when they detected the right network (e.g. public wifi). So unless you monitor it continuously and permanently, this isn't true.
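To make that concrete, here's a minimal sketch of the pattern (all names, paths, and thresholds are hypothetical, not code from any real project): events are queued on disk and nothing is transmitted unless roughly 60 days have passed and the machine is on an "acceptable" network.

```python
import json
import time
from pathlib import Path

QUEUE = Path("~/.cache/app/queue.jsonl").expanduser()   # hypothetical local queue
UPLOAD_INTERVAL = 60 * 24 * 3600                         # ~60 days, in seconds
PUBLIC_SSIDS = {"CoffeeShopWiFi", "AirportFree"}         # hypothetical allow-list

def log_event(event: dict) -> None:
    """Append telemetry locally; this path never touches the network."""
    QUEUE.parent.mkdir(parents=True, exist_ok=True)
    with QUEUE.open("a") as f:
        f.write(json.dumps({"ts": time.time(), **event}) + "\n")

def maybe_upload(current_ssid: str, last_upload_ts: float) -> bool:
    """Transmit only when enough time has passed AND the network 'looks right'."""
    if time.time() - last_upload_ts < UPLOAD_INTERVAL:
        return False
    if current_ssid not in PUBLIC_SSIDS:
        return False
    payload = QUEUE.read_text() if QUEUE.exists() else ""
    # send(payload)  # actual transmission elided; the point is how rarely this fires
    QUEUE.unlink(missing_ok=True)
    return True
```

A network monitor running for a week, or even a month, would simply never see the upload happen.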
I know of open source projects that update their code every 30 days. Unless you're continuously and permanently monitoring every patch of every library, this isn't true either.
Run the Debian Stable version and you're spared such churn. The version you're running may lag the current release by a few point versions, but that is a small price to pay for relative stability (as in 'know your daemons'). Security fixes are backported but new functionality is not. While not a perfect guarantee (remember the OpenSSL weak-key debacle), this strategy does provide a stable baseline which, in contrast to proprietary software [1], can be audited for telemetry, data leaks, etc.
[1] Yes, yes, yes, it is possible to run that proprietary tool through Ghidra (et al.) to look for nasties as well, but this is far harder; you can't just run a diff between two binaries the way you can between two source releases.
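For contrast, here's roughly what "just run a diff" looks like on the source side (a sketch; the directory names are hypothetical): between two point releases you only need to audit the files that changed, which is a far smaller surface than reversing two binaries.

```python
import filecmp

def changed_files(old_dir: str, new_dir: str, prefix: str = "") -> list[str]:
    """Recursively collect files that differ between two source trees."""
    cmp = filecmp.dircmp(old_dir, new_dir)
    changed = [prefix + name for name in cmp.diff_files + cmp.left_only + cmp.right_only]
    for sub in cmp.common_dirs:
        changed += changed_files(f"{old_dir}/{sub}", f"{new_dir}/{sub}", f"{prefix}{sub}/")
    return changed

if __name__ == "__main__":
    # hypothetical release trees; the audit surface is just this list, not the full tree
    for path in changed_files("libfoo-1.2.3", "libfoo-1.2.4"):
        print(path)
```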
i mean, sure... but let's ignore whatever malware project you were working on. :)
a lot of businesses live or die on the trust of their customers, don't they? arc's product is aimed at power users. surely if they were collecting telemetry and trying to hide the fact they were transmitting it, that would be a critical blow when discovered.
so while i totally agree that they _could_ operate like that, in most cases there is very little to be gained and a lot to be lost by being intentionally deceptive.
so this kind of comes down to what we consider "proof", but i don't think the software would need to be permanently monitored for a reasonable assurance.