
They are not unpreventable.

You can detect both the triggered behavior and the "hey, this looks like a logic bomb" pattern with static analysis. Sure, you may never trigger it with dynamic analysis of the app. But "some code that does things associated with malicious or otherwise bad behavior is guarded behind branches that check for specific responses from the app developer's server" is often enough to raise eyebrows.
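
To make the pattern concrete, here's a minimal Swift sketch of the kind of thing I mean. Everything here is made up for illustration (the endpoint, the config key, the function names); it's not from any real app, just the shape a reviewer's tooling would be looking for:

    import Foundation

    // Benign-looking code that only activates once the developer's server
    // returns a specific value. All names here are hypothetical.
    struct RemoteConfig: Decodable {
        let mode: String   // e.g. "review" vs. "live"
    }

    func loadHiddenFeatureIfEnabled() {
        // Illustrative endpoint controlled by the developer.
        let url = URL(string: "https://config.example-dev.com/flags")!
        URLSession.shared.dataTask(with: url) { data, _, _ in
            guard let data = data,
                  let config = try? JSONDecoder().decode(RemoteConfig.self, from: data)
            else { return }

            // The "logic bomb": a branch App Review never exercises, because
            // the server only starts returning "live" after approval.
            if config.mode == "live" {
                presentUndisclosedBehavior()
            }
        }.resume()
    }

    func presentUndisclosedBehavior() {
        // Stand-in for whatever the hidden behavior actually is.
    }

The eyebrow-raising signal is exactly that combination: a branch whose condition comes from the developer's server and whose guarded body does something the app never exposes otherwise.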



The static analysis should trigger an "explain why you're doing this" step as a criterion for approval.

But that would probably require some actual human code review, which costs $$s.

Apple could offload that to the developer in the form of review surcharges.


No need for human review, they can just reject anything suspicious.


I feel like "not suspicious" would eventually become an impossible bar.

You'd find a code pattern that was being used, and declare it suspicious.

Rinse and repeat, as people are still going to try getting around the rules.

Eventually you're left with some weird subset of a subset of a language that's legal to write iOS apps in.


In this case, suspicious code is anything that achieves a fairly narrow subset of possible outcomes, so I doubt it would come up much.

It's a common fallacy to assume that infinitely many cases must cover every possible case: 1, 10, 100, … is an infinite sequence, yet it covers ~0% of the possibilities.
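
A quick back-of-the-envelope way to see the ~0% figure (my own aside, using the powers-of-ten example above): among the first N natural numbers, only about log10(N) are powers of ten, so the covered fraction shrinks to zero even though the sequence is infinite.

    % Density of the powers of ten among the first N natural numbers
    \[
      \lim_{N \to \infty} \frac{\lvert \{\, 10^k : 10^k \le N \,\} \rvert}{N}
      = \lim_{N \to \infty} \frac{\lfloor \log_{10} N \rfloor + 1}{N} = 0
    \]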


Okay, but this is now a policy and procedure choice. The original claims were that these are undetectable.



