Security researchers have discovered how to get malicious apps through the closed software-security screening process Apple uses to approve software for its App Store, by posting attack apps disguised as games.
The researchers also demonstrated attack code that allows third parties to take control of iOS devices, uncover a user's PIN and app passwords, take pictures, and send text messages, all without the user realizing anything was wrong.
The hack uses “private function calls to gain privileges that are not intended for third-party developers,” according to Jin Han of A*STAR Institute for Infocomm Research and the Singapore Management University. “We found a way to bypass Apple’s vetting process so that our apps, embedded with proof-of-concept attacks, could be published on iTunes.”
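The general mechanism can be sketched outside iOS. The attacks hinge on calling functions that never appear in an app's declared interface, so a static review of its visible imports misses them. The minimal stand-in below (not the researchers' code; libm's ordinary `cos()` stands in for a private Apple API) resolves and calls a symbol purely by its string name at run time:

```python
# Sketch of dynamic, by-name symbol resolution -- the general mechanism
# behind "private function calls" that a static scan of declared imports
# cannot see. The library and symbol here are ordinary, not Apple's.
import ctypes
import ctypes.util

# Locate and load a shared library at run time, by name.
libname = ctypes.util.find_library("m") or "libm.so.6"
libm = ctypes.CDLL(libname)

# Resolve the function from a plain string -- nothing about this call
# shows up in any compile-time import list.
hidden_call = getattr(libm, "cos")
hidden_call.restype = ctypes.c_double
hidden_call.argtypes = [ctypes.c_double]

print(hidden_call(0.0))  # the call succeeds despite never being declared
```

An app built this way looks innocuous to any reviewer who only inspects the symbols and frameworks it declares up front.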
Apple tries to filter malicious applications from its App Store by requiring that all third-party developers submit their code to a testing and evaluation process whose details Apple keeps secret. Even after approval, third-party applications are granted only limited privileges and run within a sandbox that restricts their access to system resources and prevents them from granting themselves privileges that would provide broader access.
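The sandbox itself is driven by per-app policy. As a rough illustration (not Apple's actual App Store profile), seatbelt-style sandbox profiles on Apple platforms use a Scheme-like rule language that denies everything by default and then whitelists specific resources:

```
;; Illustrative seatbelt-style profile fragment; the path and rules
;; are examples only, not Apple's shipping policy.
(version 1)
(deny default)                                     ; nothing is allowed...
(allow file-read* file-write*                      ; ...except the app's
       (subpath "/private/var/mobile/Containers")) ; own data container
```

The researchers' point is that a deny-by-default profile only helps if the vetting process catches apps that reach outside it through undeclared calls.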
“It was generally believed that these iOS security mechanisms are effective in defending against malware,” according to a paper lead-authored by Han and published in the proceedings of Applied Cryptography and Network Security.
“Our proof-of-concept attacks have shown that Apple’s vetting process and iOS sandbox have weaknesses which can be exploited by third-party applications,” the paper stated. It is a follow-on to an October comparison of iOS and Android security that found three-quarters of applications request access to more security-sensitive APIs when they run on iOS than the same applications do on Android.
In November, Hewlett-Packard Co. reported that more than 90 percent of iOS apps have exploitable security flaws.
Tests on more than 2,000 apps used by 600 customer organizations in 50 countries showed that 86 percent were vulnerable to SQL injection or other attacks, and 97 percent inappropriately accessed private information on an iOS device, HP reported.
Han and the team from the Institute for Infocomm Research shared the results with Apple's security team before publication, and described in their paper how users can limit the damage from the exploits, as well as how Apple could fix them.
The ongoing issue, according to Han, is whether secret, closed-source application development and security checks are more effective than a more open process at finding security flaws.
“A cryptosystem should be secure even if everything about the system, except the key, is public knowledge,” Han wrote. “I think the same principle applies to operating systems.”
The ability of any user to check Android’s code or of any security researcher to write code to reinforce a weak spot in the mobile OS’s security is a crucial advantage compared to more closed processes like those used by Apple, according to a Dec. 2 post by Android security engineer Adrian Ludwig on the official Android blog.
Contributions from the security community add more protection to the “KitKat” version of Android (v4.4) and extend the use of SELinux (Security-Enhanced Linux), the mandatory access control framework built into the Linux kernel on which Android runs, Ludwig wrote.
KitKat includes a reinforced sandbox designed to keep apps from taking more privileges than the owner has allowed or accessing private data without permission, an improvement that comes from switching SELinux from permissive mode, in which policy violations are only logged, to enforcing mode, in which they are blocked.
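In SELinux terms, that enforcement is expressed as type-enforcement policy: once the kernel runs in enforcing mode, any access not covered by an explicit allow rule is denied (in permissive mode the same denial would only be logged). A rule granting an app domain access to its own data files looks roughly like this (the domain and type names are illustrative, not a verbatim excerpt of Android's policy):

```
# Type-enforcement rule: let processes in the untrusted_app domain open,
# read, and write files labeled app_data_file. Any access not matched by
# an allow rule is denied in enforcing mode.
allow untrusted_app app_data_file:file { open read write };
```

Because the default answer is "deny," a third-party app cannot simply grant itself broader access the way the iOS proof-of-concept apps did through private calls.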
Security on both iOS and Android is suspect enough that the Pentagon’s Defense Information Systems Agency (DISA) has disallowed both operating systems on Defense Dept. networks, according to a Dec. 3 story on NextGov. Only BlackBerry phones and PlayBook tablets are authorized for the purpose.