iOS Security; iOS 9.0 or later
Comments on this paper (Sept 2015 version)
This is a high-level but not content-free document.
A key noun phrase is “an attacker in possession of a device”.
While important, that is not what my pages are normally about.
Here are a few comments as I read.
This secure boot chain helps ensure that the lowest levels of software are not tampered with and allows iOS to run only on validated Apple devices.
The descriptions leading up to this do not explain how iOS is limited to Apple devices, except perhaps through obscurity.
Perhaps the code is encrypted by a key known by devices from Apple.
Indeed this seems clear from comments on GID on page 10.
I presume that the signed code is loaded from unprotected device storage, or alternatively from “iTunes”.
That seems to serve the ‘valid device’ goal.
During an iOS upgrade, iTunes (or the device itself, in the case of OTA [over the air] software updates) connects to the Apple installation authorization server and sends it a list of cryptographic measurements for each part of the installation bundle to be installed (for example, LLB, iBoot, the kernel, and OS image), a random anti-replay value (nonce), and the device’s unique ID (ECID).
This is obscure.
How does the device learn of a “part of the installation bundle to be installed” so that it can send a “cryptographic measurement” thereof?
Asked otherwise why is said “part of the installation” not already signed and ready to go?
Perhaps some combinations are not considered valid and the “installation authorization server” performs some sort of compatibility check.
Perhaps combinations once thought secure are discovered to be insecure.
This service would constitute a sort of “certificate revocation list”.
Two subsequent paragraphs explain how this may prevent reversion to an earlier kernel with known vulnerabilities.
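Here is a minimal sketch, in Python, of what I take this authorization to be. The names (authorize_install, APPROVED_MEASUREMENTS) and the Ed25519 signature are my invention, not Apple’s protocol: the server refuses any component no longer on its allow-list, and its signature binds the approved components to this ECID and this nonce, which is what blocks both replay and reversion.

```python
# Hypothetical sketch of the "personalized" install authorization described above.
import os
from hashlib import sha256
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Server-side signing key; the device would hold the matching public key.
server_key = Ed25519PrivateKey.generate()

# Allow-list of measurements the server still considers good (e.g. LLB, iBoot,
# kernel, OS image).  Removing an entry is how a vulnerable version is "revoked".
APPROVED_MEASUREMENTS = {
    sha256(b"iBoot-3406.0.0").digest(),
    sha256(b"kernelcache-15A372").digest(),
}

def authorize_install(measurements, ecid, nonce):
    """Return a signed ticket binding these exact components to this device
    and this request, or None if any component is no longer approved."""
    if not all(m in APPROVED_MEASUREMENTS for m in measurements):
        return None                      # downgrade or unknown component: refuse
    blob = b"".join(sorted(measurements)) + ecid + nonce
    return server_key.sign(blob)         # valid only for this ECID and this nonce

# Device side of the exchange.
device_ecid = (1234567890).to_bytes(8, "big")
nonce = os.urandom(16)                   # anti-replay value
measurements = [sha256(b"iBoot-3406.0.0").digest(),
                sha256(b"kernelcache-15A372").digest()]
ticket = authorize_install(measurements, device_ecid, nonce)
print("authorized" if ticket else "refused")
```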
It [Secure Enclave] provides all cryptographic operations for Data Protection key management and maintains the integrity of Data Protection even if the kernel has been compromised.
This makes it much harder to steal private or session keys.
It may effectively solve the covert key leak problem.
It does not do much to prevent the plaintext end of a cipher service outside the Secure Enclave from being taken over and abused.
How does it decide how to respond to a message “now sign this: …”?
When the device starts up, an ephemeral key is created, entangled with its UID, and used to encrypt the Secure Enclave’s portion of the device’s memory space.
I wonder how well they guard against replay from a compromised ‘device memory’?
How does the Secure Enclave ensure that the data resulting from a fetch is what was most recently stored there, as contrasted to earlier content at the same location?
This is not the paper to answer such questions.
The next paragraph mentions an ‘anti-replay counter’ (for files) which is the sort of protection that is probably too expensive for the interface between DRAM and cache.
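A sketch of what “entangled with its UID” might mean, and of why an authentication tag alone does not answer my replay question. The HMAC mixing and AES-GCM here are stand-ins for whatever Apple actually uses.

```python
# Sketch of a per-boot memory key "entangled" with the UID, and of why an
# authentication tag alone does not detect replay of older memory contents.
import os, hmac, hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

device_uid = os.urandom(32)        # fused into the hardware, never leaves it
boot_entropy = os.urandom(32)      # fresh each boot, so the key is ephemeral

# "Entangled": the key depends on both, and is useless on any other device.
memory_key = hmac.new(device_uid, boot_entropy, hashlib.sha256).digest()
aead = AESGCM(memory_key)

addr = (0x1000).to_bytes(8, "big") # bind each ciphertext to its address
nonce_v1 = os.urandom(12)
ct_v1 = aead.encrypt(nonce_v1, b"enclave page, version 1", addr)
# The Enclave later overwrites the page with version 2 ...
ct_v2 = aead.encrypt(os.urandom(12), b"enclave page, version 2", addr)

# ... but an attacker who controls DRAM can put the old bytes back, and they
# still authenticate; detecting this needs a counter or hash tree, not just a tag.
print(aead.decrypt(nonce_v1, ct_v1, addr))
```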
The Secure Enclave is responsible for processing fingerprint data from the Touch ID sensor, determining if there is a match against registered fingerprints, and then enabling access or purchases on behalf of the user.
That is a scary amount of software to be running in the Secure Enclave; perhaps they use L4 to contain it.
This technology reads fingerprint data from any angle and learns more about a user’s fingerprint over time, with the sensor continuing to expand the fingerprint map as additional overlapping nodes are identified with each use.
Apple elides the important issue of whether this ‘expansion’ makes admission harder or easier.
Indeed what are the failure modes?
This is a general problem with fingerprints; they are not well evaluated.
It smells of security theater.
The listed situations requiring a passcode ameliorate this.
If Touch ID is turned off, when a device locks, the keys for Data Protection class Complete, which are held in the Secure Enclave, are discarded. The files and keychain items in that class are inaccessible until the user unlocks the device by entering his or her passcode.
This is obscure.
Let me guess what it means.
When you ‘unlock’ your device, it ‘derives’ from your passcode a symmetric key with which it can read and write your non-volatile data.
When the ‘device locks’ that key is expunged from the device.
The phrase “Data Protection class Complete,” is defined on page 12 under “Data Protection classes”.
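Here is my guess rendered as code. PBKDF2 with the UID as salt and an RFC 3394-style key wrap are stand-ins for whatever the Secure Enclave really does; the names are mine.

```python
# Sketch of the guess above: passcode -> derived key -> class key; on lock the
# Complete class key is simply forgotten.
import os, hashlib
from cryptography.hazmat.primitives.keywrap import aes_key_wrap, aes_key_unwrap

device_uid = os.urandom(32)                 # stands in for the hardware UID

def passcode_key(passcode: bytes, iterations: int = 100_000) -> bytes:
    # The real derivation is tangled with the UID inside the Secure Enclave;
    # PBKDF2 with the UID as salt is only a stand-in for that idea.
    return hashlib.pbkdf2_hmac("sha256", passcode, device_uid, iterations)

# At enrollment: generate the class key and keep it only in wrapped form.
class_key_complete = os.urandom(32)
wrapped_class_key = aes_key_wrap(passcode_key(b"123456"), class_key_complete)

# Unlock: re-derive the key from the entered passcode and unwrap the class key.
unlocked_class_key = aes_key_unwrap(passcode_key(b"123456"), wrapped_class_key)

# Lock: discard the plaintext class key; only the wrapped copy remains, so files
# in class Complete stay unreadable until the passcode is entered again.
unlocked_class_key = None
```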
Page 8:
The nonce is signed with a Secure Enclave key shared by all devices and the iTunes Store.
Of what use is a key that is shared so widely?
Perhaps they mean “Apple devices” by “devices”.
The app is only notified as to whether the authentication was successful; it cannot access Touch ID or the data associated with the enrolled fingerprint.
Can the app discriminate among the enrollees?
Page 12:
A large iteration count is used to make each attempt slower.
The iteration count is calibrated so that one attempt takes approximately 80 milliseconds. This means it would take more than 5½ years to try all combinations of a six-character alphanumeric passcode with lowercase letters and numbers.
I presume that ‘80 milliseconds’ refers to the CPU time for the iPhone to try a guessed passcode.
Custom hardware might be faster.
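The arithmetic behind Apple’s figure, assuming 36 symbols (26 lowercase letters plus 10 digits) and 80 milliseconds per on-device attempt:

```python
# Reproduces the "more than 5 1/2 years" claim from the quoted passage.
symbols = 26 + 10                  # lowercase letters and digits
attempts = symbols ** 6            # every six-character passcode
seconds = attempts * 0.080         # 80 ms per on-device attempt
years = seconds / (365 * 24 * 3600)
print(f"{attempts:,} attempts ~ {years:.1f} years")   # ~ 5.5 years
```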
Each class [data protection class] uses different policies to determine when the data is accessible.
I await a description of “accessible to whom?”.
Page 13: I take it that when the ‘per-file’ key is available somewhere in plaintext is determined only by the ‘Data Protection Class’ of that file, and not by more frequent application decisions.
The app defines the class upon creation.
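A sketch of the hierarchy as I read page 13: the per-file key is wrapped by the key of the class chosen at creation, so accessibility thereafter turns only on whether that class key is currently available. The RFC 3394 key wrap is suggested by the paper’s own references; the rest of the names are mine.

```python
# Per-file key wrapped by the class key chosen when the app creates the file.
import os
from cryptography.hazmat.primitives.keywrap import aes_key_wrap, aes_key_unwrap
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

class_keys = {"Complete": os.urandom(32)}      # present only while unlocked

def create_file(contents: bytes, protection_class: str):
    file_key = os.urandom(32)                  # unique per file
    nonce = os.urandom(12)
    ciphertext = AESGCM(file_key).encrypt(nonce, contents, None)
    wrapped = aes_key_wrap(class_keys[protection_class], file_key)
    return {"class": protection_class, "wrapped_key": wrapped,
            "nonce": nonce, "data": ciphertext}

def read_file(f):
    # Accessibility depends only on the file's class, not on later app decisions.
    file_key = aes_key_unwrap(class_keys[f["class"]], f["wrapped_key"])
    return AESGCM(file_key).decrypt(f["nonce"], f["data"], None)

f = create_file(b"medical notes", "Complete")
print(read_file(f))
```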
Keychain items can only be shared between apps from the same developer.
Arghh: too narrow and too broad!
Bad like ‘same origin policy’!
Page 19:
Unlike other mobile platforms, iOS does not allow users to install potentially malicious unsigned apps from websites, or run untrusted code.
At runtime, code signature checks of all executable memory pages are made as they are loaded to ensure that an app has not been modified since it was installed or last updated.
This represents the view that good people don’t write bad code and that we know good people when we see them.
Keykos does require trusting code, but typically a small, identifiable set.
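A sketch of the per-page check the quoted sentence describes: hash each executable page and compare it against a manifest of page hashes signed at install time. The manifest format and names here are mine, not Apple’s.

```python
# Per-page code-signature checking: a page modified after signing fails the
# check when it is first loaded.
import os
from hashlib import sha256
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

PAGE = 4096
binary = os.urandom(3 * PAGE)                       # stand-in for app text pages
pages = [binary[i:i + PAGE] for i in range(0, len(binary), PAGE)]

# Install time: hash every page and sign the list of hashes once.
manifest = b"".join(sha256(p).digest() for p in pages)
signer = Ed25519PrivateKey.generate()
signature = signer.sign(manifest)
verify_key = signer.public_key()

# Load time: verify the manifest, then check each page as it is faulted in.
verify_key.verify(signature, manifest)              # raises if manifest altered

def load_page(index: int, data: bytes) -> bool:
    expected = manifest[index * 32:(index + 1) * 32]
    return sha256(data).digest() == expected

print(load_page(1, pages[1]))                       # True
print(load_page(1, b"patched!" + pages[1][8:]))     # False: modified since signing
```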
All third-party apps are “sandboxed,” so they are restricted from accessing files stored by other apps or from making changes to the device.
I suppose that that means that Apple’s apps run with infinite ‘app privilege’.
Page 20:
For example, a return-to-libc attack attempts to trick a device into executing malicious code by manipulating memory addresses of the stack and system libraries.
If maliciously crafted packets from outside can defeat …
Page 25:
For example, apps are not allowed to utilize health data for advertising.
I am glad of that.
I am not glad that Apple reserves to itself such decisions, excluding the user.
Nor am I aware of whether there is a way for the user to learn of entitlements.
It is curious that a document titled “iOS Security Guide” omits “entitlement” from its glossary.
Google “Secure Enclave coprocessor”, “CTR_DRBG”, “AES-XTS”, “RFC 3394”.
Issue: frequently written non-volatile storage.
Perhaps “Effaceable Storage”?
Another threat: “Is this really my iPhone that wants me to enter my passcode?”.
Here is some “Effaceable Storage” detail.
Perhaps effaceable storage need not be often written and circumvention of ordinary “wear leveling” suffices.
You can brick your iPhone only a limited number of times; we already knew that.
It does not help against replay attacks on encrypted RAM.
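A sketch of the crypto-erase idea behind Effaceable Storage: only a wrapping key lives in the small effaceable region, so destroying those few bytes renders all of the bulk flash unreadable; it is that small region, not the bulk flash, that must defeat wear leveling.

```python
# Crypto-erase: overwrite a few dozen bytes in place instead of the whole flash.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

effaceable_region = bytearray(os.urandom(32))    # the only copy of the key
nonce = os.urandom(12)
bulk_flash = AESGCM(bytes(effaceable_region)).encrypt(
    nonce, b"all user data, gigabytes of it", None)

def erase_device():
    # One small, targeted overwrite, bypassing ordinary wear leveling.
    for i in range(len(effaceable_region)):
        effaceable_region[i] = 0

erase_device()
# bulk_flash remains, but there is no longer any key with which to decrypt it.
```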
Keychain
Many apps need to handle passwords and other short but sensitive bits of data, such as keys and login tokens.
The iOS keychain provides a secure way to store these items.
I find this a peculiar perspective.
If the ordinary storage of the app is vulnerable, then the behavior of the app is similarly vulnerable.
This may not be an axiom but I can’t think of any exceptions.
If the app behavior is at risk, but the app is still able to wield the authority preserved by the keychain, then how is that authority not vulnerable?
The only answer that I see is that a secret key could not be used to decipher text except when the phone is on.
A private RSA key could not be used to sign nor to decipher a message when the phone is off.
This is an obscure advantage.
It is a violation of abstraction and may thus introduce more problems than it solves.
“… the securityd daemon determines which keychain items each process or app can access.”
That is a bit vague.
Entitlements
Thank you, Apple, for not calling them capabilities (as in POSIX).
While similar, there are vital differences.
App developers express their intent to access Safari saved passwords by including an entitlement in their app.
I think that this is how the developer tells Apple what authority the app expects to use while running.
I suppose that the ‘install package’, signed by Apple, indicates such authority to be granted as the app runs.
In the middle of a section titled “Runtime Process Security” I see:
Access by third-party apps to user information and features such as iCloud and extensibility is controlled using declared entitlements.
Entitlements are key value pairs that are signed in to an app and allow authentication beyond runtime factors like unix user ID.
Since entitlements are digitally signed, they cannot be changed.
Entitlements are used extensively by system apps and daemons to perform specific privileged operations that would otherwise require the process to run as root.
This greatly reduces the potential for privilege escalation by a compromised system application or daemon.
There is an implicit assumption here that entitlements never change: that any formal analysis assumes some fixed set of entitlements.
This is one big difference from capability structures as suggested in Lampson’s paper “Dynamic Protection Structures”.
Another difference is that entitlements are protected by crypto rather than by some small, simple software function.
Entitlements, too, defer to software to interpret the crypto.
Entitlements pay the crypto cost on each check unless the results are remembered by privileged software consulting privileged memory.
I suspect that this is the case.
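Here is a sketch of that suspicion: entitlements as a signed key-value dictionary, verified once by privileged software and thereafter consulted from privileged memory. The JSON encoding, Ed25519 signature, and names are my invention, not Apple’s format.

```python
# Entitlements as a signed key-value dictionary: the crypto cost is paid once
# per app, and later checks are answered from memory.
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

apple_key = Ed25519PrivateKey.generate()        # stands in for Apple's signing key
apple_public = apple_key.public_key()

# Fixed when the app is signed; nothing at runtime can add to this set.
entitlements = {"com.example.healthkit": True, "com.example.icloud": False}
blob = json.dumps(entitlements, sort_keys=True).encode()
signature = apple_key.sign(blob)

_verified = {}                                  # signature checked once per app

def entitlement(app_id, blob, signature, key):
    """Return the value of `key` for app_id, verifying the blob only the first time."""
    if app_id not in _verified:
        try:
            apple_public.verify(signature, blob)
            _verified[app_id] = json.loads(blob)
        except InvalidSignature:
            _verified[app_id] = {}              # bad signature: no entitlements at all
    return _verified[app_id].get(key, False)

print(entitlement("com.example.app", blob, signature, "com.example.healthkit"))  # True
print(entitlement("com.example.app", blob, signature, "com.example.camera"))     # False
```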
Dynamic protection (capabilities) allows security patterns that far transcend what is possible with static protection.
See the literature.
Systems such as entitlements exclude from any formal analysis the real mechanisms whereby entitlements are created.
Capabilities specifically include the creation of capabilities in formal analysis.
Entitlements seem to require judgment by Apple about the code, but ultimately about the character of the author.
Capability solutions do require trust of some code, but vastly less.
Signing code may play a role for code on whose discretion you must depend and whose author you know something of.