There is an interesting article HERE that describes the new security features of iOS 7 and Mavericks. It also asks some interesting questions that still need answering.
As seen on The Hacker News, there is currently a way to bypass the iPhone lock screen (iPad with SIM too?) on devices running iOS 6.1.x.
I had to change the steps listed in The Hacker News slightly for them to work:
- Go to the emergency call screen, push down the power button and tap Cancel.
- Dial 112 and tap the green button, then immediately the red one.
- Go to the lock screen by pressing the power button.
- Go to the passcode screen by pressing the home button.
- Keep holding down the power button for …1…2…3… seconds and, just before the “slide to power off” slider appears, tap the emergency call button and… voilà!
- Then, without releasing the power button, press the home button and let go…
From there you gain full access to the Phone application and can change/add/delete contacts, as well as use the phone to make calls, but you cannot end a call you started with that technique.
It all started with some findings published by Trevor Eckhart on his website a few weeks ago.
He found that a California-based company called Carrier IQ (CIQ) had developed software that acted as a *key logger* and was installed by default on many different mobile devices: Android, BlackBerry and Nokia phones, iPhones (iOS 3.x to 5.x), and also tablets.
The important point here is that this software is intentionally installed/provided by the device manufacturers or network carriers. It is quite amazing how widespread that spying software is (the BBC reported 140 million devices), and it is not limited to one type of device or provider. What is collected may differ (apparently much less on iOS than on Android), but it shows a systemic desire from the companies who make and sell those devices to gather usage and user information.
This is what I would call, the Facebook syndrome!
The official stance from CIQ was that their software was only used to improve the “network experience” by sending information such as signal strength and network details back to the carrier and phone manufacturer.
They explicitly stated that they “do not and cannot look at the contents of messages, photos, videos, etc., using this tool”.
This is not what you would say about software that logs every key pressed on your device…
Again, it is important to note that by default their software is not hidden (there is a visible check-mark in the status bar) but this can be modified by 3rd parties. And it is being modified!
One example given by Trevor is Verizon in the US: although you can opt out, by default the phones they sell will record and transmit (?) the following personal user information: any URL accessed, including potential search queries, and the location of the device. This could be considered a significant invasion of personal privacy.
So how did CIQ react to Trevor’s post?
By sending him a Cease and Desist letter on the 16th of November!
They claimed Trevor was infringing their copyright (because he had referenced some of their publicly available training material) and making false allegations.
As reported by The Register on the 24th of November, they eventually withdrew their legal threats, thanks to the legal help of the EFF, who nicely summarise the case on their website, and to a new post showing exactly what Trevor meant by calling the CIQ software a “rootkit” (I called it a “key logger” earlier, but rootkit is more accurate and has wider security implications).
Trevor’s second CIQ article goes into detail as to why the CIQ software is indeed a rootkit, with a video showing the different steps required to reproduce his tests. It also describes how the data is collected even when you are off the network and how, at least on an HTC phone, the data is not really anonymised.
Since then, another mobile phone hacker has published some findings about CIQ, this time confirming that Apple has included CIQ software in all its iOS versions from iOS 3 to the latest iOS 5. However, it seems that much less information is logged on Apple devices than on Android: no URLs or SMS, and the location is only logged if you have allowed it to be. Furthermore, that information is not transmitted by default, but only if the user manually chooses to send diagnostic information to Apple.
All this has generated an increasing level of noise and attention:
- Apple made a statement that although they had not been using that software for some time, they will remove it completely in a future iOS update;
- a US Senator, Al Franken, who had previously voiced privacy concerns about location tracking, has asked CIQ for an explanation;
- The Register has asked CIQ for comments, and will post an update whenever they get a response.
- The BBC is running a story on their website stating that CIQ has been installed on over 140 million devices!
- The Guardian is reporting that, apparently, UK carriers do not use CIQ; I wouldn’t be surprised if we soon learn otherwise…
- The latest response from CIQ can be seen here.
As pointed out in a viaForensics article, it is not clear when, or even if, the data CIQ logs on the phone is transmitted, or whether it just remains on the device. And if it is transmitted, to where? But if it is being transmitted, I have a little story for you…
A few years ago I went on holiday and decided to take an international data plan. I had an iPhone 3G at the time, and I monitored my data consumption every day with the built-in iOS bandwidth statistics. I stopped using data on my phone when I reached 90% of my prepaid allowance.
I was therefore very surprised when I was charged for going over my data allowance by a good margin! How could I have miscalculated my data consumption by so much!? After complaining to my provider, they eventually claimed that the built-in iOS bandwidth statistics only showed average figures and were not accurate. I also read in a forum at the time that Apple claimed their figures should be taken as estimates only. With that in mind, I decided not to pursue it further, accepted to pay the extra fee, and promised myself never to use data roaming again.
Now, it would be interesting to know whether all the network traffic generated by CIQ is counted in those mobile OS bandwidth statistics or whether, like the information it gathers, that traffic is also hidden from view.
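To make that point concrete, here is a toy calculation (all figures are hypothetical, not taken from any bill) of what happens if diagnostic traffic is billed by the carrier but invisible to the on-device counter:

```python
# Toy illustration (all figures are made up): if diagnostic traffic is
# billed by the carrier but not shown by the on-device counter, a user
# can exceed their allowance while the phone still reports ~90% used.
allowance_mb = 100       # prepaid roaming allowance (hypothetical)
visible_usage_mb = 90    # what the built-in iOS counter shows
hidden_usage_mb = 25     # hypothetical unreported diagnostic traffic

billed_mb = visible_usage_mb + hidden_usage_mb
overage_mb = max(0, billed_mb - allowance_mb)

print(f"Counter shows: {visible_usage_mb / allowance_mb:.0%} of allowance")
print(f"Carrier bills: {billed_mb} MB -> {overage_mb} MB over the limit")
```

In that (entirely invented) scenario, the user would be charged for an overage the phone never showed them coming.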
After all, if the provider goes to great lengths to hide the data they collect from you, they probably don’t want you to see that sealed fat envelope leaving your phone!
If that’s the case, how legal is this?! Not only is gathering/spying on user information questionable, but doing so could be at the expense of the user! Couldn’t it be considered a hidden cost of their service? Could it explain the unexplainable extra fee I had to pay?
So I have three final comments to make:
- Mobile device companies are like any others: they want users’ personal information. But unlike others, they have full control of the device you discuss your life on.
- Opting in to usage statistics should be just that, an optional choice! And it should be made clear that it could result in extra costs, especially when roaming!
- If CIQ data consumption is also hidden from mobile OS statistics, then this is an extra hidden cost to the user.
Another attack on iOS security was published today, and it features two recurring themes from the attacks I described in previous posts, namely weaknesses in the Keychain and in the iOS encryption implementation.
But this time they have been used differently and seem to give an attacker access to any passwords stored on an iOS device, even if it is passcode protected.
One main difference with this attack is that the attacker only requires the iOS device and nothing else (as opposed to previous attacks, which also required the relevant synced PC).
It also seems to prove Zdziarski’s concerns over the iOS encryption controls to be true.
The attack uses some jailbreaking techniques to access the iOS device’s boot environment and RAM, bypassing the passcode and using the OS to run a script that accesses the local keychain and all the passwords it may contain (email, VPN, web apps, etc.).
It seems that the encryption keys are not linked to the user’s passcode, which means that if someone can bypass the passcode, even though the data is in theory still encrypted, the attacker can use the iOS device itself to decrypt the data for him!
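The underlying weakness can be sketched with a toy model (this is NOT Apple’s actual key hierarchy; the UID value and derivation are invented for illustration): if the key protecting the data is derived only from a device-bound secret, the passcode never enters the cryptography at all, so bypassing the passcode check is enough.

```python
import hashlib

# Toy model (NOT Apple's real key hierarchy): here the data-protection key
# is derived only from a device-bound secret (a hardware UID), so the
# passcode never enters the key derivation at all.
DEVICE_UID = b"unique-hardware-id"  # hypothetical value; imagine it burned into the chip

def derive_key(passcode):
    # The passcode is deliberately ignored, mirroring the weakness described:
    # any code running on the device can derive the key without knowing it.
    return hashlib.sha256(DEVICE_UID).digest()

# The legitimate user has set a passcode...
key_for_user = derive_key("1234")
# ...but an attacker who bypasses the passcode (e.g. via a jailbreak)
# obtains exactly the same key, because only the device secret matters.
key_for_attacker = derive_key(None)

assert key_for_user == key_for_attacker
print("key is identical with or without the passcode:", key_for_user.hex()[:16])
```

In a scheme where the key was instead derived from the passcode as well, bypassing the lock screen would still leave the attacker unable to decrypt the data.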
Remember when I said it was “bad, but not that bad”? Well, now it may be THAT bad! ;)
All the details, video and whitepaper, are available here:
Following my previous post, I wanted to put into perspective some of the risks I described.
In a nutshell, it is bad, but not that bad! :)
There is indeed a forensic issue with the escrow keybag feature, but because it requires the attacker to have both the targeted mobile device and the computer used to sync it, the attacker would first need to break the computer’s security to access its filesystem.
Because that computer is used to sync the mobile device, most of the information the device contains is likely to be on the computer as well.
For example, email accounts are likely to have been set up on both the computer and the mobile device, office files are likely to have been created on the computer, etc.
Therefore, gaining access to the computer’s filesystem is likely to already give you access to most of the mobile device’s data.
Having said that, there is no guarantee this will always be the case, and some information, such as call history, text messages and internet history, would only be available on the mobile device (and its hopefully encrypted backup).
The point is that although the Escrow Keybag can indeed be used to bypass a mobile device’s protection, and is therefore a security risk, it should be put into context with how difficult it is to gain access to the synced computer in the first place.
In other words, it is bad, but not that bad!
Full Disk Encryption
The statement that I reproduced about the level of security offered by the iOS full disk encryption control should also be put into a wider context.
Jonathan Zdziarski claims it is inadequate because it automatically decrypts data whenever the data is requested; the way I understand it, its level of security is therefore dependent on the strength of the passcode used and on the device’s OS security (sandboxing, access control, etc.).
But this is also true for any full disk encryption control, on any platform.
If you gain knowledge of, or access to, the passcode, you can then access the data.
And if malware runs on your full-disk-encrypted device, it would not be prevented from accessing any data associated with your credentials.
I therefore do not believe this is an Apple specific security risk.
In other words, it is bad, but not that bad!
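Since the protection ultimately reduces to passcode strength, a back-of-the-envelope sketch shows why that matters (the per-attempt timing below is an assumption for illustration, not a measurement of any real device):

```python
# Back-of-the-envelope sketch: if full disk encryption security reduces to
# the passcode, brute-force cost is just combinations x time-per-attempt.
# The per-attempt timing is an assumption, not a measured figure.
per_try_seconds = 0.1  # hypothetical cost of one on-device unlock attempt

for digits in (4, 6, 8):
    combos = 10 ** digits
    worst_case_hours = combos * per_try_seconds / 3600
    print(f"{digits}-digit passcode: {combos:>11,} combinations, "
          f"~{worst_case_hours:,.1f} h worst case")
```

Under that (assumed) timing, a 4-digit passcode falls in well under an hour, which is why a longer alphanumeric passcode, and OS controls that rate-limit attempts, matter so much.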
Over the weekend I have been researching how Apple implements their full disk encryption control, and what I found puzzled me:
Although technically Apple provides a hardware full disk encryption solution, in the traditional security sense of the term there is currently no full disk encryption available on the iPhone/iPad! It sounds like a paradox? Let me explain…
The closest analogy I can think of would be someone selling you a house and claiming that the whole house was protected with alarms in every room. The only problem is that the alarms would only work when nobody was in the house… meaning the only protection your house effectively had was a simple front door key.
A few websites have been running a story today on an upcoming attack announcement/demo at next week’s Black Hat conference.
Instead of targeting the OS or a specific app, that attack targets bugs directly in the baseband chip, the component used to send and receive calls. Although technically it is still a software attack (it exploits the code used to control that chip), it would bypass any security measures in place at the OS level and would, in particular, be outside of Apple’s/Google’s control. Such an attack could be used to intercept calls or to spy on a phone user by activating the phone’s microphone…
But then surely you would also need to find a bug in the microphone chip, or elevate your privileges at the OS level from the baseband chip bug?
Anyway, eavesdropping on calls would at least be possible.
What makes this news interesting is both that duplicating a cell tower is becoming easier/cheaper (about $2k) and that you can’t secure and control everything, even in closed systems such as iOS devices. Until they start manufacturing every single component themselves, phone manufacturers will have to rely on a multitude of other vendors, all with different security agendas.
Now, if I were working for a state security agency I would invest in some key communication component companies… As hacking becomes more and more lucrative/political, how long until the “bad guys” start thinking alike? But then you would call me paranoid ;)
Below is a link to the first website where I read that story today: