Protecting iOS Source Code

January 8, 2019

Hackers exultantly publish their successful iOS cracks online with a triumphant splash, rejoicing in the mayhem and basking briefly in the admiration of fellow anarchists. Apple, eternally one step behind in the battle, responds with security patches to iOS. The recent lock-screen bypass is widely illustrated on hacker sites with instructional videos, and Apple recently shipped a fix for the GrayKey passcode hack revealed a few months earlier. The dramatic news story of a security breach typically features the feared loss of customers’ credit cards and personal data. But the untold story of how hacker antics lead indirectly to the loss of iOS source code is one of our focuses here on the front lines of protecting IP.

Another important weakness which leads to the exposure of iOS source code is internal security failure. Developers are clever, and they can easily create backdoors for entry by unauthorized interlopers – including themselves! They can also code backdoors to look accidental and hide their own tracks. More abundant, though, are the truly unintentional failures when coders do not comply with well-known best practices for securing iOS source code. Developer shortcuts top the list of issues to resolve in order to protect Swift and Objective-C source code on iOS devices. These include:

  • Hardcoded OAuth credentials
  • Scripted logins for automation testing
  • Compiled apps which contain API keys
  • Memory safety bugs, dangling pointers
  • Continuous integration platform weaknesses
  • Versioning platforms without code scanning
  • Insecure code repositories
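The first item on the list, hardcoded OAuth credentials, is easy to illustrate. Here is a minimal sketch (in Python for brevity; the same principle applies to Swift and Objective-C, where the runtime store would be the keychain). The variable names and the `OAUTH_TOKEN` environment variable are illustrative assumptions, not a prescribed API:

```python
import os

# Anti-pattern: a secret baked into source survives compilation and
# ships inside every copy of the binary, where it is trivially recovered.
HARDCODED_TOKEN = "oauth-abc123-DO-NOT-DO-THIS"

def get_token_insecure():
    return HARDCODED_TOKEN

# Better: resolve the secret at runtime from a store outside the
# repository (an environment variable here; the keychain on iOS).
def get_token(env=os.environ):
    token = env.get("OAUTH_TOKEN")
    if token is None:
        raise RuntimeError("OAUTH_TOKEN not configured")
    return token
```

The second version keeps the credential out of version control entirely, so a leaked repository or decompiled binary yields nothing.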

In the interest of preventing the theft of iOS source code, we will also delve into code-security-centric technologies such as obfuscators and hardware-level encryption. Obfuscating source code can prevent rebuilders from decompiling and reverse engineering an app while it is running on a device. These methods have notable disadvantages, however, and some foresight is required to strike the delicate balance between functionality and security that these tools demand.

Decompiler Versus Obfuscator

The holy grail of the rebuilder is to decompile a running iPhone app and convert the machine code back into human-readable iOS source code. Your app, and all the valuable secrets of your business, can be downloaded from the App Store and decompiled with shocking swiftness. The counterattack of obfuscating code can delay the rebuilder’s plan, but it will also render constructive debugging by your own engineers difficult or impossible. Companies now also add debugger detectors to code, which partly prevents the use of debuggers like LLDB and GDB. But this is where the story gets seriously complicated.

Hackers inevitably attach debuggers to interface with a running application. The debugger provides an internal view of the device and enables the hacker to track the algorithms as they use memory and send API calls. GDB, for example, is used to decrypt binary files on iOS devices. Although an app downloaded from the App Store is already encrypted, a hacker can use GDB to decrypt it by simply dumping the virtual memory of the device running your app. Later we will discuss hardware-level encryption to remediate this problem, but it is offered on few devices, and those devices are costly.

An encrypted app downloaded from the App Store, Siri for example, lives on the device as Siri.app/Siri.

The executable file is in the Mach-O file format. This file type contains app-specific attributes including architecture type, linked dynamic libraries, virtual memory addresses, and more. A hacker can use the otool command to parse these attributes out of Mach-O files and produce a human-readable listing of them. The following otool command reveals the location of the running app in memory:

> otool -l Siri

Siri (architecture armv7):
Load command 5
      cmd LC_SEGMENT
  cmdsize 736
  segname __TEXT
   vmaddr 0x00001100
   vmsize 0x0002da00
Otool is the gateway to converting a running app to source code. The procedure is so common that thousands of hacker sites routinely explain how to decompile an iPhone app. The following otool command is a query which returns the encryption attributes of an app:

> otool -arch armv7 -l Siri | grep crypt

Here, grep is the familiar regex utility invoked for the search. The output from this command tells the rebuilder the precise starting hex address and size in bytes of the encrypted region where the app is loaded in memory. Knowing this, he can load the app on an unlocked device and connect GDB to fetch the decrypted binary from memory. It’s really that simple. Readers curious about the actual decompiling process will find thousands of instructional sites with the full set of commands explained in rigorous detail; many of these sites are actually SEO content for commercial hacker toolkits.
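The crypt query above prints the fields of the binary’s encryption load command, which a script can pick apart in a few lines. The sample text below mimics those fields with made-up values (a cryptid of 1 indicates the region is still encrypted; 0 means it has already been decrypted):

```python
import re

# Typical fields surfaced by `otool -l ... | grep crypt`; values fabricated.
sample_output = """\
     cryptoff 16384
    cryptsize 1882112
      cryptid 1
"""

def parse_crypt_info(text):
    """Extract the encryption-info fields: the file offset and size of
    the encrypted region, and whether encryption is active (cryptid)."""
    fields = dict(re.findall(r"(crypt\w+)\s+(\d+)", text))
    return {name: int(value) for name, value in fields.items()}

info = parse_crypt_info(sample_output)
```

These two numbers, offset and size, are exactly what the rebuilder feeds to a debugger when dumping the decrypted region out of memory.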

Enter the Obfuscator

With reverse engineering tools like class-dump, Cycript, and Clutch readily available, it is necessary to prepare countermeasures against iOS source code reengineering. A number of obfuscators have appeared recently to fill this niche. One noteworthy example is iOS Class Guard, which is designed to integrate with standard IDE platforms for Objective-C developers. Most such obfuscators seek to complicate the decompiling procedure with measures like these:

  • Control-flow alteration
  • Class, label, and method renaming
  • String encryption
  • Code virtualization
  • Debugger detection
  • Junk class insertion
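String encryption, for instance, replaces every literal in the binary with an encrypted blob that is only decoded at runtime, so `strings` and decompilers see gibberish instead of URLs and secrets. Here is a toy sketch of the idea, using a simple repeating-key XOR purely for illustration (real obfuscators use stronger ciphers and per-string keys):

```python
def xor_bytes(data: bytes, key: bytes) -> bytes:
    """XOR each byte against the repeating key; applying it twice
    round-trips, so the same routine encrypts and decrypts."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

KEY = b"\x5a\xc3\x19"  # baked-in key, purely illustrative

# At "build time" the obfuscator embeds this blob in place of the literal:
obfuscated = xor_bytes(b"https://api.example.com/v1/secret", KEY)

# At runtime the app decodes the blob just before use:
def deobfuscate(blob: bytes) -> str:
    return xor_bytes(blob, KEY).decode()
```

The weakness, of course, is that the decoding routine and key still ship inside the binary, which is why such measures delay a rebuilder rather than stop one.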

The question is, does the obfuscator make life more difficult for you or for the rebuilder? The conclusion of most computer science dissertations on the subject is clear: obfuscators cannot prevent decompilation; they can only delay the inevitable. For this reason, the consensus on securing iOS source code is that we need a broad spectrum of preventative measures including, but not limited to, code obfuscation. As a result, a variety of security tools are popping up all along the CI pipeline today. Let’s survey the other best prospects.

Security-centric Versioning and Repository Platforms

New iOS source code security platforms appear every day, and the task of staying abreast of emerging technology is bewildering. One such arena of emergent tech is the security-based versioning platform, which bundles tools that scan source code for potential security weaknesses, acting somewhat like antivirus programs. When a developer codes an API key or secret token into an app, the platform alerts a project leader and halts the continuous integration pipeline. Source code scanners are abundant; here are a few noteworthy examples operating on popular servers today:

  • Source Clear – A Node.js scanner
  • Clousseau – A GitHub repos scanner
  • Seekret – A popular BitBucket security scanner
  • Snyk – Yet another Node.js security scanner
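A scanner of this kind is, at heart, a set of patterns run against every pushed file, with a non-zero exit code to halt the pipeline on a hit. A stripped-down sketch of the approach (the two patterns below are illustrative examples of common secret shapes, not a production rule set):

```python
import re

# Illustrative patterns: an AWS-style access key id, and a generic
# quoted api_key/token assignment.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),
    re.compile(r"(?i)(api[_-]?key|token)\s*=\s*['\"][^'\"]{8,}['\"]"),
]

def scan_source(text):
    """Return (line_number, line) pairs that look like hardcoded secrets."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        if any(p.search(line) for p in SECRET_PATTERNS):
            hits.append((lineno, line.strip()))
    return hits

# In CI, a non-zero exit would halt the pipeline, e.g.:
# import sys; sys.exit(1 if scan_source(open(path).read()) else 0)
```

Commercial scanners add entropy checks, allowlists, and history scanning on top of this core, but the pipeline-halting mechanism is the same.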

ProGuard is an open source solution in the category of managing source code security in bundled and scripted builds. Of course, there are numerous subscriber platforms which are actually built on ProGuard source with a cloak and dagger stapled on for concealment. Let’s look at an outline of the important features in these platforms to see which might help thwart the next hacker attack. Security apps in this category typically secure source code via these measures:

  • Automatic source code scanning (Jenkins-like triggers)
  • Encryption of source code
  • Alerts triggered in the CI/CD pipeline
  • Enforcement of memory-safe coding
  • Encryption for data stores
  • Static code analysis

Implementing such source code security enforcement implies a mix of voluntary developer compliance and systems that enforce it. Developer training toward awareness of, and compliance with, security standards will also unify developer teams on the security theme. Developer adoption of these best practices will always be the ultimate factor, at least until true AI automatic programming becomes a reality!

A Panoply of Secure Source Code Interventions

Security features are found in a plethora of components throughout the development cycle today. Hackers leave no stone unturned as they sift byte by byte through petabytes of code. If Jenkins detects a code change and triggers the CI pipeline, then every component in that pipeline must be scripted into the bundle and build, and at every step of the way there is a nook or a cranny in which to wedge a hook or an exploit. Automation testing, for example, which often spins up a hundred virtual machines to test the UI of a web app, will often contain test suites with user credentials in cleartext files sitting on insecure staging servers and repos. But in response to every identifiable breach a new app arises to prevent it. This is a war of apps, and even the OS itself is gradually assimilating security features to keep pace.

For example, OpenBSD is an operating system which implements server-based automatic encryption. This is a great way to keep cleartext source code secure on repository servers. Scanners automatically look for scripted passwords and secret keys when code is pushed to a CI repo. Source code maintained in a private repo on OpenBSD is a useful layer of assurance against accidental exposure. OpenCVS is another open source repo solution intended to work in conjunction with OpenBSD.

The Ultimate Mystery of Protecting iOS Source Code

How do you know when your iOS source code has been stolen? That’s the question which lurks in silence and darkness. Would you guess that the machine learning platform in an expensive subscriber SaaS like Mathematica is not original code? Subscribers might be surprised to learn that they are paying a premium for an open source program built by Apache! This example serves to illustrate that we use a hodgepodge of apps every day without knowing what is under the surface. If another company gained access to your proprietary algorithms and secretly built them into another product, how would you know?

There are apps available, such as CodeMatch, which seek to reveal stolen IP in the form of algorithms. Just learning how to operate CodeMatch is a complex project in itself. And with hundreds of competitors in every niche, who has the resources to dedicate to sifting through thousands of modules looking for an app that looks suspiciously similar? The prospects are daunting, but we must remain vigilant, evaluating as many as possible of the avenues which lead to the loss of trade secrets or intellectual property in the form of iOS source code.

Today, essentially all of a company’s uniqueness must be coded and expressed in web and mobile apps. A company’s secret recipe is encoded ingeniously into its algorithms and delivered to servers, phones, and IoT devices. If a rebuilder takes an interest in your code it is usually nothing personal; hackers are most often looking for a trick to sell on the “Dark Web.” In other words, a hacker may not have any particular interest in your code other than to crack it and sell it for a profit.

And if your source code happens to unlock a channel to medical records, for example, your app will become a hot target. Here we are talking about an indirect threat to your source code, one that is several corners removed.

Eyes in Every Keyhole

Nothing else in the modern world rivals the sinewy and circuitous path a hacker will take to achieve his nefarious goals. An accounting app was hacked because it provided indirect access to medical records. That’s right: the target was medical records, not credit cards, because a hacker can use medical records to create lucrative fake insurance claims. The trouble in this scenario is that diverse hacker activities expose iOS source code along the way. It’s the ultimate opportunism: they will scoop up any data they can shake from the tree, by hook or by crook, and they will sell the source code too on the Dark Web if there is a buyer!

The point is that although encrypting source code is one of many necessary steps, we must also consider the serendipitous events which lead to the exposure of iOS source code in order to engineer a comprehensive security portfolio. The use of obfuscators is a necessary step to delay and frustrate the rebuilder’s efforts to decompile your app into source code, but code obfuscation alone is not a sufficient defense. We also need a remediation strategy in the event of actual source code loss. Only then are we a step ahead of the hackers.

