The iPhone XS was released on Friday, and I believe the most significant change from the iPhone X is not the new gold color or even the new screen size. It is the new A12 Bionic system on a chip, and specifically the next-generation Neural Engine that is a key component of that chip. The Neural Engine is an 8-core processor dedicated to performing machine learning inference quickly and efficiently. Apple continues to emphasize on-device machine learning, and I believe this new chip shows consumers that this is the future of personal computing.
Improved User Experience
Most users will notice the impact of this new chip almost immediately in iOS 12 features. Shooting photos and video on the new iPhone XS uses several neural networks to create better images and lets users adjust the simulated depth of field on Portrait mode photos even after they are taken. Face ID also takes full advantage of the improved speed to unlock your phone even faster. In addition, system features across the board are faster and more energy efficient than before, from QuickType suggestions on the keyboard to True Tone display adjustments.
Developers can take full advantage of this remarkable advancement simply by using Core ML. Because Core ML is optimized for the latest Apple hardware, developers using it for on-device machine learning get up to 9x faster inference with a tenth of the energy usage on the new iPhones, without rewriting any code. This also allows developers to use more complex models for improved accuracy and precision.
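To illustrate how little code this takes, here is a minimal sketch of running image classification through Core ML via the Vision framework. The `FlowerClassifier` model class is a hypothetical placeholder (Xcode generates a wrapper like this automatically when you add any `.mlmodel` file to a project); the same code runs unchanged on older devices and simply gets faster on the A12's Neural Engine.

```swift
import CoreML
import Vision

// "FlowerClassifier" is a hypothetical Xcode-generated model wrapper,
// used here only as a placeholder for whatever .mlmodel you ship.
func classify(image: CGImage) {
    guard let model = try? VNCoreMLModel(for: FlowerClassifier().model) else { return }

    // Vision handles resizing and converting the image to the
    // model's expected input format before running inference.
    let request = VNCoreMLRequest(model: model) { request, _ in
        guard let results = request.results as? [VNClassificationObservation],
              let top = results.first else { return }
        print("\(top.identifier): \(top.confidence)")
    }

    let handler = VNImageRequestHandler(cgImage: image)
    try? handler.perform([request])
}
```

Note that nothing here mentions the Neural Engine at all; Core ML decides at runtime whether to dispatch work to the Neural Engine, GPU, or CPU, which is why existing apps benefit without modification.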
Speaking of machine learning models, macOS 10.14 Mojave is now available with support for Create ML. Create ML is a new tool for training machine learning models, and I recommend checking it out. If you prefer another tool like Google's TensorFlow, it is easy to convert models to the Core ML format, and if you would rather download already-trained models, there are resources for that as well.
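As a sense of how simple Create ML makes training, here is a sketch of training an image classifier in a macOS playground. The directory paths and the "cats"/"dogs" label folders are hypothetical; Create ML infers the class labels from the subfolder names.

```swift
import CreateML
import Foundation

// Expects a directory laid out as labeled subfolders, e.g.
// TrainingImages/cats/*.jpg and TrainingImages/dogs/*.jpg
// (paths and labels here are placeholders).
let trainingDir = URL(fileURLWithPath: "/path/to/TrainingImages")
let classifier = try MLImageClassifier(
    trainingData: .labeledDirectories(at: trainingDir)
)

// Check accuracy on a held-out set, then export a .mlmodel
// file ready to drop into an Xcode project.
let testDir = URL(fileURLWithPath: "/path/to/TestImages")
let evaluation = classifier.evaluation(on: .labeledDirectories(at: testDir))
print(evaluation)

try classifier.write(to: URL(fileURLWithPath: "/path/to/ImageClassifier.mlmodel"))
```

Because Create ML uses transfer learning on top of a model already built into the OS, training like this can finish in minutes on a Mac, and the exported model file is often tiny.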
On-Device Machine Learning
Companies looking to add machine learning features to their apps can increasingly use on-device machine learning instead of processing data on a server and sending the results back to devices. This approach not only lets machine learning features run faster, but also lets them work offline. Perhaps most importantly, because users' data no longer needs to be sent to a server for processing, this approach is much better for security and privacy.
Apple’s new 7-nanometer A12 Bionic system on a chip, with its next-generation Neural Engine, is the key to understanding Apple’s plans for the future of computing. The jump from 600 billion operations per second in the A11’s Neural Engine to 5 trillion operations per second in the A12’s is a remarkable year-over-year improvement. This hardware will not only enable faster and more secure user experiences in the next generation of features and apps, but also make possible new hardware like augmented reality glasses.
I’m really excited, both as a developer who can build faster and more powerful on-device machine learning features today, and as a user anticipating the future hardware that faster dedicated machine learning chips will enable.