This year, I had the opportunity to attend WWDC 2017, and it was quite possibly one of the most exciting years Apple has had in a while. Apple updated all of its software platforms and gave us some cool new frameworks and hardware. As a developer, I am especially excited about the Core ML and ARKit frameworks.

Core ML is Awesome

Core ML is all about making it easy for developers to incorporate on-device machine-learning inference into their apps. Google recently announced a version of TensorFlow called TensorFlow Lite, coming to Android O, that similarly targets on-device inference, but only for TensorFlow models. I am most excited about the potential of the Core ML .mlmodel file format as an open standard for exchanging trained machine-learning models. Apple released a Python package called coremltools to convert existing models from Keras, Caffe, scikit-learn, libsvm, and XGBoost to the .mlmodel format. Once you have a .mlmodel file representing your trained model, you can just drop it into Xcode 9 to generate an interface that takes the correct inputs and produces the correct outputs. The interface even supports recurrent neural networks via a state parameter that can be passed from iteration to iteration.
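To give a sense of how little code is involved, here's a minimal sketch of using one of those generated interfaces, assuming a hypothetical text classifier called SentimentClassifier (the class name and its text and label properties come from the model itself, not from Core ML):

```swift
import CoreML

// SentimentClassifier is a hypothetical class that Xcode 9 would generate from
// a dropped-in SentimentClassifier.mlmodel; the `text` input and `label` output
// depend entirely on how the model was authored.
let model = SentimentClassifier()
if let output = try? model.prediction(text: "WWDC was great this year") {
    print(output.label)
}
```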

Related: What is a recurrent neural network?

Built on top of Core ML is Vision, a computer vision framework for iOS, macOS, and tvOS that contains built-in models for many common computer vision tasks like facial landmark detection, object tracking, text and rectangle detection, image alignment, and classification. Vision works with both single images and sequences of images, and it makes it very easy to plug in your own Core ML models for custom classification or other arbitrary computer vision tasks.
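As a rough sketch of how that plugs together, here's what classifying a single image with your own model looks like, assuming a hypothetical generated model class called FlowerClassifier:

```swift
import Vision
import CoreML
import CoreGraphics

// A sketch: classify a single CGImage with a hypothetical FlowerClassifier model.
func classify(_ image: CGImage) throws {
    // Wrap the Core ML model so Vision can drive it.
    let visionModel = try VNCoreMLModel(for: FlowerClassifier().model)

    // Ask Vision to run the model and hand back classification observations.
    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        if let best = (request.results as? [VNClassificationObservation])?.first {
            print("\(best.identifier): \(best.confidence)")
        }
    }

    // Perform the request against the image.
    try VNImageRequestHandler(cgImage: image, options: [:]).perform([request])
}
```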

ARKit is Cool

When Apple releases iOS 11 this fall, iOS will suddenly become the largest augmented reality platform in the world. ARKit lets developers easily create augmented reality experiences that work with real-world surfaces and lighting, and it integrates with SceneKit, SpriteKit, Metal 2, Unity, and Unreal Engine. The magic of ARKit is based on a technique called visual-inertial odometry, which combines camera tracking with motion data from the accelerometer and gyroscope. ARKit processes this data to adjust the position and orientation of the virtual camera in real time, creating the illusion that a virtual object has a fixed size and position in the world. The illusion is further enhanced by surface and lighting detection, which lets your digital content interact naturally with a surface and use real-world light sources to light your digital scene.
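Getting a basic session running takes only a few lines. Here's a minimal sketch using the shipping iOS 11 API names (which may differ slightly from the WWDC betas), assuming an ARSCNView wired up in a storyboard:

```swift
import UIKit
import ARKit

class ARViewController: UIViewController {
    @IBOutlet var sceneView: ARSCNView!  // assumed to be connected in a storyboard

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        // Configure world tracking with horizontal plane detection
        // and start the session.
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = .horizontal
        sceneView.session.run(configuration)
    }
}
```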

The most impressive thing about ARKit is that it uses only a fraction of an A9 or A10 CPU, leaving developers full access to the resources they need to layer complex graphics or computer vision on top of the augmented reality interface. ARKit isn't necessarily making anything possible that wasn't before, but it dramatically simplifies some very complex tasks and packages them so developers can easily integrate them into their apps.

Tons of new hardware

There was more hardware at this WWDC than at any other one I can remember. While I have decided to wait to upgrade my 15-inch MacBook Pro until I can get 32 GB of RAM, the fact that Apple updated its CPUs a mere 221 days after releasing the Touch Bar is a very positive sign, and I sincerely hope the rapid pace of updates continues. In addition to a CPU update for the MacBook Pro, the iMac was given a substantial update, the iMac Pro was announced, and even the MacBook Air got a few extra megahertz.

Related: An iOS Dev’s Reaction to the Macbook Pro TouchBar

The new iPad Pros, along with the iPad-specific features in iOS 11, show that Apple is really committed to making the iPad a powerful computing platform. In my experience, the new dock is a much better multitasking experience than the previous implementation. With the new dock, I always know where my most-used apps are, and there's even a space on the right for recently used apps that aren't already in it.

Finally, the HomePod looks like an interesting product. I will reserve final judgment until we get closer to the release, but I can imagine how a smart speaker that sounds awesome could be compelling. One thing that is still unclear is which Siri domains will be available on the device. If Apple doesn't add some kind of SiriKit integration to HomePod, or open it up to developers in some other way, it will never be able to really compete with Alexa or Google Home in the number and variety of tasks it can perform.

Not Enough Siri

The only real disappointment of this WWDC was the small number of new SiriKit intents and the absence of any framework for incorporating arbitrary new skills into Siri. There are a few new intents for SiriKit, like lists, notes, reminders, and QR codes, but I was expecting things like music, podcasts, news, commerce, or recipes. One of Apple's biggest strengths as a company is its developer ecosystem, and by not taking full advantage of it for Siri, Apple is unnecessarily putting itself at a competitive disadvantage.
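To be fair, the intents that did ship are pleasantly simple to adopt. A minimal sketch of a notes-domain handler in an Intents extension might look like this (the class name is hypothetical, and persistence is omitted):

```swift
import Intents

// Hypothetical handler for the new notes domain in an Intents extension.
class CreateNoteHandler: NSObject, INCreateNoteIntentHandling {
    func handle(intent: INCreateNoteIntent,
                completion: @escaping (INCreateNoteIntentResponse) -> Void) {
        // Save intent.title / intent.content into the app's note store here.
        completion(INCreateNoteIntentResponse(code: .success, userActivity: nil))
    }
}
```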

Overall, I think this was the best WWDC in years. Not only did I get some cool pins, but Apple showed it is really focused on new software technologies like machine learning and augmented reality, and on new professional hardware like the iPad Pro and iMac Pro.
