In anticipation of the Golden Master announcement of iOS 11, I’d like to explore a very exciting new framework being introduced in this release: ARKit. ARKit is a framework that enables iOS developers to easily build Augmented Reality applications. In this post, you’ll see how developers across industries are applying the new framework to their projects.

In the past few years, I haven’t been too excited or inspired by Augmented Reality (AR) technologies, though I’ve seen demos of a few enterprise use cases that were certainly interesting. My favorite used ultrasound to create a 3D model of a suspicious package, enabling bomb squads to see through packages with impressive accuracy. The majority of these were enterprise-focused and painfully practical. I had yet to seek out first-hand experience with the more consumer-oriented HoloLens and Oculus Rift until Apple’s announcement of ARKit. Suddenly I had an easy-to-use, high-level API for creating AR experiences on a platform, and in a language, with which I’m already familiar. I joined the ARKit channel in my dev community’s Slack and started to learn and explore.

I’ve seen some very interesting and inspiring results, and had a lot of fun in the process. Many technologists in our community are now delving into the third dimension for the first time, which brings a whole new layer of adventure as we learn technologies such as Unity and Blender. While many use cases may remain little more than proofs of concept until we have a hands-free glasses platform, I’m very excited to see what creative and innovative applications appear over the next year. In the meantime, I’d like to share some of what I have seen so far.

Game Engines

As we move into 3D space, game engines are really bringing scenes to life. The two most popular are Unreal Engine 4 and Unity. Unreal Engine 4 had the spotlight at WWDC with the demo below, which really starts to open your mind to what AR gaming can be. In this tabletop example, you (as the player) choose the angle from which to view the scene, which could evolve into an interesting game mechanic of its own. In contrast, this Unity project shows how a player can be “inside” the scene through the use of life-sized models. If your goal is horror or suspense, seeing your adversaries in your own home certainly has a more intimate effect.

Click for Unreal Engine Demo

The line between augmented and virtual reality blurs in projects that use a “portal” effect. There are quite a few of these out there, and the basic premise is that a door-like portal gives you a glimpse of an alternate reality. Once you pass through the portal, you’re in the alternate world, and looking back through the portal gives you a glimpse of the world you left. I would be remiss if I didn’t mention the fantastically executed recreation of A-ha’s Take On Me. Just watch it if you haven’t seen it. A few of these projects switch from augmented to virtual the moment you pass through the portal, highlighting how the technology can span from subtle augmentation to total immersion.
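
Under the hood, most of these portal scenes rely on a simple SceneKit occlusion trick. Here is a minimal sketch of that technique, not taken from any particular demo: walls that write to the depth buffer but not the color buffer let the camera feed show through, so the virtual room behind them is only visible through the doorway gap.

import SceneKit

// An invisible occluder: it renders no color, but still occludes
// anything drawn behind it, hiding the "alternate world" except
// where the doorway leaves an opening.
func makeOccludingWall(width: CGFloat, height: CGFloat) -> SCNNode {
    let wall = SCNNode(geometry: SCNPlane(width: width, height: height))
    let material = SCNMaterial()
    material.colorBufferWriteMask = []  // write depth only, no visible pixels
    material.isDoubleSided = true
    wall.geometry?.firstMaterial = material
    wall.renderingOrder = -1            // draw before the room so it masks it
    return wall
}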


Creating a scaled model

In general, the majority of projects start by having the user either manually position the object in 3D space or by detecting a flat, horizontal surface upon which to place the object. This frames the technology in a very localized, subjective manner: on my table, in my living room, etc. There are a couple projects that demonstrate how augmented reality can extend beyond what is currently visible to the user.
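Here is a minimal sketch of that typical starting point: run a session with horizontal plane detection and drop a placeholder model onto the first plane ARKit reports. The box geometry simply stands in for whatever model an app would load.

import ARKit

class PlacementViewController: UIViewController, ARSCNViewDelegate {
    let sceneView = ARSCNView()

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        view.addSubview(sceneView)
        sceneView.delegate = self

        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = .horizontal  // the only option as of iOS 11
        sceneView.session.run(configuration)
    }

    // ARKit calls this when it adds an anchor for a newly detected plane.
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard anchor is ARPlaneAnchor else { return }
        let model = SCNNode(geometry: SCNBox(width: 0.1, height: 0.1,
                                             length: 0.1, chamferRadius: 0))
        node.addChildNode(model)  // placed at the center of the detected plane
    }
}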

One of my favorite examples of this is a project aimed at creating a to-scale model of the solar system. It was thoroughly entertaining to watch the story unfold as the creator wandered around SoMa with an iPad Pro, looking for Mercury and wrestling with the speed of its orbit. In this augmented world, the miniature Earth was the size of a baseball, and somewhere 3.67 miles out there he should find Pluto.

Combining AR and GPS demonstrates how AR can assist with navigation by placing the path line and arrows physically on the route. This has potential for public transit as well: during a recent hackathon, Atlanta’s MARTA highlighted this as a valuable problem to be solved. We can all occasionally use a little help knowing which side of the train to exit, or which station exit to take, especially in foreign cities.
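
A rough sketch of one way to anchor content to a GPS coordinate follows; it is an illustration under stated assumptions, not any project’s actual implementation. It assumes the session is configured with .gravityAndHeading, which aligns ARKit’s x-axis with east and its -z-axis with true north, and it assumes `origin` is the device’s GPS fix from where the session started.

import ARKit
import CoreLocation

func scenePosition(of target: CLLocationCoordinate2D,
                   relativeTo origin: CLLocationCoordinate2D) -> SCNVector3 {
    let metersPerDegreeLatitude = 111_111.0  // rough, good enough at city scale
    let metersPerDegreeLongitude = 111_111.0 * cos(origin.latitude * .pi / 180)
    let north = (target.latitude - origin.latitude) * metersPerDegreeLatitude
    let east = (target.longitude - origin.longitude) * metersPerDegreeLongitude
    return SCNVector3(Float(east), 0, Float(-north))  // -z points north
}

let configuration = ARWorldTrackingConfiguration()
configuration.worldAlignment = .gravityAndHeading  // compass-aligned axes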

ARKit currently has minimal awareness of the real world beyond detecting flat surfaces. Yet there are a couple of projects that combine the Vision and ARKit frameworks to add some semblance of real-world awareness. This neat example uses Vision to track your thumbnail and draws a 3D line at a set height above a previously detected plane; if you switch modes, it raises the line as you move your hand. This works because at the location of the detected thumbnail there is also a detected plane that supplies the necessary depth information.
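
The bridge between the two frameworks looks roughly like the sketch below: take the 2D location Vision reports for a tracked object and hit-test it against the planes ARKit already knows about to recover a full 3D position. Note that Vision’s normalized coordinates have a bottom-left origin, so the y value is flipped before converting to view coordinates.

import ARKit
import Vision

func worldPosition(of observation: VNDetectedObjectObservation,
                   in sceneView: ARSCNView) -> SCNVector3? {
    let box = observation.boundingBox
    let screenPoint = CGPoint(x: box.midX * sceneView.bounds.width,
                              y: (1 - box.midY) * sceneView.bounds.height)
    guard let hit = sceneView.hitTest(screenPoint, types: .existingPlaneUsingExtent).first
        else { return nil }  // no detected plane behind that point
    let t = hit.worldTransform
    return SCNVector3(t.columns.3.x, t.columns.3.y, t.columns.3.z)
}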

The Limits of Real-World Awareness

Here, we begin to see the limitations of that real-world awareness. For starters, plane detection only works for horizontal planes; *vertical* plane detection appeared in the first documentation release but was subsequently removed. Let’s pretend for a moment that we have vertical plane detection and we want to create a similar app that detects faces and puts stormtrooper helmets on them. We’d first need a wall behind the people that we can detect. We’d then decide on a fixed depth relative to that wall, such as “half a meter in front of the wall,” and the people would have to stand exactly half a meter from the wall. This is because there is no high-level API for detecting the depth of objects found with Vision. In the WWDC talk ‘Capturing Depth in iPhone Photography’ we learn that the iPhone 7 Plus’s dual camera can capture this depth information. Unfortunately, these concepts have not yet been merged for easy consumption in the iOS frameworks.
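
To make the gap concrete, here is a toy illustration of what the missing depth API forces on us. Vision can find a face in 2D, but the app has to guess its distance; the guess below is hard-coded at half a meter in front of the camera. For brevity this sketch ignores the face’s offset from the screen center, and a sphere stands in for the hypothetical helmet model.

import ARKit
import Vision

func placeHelmet(for face: VNFaceObservation, in sceneView: ARSCNView) {
    guard let camera = sceneView.session.currentFrame?.camera else { return }
    let assumedDepth: Float = 0.5                 // the fixed-depth guess
    var translation = matrix_identity_float4x4
    translation.columns.3.z = -assumedDepth       // forward along the camera's view
    let helmet = SCNNode(geometry: SCNSphere(radius: 0.15))
    helmet.simdTransform = camera.transform * translation
    sceneView.scene.rootNode.addChildNode(helmet)
}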

There was an interesting class name leaked from Apple’s HomePod firmware: ARFaceAnchor. Anchors are currently simple points in 3D space: if you place an object on a table and then move the table, for example, the point will not move with it. A face, on the other hand, usually does not remain still. My hope is that this will be the beginning of a merging of AR and Vision tracking features, where an anchor can be a point on a real-world object that is tracked and the anchor moves with the object. The next few months will certainly be interesting to follow. In the meantime, I’ll be looking out for an opportunity to try a HoloLens.

Jon Day

Software Engineer at Stable Kernel
