ARKit Makes Augmented Reality Shine on iPhone

ARKit at WWDC 2017

At long last – and as expected for this year’s WWDC – Apple has made its entry into the world of augmented reality (AR). With what it’s calling the “world’s largest AR platform,” Apple thinks it can take over the world of mobile AR, and wants to do so with its new ARKit.

ARKit is a collection of tools for developers who want to implement AR features in their apps. The first question to ask is why this is a good idea. Not all apps need AR; what’s the point of a note-taking app ‘flinging’ a shared to-do list across the room? That’s a lot of code for little gain – and zero added functionality.

Nonetheless, developers who could prove good stewards for AR are lining up for ARKit. There's a lot of complexity to it, but three main concepts to keep in mind.

The first is a doozy, and powers ARKit almost entirely. Visual Inertial Odometry (VIO) is basically what puts the “AR” in ARKit. It’s a complex series of operations that maps both your surroundings and planes that may be suitable for virtual object interaction – while also keeping track of where those objects are in relation to your device and the space you’re in. Here’s how Apple describes it:

ARKit uses Visual Inertial Odometry (VIO) to accurately track the world around it. VIO fuses camera sensor data with CoreMotion data. These two inputs allow the device to sense how it moves within a room with a high degree of accuracy, and without any additional calibration.
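Getting VIO-backed world tracking running takes very little code. Here's a minimal sketch of starting a tracking session in a view controller backed by an ARSCNView (the view controller setup and outlet are assumptions for illustration):

```swift
import ARKit
import UIKit

class ARViewController: UIViewController {
    @IBOutlet var sceneView: ARSCNView!

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        // World tracking is where VIO happens: ARKit fuses camera
        // frames with CoreMotion data to track the device in 3D space.
        let configuration = ARWorldTrackingConfiguration()
        // Ask ARKit to also look for flat, horizontal surfaces.
        configuration.planeDetection = .horizontal
        sceneView.session.run(configuration)
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        sceneView.session.pause()
    }
}
```

Note that no calibration step appears anywhere; sensor fusion is entirely ARKit's job once `run(_:)` is called.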

And here’s a tweet with a GIF, because it’s hard to grasp this if you don’t see it:


Let’s examine what’s going on in that GIF. As you can tell, ARKit is using camera data to ‘see’ the scene; each captured image, along with its tracking information, is delivered as an ARFrame. ARKit knows the camera’s position because each frame’s ARCamera describes the device’s position and orientation relative to the scene. The lines representing an object at the center of the table could be anything, like a vase, and they keep their place relative to the camera by utilizing the ARAnchor class.

The ‘vase’ is anchored to the table, which ARKit knows is a flat surface because of the ARPlaneAnchor class. Thanks to these anchors, the ‘vase’ is stuck to the table, just as it would be if it were a real object you could touch; the table is a solid object ARKit knows it can anchor to, and the ‘vase’ can be any sprite you like.
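In code, that anchoring falls out of plane detection almost automatically. A hedged sketch, assuming the view controller adopts ARSCNViewDelegate and the session was run with plane detection enabled (the sphere here is a stand-in for whatever ‘vase’ geometry you'd actually use):

```swift
import ARKit
import SceneKit

// ARKit calls this when it detects a new surface and creates an
// ARPlaneAnchor for it; `node` tracks that anchor in the scene.
func renderer(_ renderer: SCNSceneRenderer,
              didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let planeAnchor = anchor as? ARPlaneAnchor else { return }

    // Stand-in 'vase': a 5 cm sphere placed at the center of the
    // detected plane. Because it's a child of the anchor's node,
    // it stays stuck to the table as tracking refines the plane.
    let vase = SCNNode(geometry: SCNSphere(radius: 0.05))
    vase.position = SCNVector3(planeAnchor.center.x,
                               0,
                               planeAnchor.center.z)
    node.addChildNode(vase)
}
```

The key design point is that you never reposition the ‘vase’ yourself; ARKit updates the anchor, and the scene graph carries your content along with it.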

Now if the lines really were a vase, lighting would play a role. ARKit has what Apple calls ‘lighting estimation,’ which will “estimate the total amount of light available in a scene and applies the correct amount of lighting to virtual objects.” Placing a white vase in a dimly lit environment would make the object appear greyer, just as it would in real life under low light.

That’s done via a single class, ARLightEstimate. The details tell us why Apple put ‘estimate’ in the name: the estimate is “associated with a captured video frame in an AR session,” so if lighting conditions change, your scene may not immediately reflect those changes. It does expose the scene’s overall luminosity via an ambientIntensity property, though.
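Reading the estimate is a per-frame affair. A minimal sketch, assuming an ARSessionDelegate and an ARSCNView named `sceneView` (light estimation is on by default in a world-tracking configuration):

```swift
import ARKit

// ARSessionDelegate callback, fired for each new camera frame.
func session(_ session: ARSession, didUpdate frame: ARFrame) {
    // lightEstimate is nil if light estimation is disabled.
    guard let estimate = frame.lightEstimate else { return }

    // ambientIntensity is in lumens; roughly 1000 corresponds to
    // a well-lit scene. Scale the scene's lighting to match, so a
    // white 'vase' dims along with the room.
    sceneView.scene.lightingEnvironment.intensity =
        estimate.ambientIntensity / 1000.0
}
```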

Interestingly enough, much of the heavy lifting for ARKit is offloaded onto the GPU. Though it needs an A9 or A10 SoC to function, ARKit uses Metal and SceneKit to do a lot of the detailed work. Apple says Unity and Unreal Engine will also work with ARKit.

Apple showed off a still-unreleased IKEA app using ARKit that helps you place furniture in your home’s space (the better to see if you really want those pieces). Previews of an updated “Pokémon GO” that leverages ARKit show just how much better AR can be with the platform working under the hood. LEGO even showed off a neat app that provides exploded views of models so you can see what goes where (something like that could be transformative for the company if it wants to move away from printed instructions in the future).

But ARKit is nascent, and we still haven’t seen its full utility beyond a few demo apps. It’s also quite limited. Relying on the iPhone’s camera(s) is a smart move, but those cameras may not have the power to track multiple flat planes in a large room, for instance. All of Apple’s demos include limited space, like a table or corner of a room. That’s fine for gaming, but you likely won’t be able to see what redesigning your entire home with IKEA furniture might look like.
