Nexus Interactive Arts, the immersive media division of VFX production studio Nexus Studios, has used Apple’s ARKit running on an iPhone 7 in an experiment that creates basic inside-out positional tracking and pass-through AR for a Google Cardboard headset.

Announced last month at Apple’s annual Worldwide Developer Conference (WWDC), ARKit is an iOS 11 toolkit that lets developers create AR applications using the device’s computer vision capabilities. With ARKit, iOS 11 devices can map surfaces in real time and let users superimpose digital objects onto the physical world, replete with interactive animations and dynamic lighting.

Using ARKit, the team reports their inside-out positional tracking solution clocks in “at around 60 frames per second,” right around mobile VR’s current target framerate. This, the team says, suggests Apple has laid the foundation for a cheap but reliable positional tracking solution for mobile VR headsets.

In the video, they demonstrate inside-out positional tracking for VR and pass-through AR by touring a conceptual ‘art museum’ in a park. When in VR, walking close to a boundary like a tree causes a point cloud to materialize inside the otherwise closed-off experience, essentially acting as a guardian system to keep you from bumping into things as you explore the infinite (or sufficiently large) tracking volume afforded by the device’s machine vision. In the AR demonstration, the digital skybox is lifted to reveal digital scenery affixed to the park’s trees and landscape.
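The guardian-style behavior described above boils down to a simple rule: fade in point-cloud vertices as the user’s head approaches them, so real-world obstacles become visible before contact. A minimal sketch in Python of that fade logic (the thresholds, function names, and linear fade are illustrative assumptions, not Nexus’s actual implementation):

```python
import math

# Hypothetical guardian-style fade: points from the tracked point cloud
# become visible as the user's head gets close to a real-world boundary.
FADE_START = 1.0  # meters: distance at which points begin to appear
FADE_FULL = 0.4   # meters: distance at which points are fully opaque

def point_opacity(head_pos, point):
    """Opacity in [0, 1] for one point-cloud vertex, given the head position."""
    dist = math.dist(head_pos, point)
    if dist >= FADE_START:
        return 0.0
    if dist <= FADE_FULL:
        return 1.0
    # Linear fade between the two thresholds.
    return (FADE_START - dist) / (FADE_START - FADE_FULL)

def visible_points(head_pos, cloud):
    """Return (point, opacity) pairs the renderer should draw this frame."""
    drawn = []
    for p in cloud:
        alpha = point_opacity(head_pos, p)
        if alpha > 0.0:
            drawn.append((p, alpha))
    return drawn
```

A real implementation would run this per frame against the pose and feature points the tracker reports, but the fade-by-distance idea is the same.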


The AR headset capabilities presented in the video, while an impressive use of ARKit, are less practical here because the iPhone 7’s single rear-mounted camera can’t provide stereoscopic vision. The developers aren’t pitching this as a finished AR headset solution, however, but rather showing the versatility of ARKit itself.

Giving developers free rein to create AR applications (and, thanks to this experiment, free-roaming VR experiences) puts Apple back into the competition despite its lack of a discrete AR/VR headset.



  • pdnellius

    This is too cool. Really wish they’d share the source to play with it.

    • Firestorm185

      Yeah, I’m interested in seeing when an Apple-branded MR HMD kit comes out with some of this tech in it (upgrade tech, obviously).

  • beestee

    The Google Cardboard Camera app can do some basic quasi-3D re-projection from a mono RGB source through photogrammetry…a lot of the number crunching for that has to happen in the cloud though, so there would have to be a calibration phase for the space.

    • Lucidfeuer

      Photogrammetric VSLAM reconstruction is so cheap and overkill though. AR should just be about tracking objects in the most efficient way for 3D overlay/positioning, while external cameras should just do the pass-through environment screening.

      In fact that’s the reason why Tango has been stalling: the idea that you have to reconstruct your real-world environment as a cheap 3D point cloud while dual-camera set-ups are already capturing it is stupid. That’s why they just killed Google and Oculus on the AR front with ARKit: they used VSLAM tracking very efficiently without the need for useless processes (as long as we don’t have camera see-through environment screening in the headset, which is another complicated challenge of perspective adjustment), while Tango is nowhere to be seen and Oculus vaporwared 13th Lab’s tech, which already worked great 3 years ago.

      It’s actually funny that those tech companies aligned themselves on technology-retention marketing strategies without providing sufficient incentives for their respective markets, because even though Apple is a champion of that game, it only took them a straightforward implementation of modern AR tracking to burn their opponents’ years-old competitive advantage.

      • I totally agree with you, Lucidfeuer; it’s something Occipital figured out in 2015. They have gone a little further and allowed for positional tracking with prerecorded meshes of the environment, similar to HoloLens. In my opinion this provides the best of both worlds and still allows occlusion and the much better tracking required for viewing with face or eyeglass viewers, which magnify the image as well as the motion and jitter.

  • Well, nothing new. ARKit is the foundation of what will be Apple’s AR glasses. And with Vuforia it was already possible to do AR inside Cardboard headsets (so with positional tracking).

  • A couple of years ago I modified a dual-camera shader to work with an early version of Occipital’s inside-out tracking plug-in for Unity and created “OR Surgery” for the iPhone 6 back in 2015. Three weeks ago I used it with ARKit (see image), which worked, but I felt the instability in tracking, which is magnified in a headset, doesn’t come close to HoloLens or even Occipital’s Bridge Engine for iOS. I am hoping that the horizontal dual-camera layout in the iPhone 8 will help with tracking, but personally I can’t wait to test out Occipital’s CORE sensor (an improved dual-camera Structure Sensor for OEMs), which I will build into the NEODioPLAY “Stealth” along with their refined Bridge Engine SDK.

    Not saying that ARKit is not great, and the fact it is easy to work with and already implemented in Unity and Unreal says a lot about Apple’s clout, but from a mixed reality standpoint it will still need dedicated hardware for rock-steady tracking.

    Oh here is the latest link to a test of mine baking in lighting, AO, and shadows to create a much more realistic model for viewing in ARKit. https://youtu.be/lrYnogOydjo

    https://uploads.disquscdn.com/images/9b4b6ef872144ec1ae78e0443f4fef0d553158c555c3fa65f3eb0929a24333ca.png