Magic Leap, the mysterious AR startup with a multi-billion-dollar valuation, still doesn’t have a headset to show the world, but a recent paper published by Magic Leap researchers, entitled Toward Geometric Deep SLAM, gives us a peek into a novel machine vision technique that aims to bring the company closer to its goal of creating a robust standalone AR headset.

Authored by Magic Leap researchers Daniel DeTone, Tomasz Malisiewicz, and Andrew Rabinovich, the paper describes a tracking system powered by two deep convolutional neural networks (CNNs)—a type of artificial ‘brain’ used for image processing. Called MagicPoint and MagicWarp, the two CNNs, the researchers contend, allow for a system that’s “fast and lean, easily running 30+ FPS on a single CPU.”

Here’s the quick and dirty: according to the paper, MagicPoint operates on single images and detects 2D points that are useful for tracking, with those points destined to be fed into a visual simultaneous localization and mapping (SLAM) algorithm. Comparing their network to classical point detectors, the team found “a significant performance gap in the presence of image noise.”

Image courtesy Magic Leap

Because calculating the shape of objects as they move isn’t an easy task—either the object or the viewer could be moving—MagicWarp’s job is to take a pair of images containing the 2D points generated by MagicPoint and essentially predict motion as it models the world around it. MagicWarp differs from traditional SLAM approaches because it uses only each point’s location, not the more complicated ‘local point descriptors’—computer vision jargon for compact signatures that uniquely identify each point.
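To get a feel for the location-only matching idea, here’s a minimal sketch—entirely our own illustration, not code from the paper—that pairs detected points across two frames by position alone and estimates inter-frame motion from the displacements:

```python
# Hypothetical sketch of location-only point matching: pair 2D points
# across two frames by nearest position (no descriptors), then estimate
# inter-frame motion as a robust median displacement.
import numpy as np

def match_by_location(pts_a, pts_b, max_dist=20.0):
    """Greedily pair each point in frame A with its nearest point in frame B."""
    matches = []
    for i, p in enumerate(pts_a):
        d = np.linalg.norm(pts_b - p, axis=1)   # distance to every B point
        j = int(np.argmin(d))
        if d[j] <= max_dist:                    # reject implausibly far pairs
            matches.append((i, j))
    return matches

def estimate_translation(pts_a, pts_b, matches):
    """Estimate inter-frame motion as the median displacement of matched pairs."""
    disp = np.array([pts_b[j] - pts_a[i] for i, j in matches])
    return np.median(disp, axis=0)
```

The actual MagicWarp network learns a homography between point images rather than computing a simple translation, but the sketch shows why dropping descriptors is attractive: matching on position alone is cheap.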


Tested using real and synthetic data, the two convolutional neural networks are said to be capable of running in real time. “We believe that the day of massive-scale deployment of Deep-Learning powered SLAM systems is not far,” the authors conclude.

If your brain isn’t already spinning, check out the full paper here.

So while we don’t have a clear idea of exactly when Magic Leap will have a public prototype of its light field display-equipped headset—which CEO Rony Abovitz teases as “small, mobile, powerful and pretty cool”—we’ll take anything we can get after more than three years of waiting. Anything but their post-cool, pre-factual marketing campaign, that is.


This article may contain affiliate links. If you click an affiliate link and buy a product we may receive a small commission which helps support the publication. See here for more information.


  • VR Geek

    Very likely Magic Leap will only ever be just another AR HMD option in a sea of similar devices.

    • At this point that’s their best hope

    • Nick Cannon

      I’m ready to swim

    • NooYawker

      I’m sure many years ago the stuff they were working on was revolutionary. AR was unknown. Today, they’re just another company working on AR.

    • Xilence

      But how would investors feel about their money being burned? It has to be the king, it has to be the iPhone of the market. Otherwise, we can write it off as a failure.

  • Get Schwifty!

    Really tired now of hearing about these guys while nothing substantial is released… if they truly don’t have anything, it worries me that it might have a halo effect against investors in AR/VR/MR for other, more legitimate projects. They always say the worst crooks don’t carry a gun, they just wear suits…

    • And they are all in the swamp. The biggest swamp happens to be located somewhere very popular as a moat full of alligators, snakes and crocodiles :)

      All guarded by men in suits.

      But there are swamps everywhere.

  • NooYawker

    This feels like the longest, biggest Kickstarter campaign.

    • Lucidfeuer

      Except with billions of “unwarranted” investments, even for an AR HMD…

  • psuedonymous

    “…a system that’s ‘fast and lean, easily running 30+ FPS on a single CPU.’” Compared to SLAM running on CPUs of a decade ago, that’s pretty dang slow. On CPUs of today, it’s hilariously glacial. Even worse, the paper cites a mere 320×240 image as taking 150 ms to process! They also have only shown performance data for their system working on synthetic images, rather than on real-world camera data.

    • NooYawker

      Maybe they should continue to be secretive. The info coming out is pretty horrendous.

    • kalqlate

      The difference being that Magic Leap’s method of SLAM is not running any of the popular traditional SLAM algorithms; it’s running two deep (many-layered) convolutional neural networks. If you know anything about deep neural networks and machine learning (in this case, deep learning), you’ll know that a well-architected, well-trained deep neural network has the ability to outperform any traditional algorithm in terms of accuracy and generalization. That it runs slower than a traditional algorithm on older hardware should be expected… for now. Tomorrow’s technology will surely correct that, and the future belongs to machine learning, not traditional, human-crafted algorithms.

      • chuan_l

        Yes , but are they making a headset —
        For next year or 10 years time ? The performance here wouldn’t work for real time spatial tracking. Feels more like a response to VPS and publishing this paper makes no sense. Puff , puff , and pass !

        • kalqlate

          You’re confused. The environment-registering SLAM doesn’t have to run at the frame rate of the visuals to the eye. 30 FPS for PROBABILISTIC environment SLAM is more than enough, even for quick head movements. Here… read this paper to see how a different neural network architecture is giving accurate real-time results with as little as 6 to 10 FPS: https://www.isprs-ann-photogramm-remote-sens-spatial-inf-sci.net/II-3-W2/31/2013/isprsannals-II-3-W2-31-2013.pdf . The key in this and the Magic Leap case is that the algorithms are generalizing and probabilistic. Just like human vision processing, we’re not tracking landmarks 100% of the time. Throughout our lifetime of experience, our brain neural nets become experts at probabilistic prediction. Sure, our vision systems make mistakes every now and then, but extremely rarely.
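          To make that concrete, here’s a toy sketch (my own, not from either paper) of how a tracker can update visuals far faster than SLAM corrections arrive: pose is extrapolated with a constant-velocity motion model and nudged whenever a low-rate measurement lands.

          ```python
          # Toy illustration: high-rate pose prediction between low-rate
          # SLAM corrections, using a 1-D constant-velocity motion model.
          def predict(pose, velocity, dt):
              # Extrapolate the pose forward in time between SLAM updates.
              return pose + velocity * dt

          def correct(predicted, measured, gain=0.5):
              # Blend the prediction toward a (low-rate) SLAM measurement;
              # gain controls how much the measurement is trusted.
              return predicted + gain * (measured - predicted)
          ```

          A real system would do this in 6-DoF with a proper probabilistic filter, but the principle is the same: prediction fills the gaps between corrections.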

          • chuan_l

            No it doesn’t , but it’s still 320 × 240 at 6 FPS —
            For spatial mapping and object detection , and that’s when you have either a ” plane , sphere or cube ” as the inputs. Compute will get better but my point is it’s stuff that probably won’t be in the actual Magic Leap next year ? Thanks for the link.

          • chuan_l

            Large scale direct SLAM with stereo cameras —
            https://www.youtube.com/watch?v=oJt3Ln8H03s

      • David

        It depends on whether the network gathers new training data as it goes, otherwise it might make more sense to put this onto a neural network on a chip, as hardware. I know at least one company is working on that avenue (I don’t recall which one though) and the result would be a low-power neural network that they might be able to integrate into a headset, taking the load off the GPU so that it can perform more important tasks.

  • John Leonard

    So all the investment they are getting is for R&D and to produce a white paper?
    hmmmmmm. It seems to me they should produce the backend technology and work with VR partners who already have working hardware to implement their technology.

  • Lucidfeuer

    I fail to see the significant upgrade this brings to SLAM, since pairing with real-world IR tracking points already exists. Also, using a “real-time CNN” for that…? Seems like a stretch for an AR HMD, even if paired with a dedicated AI chip like the HoloLens 2’s.

    I’m more interested in seeing what they’re bringing on the light field glasses front, since that’s the actual impractical setback of AR glasses. If Magic Leap doesn’t differentiate and actually “leap” over the other existing glasses/HMDs, then this wouldn’t even get the benefit of having been an actual tech company instead of a tech money laundering scheme.

  • Jefferson roads

    You guys clearly have no idea of the importance of this research because you don’t even mention what SLAM is! SLAM is simultaneous localization and mapping, which is extremely hard to accomplish computationally because of the amount of data one needs to store to simultaneously map an unknown environment and give feedback about your location in that environment. Coming from the autonomous-vehicle community, the ability to do this processing on a single-core processor is amazing! Imagine trying to do real-time image processing on a Raspberry Pi! That’s why Nvidia released the TX1 with enhanced capability to process image data. The true significance of this research is that the crucial process of generating a map of an unknown environment, placing objects in that environment, and remembering the location of those objects based on visual features of the environment can now all run on a headset.

    • Xilence

      I don’t very much like Nvidia for various reasons, but I am very happy that they are working hard to make products that help the VR/AR industry alongside the AI industry. The AI part is more about smart cars, which is quite amazing.

      • Harmen

        This is not about Nvidia bashing; it’s a 100- or 1,000- (10,000?-) fold decrease in processing requirements for inside-out positional tracking.

        This can also run on a dedicated network chip, which further decreases power consumption by a factor of 100.

  • Kevin Williams

    I am so concerned that the more they play “guess what we have” the more they are damaging the AR market.

    I am sorry, but I see them as Snake-oil merchants and until they show something “better than Hololens can do now” – they look like a scam*.

    *=personal view of KW and not company position

  • brubble

    … company claims they will revolutionize transportation!! (sounds incredibly bullshitted out on paper)… unveils a skateboard.
    Pfft, Magic Leap… sure is.

  • Xilence

    Billions of dollars later, I expect great things. No really, I do.

  • Pixel VR

    How come nobody is talking about how they made this public after buying DACUDA (dacuda.com) two months ago—a company that has been working on SLAM for the last two years? DACUDA’s tech works, and now they get to say they came out with it. How convenient!

    https://techcrunch.com/2017/02/18/confirmed-magic-leap-acquires-3d-division-of-dacuda-in-zurich/