New Magic Leap Video Shows Explosive and Tantalising Concept Gameplay


We reported recently that the mysterious, massively funded and expertly hyped Magic Leap suffered a minor PR setback when the company cancelled two key public talks, TED among them. Well, the company has now released a video that was set to be shared at the event, one that aims to ramp up the hype further still.

Magic Leap recently cancelled two public ‘appearances’ that were to be given by two executives: CEO Rony Abovitz and long-time game designer Graeme Devine, now the company’s Chief Creative Officer.

Along with a Reddit AMA, the execs had also planned a TED talk on the secretive technology. The company seems to have responded to the reaction these cancellations provoked in the press and the community – that the company was again retreating to stealth mode – as it has released the video it was due to share during its TED talk.

The video represents the closest we’ve yet come to understanding the kinds of experiences Magic Leap plans to deliver with the tech. In this case, it starts out with some intriguing AR user interface ideas before escalating quickly into a full-scale robot war, complete with AR weaponry.

The video looks to be rendered as a concept presentation, which is to be expected; lightfield-based augmented reality experiences would be tough to capture directly. That said, the company, now famous for its masterful skills at hype generation, states in the video’s description on YouTube:

Unfortunately, we couldn’t make it to TED, but we wanted to share one of the things that we’d planned to share at the talk. This is a game we’re playing around the office right now (no robots were harmed in the making of this video).

Also interesting is the prominence of the Weta logo; presumably ‘The Hobbit’ SFX house had more than a little to do with the visuals on show here. The question is: how closely does this video mirror the kind of experience that Magic Leap can actually deliver once the technology is finally shown to the public? As ever, we’ll have to wait and see.



Based in the UK, Paul has been immersed in interactive entertainment for the best part of 27 years and has followed advances in gaming with a passionate fervour. His obsession with graphical fidelity over the years has had him branded a ‘graphics whore’ (which he views as the highest compliment) more than once and he holds a particular candle for the dream of the ultimate immersive gaming experience. Having followed and been disappointed by the original VR explosion of the 90s, he then founded RiftVR.com to follow the new and exciting prospect of the rebirth of VR in products like the Oculus Rift. Paul joined forces with Ben to help build the new Road to VR in preparation for what he sees as VR’s coming of age over the next few years.
  • AJ@VRSFX

    The Kinect famously showed us what it might be able to do one day. This feels similar. This video shows me everything that I want Magic Leap to be, yet we know they can’t deliver this experience yet, so it irritates me that they’re showing it to us. FreddieW was making videos that looked just as great as this three years ago. It doesn’t mean a thing if the tech doesn’t work.

    I want to see what their “lightfield technology” can do today, not what they might be able to do three years from now. In their last Reddit AMA Rony said they are “confident” they will be able to deliver the primary breakthroughs that make such an experience possible, specifically non-QR anchoring and integrated occlusion. In other words, that’s the goal and they haven’t achieved it yet. They haven’t shown it working because it doesn’t work well enough. I’m ready to get off the hype train unless they pony up some real demos. They shouldn’t have started the hype so early if they’re not ready to demo today.

    Developers appreciate being part of the Skunk Works. Oculus let developers in on the action years before the tech was ready for consumers, and the trust they established has radiated throughout the web. At the very least, we appreciate being allowed to look behind the curtain. Hype has a short shelf life.

  • Don Gateley

    Can you find out definitively whether this video is a mock-up or in any way shows the real capability? If the former, more of the same: meh. Anybody in CGI production could make that.

    • kalqlate

      Well, not definitively, but the fact that the YouTube video begins to expand before the fingers separate indicates that either their interface intuits the next gesture or it’s a mock-up.

  • Reckless1966

    This has to be a mock-up. There are things going on here that I don’t believe are even technically possible yet. Well, let me rephrase that… they might be technically possible, but they would require extensive hardware, sensors, and very high-end software, far beyond what could be done in any consumer-level device.

    In order to have objects appear to be sitting on the floor, tables, etc., you need an accurate 3D map of the office space and all objects in it; then you need to accurately know where the user is and the position of their head in order to show the objects with the correct perspective, size, etc. Without multiple external sensors placed around the room (a la Vive) this would be impossible. And for things to work as seamlessly as shown here would require VERY high-fidelity positional information for everything in the environment. (They even have a mech break through the actual wall so it looks like there is now a giant hole in the office wall.)

    He “picks up” a virtual gun. He’s not holding any kind of device in the real world that could be used for tracking, and he doesn’t have any kind of markers on his hand, so the system would need very accurate hand and finger tracking too. And every time he shoots, his hand recoils. I suppose it could be programmed to shoot every time he makes the recoil motion (so he doesn’t have to try to press a virtual gun trigger… which doesn’t exist). And somehow he is able to “hold” the virtual object in his hand, and it reacts just like a physical object, with his finger going through the trigger opening, etc.

    This looks cool as hell… but it also seems it would be almost impossible to pull off for many years still.
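The anchoring pipeline described in the comment above (a mapped room, a tracked head pose, and perspective-correct rendering of a world-anchored object) can be sketched in miniature. This is a hypothetical toy illustration, not Magic Leap’s actual method; the anchor point, head positions, and the 1/depth size rule are all invented for the example:

```python
import numpy as np

def look_at(eye, target, up=np.array([0.0, 1.0, 0.0])):
    """Build a 4x4 view matrix from a tracked head pose (position + gaze target)."""
    f = target - eye
    f = f / np.linalg.norm(f)           # forward
    s = np.cross(f, up)
    s = s / np.linalg.norm(s)           # right
    u = np.cross(s, f)                  # true up
    m = np.eye(4)
    m[0, :3], m[1, :3], m[2, :3] = s, u, -f
    m[:3, 3] = -m[:3, :3] @ eye         # translate world into view space
    return m

# A virtual robot anchored to one point on the mapped office floor (world space).
anchor_world = np.array([2.0, 0.0, -3.0, 1.0])

# As the tracked head moves closer, view-space depth shrinks, so the projected
# object grows on screen: perspective and size come out correct automatically.
for eye_z in (0.0, -1.0, -2.0):
    view = look_at(np.array([0.0, 1.7, eye_z]), np.array([2.0, 0.0, -3.0]))
    p_view = view @ anchor_world
    depth = -p_view[2]                  # distance in front of the viewer
    apparent_scale = 1.0 / depth        # on-screen size ~ 1/depth under perspective
    print(f"head z={eye_z:+.1f}  depth={depth:.2f}  scale={apparent_scale:.3f}")
```

The point of the sketch is the commenter’s: everything hinges on knowing `eye` (the head pose) and `anchor_world` (the room map) to high accuracy; with either one wrong, the object visibly slides off the floor.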

    • Jacob Pederson

      I see the guns in the first shot, suggesting they are controllers, not virtual objects; however, this is clearly a mock-up. Stuff like this hurts VR because it sets expectations unrealistically high for the uninformed consumer. When Joe Cell Phone puts his Gear VR on for the first time and doesn’t see this game, he is going to be mighty disappointed.

      • Reckless1966

        You are right. I only watched it once, and I was looking at the UI for Gmail and the other stuff they were showing. I didn’t even notice the guns were already on the desk in the background; I only saw them when he walked up, with the graphics around them, and thought they were 3D models.

        But even without that issue, the other issues I mentioned are still huge problems with doing something like this. I agree with you: don’t make mock-up videos that go well beyond what the real thing looks like, because it builds up false expectations for the technology.

    • kalqlate

      You wrote: “In order to have objects appear to be sitting on the floor, tables, etc., you need an accurate 3D map of the office space and all objects in it; then you need to accurately know where the user is and the position of their head in order to show the objects with the correct perspective, size, etc. Without multiple external sensors placed around the room (a la Vive) this would be impossible. And for things to work as seamlessly as shown here would require VERY high-fidelity positional information for everything in the environment. (They even have a mech break through the actual wall so it looks like there is now a giant hole in the office wall.)”

      Sure, the video is a mockup, but don’t forget that Google is heavily invested, and that Google has Project Tango integrated into smartphone and tablet form factors, the tablet of which I hear is now available as a developer kit: https://www.youtube.com/watch?v=H07VrVozNZQ.

      Therefore, you can probably scratch the paragraph above from your doubts. Magic Leap is scheduled to demo SOME of the technology at the Manchester International Festival, July 2 – 19. Let’s see how much of what is depicted in this video they have by then.

      • Reckless1966

        I hadn’t heard about Project Tango. It sounds cool from its project page, but the page doesn’t give a lot of details. Even so, I’m skeptical that this system could get anywhere near what they are showing without multiple external sensors of some kind, like the Vive has, to deal with line-of-sight issues / occlusion. But we’ll see. Things seem to be moving at a breakneck pace in this area right now, so maybe my skepticism will be proven wrong. (I hope I am wrong… the better it ends up being, the happier I’ll be.)

        • kalqlate

          You said: “Even so, I’m skeptical if this system could be anywhere near what they are showing without multiple external sensors of some kind…”. Did you watch the Project Tango video that I supplied the link for? https://www.youtube.com/watch?v=H07VrVozNZQ. Project Tango is a personal LIDAR-like system. For any particular experience, if you are not concerned with occlusion, you are good to go without any pre-mapping. If you want your experience to be aware of every nook and cranny in your environment, you do a walk-through and let Project Tango create a detailed 3D map of everything. As demonstrated, this takes only as much time as it takes you to walk around until all desired areas of the environment are captured and mapped. Thereafter, you can begin your experience instantly, or share the map with others who will be using the same environment at the same or a different time.
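The walk-through workflow described above, where each frame’s depth samples are merged into one shared 3D map, can be sketched as a toy sparse voxel grid. The voxel size and the sample points are invented for illustration; a real Tango-style system does this per frame with vastly denser data:

```python
import math

# Toy walk-through mapping: world-space depth samples taken while moving
# through a room are binned into a sparse voxel set. The finished map is
# simply the union of all frames' bins, so overlap costs nothing.
VOXEL = 0.25  # grid resolution in metres (an illustrative choice)

def integrate(voxels, points):
    """Add one frame of world-space depth points to the sparse occupancy set."""
    for x, y, z in points:
        voxels.add((math.floor(x / VOXEL),
                    math.floor(y / VOXEL),
                    math.floor(z / VOXEL)))

room_map = set()
# Frame 1: points seen from near the doorway.
integrate(room_map, [(0.1, 0.0, 1.0), (0.6, 0.0, 1.1)])
# Frame 2: new points after walking forward; the repeated point deduplicates.
integrate(room_map, [(0.6, 0.0, 1.1), (2.3, 0.0, 3.4)])
print(len(room_map))  # 3 occupied voxels
```

Because the map is just a growing set keyed by position, it can be saved and handed to another user of the same room later, which is the sharing step the comment describes.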

          • Reckless1966

            No, I missed the video… I was reading on a phone. I’ve watched it now. Impressive. Makes me wonder why the PS4 camera and Kinect are not a lot better than they are, considering this tech is running on tablets and mobile phones.

            I didn’t realize that level of mapping could be done in real-time like that. Pretty exciting stuff.

          • kalqlate

            As revealed in the video, only recently, building on the Nvidia Tegra K1, did they reach the level of usability shown. You can bet that Microsoft and partners will soon announce their next iteration of Kinect, and that the new tech will find its way into versions of Windows phones and tablets to compete with products that integrate Project Tango. At some point, I imagine there will be some tie-in to their HoloLens AR visor to give that device added functionality toward what is shown in this mockup of Magic Leap tech. Interestingly, if I recall correctly, Apple bought the company that produced the Kinect technology. Interesting days ahead.

  • Sky Castle

    I’m not as excited about AR as I am about VR.

    • kalqlate

      Understandable. But also understand that AR is something you’ll eventually (perhaps within two to five years) wear most of the day. Being able to navigate the real world naturally, while having media and information that matters to you mapped onto the real world and following you wherever you go, will be a boon to your productivity and leisure activities.