Varjo today announced its new Varjo Reality Cloud platform, a device-agnostic VR meeting platform which can scan and share physical spaces in real-time using the depth sensors on its latest XR-3 headsets. The company doesn’t have a specific release date for the tech just yet, but is presenting a glimpse of what it plans to deliver in the future. Road to VR got an early look at the new tech.

Varjo Reality Cloud

Varjo is building a virtual meeting platform not unlike those we’ve seen before, but with one key difference: the company plans to leverage the wide field-of-view depth sensors on its Varjo XR-3 headset to let users easily capture their surroundings and use them as the basis for virtual meetings. What’s more, beyond just a static capture, the Varjo Reality Cloud platform will continuously update the portion of the environment that’s in the headset’s view while the meeting is happening. That means if there’s something real and relevant in the host’s environment—like a book, product, or even another person—the virtual viewers will be able to see that thing moving and updating in real-time (as long as the host is actively looking at it).

The idea melds well with the XR-3’s existing high-quality passthrough capabilities. During normal use of the headset it’s easy to toggle on the passthrough view to see the environment around you, which means that if you were sharing your local environment through Varjo Reality Cloud, it could seem like others in the meeting were standing right in the same room as you.


Varjo thus likens its Reality Cloud platform to ‘teleportation’, though I wouldn’t say it goes that far just yet.

Hands-on With the Prototype

I got to see an early prototype of the Varjo Reality Cloud in action during a meeting with the company in Silicon Valley. Using the Varjo XR-3 headset, I was shown a pre-recorded example of a Varjo Reality Cloud meeting space with a person standing in the center of the room talking, gesturing, and showing me some objects from around the room. While most of the room around me was static, the person was essentially being ‘filmed’ by an XR-3 headset, which meant their movements (and anything in a certain area around them) were being updated in real-time.

To be clear, the environment I was seeing wasn’t just flat or even 180° footage; it was an actual volumetric space, and so was the person standing inside the room. And while I could definitely make out the specific person I was looking at and the room around me, at this prototype stage the fidelity leaves a lot to be desired. The room scan and the person in front of me were assembled from a splotchy point cloud of colored dots—far from the incredible quality of several of Varjo’s photogrammetry demos that I’ve seen in the past.

While it’s almost certain that Varjo Reality Cloud won’t look as good as carefully pre-captured photogrammetry any time soon, the company says that what I was looking at is merely a proof of concept, and that improvements in fidelity are expected as development moves forward.


One important part of that ongoing development will be moving the whole thing into the cloud. While the demo I saw was a pre-recorded example of Varjo Reality Cloud, ultimately the company plans to stream captured environments from the cloud to all participants in a session, with the bulk of the computation handled server-side. To do so at the highest possible quality on its ultra-high resolution headsets, the company says it has developed a foveated compression algorithm that cuts the stream down to just “single megabytes per second.” My understanding is that the algorithm specifically takes advantage of the eye-tracking built into Varjo headsets.
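Varjo hasn’t published details of its algorithm, but the general idea behind eye-tracked (“foveated”) compression can be sketched as follows: spend most of the bitrate on tiles near the gaze point and compress the periphery aggressively. Everything below—the tile grid, the quality falloff, and the byte budgets—is a hypothetical illustration, not Varjo’s implementation.

```python
import math

def tile_quality(tile_center, gaze, fovea_radius=0.1, min_q=0.05):
    """Quality factor in [min_q, 1.0] based on distance from the gaze point.

    tile_center, gaze: (x, y) in normalized [0, 1] screen coordinates.
    fovea_radius: region kept at full quality (an assumed value).
    """
    d = math.dist(tile_center, gaze)
    if d <= fovea_radius:
        return 1.0
    # Quality falls off with eccentricity beyond the fovea.
    return max(min_q, fovea_radius / d)

def frame_budget(tiles, gaze, full_tile_bytes=8192):
    """Approximate compressed size of one frame, in bytes."""
    return sum(int(full_tile_bytes * tile_quality(t, gaze)) for t in tiles)

# A 16x16 tile grid with the gaze at the center: the foveated budget
# ends up a small fraction of the uniform full-quality size.
tiles = [((i + 0.5) / 16, (j + 0.5) / 16) for i in range(16) for j in range(16)]
foveated = frame_budget(tiles, gaze=(0.5, 0.5))
uniform = 16 * 16 * 8192
print(foveated, uniform, foveated / uniform)
```

Since peripheral vision is far less acute than foveal vision, the viewer is unlikely to notice the aggressive compression away from where they are looking, which is how such a scheme can plausibly reach low single-digit megabytes per second.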

Device Agnostic

But Varjo headsets aren’t the only devices that will be able to join Varjo Reality Cloud. While it’ll take an XR-3—with its built-in depth sensors—to capture and stream environments, the company says it’s taking a device-agnostic approach to participants, who it expects could join Varjo Reality Cloud sessions from computers, smartphones, tablets, and other VR headsets.




  • kontis

    It’s impressive that small companies are beating the CV/AI giants at their own game, but let’s be real. We all know that FB/Google/Apple could release the same thing 5, even 10 years later, and there’s a 99% chance they would be more successful in less than a month. And they would also extend to enterprise, because why not?

    There is also a 99% chance that if this thing becomes a successful business used by many people, it will be because it was acquired by one of those megacorps.

    This enhanced version of the winner-takes-all paradigm in the digital world is quite painful to observe.

    Unfortunately, the Tesla story of a new independent big player emerging happens very rarely nowadays (and even they came close to being acquired by Google, and then Apple).

    • Blaexe

      5 years later? I’m sure all the big companies could release a significantly better version today given the freedom to develop a $5000 headset supported by a high end PC.


    • Armando__

      To be fair, Varjo has $100M in funding; it’s not a small company or a small startup by any stretch.

    • Lucidfeuer

      Of course they are. Giant corporations like those you mention aren’t really “businesses” anymore but speculative funds. Apple made $270B in revenue (from various sources) and yet has a market cap of $2.2T, which is almost 8 times its real value. The difference is that companies operate on risk (revenues can rise and fall) while leech investors don’t, and will be paid according to projections without doing any work or producing any real value, even if that means stalling your cash pool (so as not to lower your valuation) or borrowing money from banks (despite having billions in funds) in order to pay investors for missed results… which is beyond irrational.

      Anyway, to the point: corporations put less and less money into R&D, hiring, and active investments (year-over-year growth in these categories was somewhere around 30–40% circa 2012, at the departure of Jobs, and has since declined and stagnated at 10–15%), and they aggressively cut operating, production, and product costs while maintaining prices to raise their margins more and more.

      The result being: these corporations have their hands tied while being bled by parasitic investors, meaning they’re stagnating and slowing down on R&D; in fact they pretty much either wait for easy “paper” implementations or straight-up steal from smaller companies. Add to that increasingly kafkaesque administration, hiring, and micro-management, and you can be sure no significant new tech or innovation, let alone revolution, will come from these corporations.

      Which is both good and bad news: good because these companies have never been so vulnerable to sudden competition (despite the huge barrier to entry and the predation on smaller companies enabled by f***ed-up legislation and a lack of enforcement), and bad because it is also a testimony to the current unregulated economic situation.

  • Armando__

    Volumetric cameras aren’t able to capture most of the optical properties of an environment. Think of why modern video games use real-time lights and various texture maps besides a color map: specular, diffuse, translucency, refraction, volumetric scattering, and emissive surfaces are just a few things in real life that aren’t preserved during depth capture. It’s good enough for capturing a basic room, but anything else, including humans, is going to look off.

    • Lucidfeuer

      They can with some tweaks (for example, using polarisers; a research paper used that approach to capture the normals of somewhat flat surfaces).
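The parent comment’s point can be sketched in code: a typical RGB-D capture stores only a position and an already-lit color per point, while a physically-based renderer also needs material parameters that the capture can’t recover. The field names below are purely illustrative, not from any Varjo API.

```python
from dataclasses import dataclass

@dataclass
class CapturedPoint:
    # All a typical RGB-D (color + depth) capture preserves per point:
    x: float
    y: float
    z: float       # position from the depth sensor
    r: int
    g: int
    b: int         # color with the scene lighting baked in

@dataclass
class PBRMaterial:
    # What a game engine would also want, but depth capture can't recover:
    albedo: tuple        # lighting-independent base color
    specular: float      # mirror-like reflectivity
    roughness: float     # micro-surface scatter
    translucency: float  # subsurface light transport
    emissive: float      # self-illumination
```

Because the captured color already has the original room’s lighting baked in, relighting the scan or reproducing reflective and translucent surfaces convincingly is not possible from the point cloud alone.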

  • I agree with you, Ben. It looks super cool, but I still don’t completely get the value. Or maybe they have a roadmap we don’t yet see: reconstructing an environment using the camera streams of the headsets in the room may be part of a patent that will be used by all future XR glasses.