Image courtesy Meta

Meta Offered a Glimpse into the XR R&D That’s Costing It Billions

During the Connect 2021 conference last week, Meta Reality Labs’ Chief Scientist, Michael Abrash, offered a high-level overview of some of the R&D that’s behind the company’s multi-billion dollar push into XR and the metaverse.

Michael Abrash leads Meta Reality Labs Research, the team tasked with investigating technologies the company believes could be foundational to XR and the metaverse decades into the future. At Connect 2021, Abrash shared some of the group’s very latest work.

Full-body Codec Avatars

Meta’s Codec Avatar project aims to achieve a system capable of capturing and representing photorealistic avatars for use in XR. A major challenge beyond simply ‘scanning’ a person’s body is getting it to then move in realistic ways—not to mention making the whole system capable of running in real-time so that the avatar can be used in an interactive context.
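
While the talk didn’t get into implementation details, the ‘codec’ name reflects the encoder-decoder design the group has described in its published work: sensor data from the headset is compressed into a compact latent code, and a learned decoder reconstructs the avatar’s geometry and appearance on the receiving end. Below is a minimal sketch of that general idea; every shape, weight, and name is a hypothetical stand-in, not Meta’s actual model.

    import numpy as np

    # Minimal sketch of the encoder-decoder ("codec") idea behind Codec Avatars.
    # All shapes, weights, and names are hypothetical placeholders.

    LATENT_DIM = 256     # compact code sent over the network each frame
    SENSOR_DIM = 4096    # flattened headset sensor features (placeholder)
    VERTEX_COUNT = 7306  # avatar mesh vertices (placeholder)

    rng = np.random.default_rng(0)
    W_enc = rng.standard_normal((LATENT_DIM, SENSOR_DIM)) * 0.01        # stand-in for learned encoder
    W_dec = rng.standard_normal((VERTEX_COUNT * 3, LATENT_DIM)) * 0.01  # stand-in for learned decoder

    def encode(sensor_features):
        """Compress headset sensor input into a small latent code."""
        return np.tanh(W_enc @ sensor_features)

    def decode(latent):
        """Reconstruct avatar geometry from the latent code."""
        return (W_dec @ latent).reshape(VERTEX_COUNT, 3)

    # One 'frame': only the tiny latent code crosses the network,
    # not raw sensor data, which is what makes real-time use plausible.
    frame = rng.standard_normal(SENSOR_DIM)
    code = encode(frame)     # sender side
    vertices = decode(code)  # receiver side drives the avatar mesh
    print(code.shape, vertices.shape)  # (256,) (7306, 3)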

The company has shown off its Codec Avatar work on various occasions, each time showing improvements. The project started with high-quality heads alone, but has since evolved to full-body avatars.

The video above is a demo representing the group’s latest work on full-body Codec Avatars, which researcher Yaser Sheikh explains now supports more complex eye movement, facial expressions, and hand and body gestures which involve self-contact. It isn’t stated outright, but the video also shows a viewer watching the presentation in virtual reality, implying that this is all happening in real-time.

With the possibility of such realistic avatars in the future, Abrash acknowledged that it’s important to think about security of one’s identity. To that end he says the company is “thinking about how we can secure your avatar, whether by tying it to an authenticated account, or by verifying identity in some other way.”
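
Abrash didn’t describe a mechanism, but one plausible shape for tying an avatar to an authenticated account is a digital signature: the account holds a signing key, the avatar data is signed when published, and receiving clients verify the signature before trusting the identity. The sketch below is purely illustrative (it uses the third-party Python cryptography package; nothing here reflects Meta’s actual system):

    # Illustrative only: one way to bind an avatar asset to an account-held
    # key so a receiving client can verify who published it.
    # Requires the third-party 'cryptography' package (pip install cryptography).
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # The account's signing key (in practice managed by an identity service).
    account_key = Ed25519PrivateKey.generate()
    avatar_blob = b"...serialized avatar parameters..."  # placeholder bytes

    signature = account_key.sign(avatar_blob)  # done once, when publishing

    # A receiver verifies the avatar really comes from the claimed account.
    public_key = account_key.public_key()
    try:
        public_key.verify(signature, avatar_blob)
        print("avatar verified: matches the claimed account")
    except InvalidSignature:
        print("rejected: not signed by this account")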

Photorealistic Hair and Skin Rendering

While Meta’s Codec Avatars are already looking pretty darn convincing, the research group believes the ultimate destination for the technology is to achieve photorealism.

Above, Abrash showed off what he says is the research group’s latest work in photorealistic rendering and lighting of hair and skin. It wasn’t claimed that this was happening in real-time (and we doubt it is), but it’s a look at the bar the team is aiming for down the road with the Codec Avatar tech.

Clothing Simulation

Along with a high-quality representation of your body, Meta expects clothing will continue to be an important way that people express themselves in the metaverse. To that end, the company thinks that making clothes act realistically will be an important part of that experience. Above, it shows off its work in clothing simulation and hands-on interaction.
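
The talk didn’t say how Meta’s simulator works under the hood, but real-time cloth is commonly built on techniques like mass-spring systems or position-based dynamics. As a rough illustration of the former (all parameters below are arbitrary), here’s a toy grid of particles connected by springs, pinned along its top edge:

    import numpy as np

    # Toy mass-spring cloth: a small grid of particles connected by structural
    # springs, pinned along the top row. Illustrates the general technique only;
    # Meta's actual simulator is not described in the talk.

    N = 8             # grid is N x N particles
    REST = 0.1        # spring rest length (m)
    K = 500.0         # spring stiffness
    DAMP = 0.98       # velocity damping per step
    DT = 1.0 / 240.0  # timestep (s)
    GRAVITY = np.array([0.0, -9.81, 0.0])

    # Particle state: positions laid out on a vertical grid, zero velocity.
    pos = np.array([[i * REST, -j * REST, 0.0] for j in range(N) for i in range(N)])
    vel = np.zeros_like(pos)

    # Structural springs: each particle linked to its right and lower neighbor.
    springs = [(j * N + i, j * N + i + 1) for j in range(N) for i in range(N - 1)]
    springs += [(j * N + i, (j + 1) * N + i) for j in range(N - 1) for i in range(N)]

    def step():
        forces = np.tile(GRAVITY, (N * N, 1))
        for a, b in springs:
            d = pos[b] - pos[a]
            length = np.linalg.norm(d)
            f = K * (length - REST) * (d / length)  # Hooke's law along the spring
            forces[a] += f
            forces[b] -= f
        vel[:] = (vel + forces * DT) * DAMP  # semi-implicit Euler, unit mass
        vel[:N] = 0.0                        # pin the top row of particles
        pos[:] = pos + vel * DT

    for _ in range(240):  # simulate one second
        step()
    print("lowest point after 1s:", pos[:, 1].min())

Production cloth simulators add bending and shear constraints, collision handling, and far more robust integrators, but the core loop (accumulate forces, integrate, enforce constraints) looks much like this.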

High-fidelity Real-time Virtual Spaces

While XR can easily whisk us away to other realities, virtually teleporting friends into your actual living space would be great too. Taken to the extreme, that means having a full-blown recreation of your actual home and everything in it, which can run in real-time.

Well… Meta did just that. They built a mock apartment complete with a perfect replica of all the objects in it. Doing so makes it possible for a user to move around the real space and interact with it like normal while keeping the virtual version in sync.
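
Conceptually, that kind of setup pairs each tracked real object with a virtual ‘twin’ and streams pose updates into the replica scene. The sketch below shows the idea in its simplest form; the class and method names are hypothetical, since Meta didn’t describe its implementation:

    from dataclasses import dataclass, field

    # Sketch of the sync idea: each real object has a virtual twin, and
    # tracking updates stream pose changes into the replica scene. All
    # names here are hypothetical, not Meta's actual system.

    @dataclass
    class Pose:
        position: tuple[float, float, float]
        rotation: tuple[float, float, float, float]  # quaternion (x, y, z, w)

    @dataclass
    class ReplicaScene:
        twins: dict[str, Pose] = field(default_factory=dict)

        def register(self, object_id: str, pose: Pose) -> None:
            """Add a virtual twin for a scanned real-world object."""
            self.twins[object_id] = pose

        def on_tracking_update(self, object_id: str, pose: Pose) -> None:
            """Called whenever the tracker sees a registered object move."""
            if object_id in self.twins:
                self.twins[object_id] = pose  # virtual copy follows the real one

    scene = ReplicaScene()
    scene.register("coffee_mug", Pose((0.0, 0.9, 0.3), (0, 0, 0, 1)))

    # User picks up the real mug; the tracker reports its new pose, and
    # remote guests see the virtual mug move the same way.
    scene.on_tracking_update("coffee_mug", Pose((0.2, 1.1, 0.3), (0, 0, 0, 1)))
    print(scene.twins["coffee_mug"])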

So if you happen to have virtual guests over, they could actually see you moving around your real world space and interacting with anything inside of it in an incredibly natural way. Similarly, when using AR glasses, having a map of the space with this level of fidelity could make AR experiences and interactions much more compelling.

Presently this seems to serve the purpose of building out a ‘best case’ scenario of a mapped real-world environment for the company to experiment with. If Meta finds that this kind of perfectly synchronized real and virtual space proves important to valuable use-cases, it may then explore ways to make it easy for users to capture their own spaces with similar precision.
