Google is Rolling out Photorealistic ‘Likeness’ Avatars on Android XR to Compete with Apple’s ‘Personas’


Google is starting to roll out new photorealistic avatars it calls “Likeness”. Similar to Apple’s Personas, Likeness avatars are generated by scanning a user’s face, then animated with input from the sensors on a headset. The avatars can be used to represent the user in video call apps, but Google doesn’t yet have a way to hold spatial meetings between Likeness avatars.

The News

Google is launching its own photorealistic avatars called Likeness avatars, for use on compatible Android XR headsets. The idea is similar to Apple’s Persona avatars: scan the user’s face to create a realistic representation, then use the headset’s on-board cameras to animate the scan as realistically as possible.


Likenesses take a slightly different (and probably more user-friendly) approach to the initial face scan; rather than having you hold a headset out in front of your face, Google released a Likeness (beta) Android app that lets people scan themselves with their phone instead. Holding a phone in front of your face for a scan is definitely a bit easier than awkwardly holding a whole headset with both hands.

According to Google, the Likeness (beta) app is only compatible with the Google Pixel 8 or newer, Samsung Galaxy S23 or newer, or Samsung Galaxy Z Fold5 or newer. Without a compatible device you can’t create a Likeness avatar, meaning Android XR users with an iPhone (or an unsupported Android phone) won’t be able to scan themselves. One benefit of Apple’s approach of scanning with the headset itself is that anyone can use a Persona avatar on Vision Pro regardless of what kind of phone they have.

Image courtesy Google

Like Apple’s approach, Likeness avatars can be used generically as a ‘virtual webcam’. That makes them widely compatible with most video call apps that expect a front-facing camera, like Google Meet, Zoom, Messenger, etc.
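Google hasn’t published the technical details of how the virtual webcam is exposed, but the broad compatibility follows from a simple idea: if the system presents the avatar feed as a standard front-facing camera device, existing apps can pick it up with the same stock camera APIs they already use. As a minimal sketch (my assumption of the model, not Google’s documented implementation), here’s how a typical Android video call app discovers front-facing cameras with the Camera2 API:

```kotlin
import android.content.Context
import android.hardware.camera2.CameraCharacteristics
import android.hardware.camera2.CameraManager

// How a typical video call app enumerates front-facing cameras via the standard Camera2 API.
// If the system exposes the Likeness feed as an ordinary front-facing camera device (an
// assumption on my part), apps like this would need no Likeness-specific code to use it.
fun findFrontFacingCameraIds(context: Context): List<String> {
    val manager = context.getSystemService(Context.CAMERA_SERVICE) as CameraManager
    return manager.cameraIdList.filter { id ->
        manager.getCameraCharacteristics(id)
            .get(CameraCharacteristics.LENS_FACING) == CameraCharacteristics.LENS_FACING_FRONT
    }
}
```

In that model, the avatar feed is just another camera ID from the app’s point of view, which is why no per-app integration would be required.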

And just like Apple’s, the first ‘beta’ iteration of Likeness avatars is 2D only. Avatars are presented as a flat 2D representation, with no way to transmit them in a spatial format or hold a ‘spatial meeting’ like Vision Pro can do with spatial FaceTime calls. However, Google says it’s working on spatial meetings for the future.

My Take

Photorealistic avatars on XR headsets are a great value-add because of the ability to use video call apps naturally. Apple’s Personas are currently the state-of-the-art as far as consumer-available photorealistic avatars, and the company has shown that it’s possible to cross over the uncanny valley with this approach to avatars.

During a recent meeting with Google, I joined a demo video call on Google Meet with one of the participants using a Likeness avatar. From a photorealism standpoint, the results look impressive, and facial movements look convincing too. However, because I didn’t personally know the individual using the Likeness, I was unfamiliar with their real expressions and mannerisms, which makes it hard to judge the accuracy of the facial motion. Still, facial motion only needs to be plausibly realistic to be passable in many circumstances, and from what I can see, that’s been achieved.

Image courtesy Google

While it’s a bummer that there’s no ‘spatial meeting’ yet for Android XR (allowing users to chat face-to-face with fully spatial Likeness avatars), Google made the right choice in prioritizing virtual webcam usage at the start. It’s less impressive than spatial meetings, but more widely useful and compatible with existing services and apps.

There’s probably no chance we’ll see spatial calls between Likeness avatars and Persona avatars any time soon, but virtual webcam compatibility makes it trivial for both kinds of avatars to chat across headsets.

One thing worth noting is that Likeness avatars probably won’t be compatible with all Android XR devices. Forthcoming ‘Android XR’ smartglasses (which don’t run anything close to the full-blown version of Android XR) don’t have the power or sensors necessary to render or animate a Likeness avatar. Similarly, devices like XREAL Aura (which does run full-blown Android XR) might have the power, but don’t have the sensors (eye- and mouth-tracking cameras) needed to animate a Likeness avatar.


It’s possible that Google could make Likeness avatars compatible with these devices by doing simulated eye movements and audio-based lip-sync. Although those technologies are already widely in use for more cartoonish avatars, they’re likely to fall deep into the uncanny valley when applied to photorealistic face scans. So I doubt Google will take that approach.
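For context, the crudest form of audio-based lip-sync simply maps the loudness of the microphone signal to how open the mouth is; real systems layer viseme detection on top, but the approximation is part of why the technique reads as ‘off’ on a photoreal face. A rough, hypothetical Kotlin sketch of that basic approach (purely illustrative, not anything Google has announced):

```kotlin
import kotlin.math.abs
import kotlin.math.max

// Hypothetical illustration of naive audio-driven lip-sync: map the smoothed loudness of
// incoming 16-bit PCM audio to a single 0..1 "mouth openness" value for an avatar rig.
class NaiveLipSync(
    private val attack: Float = 0.5f,   // how quickly the mouth opens when audio gets loud
    private val release: Float = 0.08f  // how quickly it closes when audio goes quiet
) {
    private var envelope = 0f

    /** Returns mouth openness in [0, 1] for one frame of PCM samples. */
    fun mouthOpenness(samples: ShortArray): Float {
        // Peak amplitude of this frame, normalized to [0, 1].
        var peak = 0f
        for (s in samples) peak = max(peak, abs(s.toInt()) / 32768f)

        // Attack/release smoothing so the mouth doesn't flutter on every loud sample.
        val rate = if (peak > envelope) attack else release
        envelope += (peak - envelope) * rate
        return envelope.coerceIn(0f, 1f)
    }
}
```

An approximation like this is passable on a stylized cartoon mouth, but on a photorealistic scan of a real face the mismatch between sound and mouth shape is exactly the kind of thing that lands in the uncanny valley.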

With the introduction of Likeness avatars, Google also faces the same challenge I pointed out recently regarding Apple’s Persona avatars: as headsets get smaller, how will the company maintain this level of avatar fidelity with even less room for the cameras that are essential to these kinds of avatars?


Ben is the world's most senior professional analyst solely dedicated to the XR industry, having founded Road to VR in 2011—a year before the Oculus Kickstarter sparked a resurgence that led to the modern XR landscape. He has authored more than 3,000 articles chronicling the evolution of the XR industry over more than a decade. With that unique perspective, Ben has been consistently recognized as one of the most influential voices in XR, giving keynotes and joining panel and podcast discussions at key industry events. He is a self-described "journalist and analyst, not evangelist."