Google’s First ‘Beam’ Videoconferencing Device is ‘HP Dimension’, Coming Late 2025 at $25,000

HP announced last year that it would be the first to offer hardware based on Google Beam (formerly ‘Project Starline’), the light field-based 3D videoconferencing platform. Now HP has unveiled ‘Dimension’, which is being pitched to enterprises at $25,000 a pop.

HP Dimension with Google Beam is said to use six cameras and “state of the art AI” to create a realistic 3D video of each participant, displayed on a special 65-inch light field display with realistic size, depth, color, and eye contact.

HP says the device, which will be sold to select partners starting in late 2025, will be priced at $25,000. This notably doesn’t come with the Google Beam license, which is sold separately.

Image courtesy Google, HP

As an enterprise-first device, HP Dimension is slated to support Zoom Rooms and Google Meet, so it can handle immersive 3D chats as well as traditional 2D group meetings, integrating with cloud-based video services such as Teams and Webex.

“We believe that meaningful collaboration thrives on authentic human connections, which is why we partnered with Google to bring HP Dimension with Google Beam out of the lab and into the enterprise,” said Helen Sheirbon, SVP and President of Hybrid Systems, HP Inc. “HP Dimension with Google Beam bridges the gap between the virtual and physical worlds to create lifelike virtual communication experiences that brings us closer together.”

First introduced in 2021 as ‘Project Starline’, Google Beam uses a light field display to show natural 3D depth without the need for an XR headset or glasses of any sort, essentially simulating a face-to-face chat between two people.

In its testing, HP says Beam makes for 39% more non-verbal behaviors noticed, 37% more users reporting better turn-taking, and a 28% increase in memory recall compared to traditional videoconferencing platforms.

  • Christian Schildwaechter

    TL;DR: unless I am missing something very fundamental, this is not really a light field display; it only uses conventional cameras and displays with lenses and head tracking, and the only thing light field about it is the mathematical model for processing the camera views into viewer dependent perspectives.

    I somehow very seriously doubt that this is actually a light field display. A light field is kind of an image with light rays passing through each pixel not only in one direction, but several. A regular pixel image has a certain dimension in X and Y, plus a color/brightness value for each pixel that is the same no matter from which direction you look at it. In a light field, the color/brightness value of the pixel changes depending on the viewer's perspective. Which is not the same as a hologram, which can store the full three dimensional light paths of a scene at much higher resolution.
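    A minimal sketch of that distinction (purely illustrative, with made-up resolutions; nothing to do with how Beam actually stores data):

    ```python
    import numpy as np

    # Conventional image: one RGB value per (x, y), identical from every viewing direction.
    image = np.zeros((270, 480, 3), dtype=np.uint8)

    # Discretized light field: the RGB value at (x, y) also depends on the direction
    # (u, v) the ray travels in, here sampled over a 9x9 grid of directions.
    light_field = np.zeros((270, 480, 9, 9, 3), dtype=np.uint8)

    def color_seen(lf, x, y, u, v):
        """Color of pixel (x, y) as seen from direction bin (u, v)."""
        return lf[y, x, v, u]
    ```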

    The light field concept has been around for quite a while, but we got actual light field cameras from Lytro during the early 2010s, with roughly 1MP for USD 1,600. The camera actually used a 40MP sensor with a special lens consisting of a lot of micro lenses that looked at an object from slightly different perspectives, which later allowed software (using a lot of math) to slightly move the viewpoint inside the picture or change the focus depth, at an effective 1MP resolution.
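    The resolution trade-off in rough numbers, using the figures above (a back-of-the-envelope sketch, not Lytro's actual optics math):

    ```python
    sensor_pixels = 40_000_000      # raw pixels behind the micro-lens array
    views_per_lenslet = 40          # distinct viewpoints sampled under each micro lens (rough guess)

    effective_resolution = sensor_pixels // views_per_lenslet
    print(effective_resolution)     # 1,000,000 -> the camera's ~1MP output images
    ```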

    What Google/HP seem to have built here is optically a lot simpler. Where the Lytro turned 40MP into 1MP, so in theory had 40 different viewpoints for each pixel in the light field, the HP Dimension/Google Beam uses seven conventional cameras, probably aligned in a row. So you can move to the side with the perspective following, but not up and down. And the display itself very likely uses a regular high resolution screen with lenticular lenses, spreading the light from every second pixel column slightly to the side so the two eyes see two different pictures, with a camera tracking the user's head position and calculating the matching perspective from the seven cameras. Like a high-res, single-user-perspective version of what AVP does with its EyeSight front display rendering the eyes of the user's avatar.
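    A toy version of that head-tracked view selection (my speculation about the general approach; Google presumably uses much more sophisticated, likely AI-based view synthesis rather than a simple cross-fade):

    ```python
    import numpy as np

    # Hypothetical: 7 cameras in a horizontal row, frames already captured.
    camera_x = np.linspace(-0.6, 0.6, 7)                      # camera positions in meters
    frames = [np.zeros((270, 480, 3)) for _ in camera_x]      # placeholder camera frames

    def view_for_head(head_x):
        """Blend the two cameras nearest to the tracked head position."""
        i = int(np.clip(np.searchsorted(camera_x, head_x), 1, len(camera_x) - 1))
        t = (head_x - camera_x[i - 1]) / (camera_x[i] - camera_x[i - 1])
        return (1 - t) * frames[i - 1] + t * frames[i]        # naive stand-in for real view synthesis

    novel_view = view_for_head(head_x=0.1)                    # perspective follows side-to-side movement
    ```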

    So it is less of a light field display, and more of a souped-up auto-stereoscopic Nintendo 3DS display with eye tracking and a clever model to render realistic perspectives from only seven cameras. Which also means it will only work for one person, and all the HP/Google material only shows a single user in front of the display. Only one Google Project Starline video shows the camera moving around the user with the perspective changing, but that was most likely just to demonstrate how it would look for the user during their own movements.

    The effect will still be the same, a way to communicate with another person in a very lifelike way. But it is light field only in the sense of the mathematical model behind it. Just like almost none of the depth sensors advertised as LIDAR are actually LIDAR (which usually involves a rotating laser), and pretty much nothing advertised as a holographic display/projection actually uses holograms. It's basically an advanced stereoscopic video conferencing system allowing for single-user head movement, with a fancy marketing name only loosely related to how it is implemented. Still neat, but technically a lot less impressive than the name suggests.

    • Mike

      "has a certain dimension X"
      So you're saying this is one of those screen portals into Dimension X from the 1987 Ninja Turtles? Cool, now we can call Krang.

    • Stephen Bard

      The HP Dimension is probably using the Looking Glass company's new expensive 65" monitor that is technically a "light field" display, allowing multiple viewers (lookingglassfactory link not postable).

      • Christian Schildwaechter

        TL;DR: Looking Glass isn't a light field display either; they are just using dome-shaped lenses plus a clever method of calculating images that avoids losing most of the resolution.

        Looking Glass is also using a kind of lenticular lens, but instead of one lens covering one pixel row, they have sort of small dome-like lenses that allow for showing different images in two dimensions, which is what AVP's EyeSight does, though only along one axis. Their website says they can generate up to 100 perspectives, so my guess is that every lens dome covers 10*10 pixels.

        I somewhat doubt that HP is using this, as it would theoretically reduce a 4K display to 384*216 pixels, and a larger number of perspectives will always reduce the resolution compared to a single-user view with eye tracking. Looking Glass seems to use a rather clever way to render the image though, so you don't need separate pixels for separate perspectives; instead it looks like they are working with overlaying wave patterns that allow for a much higher perceived resolution, somewhat depending on what is displayed, and probably making it rather compute heavy.
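        The resolution cost in rough numbers, assuming the 10*10-pixels-per-lens guess above (a naive 1:1 trade of pixels for views, before any of their clever rendering):

        ```python
        panel_w, panel_h = 3840, 2160              # 4K panel
        pixels_per_lens = 10                       # guessed lens pitch in pixels, per axis

        per_view_w = panel_w // pixels_per_lens    # 384
        per_view_h = panel_h // pixels_per_lens    # 216
        print(per_view_w, per_view_h, pixels_per_lens ** 2)   # 384 216, for 100 perspectives
        ```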

        Looking Glass is big on the marketing hype train though, aiming to win at bullshit bingo. They claim their displays are "light field displays", using "next-generation holographic technology", resulting in "group-viewable holograms powered by light field display technology". It is not a light field display; it still only displays 2D images that have to be rendered in a way that turns them into different perspectives when watched through the micro-lenses in front of the display, very similar to the pre-distortion required in VR HMDs to compensate for lens pincushion effects. It has nothing to do with holograms and in reality uses advanced math to render the perspectives in real time, to be looked at through an array of small, spherical lenses, something that was first proposed as "fly-eye lenses" by French physicist Gabriel M. Lippmann in 1903.
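        A bare-bones sketch of that kind of rendering, reduced to simple horizontal view interleaving (Looking Glass's actual mapping and the wave-pattern trick are considerably more involved):

        ```python
        import numpy as np

        def interleave_views(views):
            """Pack N pre-rendered perspectives into one flat panel image so that a
            lens array sends column k toward viewing direction k % N."""
            n = len(views)
            h, w, _ = views[0].shape
            panel = np.empty((h, w, 3), dtype=views[0].dtype)
            for k in range(w):
                panel[:, k] = views[k % n][:, k]   # each panel column taken from one perspective
            return panel

        # Hypothetical: 8 rendered perspectives of the same scene at panel resolution.
        views = [np.full((216, 384, 3), v * 30, dtype=np.uint8) for v in range(8)]
        panel_image = interleave_views(views)      # the flat 2D image the bare panel displays
        ```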

        Below is a screen capture of what Looking Glass actually displays when not looking at it through the lenses, courtesy of Road To MR. www_roadtomr_com/2021/09/26/3381/how-looking-glass-makes-you-see-3d/
        https://uploads.disquscdn.com/images/8a864d4534f4ca1d00780de1e86f2ed4b8254a480ae6776fab40e25d389b464e.jpg

        • psuedonymous

          Looking Glass are using columnar lenticular lenses too. Their trick is to rotate the columns slightly onto a diagonal, so they get a small number of vertical offset views as well as the normal horizontally offset views.
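          A toy illustration of how a slanted lens column assigns views (standard slanted-lenticular math with made-up numbers, not Looking Glass's actual calibration):

          ```python
          import math

          def view_index(x, y, n_views=45, lens_pitch_px=5.0, slant=math.tan(math.radians(10))):
              """Which of n_views perspectives the pixel at (x, y) feeds. Because the lens
              columns are slanted, moving vertically (y) also shifts the view index,
              which is what yields the extra vertically offset views."""
              phase = (x + y * slant) / lens_pitch_px       # position under the slanted lens
              return int(phase * n_views) % n_views

          # Same column, different rows -> different views, thanks to the slant.
          print(view_index(100, 0), view_index(100, 20), view_index(100, 40))
          ```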

          For Google Beam, a standard autostereo display would work just fine, as the viewers are all seated so not changing height, and the viewing subjects are all nearly immobile (also seated). That allows you to make some fixed assumptions about viewer head location and scene contents that are both static and remain valid for the intended use-case, which greatly simplifies actual system requirements vs. a 'general purpose' lightfield display.

  • sfmike

    Worthless tech few if any will buy.

  • Nevets

    This has more than a whiff of the Rube Goldberg about it, with the glasses form factor edging ever closer and with the same functionality.

  • Only $25,000? This is the perfect gift for the next holiday season!

  • Popop

    "I'll take your entire stock!"