Today VREAL is taking the wraps off its VR livestreaming platform, which puts viewers inside the game, right next to their favorite streamers. The mashup of virtual reality, game streaming, and social interaction feels like something we've simply not seen before.

First of all, what is VREAL? At its most basic, it’s an immersive livestreaming platform for gamers. Like Twitch, but for VR.

But that's putting it far too simply. VREAL aims not only to let streamers effectively livestream VR games; it actually brings the audience inside the game using virtual reality. So, for instance, you might have a livestreamer playing Surgeon Simulator in VR while the viewers, wearing their own headsets, stand directly inside Surgeon Simulator, right next to the streamer. Viewers can even move around the environment to see the action from any angle as it unfolds.


The company says they’ll support the HTC Vive and Oculus Rift. If you don’t happen to have a VR headset, VREAL also streams output that can be viewed on a flat monitor (through existing streaming platforms), as well as 360 videos that are viewable on flat monitors or less powerful VR headsets like Gear VR or Cardboard.

There's also a social element. Viewers in the space can be seen and heard, if the streamer wishes. And viewers themselves can be visible and audible, or hidden and silent, to other viewers on the stream.

This approach from VREAL feels like something completely new. It has elements of VR, game streaming, and social interaction, but they’re stirred together in a complementary way that seems to open the door to a totally different form of interaction which blurs the line between the content creator and the viewer.

So how does it all work? Well, that’s VREAL’s secret sauce of course. What we know is that the platform isn’t merely streaming video footage of the virtual space to viewers. Instead, it is somehow syncing the environment of the streamer/host with that of the viewer. That means VR viewers are seeing real, crisp 3D geometry, and they’re able to move around the environment as if they were playing the VR game themselves.
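The details aren't public, but the general shape of the approach is familiar from multiplayer networking: rather than encoding video, the host can serialize each object's transform every frame and let viewers apply it to a local copy of the scene. Here's a minimal sketch of that idea; the wire format and names are invented for illustration and are not VREAL's actual protocol.

```python
import struct

# Hypothetical wire format (not VREAL's actual protocol):
# per-object update = object id (uint32) + position (3 floats)
#                     + rotation quaternion (4 floats) = 32 bytes.
RECORD = struct.Struct("<I3f4f")

def encode_frame(objects):
    """Serialize {object_id: (position, quaternion)} into one packet."""
    out = bytearray()
    for oid, (pos, quat) in objects.items():
        out += RECORD.pack(oid, *pos, *quat)
    return bytes(out)

def decode_frame(packet):
    """Rebuild the transform table on the viewer's machine, which
    then poses its local copy of the scene geometry accordingly."""
    objects = {}
    for off in range(0, len(packet), RECORD.size):
        oid, px, py, pz, qx, qy, qz, qw = RECORD.unpack_from(packet, off)
        objects[oid] = ((px, py, pz), (qx, qy, qz, qw))
    return objects

packet = encode_frame({7: ((0.0, 1.5, -2.0), (0.0, 0.0, 0.0, 1.0))})
restored = decode_frame(packet)
```

Because the viewer holds the actual geometry and renders it locally, head movement stays at full framerate and resolution regardless of network jitter, which would explain the "real, crisp 3D geometry" described above.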

Unlike traditional game streaming, VREAL requires game-specific integration through the company’s SDK. With plugins for Unity and Unreal, this may not be as big a hurdle as it sounds, provided the integration is as near to drag-and-drop on the developer’s end as possible. With VREAL hitting beta this summer, we’ll find out soon enough.



  • Bryan Ischo

    So either the game engine itself has to render two different viewpoints (one for the streamer, one for the viewer), or it has to duplicate the engine state to the viewer so that it can locally choose a viewpoint and render that. In the first case, a single extra viewer would double the rendering requirements, and each additional viewer would add one more full render. This would be quickly unworkable as the streamer’s system would be brought to its knees rendering the same scene many times per frame.

    The second case sounds really, really costly from a network bandwidth perspective as it would require the entire game state to be synchronized between streamer and each viewer. This would be workable for a handful of viewers (kind of like being a spectator in a Call of Duty match) but would not scale beyond that. Not to mention requiring dead reckoning and other latency hiding mechanisms that would all have to be built into the game engine to begin with.

    While this sounds cool, I think it’s technically infeasible. I would like to be proven wrong though.
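For context, the dead reckoning mentioned above is simple in isolation: between network updates, the viewer extrapolates each object from its last reported position and velocity. A toy sketch (illustrative only; real engines layer interpolation and correction on top of this):

```python
def extrapolate(last_pos, velocity, dt):
    """Dead reckoning: predict where an object is dt seconds after
    its last reported position, assuming constant velocity."""
    return tuple(p + v * dt for p, v in zip(last_pos, velocity))

# Last update: object at x=10 m, moving 2 m/s along x.
# 0.05 s later (between network updates) the viewer draws it near x=10.1.
predicted = extrapolate((10.0, 0.0, 0.0), (2.0, 0.0, 0.0), 0.05)
```

The hard part Bryan points at isn't the extrapolation itself; it's that the game engine has to expose enough state for this machinery to exist at all, which is presumably what VREAL's SDK integration is for.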

    • It depends on the game, of course, but I think it’s actually pretty reasonable. A position plus rotation is 28 bytes (three floats plus a quaternion), for example. Sending the position and rotation of 1024 objects every frame is about 28 KB. At 90 fps that works out to roughly 2.5 MB/s, or about 20 Mbit/s, which is in the same ballpark as a high-bitrate video stream. Realistically, more than position and rotation would be communicated, but most game engine state takes the form of floats, enums, or asset pointers (all small), and culling could reduce it to far fewer than 1024 objects every frame, so it seems pretty feasible to me. What’s more, you could knock the update rate down to 30 fps and it would still be fine, since the client would still be rendering local head tracking at 90.
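The commenter's numbers are easy to check; note that 28 bytes per transform at 1024 objects and 90 fps lands around 2.5 MB/s, i.e. roughly 20 Mbit/s:

```python
# Back-of-envelope bandwidth for transform streaming
# (the commenter's numbers: 28-byte transforms, 1024 objects, 90 fps).
BYTES_PER_TRANSFORM = 28   # 3 position floats + 4 quaternion floats
OBJECTS = 1024
FPS = 90

bytes_per_frame = BYTES_PER_TRANSFORM * OBJECTS        # 28,672 B, ~28 KB
megabits_per_second = bytes_per_frame * FPS * 8 / 1_000_000

print(bytes_per_frame, round(megabits_per_second, 1))  # 28672 20.6
```

That's heavier than a typical video stream but well within a home connection, and delta compression or culling would only bring it down.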

    • Sam Illingworth

      Yeah, the rendering has to be happening on the viewer’s side, but after transferring the assets I wouldn’t have thought the bandwidth requirement would be too high. It’s basically the same sort of data sent between client and server in a normal multiplayer game, just with a little more information about the player’s movement. Would the positional data for a VR player really be more than the data involved in, say, the movements of another player’s ship in Star Citizen?

      The first load though, that’s gonna be huge! Basically downloading the game. They must be doing something clever that we haven’t thought of, or how could they possibly make it work on mobile?

    • kalqlate

      They could reserve a limited number of spots that give spectators full mobility. After that, they could support a larger number of spectators who are limited to jump points that stream the same 360 view at a given point to all limited spectators. This could work well as there was a recent demo of a game that used jump points as the main in-game navigation for the player, and it worked very well.

    • OgreTactics

      Technically, there are tons of ways to do it, it’s very easy to implement multiple ways. The question here rather is how will they get the majority of developers or platforms to adopt the solution?

      • Bryan Ischo

        Assuming that each client will render their own viewpoint, then each playback would require everyone to start with the same initial state. The entire set of game assets is included in that state. How would it be possible for anyone to view playback of any game that they didn’t already own? How would it be possible for the player to easily and reliably find their game installation so that it could set up the initial state? How many games are out of the box capable of transmitting enough state to perfectly synchronize rendering between streamer and clients? How much does that synchronization cost per client even if all of the above hurdles were overcome?

        I do not believe you when you say that it is “very easy to implement multiple ways”, unless you mean “very easy to implement in multiple ways, none of which would actually be workable in practice”.

  • eh

    Sounds interesting. I could see this becoming a way for people to cheat in games. A spectator could communicate with a player and tell them if someone is around a corner or hiding behind something.

    • kalqlate

      With that, you could have games intentionally built with invisible “angels” and “demons”.

      • Kai2591

        NICE

      • brandon9271

        I always thought it would be cool to allow some NPCs in games to be real people. How this could work without A-holes running around teabagging everyone in the middle of your single player campaign I’m not sure. Have to limit what they can actually do. I guess you could give them limited movements and dialog-trees, etc to keep them in line.

  • CazCore

    this is basically just multiplayer with a focus on visible spectators, and no gameplay interactions.

    still a good concept that a lot of people will enjoy.

  • OgreTactics

    Great idea, and one I’ve never heard of before. This was bound to happen eventually, and they’re trying a first approach.

    I’m curious as to whether viewers actually have to own the game for it to be rendered on their PC, like in a multiplayer game, or whether the game is being streamed from the player’s PC, in which case I don’t really know how they will do it.

    But it’s clearly possible and a good idea.