DreamWorks Animation VR Out of Home


While DreamWorks is experimenting with both real-time and pre-rendered experiences for home consumption, they’re also looking to the world of what’s known as ‘out-of-home’ entertainment: experiences found on-site at places other than the consumer’s residence.

For instance, Mayoss says that, much like Disney, DreamWorks is working to bring VR experiences to the company’s forthcoming theme park which will be called—you guessed it—DreamPark. Details here were light, but it’s likely to take the form of an attraction which stimulates more than just the visual and auditory senses; motion will likely be a major component. This may even necessitate a proprietary HMD, for reasons outlined in a great guest article by Kevin Williams.

Beyond the theme park, DreamWorks is working on more portable (and probably promotional/marketing-type) experiences referred to as ‘DreamHouse’. Mayoss teased at SDC that the company was working on a “retail experience” that will launch this holiday season in eight malls across the United States. There’s no word on exactly what this will entail, but my money is on a promotional experience for the soon-to-be-released Penguins of Madagascar.

It’s great to see big studios validating the power of VR by jumping in with projects of their own. Will early wins in virtual reality provide DreamWorks Animation with a unique differentiator over rival Pixar? Only time will tell, but I can’t imagine Pixar is going to sit this one out for too long.



  • David Mulder

    I would expect a studio such as DreamWorks to especially embrace non-pre-rendered VR movies… but I guess I was wrong in that regard. I still think that could be a huge deal though.

  • Simon

    The video clip shows a single ‘rendered to sphere’ view which does not really tell us much.

    If we want a ‘correct’ VR view we’d need to have RGBD image data and then cut/correct/compensate to form left/right views. But then there are issues with parallax and occlusion, at least when looking at a single frame – perhaps there are algorithms for filling in details from adjacent frames.

    I think that ‘pre-rendered VR movie’ data probably needs a whole lot more ‘tweaking’ before it is actually displayed to the user, but that this tweaking is going to be a lot lighter (computationally) than rendering photorealistic scenes.

    Perhaps particular formats will have special tricks for mapping sections where the occlusion is significant; if you are rendering a scene with a nearby object (e.g. a dragon) you would be perfectly capable of rendering the section hidden behind the dragon to an ‘off screen’ section and using that to fix the occlusion. Say, add an additional 10% frame size to store these areas and a data table to map them… this would be a whole lot harder for live action though.
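The reprojection this comment describes is essentially depth-image-based rendering (DIBR). A minimal sketch of the idea, for a single scanline with made-up baseline and focal-length values (nothing here reflects any actual DreamWorks format):

```python
# Minimal sketch of depth-image-based rendering (DIBR) for one scanline.
# Baseline and focal length are illustrative assumptions only.

def synthesize_view(colors, depths, baseline, focal):
    """Reproject a scanline of RGBD samples into a shifted eye view.

    Pixels move horizontally by disparity = baseline * focal / depth,
    so nearer pixels shift further. Holes (disocclusions) stay None and
    would need filling, e.g. from adjacent frames or an off-screen section.
    """
    out = [None] * len(colors)
    out_depth = [float("inf")] * len(colors)
    for x, (c, z) in enumerate(zip(colors, depths)):
        disparity = round(baseline * focal / z)
        tx = x + disparity
        if 0 <= tx < len(out) and z < out_depth[tx]:  # nearest sample wins
            out[tx], out_depth[tx] = c, z
    return out

# A toy scanline: a near 'dragon' (depth 2) in front of a far wall (depth 10).
colors = ["wall", "wall", "dragon", "dragon", "wall", "wall"]
depths = [10, 10, 2, 2, 10, 10]
right_eye = synthesize_view(colors, depths, baseline=0.5, focal=4)
# → ['wall', 'wall', None, 'dragon', 'dragon', 'wall']
```

The `None` at index 2 is exactly the disocclusion problem raised above: the wall area revealed beside the shifted dragon was never rendered into this frame.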

  • Simon

    Also regarding image quality – many compression schemes allow great variance in the bit rate you ‘record’ at. If the encoder is smart it can use more bits for the important parts of the screen and less for the background.

    Not only is the scene pre-rendered, the encoding/compression could be heavily tweaked for screen content.

  • cly3d

    So they have a lat/long render of the scene and it’s marketed as “Super cinema”?
    How is this any different than what *everyone* is doing when producing CGI for S3D 360?

    Game designers / environment artists creating for 360 in CryEngine, for instance, are already creating 360 sets. Arguably CryEngine on a good machine approaches pre-rendered quality (at least comparable to some Pixar / Disney CG films)

    I acknowledge Pixar is the undisputed leader in CG storytelling, and each “scene” would need to be a complete environment (360 set)… but “super cinema” is just marketing/packaging, and not some breakthrough as I was hoping it would be on reading this.

  • It is not true that positional tracking is impossible with prerendered content. You can render and store light fields in a format similar to light fields captured by a light field camera, allowing positional tracking of the head within a certain volume and a correct stereoscopic view from any angle where both eyes are within the volume, with any chosen focus plane.

    360-degree light field video takes up even more space. We’d need something like Sony’s Archival Disc, with a read speed to match.
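A back-of-envelope estimate supports the storage worry. Every number below is an assumption for illustration (grid size, resolution, compression ratio); real light field formats vary widely:

```python
# Back-of-envelope estimate of 360-degree light field video data rates.
# All figures are illustrative assumptions, not any real format's numbers.

views_per_head_box = 5 * 5     # assumed grid of viewpoints spanning the head volume
width, height = 4096, 2048     # assumed equirectangular resolution per view
bytes_per_pixel = 3            # 8-bit RGB, before compression
fps = 30
compression_ratio = 100        # assumed aggressive video compression

raw_bps = views_per_head_box * width * height * bytes_per_pixel * fps
compressed_bps = raw_bps / compression_ratio

print(f"raw: {raw_bps / 1e9:.1f} GB/s, compressed: {compressed_bps / 1e6:.0f} MB/s")
# → raw: 18.9 GB/s, compressed: 189 MB/s
```

Even with heavy compression, ~189 MB/s is well beyond Blu-ray read speeds, which is why the comment reaches for something like Sony's Archival Disc.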

  • care package

    There are 360 VR videos on Youtube right now that would work just fine for a new cinema experience. Many have already seen the world of warcraft one. To try and make it to the point you can move your head around inside the environment would be insanely large for data and processing. Kind of a captain obvious thing really. Man, imagine the kind of data and processing real life must take…..

    Maybe they could just figure out a way to quickly stream only the field of view instead of trying to always render the whole sphere. What do I know.
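The savings from streaming only the viewport are easy to estimate. A rough calculation, assuming a 100-degree field of view and ignoring projection distortion and any guard band around the view:

```python
# Rough fraction of an equirectangular frame covered by one viewport,
# ignoring projection distortion and any guard band around the view.
fov_h, fov_v = 100, 100          # assumed headset field of view, degrees
fraction = (fov_h / 360) * (fov_v / 180)
print(f"viewport covers ~{fraction:.0%} of the frame")
# → viewport covers ~15% of the frame
```

So a naive viewport-only stream needs roughly a sixth of the full-sphere bandwidth; in practice viewport-dependent schemes stream the rest at lower quality rather than dropping it, so a head turn never shows a blank region.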