I’ve written before about how virtual reality might well enable new and more powerful ways to re-live moments from the past, or just gently wallow in immersive nostalgia. The recent launch of nostalgia simulator MemoRift and the Transformers tech demo show that this is a clear desire, at least amongst developers of a certain age. When it comes to Star Wars nostalgia in VR though, pickings are surprisingly slim – with the now famous Trench Run demo by Riftcoaster creator Boone Calhoun remaining unreleased, where do Star Wars fans go for their Jedi fix in virtual reality?
Developer ‘DarthDisembowel’ clearly felt there was something missing as his Millennium Falcon demo for the Oculus Rift has just been released. The demo allows you to tour the sights of Han Solo’s ‘bucket of bolts’ via a lovingly modelled digital recreation. Peer through the cockpit, take a peek at the gun turrets and just generally hang out in this scoundrel’s lair.
DarthDisembowel said of the demo:
This is my first project for the Oculus. I have been most impressed so far with the “experience”-type submissions, particularly the Star Trek-themed ones. As a big Star Wars fan as well, I wanted to experience walking around inside my favorite spaceship, and I thought I’d share it.
I take no credit for the models, I only modified some free meshes that I downloaded. Special thanks to Sean Kennedy for the interior meshes, Al Meerow for the exterior, and Degardin Arnaud for the R2D2 mesh.
So a good example of VR nostalgia tourism fuelled by open source assets then. The demo is up for download via Oculus Share here.
Naturalistic input for users wishing to take that extra step towards full body immersion for their virtual reality experience is still some way off. You can see the pieces forming and coming together, and things are evolving quickly, but we’re not there yet. For now, the only real way to get all your limbs tracked and modelled in VR is to grab yourself a fully fledged motion capture studio.
VIVE (short for Very Immersive Virtual Experience) is a project by a team based at the Emily Carr University of Art and Design in Vancouver. The goal of the project was to create an untethered VR experience which tracks your body and limbs as you move freely through a space. The project and its application source code are being freely distributed to share the team’s work.
…so not exactly the kind of setup the average person is likely to have in their basement, but the results are undeniably cool. Motion data is captured at 120FPS for impressively fluid body-to-avatar mapping. The MoCap data is fed into Unity, which translates your presence through the virtual environment in realtime.
The system uses a Paralinx Arrow wireless HDMI transmitter and receiver to beam images to the Oculus Rift directly, leaving the user free to wander through the space unencumbered by a heavy backtop. The team have developed a custom interface to read data from Vicon, an industry-standard motion capture system, and convert the positional data into usable positions for Unity – in this case a customised version of Oculus’ SDK demo, Tuscany.
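The team haven’t detailed that interface beyond the note above, but the heart of any Vicon-to-Unity bridge is a coordinate-space conversion: Vicon streams right-handed, Z-up positions (typically in millimetres), while Unity expects left-handed, Y-up metres. Here’s a minimal sketch of that step – the field names and data format are our own assumptions for illustration, not the team’s actual code:

```typescript
// Hypothetical sketch: converting a Vicon-style marker position into a
// Unity-style position. Vicon typically streams right-handed, Z-up
// coordinates in millimetres; Unity uses left-handed, Y-up metres.
// Field names and the streaming format here are assumptions for illustration.

interface ViconMarker {
  name: string;
  x: number; // mm, right-handed, Z-up
  y: number;
  z: number;
}

interface UnityPosition {
  x: number; // metres, left-handed, Y-up
  y: number;
  z: number;
}

function viconToUnity(marker: ViconMarker): UnityPosition {
  const mmToM = 0.001;
  return {
    x: marker.x * mmToM, // lateral axis carries over
    y: marker.z * mmToM, // Vicon's "up" (Z) becomes Unity's "up" (Y)
    z: marker.y * mmToM, // swapping Y and Z also flips the handedness
  };
}

// Example: a head marker roughly 1.7 m off the capture-volume floor.
console.log(viconToUnity({ name: "head", x: 250, y: 1200, z: 1700 }));
// -> { x: 0.25, y: 1.7, z: 1.2 }
```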
As stated, the team have released their software and have even included a rather straightforward looking set of instructions. You know, just in case you really do have that killer MoCap system in your basement after all.
It’s another example of research that could one day inform systems that are available to us regular consumers. And before you scoff, cast your mind back a short 3 years and ask yourself if you thought VR would be where it is today.
At the most recent Silicon Valley VR meetup, Albert Kim, CEO of DoubleMe, demonstrated his company’s technology for creating realtime 3D models from a synchronized collection of 2D images. Subjects step into a small studio consisting of blue-screen walls and 8 inexpensive cameras. Capture computers take the synchronized video feeds and run them through a series of imaging algorithms to create a 3D model in realtime. What differentiates DoubleMe from other solutions is that it’s capturing motion, and it’s doing it using cameras instead of motion capture gear.
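DoubleMe hasn’t published the details of those imaging algorithms, but the first stage in most camera-based capture of this kind is separating the subject from the blue-screen in every synchronized view, before the resulting silhouettes are fused into a volume. A toy sketch of that keying stage, with a made-up threshold and per-pixel rule purely for illustration:

```typescript
// Toy chroma-key sketch: classify each pixel of an RGB frame as subject or
// blue-screen background. Real pipelines are far more sophisticated; the
// threshold and the per-pixel rule here are illustrative assumptions only.

type RGB = { r: number; g: number; b: number };

// Treat a pixel as background when blue clearly dominates red and green.
function isBackground(p: RGB, dominance = 40): boolean {
  return p.b - Math.max(p.r, p.g) > dominance;
}

// Produce a binary silhouette mask (true = subject) from a flat pixel array.
function silhouetteMask(pixels: RGB[]): boolean[] {
  return pixels.map((p) => !isBackground(p));
}

// With one mask per synchronized camera, a visual-hull style method can then
// carve away any 3D voxel that projects onto background in some view.
const frame: RGB[] = [
  { r: 30, g: 40, b: 220 },   // blue-screen pixel
  { r: 180, g: 140, b: 120 }, // skin-tone pixel
];
console.log(silhouetteMask(frame)); // -> [ false, true ]
```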
The results are pretty impressive, as Albert demonstrated for me:
DoubleMe is opening a San Francisco studio soon that will allow for free 3D modeling of you, your kids, or your pets. You can even create a 3D printout of yourself based on the model, presumably to use as an action figure. For more information, see DoubleMe.me.
Oculus VR today posted an official shipping update on the highly anticipated Oculus Rift DK2. The company confirms that the first batch of DK2s has left the manufacturing facility and is on the way to distribution centers. With pre-orders now at over 45,000, the company expects that some 10,000 units will be sent from the manufacturing facility this month.
Last week Google surprised the crowd at I/O 2014 with the reveal of Cardboard, a low-cost VR smartphone adapter which the company gave away to every developer in attendance. Google, which praises Oculus for “putting VR back into the media’s attention with an awesome device…”, recognizes the potential for virtual reality on smartphones and wants to kickstart mobile VR content with Cardboard.
While I was in Los Angeles for E3 2014, I had the opportunity to spend some time with the Kite and Lightning guys. They showed me a project called “Genesis” that took my breath away.
Project Tango is a project from the labs of Google that represents the cutting edge in realtime environment capture and modelling. The system, currently running only on dedicated hardware prototypes (codenamed ‘Peanut’), uses advanced depth sensors plus high resolution optical cameras to grab spatial and visual data from your environment. This data is fused with orientation and positional information to enable a spatially accurate representation of a captured environment.
A new video demonstrates just how cool this technology is in action. The user wanders the target environment, aiming the phone at areas to capture whilst monitoring the data collated in realtime. ‘Meshing’ can be paused at any time, with the captured data available for inspection and review. Once you’ve done your first pass, you can walk the environment again (the view adjusted using positional and orientation information) to fill any gaps in the mesh.
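Google hasn’t published how the meshing works internally, but the pause-and-resume, gap-filling behaviour shown in the video falls naturally out of accumulating world-space depth samples into a persistent spatial grid. A rough sketch of that idea – the voxel approach and names below are assumptions, not Tango’s actual implementation:

```typescript
// Rough sketch of pass-based environment capture: depth points (already
// transformed into world space using the device's position/orientation) are
// accumulated into a persistent voxel set, so a second walkthrough simply
// adds whatever the first pass missed. Purely illustrative; not Tango code.

type Point3 = { x: number; y: number; z: number };

class VoxelAccumulator {
  private occupied = new Set<string>();
  private paused = false;

  constructor(private voxelSize = 0.05) {} // 5 cm cells (assumed)

  pause(): void { this.paused = true; }
  resume(): void { this.paused = false; }

  addPoints(worldPoints: Point3[]): void {
    if (this.paused) return; // "meshing" paused: keep data, add nothing new
    for (const p of worldPoints) {
      const key = [p.x, p.y, p.z]
        .map((v) => Math.floor(v / this.voxelSize))
        .join(",");
      this.occupied.add(key); // revisiting an area only fills missing cells
    }
  }

  get cellCount(): number { return this.occupied.size; }
}
```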
The video comes from Ivan Dryanovski, a Research Assistant at the CCNY Robotics Lab (New York) who is lucky enough to be working with a prototype Project Tango device. The PhD student has published interesting work on 3D mapping techniques using micro-UAVs – an intriguing potential use for this technology: mapping remote environments using robotics.
The Project Tango launch video below provides a good introduction to the technology, should you not have seen it.
Some time ago we covered an event run by immersive experience specialists Inition that featured a very simple conceit: walk the plank in virtual reality and in reality, wearing an Oculus Rift VR headset. The resulting experience was extremely successful, and adding that additional physicality meant your mind was willing to accept (and fear) the virtual reality it was presented with that much more.
Codemodeon, another interactive experience specialist, this time based in Turkey, has taken that original concept and added a new twist: you’re an action star plucked from your movie theater seat and flung into the film you’d just been watching (a la 90’s Arnie flick Last Action Hero). What follows is a short on-rails ride culminating in an interactive, perilous walk across a plank many stories above ground to escape your fictional pursuer. The video above again demonstrates how the simple addition of physical cues, roughly matched to your virtual world, is a powerful way to add immersion – and, in the case of many of the participants, sheer terror!
A new version with positional tracking courtesy of the DK2 and an epic array of Petal VR Fans is quite clearly a must.
A new project by Polish open source software tutorial website mepi.pl aims to give you the opportunity to follow your own movements as if you were a 3rd person spectator on your own life.
The project rigs a pair of customised webcams to a motorised assembly that is currently controlled manually by the user. The assembly is strapped to a pole, in turn fixed to a backpack, raising the cameras a couple of feet above your head. The stereo view is rendered by an Intel-powered backtop, which in turn feeds the output to the wearer’s Oculus Rift. The effect is akin to playing a 3rd person game, with an elevated view presented to you as you move and look around – for this reason the creators have nicknamed the system a TPP, or Third Person Perspective device.
The project was conceived and produced as an entry to Intel’s ‘Make it Wearable’ competition – designed as a way to promote creative advances in wearable technology. We’d love to see a more advanced version of this which takes live positional input from the Rift and automatically adjusts the camera position accordingly – that would make for a truly surreal out of body experience, made possible only with virtual reality.
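Closing that loop would largely be a matter of mapping the Rift’s reported head yaw onto the rig’s pan motor. A hand-wavy sketch, assuming a hypothetical servo driver and a yaw reading in radians (neither is part of the mepi.pl project):

```typescript
// Hand-wavy sketch: convert a head yaw reading (radians, from the HMD's
// orientation sensor) into a servo command for the camera rig's pan axis.
// The servo pulse range and the driver interface are assumptions.

interface PanServo {
  setPulseWidthUs(us: number): void; // hypothetical servo driver call
}

// Map yaw in [-PI/2, PI/2] onto a typical 1000-2000 microsecond servo range.
function yawToServoPulse(yawRad: number): number {
  const clamped = Math.max(-Math.PI / 2, Math.min(Math.PI / 2, yawRad));
  const normalized = (clamped + Math.PI / 2) / Math.PI; // 0..1
  return 1000 + normalized * 1000;
}

function followHead(servo: PanServo, yawRad: number): void {
  servo.setPulseWidthUs(yawToServoPulse(yawRad));
}
```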
When the Kite & Lightning developers told me they had something to show me that couldn’t physically be demonstrated anywhere else, I was intrigued. Little did I know, upon arriving at their office, that I would be stepping into a contraption that’s been presumed to be a torture device, a jungle gym, and yes, even a “sex machine.”
We reported on MemoRift a little while back and admired its approach: providing users with a virtual reality room in which to experience their nostalgic applications.
We bring good news. MemoRift R1 is now available from their website and ships with a demo room and “several freeware \ shareware and public domain material”. We’ve not had a chance to get to grips with the new demo or its intricacies, but there are options to strap in your own emulators via the config file. We have included some screens above for your perusal, one of which demonstrates the neat idea of looping video mapped to an arcade unit as a performant way of showing an attract screen.
Let us know how you get on with the demo and any tips on config entries you’d like to share in the comments below.
Those who are old enough to remember VRML will recall that the technology rode the wave of virtual reality hype in the 90’s, fusing the burgeoning world wide web with a vision of dragging the Metaverse from the pages of fiction to consumer reality. The dream died when virtual reality as a technology failed to gain traction and the movement fell out of favour with technologists. The ambition behind VRML, however, lived on.
Now of course, virtual reality is experiencing its renaissance, growing stronger and faster and presenting the best chance yet of its technology and ideas finding a place in reality. Although VRML is all but dead, its successor, X3D, has actually made significant inroads into establishing standards for programmers to render accelerated 3D in browsers, but the public push to encourage adoption of these technologies has remained absent.
Vladimir Vukićević, one of the Architects of WebGL
Now it seems that the Mozilla Foundation wants to take VR’s new-found lease of life and kickstart the browser-based VR movement. In a recent blog post, Vladimir Vukićević – a well known graphics programmer who has worked extensively on accelerated 3D technologies for the web, including the formation of the WebGL standard – announced a set of new APIs to do just that. He writes:
There has been a lot of excitement around Virtual Reality recently, with good reason. Display devices such as the Oculus Rift and input devices such as the Leap Motion, PrioVR, SixenseStem and many others are creating a strong ecosystem where a high-quality, consumer-level VR experience can be delivered.
He also alludes to that reignited dream of realising the Metaverse:
The opportunity for VR on the Web is particularly exciting. The Web is a vibrant, connected universe where many different types of experiences can be created and shared. People can be productive, have fun and learn all from within their browser. It is, arguably, an early version of the Metaverse — the browser is the portal through which we access it. It’s not perfect, though, and lacks many of the “virtual” and “immersive” aspects. Given that, could we not expand the Web to include the immersive elements of a fully three-dimensional virtual universe? Is it possible for the Web to evolve to become the Metaverse that Stephenson envisioned?
He then goes on to explain that the new project aims to add support for virtual reality hardware devices so that programmers can present virtual reality content directly from a user’s browser. Beginning with experimental builds of the Firefox browser – the second most popular, behind only Google’s Chrome – the aim is to add these initial features:
Rendering Canvas (WebGL or 2D) to VR output devices
Rendering 3D Video to VR output devices (as directly as possible)
Rendering HTML (DOM+CSS) content to VR output devices – taking advantage of existing CSS features such as 3D transforms
Mixing WebGL-rendered 3D Content with DOM rendered 3D-transformed content in a single 3D space
Receiving input from orientation and position sensors, with a focus on reducing latency from input/render to final presentation
Simple example of Firefox’s VR API in action, courtesy of the tutorial at tyrovr.com
The core aim is to make whatever virtual reality hardware is connected transparent to the code behind the application when running VR-enabled content – enabling apps rendering with WebGL and Canvas to seamlessly fire up a view on whatever HMD is connected.
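For the curious, driving an HMD from a page in these early experimental builds looks roughly like the following – a sketch based on the names circulating around the experimental builds (getVRDevices, HMDVRDevice, the mozRequestFullScreen vrDisplay option), all of which are experimental and liable to change:

```typescript
// Sketch of the early experimental Firefox VR API: enumerate VR devices,
// find an HMD, and request fullscreen with the HMD as the output target.
// These names are from experimental builds and should be treated as
// assumptions, not a stable API.

declare class VRDevice { hardwareUnitId: string; }
declare class HMDVRDevice extends VRDevice {}

declare global {
  interface Navigator {
    getVRDevices?(): Promise<VRDevice[]>;
  }
  interface Element {
    mozRequestFullScreen?(options?: { vrDisplay?: HMDVRDevice }): void;
  }
}

async function enterVR(canvas: HTMLCanvasElement): Promise<void> {
  if (!navigator.getVRDevices) {
    console.log("This browser build does not expose the experimental VR API.");
    return;
  }
  const devices = await navigator.getVRDevices();
  const hmd = devices.find((d): d is HMDVRDevice => d instanceof HMDVRDevice);
  if (!hmd) return;

  // Rendering stays ordinary WebGL/Canvas; the browser presents the
  // fullscreen canvas on the connected HMD.
  canvas.mozRequestFullScreen?.({ vrDisplay: hmd });
}

export {};
```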
Browser support enabling easily accessible and extensible web content designed for virtual reality is a significant and important step. Bridging immersive technology with access to networked information steps us closer to realising an accessible Metaverse for all.
As we reported recently, Google fairly shook up the mobile and VR world a few days ago by unveiling a new venture designed to highlight and push virtual reality on its Android mobile operating system.
Google Cardboard is a low-cost VR viewer that uses your Android mobile phone to deliver 3D, VR-style experiences via specialised applications leveraging the new software toolkits launched alongside the Cardboard project.
Google has now posted the full Cardboard talk, which explains the web giant’s hopes and reasons for the project. The talk goes into detail on the origins of Cardboard, from rough concept to… slightly less rough final design. It also showcases some of the early demo applications produced or modified by Google to show off Cardboard, such as Google Earth and Street View, and highlights the APIs in Google’s VR Toolkit that let developers get started coding for Cardboard quickly.
It’s a really interesting talk, and a further affirmation that virtual reality’s presence is growing across industries.
Now, RiftAway has released a new version of the demo which allows the player to eject from the virtual ride mid-air, flinging your ragdoll avatar hundreds of feet from the ride whilst you peer on through your Oculus Rift.
Additionally, the world scale has apparently been tweaked, bringing your 3D view of the ride’s terror into line with your brain’s expectations. Such an enhancement will surely only heighten the demo’s power to demonstrate just how willingly your mind can accept that you really are on this ride from hell.
You can grab the new version of the demo here, and check out more of RiftAway’s other Oculus Rift demos here.
With the launch of Google’s Cardboard VR smartphone adapter at Google I/O 2014 earlier this week, the company hopes to kickstart VR development for Android. In addition to the Cardboard app, Google has pushed out an updated version of Google Maps which includes a VR mode for Street View.