The Oculus Team have published a detailed guide on Roomscale VR setup. This four-part series expands on the Experimental Roomscale Guide published before the launch of the Touch controllers, discussing sensors, bandwidth, host controllers and extra equipment.

While Oculus continues to recommend the two-forward-facing sensor configuration for Touch users and developers, many people are trying roomscale solutions, using two or three sensors to achieve 360 degree tracking. An Experimental Roomscale Guide was published before the Touch launch in December, and now the Oculus Team have taken a deeper dive into roomscale configuration, posting a four-part series on their official blog.

‘Tips for Setting Up a Killer VR Room’

Firstly, they discuss sensors. A single sensor achieves its best tracking up to 6 feet away; precision then drops until about 10 feet, where tracking is lost. Placing two sensors in the standard layout creates a combined tracking volume, increasing precision, with ideal tracking up to 10 feet away. Adding a third sensor improves things further, but the team suggests you’ll get the best results if you place all three sensors higher up the walls, angled down, as this avoids occlusion most effectively.

A fourth sensor is possible, but “can create more technical and performance issues than it’s worth,” the company says. Part One of the company’s roomscale guides further explores the Sensor’s tracking range and ideal layouts.
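The ranges above can be captured in a toy model (the 6 ft / 10 ft thresholds come from the guide; the quality labels and the "best view wins" combination rule are illustrative assumptions, not Oculus’s actual tracking math):

```python
# Coarse model of per-sensor tracking quality vs. distance, using the
# ranges from the guide (best up to 6 ft, degraded out to ~10 ft).
# Labels and the combination rule are illustrative assumptions.
def sensor_quality(distance_ft):
    if distance_ft <= 6:
        return "best"
    if distance_ft <= 10:
        return "degraded"
    return "lost"

def combined_quality(distances_ft):
    # With overlapping sensors, tracking is as good as the best view.
    order = {"best": 0, "degraded": 1, "lost": 2}
    return min((sensor_quality(d) for d in distances_ft), key=order.get)

# 8 ft from one sensor but 5 ft from another: still solidly tracked.
print(combined_quality([8, 5]))  # → best
```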

‘Balancing Bandwidth on USB’

Next up, something that most PC users never need to worry about: balancing bandwidth on USB. The combined data from three Oculus sensors has the potential to overload a USB host controller, which is why they recommend using two sensors on USB 3.0 and one on USB 2.0.

Not all motherboards behave the same way, but it is likely that more than two sensors plugged into either 3.0 or 2.0 will cause tracking issues. The team also doesn’t recommend using USB hubs for the sensors. Part Two goes over more details about the limitations of USB throughput with regards to Sensor usage.
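The two-on-USB-3.0, one-on-USB-2.0 advice amounts to a simple assignment rule, sketched here (the helper function is hypothetical; only the at-most-two-sensors-per-bus cap comes from the guide):

```python
# Sketch of the guide's rule of thumb: put at most two sensors on a
# USB 3.0 host controller, and overflow onto USB 2.0 rather than
# crowding one bus. Illustrative only; real behavior varies by
# motherboard and host controller.
def assign_sensors(n_sensors):
    layout = {"usb3": [], "usb2": []}
    for i in range(1, n_sensors + 1):
        bus = "usb3" if len(layout["usb3"]) < 2 else "usb2"
        layout[bus].append(f"sensor{i}")
    return layout

print(assign_sensors(3))
# → {'usb3': ['sensor1', 'sensor2'], 'usb2': ['sensor3']}
```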

‘Identifying Host Controllers’

If you’re experiencing bandwidth issues and want to know more about your host controllers, Part Three in the series explains how to access the information in Windows Device Manager. Once you’ve found the sensors, you can view devices ‘by connection’, allowing you to see the connected host controller, and its position in the hierarchy.

It may help to spread out multiple sensors across different host controllers, and the Device Manager allows you to visualise what is really happening when you plug the sensor into another random USB port at the back of your PC.

‘Extra Equipment’

Finally, in Part Four, they discuss extra equipment, such as extension cables and wall mounts for those who want to do something other than desk or tripod mounted Sensors. USB extension cables vary in quality and can cause problems, so the team has listed a few options that have worked for fellow enthusiasts.

One tip is that it might be a good idea to use USB 2.0 for a sensor that needs to be placed particularly far away from the PC, as the lower bandwidth tends to work more reliably with longer extension cables. Some PCI Express USB cards are also recommended, as these work around the potential host controller issues.

In addition, they recommend some USB and HDMI extension cables for the Rift headset itself, and suggest wall and tripod mounts from the Amazon marketplace, or going the 3D-printing route.

– – — – –

While most of this information shouldn’t be too daunting for a VR early adopter, when it comes to mainstream consumers it’s an apt illustration of why Oculus was reluctant to discuss roomscale, and how Vive’s approach to tracking is particularly well suited to roomscale VR.

This article may contain affiliate links. If you click an affiliate link and buy a product we may receive a small commission which helps support the publication. See here for more information.

  • Bryan Ischo

    Will this madness never end? Oculus, just suck it up and admit that Constellation is only appropriate for 180 degree forward facing seated experiences. And get to work on licensing lighthouse or even better coming up with something even better than lighthouse.

    • Sponge Bob

lighthouse is dead in the long term
reason: camera vs scanning laser beam
camera wins in the long run: more resolution and very high refresh rate

      btw, high-speed motion tracking with cameras is an entire industry completely separate from VR
      I haven’t heard of any lighthouse-based high-speed motion tracking used for movie production etc

      This said, Oculus apparently does a very poor job at present

      • Get Schwifty!

Very astute… glad to see someone else gets this…. there are some very good reasons for the use of cameras in the long run…. and Lighthouse really doesn’t scale well ultimately, though it’s a very good solution currently for small to medium areas and only for tracking.

They really need to hire some true experts in the field of high-speed motion-tracking camera work and some math whizzes to get the software kinks out… and a good visual USB tool…

        • hyperskyper

          Oh great! This again…

          • Get Schwifty!

            The truth doesn’t stop being the truth just because you were away LOL.

        • Sebastien Mathieu

lighthouses don’t scale well??? being USB-independent and practically a `dummy unit`, I thought it was easier to scale up with base stations than with USB-dependent cameras… can you elaborate?

          • Bryan Ischo

            Well not to answer for him (whoever he is I don’t know, he’s a blocked user for me), I have heard these arguments given against lighthouse with regards to “scaling”:

            There is a conjecture that constellation would scale to more tracked objects because it uses a different marker and tracking system than lighthouse. I do not fully understand these arguments but I think they go something like this:

            – You can more easily add LED emitters than you can laser sensing diodes because the LED emitters are basically “dumb” and require little electronics in the tracked device
            – You can more easily add LED emitters than you can laser sensing diodes because the LED emitter doesn’t need to send anything back to the base unit other than dumb light (no need for wireless)

            Consider that the Oculus equivalent of the new HTC tracking puck would just have LED emitters and a battery, with no need for wireless.

            However, consider also that such an Oculus tracking puck would be MISSING some features from the Vive puck such as the ability to transmit other inputs from the tracked device like trigger pulls etc. And to get those, which you’d almost always want in any device more sophisticated than a baseball bat, you’d still need to add wireless capability to the Oculus tracking puck.

            Another reason that is proffered for why Oculus tracking is more scalable is that you can theoretically track any number of devices in the same scene since they’re all just clusters of LED dots.

            I do not understand how this is any different from Vive though, which would just be clusters of laser tracking diodes, except that the Oculus solution doesn’t require every tracked device to be communicating via wireless to the headset/base station.

            Once again, this is really only a benefit for “dumb” tracked objects. However, there are definitely some pretty “dumb” tracked objects that are important – like feet, wrists, hips, etc. That being said, I would really be surprised if adding Lighthouse tracking for feet, wrists, hips, etc, was significantly more costly or complex than doing so for Constellation would be.

Another way that the Constellation solution could be argued to be more scalable would be in terms of tracking volume — theoretically, with a high enough camera resolution and bright enough LEDs, you could track arbitrarily far out in distance, whereas the Vive tracking solution sweeps lasers that would (possibly) lose coherence much more quickly at larger distances and thus suffer from a smaller maximum possible tracking volume.

However, Oculus’ inability to track even 10 ft by 10 ft well is some pretty good evidence that the practical realization of Constellation is not nearly as good as its theoretical possibilities, and given the technology involved, I personally doubt that it ever will be.

            I also am very skeptical that a Constellation tracking system could easily track hundreds of LEDs that are grouped into different objects with different orientations all occluding each other at different times. It sounds like an image processing nightmare to me.

Oh and let me add one more thing: one problem with lighthouse is that the two base units have to time-multiplex their laser sweeps — i.e. one must flash an LED, then sweep lasers, then the other must flash an LED, then sweep lasers … you can see that if you tried adding another lighthouse unit that is visible to the others, it could only work if each now uses 1/3 of the available time instead of 1/2. Add another and each is using 1/4. This means that increasing tracking volume by adding more lighthouse units implies reducing the tracking frequency of the whole system.

            One way around this would be to spin the lasers faster, but I don’t know if that then starts causing problems at the diodes which can no longer see the lasers going by that fast.
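The time-multiplexing arithmetic above reduces to a one-line formula: with n mutually visible base stations sharing the sweep schedule, each gets 1/n of the cycle. A quick sketch (the 60 Hz baseline is an assumed figure for illustration, not a measured Lighthouse spec):

```python
# Each mutually visible base station gets an equal time slice of the
# sweep schedule, so the per-station update rate falls as 1/n.
# The 60 Hz baseline is an assumed figure, not a Lighthouse spec.
def per_station_rate(base_hz, n_stations):
    return base_hz / n_stations

for n in (2, 3, 4):
    print(f"{n} stations: {per_station_rate(60, n):.1f} Hz each")
```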

          • Caven

With the next gen Lighthouses, they’re generating a single V-shaped beam instead of two separate lines, so conceivably they could double the number of Lighthouses without having to speed up the motors or reduce tracking frequency.

            Getting back to Oculus’ Constellation, despite what people may claim, the LEDs really don’t have the benefit of being “dumb”, as the system still needs to have some way of identifying which device is which. That means pulsed LEDs, which means needing circuitry that makes the LEDs pulse. It also needs to ensure that the IDs are unique, so that means being able to talk to the computer or other devices to avoid ID conflicts. Without doing so, at some point the LED clusters will make enough of a visual mess that the system won’t be able to reliably identify individual devices.
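The pulsed-LED identification described above can be illustrated with a toy decoder: each device blinks a unique on/off pattern across successive camera frames, and the tracker reads the pattern back as a binary ID (purely illustrative; Constellation’s actual modulation scheme is not public):

```python
# Toy pulsed-LED identifier: treat per-frame brightness samples as the
# bits of a device ID. Purely illustrative; the real Constellation
# modulation scheme is not public.
def decode_led_id(brightness_frames, threshold=0.5):
    bits = "".join("1" if b > threshold else "0" for b in brightness_frames)
    return int(bits, 2)

# Pattern on-off-on-on over four frames: binary 1011 → ID 11.
print(decode_led_id([0.9, 0.1, 0.8, 0.7]))  # → 11
```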

          • Get Schwifty!

My point about scaling (and I am not the first to mention this) has to do with what you perceive as scaling by just adding units. A solution that simply requires adding Lighthouse units ad infinitum is not a really good solution ultimately. The real problem, as Sponge points out, is that any mechanical solution requiring multiple units is inherently limited for very large spaces (and utterly incapable of integrating objects into an AR/VR application).

High-speed cameras allow much broader coverage of an area (the lasers used for Lighthouse are just not that strong in comparison, but very good for smaller spaces), and not using mechanical parts means they are more reliable over time as well. The LED approach currently used by Constellation will likely yield to a different design, but one still incorporating cameras. Possibly Lighthouse will evolve as well, and not so many units over a space will be required, but as it stands, the need for a mechanical rig to spin to produce light is great for the current application but begins to show problems the larger the number of units and area covered. Even if they did get rid of the mechanical aspects of Lighthouse, it still cannot provide the additional AR/VR advantages a camera does. I find it telling that no one else, and I mean no one, is going with a Lighthouse-style solution that I can think of; instead everyone is going camera-based… maybe there’s a good reason for that over the long haul?

Check this link out, which acknowledges strengths and weaknesses in terms of barcode reading… please note, however, that it points out one thing: imaging technologies and image processing are continuously improving – that is why AR/VR seems to be tied more to imagers than lasers down the line:

          • Sponge Bob

to be precise, a laser beam can track ONE object at a very large distance, but only ONE object
This tech was described here a while ago

If you need to cover the whole space with a scanning laser beam then all benefits are lost – the further away you go, the less intensity per square inch you get and the less time photodiodes have to respond

            PLUS, laser must be eye-safe !!!

            This last requirement practically kills laser tracking prospects for larger distances

          • Bryan Ischo

But that’s why you add base stations and split your tracking volume up into segments, each covered by a different set of base stations.

            I seriously doubt that the camera based solution is ever going to have the fidelity to track small flashing LEDs at long distances. Ever.

          • Sponge Bob

cameras scale up much more easily than lasers. period.

with the above example you would need 2 lasers per EACH player and EACH hand-held controller (that’s 6 in total per player)
– to track them over really long distances………

btw LED lights don’t have to be small – you can use diffusers to make them, e.g., the size of a ping pong ball (an excellent diffuser) or larger (put several high-output LEDs inside) – still a lot smaller and cheaper than those Vive tags

          • Bryan Ischo

            What? Why do you need 2 lasers per player? Do you realize that two lasers cover the entire tracked space no matter how many items are in that space? And with the single-motor version of the lighthouse base stations you could probably go to 4 lasers in the space, which would improve occlusion issues. That’s all you’d need no matter how many objects you have in the space.

            Please verify that you understand the basic principle of lighthouse technology, that the lasers don’t do the tracking, the lasers just provide a reference signal that the tracking diodes on the tracked devices use to do tracking.

            Theoretically you could go to more cameras and have even better occlusion avoidance, but that’s theory, I doubt it would work well in practice.

          • Sponge Bob

            Was talking about this tech:

            It does track ONE object 100m away, or further
            (don’t look in the laser beam though – wear your VR headset all the time)

          • Bryan Ischo

            You know, I had a feeling that you were trying to talk about that instead of Lighthouse. The discussion was about lighthouse. We’re not talking about prototypes of some unproven technology. We’re talking about how Lighthouse works and its merits when compared to Constellation.

          • Sponge Bob

But I did say that Constellation sucks at present
BTW there will be no wired USB connections to cameras in the near future:
cameras will have to do all image processing locally (on their own attached GPUs) and then just transfer XY angular coords wirelessly

          • chtan

You might want to read up on how the Vive works. True, a laser can track 1 object, but this is not how the Vive works. Its sole purpose is to light up the sensors in the HMD and whatever object has sensors that can sense the light. So this effectively voids your assumption that lighthouse can only track 1 object. In fact, it has been demonstrated that multiple Vive HMDs work with only 2 lighthouses, which is impossible for the Rift.

          • Get Schwifty!

            Chtan…. you are stuck in the moment…. there is no reasoning with you on this point…. if you can’t grasp that cameras offer more flexibility in the long run there isn’t much anyone can say. I guess your thinking is that every city will erect a giant Lighthouse base station and one on every corner so one can walk outside with their wireless Vive and controllers and play Pokemon Go while walking into walls because the Lighthouse system can’t tell you they are there or integrate the image in…

          • chtan

I know that in the long run cameras have potential, but what we are discussing is now, not the future. With this kind of low-bandwidth port and the tremendously high cost of high-speed cameras, it is not practical at all right now. If you want to discuss, please discuss in a context which is relevant, not fantasy. What if I tell you there will be a laser fast enough to scan an object in the blink of an eye in the future? Can it be used for tracking? Definitely, but how practical is it?

          • Caven

            If that were true with Vive tracking, then how do you explain the ability to use a single pair of Lighthouses to simultaneously operate two Vive headsets and the four controllers to go along with them? I know this works because for my hobby development I’m using a shared tracking space where this scenario has occurred on several occasions. My Lighthouses are still packed in the box, because the other pair of Lighthouses works just fine with my headset without any extra effort. The Lighthouses don’t care what’s in the tracking area, because they don’t talk to the devices in the tracking area.

          • Sponge Bob

            I wasn’t talking about Vive

            I was referring to

            That thing can track 100 meters away (don’t look inside laser though) BUT only ONE object

          • Caven

            Ah. Well, I’m skeptical of that particular laser tracking technology, but if those mirrors work well enough, a Lighthouse system that utilizes them in a way similar to television scanlines might be interesting.

          • chtan

They forget how much bandwidth is required to push through the USB cable that tethers back to the PC. LOL.
They start to fantasize and imagine things far beyond current-gen VR and USB limitations.
Just fix the Rift tracking now and stop bullshitting about how good the camera can be in the future.

          • Get Schwifty!

Uh, it’s not BS…. I mean people with far more experience in VR applications and technology than either of us have pondered various approaches…. there are good reasons why the camera approach, with its current issues, makes sense in the long run…. NOW…. what I would be in favor of, and I think may occur, is a hybrid system, basically combining the short-field benefits of Lighthouse-style tracking with a camera system to back it up …..

          • Get Schwifty!

            Very good point….

          • chtan

You are comparing products with different ranges. How much do you think a high-speed camera costs vs a lighthouse?
NASA sensors can still see the laser reflected from the mirrors placed during the Apollo missions; can your camera see that?

          • Get Schwifty!

Can the laser bring in the surrounding objects? No, didn’t think so, and that is the point… it’s about more than tracking…. why this unfounded bias towards Lighthouse? It’s great but it has limits, and potentially one fatal one in time… why this is an issue I can’t grasp… I plainly point out the flaws in Constellation today, why can’t people accept the limits of Lighthouse? It’s not perfect, nor the solution for every case….. no solution is, nor is any solution perfect…

          • Caven

            I wasn’t talking about more expansive areas. I was talking about overlapping coverage, along the lines of how Oculus is experimenting with 3 and 4 camera setups for room scale tracking. Simplifying how a Lighthouse operates could allow mounting at more than just two opposing corners, further eliminating occlusion issues.

For larger-scale purposes, lasers and scanners do each have their own benefits, as the article you linked indicates, though many of the factors listed aren’t relevant for room tracking, and I found the article slightly biased in favor of imagers. The ability to scan from a monitor is a theoretical benefit, but one that has left me disappointed in practice. In the course of my work, there have been a number of occasions where I needed to configure an imager-based barcode scanner, which required scanning a 2D barcode. With one exception, every time I tried to scan the barcode directly off the screen of my laptop the result was failure, no matter what size I displayed the barcode at. I would end up having to have the document printed and then scan the printed barcode.

As for the AR/VR advantages of a camera, I suspect that would be of more use if a camera was mounted to the HMD and could progressively see the room from multiple angles. Otherwise, I’d imagine a combined camera/laser system would be more useful for mapping a room, as the laser could be used to define regular points, which the camera could detect and process for room mapping. There are laser-based 3D scanners that do that sort of thing. An object is rotated past a laser projecting a line, and an offset camera can tell the changes in height based on the distortion of the laser beam.

            Finally, as for mechanical concerns, while moving parts are always an issue, solid state components are certainly not immune to failure. For instance, in my line of work I’ve had to replace a lot more laptop mainboards than I have laptop hard drives. And now that laptops have largely moved to SSDs, I’ve already had to replace a surprising number of them due to failure. Certainly not enough for me to distrust SSDs in general, but enough that I wouldn’t automatically trust every SSD to be superior to HDDs in terms of reliability.

            I have no problem with the idea of camera-based tracking completely overtaking Lighthouse-style tracking, but I think it’s going to take quite some time to get there, and once that happens, hardware cycles occur frequently enough that it’s probably not going to be a big deal to switch over when the time comes.

          • Scott C

As far as ensuring IDs are unique, they could be configured in a dumb device by an external switch or switches easily enough. Barely more component cost than the board that pulses them in the first place. A set of four DIP switches next to the battery gives you sixteen IDs, which, I believe, doubles the 8 addresses (so 7 devices plus the headset) that Vive’s Bluetooth PAN protocol currently maxes out at.

            As Bryan points out, if you start tracking feet, hips, and so on to get skeletal references, you run out of addresses on Vive quickly.

            Consider also that a future camera based tracking setup would infer the room layout for you, and that VR will be able to borrow heavily from research and advances being made for AR systems on the inside-out camera-based tracking front, and I see camera tracking outliving lighthouses in the long term.

          • Caven

            According to Alan Yates, they’ve tested 10 trackers on one PC, so there’s definitely not a hard limit of 8 devices. According to Nat Brown, there’s an arbitrary limit of 16 devices currently, allowing for 13 trackers in addition to the Vive controllers and HMD. It sounds like they could change that limit if desired, though the total number of USB ports available for tracker dongles would introduce its own problems.

Setting an ID manually on a “dumb” device would be possible, but that does introduce some minor initial inconvenience. There’s also the issue that pulsing LEDs as a means of identification requires multiple frames from the camera before being able to identify a device. Once an object is identified, it shouldn’t be a problem as long as it’s constantly tracked, but as soon as occlusion occurs, identification will have to happen all over again, and that could cause unpredictable behavior. Oculus could get around that the same way HTC and Valve do by using IMUs to keep the device updated while waiting for the next sensor refresh, but then you’re not dealing with a “dumb” device anymore. The Oculus Touch controllers have IMUs in them, as confirmed by the iFixit teardown, so the cameras are definitely not enough by themselves to ensure stable tracking.

            As it is, I really don’t see a whole lot of benefit to tracking inert devices–at least ones that are expected to be in motion. I had to laugh when HTC promoted the Vive trackers by depicting them attached to a tennis racquet, golf club, and baseball bat, because that sounds like a recipe for property damage right there. And while you get the realistic weight and feel of the objects in question, whiffing through the air still won’t be the same as actually hitting a ball with them, so I think the simulation benefits are limited.

            In the long run, I could also see cameras outliving a lighthouse-type system, but it’s going to take a whole lot of additional bandwidth and processing power to do it. If three 1.2 megapixel cameras can choke a person’s USB controller when using Oculus Touch, we’re nowhere near having the computer specs needed to do large scale tracking. And with so many people complaining about not having enough room for Vive roomscale, I don’t see large-scale tracking being a big selling point for home use. For commercial or industrial use that may be a different story, but they already have the luxury of using technology that’s not likely to find its way into the home.

          • Scott C

            Thanks for the correction on the addressing space. I thought it was 3-bit, not 4. I also agree on the broader point you make about tracking inert devices.

            I think that 2016 is likely to drive some interesting trends in the mid-range future (18 months to 3 years from now, probably) as far as hardware development goes. The bandwidth issues Oculus is running into are primarily motherboard and chipset manufacturing habits. USB 3 controllers are expensive and not in particularly high demand for the consumer market — previously the only real benefit over USB 2 has been external hard drives, and most users only need one of those.

Kaby Lake and the Z270 line of chipsets just came out, and were thus too far along in the pipeline to react to this, but I feel like the next major chipset revision we see from Intel is likely to make a concerted effort to step up the offerings on PCIe lanes and USB 3.1 support.

          • Get Schwifty!

            see below

        • Caven

          If three sensors can start causing problems with the Rift and Oculus considers four sensors to be more trouble than they’re worth, I don’t see how that’s any more scalable than Lighthouse. Sure, cameras have their advantages, but so far scalability does not appear to be among them.

          • Get Schwifty!

            see below about imagers vs lasers for barcodes

          • Sam Illingworth

I have nothing to add to the general discussion (I’m a Vive owner but I do suspect in the long term cameras will end up being better), but I’m wondering what barcode scanning has to do with it? The way lasers are (were) used in barcode scanning was nothing like the way they’re used in Lighthouse. If the barcodes themselves had sensors in each line then I’m sure laser-based systems would be just as good as modern camera-based systems.

          • Get Schwifty!

The barcode discussion is the best comparison of the issues and long-term directions of the two technologies that I found googling in the time I put into it; I am sure there are better discussions out on Reddit…

          • Sam Illingworth

            Don’t be silly. There’s nothing good on reddit! :P

        • Fredrik Pettersen

Trump is strong with this one. In what universe is a camera better than a laser? You clearly don’t have a clue what you’re preaching. For the love of science, can you please go and acquire a Vive already! That way we don’t have to see any more of your embarrassingly bad Oculus arguments. I have both headsets. They are both great products. But every sane person does realize that the Vive has superior tracking to what Oculus has. Which makes the whole experience much more enjoyable and exciting.

          • Sponge Bob

            static hi-res camera is better than space scanning mechanically rotated laser beam. period.

          • Bryan Ischo

            Very convincing argument! Also, “space scanning” is technically incorrect, apparently you did not read the article I linked to you. Your loss I guess.

          • Get Schwifty!

That’s the problem – you only understand it by comparing laser vs camera…. as an entire system it is more than that; simple minds fail to grasp the long-term direction of AR/VR.

Lighthouse is a *very* good system for small areas, and only if your goal is to track objects. Period. The camera solution, while creakier on the surface at first, offers a lot of advantages over time, such as bringing the user into the space, or other objects. The Lighthouse method cannot do this; it requires tacking Trackers onto everything and then generating the associated image inside the space by hand, which works fine for gaming in VR with limited objects, but is far less flexible a system over time.

          • chtan

Agreed, but how much do you think that system is going to cost? Definitely not now or in the near future, not until camera tech is cheap enough for the mass market. Let’s not forget that the current cameras used are low-res, low-FOV IR cameras which cannot even see the image properly.
Also, “lighthouse is good for small areas”? This must be the most hilarious statement I’ve seen until now. Just look at the trouble the Rift is having vs the Vive. That is enough to prove you wrong.
I personally use a lot of barcode scanners. I usually opt for laser. My second choice is IR and the last is a camera-based scanner. You know why? Cameras are too sensitive to ambient lighting conditions and very inefficient at reading the barcode correctly.

          • Get Schwifty!

Cost always heads down… that’s actually one of the good things about it… as the technology improves, costs also go down. When I say good for a small area, let me clarify – a 30’x30’ area is “small” if one compares it to what high-speed, high-resolution cameras might cover…. and we are talking about near-future developments, not today. I am very consistent in my claim that Lighthouse is the superior solution today, and for a larger area than Constellation….

      • Bryan Ischo

        More resolution? Are you sure you know what you’re even comparing? Lighthouse resolves to millimeter or sub-millimeter. How much more ‘resolution’ do you need?

        Very high refresh rate? Again, what?

        I suggest you read this page. It will help you quite a bit I think:

        • Sponge Bob

          I read this article before

the only conclusion of importance is that you really need 2 lighthouse base stations spaced apart from each other for good tracking, not just to prevent occlusion issues (the same goes for cameras, btw)
that part many people just don’t get

Camera-based high-speed tracking is a whole industry in itself
the update rate can be 1000 fps without any IMU data (but it will cost $$$$)

          • Bryan Ischo

            And how accurate do you think you could get with a lighthouse based system that costs as many $$$$ as the camera based systems you linked to? And how large do you think the tracking volume could be?

            I expect that you’ll start to see lighthouse based systems supplanting some of those camera based applications soon for some use cases.

          • chtan

            Not really; you can run with a single forward-facing Lighthouse without any problem, as long as you don’t turn 180 degrees away.

          • NooYawker

            The bottom line is that Vive tracking is superior to Oculus’s. Whether or not cameras will replace Lighthouse a few years from now doesn’t do Oculus users any good today.
            Secondly, the Oculus comes with one sensor, so if you want more it’s $100 a pop.

    • Pre Seznik

      What? I’m having zero issues at roomscale with Oculus. What sort of setup did you try that makes you say all that?

      • Bryan Ischo

        I bent down to pick something virtual up and lost tracking due to the low FOV of the cameras and the obstructing table edge.

        But moreover, read the article. The number of technical quibbles and issues that could bite you seems very long. The solution seems fragile, and in my experience, breaks down rapidly at the edges, both of the play space, and of the fidelity of your setup.

        • Pre Seznik

          You’re right about that, tracking is really bad the lower you go. Placing a sensor higher and at an angle alleviates that somewhat.

          • Get Schwifty!

            I have no issues with tracking to the floor if I place the sensors at least at eye level and angle them down…. I mean, it’s a cone more or less; if you don’t position the sensor for coverage, what do you expect?
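The cone intuition in the comments above can be sketched with a little trigonometry. All numbers here are illustrative assumptions (the sensor’s exact field of view isn’t stated in this thread); the point is only that raising a sensor and pitching it down moves the covered patch of floor toward the player:

```python
import math

def floor_coverage(mount_height_m, pitch_down_deg, vfov_deg):
    """Near/far edges (metres along the floor) of the patch of floor inside a
    camera's vertical field of view, for a sensor mounted mount_height_m above
    the floor and tilted pitch_down_deg below horizontal. Returns math.inf for
    the far edge when the top of the cone never meets the floor."""
    half = vfov_deg / 2.0
    lower_ray = math.radians(pitch_down_deg + half)  # ray aimed most steeply down
    upper_ray = math.radians(pitch_down_deg - half)  # ray aimed closest to horizontal
    near = mount_height_m / math.tan(lower_ray)
    far = mount_height_m / math.tan(upper_ray) if upper_ray > 0 else math.inf
    return near, far

# Desk height (1.0 m), no tilt, assumed 70-degree vertical FOV:
# the floor within ~1.4 m of the sensor is outside the cone entirely.
print(floor_coverage(1.0, 0, 70))

# Mounted high (1.7 m) and pitched 40 degrees down: the cone now reaches
# the floor from roughly half a metre out.
print(floor_coverage(1.7, 40, 70))
```

With no tilt, the first result’s near edge sits well away from the sensor, which is exactly the “lost tracking bending down near the desk” failure mode described above.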

          • Pre Seznik

            Well yeah, that’s what I said. By “higher” I mean eye level or above, because my setup is mostly between waist & eye level for sensors due to how my space is set up.

    • The reason they’re using and sticking with the cameras is that in the future the cameras will have expanded capabilities, such as scanning you one-to-one into the game. While you might experience better tracking with Lighthouse right now, its capabilities won’t really go much further feature-wise. It would be a disaster to change the system in future generations when all the previous-gen games and components were designed for a completely different system.

      Brendan Iribe:
      “We’re really big believers in optical tracking, in camera sensors. That is the bet that we’re making. And that’s the future of sensor tracking. If you look at things like the Kinect, or any of these different kinds of infrared structured light sensors, or any of the stereo camera sensors, they’re all based on cameras. And cameras continue to get better.”
      “If you want to see your full body in the game, if you want to see your fingers and your fingernails … not this generation, but, eventually, if you want to see all of that, that’s going to be done with camera sensors. That’s not going to be done with any other kind of sensor. That’s an optical sensor, and that’s the investment we’re making.”

      • Bryan Ischo

        If that technology is available in the future, then it should be shipped then. In the meantime, the superior solution available with current technology should be shipped now.

        If you think that full body tracking using cameras with good fidelity and a large tracking volume (that goes floor to ceiling, front to back, and edge to edge) is likely to happen in gen 2, then of course, that’s what we should all be using in gen 2.

        I hope that’s true. I guess only time will tell. But I do also hope that if gen 2 comes out and the cameras still are inferior, you don’t re-use the argument that they should have stuck with cameras because they were ‘the future’. They should be used when they are the superior solution, not before they are.

        • They’re sticking with camera technology because they believe it will be the superior technology in the future, and when that day comes, all of their legacy hardware and software will remain compatible without everything having to be rewritten or rendered obsolete because the devs have moved on.

          I have a Rift, and my only issue is that my sensors sit further back from the table edge, which occludes part of the floor. But that would happen were I to have a Vive base station in the same spot. I think their decision was wise. It seems to me like they are making careful and calculated decisions with their hardware, which is why the headset was designed better (with the exception of that stupid cloth covering, imo), and why they delayed the Touch controllers until they had refined the design and had a decent amount of software to support them.

          With the exception of not having quite as large a room-scale space as the Vive, the Rift+Touch is the superior overall VR experience. I’ve found that using the Rift with its full room-scale capabilities, I rarely even moved to the extents of the available area. All these games are being made to accommodate a relatively small area to maximize potential customers and compatibility.

    • CMcD

      I have the Vive as well as the rift, and room scale with the rift works perfectly for me. Now, I did get a third sensor so I should mention that first. But by placing my rift sensors higher and aiming them down (like the lighthouse sensors) I have perfect floor to ceiling tracking without any issues. The oculus touch controllers feel so wonderful that I would rather play EVERYTHING with the touch controllers that doesn’t REQUIRE the vive wands… which at this point most of my steam library has updated for touch controllers.

  • Get Schwifty!

    If they stick with Constellation as-is, they will have to create some kind of interactive tool to display the USB ports and indicate which ones to use to share the load…. Asking your regular home user to do anything along those lines without clear diagrams, leaving them just plugging things in until it reads green, will fail big time in the market….

  • OMG. It should just freakin work.

    • Get Schwifty!

      For the most part it does… the problem is the small percentage of users with three or more sensors, a configuration it was not originally intended for, having issues… but it makes incredibly tasty click-bait….

      • hyperskyper

        It’s not clickbait if it’s true. Roomscale on the Rift isn’t the plug-and-play experience it should be.

        • Get Schwifty!

          Small minds seem to lose track of the context… Oculus currently only officially supports one and two sensors reliably, and it does that very well… just because the Vive does full room-scale doesn’t mean the Rift must do it currently. They are working through adding a third sensor, and likely in time a fourth, for 360/room-scale. Currently there are issues with running three or more sensors for tracking, hence it is “experimental” on the Rift. Why is this so hard to grasp?

          There is no “it should be”; it is either ready to go or it is not as defined by Oculus, not you or anyone else.

        • Andrew Jakobs

          Neither is it on the Vive… but it’s a bit easier on the Vive.

    • David Herrington

      100% agree, this is ridiculous.

    • ✨EnkrowX✨

      Why? It wasn’t designed for this application.
      It was designed for 2 front facing cameras, which it does very well.

  • beestee

    Why not just make all of the cameras USB 2.0 since the extra bandwidth is evidently unnecessary? Or, could you use a USB 2.0 hub plugged into a 3.0 port in the PC to mitigate overtaxing the 3.0 controller?

    • Caven

      If the bandwidth wasn’t necessary, the cameras shouldn’t be able to overload a USB 3.0 controller. It sounds like any camera plugged into a USB 2.0 port will probably have to throttle back on framerate, resolution, or both.
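That throttling argument can be checked with back-of-envelope arithmetic. The resolution, frame rate, and bit depth below are hypothetical (the sensor’s actual stream format isn’t given in this discussion), as are the derated effective-throughput figures, but the orders of magnitude show why one uncompressed camera stream already exceeds USB 2.0 and why three of them crowd a shared 3.0 host controller:

```python
def camera_mbps(width, height, fps, bits_per_pixel):
    """Raw (uncompressed) video bandwidth in megabits per second."""
    return width * height * fps * bits_per_pixel / 1e6

# Effective throughput is well below the signaling rate in practice;
# these derated budgets are rough assumptions.
USB2_EFFECTIVE_MBPS = 280    # 480 Mbps signaling
USB3_EFFECTIVE_MBPS = 3200   # 5 Gbps signaling

one_camera = camera_mbps(1280, 960, 60, 8)   # hypothetical mono IR stream
print(one_camera)                            # ≈ 590 Mbps, over USB 2.0's budget
print(3 * one_camera)                        # ≈ 1770 Mbps on a shared controller
print(one_camera > USB2_EFFECTIVE_MBPS)      # True: a 2.0 port forces throttling
```

Under these assumptions, a sensor on USB 2.0 must reduce frame rate, resolution, or both, which matches Caven’s reading.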

      • Get Schwifty!

        Yeah I think that must be the case….

      • chtan

        This is why high-speed cameras are not practical this generation. The fanboys should stop bringing up fantasies about how wonderful the tech will be in the future. Get real; fix the existing Rift tracking problems.

  • I still feel like the CV1 is more like a DK3, with all the setup and troubleshooting needed to get it working, but I finally found the combination of camera positions, USB 3 ports, PCIe cards and cords to make it track better. Now I just need that late update to see if it addresses some weirdness it still has from time to time.

    Also, this time period seems like the HD-DVD vs Blu-ray fight, with the Vive appearing to be the better tracking solution, but not the best headset or controller. I’m starting to feel like outside-in will be the real solution in the end and will put these 1st-generation fixed-point tracking solutions to shame once we get it figured out.

    • Bryan Ischo

      I think you meant “inside-out”, not “outside-in”. It’s a confusion of terms that a lot of people make.

      “inside-out” tracking is where the thing “inside” the play space (the headset) does the tracking by looking “out” into the world.

      “outside-in” tracking is where the thing “outside” the play space (the tracking camera) does the tracking by looking “into” the play space at the headset/controllers.

      Lighthouse is a little weird in that it’s “outside-in” tracking in spirit (you need fixed components of the tracking system set up outside the play space that send a signal in) but “inside-out” in practice, in that the actual sensing is done on the HMD/controllers. However, it is missing the really differentiating aspects of true “inside-out” tracking in that it requires more than just the ambient environment to do tracking.

      Anyway, I agree with your basic point — once the HMD acts like our eyes and can figure out where we are by “looking” around, we’ll have the best solution. Except it won’t handle tracked controllers well … ooops …

      • Sponge Bob

        dude, can your eyes figure out where you are with 1 mm precision?

        if yes then you are a superman

        and controllers should be tracked relative to the HMD anyway, with sub-mm precision, not to some base station 5 m away

        • Bryan Ischo

          Hold on a sec … so your basic problem with my post is that I didn’t fully qualify what “once the HMD acts like our eyes” means so that people like you don’t have to actually think about what they’re reading?

          OK genius, I’ll spell it out for you then. Once the HMD is able to track its position using sensors on the device rather than outside of the device, much like we figure out where we are in the world using sensors on our body rather than outside of our body, we’ll have the best solution.

          Also, tracked objects are much more frequently occluded from the HMD than they are from numerous points outside of the play space, so I would expect this to be a real problem that is somewhat prohibitive in terms of HMD-relative controller tracking. But you know, I could be wrong about that, so I’ll give you that last point … maybe inside-out tracking can handle controllers well, maybe that’s not really a concern. Time will tell.

      • Yes, I had that term flipped. Controllers are always an issue; I feel like any vision-based solution isn’t the best, since it can always be occluded in some way from your headset or from outside fixed positions.

        I wonder what other technologies could be used, like magnetic fields that can pass through solid matter. Basically, have the headset track inside-out and position the controllers based upon its location plus the controllers’ position relative to the headset.
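The “headset location plus controller position relative to the headset” idea is just pose composition. Here is a minimal 2D sketch, purely illustrative of the math rather than of any shipping tracking system (planar poses only; real systems compose full 3D rotations):

```python
import math

def compose(parent, child_local):
    """Compose a parent pose (x, y, yaw in radians) with a child pose given
    in the parent's frame, returning the child's world-frame pose, i.e.
    world_controller = world_headset composed with headset-relative controller."""
    px, py, pyaw = parent
    cx, cy, cyaw = child_local
    # Rotate the child's offset into the world frame, then translate.
    wx = px + cx * math.cos(pyaw) - cy * math.sin(pyaw)
    wy = py + cx * math.sin(pyaw) + cy * math.cos(pyaw)
    return wx, wy, pyaw + cyaw

# Headset tracked inside-out at (2, 3), facing 90 degrees left; the controller
# is sensed 0.5 m straight ahead in the headset's own frame.
headset = (2.0, 3.0, math.pi / 2)
controller_local = (0.5, 0.0, 0.0)
print(compose(headset, controller_local))  # controller lands at ≈ (2.0, 3.5)
```

However the controller offset is sensed (optically, magnetically, or otherwise), this composition is how its world position would be recovered from the headset’s own pose.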

  • Jona Adams

    I’ve been doing room-scale with 3 sensors for a little over a week now, and it’s great: I can reach up above my head and all the way down to the floor with full, steady tracking. Even better than the Vive, when I had that at first launch. I’m using an Anker slim USB 3 hub that I purchased on Amazon.