Adaptive AR Design
To draw an analogy, it took web developers many years to develop reliable, practical design rules for getting a website to fit on screens of different shapes. And yet that seems like a simple task compared to adaptive AR design, which will need to work across a mind-boggling range of arbitrary environments spanning all three dimensions, rather than just a handful of common 2D screen sizes.
This is not a trivial issue. Even VR game design—which has years of practical development time as a head start—is struggling with a much more basic version of this problem: designing for varying playspace sizes. Generally, VR playspaces are square or rectangular and have nothing in them except the player. That's a walk in the park compared to the complications that come with AR—and yet it's still an ongoing challenge.
Consider: even for people living in identical apartment units, the arrangement of their furniture and the objects in their home is going to be completely unique. It is going to take many, many years for AR game design to evolve to understand how to create convincing entertainment experiences which can adapt to a seemingly infinite set of environmental variables—from floor plan to ceiling height to furniture arrangements and much more—across billions of different homes and buildings, not to mention wide-open outdoor spaces.
You might think it isn’t difficult to make a simple AR shooter where enemies will come from the only other room in someone’s one-bedroom apartment, but don’t forget that without pre-mapping the environment in the first place, the AR system wouldn’t even know that there is another room.
Let’s assume we’ve solved the object classification problem (discussed in the prior section), such that the system can understand the objects around you at a human level. How could a developer create a game that takes advantage of those objects?
Let’s consider a simple farming game where players plant and water augmented reality crops in their home using a real cup that pours AR water. Neat idea… but what if there’s no cup around? Does the game become useless? No… developers are smart… as a backup, let’s just let the player use a closed fist as a stand-in for the cup; when they tilt it over, water will pour right out. It’s foolproof!
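That kind of graceful degradation can be sketched as a simple prop-resolution step. The snippet below is a hypothetical illustration, assuming the AR system can hand us a set of classified objects in the scene; the function names and labels are made up for the example, not any real API:

```python
# Hypothetical sketch: pick the best available real-world prop for an AR
# interaction, falling back to a hand gesture when no suitable object is
# found. The scene/object model here is illustrative, not a real API.

FALLBACKS = {
    # interaction -> (preferred real-world props, gesture stand-in)
    "pour_water": (["cup", "mug", "pitcher"], "closed_fist"),
}

def resolve_prop(interaction, scene_objects):
    """Return ("prop", label) if a preferred object is in the scene,
    otherwise ("gesture", stand_in) so the game still works."""
    preferred, stand_in = FALLBACKS[interaction]
    for label in preferred:
        if label in scene_objects:
            return ("prop", label)
    return ("gesture", stand_in)

# A living room with a mug uses the real object...
print(resolve_prop("pour_water", {"sofa", "mug", "table"}))  # ("prop", "mug")
# ...while a cup-free room falls back to the closed-fist gesture.
print(resolve_prop("pour_water", {"sofa", "table"}))  # ("gesture", "closed_fist")
```

The point of the sketch is that the fallback chain is authored by the designer per interaction, so the game degrades predictably rather than breaking when the environment lacks an expected object.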
So now we move on to planting our crops. The American developer expects that everyone will have enough room to plant 10 rows of corn, but half a world away, half of Europe is cursing because their typically smaller living quarters won’t fit 10 rows of corn, and there’s no fourth bedroom to act as the seed shed anyway.
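One plausible answer is to treat the designer's layout as an ideal and scale it to the floor area the device actually measured. The sketch below makes that idea concrete; the dimensions and row sizes are illustrative assumptions, not real design data:

```python
# Hypothetical sketch: scale a designer's "ideal" crop layout down to the
# floor space actually available in a scanned room. All dimensions are in
# meters; the specific numbers are illustrative assumptions.

def fit_crop_rows(ideal_rows, row_width, row_length,
                  space_width, space_length):
    """Return how many crop rows fit the measured playspace,
    capped at the designer's ideal count."""
    if space_length < row_length:
        return 0  # not even one row fits lengthwise
    rows_that_fit = int(space_width // row_width)
    return min(ideal_rows, rows_that_fit)

# The designer's ideal: 10 rows, each 0.5 m wide and 2 m long.
# A roomy 6 m x 4 m space fits all ten...
print(fit_crop_rows(10, 0.5, 2.0, 6.0, 4.0))  # 10
# ...while a 2 m x 3 m flat gets a scaled-down four-row farm.
print(fit_crop_rows(10, 0.5, 2.0, 2.0, 3.0))  # 4
```

Even this toy version shows the design shift adaptive AR demands: content is specified as constraints and priorities to satisfy, not as a fixed layout to place.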
I could go on, but I’ll spare you. The takeaway is this: if we’re to escape from experiencing immersive AR only on blank floors and walls, we’ll need to design adaptive AR games and apps which utilize the actual space and objects around us, and somehow, through some very smart design, figure out how to manage the billions of variables that come with that.
While this challenge is perhaps the furthest behind of the three identified here, it’s one that can begin to be worked out today on paper, ahead of the future devices that will actually be able to deliver these experiences.
– – — – –
I’ve heard many people in the last year or so suggest that AR and VR are matched in terms of technological maturity, but the truth is that AR is several years behind where VR is today. AR is massively exciting, but from hardware, to sensing and perception, to design, there are still major hurdles to overcome before we can achieve anything close to the common AR concepts we’ve seen over the last decade. It’s an exciting time for AR, the field is still wide open, and the opportunity is ripe to break into the space with something that could move the entire sector forward. Get to it!