
We're halfway through 2016, and it's certainly gearing up to be the year that consumer-ready virtual reality makes its presence known. As with all new technology, there is a certain amount of (understandable) confusion regarding what it is, how it works, and why it's going to change the future of education, healthcare, gaming, and computing. Those are topics that can be tackled in a separate post, but for now, there's an elephant in the room that needs to be addressed! I will no longer stand idly by while my colleagues, Facebook friends, and fellow bloggers write about 360-degree videos as if they were virtual reality experiences.
Let's clear this up right now...360-degree videos by themselves DO NOT constitute virtual reality. Don't get me wrong – they can be incredibly entertaining and fascinating to look at...but it takes more than a 360-degree video to create a true virtual reality experience.
Google defines virtual reality as "the computer-generated simulation of a three-dimensional image or environment that can be interacted with in a seemingly real or physical way by a person using special electronic equipment, such as a helmet with a screen inside or gloves fitted with sensors."
We see the embodiment of this simulation in VR headsets like Samsung's Gear VR, Oculus Rift, HTC Vive, and Zeiss VR One. When you put on the headset, you're not just watching a video that can be panned around on a screen; rather, you're immersed in it. In the case of the HTC Vive and other "room-scale" solutions, you're even able to physically walk around within it! Now – the lines do blur a bit when we start to talk about watching static 360-degree videos in a VR headset. One might argue that, based on the definition above, this scenario still does not qualify as "Virtual Reality," since it is not necessarily three-dimensional and we do not interact with it in any physical way other than turning our head to see a different part of the image. While this is true, the argument can be made that this sort of implementation is right on the cusp of what we should consider a true VR experience, for the following reasons...
- It is immersive and viewed via a headset.
- We are at least minimally interacting with it in a 'real or physical way' (see the sketch just after this list).
- It can be three-dimensional (stereoscopic), depending on the type of camera used to capture the footage.
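
To make that "minimal interaction" concrete, here's a rough sketch – plain Python with made-up frame dimensions, not any particular player's code – of what a 360-degree video viewer actually does with your head movement: it simply picks a different window into the same flat, pre-recorded equirectangular frame.

```python
# A minimal sketch (hypothetical values, not any specific player's API) of how
# a 360-degree video viewer maps head orientation onto an equirectangular frame.

FRAME_WIDTH = 3840    # example: pixels spanning the full 360 degrees of longitude
FRAME_HEIGHT = 1920   # example: pixels spanning the 180 degrees of latitude

def view_center(yaw_deg, pitch_deg):
    """Return the (x, y) pixel at the center of the viewer's current view.

    yaw_deg:   head rotation left/right, in the range -180..180
    pitch_deg: head tilt up/down, in the range -90..90
    """
    x = ((yaw_deg + 180.0) / 360.0) * FRAME_WIDTH    # longitude wraps across the width
    y = ((90.0 - pitch_deg) / 180.0) * FRAME_HEIGHT  # latitude spans the height
    return int(x) % FRAME_WIDTH, int(y)

# Turning your head just slides the viewport to another part of the same
# pre-recorded frame; nothing in the scene itself responds to you.
print(view_center(0, 0))   # looking straight ahead -> (1920, 960)
print(view_center(90, 0))  # looking 90 degrees right -> (2880, 960)
```

That's the entire extent of the "interaction" – the video plays out exactly the same no matter what you do.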
So now you’re probably thinking... "360, 3D...what’s the difference?" In layman’s terms, 360 means it surrounds you. 3D means it has depth, like looking at a landscape, you’ll notice that there are objects closer to you, and objects that are further away. An image can be 360 and not 3D, or 3D and not 360, but keep in mind the distinction. Shooting footage in 3D requires a camera(s) with stereoscopic lenses (one lens for your left eye, and one lens for your right). You see – humans perceive depth due to the fact that there’s a bit of space between our left and right eye. Each eye receives a slightly different image of the world around us because of this fact. So it follows that in order to perceive scenes in virtual reality with depth (3D), each eye must receive a slightly different image.
The takeaway here is simple. VR is intended to be much more than static 360-degree videos. It has to be multi-sensory, engaging several senses at once, AND it has to be immersive. As time passes, the bar will be raised fairly quickly. The experience of simply watching a 360-degree video in a VR headset will soon feel like watching standard-definition television on a 4K TV. Haptic feedback via gloves and other wearables, super crisp ultra-high-definition headsets, and true "room-scale" experiences will become the norm. Only then will we see the true impact of what VR can and should be. For now, let's just agree to stop calling the 360-degree videos we watch in our newsfeeds virtual reality.