Yesterday, I opened up my brand-new $7 Google Cardboard. This nifty little device, made of unassuming cuts of cardboard, a few magnets, and some lenses, can turn any smartphone into a virtual reality headset, similar to the Oculus Rift (though at a MUCH lower price). While there are still only a handful of apps available, and you can’t use it for long before becoming a bit queasy, the potential of this device is incredible.
Most of the apps you find are what you might expect: riding a virtual roller coaster, taking a virtual tour of the Palace of Versailles, and even a rudimentary zombie first-person shooter. What really caught my attention, though, was an app called Orbulus. Users can upload 360° photos, called “photospheres,” which can be taken with any smartphone running Google Camera. If you step into one of these photospheres using Google Cardboard, it is as if you are standing exactly where the picture was taken. Last night, in the space of five minutes, I was looking at the glittering lights of the Eiffel Tower, then was transported to a tiny comic book shop in Tokyo. Each scene is incredibly immersive.
To me, this device, and really the future of virtual reality in general, holds great promise for science and science learning. A huge difficulty facing teachers and students alike is the foreign nature of the cell; it is simply too small to see and comprehend. Imagine, though, stepping inside a cell, looking up at a massive Golgi overhead, and watching as swirling enzymes crisscross before you in a bustle of productivity. The micro turned into macro through virtual reality. I get excited even thinking about it – imagine a child seeing the cellular world in its full glory – this is what inspires curious minds.
Beyond aesthetics, though, manipulation of objects in virtual space interests me. There are already a number of apps that allow you to view proteins in 3D from your smartphone (I use NDKmol). It should be a rather easy step to make apps like these compatible with Google Cardboard and other virtual reality devices. You could then see a protein, perhaps interacting with small molecules like drugs or antibodies, in fully rotational three-dimensional space.
However, this is a very hands-off, non-interactive approach. You might notice that Google Cardboard has a slot that leaves the phone’s rear-facing (outward) camera exposed. This may be to allow apps to combine virtual reality with reality, such as superimposing Google Maps directions directly onto your field of view (captured through that camera). It might also be used for motion tracking of your hands, so that you can interact with your virtual world using simple hand gestures.
So imagine being able to interact with a protein in virtual space. With robust, efficient code, you might even push and pull on individual residues and watch how the whole protein responds conformationally to your virtual mechanical stimulus, given its underlying chemical properties (i.e., charge and stereochemical configuration). This of course relies heavily on computational biology, which is still tackling what may seem to be the most basic of challenges: how a protein folds. Strides are being made, however, and using these ideas, you could probe a protein for its weak points and potentially find targets of inhibition or activation, all through virtual manipulation alone.
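To make the push-and-see-the-response loop concrete, here is a toy sketch (my own illustration, not any real app’s code): a “protein” reduced to a 2D chain of beads joined by springs. We pin one end, drag one bead as if grabbing it in VR, and let the rest of the chain relax toward a new shape by gradient descent on the spring energy. Real conformational modeling accounts for charge, sterics, solvent, and much more; this only illustrates the interaction loop.

```python
def relax(positions, pinned, k=1.0, rest=1.0, step=0.05, iters=2000):
    """Relax a bead-and-spring chain: beads i and i+1 are joined by a
    Hookean spring of rest length `rest`; beads in `pinned` stay fixed."""
    pos = [list(p) for p in positions]
    n = len(pos)
    for _ in range(iters):
        forces = [[0.0, 0.0] for _ in range(n)]
        for i in range(n - 1):
            dx = pos[i + 1][0] - pos[i][0]
            dy = pos[i + 1][1] - pos[i][1]
            dist = (dx * dx + dy * dy) ** 0.5 or 1e-9
            f = k * (dist - rest)              # Hooke's law along the bond
            fx, fy = f * dx / dist, f * dy / dist
            forces[i][0] += fx; forces[i][1] += fy
            forces[i + 1][0] -= fx; forces[i + 1][1] -= fy
        for i in range(n):
            if i in pinned:                    # held fixed, e.g. by the user's "hand"
                continue
            pos[i][0] += step * forces[i][0]
            pos[i][1] += step * forces[i][1]
    return pos

# Straight 5-bead chain; pin bead 0, drag bead 4 upward, relax the rest.
chain = [[float(i), 0.0] for i in range(5)]
chain[4] = [4.0, 2.0]                          # the virtual "pull"
relaxed = relax(chain, pinned={0, 4})
```

After relaxation, the free beads settle along the line between the pinned ends, each spring stretched slightly past its rest length: a crude picture of a structure accommodating a mechanical tug.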
These concepts are of course mere speculation, but the possibility is very real given the technology we have at hand. It may only be a matter of time before we can step into the cell, snag a whizzing protein out of the air, and unfold it to see how it is made.