Virtual-reality applications give science a new dimension

… with some concluding notes on the virtual-reality breakthrough

26 May 2018 (Washington, DC) – Over the long history of human beings, we have had tools. These tools have been an extension of us. And they have been wonderful. They have made us more prosperous and inventive.

Yet we’re now embarking on a far different merger with our machines. These machines are intellectual. They filter the world for us. They shape the way we view reality. Indeed, they create a virtual reality. Soon these machines will be implanted within us. What’s wrong with this? Well, we’re not just merging with machines. We’re merging with the companies that run these machines – and that run them for profit. And here’s the existential threat: these technologies will change what it means to be human. Once we take this leap, it will be very hard to reverse course.

However, I tend to take these developments in baby steps.

Last week I was in Munich, Germany, exploring the world of virtual reality (VR). I was visiting Arivis, a life-sciences software company that has developed a VR visualization tool called InViewR. As I put on the headset, the outside world disappears. A cell fills my visual field, and as I crane my neck, I can see it from several angles. I stick my head inside to explore its internal structure. Using hand controllers, I dissect the cell layer by layer, excavating with a flick of the wrist to uncover tiny, specialized structures buried beneath the surface.

Looking at a cell in VR is “as close as you can get to touching” such a minuscule structure, says Sebastian Konrad, product manager for VR at Arivis. VR isn’t new, but interest in the technology has boomed since 2016, when several high-quality, relatively inexpensive commercial headsets – aimed mainly at gamers – reached the public and a handful of scientists began to adopt them. A similar surge has emerged in augmented reality (AR), a related technology that uses a see-through visor or smartphone screen to layer virtual objects on top of real surroundings.

Some scientists see VR and AR as more intuitive than conventional flat screens for viewing complex 3D structures. Others have turned to cheap headsets that use a smartphone as the display inside a pair of goggles, to increase public understanding of their work. Their numbers are relatively small: VR and AR remain niche tools for scientific research. Yet some researchers say that the technology has already provided new insights.

Adam Lacy-Hulbert is a principal investigator at the Benaroya Research Institute in Seattle, Washington, whom I met at this year’s Mobile World Congress, which featured an enormous and still-growing display of VR and AR products. He is particularly interested in lysosomes – structures that help to clean up the insides of cells – but he was perplexed by some of the 2D images he was getting from conventional microscopy. “It looked as if part of the lysosomes of the cell had moved into the nucleus, which didn’t really make sense to us.” But ConfocalVR, a tool developed at Benaroya that uses VR to visualize images from confocal microscopes, made what was really happening “jump out within seconds”, he told me: the nucleus was actually deforming and moving around the lysosomes. I also saw a VR tool for molecular visualization of proteins and other structures.
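
ConfocalVR itself is a dedicated headset application, but the underlying idea – treating a confocal z-stack as a volume you can move around in, rather than as a series of flat slices – is easy to prototype on an ordinary desktop. Below is a minimal sketch of that idea (not ConfocalVR’s own code) using the open-source napari viewer and the tifffile library; the file name is a placeholder for whatever multi-page TIFF your microscope exports.

```python
# Sketch: render a confocal z-stack as an explorable 3D volume on the desktop.
# Illustrates the general idea only; it is not ConfocalVR itself. Assumes a
# multi-page TIFF from the microscope ("lysosome_stack.tif" is a placeholder)
# and the open-source packages tifffile and napari.

import tifffile
import napari

# Load the z-stack into a NumPy array with shape (z, y, x).
stack = tifffile.imread("lysosome_stack.tif")

# Open an interactive viewer in 3D mode. Attenuated maximum-intensity
# projection gives a sense of depth similar to volume rendering.
viewer = napari.view_image(
    stack,
    ndisplay=3,                  # display all three dimensions at once
    rendering="attenuated_mip",  # depth-attenuated maximum-intensity projection
    colormap="green",
    name="confocal stack",
)

napari.run()  # start the event loop and hand control to the viewer
```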

Although inexpensive options are available, most visualization tools work only with the priciest headsets – such as Facebook’s Oculus Rift and the Vive from Taiwanese electronics company HTC – because those headsets can track the user’s head and handheld-controller movements in 3D space. Researchers and gamers have their preferences, but the differences between Oculus Rift and Vive are small; there is no clear winner at this point.

That said, not every tool is compatible with every headset. InViewR works only with Oculus Rift, whereas ChimeraX – a molecular-visualization package developed at the University of California, San Francisco – and ConfocalVR work with both. Oculus Rift and Vive both require the Windows operating system, although Vive is also compatible with macOS.

VR is computationally intensive, both because each eye must be shown a different image to produce the 3D effect, and because those images must refresh rapidly. In some cases a new graphics card will add sufficient computing power, but in general you will probably need a new computer. Oculus suggests VR-ready computers ranging from US$850 to nearly $3,100, and recommends at least 8 gigabytes of memory and a high-end graphics card.
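
To get a feel for the numbers: a first-generation Rift or Vive drives roughly 1,080 × 1,200 pixels per eye at a 90 Hz refresh rate, which works out to around twice the raw pixel throughput of a standard 1080p monitor running at 60 Hz. A back-of-the-envelope sketch (the panel figures are the commonly quoted ones, so treat the result as approximate):

```python
# Back-of-the-envelope: pixel throughput of a first-generation VR headset
# versus an ordinary desktop monitor. Resolutions and refresh rates are the
# commonly quoted figures for these devices, so treat the result as rough.

def pixels_per_second(width, height, refresh_hz, views=1):
    """Pixels the GPU must produce per second for `views` independent images."""
    return width * height * refresh_hz * views

# Rift CV1 / original Vive class: ~1080 x 1200 per eye, 90 Hz, two eyes.
vr = pixels_per_second(1080, 1200, 90, views=2)

# A standard 1080p monitor at 60 Hz, a single view.
monitor = pixels_per_second(1920, 1080, 60)

print(f"VR headset   : {vr / 1e6:6.1f} Mpixels/s")      # ~233.3
print(f"1080p, 60 Hz : {monitor / 1e6:6.1f} Mpixels/s")  # ~124.4
print(f"Ratio        : {vr / monitor:.1f}x")             # ~1.9x

# Roughly double the raw throughput -- before the extra headroom needed to
# hold a steady 90 frames per second, which is what really drives hardware cost.
```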

The VR software itself can also be expensive. ConfocalVR is free for non-profit organizations but not for commercial firms, and its developers declined to share pricing information; a competitor, ChimeraX, can cost up to $20,000, depending on the number of users.

For researchers who like to work as a team, the developers of ConfocalVR added in April the option for up to four users to simultaneously view, point to and grab structures in the same VR space. This could mean that scientists no longer have to meet face to face to work together, says Thomas Skillman, who leads ConfocalVR’s development at Benaroya, which would potentially reduce travel costs. The developers of both ChimeraX and InViewR are looking to add similar collaborative features in the future.

Compared with VR, visualization software for AR headsets is less advanced. I read an article about Mark Hoffman, chief research information officer at Children’s Mercy Kansas City, a hospital in Missouri, who has experimented with viewing proteins and computed-tomography (CT) scans using Microsoft’s HoloLens – a kind of visor with a built-in computer that projects 3D objects over the real world. The article notes that AR is more user-friendly than VR because users can see their surroundings and so are less prone to disorientation. Hoffman actually experiences motion sickness in VR – not an uncommon complaint – but says that in all his work with the HoloLens he has never been uncomfortable.

But my Munich contacts said the downside is that, whereas a VR headset envelops your entire field of view, the HoloLens projects objects only onto a relatively narrow rectangle in the centre of your vision. It’s part of the trade-off. AR is not completely immersive, but it is “an enabler to comprehension” and there may be things you can miss on a flat screen that become clearer in AR – protein–protein interactions, for instance.

Nature magazine also had a story about surgeons at Children’s Mercy who are exploring the use of AR to view CT scans of patients’ hearts before an operation. They are taking a step-by-step approach to making such data viewable on the HoloLens. A surgeon can explore the tissue by projecting it onto a fixed point in space – say, in the middle of the room – but if they turn their head, the image disappears and they see only what is actually there. They can walk into the ventricle or atrium of the heart and perhaps discover that, for a particular child, the entry point of a blood vessel is not where it would normally be. The HoloLens costs $3,000 and must be ordered directly from Microsoft, because it is not available in shops.

Cheaper headsets that use smartphones as the screen in a pair of goggles, such as the Samsung Gear VR or Google’s $15, ultra-simple Cardboard, can help researchers to reach a broader audience.

Biologists have also adopted Augment, an app normally used to illustrate how furniture might look in a room, to allow colleagues, students and members of the public to inspect 3D models of proteins through their smartphone screens.
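
Apps such as Augment generally expect a standard 3D model file rather than raw structural-biology data, so there is a small conversion step in between. As a very rough illustration of that step (not the workflow of any particular group), the sketch below pulls a structure from the RCSB Protein Data Bank and writes its alpha-carbon positions out as a bare-bones OBJ point cloud; a real pipeline would export a proper surface or ribbon mesh from a tool such as ChimeraX or Blender, and the PDB entry used here is just an example.

```python
# Rough sketch of turning a protein structure into a generic 3D model file.
# Illustrative first step only: it writes alpha-carbon coordinates as an OBJ
# point cloud. A real AR pipeline would export a proper surface or ribbon
# mesh instead. The PDB entry (1CRN, crambin) is just an example.

import urllib.request

pdb_id = "1CRN"
url = f"https://files.rcsb.org/download/{pdb_id}.pdb"

with urllib.request.urlopen(url) as response:
    pdb_text = response.read().decode("utf-8")

# ATOM records use fixed columns: atom name in columns 13-16, x/y/z in 31-54.
vertices = []
for line in pdb_text.splitlines():
    if line.startswith("ATOM") and line[12:16].strip() == "CA":
        x = float(line[30:38])
        y = float(line[38:46])
        z = float(line[46:54])
        vertices.append((x, y, z))

with open(f"{pdb_id}_ca.obj", "w") as obj_file:
    obj_file.write(f"# alpha-carbon point cloud for PDB {pdb_id}\n")
    for x, y, z in vertices:
        obj_file.write(f"v {x:.3f} {y:.3f} {z:.3f}\n")

print(f"Wrote {len(vertices)} alpha-carbon vertices to {pdb_id}_ca.obj")
```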

For researchers interested in creating their own visualization tools, Unity – game-development software from Unity Technologies in San Francisco – is one of the most commonly used development environments, and it runs on relatively modest hardware. There are scores of other options, but they require more development work.

Despite the broad proliferation of VR and AR tools in consumer culture, only a small minority of labs currently uses the technology, and it remains to be seen how many others will follow suit. Yet many advocates predict that VR and AR could become standard lab tools over the next five years or so. The technology feeds information to our brains in three dimensions, the way “a million years of evolution” intended, say my Munich friends. It requires an enormous amount of intellectual work to construct a 3D mental model from a 2D screen, they say. All that work goes away when you put on the goggles.

Virtual reality breaks through

2016 was the year in which VR finally broke through at the mass-consumer level. Users can now toggle between virtual, augmented and substitutional reality, experiencing virtual elements intermixed with their “actual” physical environment, or an omnidirectional video feed that gives them the illusion of being in a different location in space and/or time – and their insight that the experience is an illusion may not always be preserved.

Oculus Rift, Zeiss VR One, Sony PlayStation VR, HTC Vive, Samsung’s Gear VR and Microsoft’s HoloLens are just the very beginning, and it is hard to predict the psychosocial consequences over the next two decades, as accelerating technological development will now be driven by massive market forces rather than by scientists. There will be great benefits (just think of the clinical applications I have outlined above) and a host of new ethical issues, ranging from military applications to data protection (for example, the “kinematic fingerprints” generated by motion-capture systems, and avatar ownership and individuation, will become important questions for regulatory agencies to consider).

The real news, however, may be that the general public will gradually acquire a new and intuitive understanding of what their very own conscious experience really is, and what it always has been. VR is the representation of possible worlds and possible selves, with the aim of making them appear as real as possible – ideally, by creating a subjective sense of “presence” in the user. Interestingly, some of our best theories of the human mind and conscious experience describe it in a very similar way: leading theoretical neurobiologists such as Karl Friston, and philosophers such as Jakob Hohwy and Andy Clark, characterize it as the constant creation of internal models of the world – virtual neural representations of reality that express probability density functions and work by continuously generating hypotheses about the hidden causes of sensory input and minimizing the resulting prediction error.
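
To make that idea concrete: in its simplest form, predictive processing says the brain keeps an estimate of a hidden cause, predicts the sensory input that cause would generate, and nudges the estimate to reduce the mismatch. The toy sketch below is my own minimal caricature of that loop, with one hidden variable and a known generative model; it is not Friston’s actual free-energy formalism.

```python
# Toy caricature of predictive processing: keep a guess about a hidden cause,
# predict the sensory signal it would generate, and repeatedly nudge the guess
# to shrink the prediction error. A minimal illustration, not the full
# free-energy / hierarchical Bayesian machinery described by Friston.

def generative_model(hidden_cause):
    """What sensory signal do we expect, given our current hypothesis?"""
    return 2.0 * hidden_cause + 1.0   # assumed mapping from causes to signals

def perceive(sensory_input, initial_guess=0.0, learning_rate=0.05, steps=200):
    """Iteratively revise the hypothesis to minimize squared prediction error."""
    slope = 2.0                                 # derivative of the generative model
    guess = initial_guess
    for _ in range(steps):
        prediction = generative_model(guess)
        error = sensory_input - prediction      # prediction error
        guess += learning_rate * error * slope  # gradient descent on 0.5 * error**2
    return guess

# If the "true" hidden cause is 3.0, the world delivers a signal of 7.0.
estimate = perceive(sensory_input=7.0)
print(f"Inferred hidden cause: {estimate:.3f}")  # converges towards 3.0
```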

In 1995, the Finnish philosopher Antti Revonsuo had already pointed out that conscious experience is precisely such a virtual model of the world, a dynamic internal simulation, which in standard situations cannot be experienced as a virtual model because it is phenomenally transparent – we “look through it” as if we were in direct and immediate contact with reality. What is historically new, and what creates not only novel psychological risks but also entirely new ethical and legal dimensions, is that one virtual reality is becoming ever more deeply embedded in another virtual reality: the conscious mind of human beings, which evolved under very specific conditions and over millions of years, is now being causally coupled and informationally woven into technical systems for representing possible realities. Increasingly, consciousness is not only culturally and socially embedded, but also shaped by a specific technological niche that, over time, acquires its own rapid, autonomous dynamics and ever new properties. This creates a complex convolution, a nested form of information flow in which the biological mind and its technological niche influence each other in ways we are just beginning to understand.
