The term virtual reality, for me, used to conjure images of individuals blocked off from the outside world by large headsets. Even the most compelling experiences I had early on with VR—in which I was able to see and empathize with people in vastly different circumstances from my own—were still solitary. Increasingly, VR is interactive, with remote collaboration tools that capture gestures and immersive games that can be played by multiple people over a distance. Perhaps due to the headgear and associations with popular combat games, though, I thought of virtual reality as the opposite of intimacy.
This shifted during a recent trip to San Francisco, where I visited an installation called Visual Voice Virtual Reality (VVVR), created by Ray McClure and Casey McConagle. The installation is currently housed at the Gray Area Foundation, where Ray is an artist in residence. The setup is minimal: two meditation cushions on the floor and, between them, a sensor that uses infrared LEDs to track the movements of head and hands. The small, grey-carpeted room reminded me of my graduate school office. My expectations started off low.
But when I sat on the cushion and put on the Oculus VR goggles, I was transported from that cramped grey room to an infinite expanse. Intensely colored circles, triangles, and more complex geometric shapes floated above me. Some shapes emanated from my avatar, and some emerged from the avatar of my partner, who sat on the cushion several feet away. The shapes were generated by whatever sounds we uttered.
We interpreted the shapes moving between us as flowers. Purple, yellow, and orange flowers floated toward me as he vocalized, and vice versa. It seemed intuitive to send each other flowers by singing. They changed in shape and color based on the volume, pitch, and duration of our sounds. I am no singer, but here I felt like Lana Del Rey. Some of my sounds were ethereal, and others resonated like deep yogic chants. The sounds were as beautiful as the shapes.
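The kind of mapping at work can be sketched in code. The actual VVVR implementation is not public, so everything below—the function name, the feature ranges, and which vocal feature drives which visual attribute—is an illustrative assumption, not a description of how the piece really works:

```python
# Illustrative sketch only: one plausible way to map simple vocal
# features (volume, pitch, duration) to attributes of a floating shape.
# The real VVVR mapping is unknown; all names and ranges are assumptions.

from dataclasses import dataclass


@dataclass
class Shape:
    size: float    # driven by volume
    hue: float     # driven by pitch, in degrees on a color wheel
    length: float  # driven by how long the sound is held


def voice_to_shape(volume: float, pitch_hz: float, duration_s: float) -> Shape:
    """Map one vocal sound to one shape.

    volume:     normalized loudness in [0, 1]
    pitch_hz:   fundamental frequency (roughly 80-1000 Hz for voice)
    duration_s: length of the sound in seconds
    """
    # Louder sounds make bigger shapes (clamped to the valid range).
    size = 0.2 + 0.8 * min(max(volume, 0.0), 1.0)

    # Spread the vocal pitch range across the color wheel.
    lo, hi = 80.0, 1000.0
    t = min(max((pitch_hz - lo) / (hi - lo), 0.0), 1.0)
    hue = 360.0 * t

    # Longer sounds leave longer trails, capped at 5 seconds.
    length = min(duration_s, 5.0)
    return Shape(size=size, hue=hue, length=length)
```

For example, `voice_to_shape(0.5, 440.0, 1.2)` yields a medium-sized shape whose hue sits partway around the color wheel—a toy stand-in for the flowers described above.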
That’s because our voices were digitally enhanced. As the microphone in the Oculus picks up our sounds, the software distorts and delays them. Words get garbled. These delays and distortions made conversation all but impossible. We gave up on forming sentences and found other, more primal ways to connect through sound.
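The delay effect described here resembles a feedback delay line, a standard audio technique in which each sound is mixed with decaying echoes of itself. As a minimal sketch—whether VVVR works exactly this way is my assumption, and the parameters are illustrative:

```python
# Minimal feedback delay line, a standard audio effect.
# Whether VVVR uses exactly this is an assumption; parameters are illustrative.

def apply_delay(samples, delay_samples=4410, feedback=0.5):
    """Return a copy of `samples` in which each sample is mixed with a
    decaying echo of the signal `delay_samples` earlier.  At a 44.1 kHz
    sample rate, 4410 samples corresponds to a 100 ms delay."""
    out = list(samples)
    for i in range(delay_samples, len(out)):
        out[i] += feedback * out[i - delay_samples]
    return out

# A single impulse produces a train of decaying echoes:
# apply_delay([1.0, 0, 0, 0, 0], delay_samples=2, feedback=0.5)
# -> [1.0, 0, 0.5, 0, 0.25]
```

Layered on top of speech, echoes like these smear syllables into one another, which is why sentences became impossible to follow while sustained tones remained beautiful.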
The interaction that Ray designed is intended to be subtle and nondirective. We made simple sounds and, with those sounds, sent one another colored shapes. The shapes don’t do anything different if they hit each other; there are no points or gimmicks to motivate particular types of interaction. Ray and his collaborator Casey didn’t want to create a game and resisted anything that would steer people to vocalize in a particular way. In this way, VVVR is very different from the shooter games, like Gears of War, that are typically built with Unreal Engine, the platform on which VVVR was developed.
Ray has observed a kind of vocal attunement among some of the couples who have used it. They start making sounds that are similar or complementary. He described first daters who found a connection beyond pleasantries, a long-standing couple who got so lost in resonances that he had to kick them out after 30 minutes, and other cases, like a woman singing alone across from her silent, skeptical partner, where the installation surfaced relationship tensions.
Ray imagines VVVR being used for a kind of speed dating. He envisions a darkened room where participants would not see each other but would later get feedback on their matches, based on vocal harmonization and similarity in breathing patterns. The colored shapes in VVVR currently glow when the two people using it make sounds with similar frequencies, and this may expand to include visualizations of other musical relationships. He also sees possibilities for it as something that might help people with autism or speech impediments, or as a form of couples therapy.
There is something liberating about being in an environment that dramatically alters visual identity and distorts voice to the degree that words are difficult to decipher. It frees us from norms of self-presentation and the habits of conversation, inviting exploration with sound to connect.
Read more about enhancing relationships with tech in my new book, Left to Our Own Devices.
For more on empathy and VR, see Jeremy Bailenson’s Virtual Human Interaction Lab at Stanford and Chris Milk’s talk, “How Virtual Reality Can Create the Ultimate Empathy Machine”.