VR

Words Works Worlds & Warcraft by Yiliu Shen-Burke

A few weeks ago, I was a volunteer with the Kaleidoscope VR world tour during their stop in Berlin. It was an eye-opening experience, full of a-ha moments (both encouraging and disconcerting). As a VR-vangelist, it was great to see all these people excited enough about VR to pay thirty bucks and wait in line all night for a chance to try on a pair of goggles. As an architecture student, it pained me that they invited all these people to the massive glass atrium of the Jewish Museum in Berlin... just to cover up their eyes. (By the way, if you're reading this and are affiliated with Kaleidoscope: I never got that t-shirt? I want that t-shirt.)

As consolation to the volunteers, who would be too busy helping visitors untangle themselves from masses of wires to watch any of the content, we were given a couple hours before the event to see as many of the VR experiences as we could handle. This turned out to be a great perk—once the doors opened, huge lines formed for the Oculus and Vive, and not everybody even had the chance to use one. At least there were plenty of Gear VRs.

Unfortunately, the content I saw was mostly disappointing. In all fairness, despite Kaleidoscope's relatively high profile in the VR content world, it is only a couple of years old (like everything else in the industry). I'm sure that the quality of content from all production houses will only improve with time and experimentation. I completely get that this is an awkward phase of development that everybody has to go through.

But I needed to understand what I had seen. In the days and weeks after the event I tried to process some of my feelings of disappointment. Was it simply that VR has been over-hyped? Were my expectations unrealistically high? I don't think so. I was and still am confident that this is just a hump that needs getting over. But what is the nature of this hump?

It wasn't a question of production quality. Sure, there was clearly work from a range of production budgets on show. But Kaleidoscope has been fairly selective in their curation process, and nearly everything was well polished and well presented. Instead, I'm starting to realize that there are basic structural qualities of the medium of virtual reality that pose brand new challenges to content creators. Central to these challenges is the question of who the viewer is supposed to be in the virtual universe that she enters. I think this question is quite tricky to answer, and probably almost unique to VR.

First, the lay of the land. Currently available VR is dominated by three groups of content producers: filmmakers, journalists, and video game makers. Each of these industries has been well poised to jump into VR content-making, enjoying a first-mover advantage and bringing with it deep expertise in areas of knowledge hugely relevant to this medium.

All three of these industries rely more or less on narrative storytelling as a central mode of worldmaking (with some notable exceptions, such as Minecraft). Now, narrative is a powerful vehicle for the communication of complex, interconnected world states. For example, you can read on Wikipedia that "Human beings often claim to understand events when they manage to formulate a coherent story or narrative explaining how they believe the event was generated."

However, narrative also happens to be dependent on certain associated concepts, such as causality, sequentiality, revelation, events, progression, and teleology. Narrative makes a lot of sense for the media that these industries have heretofore worked with, but it is not clear to me that narrative is necessarily the most natural or effective way to make virtual worlds, especially given all this baggage that it comes with. It's certainly not the only way.

The difference between VR and the media that have preceded it is that VR places the viewer into the world space that the narrative is about. Observations of the events of the virtual world are now subordinate to the spatial identity and values of the viewer, rather than being carefully framed and timed by an editor. Yi-Fu Tuan puts it very succinctly in Space and Place when he writes that "The human being, by his mere presence, imposes a schema on space." Space that is occupied by a human body acquires hierarchies of value—simplistically, high and low, front and back, left and right. 

On another level, space cannot be anything but such agglomerations of distinctly valued dispositions, since no human has ever experienced space outside of her body. Two-dimensional media obviate the need to deal with this question by collapsing all inclinations into the frontal one. Films are presented on a screen in front of the viewer, and the same applies to books, photographs, websites, non-VR video games, and so on. This way, everything in front of you is of central narrative importance (by which I mean, of sole causal agency within the narrative world), whereas events that occur elsewhere are distractions.

Of course, content creators have already realized this distinction about virtual reality, and are quickly developing answers to this question. They make sure to space out the timing of events so that viewers have a chance to realize they're happening. They use clever tricks such as audio cues to make sure that viewers don't miss those events. Such cues are aided by the fact that the viewer is often helplessly placed at the 'center', so she can line up her spatial axes with those of the scene simply by facing the correct direction. Once freedom of movement is granted to the viewer, this problem becomes much more difficult, requiring frequent and redundant audio and visual cues to orient the viewer the right way at the right time—something that video game designers perhaps have the most experience with.

Moreover, inserting the viewer into the virtual universe creates the implicit potential for a causal relationship between the viewer and the events she witnesses. This is a relationship that very few narrative media have had to address before (murder mystery dinner theatre, anyone?). To gloss over this relationship feels disenfranchising at best, and violent at worst. By its nature, narrative wants to show you things: events that lead one to the other, connecting the dots to form a larger picture. It has something to say and already knows how it wants to say it. But in VR, without causal agency, the viewer is left with the awkward philosophical question of why she is there in the midst of the unfolding story in the first place. Is it really only to observe?

Eugene Chung, founder of Penrose Studios and co-creator of the Oculus Story Studio, has acknowledged the challenge that VR poses to storytellers. Through his experience producing short films and experiences for the Oculus Rift, Chung has come to realize that there seems to be an inherent incompatibility between presence and narrative storytelling. In this article he writes, “A high quality VR experience… has the potential to deliver Presence. However, this poses a challenge for VR storytellers—a challenge that can be captured in another simple phrase: Presence and Storytelling are in conflict with each other.”

Chung describes the conflict as arising from the competition between Presence and Storytelling for the viewer’s attention.

“When we truly engage with a story, we begin disengaging with the physical stimuli around us that aren’t germane to the narrative… If someone in the theatre sneezes or if a cell phone goes off, we are jolted out of the experience.”

“When we’re experiencing things in reality—when we’re fully present—rarely is our brain engaged in the same way as when we’re told a story… We experience the sights and sounds as a present individual, but we don’t feel like we’re being told the story of our [experience] outside of our own bodies.”

“A similar thing happens in VR. Presence means we’re viscerally transported to another world, but because we inhabit this other world so completely, it is difficult to tell a story in the classic cinematic, theatrical, or campfire sense. To enhance storytelling, we might conduct tricks such as darkening the stage… but this consequently decreases the sense of Presence.”

“While Presence and Storytelling are not necessarily inverse functions of each other, they appear to be in conflict. The deeper question that this conflict brings up is the question of Point of View—who are we supposed to be in the VR experience? This identity question is a lot harder to answer than on first inspection.”

This is a hard question indeed, and I have the feeling that the narrative mode of world building places unnecessary constraints on the kinds of answers VR content creators can come up with. For example, how about discarding some of the associations with causality, sequentiality, revelation, events, progression, and teleology that I brought up earlier? These concepts all serve to force the viewer to the periphery of the action, even as VR places her sensorially at the center.

The sum effect of these challenges to narrative storytelling in virtual reality was that watching these films and experiences often felt oppressive, not liberating. I acutely felt the reality—that somebody had strapped a screen to my face and was forcing me to watch their particular version of a sequence of events—rather than the virtual reality—that I had been magically transported into a place other than my physical location.

Perhaps those best equipped to deal with this problem currently are the game makers, since player agency is such a central component of game design. Although even the most critically acclaimed video games still rely heavily on progression through a good story, gamers know exactly who they are within the universe of the game, and they have a body with spatial agency that is fully under their control (usually to allow the pointing of large weapons). Recently, open-universe games such as Minecraft and No Man's Sky have appeared, which attempt to do away with narrative altogether, granting full agency to the player to explore, build, destroy, make friends, make enemies, or simply be bored.

I think there is great potential in this non-narrative mode of worldmaking for virtual reality. I also think that architects are perfectly positioned to contribute to the conversation at this point, because the spatial design disciplines have had to deal with the problem of designing worlds for itinerant users since... forever. Buildings and cities can suggest patterns of usage, but architects can only establish the general conditions of possibility for the stories that unfold within and between these spaces, not the narratives themselves. Perhaps this way of worldmaking is less predictable and less precise than the narrative mode, but it is certainly no less powerful, especially over the long run.

Therefore, the reasons I am fascinated by virtual reality are the same reasons I am fascinated by architecture. The creators of virtual worlds will have to learn to grapple with the same hard questions that architects and landscape architects and urban designers have asked themselves for centuries. But that chain of reasoning also suggests the reverse: that spatial designers, who have already developed the vocabulary and mental models and tools necessary for non-narrative world building, are a natural population from which the creators of virtual worlds should emerge. Just as the invention and popularization of the touchscreen turned graphic designers into user interaction and user experience designers, I see the invention and popularization of consumer virtual reality turning spatial designers into virtual spatial designers.

So, grab your tub of Purell wipes, and let's go—the future is waiting!


Visit Your Design (Virtually)! A Rhino-to-Rift Guide by Yiliu Shen-Burke

Oh hi there.

If you've had any form of internet access over the past year, you probably know that VR is like, gonna be HUGE. The first consumer version of the Oculus Rift starts shipping at the end of this month, and after that it's only a short, slippery slope to this:

Let's say you're an architectural designer who's super excited about the coming subsumption of physical reality and want in on the virtual action ASAP. In VR you can build whatever the hell you want, without pesky clients and regulators telling you annoying things about budgets and code! But how to get started?

I spent the last week figuring this out, and unfortunately it really did take that long to get a basic workflow down. VR is still in its 'installation phase', which means that there's a lot of jostling and confusion out there about the hardware and software. Hopefully things will calm down soon, but for the moment, you do have to be prepared to spend some time setting this up.

If you follow this guide and don't run into any unanticipated hurdles, it shouldn't take more than a few hours of tinkering to start running around inside your Rhino model.

Ready to go? Let's go.


WHAT

Hardware

  • MacBook Pro 15" with Retina Display (mid-2012)

Oculus doesn't officially support Macs because of their inability to drive the headset at a constant 75+ frames per second in many video games. This leads to nausea and headaches, which are indeed bad. If you are a designer, however, chances are good that you use a Mac (because you like pretty things). Luckily, static architectural models appear to be far less taxing to run than games, so I've been able to maintain 75 fps on my rig even in very large environments.  Part of developing this workflow was the trial-and-error necessary to determine which combination of (mostly older/deprecated) software and drivers still works with Mac hardware. If you have a Mac with an NVIDIA graphics card at least as capable as mine (650M), you should be fine. I have no idea how Macs with only integrated or AMD graphics cards will fare, although you are more than welcome to swap tales in the comments.

  • Oculus Rift Development Kit 2

I don't know if the DK1 can be used with this workflow, but maybe? The consumer version that's coming March 28 will probably not work because it will probably require (the as-yet-unreleased) Oculus Runtime 1.0, which will probably be incompatible with any Mac laptops.

Software

  • Oculus Runtime 0.6.0.1 for Windows

The key constraint on the software side of things is that Oculus Runtime 0.7 and 0.8 will not work on a MacBook Pro. That means that we have to use Runtime 0.6.0.1 instead, which therefore means that we have to use Windows 7 or 8 and Unity 4. Frowny face.

  • Windows 8.1 Pro (via Boot Camp)

I've tested this successfully with both Windows 7 and Windows 8.1. I would really not recommend trying to run this with Parallels/VMWare because of potential graphics performance and driver issues. If you are on Windows 10, you officially need to use Oculus Runtime 0.7 or 0.8 instead of 0.6.0.1, but (again) neither of those newer runtime versions will work on a MacBook Pro. Actually, there does seem to be a way to get Runtime 0.6.0.1 working in Windows 10, but you have to have upgraded from Windows 7 or 8... otherwise it will break your computer like it broke mine that time I tried it. Frowny face.

  • Rhino 5

Or any other 3D modeling software that can export .obj mesh files, like SketchUp or Blender.
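
If you end up exporting repeatedly (see Step 6 below, where you'll re-export over the same .obj as your design changes), you can script the export from Rhino 5's built-in Python editor. Treat this as a minimal sketch: the file path is a placeholder, and the two _Enter presses assume Rhino's default OBJ export and meshing prompts, which may differ on your setup.

    import rhinoscriptsyntax as rs

    # Script Rhino's -Export command to write the current selection to an
    # .obj file. The path below is a placeholder; the _Enter presses accept
    # the default OBJ export options and meshing parameters.
    if rs.SelectedObjects():
        rs.Command('-_Export "C:\\VR\\model.obj" _Enter _Enter')
    else:
        print("Select the geometry you want to export first.")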

  • Unity 4.6.9 for Windows

This is the latest version of Unity that I can successfully use with this workflow. Unity 5 has lovely native support for VR, but sadly requires Oculus Runtime 0.7 or 0.8. Again, frowny face.

  • Oculus Unity 4 Integration 0.6.0.0

For whatever reason, version 0.6.0.1 of the Unity 4 Integration package has an OVRPlayerController component that doesn't jibe with this setup, and leads to a black screen on the DK2. Don't worry if this last sentence doesn't make any sense to you yet; just use version 0.6.0.0 instead of 0.6.0.1 of the integration package.


HOW

Because my target audience here is architects and architecture students, I will pretend that you already have Windows running on Boot Camp, with Rhino (SketchUp or your 3D modeler of choice) set up. If you are using Windows 10... you'll have to decide whether you are willing to downgrade to Windows 7 or 8 for the sake of the future. No pain/no gain, two steps forward/one step back, etc. Or you can try that hack for running Oculus Runtime 0.6.0.1 on Windows 10.

First, let's get your DK2 working properly.

Step 1: Download and install the latest NVIDIA drivers for your Mac and version of Windows.

In my case, this is version 362.00; it's possible that newer versions of the NVIDIA drivers will break this setup, so if you install newer drivers and your DK2 isn't working, then try reverting to version 362.00.

Step 2: Install all four of these C++ redistributables (yes, really, all four).

It's possible you already have some or all of these installed on your computer, especially if you are running Windows 8+. Don't worry; the installer will detect this and leave well enough alone.

Step 3: Download and install version 0.6.0.1 of the Oculus runtime for Windows; restart to complete installation.

Step 4: Get your DK2 and the positional tracker all plugged in and set up.

Here's a handy guide from Oculus on how to do so.

Now that Oculus Runtime 0.6.0.1 is installed on your computer, there should be a little eye-con (get it?) in the notification tray. Click on this and then on "Rift Display Mode"; in here, make sure your Rift is in Direct Display mode.


Click on the eye-con again, and open the OculusConfigUtil; fiddle with the settings and such. Use the Demo Scene to make sure your Oculus is working properly.


Your hardware is all good to go! Now, this is how you turn your Rhino model into a standalone, VR-enabled app.

Step 5: Download and install Unity Editor 4.6.9 for Windows.

Step 6: Watch and follow Nathan Melenbrink's great three-part video tutorial on properly importing your Rhino model into Unity 4.

A few words on this part of the process:

  • The videos run about an hour in length, and the actual process of porting your Rhino model into Unity can take many times longer, depending on how detailed your model is geometry/material-wise. Happily, if you make changes to the Rhino model, you can simply re-export over the .obj file that Unity references, and things will mostly work out okay.
  • Don't add the FirstPersonPlayerController object like Nathan tells you to, because it won't work with your DK2 (more on this later).
  • Don't build your project for the web player (more on this later).
  • Make sure that all mesh face normals face "out" in Rhino. The easiest way to ensure this is to only use closed polysurfaces in your model, but this isn't necessary if you check mesh normals carefully (Nathan covers this; see the sketch after this list).
  • Make sure that every mesh representing anything you don't want people to walk/fall through (like the terrain or walls) has a Mesh Collider component added to it in Unity (Nathan also covers this).
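
Checking normals by eye gets tedious in a large model. Here's a minimal sketch, again assuming Rhino 5's built-in Python editor (rhinoscriptsyntax), that unifies the face normals of every mesh object in the document. Note that unifying only makes each mesh's normals consistent with one another; for open meshes, you should still spot-check the direction with Rhino's Dir command.

    import rhinoscriptsyntax as rs

    # Unify the face normals of every mesh object in the document, so that
    # no faces render inside-out once the model reaches Unity.
    meshes = rs.ObjectsByType(32)  # geometry filter 32 = mesh objects
    if meshes:
        for mesh_id in meshes:
            rs.UnifyMeshNormals(mesh_id)
        print("Unified normals on %d mesh(es)." % len(meshes))
    else:
        print("No mesh objects found; run _Mesh on your polysurfaces first.")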

Step 7: Now that your Rhino model is happily in Unity, download and import version 0.6.0.0 of the Unity 4 Integration package.

You'll want to unzip the downloaded file and throw the folder somewhere easy to reach, like the desktop.

Then, in Unity, go to the dropdown menu item "Assets" -> "Import Package" -> "Custom Package...". Navigate to the folder you just unzipped, go into the folder "OculusUnityIntegration", and select "OculusUnityIntegration.unitypackage". When prompted, import all the assets, which will appear in a new folder inside your project called "OVR".

Step 8: Add the OVRPlayerController component to your scene.

In your Project tab, click on "Favorites" -> "All Prefabs", and find the "OVRPlayerController" component.


Drag this into your scene and move it around until it is where you want the player to start when the application launches. Make sure that the bottom of the OVRPlayer object does not go through the ground/floor, because if it does, your player will immediately fall through the floor and keep falling... forever!

Step 9: Build your project for Windows.

Once your meshes and player controller are all set, press Ctrl + Shift + B or go to "File" > "Build Settings..." to bring up the build settings dialogue.


First, click on the "Add Current" button to add your scene to the build list. Then make sure you are making a "PC, Mac & Linux Standalone" application, that your "Target Platform" is "Windows", that your "Architecture" is "x86" or "x86_64", and that none of the boxes are checked.

Lastly, click "Build" (not "Build And Run") and point to the folder where you want your finished application to appear. Hit "Save".


DONE!

To visit your newly created virtual pocket universe, simply make sure that your DK2 is plugged in, then double click on the standalone .exe that Unity built. Note that there might be a long moment of black between the "Made With Unity" screen and when you enter your world.

The basic controls are:

  • WASD or arrow keys to move
  • Q and E keys to rotate your 'body'
  • Space bar to bring up the debugging HUD, where you can check if your computer is delivering a consistent 75 fps to the headset

Thanks for making it all the way to the end of this (admittedly lengthy) tutorial. At the moment, unfortunately, there is not a more streamlined way to do this, nor does it look like there ever will be for currently existing Macs (unless somebody writes a program that partially or fully automates the above process).

If you check back in a few days, I will have posted some tweaks you can make to improve your virtual experience, like lightmapping, jumping, automatic rotation, etc.

Happy VRing!

- Yiliu