We’ve seen how NASA recreates the vacuum of space right here on Earth, but what about the gravity of space? What about the forces of inertia? When large objects move and behave so differently, how do you train for a mission so you know what to expect when you get there? Like this.

We visited the lab where NASA creates incredibly intricate virtual worlds, then blends them with real life and robots in the most advanced virtual reality we’ve ever seen. Virtual reality that puts Oculus to shame. A freakin’ robot that simulates the physics of space. And yes, a jetpack.

Gizmodo’s Space Camp is all about the under-explored side of NASA. From robotics to medicine to deep-space telescopes to art. All this week we’ll be coming at you direct from NASA’s Johnson Space Center in Houston, Texas, shedding a light on this amazing world. You can follow the whole series here.


NASA’s Johnson Space Center (JSC) in Houston, Texas, is home to the agency’s Virtual Reality Lab, which arguably provides the most high-tech training astronauts receive on Earth. In it, they will encounter DOUG. DOUG is not a person; it’s an acronym for the Dynamic Onboard Ubiquitous Graphics program, an impeccably detailed rendering program that models everything on the space station, from the decals to the fluid lines and electrical lines.


According to James Tinch, NASA VR Lab manager, “Anything that the crew might see outside on the Space Station we model in here so that when they go outside they feel like they’ve already been there, because they’ve experienced it here in the virtual world.”

Inside the VR headsets, crew members get a 3D virtual representation of the ISS. The VR Lab staff can put them anywhere on the ISS, whether that’s in the U.S., Russian, Japanese, or European segment. That means that if they’re working on a pump unit, they’ll get a real perspective on what their workspace will look and feel like. If they’ll be working while riding on the end of the station’s robot arm, they can experience that (virtually) as well.

This gives them a chance to see whether the worksite can accommodate their needs, to experience realistic viewing angles from within their spacesuits, to confirm they’re in the right place on the ISS to replace the part that needs fixing, and to see if the hand tools they’re using will work as intended. The mission will be adjusted and rewritten, and tools and components will often be rebuilt, based on what they learn in the virtual environment.

Making space on Earth

When you’re suited up in the VR rig, you are outfitted with a pair of sensor-packed gloves. This setup tracks your own hand movements so you can see them in your virtual world. The gloves also have force sensors in the palms, so you can close your hands to grasp an object (a tool, for instance, or a handrail), and your virtual hands will close around it. The tracking isn’t articulated at the individual-finger level, but the open/close positions seem detailed enough for these simulations. There’s also a boxy rig worn on your chest that tracks your rotational movement, so when you twist, your body actually moves in the virtual world.

Image credit: NASA

Why all the trouble to recreate these worlds virtually? The Earth’s gravity makes it impossible to accurately simulate some space tasks with physical hardware.

The next-best option for astronauts to simulate work conditions in space is NASA’s Neutral Buoyancy Laboratory, a gigantic pool at the JSC where astronauts train underwater. There they’ll have hardware versions of some specific components, but not a full space station environment; the ISS is just far too big. They might have one small piece of the station mocked up on one side of the pool, and then they’ll have to move over to another piece somewhere else in the pool. It’s better than nothing, but it doesn’t reflect the reality of the real ISS layout.

Virtual reality, by contrast, gives you a full view of everything you’ll experience in the real environment, and all the pieces are in the same spatial relationship that they’re actually in on the station.

Then, of course, there’s the helmet. It may not be a looker, but it’s definitely high-functioning. It gives users a 720p display, along with the full head-tracking functionality you’d expect. During my time in the helmet I didn’t notice any lag at all, which is the number one thing that ruins a virtual reality experience.

I asked Dr. Tinch if the system was based on the Oculus Rift. He said that at the time of development the Oculus didn’t have the resolution they required, so one of the VR Lab’s engineers built the current helmet in his garage. He did say, though, that they are keeping an eye on how commercial VR hardware evolves, and they may swap one in as the technology progresses.

A magnetic system handles motion tracking for the head, chest, and hands. The sensor is mounted to the ceiling above the user and can perceive where specific points are at any given time relative to a specified central point. It seemed very smooth in the few minutes that I used it, and it didn’t suffer from any dead spots where your hands suddenly disappear, which I’ve seen happen with some LED-based tracking systems, like the Navy’s Blue Shark.

I asked if NASA has ever tried to combine VR and neutral buoyancy. Tinch said that would be tough to do, and they haven’t attempted it, but the two labs work closely with each other. They’ll experiment with the physical hardware astronauts will eventually have to use on the ISS in the neutral buoyancy environment, and then they’ll see how it works within the context of the entire station in VR. Sometimes they’ll go back and forth through several iterations until they get the technique or the tool dialed in just right.

NASA also has a helmet-free simulation of the ISS cupola’s robotics workstation. It has three monitors that simulate the station’s three windows. It’s not a completely immersive experience, but it’s good enough for the task at hand. Users can still scroll around to get the full 360-degree view outside the windows; they just can’t display all 360 degrees at once. It’s a lot simpler to set up and use than its space-borne counterpart. It also has three smaller monitors that display the camera views from outside the ISS.

The combination of the window views and the camera views allows the crew to monitor the arm movement and make sure it’s not going to hit any structures. Obviously, this is even more important if a crew member is working at the end of the arm. It also has the same physical control panels that are on the ISS. The point is that this station simulates what one of the onboard astronauts is doing while astronauts on an EVA are doing their thing in VR, letting everyone interact simultaneously in the same simulation. It’s an ingenious system.

The Charlotte robot

Being able to see how things are spatially related on the outside of the ISS is obviously extremely important for orientation, but there are some things that virtual reality can’t simulate; most critically, the way objects feel. This is especially important on the station because objects in microgravity behave very differently than they do on Earth. When you’re on the outside of the station, you may be tasked with pulling out a two-ton panel and working with some components inside.

In microgravity the panel—even a two-ton one—wouldn’t have any weight, but it still has size, it has moments of inertia, and it has a center of mass. Astronauts need to learn how big objects like that will behave in space so they don’t accidentally fling something off the station or otherwise break it.
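The physics behind that is worth making concrete. Here's a minimal sketch (my illustration, not NASA's DOUG or Charlotte code) of why a "weightless" panel still fights back: mass resists translation, moment of inertia resists rotation, and a push that misses the center of mass produces both.

```python
# Minimal sketch (not NASA's software): in microgravity, weight vanishes
# but mass and moment of inertia do not. The numbers below are made up
# for illustration.

def push_panel(mass_kg, inertia_kg_m2, force_n, lever_arm_m, duration_s):
    """Apply a constant force at a point offset from the center of mass.

    Returns (linear velocity in m/s, angular velocity in rad/s) after
    duration_s seconds, for a simple planar (2D) case.
    """
    lin_accel = force_n / mass_kg           # F = m * a
    torque = force_n * lever_arm_m          # off-center push -> torque
    ang_accel = torque / inertia_kg_m2      # tau = I * alpha
    return lin_accel * duration_s, ang_accel * duration_s

# A ~2000 kg panel pushed with 50 N for 2 s, straight through the
# center of mass: it drifts slowly and doesn't spin.
v, w = push_panel(2000.0, 3000.0, 50.0, lever_arm_m=0.0, duration_s=2.0)
print(v, w)   # 0.05 m/s, 0.0 rad/s

# The same push applied 1.5 m off-center: same drift, but now it tumbles.
v, w = push_panel(2000.0, 3000.0, 50.0, lever_arm_m=1.5, duration_s=2.0)
print(v, w)   # 0.05 m/s, 0.05 rad/s
```

The same modest shove either nudges the panel or sets it slowly spinning, depending only on where you grab it, which is exactly the intuition astronauts have to build.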

Enter the Charlotte robot. The device is composed of eight separate motors, each of which controls its own wire. The wires all connect together on a swappable object in the middle. This creates a web-like look, which is how the robot got its name (an homage to Charlotte’s Web). The motors are all controlled by a central computer, which uses advanced physics models to determine how the object in the middle should move depending on the size and mass of the object it is representing. It can move the object several feet along any axis and rotate it realistically, covering all six degrees of freedom.

Say, for example, an astronaut needed to remove a large, four-foot cube-like structure from the station. The object in the middle of Charlotte might only be one cubic foot and have just a piece of a handrail on it, but in virtual reality, it would appear to the astronaut as if it were the full-sized cube, and it would be in true physical relation to everything else on the station. When the astronaut pulls the box out, they have to deal with stopping its inertia. Maybe its center of gravity is way off to one side; the astronaut will find that the box wants to rotate in a squirrelly way, just like it would in reality, and they’d have to adjust. Learning how to counter that tendency to spin in a virtual environment—before expensive pieces of equipment are on the line—could be critical to a mission’s success.

When it’s time to put the object back into place, maybe it’s somewhat of a tight fit. Charlotte can implement contact models that simulate the bumping and hard stops of the object being wiggled back into place, even though you can see in real life that the robot is unimpeded. Will there be a click you have to feel for at the end so you know it’s locked in? Charlotte can simulate that, too.
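A contact model like that can be sketched as a penalty force, a standard haptics technique: once the virtual object penetrates a virtual surface, the motors push back in proportion to how deep and how fast it's going in. This is a generic illustration of the idea, not Charlotte's actual implementation, and the stiffness and damping numbers are invented.

```python
# Generic spring-damper (penalty) contact model -- a common haptics
# technique, sketched here for illustration; not Charlotte's code.

def contact_force(penetration_m, penetration_rate_m_s,
                  stiffness_n_per_m=5000.0, damping_n_s_per_m=50.0):
    """Force (N) pushing the object back out of a virtual surface.

    Zero in free space; otherwise a spring term (the hard-stop feel)
    plus a damping term (absorbs the bump instead of bouncing).
    """
    if penetration_m <= 0.0:
        return 0.0
    return (stiffness_n_per_m * penetration_m
            + damping_n_s_per_m * penetration_rate_m_s)

print(contact_force(0.0, 0.0))     # free space: 0 N
print(contact_force(0.002, 0.1))   # 2 mm into the wall, still moving in: ~15 N
```

A sharp jump in stiffness over a short travel is also how you'd fake the "click" of a latch seating: the user feels resistance build, then suddenly release.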

Perhaps most impressively, NASA’s VR lab has two separate Charlotte robots, and they can work in tandem. So, say for example that they needed to simulate the removal and replacement of a very large object that would require two astronauts working together. Each astronaut can be in his or her own virtual reality helmet, working with a separate Charlotte robot that will behave as if they’re a part of the same structure. So if one astronaut pulls, the other will feel it. If one astronaut rotates the object, it will twist in the other’s hands. The amount of computational smarts that goes into pulling that off is just incredible.
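Conceptually, the tandem trick amounts to both robots acting on one shared simulated rigid body: each grip's force feeds into a single net force and torque, the shared state is integrated once, and the resulting motion is played back through both devices. Here's a toy planar sketch of that idea (my illustration of the concept, not NASA's implementation):

```python
# Sketch of the shared-state idea behind the tandem Charlotte setup:
# both crew members' grips act on ONE simulated body, so each feels
# the other's actions. Illustrative only; not NASA's code.

def net_motion(mass_kg, inertia_kg_m2, grips):
    """grips: list of (force_n, lever_arm_m) pairs in a planar case.

    Returns the shared body's (linear acceleration, angular acceleration).
    """
    total_force = sum(f for f, _ in grips)
    total_torque = sum(f * r for f, r in grips)
    return total_force / mass_kg, total_torque / inertia_kg_m2

# Astronaut A pulls +40 N at one end of a long object; astronaut B
# resists with -40 N at the other end. The body doesn't translate,
# but both of them feel it start to rotate.
lin, ang = net_motion(500.0, 800.0, [(40.0, 1.0), (-40.0, -1.0)])
print(lin, ang)   # 0.0 m/s^2, 0.1 rad/s^2
```

The hard part in practice is doing this fast enough that both haptic devices stay in sync with the one physics state, which is where the "computational smarts" come in.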

The handrails can be changed out to make them look and feel like any type of orbital replacement unit (i.e., a spare part) that a crewmember might have to remove from or install onto a platform. It’s an incredible virtual reality system, one that engages not just a person’s sense of sight but also touch. Astronauts have come back from space and reported that the Charlotte rig is an incredibly accurate approximation of how things feel and move in microgravity.

And then the jetpacks

One of the things astronauts train for in the VR Lab is the terrifying scenario of becoming disengaged from the station and hurtling through space. Enter the SAFER unit, which stands for Simplified Aid For EVA Rescue. It goes on the back of the PLSS, the Portable Life Support System. This is, essentially, the jetpack thingy George Clooney was testing out at the beginning of Gravity.

If an astronaut becomes untethered from the ISS during a spacewalk and loses their grip, they can use the SAFER unit to arrest their rotation, point themselves in the right direction, and propel themselves back to the station. It uses pressurized nitrogen, with nozzles at the top, sides, back, and bottom, which lets them control their rotation and translation rates. It’s all run from a hand controller that pops out of the right side, with several toggles for switching between modes (e.g. rotation or translation) and a joystick that looks like it might be at home on an Atari system.

NASA’s virtual reality helmet allows astronauts to practice real-world emergency situations where they would need the SAFER unit. The astronaut gets strapped into the unit, and in their VR helmet they appear to be in the middle of a mission outside the space station. The VR Lab engineers then throw the astronaut off the station, into the virtual expanses of space. The engineers set a spin rate and a rate at which the astronaut is translating away from the station, which I imagine is a pretty dizzying experience.

It takes about 30 seconds for the astronaut to get the SAFER’s hand controller out, checked out, and ready to go. From there they have to stop their rotation and locate the ISS in the distance, which isn’t as simple as it sounds. Astronauts wear rigid, enclosed helmets that don’t turn with them, so if they just turn their head, all they’ll see is the inside of their helmet. Instead, they have to use the hand controller to rotate until they spot the ISS, then navigate their way back to it. The drill typically ends when they grab onto a handrail on the station and clip back on, or when they return to the airlock.
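That first "stop your rotation" step is, in essence, a bang-bang controller: fire whichever thruster opposes your spin until the rate falls inside a small deadband. Here's a toy version of that loop (my illustration, with made-up numbers; not SAFER's actual flight software):

```python
# Toy bang-bang detumble loop -- an illustration of the "arrest your
# rotation" step, not SAFER's flight software. All rates are planar
# (one axis) and the thruster authority is invented.

def detumble(spin_rate_deg_s, thruster_accel_deg_s2=2.0,
             deadband_deg_s=0.5, dt=0.1, max_steps=10000):
    """Fire the opposing thruster each tick until the spin rate is
    inside the deadband. Returns (final_rate_deg_s, elapsed_seconds)."""
    elapsed = 0.0
    for _ in range(max_steps):
        if abs(spin_rate_deg_s) <= deadband_deg_s:
            break  # slow enough; stop firing
        direction = -1.0 if spin_rate_deg_s > 0 else 1.0
        spin_rate_deg_s += direction * thruster_accel_deg_s2 * dt
        elapsed += dt
    return spin_rate_deg_s, elapsed

rate, secs = detumble(10.0)   # thrown off tumbling at 10 deg/s
print(rate, secs)             # ends inside the 0.5 deg/s deadband
```

The deadband matters: with finite thruster pulses you can never hit exactly zero, so you settle for "slow enough to find the station," then switch modes and translate home.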

All of these factors combined—the intricate level of detail in the computer models, the incredible physics engine that makes Charlotte behave realistically—add up to the most advanced virtual reality system we’ve ever seen. Leave it to NASA to take things to the next level.


Video shot by Brent Rose, edited by Nick Stango.

Special thanks to everybody at NASA JSC for making this happen. The list of thank yous would take up pages, but for giving us access, and for being so generous with their time, we are extremely grateful to everyone there. Huge thanks also go to OSU Space Cowboys for inviting us in the first place.

Space Camp® is a registered trademark/service of the U.S. Space & Rocket Center. This article and subsequent postings have not been written or endorsed by the U.S. Space & Rocket Center or Space Camp®. To visit the official space camp website, click here.

This post originally ran on 11/16/14 as part of Gizmodo’s Space Camp series.