The History of the Uncanny Valley

Partway through the latest Medal of Honor game, you nearly die in an ambush. Your AI buddy saves you, then helps you to your feet. It should be a poignant moment; instead it's chilling, because his eyes are completely lifeless.

Or, you're playing Mass Effect 2, engaged in a serious conversation with one of your crewmates. Except that you can't take it seriously, because you keep staring at his teeth. There's something terribly wrong with his teeth.

Welcome to the Uncanny Valley.

The History of the Uncanny Valley

The concept of the Uncanny Valley originally applied to robots; it was first presented by Japanese roboticist Masahiro Mori in a 1970 essay, illustrated with a now-famous chart.

Mori suggested that as robots looked and behaved more like humans, they became more easily accepted by real humans – up to a point. When a robot's appearance and behavior become very similar to, but not exactly like, a human's, our reactions turn sharply negative; that dip in the chart is the valley. Only if a robot becomes effectively indistinguishable from a human do we accept it again.
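To make that shape concrete, here's a quick Python/matplotlib sketch of a Mori-style chart. The numbers are made up for illustration – chosen only to reproduce the rise, dip and recovery Mori described – not taken from his paper.

```python
# Illustrative Uncanny Valley curve: affinity rises with human likeness,
# plunges as a character gets close-but-not-quite human, then recovers.
import matplotlib.pyplot as plt

human_likeness = [0.0, 0.2, 0.4, 0.6, 0.75, 0.85, 0.95, 1.0]
affinity       = [0.0, 0.2, 0.45, 0.7, 0.3, -0.4, 0.5, 1.0]  # assumed values

plt.plot(human_likeness, affinity, marker="o")
plt.axhline(0.0, color="gray", linewidth=0.5)  # neutral reaction
plt.xlabel("Human likeness")
plt.ylabel("Affinity")
plt.title("The Uncanny Valley (illustrative values)")
plt.show()
```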

As with robots, the Uncanny Valley applies to characters generated with 3D graphics. In this case, we'll look at how real-time 3D graphics in games have evolved over time, and how games that aim for some level of realism have sunk deep into the Uncanny Valley. We're not going to talk about movies or pre-rendered 3D animation.

Before we do, however, let's divide the problem up a bit.
• Appearance. How a character looks, particularly the face.
• Movement. How does a character move through the world? Does he or she move like a real person?
• Behavior. When a character talks, or responds emotionally, does that behavior seem realistic?

Appearance

Whether a character is designed to be stylized or to look realistically human makes a big difference in how readily we accept it emotionally. The shading and artwork in Borderlands, for example, resemble a graphic novel, so it's relatively easy for us to suspend our disbelief. These characters don't look real enough to be disturbing.

While game designers made some attempts over the years to create more realistic-looking characters, the technology and tools were pretty limited until the last few years. Rolling back the clock, no one would suggest that the "2-1/2D" games of the early 1990s looked remotely realistic. Yes, for the day, they were immersive and looked great. But no one would call Duke Nukem from the original Duke Nukem 3D a realistic-looking character.

Over the years the technology evolved, and the advent of programmable shaders with DirectX 8, along with a new generation of high-definition game consoles, gave programmers the graphics and compute muscle to attempt more realistic-looking characters.

In 2004, Valve finally shipped the long-awaited Half-Life 2. The character of Alyx proved to be one of its more memorable creations, with relatively realistic facial animation and, in particular, eyes that seemed to respond to emotion. The character modeling for Alyx wasn't perfect, but it was a big step forward in creating characters that looked like real people. Still, the world of Half-Life 2 – and Alyx herself – is obviously computer generated.

If we move forward to 2007, BioWare's Mass Effect took another step towards more realistic characters. The focus was a little narrower, as Mass Effect attempted to model conversations more realistically. We're not talking about the controversial radial conversation tree, but about the facial animation and expressions. However, conversations in Mass Effect were still an eerie experience. Although the characters' eyes were more expressive, they tended to wander off in odd directions.

Mass Effect 2 improved on facial expressions a bit, but the mouths—particularly teeth—still seemed jarring compared to conversations with real people.

Part of the constraint, of course, is that characters must be modeled to run in a real-time environment, hitting a minimum of 30 frames per second while targeting 60fps or more. This is often why pre-rendered cut scenes using the same character models can look somewhat better – pre-rendered video takes substantially more compute time to generate, but then the game is just playing back video.
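The back-of-the-envelope arithmetic makes the squeeze clear – this trivial Python snippet just converts frame rates into the per-frame time budget that everything (animation, AI, rendering) must fit inside:

```python
# Per-frame time budget at common target frame rates.
for fps in (30, 60):
    budget_ms = 1000.0 / fps
    print(f"{fps} fps -> {budget_ms:.1f} ms per frame")
# 30 fps -> 33.3 ms per frame
# 60 fps -> 16.7 ms per frame
```

A pre-rendered cut scene, by contrast, can spend minutes of offline compute on a single frame.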

While consoles are mired in the DirectX 9 era of graphics programmability and processor horsepower, PC technology marches on. AMD recently began shipping its second generation of DirectX 11 GPUs, while Nvidia's DirectX 11 hardware offers a robust GPU compute architecture. Hardware from both companies is opening up interesting possibilities for more realistic character models. In particular, hardware tessellation is capable of smoothing out the polygonal heads, facial features and joints that detracted from the look of past generations of PC game characters.
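To illustrate the principle only – this is Python, not GPU shader code – here's a sketch using Chaikin corner-cutting, a simple subdivision scheme: each pass replaces every edge of a coarse outline with two new points, rounding off corners much the way tessellation refines a blocky mesh into a smoother one.

```python
# Conceptual analogue of tessellation: subdividing coarse geometry
# produces a denser, smoother result. (Chaikin corner-cutting on a
# closed 2D outline; real hardware tessellation works on 3D patches.)

def chaikin(points):
    """One subdivision pass: each edge yields two new points."""
    smoothed = []
    for i, (x0, y0) in enumerate(points):
        x1, y1 = points[(i + 1) % len(points)]  # next vertex, wrapping
        smoothed.append((0.75 * x0 + 0.25 * x1, 0.75 * y0 + 0.25 * y1))
        smoothed.append((0.25 * x0 + 0.75 * x1, 0.25 * y0 + 0.75 * y1))
    return smoothed

# A blocky square "head" outline; three passes round it off considerably.
outline = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
for _ in range(3):
    outline = chaikin(outline)
print(len(outline), "vertices")  # 4 -> 8 -> 16 -> 32
```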

But technology isn't the whole answer. Artists need to step up as well. As we've seen with the most recent Medal of Honor game, the overall visuals are approaching photorealism. Even the soldiers are looking more realistic – until you see their faces, which are devoid of human emotion. On top of that, they don't quite move like real people. Which brings us to our next topic.

Movement

Let's climb in our wayback machine to the 1990s, when game developers began trying to build more realistic-looking characters. One seminal development was the original Tomb Raider, which arrived on the scene in 1996. Gamers were startled at just how realistically – for that time – the character of Lara Croft moved. Sure, the visual environments were still pretty blocky, and Lara's appearance was still pretty cartoony. But when she ran, jumped and swan-dived, you could almost believe you were watching a real person.

That first Tomb Raider was one of the earliest examples of capturing the motion of real people and applying it to real-time, interactive game animation. Over the years, motion capture has become pretty common in the game industry. Sometimes the results are unintentionally hilarious, as when human motions are applied to giant combat robots that weigh 30 tons or more in Front Mission Evolved. Or, back to Mass Effect, where the same set of motion capture data is used whether the main character is male or female.

These somewhat comical results illustrate the limitations of motion-captured animation. There is only so much studio time available during game development. Then there's the issue of how, exactly, you motion capture purely fictitious characters – aliens whose leg structures may differ from a human's, or people running in powered body armor, as in the Warhammer 40,000: Dawn of War games.

Even if you limit yourself to human or human-like characters, problems persist. It's difficult to become immersed in an MMO if all the characters are slide-walking across the world, or moving with jerky, unnatural motions.

Here, technology is having an impact as well. Procedural animation – applying animation algorithms to models, rather than playing back motion capture data – is one solution. In the past, procedural animation has been limited, but the new capabilities of modern GPUs will likely have a major impact. Middleware from NaturalMotion (http://www.naturalmotion.com) attempts to address these issues with its Dynamic Motion Synthesis technology.
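As a minimal sketch of the idea – the walk_pose function and its angle values are invented for illustration, and bear no relation to Dynamic Motion Synthesis itself – joint angles can be computed each frame from a walk-cycle phase instead of being read from captured data:

```python
# Procedural animation in miniature: an algorithm, not captured data,
# produces the pose for each frame.
import math

def walk_pose(t, stride_hz=1.5):
    """Return (hip, knee) angles in degrees for one leg at time t seconds."""
    phase = 2.0 * math.pi * stride_hz * t
    hip = 30.0 * math.sin(phase)                   # leg swings fore and aft
    knee = 45.0 * max(0.0, math.sin(2.0 * phase))  # knee bends during swing
    return hip, knee

# Sample at 60fps; a game would feed these angles to the skeleton rig.
for frame in range(5):
    hip, knee = walk_pose(frame / 60.0)
    print(f"frame {frame}: hip {hip:+.1f} deg, knee {knee:+.1f} deg")
```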

These types of technologies provide powerful toolsets for developers, but artists and programmers still need to spend time and effort tuning the algorithms until the results look right. Again, it's not a matter of making that four-legged centaur with tentacle arms move realistically – no one knows what that's supposed to look like. The problem is how to make human characters move in a way that looks real and doesn't leave players with a vague sense of unease.

Behavior

Modeling behavior is more subtle than appearance or movement. It's not always immediately obvious whether characters are behaving realistically. A good example is the combat behavior of AI opponents in the more recent Total War games, like Medieval II: Total War. At first blush, the combat units move realistically, maintaining unit cohesion across variable terrain.

But that appearance of realism breaks down as you play more. Artillery is set up only a few yards behind buildings. Cavalry squadrons charge infantry squares willy-nilly. Infantry units wander aimlessly around the map. Of course, that brings up the question: what's realistic behavior? It's not like real generals in real historical battles always behaved rationally, either.

On a more personal level, attempts to model interpersonal relationships realistically have met with limited success. One of the more interesting is Façade (http://www.interactivestory.net/), a first-person game in which you type whole sentences into a natural language system that tries to interpret what you're saying so the characters can respond realistically.

Between the small scale of Façade and the massive scale of a Total War battle lies a whole range of behaviors: giving orders to a small group, interacting with non-player characters in an RPG, building faction relationships in an open-world game.

The real problem with behavior is always player expectations. In our real-world interactions, we can't predict how friends, coworkers or antagonists might behave. In most cases, game designers need to build in predictability for the player, but that also limits the possible options and minimizes realistic behavior. Gamers endlessly complain about two-dimensional characterizations in video games, then complain again when a character acts in a way they feel is out of character.

Bringing it All Together

What we really want, of course, is the whole pie: characters that look real, move like real people and behave and react as we might expect real people to behave. Creating characters like that may actually be possible with today's technology—but it may not be practical. What's lacking are tools that span the whole range—appearance, movement and behavior—in an integrated way. Game engines and other middleware attempt this, but the development resources are often too limited to really pursue all three axes of realism.

So when it comes to gaming, will the Uncanny Valley ever be crossed? Already, we're starting to see that happen in movies and television. If you've seen The Curious Case of Benjamin Button, the Lord of the Rings trilogy or the recently released Sintel, a 15-minute short animation sponsored by the Blender Foundation, you know that it can be done with non-interactive storytelling.

Bringing that level of realism to the interactive, real-time world of gaming is a much larger challenge. After all, players themselves are notoriously unpredictable. So while games might reach a believable level of realism with visuals and motion, crossing that last chasm of behavior will likely be the deepest valley of all.
