AMD's Radeon VII Is a Great Gaming Card, But That's Just the Beginning

AMD’s fighting competitors on all fronts. It’s battling Intel in the CPU space, with each company advertising more cores and tacking on more features to woo users (and manufacturers) away from the other. Yet its battle against Nvidia in the GPU space is different. While Nvidia comfortably dominates the high-end space, AMD has been content to offer cheaper cards with comparable power in the mid-range and below. The Radeon VII, AMD’s new $700 card—the first 7nm GPU to ship to consumers—is intended to take on Nvidia’s very best cards. What’s surprising is that it isn’t just as good as Nvidia’s best—sometimes it’s better.

The Radeon VII’s strong showing is a surprise because it’s based on the Vega architecture AMD introduced in 2017. It’s the same architecture found in its Ryzen CPUs with integrated graphics, and in the Vega 56 and 64 graphics cards, which launched back in August 2017 for $400 and $500, respectively. A $700 card based on a two-year-old architecture feels like a bad deal when you consider that Nvidia also has a card that starts at $700: the RTX 2080. The RTX 2080 is based on Nvidia’s Turing architecture, which launched late last year and is still rolling out. The 2080 also has ray tracing support, which the Radeon VII lacks the hardware for and is unlikely to gain via software updates.

So there’s an immediate question of why you should buy this expensive GPU based on aging architecture. AMD believes the reasons are twofold. One, it’s built on a 7nm process node (hence the name). Earlier Vega chips came from a 14nm node, and Nvidia’s Turing GPUs are on a 12nm node. The node size usually translates to the size of the chip itself: a smaller node means a smaller chip, which means data should travel faster and the chip should require less energy.

Two, while the Radeon VII processor isn’t that much smaller than the ones found in the Vega 64 and 56, AMD used the reclaimed space to cram in more memory. The Radeon VII is loaded with twice the memory of most other consumer-grade cards, and it’s not just any memory. AMD jammed in 16GB of HBM2, which has roughly twice the bandwidth of the GDDR6 found in Nvidia’s cards.
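To put numbers on that bandwidth claim: peak memory bandwidth is just bus width times per-pin data rate. Here’s a minimal Python sketch of the back-of-the-envelope math, using each card’s published spec-sheet figures rather than anything we measured ourselves:

```python
# Back-of-the-envelope peak memory bandwidth:
# bus width (bits) / 8 * effective data rate (Gbps per pin) = GB/s.
def peak_bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Theoretical peak memory bandwidth in GB/s."""
    return bus_width_bits / 8 * data_rate_gbps

# Published spec-sheet figures: (bus width in bits, data rate in Gbps per pin).
cards = {
    "Radeon VII (16GB HBM2)":    (4096, 2.0),
    "RTX 2080 (8GB GDDR6)":      (256, 14.0),
    "RTX 2080 Ti (11GB GDDR6)":  (352, 14.0),
}

for name, (bus, rate) in cards.items():
    print(f"{name}: {peak_bandwidth_gb_s(bus, rate):.0f} GB/s")
# Radeon VII: 1024 GB/s, RTX 2080: 448 GB/s, RTX 2080 Ti: 616 GB/s
```

That roughly 1TB/s figure is more than double the 2080’s and comfortably ahead of even the 2080 Ti’s.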

Those four rectangles surrounding the larger rectangle are the 16GB of HBM2 memory. It’s a lot.
Photo: Alex Cranz (Gizmodo)

In theory, this extra memory, coupled with the shrinking die size of the GPU, should allow AMD’s aging architecture to be competitive with Nvidia’s shiny new stuff. AMD told me ahead of my testing that it wouldn’t necessarily beat the RTX 2080 every time, just some of the time. Games, especially the AAA titles that truly tax a high-end GPU, often have soft caps on the amount of GPU memory they can call on. That cap is usually 11GB, the same amount of memory found in the RTX 2080 Ti, which starts at $1,000. That means 5GB of the Radeon VII’s memory is wasted on a lot of games.

AMD’s claims are borne out by our tests. The Radeon VII does better in some games and worse in others, often falling within just a few frames per second of the 2080, and always lagging distantly behind the 2080 Ti—the gold standard in gaming GPU performance at the moment.

So why on earth would you get a Radeon VII when the 2080 costs the same amount and tosses in support for ray tracing? There’s the obvious reason that few games currently support ray tracing, with Battlefield V being the only AAA title to have it. There’s also all the other stuff a graphics card does that has nothing to do with gaming.

That stuff includes processing big 3D scenes and high-resolution video files. And this is where the Radeon VII smokes not just the 2080, but the $1,000 2080 Ti as well. Admittedly, few people need a number-crunching beast like the Radeon VII for their day-to-day work. If you are spending eight hours a day at a desk animating in 3D or rendering high-resolution videos, you are one of them—less than that, and you can probably pass.

In three separate benchmarks, the Radeon VII performed so well it looks like a steal compared to the 2080 Ti. In Blender, we note the time it takes to render a 3D image, and in Adobe Premiere 2019, we time how long it takes to render and convert a minute-long 8K video that includes effects and transitions. Finally, we run Luxmark, a freely available cross-platform benchmark that ray traces a 3D scene and spits out a score, with higher scores being better.
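For the curious, the Blender portion of that boils down to putting a wall-clock timer around a headless render. Here’s a minimal sketch of that kind of harness—the scene file name is a hypothetical stand-in, not our actual test file:

```python
# Time a headless Blender render of a single frame.
# "render_test.blend" is a hypothetical placeholder scene.
import subprocess
import time

SCENE = "render_test.blend"

start = time.perf_counter()
subprocess.run(
    ["blender", "--background", SCENE, "--render-frame", "1"],
    check=True,  # raise if Blender exits with an error
)
elapsed = time.perf_counter() - start
print(f"Rendered one frame in {elapsed:.1f} seconds")
```

The same pattern works for any command-line renderer: launch it, block until it finishes, and compare elapsed times across cards.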

As you can see above, the Radeon VII is the superior choice, using all 16GB of that HBM2 to scream past the 2080 and 2080 Ti. We should note that Luxmark doesn’t tap into Nvidia’s dedicated RT and tensor cores—the hardware that lets its Turing GPUs do ray tracing in real time. So there could be instances where the 2080 or 2080 Ti performs better, but wherever a professional has run up against the limit of what 11GB or less of GDDR6 memory can do, the Radeon VII is there, offering a speedier future.

The only problem is this isn’t the last fancy card from AMD this year. At CES, AMD noted that its next-generation architecture, Navi, would arrive this year, and right now we know very little about it. Maybe it will be as fast as or faster than the Radeon VII, but with hardware support for ray tracing too. Maybe it will be cheaper. Maybe it will be so expensive we all start thinking of the 2080 Ti as a steal. We just don’t know—and that makes recommending the Radeon VII as your next GPU difficult.

If your card’s crapped out or you have zero interest in what might come six months down the line, then investing $700 in the Radeon VII isn’t a bad idea. It’s an admirable performer in games (though lacking Nvidia’s bells and whistles) and one of the fastest consumer-priced cards available for professionals in the video and 3D space. But personally, I’d rather save my money for Navi. If this is what AMD can do with aging architecture, I can’t wait to see what it does with something new.

README

  • For gaming, it performs comparably to the Nvidia RTX 2080.
  • It lacks the fancy features of the 2080, including ray tracing, so it might not be your best choice for gaming.
  • However, it is absolutely one of the best cards you can get for professional work right now.
  • For $700 it is pricey, and AMD has a new GPU architecture planned for later this year.
  • Your move, Nvidia.

DISCUSSION

Important to note: AMD’s graphics drivers have had consistent difficulty really adapting to games. And nVidia’s Gameworks initiative has saturated many triple-A developers. These developers only want to do things once for their engine - so they WILL make a choice of one over the other, and that one is more than likely to be nVidia if they wish to reach the most people. Once you get past these two blockades, you start to realize that for years AMD’s GPUs have been undervalued for their architectural strengths.

Vega is old, but it will likely be the basis for Arcturus. Navi is their console chip. Neither Vega, Arcturus, nor Navi is what has been on AMD’s “Next Gen” slides for 2020+. Next Gen is generally believed to include AMD’s infinity fabric and modular chiplet designs as a core part of its infrastructure. We likely would never have seen Vega 20 in a gamer card if it weren’t for difficulties in unifying chiplets for GPU presentation to the software; next gen would just “paste on” various components like shader compute, tensor math cores, and even raytracing. People praise Intel’s 3D chip ideas - but it’s very important to remember, AMD did it first and they’re doing it better. And they’ve got a lead.

More importantly: AMD knows they’ve got to be on the very bleeding edge of this technology - even if they stay two to three generations ahead of big blue, even if they don’t make a single mistake, in two years they know they won’t be able to gain a single percent of marketshare against nVidia or Intel. If they make one slip-up, even a very minor one, they’re dead as a company. Literally and figuratively. One bad architecture mistake, one simple slip-up in promises to the server or consumer markets, and they will be gone. And in 10 years... they’ll likely be gone anyway unless they hit home runs every single year for every single segment. Jensen can lie day-in and day-out to his investors, Intel can go through CEOs like water and buy talent to stuff into its cubicles until the 22nd century... but AMD will always be the underdog and held to a higher standard.

That said, I’m plunking down money on Radeon VII. :) This card has things I need.