Nvidia's GeForce GTX 580 is what the original should have been: quieter, full-featured, faster and more efficient.
When Nvidia launched the GTX 480, based on the GPU code-named GF100, early this year, the new chip proved to be something of a mixed bag. It was undeniably fast, but also hobbled: every GTX 480 shipped with one functional unit (a streaming multiprocessor) disabled. Whether that was due to yield or power issues wasn't clear. Power was certainly a problem, though: Nvidia's flagship ran hot and loud.
Given the competition, Nvidia had to get Fermi out the door. Even before the original Fermi left the building, Nvidia's engineers were heads-down respinning and reengineering the GF100. The result is the GF110. The new GPU is, as Emperor Palpatine might put it, "fully operational," with all functional units now enabled. Even with more transistors humming, the core clock has been pumped up from the original 700MHz to 772MHz, and the memory clock is now 1GHz, up from the stock GTX 480's 924MHz.
As you can see, power is down a bit, while the number of functional units and clock speeds are up.
The GF110's designers took some time to streamline the data paths through the GPU. They also enhanced a few features, increasing overall FP16 texture performance, among other things. In addition, the card itself gets a new cooling subsystem, including a redesigned fan and a vapor chamber that replaces the GTX 480's heat pipes. Even the shroud surrounding the cooler has been revamped: the fan is recessed slightly and the rear edge beveled more aggressively, which improves airflow and cooling effectiveness in SLI setups where cards are mounted very close together.
The GTX 580 vapor chamber dissipates heat more efficiently than the old heat pipes on the GTX 480.
The overall result is less obtrusive fan noise under load. Nvidia estimates acoustics at about 5dBA lower than the GTX 480's, and even lower than the GTX 285's. During our testing we noticed that the card is not only quieter, but its fan noise sits at a different, less annoying pitch. This may be due to the fan redesign, which adds a stiffening ring around the blade structure for greater rigidity.
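For a rough sense of what a 5dBA drop means, here's a back-of-the-envelope illustration using the standard decibel-to-power relationship (our arithmetic, not Nvidia's measurements): the reduction works out to roughly a threefold drop in radiated sound power.

```python
def db_to_power_ratio(delta_db):
    """Convert a decibel difference into a sound-power ratio."""
    return 10 ** (delta_db / 10)

# Nvidia's claimed 5dBA reduction: the GTX 580's cooler radiates
# roughly one-third the acoustic power of the GTX 480's.
print(round(db_to_power_ratio(5), 2))  # → 3.16
```

Perceived loudness doesn't track sound power linearly, of course, but a 3x power reduction is clearly audible, which matches what we heard on the test bench.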
The lack of heat pipes on the GTX 580 is clearly visible here.
The bigger, shallower bevel on the rear of the cooling shroud is part of the cooler redesign.
So what we have is a faster Fermi: all functional units enabled, some internal architectural tweaks, and a quieter, cooler card. How does it actually perform? We took an Nvidia GTX 580 reference card and compared it against a stock GTX 480 from Asus, plus a Radeon HD 5870, an HD 5970, and the new Radeon HD 6870.
Note that the Radeon HD 5970 is a dual-GPU card. While AMD's CrossFireX performance and support have improved considerably with recent driver releases, its performance still depends on CrossFireX scaling. The HD 5970 is also twelve inches long, which makes it a nonstarter in many mid-tower cases. Finally, the XFX Radeon HD 5870 XXX edition is overclocked a bit: 3% on the core clock and 8% on memory. Keep that in mind as we take a look at the results.
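To put the XFX card's overclock in concrete terms, here's a quick sketch assuming AMD's reference HD 5870 clocks of 850MHz core and 1200MHz memory (the 3%/8% figures quoted above are approximate):

```python
def overclocked(stock_mhz, percent):
    """Apply a percentage overclock to a stock clock speed."""
    return stock_mhz * (1 + percent / 100)

# Reference HD 5870: 850MHz core, 1200MHz GDDR5 (4.8Gbps effective).
core = overclocked(850, 3)   # ~875MHz core
mem = overclocked(1200, 8)   # ~1296MHz memory
print(round(core), round(mem))
```

An 8% memory bump is more than it sounds: on a 256-bit GDDR5 bus it adds roughly 12GB/s of bandwidth, which matters at 1920x1200 with 4x AA.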
All tests were run at 1920x1200, with 4x AA enabled. Our test system consisted of a Core i7 975 at 3.3GHz, with 6GB of DDR3/1333 memory, running on an Asus P6X58D Premium motherboard, with a Seagate 7200.12 1TB drive, an LG Blu-ray ROM drive, a Corsair TX850w 850W PSU, and Windows 7 Ultimate 64-bit.
Let's start with a quick look at a couple of synthetic benchmarks. We don't put much weight on these results, but they're interesting to check out.
The 3DMark Vantage test was run at the highest-quality "extreme" mode, which isn't particularly extreme by today's standards. While the Radeon HD 5970 edges out the GTX 580, the GTX 580's result is the fastest score we've seen from a single-GPU card on our test system.
We expected the GTX 580 to be what Nvidia likes to call a "tessellation monster," and if the Heaven 2.1 scores are any indication, it certainly is. What we also see here is a difference in philosophy on handling tessellation: AMD tunes its tessellation sweet spot around 16-pixel triangles, so cranking up Heaven's tessellation factor pushes the Radeon cards past that sweet spot, and Nvidia's cards come out on top.
But what we really care about is performance in real games, so let's first take a look at DirectX 10 performance.
The games we tested for DX10 performance include Far Cry 2 (two different scenes), Just Cause 2 (the Concrete Jungle benchmark), Tom Clancy's HAWX and the aging, but still gorgeous, Crysis.
By this time, Crysis has been thoroughly studied and drivers optimized by the GPU manufacturers. Even so, the GTX 580 finally manages to best the XFX Radeon HD 5870, though the dual-GPU 5970 crushes Crysis. The HD 5970 also manages top scores in Just Cause 2.
The same can't be said for HAWX or either of the Far Cry 2 benches. The GTX 580 just edges out the dual GPU HD 5970 in these games. Note that the GTX 580 crushes the single GPU Radeons in all these tests. [For the full rundown of DX10 performance charts, head here]
So the GTX 580 looks like a beast in DirectX 10. Now let's move on to DirectX 11 performance results.
The DX11 games we tested are a mixed bag. Some, like the recently released Tom Clancy's HAWX 2 and Metro 2033, make heavy use of DX11 features; HAWX 2, in particular, leans very hard on DX11 hardware tessellation. Others, like BattleForge, DiRT 2, Aliens vs. Predator, and S.T.A.L.K.E.R.: Call of Pripyat, use DX11 features a little more judiciously.
HAWX 2 uses tessellation in an extreme way, but the result is gorgeously rendered, near-photorealistic landscapes. Fermi's ability to tessellate and render down to very small meshes plays very well in this test.
Metro 2033 was also interesting, mostly because of how poorly the single-GPU Radeon HD 5870 fared. This result was repeatable, and we're not quite sure what's going on, since the newer HD 6870 managed a reasonable, if low, score.
In most of the rest of the benchmarks, the GTX 580 gave the dual GPU Radeon HD 5970 a run for its money, either winning outright or coming very close. [For the full rundown of DX11 performance charts, head here]
So how much power does the card consume? Given Nvidia's specs, it should be close to the power consumption of the GTX 480. Here's what we found.
Yes, the GTX 580 continues Fermi's tradition of high power consumption, but at least it eats watts more politely and quietly. Performance per watt is up, too, given the better benchmark results we've seen in our testing. Still, it's telling that even the dual-GPU Radeon HD 5970 draws less power under full load than the GTX 580.
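A quick way to frame "performance per watt" (the numbers below are hypothetical, for illustration only, not our measured results):

```python
def perf_per_watt(fps, system_watts):
    """Frames per second delivered per watt of total system draw."""
    return fps / system_watts

# Hypothetical: card A averages 60fps with the system drawing 400W;
# card B averages 50fps at 380W. A wins on raw speed, but B is the
# more efficient design.
a = perf_per_watt(60, 400)   # 0.150 fps/W
b = perf_per_watt(50, 380)   # ~0.132 fps/W
print(f"{a:.3f} {b:.3f}")
```

The subtlety is that wall-socket readings measure the whole system, so the CPU, drives, and PSU losses are baked into every card's figure; the metric is best used to compare cards on the same test bench, as we do here.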
As we've seen, Nvidia's GTX 580 is clearly the fastest single GPU card on the market today. Given that we're looking at an early reference sample, it's likely that we'll see factory overclocked cards emerge in the next several months, pushing performance up even further.
However, the price for this level of single-GPU goodness is steep: Nvidia's suggested price for the GTX 580 is $499. As always happens when a new GPU arrives, the competition starts making price moves. Radeon HD 5870s are now down to under $350 (under $340 in some cases). The Radeon HD 6870, a little slower than the HD 5870 but more efficient, is under $250.
Even the somewhat scarce Radeon HD 5970 is seeing price drops. You can pick up a Sapphire Radeon HD 5970 for $499, or $469 after a $30 rebate. AMD also likes to point to the HIS HD 5970, but that's listed as "deactivated" on Newegg, and other HD 5970s remain either very expensive or unavailable. In truth, AMD's answer to the GF110, code-named Cayman, isn't out yet, and the Radeon HD 5970 isn't really a mainstream card anyway.
Of course, we don't yet know what yields on the GF110 look like, or what availability will be. Cards from companies like EVGA and Asus will likely land slightly north of the $499 price point, at least initially.
On the other hand, the GTX 580 is still 10.5 inches long, so it will fit in more modest cases than an HD 5970 will. Nvidia recommends a 600W PSU for a single GTX 580. Given what we've seen in our performance tests, you don't really need more than one card for a typical 1920x1200 or 1920x1080 display. But if you want three displays, particularly three displays coupled with Nvidia's 3D Vision stereoscopic glasses, you'll want two, though the much more modestly priced GTX 470 might serve just as well in those scenarios.
So Fermi, the real Fermi, has arrived. It's still pricey and power hungry, but it's quieter and performs much better. We're looking forward to checking out retail cards, but for now, the fully operational GTX 580 should delight gamers with deep pockets.
Maximum PC brings you the latest in PC news, reviews, and how-tos.