If you’ve visited the AMD subreddit, the Linus Tech Tips forums, or elsewhere over the last several months, you might have come across a conspiracy theory that Intel and Nvidia struck a secret deal with one another to keep higher-end GPUs out of AMD Ryzen 4000-series laptops. If you looked at the slate of AMD laptops released last year, you might believe it. The Asus ROG Zephyrus G14, Lenovo Legion 5, and others all came with an AMD processor but nothing higher than an RTX 2060. Conspiracy theories are tantalizing, but this one appears to be nothing more than a product of the Intel/AMD/Nvidia wars. It doesn’t help that unsubstantiated claims from blogs and news sites around the world keep pushing the same narrative. All it takes is a little digging to see there isn’t a juicy scandal here—just a complicated web of how CPUs and GPUs work together.
In April 2020, Frank Azor, AMD’s Chief Architect of Gaming Solutions and Marketing, responded to a Twitter user’s question about the lack of high-end GPUs in AMD laptops, saying “You’d have to ask your favorite OEMs and PC Builders that question.” That’s around the time the conspiracy theory started to take shape, but Azor was right. Laptop configurations are decided by OEMs, not chip makers. And those configurations are usually driven by cost, but they also have to make sense. An underpowered CPU with an overpowered GPU is not a good combination, and that’s the trap a chip like the Ryzen 9 4900HS, or anything below it, falls into.
Azor even sat down with The Full Nerd in May 2020 to address the issue again, talking specifically about OEMs’ confidence in Ryzen processors. “I think Ryzen 4000 has exceeded everybody’s expectations, but for the most part everyone tip-toed with us. Because of that, it was hard to imagine a world where we were the fastest mobile processor,” said Azor. “I think when you’re planning your notebook portfolio as an OEM, and you haven’t come to that realization yet—and remember, all this planning for these notebooks was done last year—you leaned into AMD a little bit.”
Essentially, OEMs’ confidence that AMD had a screamin’ fast mobile processor just wasn’t there. So why would they pair a high-end GPU with a processor they believed would be inferior? Finding the middle ground, the “meat of the market,” as Azor put it, meant laptops with RTX 2060s and lower. Yet even with this reasonable explanation, the rumor mill churns on, combing the processors’ specs for answers.
Gizmodo reached out to Intel and Nvidia about these rumors, which both companies vehemently denied. An Nvidia spokesperson told Gizmodo, “The claim is not true. OEMs decide on their system configurations, selecting GPU and then CPU to pair with it. We support both Intel and AMD CPUs across our product stack.”
An Intel spokesperson echoed the same sentiment: “These allegations are false and no such agreement exists. Intel is committed to conducting business with uncompromising integrity and professionalism.”
Nvidia and Intel’s firm denials certainly suggest this theory holds little to no water, yet I don’t think you necessarily even need their denial to know the theory is bunk. The fact is the Ryzen 4000-series was never going to be a strong contender for high-end mobile gaming.
There are three elements of AMD’s Ryzen 4000-series that likely factored into OEMs’ decision not to pair it with higher-end graphics cards: PCIe limitations, CPU cache, and the most obvious, single core performance.
Gaming relies more on single core performance than multi-core, and Intel typically has the better single core performance. This is true both historically and with regards to Intel’s 10th-gen versus AMD’s Ryzen 4000-series. Heck, the 10th-gen Core i9-10900K’s gaming benchmarks are even on par with AMD’s newer Ryzen 9 5950X when both are paired with an RTX 3080.
In our previous laptop testing, AMD’s Ryzen 9 4900HS in Asus’ ROG Zephyrus G14 had weaker single core performance than the Intel Core i7-10875H in MSI’s Creator 15. The Core i7-10875H is not at the top of Intel’s 10th-gen mobile line, but the Ryzen 9 4900HS is at the top of AMD’s. Yet with close to the same GPU (RTX 2060 Max-Q on the G14, RTX 2060 on the Creator 15), the Intel system still averaged 1-3 fps higher (1080p, ultra settings). Pairing a more powerful GPU with a Ryzen 9 4900HS most likely would have bottlenecked some games due to the single core performance.
That’s gonna lead to less than stellar performance compared to Intel’s offerings—particularly when combined with the wimpy L3 CPU cache in the Ryzen 4000 series. Just 8MB of L3. That’s half of Intel’s. So the average time it takes to pull data from main memory would be higher than on Intel’s comparable mobile processors, because more requests miss the cache and have to go all the way out to RAM.
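The cache argument boils down to the classic average memory access time (AMAT) formula: hit time plus miss rate times miss penalty. A smaller L3 tends to miss more often on the same workload, which drags the average up. Here’s a minimal sketch of that math—the latencies and miss rates are hypothetical round numbers for illustration, not measured figures for Ryzen 4000 or Intel 10th-gen chips:

```python
# Illustrative average memory access time (AMAT) calculation.
# All numbers below are made-up round figures, NOT measurements
# of any real Ryzen or Intel processor.

def amat(l3_hit_ns, l3_miss_rate, dram_penalty_ns):
    """AMAT = L3 hit time + (miss rate * DRAM penalty)."""
    return l3_hit_ns + l3_miss_rate * dram_penalty_ns

# A smaller L3 typically misses more often on the same workload.
smaller_l3 = amat(l3_hit_ns=10, l3_miss_rate=0.12, dram_penalty_ns=80)
larger_l3  = amat(l3_hit_ns=10, l3_miss_rate=0.08, dram_penalty_ns=80)

print(f"smaller L3: {smaller_l3:.1f} ns avg, larger L3: {larger_l3:.1f} ns avg")
```

With these toy numbers the smaller cache averages 19.6 ns per access versus 16.4 ns—a few nanoseconds that, multiplied across millions of memory accesses per frame, is where the fps gap comes from.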
The PCIe limitations of the Ryzen 4000 series could have also contributed to OEMs’ reluctance to adopt it, but that idea is a little shakier. It originated from a blog post on igor’sLAB explaining that, since Ryzen 4000 CPUs dedicate only eight PCIe 3.0 lanes to a discrete GPU, this could cause a bottleneck if paired with anything higher than an RTX 2060. Every PCIe device requires a certain number of lanes to operate at full capacity, and both Nvidia and AMD GPUs are designed for 16. Because Intel’s 10th-gen processors support all 16 lanes, that made them a better match for the RTX 2070 and higher GPUs in last year’s slate of gaming laptops.
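To put numbers on what halving the lanes actually costs: the PCIe 3.0 spec runs each lane at 8 GT/s with 128b/130b encoding, which works out to roughly 0.985 GB/s of usable bandwidth per lane in each direction. A quick back-of-the-envelope calculation shows the x8 vs. x16 gap:

```python
# Back-of-the-envelope PCIe 3.0 link bandwidth.
# PCIe 3.0 runs at 8 GT/s per lane with 128b/130b line encoding,
# so usable bandwidth per lane is about 8 * (128/130) / 8 bits-per-byte,
# or roughly 0.985 GB/s in each direction.

GT_PER_S = 8.0        # PCIe 3.0 transfer rate per lane
ENCODING = 128 / 130  # 128b/130b line-code efficiency

def pcie3_bandwidth_gbps(lanes):
    """Approximate one-direction bandwidth in GB/s for a PCIe 3.0 link."""
    return lanes * GT_PER_S * ENCODING / 8

print(f"x8:  {pcie3_bandwidth_gbps(8):.2f} GB/s")   # roughly 7.9 GB/s
print(f"x16: {pcie3_bandwidth_gbps(16):.2f} GB/s")  # roughly 15.8 GB/s
```

So an x8 link tops out near 7.9 GB/s instead of 15.8 GB/s. Whether that matters depends entirely on whether a game actually saturates the link—which, as the testing below shows, most don’t.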
However, many people across Reddit and other online forums have pointed out that the performance drop from pairing a Ryzen 4000 CPU with an RTX 2070 or higher GPU would be very small, if noticeable at all, so to them the explanation didn’t make sense. (More fuel for the conspiracy theory.) Naturally, I had to test this all out myself to see if there really is a drop in performance going from 16 lanes to 8.
Running my own tests, I found that 16 lanes does indeed offer better performance on higher-end GPUs, but that the difference can also be pretty dang negligible. Granted, I used a much more powerful processor than the Ryzen 9 4900HS, so it was plenty capable of handling an RTX 2060 and higher regardless of how many PCIe lanes were available.
My test PC was configured with: an Intel Core i9-10900K, Asus ROG Maximus XII Extreme, 16GB (8GB x 2) G.Skill Trident Z Royal DDR4-3600 DRAM, Samsung 970 Evo 500GB M.2 PCIe SSD, a Seasonic 1000W PSU, and a Corsair H150i Pro RGB 360mm AIO for cooling.
Gaming performance barely changed after I switched the PCIe configuration from 16-lanes to 8-lanes, but the performance difference was noticeable in synthetic benchmarks. Comparing an RTX 2060 to an RTX 2070 Super (the closest GPU I had on hand to an RTX 2070), I ran benchmarking tests in GeekBench 5, 3DMark, PCMark 10, Shadow of the Tomb Raider, and Metro Exodus, some of which are part of our usual slate of tests.
Frame rates increased by a maximum of only 4 fps, with the most noticeable difference being Shadow of the Tomb Raider at 1080p. This backs up what many have said: gaming performance isn’t substantially affected by cutting the number of PCIe lanes to the GPU in half until you get to something as powerful as the RTX 2080 Ti.
The synthetic benchmark tests didn’t change much from 8-lane to 16-lane with the RTX 2060, but the difference in scores was more pronounced with the RTX 2070 Super, suggesting there is a measurable difference that might matter in other applications. The RTX 2070 Super’s GeekBench score jumped up by 3000 when all 16 lanes were made available to the GPU. Time Spy yielded results in-line with the gaming benchmarks, and oddly the RTX 2060 saw a bigger boost in the PCMark test compared to the 2070 Super.
Of course, synthetic benchmarks aren’t a measure of real-world performance, and PCIe bandwidth isn’t the main thing that’s going to slow down your system. But considering a lot of reviewers use these metrics to paint a picture of a laptop or desktop, any of the AMD 4000 series processors paired with something higher than a RTX 2060 would have seen lower than usual scores. For higher-end GPUs that are “performance-driven,” every extra number, every extra frame matters, especially when there’s a lot of OEMs competing for a spot on your desk.
This suggests that, yes, OEMs will favor the “better” CPU, even if it is only marginally better. The lack of AMD 4000 series processors paired with higher-end Nvidia graphics could be the result of OEMs underestimating how many consumers were actually interested in that sort of laptop configuration last year, but it more likely has to do with the 4000 series’ smaller L3 cache and slower single core speeds. Sure, the RTX 2070 and higher can run fine on PCIe x8, but if the CPU doesn’t have the juice to handle the GPU, none of that matters.
There’s one final point that debunks this theory: if Intel and Nvidia were colluding to shut AMD out, then why are more OEMs wholeheartedly embracing the AMD/Nvidia combo this time around? Many of their AMD Ryzen 5000 series-powered laptops will have up to an RTX 3070 or 3080; the newest AMD Ryzen mobile processors have 20MB of combined L3+L2 cache and support up to 24 PCIe Gen4 lanes (16 of them dedicated to a discrete GPU)—exactly what they need to pair nicely with something higher than a mid-range card.
Companies are regularly found to be involved in a myriad of shady activities that boost their bottom line while harming consumers and affecting the choices we make every time we step into a Best Buy with money burning in our pockets. But no, Intel and Nvidia are probably not to blame for the slow adoption of AMD CPUs by OEMs. AMD has had to spend the last few years rebuilding its reputation and creating processors that really rival Intel in the mobile space and can support the powerful GPUs Nvidia produces for laptops.
The Ryzen 4000-series was very good, but not quite ready to compete in the areas that matter most to gamers, and gaming laptop OEMs. The Ryzen 5000 series, if OEM adoption indicates anything, is going to be a whole different beast. And it will likely be found in all the big gaming laptops the 4000 series was not. Nvidia and Intel have nothing to do with it.