The AMD Ryzen 9 9950X and Ryzen 9 9900X Review: Flagship Zen 5 Soars - and Stalls
by Gavin Bonshor on August 14, 2024 9:00 AM EST - Posted in
- CPUs
- AMD
- Desktop
- Zen 5
- AM5
- Ryzen 9000
- Ryzen 9 9950X
- Ryzen 9 9900X
Core-to-Core Latency: Zen 5 Gets Weird
As the core counts of modern CPUs grow, we are reaching the point where the latency to access one core from another is no longer a constant. Even before the advent of heterogeneous SoC designs, processors built on large rings or meshes could have different latencies to reach the nearest core compared to the furthest core. This rings especially true in multi-socket server environments.
But modern CPUs, even desktop and consumer CPUs, can have variable access latency to get to another core. For example, in the first generation Threadripper CPUs, we had four chips on the package, each with 8 threads, and each with a different core-to-core latency depending on whether the access stayed on-die or went off-die. This gets more complex with products like Lakefield, which has two different communication buses depending on which core is talking to which.
If you are a regular reader of AnandTech’s CPU reviews, you will recognize our Core-to-Core latency test. It’s a great way to show exactly how groups of cores are laid out on the silicon. This is a custom in-house test, and while we know there are competing tests out there, we feel ours is the most accurate representation of how quickly an access between two cores can happen.
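Our tool is custom and in-house, but the underlying technique is widely understood: pin two threads to specific logical cores and bounce a value between them through a shared cache line, timing the round trip. Below is a minimal sketch of that idea for Linux, not the actual test; the core IDs, iteration count, and use of pthread affinity are illustrative assumptions.

```cpp
// Minimal core-to-core "ping-pong" latency sketch (not the in-house tool).
// Two threads are pinned to specific logical cores and bounce a flag back
// and forth through a shared atomic; half the average round-trip time is a
// rough proxy for the one-way core-to-core latency.
// Build on Linux: g++ -O2 -pthread -std=c++17 c2c.cpp -o c2c
#include <atomic>
#include <chrono>
#include <cstdio>
#include <cstdlib>
#include <thread>
#include <pthread.h>
#include <sched.h>

static void pin_to_cpu(int cpu) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}

int main(int argc, char** argv) {
    // Logical core IDs to test; the defaults are arbitrary placeholders.
    const int cpu_a = (argc > 1) ? std::atoi(argv[1]) : 0;
    const int cpu_b = (argc > 2) ? std::atoi(argv[2]) : 1;
    constexpr int kIters = 1'000'000;

    std::atomic<int> ball{0};  // 0 = main thread's turn, 1 = responder's turn

    std::thread responder([&] {
        pin_to_cpu(cpu_b);
        for (int i = 0; i < kIters; ++i) {
            while (ball.load(std::memory_order_acquire) != 1) { /* spin */ }
            ball.store(0, std::memory_order_release);
        }
    });

    pin_to_cpu(cpu_a);
    const auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < kIters; ++i) {
        ball.store(1, std::memory_order_release);
        while (ball.load(std::memory_order_acquire) != 0) { /* spin */ }
    }
    const auto stop = std::chrono::steady_clock::now();
    responder.join();

    const double ns = std::chrono::duration<double, std::nano>(stop - start).count();
    std::printf("cores %d <-> %d: ~%.1f ns one-way\n", cpu_a, cpu_b, ns / kIters / 2.0);
}
```

Halving the measured round-trip time gives a rough one-way figure comparable to the cells in a latency matrix, although a production tool also has to control for clock speeds, prefetching, and timer overhead.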
Looking at the above latency matrix of the Ryzen 9 9950X, we observe that the lowest latencies naturally occur between adjacent cores on the same CCX. Core pairs such as 0-1, 1-2, and 2-3 consistently show latencies in the 18.6 to 20.5 nanosecond range. This is indicative of the fast L3 cache shared within the CCX, which ensures rapid communication between cores on the same complex.
Compared to the Ryzen 9 7950X, we are seeing a slight increase in latencies within a single CCX. The SMT "advantage", where two logical cores sharing a single physical core have a lower latency, appears to be gone. Instead, latencies are consistently around 20ns from any logical core to any other logical core within a single CCX. That average is slightly up from 18ns on the 7950X, though it's not clear what the chief contributing factor is.
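On Linux, the CCX grouping visible in the matrix can be cross-checked by asking the kernel which logical CPUs share an L3 cache: every group of logical cores reporting the same L3 sharing list belongs to the same CCX. A small sketch, assuming the usual sysfs layout where cache index3 is the L3:

```cpp
// Sketch: group logical CPUs by shared L3 cache (one group per CCX).
// Assumes the standard Linux sysfs layout where cache index3 is the L3.
#include <cstdio>
#include <fstream>
#include <map>
#include <string>
#include <thread>
#include <vector>

int main() {
    const unsigned ncpu = std::thread::hardware_concurrency();
    std::map<std::string, std::vector<unsigned>> l3_groups;

    for (unsigned cpu = 0; cpu < ncpu; ++cpu) {
        const std::string path = "/sys/devices/system/cpu/cpu" + std::to_string(cpu) +
                                 "/cache/index3/shared_cpu_list";
        std::ifstream f(path);
        std::string shared;
        if (f && std::getline(f, shared))
            l3_groups[shared].push_back(cpu);  // identical list => same L3 domain
    }

    int domain = 0;
    for (const auto& [cpu_list, cpus] : l3_groups)
        std::printf("L3 domain %d: CPUs %s (%zu logical cores)\n",
                    domain++, cpu_list.c_str(), cpus.size());
}
```

Feeding one logical core from each reported L3 domain into the ping-pong sketch above reproduces the cross-CCD case discussed below, while picking two cores from the same domain reproduces the intra-CCX case.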
More significant, and more worrying, are the inter-CCD latencies; that is, the latency to go from a core on one CCD to a core on the other CCD. AMD's multi-CCD Ryzen designs have always taken a penalty here, as communicating between different CCDs means taking a long trek through AMD's Infinity Fabric to the IOD and back out to the other CCD. But the inter-CCD latencies are much higher here than we were expecting.
For reference, on the Ryzen 9 7950X, going to another CCD is around 76ns. But in Ryzen 9 9950X, we're seeing an average latency of 180ns, over twice the cost of the previous generation of Ryzen. Making this all the more confusing, Granite Ridge (desktop Ryzen 9000) reuses the same IOD and Infinity Fabric configuration as Raphael (Ryzen 7000) – all AMD has done is swap out the Zen 4 CCDs for Zen 5 CCDs. So by all expectations, we should not be seeing significantly higher inter-CCD latency here.
Our current working theory is that this is a side effect of AMD's core parking changes for Ryzen 9000: cores are being aggressively put to sleep, and as a result it's taking an extra 100ns to wake them up. If that is correct, then our core-to-core latency test is just about the worst-case scenario for that strategy, as it sends data between cores in short bursts rather than running a sustained workload that keeps the cores alive over the long haul.
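If core parking really is the culprit, one simple way to probe the theory with a ping-pong test like the sketch above is to busy-spin on both cores immediately before the timed loop, so that neither core is asleep when measurement begins; if the latency then falls back toward 7950X-like figures, wake-up cost is implicated. A drop-in helper along those lines, with an entirely arbitrary 10 ms warm-up budget:

```cpp
// Drop-in helper for the ping-pong sketch above: call warm_up() on both
// threads right after pinning, so each core is already awake and at speed
// before timing starts. The 10 ms budget is an arbitrary assumption.
#include <chrono>

inline void warm_up(std::chrono::milliseconds budget = std::chrono::milliseconds(10)) {
    const auto deadline = std::chrono::steady_clock::now() + budget;
    volatile unsigned sink = 0;
    while (std::chrono::steady_clock::now() < deadline)
        sink = sink + 1;  // busy work; keeps the current core out of idle states
}
```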
At this point, we're running some additional tests on the 9950X without AMD's PPM provisioning driver installed, to see if that's having an impact. Otherwise, these high latencies, if accurate for all workloads, would represent a significant problem for multi-threaded workloads that straddle the Infinity Fabric.
123 Comments
TheinsanegamerN - Monday, August 19, 2024 - link
That's a red herring. Both are being sold on a feature that isn't used yet.
Oxford Guy - Thursday, August 22, 2024 - link
Given how poor the competition is from Intel, the red herring is expecting Zen 5 to be a big improvement in anything other than AVX-512. If Intel were in a highly competitive position it would be different.
Heavensrevenge - Thursday, August 15, 2024 - link
The biggest problem is using Microsoft Windows as the benchmark platform. Linux benchmarks show the true numbers AMD can give; it's just that the Windows kernel isn't using the hardware to its potential, but Linux can.
ondma - Thursday, August 15, 2024 - link
Huh?? The "true" numbers you get are the numbers you get with the operating system you are using.
TheinsanegamerN - Monday, August 19, 2024 - link
Huh?? The "true" numbers for hardware are what the hardware provides; if your OS is screwing up those numbers, that error should be corrected.
James5mith - Thursday, August 15, 2024 - link
Seems like the interim answer while you wait for a fix from AMD is simply to re-run the tests without the PPM driver.
Dante Verizon - Thursday, August 15, 2024 - link
Does Zen 5 use a mesh instead of a ring bus? If so, that's the explanation for the horrible latency.
evanh - Friday, August 16, 2024 - link
Why the latency is so bad is something that needs investigating. The poor latency to DRAM could explain why games are hit hard: they tend to need large amounts of main memory and rapidly bounce around it. That would also be why the X3D parts excel in that environment.
evanh - Friday, August 16, 2024 - link
PS: It was noted by TechPowerUp that disabling SMT has a positive effect on most tests, not just games, which should be the other way around.
Ryan Smith - Friday, August 16, 2024 - link
The latency to DRAM is fine (~94ns for a 128MB access). The oddity we're looking into right now is the die-to-die latency. It's taking around 200ns for one CCD to reach the other.