A New Architecture

This is a first. Usually when we go into these performance previews we’re aware of the architecture we’re reviewing; all we’re missing are the intimate details of how well it performs. That was the case for Conroe, Nehalem and Lynnfield (we sat Westmere out until final hardware was ready). Sandy Bridge is a different story entirely.

Here’s what we do know.

Sandy Bridge is a 32nm CPU with an on-die GPU. While Clarkdale/Arrandale have a 45nm GPU on package, Sandy Bridge moves the GPU transistors on die. Not only is the GPU on die, but it also shares the CPU’s L3 cache.

There are two different GPU configurations, referred to internally as 1 core or 2 cores. A single GPU core in this case refers to 6 EUs (execution units), Intel’s basic graphics processing block (roughly what NVIDIA would call CUDA cores). Sandy Bridge will be offered in configurations with 6 or 12 EUs.

While the numbers may not sound like much, the Sandy Bridge GPU is significantly redesigned compared to what’s currently shipping. Intel has already announced a ~2x performance improvement over Clarkdale/Arrandale, and after testing Sandy Bridge I can say that Intel has achieved at least that.

Both the CPU and GPU on SB will be able to turbo independently of one another. If you’re playing a game that stresses the GPU more than the CPU, the CPU may run at stock speed (or lower) while the GPU uses the additional thermal headroom to clock up. The same applies in reverse if you’re running something computationally intensive on the CPU.

On the CPU side, little is known about the execution pipeline. Sandy Bridge adds support for AVX instructions, just like Bulldozer. The CPU will also have dedicated video transcoding hardware to fend off advances by GPUs in the transcoding space.

Caches remain mostly unchanged. The L1 cache is still 64KB (32KB instruction + 32KB data) and the L2 is still a low latency 256KB. I measured them at 4 and 10 cycles respectively, both unchanged. The L3 cache has changed, however.

Only the Core i7 2600 has an 8MB L3 cache; the 2400 and 2500 have a 6MB L3 and the 2100 has a 3MB L3. The L3 size should matter more with Sandy Bridge because it’s shared by the GPU whenever the integrated graphics is active. I am a bit puzzled why Intel strayed from the steadfast 2MB-of-L3-per-core ratio Nehalem’s lead architect wanted to commit to. I guess I’ll find out more from him at IDF :)

The other change appears to be either lower L3 cache latency or more aggressive prefetchers, or both. Although most third-party tools don’t accurately measure L3 latency, they can usually give you a rough idea of latency changes between similar architectures. In this case I turned to cachemem, which reported Sandy Bridge’s L3 latency as 26 cycles, down from ~35 on Lynnfield (Lynnfield’s actual L3 latency is 42 cycles).

As I mentioned before, I’m not sure whether this is the result of a lower latency L3 cache or more aggressive prefetchers, or both. I had limited time with the system and was unfortunately unable to do much more.

And that’s about it. I can fit everything I know about Sandy Bridge onto a single page and even then it’s not telling us much. We’ll certainly find out more at IDF next month. What I will say is this: Sandy Bridge is not a minor update. As you’ll soon see, the performance improvements the CPU will offer across the board will make most anyone want to upgrade.

Comments

  • Mithan - Tuesday, August 31, 2010 - link

    I will be buying one of these the day it comes out.

    The only question is whether I get a Core i5 or a Core i7. It will depend on price I guess, as the max I am willing to spend on an i7 CPU is $250.

    Anyways, should be a nice upgrade to my E8400.
  • starfalcon - Tuesday, August 31, 2010 - link

    Considering how great of a quad core the Core i5-750 is at $195, hopefully they'll have some great quad cores at about $200.
  • Sabresiberian - Tuesday, August 31, 2010 - link

    I've often wondered why people don't use WoW to test their video performance in the computers they are testing, and the obvious occurred to me - it so much depends on where you are and what the population is in the area you are in, that the frame rates vary widely. I imagine the frame rates reported here were for an area like Durotar with no one else in sight, heh. It would be a good place in terms of consistency, anyway, though less taxing that somewhere in Storm Peaks.

    WoW is often described as a CPU-intensive game, and so a great game to be included in tests of CPUs like you are doing here. Thanks for including it! I hope it is used for more video card tests as well; WoW may not be the most taxing test bed at lower end video, but at upper end in some areas it can hit 4 GHz i7 based Crossfired systems hard. I like playing at 85 Hz everywhere in the WoW universe I go - and Cataclysm will bring new video challenges, I'm sure.
  • drunkenrobot - Tuesday, August 31, 2010 - link

    I'm a bit disappointed at Intel's attempt to completely lock us out of over clocking all together. But maybe this is AMD's chance to win back the enthusiast market. If AMD sold only unlocked parts, they would have a market segment all to themselves...
  • theangryintern - Wednesday, September 1, 2010 - link

    OK, didn't see it in the article and don't really feel like wading through 200 comments. What I want to know is will we be able to either A) disable the onboard graphics if we have the latest and greatest bad-ass video card...or even better, B) Will it be able to run both at the same time in a configuration where when I'm doing just generic web surfing, emailing, etc, the Intel GPU is doing the work and the discrete card can power down (quieter and less heat generated), and then when I fire up a game, the discrete powers up and the onboard powers down?
  • JonnyDough - Thursday, September 2, 2010 - link

    Intel is screwing over minorities! Colorblind people unite!

    "Both H67 and P67 support 6Gbps SATA, however only on two ports. The remaining 4 SATA ports are 3Gbps. Motherboard manufacturers will color the 6Gbps ports differently to differentiate."
  • JonnyDough - Thursday, September 2, 2010 - link

    Higher performance integrated GPUs should help bring some of the gaming market back to the PC. That is a very good thing. :)
  • starx5 - Tuesday, September 7, 2010 - link

    I'm sorry anand but is this because your intel frendly?

    Come on..you have to run high resolution(2560x1600 or higher eyefinity) gaming benchmark too.

    Sandbridge is nothing if it doesnt have much supiror performance in high resolution gamming.

    But I know intel sucks. Even 980X is sometimes sucks in high resolution gaming.

    When I see your bench, I can clearly SEE your intel frendly. Espesilly in gaming part.

    Anand, of course your site is very popular(even in my country korea).

    But in reality..your nothing but a intel suckass indian.
  • wut - Friday, September 10, 2010 - link

    Stop. You're making yourself look like a bigoted fool.
  • mekdonel - Friday, November 5, 2010 - link

    Naaah, you're not a Korean. Only Americans make dumb spelling mistakes like "your" in place of "you're".
