Alongside Intel’s regular earnings report yesterday, the company also delivered a brief update on the state of one of their most important upcoming products, Meteor Lake. Intel’s first chiplet/tile-based SoC, which completed initial development last year, has now completed power-on testing and more. The news is not unexpected, but for Intel it still marks a notable milestone, and is important proof that both Meteor Lake and the Intel 4 process remain on track.

Meteor Lake, which is slated to be the basis of Intel’s 14th generation Core processors in 2023, is an important chip for the company on several levels. In terms of design, it is the first chiplet-based (or as Intel likes to put it, “disaggregated”) mass-market client SoC from the company. Intel’s roadmap for the Core lineup has the company using chiplet-style SoCs on a permanent basis going forward, so Meteor Lake is very important for Intel’s design and architecture teams as it’s going to be their first crack at client chiplets – and proof as to whether they can successfully pull it off.

Meanwhile Meteor Lake is also the first client part that will be built on the Intel 4 process, which was formerly known as Intel’s 7nm process. Intel 4 will mark Intel’s long-awaited (and delayed) transition to using EUV in patterning, making it one of the most significant changes to Intel’s fab technology since the company added FinFETs a decade ago. Given Intel’s fab troubles over the past few years, the company is understandably eager to show off any proof that its fab development cycle is back on track, and that they are going to make their previously declared manufacturing milestones.

As for this week’s power-on announcement, this is in line with Intel’s earlier expectations. At the company’s 2022 investor meeting back in February, Intel’s client roadmap presentation indicated that they were aiming for a Q2’22 power-on.

In fact, it would seem that Intel has slightly exceeded their own goals. A tweet put out today by Michelle Johnston Holthaus, the recently named EVP and GM of Intel’s Client Computing Group, announced that Meteor Lake had been powered on; but comments from CEO Pat Gelsinger indicate that Meteor Lake is doing even better than that. According to Gelsinger’s comments on yesterday’s earnings call, Meteor Lake has also been able to boot Windows, Chrome, and Linux. So while there remain many months of bring-up left to go, it would seem that Meteor Lake’s development is proceeding apace.

But that will be a story for 2023. Intel will first be getting Raptor Lake out the door later this year. The Alder Lake successor is being built on the same Intel 7 process as Alder Lake itself, and will feature an enhanced version of the Alder Lake architecture.

72 Comments

  • Kangal - Saturday, April 30, 2022 - link

    Well, what I said is true as long as all factors are controlled.
    So that means each chipset is using the same software (eg Windows10 Pro), same lithography (eg 6nm), and same architecture (no huge difference like Zen1 vs Zen3+).

    A large server that you use as a Cloud, needs to handle lots of queries at the same time. Here security is paramount, followed by multi-threaded performance, then efficiency. So AMD's chiplet design which can hold many cores in one chiplet, and combine many chiplets into a large chipset works great. You can greatly increase multi-threaded performance this way, and it won't decrease the efficiency by much, and will keep things secure. If one user requires more performance (or cache) the system might steal resources from another core, and this will take a hit to latency. But this latency is acceptable since there's a tertiary latency with the external connection between the User and the Server.

    On a Desktop PC, something like Intel's Core i9-9900k works well. And that's running on an outdated +14nm node, on an outdated architecture (Skylake), which was able to match a new architecture (Zen2) on a 7nm node. If you were able to refresh that with a newer architecture on a newer node, it will perform better against the chiplet design of AMD. The reason is that on a monolithic design, the whole chipset is stamped out by the same good node and it is a closer integrated system. Since this is a computer for a single user, performance (single-core) is much more useful than outright total/multithreaded performance and the system is connected to a power socket. So some efficiency is traded for more performance (ALL Large Cores).

    Now stepping down to Heavy Notebooks, here it is expected to run on battery power, and occasionally be plugged into a power socket to extract more power. For that reason, it is very beneficial to use small cores for Power Efficiency and balance them out with some Large Cores. Since this is a single user, there's no concern about security between cores/packages, so having a weird Hybrid Design is not very important.

    Moving down to Slim Laptops, here it's expected to be used exclusively on battery power. However, the nature of the form-factor does allow the installation of a decent-sized battery and a pretty potent cooling solution. So there is a pretty even balance between having small cores and Large Cores. Again, inter-core security isn't a problem here, and it does reduce efficiency a bit... but it becomes a benefit by providing ample performance at acceptable power drain.

    At the lowest end of the spectrum, at least for x86, are Thin Tablets. These use a passive cooling method, and due to battery/heat concerns don't charge particularly fast. Here Large Cores can come to the detriment of the experience. They might deliver an initial jolt of performance, but will use up a notable amount of battery power, then heat up the device, which causes it to run slow for the remainder of that duration. So the most optimal solution is really an all-small-cores solution, for maximum efficiency.
  • lmcd - Sunday, May 1, 2022 - link

    Congrats Kangal on typing a lot of words. They're still not accurate though! Your cursory understanding of chiplet design doesn't make you an expert.

    You have decided that heat is completely irrelevant to a desktop or server-class design. This is completely wrong. One of the big challenges when Zen 2 came out was that the chiplets produced hot spots that were extremely difficult to cool because the small dies did not disperse heat across a continuous silicon surface. Adding little cores is a perfect solution because they do not dramatically change the architecture of the tile/chiplet, add silicon to allow better heat dispersion, and also do not dramatically change the power profile of the tile. The net result is that the little cores are basically "free" in a tiled/chiplet architecture.

    If it wasn't obvious, this is part of why AMD is also releasing Zen 4c.
  • Kangal - Sunday, May 1, 2022 - link

    Thank You.
    I am an amateur at best, much like most commenters here such as yourself.

    Where did I say heat was irrelevant? I can't see it up there. Desktop PCs are designed for One User with Maximum Performance in mind. That is why they are connected directly to a power-socket, physically large, and have the highest cooling capacity (per user) compared to the other options. Yes, Desktop PCs cool more than Servers, because Servers run many users in parallel at the same time, and they are more balanced due to budget constraints (electricity).

    Not sure about what Epyc/Zen2 issues you speak about. They may have been isolated or early day issues, but I haven't heard anything about it. You can read the papers released by AMD on the topic of optimising thermals here:
    https://www.cs.utah.edu/wondp/eckert.pdf
    https://arxiv.org/pdf/2108.00808.pdf

    Your statement about Zen 4C is completely wrong. There is very little information about those cores, so anything you say is just misinformation. They could in fact be "small cores" as you infer, usually achieved by taking an older architecture (eg Zen1) and miniaturising and optimising it for efficiency and area. That's what Intel did with their Pentium-Celeron-Atom cores, and later with their Skylake-eCores. Or these might actually be the same Zen4 cores, but using better silicon, packaged differently on-die, and better voltage regulated. They may even have cut down on the L1/L2 cache slightly, or removed a bit of processing/hardware which is not useful for Cloud Computing. At the end of the day, it is all speculation.
  • lmcd - Monday, May 2, 2022 - link

    "On a Desktop PC, something like Intel's Core i9-9900k works well. And that's running on an outdated +14nm node, on an outdated architecture (Skylake), which was able to match a new architecture (Zen2) on a 7nm node. If you were able to refresh that with a newer architecture on a newer node, it will perform better against the chiplet design of AMD. The reason is that on a monolithic design, the whole chipset is stamped out by the same good node and it is a closer integrated system. Since this is a computer for a single user, performance (single-core) is much more useful than outright total/multithreaded performance and the system is connected to a power socket. So some efficiency is traded for more performance (ALL Large Cores)."

    There's two options for that hypothetical die. The die shrink with no other changes leads to no performance gain. Ballooning the die size back to the same area on a monolithic design just leads to leakage, horrible binning, and resulting heat increase. The same can be said for clockspeed increases. The only valid extrapolation from the paragraph I quoted is that you don't care about efficiency when scaling the design up. Otherwise it would be obvious why the 9900K is a dead end.

    As for the cooling issues, hello and welcome! https://www.igorslab.de/en/cpu-heatspreader-in-det...

    These issues are not isolated. They are not yet solved. I am sure they can be conquered but it's nontrivial. Point is -- building dies to better diffuse heat is a must when the "uncore" elements that used to add cooling surface are separated out.

    Zen 4c is guaranteed to be smaller than Zen 4. I can guarantee that AMD is not wasting any space on its Genoa platform in either of its high-end configurations using Zen 4 and Zen 4c. The expected difference is cache, but that's still a component of the core.

    I'm a bit astonished by the inaccuracies in your Intel history section. There's no such thing as a Skylake eCore. Intel drew upon P3 for early Atom designs but Silvermont and later are dramatically different.
  • michael2k - Saturday, April 30, 2022 - link

    Not true. If you’ve got 30 threads and 26 of them are time or performance insensitive, then you can schedule them on the efficiency cores. Doing so gives your performance cores a larger slice of the power budget, which in turn means your performance-sensitive tasks can run faster and hotter.

    I.e. a file copy, a network stack, an indexer, and a virus scan can all run on a set of low power cores that only sips 5W, leaving your main app the remaining 145W; the alternative is to split your performance cores across those tasks.
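    The split described above can be sketched with CPU affinity. A minimal sketch in Python, assuming a hypothetical 16-thread hybrid part where logical CPUs 0-7 are P-cores and 8-15 are E-cores (the numbering and core counts are assumptions, not any shipping chip's layout):

```python
import os

# Hypothetical hybrid-CPU layout (assumption, not a real part's numbering):
# logical CPUs 0-7 are P-cores, 8-15 are E-cores.
P_CORES = set(range(0, 8))
E_CORES = set(range(8, 16))

def pick_affinity(latency_sensitive: bool, available: set) -> set:
    """Choose a CPU set for a task: P-cores for hot paths, E-cores for background."""
    preferred = P_CORES if latency_sensitive else E_CORES
    chosen = preferred & available
    return chosen or available  # fall back if the machine lacks those CPUs

# Background tasks (file copy, indexer, virus scan) get the E-cores;
# the main app keeps the P-cores and most of the power budget.
background_cpus = pick_affinity(False, set(range(16)))
main_app_cpus = pick_affinity(True, set(range(16)))

# On Linux this would be applied to the current process with, e.g.:
# os.sched_setaffinity(0, pick_affinity(False, os.sched_getaffinity(0)))
```

    On Windows, tools like Process Lasso apply the same idea through per-process affinity masks; the OS scheduler (with Thread Director on hybrid Intel parts) normally makes this call automatically.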
  • JayNor - Saturday, April 30, 2022 - link

    Intel's Alder Lake eight e-cores are reportedly more performant than the four cores in my three year old NUC box... so looks like they have a point to me. The Raptor Lake chips are supposed to be drop-in upgrades, adding 8 more e-cores, so looks like a good upgrade path to me.

    I've also read some comments about using process lasso or a windows power profile as a way to effectively go fanless, so I'd be interested in trying to run exclusively on e-cores and silence the fans when I don't want to hear them.
  • JayNor - Saturday, April 30, 2022 - link

    as a reference, wikichip fuse has an article
    "Intel’s Gracemont Small Core Eclipses Last-Gen Big Core Performance"
    which presents Intel slides on Gracemont relative performance vs the Skylake cores.
  • hecksagon - Tuesday, May 3, 2022 - link

    Not at all. If I can fit 4 efficiency cores in the same area as 1 performance core and get greater multi-threaded performance than that performance core, then it definitely has merit on desktop. At the end of the day we are still die size limited. Getting more cores on that die is certainly useful.
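    That area trade-off can be put in rough numbers. A back-of-the-envelope sketch in Python, where the 4:1 area ratio and the ~60% per-core E-core performance figure are illustrative assumptions, not measured values:

```python
# All figures below are illustrative assumptions, not measured data.
P_CORE_AREA = 4.0   # assume one P-core takes the area of four E-cores
E_CORE_AREA = 1.0
P_CORE_PERF = 1.0   # single-thread perf, normalized to the P-core
E_CORE_PERF = 0.6   # assume an E-core hits ~60% of P-core ST perf

def throughput_per_area(perf: float, area: float) -> float:
    """Multi-threaded throughput contributed per unit of die area."""
    return perf / area

p_density = throughput_per_area(P_CORE_PERF, P_CORE_AREA)  # 0.25
e_density = throughput_per_area(E_CORE_PERF, E_CORE_AREA)  # 0.60

# Four E-cores in one P-core's footprint deliver 4 * 0.6 = 2.4x the
# multi-threaded throughput of the single P-core they replace, at the
# cost of lower single-thread performance on each core.
density_ratio = e_density / p_density
```

    Under these assumed numbers the E-cores win on throughput per area by 2.4x, which is the whole argument for spending leftover die area on them.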
  • abufrejoval - Saturday, April 30, 2022 - link

    Guys, what's going on?

    AT has become a real ghost town with the highest activity being some robot "Deals".

    Really great writers like Johan de Gelas or Andrei Frumusanu left, some without parting words, some with; but even the last full timer has had very few things to say for some time now.

    Do you have a legal obligation to keep the site running or do you actually believe you can get out of this stall?

    Closing AT would be terrible, but it doesn't get any better by dragging it out like a 19th century opera death.
  • flgt - Saturday, April 30, 2022 - link

    You know what's going on. No one pays for online content anymore. They are at the mercy of their advertisers. All those writers were good guys and probably made good money going into industry. It is what it is. No point in harassing the guys working their ass off to try and keep it going.
