Google's IP: Tensor TPU/NPU

At the heart of the Google Tensor we find the TPU, which actually gives the chip its marketing name. Developed by Google with input and feedback from the company’s research teams, and taking advantage of years of extensive experience in the field of machine learning, the new TPU is where Google puts a lot of the value in the experiences it enables for the Pixel 6 phones. There’s a lot to talk about here, but let’s first try to break down some numbers, to see where the performance of the Tensor ends up relative to the competition.

We start off with MLCommons’ MLPerf – the benchmark suite works closely with all industry vendors in designing something that is representative of actual workloads that run on devices. We also run variants of the benchmark which are able to take advantage of the various vendors’ SDKs and acceleration frameworks. Google had sent us a variant of the MLPerf app to test the Pixel 6 phones with – it’s to be noted that the workloads on the Tensor run via NNAPI, while the other phones are optimised to run through the respective chip vendor’s libraries, such as Qualcomm’s SNPE, Samsung’s EDEN, or MediaTek’s Neuron. Unfortunately the Apple variant is lacking CoreML acceleration, thus we should expect lower scores on the A15.
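
To give a sense of what “running via NNAPI” means in practice, below is a minimal sketch of how a model gets handed to NNAPI on Android, after which the vendor’s driver decides whether the TPU/NPU, GPU, DSP, or CPU runs it. This assumes a TensorFlow Lite pipeline, which is our assumption rather than the actual MLPerf app code; vendor-SDK paths like SNPE, EDEN, or Neuron bypass this layer entirely with their own APIs.

```kotlin
import org.tensorflow.lite.Interpreter
import org.tensorflow.lite.nnapi.NnApiDelegate
import java.nio.MappedByteBuffer

// Minimal sketch: attach the NNAPI delegate to a TFLite interpreter so the OS
// decides which accelerator (TPU/NPU, GPU, DSP or CPU) actually runs the model.
// Operations the driver rejects silently fall back to the CPU path.
fun buildNnapiInterpreter(model: MappedByteBuffer): Interpreter {
    val nnapi = NnApiDelegate()
    val options = Interpreter.Options().addDelegate(nnapi)
    return Interpreter(model, options)
}
```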

[Charts: MLPerf 1.0.1 - Image Classification, Object Detection, Image Segmentation, and Image Classification (Offline)]

Starting off with the Image Classification, Object Detection, and Image Segmentation workloads, the Pixel 6 Pro and the Google Tensor showcase good performance, and the phone is able to outperform the Exynos 2100’s NPU and software stack. More recently, Qualcomm has optimised its software implementation for MLPerf 1.1, achieving higher scores than it did a few months ago, and this allows the Snapdragon 888 to post significantly better results than what we’re seeing on the Google Tensor and the TPU – at least for these workloads, on the current software releases and optimisations.

[Chart: MLPerf 1.0.1 - Language Processing]

The Language Processing test of MLPerf is a MobileBERT model, and here, whether for architectural reasons of the TPU or simply thanks to a vastly superior software implementation, the Google Tensor is able to obliterate the competition in terms of inference speed.
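
As a rough, hypothetical illustration of single-stream measurement, the sketch below times individual inferences of a MobileBERT-style TFLite model one at a time and reports a high latency percentile; the tensor shapes and the percentile are placeholder assumptions, not the actual MLPerf harness behaviour. The interpreter could be the NNAPI-backed one from the earlier sketch.

```kotlin
import org.tensorflow.lite.Interpreter

// Rough sketch of a single-stream latency measurement: run one inference at a
// time and report a high percentile of the observed latencies. The output
// shapes are placeholders for a MobileBERT-style question-answering model.
fun latencyPercentileMs(interpreter: Interpreter, inputs: Array<Any>, runs: Int = 100): Double {
    val outputs = mapOf<Int, Any>(
        0 to Array(1) { FloatArray(384) },   // e.g. start logits (placeholder shape)
        1 to Array(1) { FloatArray(384) }    // e.g. end logits (placeholder shape)
    )
    val latencies = DoubleArray(runs) {
        val start = System.nanoTime()
        interpreter.runForMultipleInputsOutputs(inputs, outputs)
        (System.nanoTime() - start) / 1e6    // nanoseconds -> milliseconds
    }
    latencies.sort()
    return latencies[(runs * 90) / 100]      // ~90th percentile
}
```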

In Google’s marketing, language processing features such as live transcription and live translation are major parts of the differentiating experiences that the new Google Tensor enables for the Pixel 6 series devices – in fact, when talking about the TPU’s performance, it’s exactly these workloads that the company highlights as the killer use-cases and describes as state-of-the-art.

If the scores here are indeed a direct representation of Google’s design focus of the TPU, then that’s a massively impressive competitive advantage over other platforms, as it represents a giant leap in performance.

[Chart: GeekBench ML 0.5.0]

Another benchmark we have available is GeekBench ML, which is currently still in a pre-release state, meaning the models and acceleration backends can still change in further updates.

The performance here depends on the APIs used, with the test either using TensorFlow Lite delegates for the GPU or CPU, or going through NNAPI on Android devices (and CoreML on iOS). The GPU results should only represent GPU ML performance, which is surprisingly not that great on the Tensor, as it somehow lands below the Exynos 2100’s GPU.
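
As a rough sketch of that backend split (again assuming a TensorFlow Lite pipeline, which is an assumption on our part rather than GeekBench’s actual implementation), choosing between the CPU, GPU-delegate, and NNAPI paths looks something like this:

```kotlin
import org.tensorflow.lite.Interpreter
import org.tensorflow.lite.gpu.GpuDelegate
import org.tensorflow.lite.nnapi.NnApiDelegate
import java.nio.MappedByteBuffer

// Same model, three execution paths: CPU threads, the TFLite GPU delegate
// (the Mali G78 on Tensor), or NNAPI, where the vendor driver picks the block.
enum class Backend { CPU, GPU, NNAPI }

fun interpreterFor(model: MappedByteBuffer, backend: Backend): Interpreter {
    val options = Interpreter.Options()
    when (backend) {
        Backend.CPU   -> options.setNumThreads(4)
        Backend.GPU   -> options.addDelegate(GpuDelegate())
        Backend.NNAPI -> options.addDelegate(NnApiDelegate())
    }
    return Interpreter(model, options)
}
```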

In NNAPI mode, the Tensor is able to more clearly distinguish itself from the other SoCs, showcasing a 44% lead over the Snapdragon 888. It’s likely this represents the TPU’s performance lead, however it’s very hard to come to conclusions when going through such abstraction-layer APIs.

[Chart: AI Benchmark 4 - NNAPI (CPU+GPU+NPU)]

In AI Benchmark 4, when running the benchmark in pure NNAPI mode, the Google Tensor again showcases a very large performance advantage over the competition. Again, it’s hard to come to conclusions as to what’s driving the performance here, as the benchmark makes use of the CPU, GPU, and NPU.

I briefly looked at the power profile of the Pixel 6 Pro when running the test, and it showcased similar power figures to the Exynos 2100, with extremely high burst power figures of up to 14W when doing individual inferences. Given the much higher performance the Tensor showcases, that also means it’s that much more efficient. The Snapdragon 888 peaked at around 12W in the same workloads, so the efficiency gap there isn’t as large, however it’s still in favour of Google’s chip.
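
To illustrate why finishing faster can outweigh a higher power draw, here is a small worked sketch of energy per inference. The power figures are the peaks mentioned above, while the throughput numbers are purely illustrative placeholders and not measurements from our testing.

```kotlin
// Energy per inference (J) = average power (W) / throughput (inferences per second).
// The throughput figures below are made-up placeholders, purely to illustrate the point.
data class Run(val avgPowerWatts: Double, val inferencesPerSec: Double) {
    val joulesPerInference get() = avgPowerWatts / inferencesPerSec
}

fun main() {
    val tensor = Run(avgPowerWatts = 14.0, inferencesPerSec = 800.0)     // placeholder throughput
    val snapdragon = Run(avgPowerWatts = 12.0, inferencesPerSec = 500.0) // placeholder throughput
    println("Tensor:         %.1f mJ/inference".format(tensor.joulesPerInference * 1000))
    println("Snapdragon 888: %.1f mJ/inference".format(snapdragon.joulesPerInference * 1000))
    // A chip that draws more power can still use less energy per inference
    // if it completes the work enough faster.
}
```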

All in all, the ML performance of the Tensor has been Google’s main marketing point, and the chip doesn’t disappoint in that regard, as the TPU is seemingly able to showcase extremely large performance advantages over the competition. While power draw is still very high, completing an inference faster means that energy efficiency is also much better.

I asked Google about their plans for the software side of the TPU – whether they’ll be releasing a public SDK for developers to tap into the TPU, or whether things will remain more NNAPI-centric, as they are today on the Pixels. The company wouldn’t commit to any plans yet as it’s still very early – in general, that’s the same tone we’ve heard from other companies, as even Samsung, two years after the release of their first-gen NPU, still doesn’t make their EDEN SDK publicly available. Google notes that there is massive performance potential in the TPU, and that the Pixel 6 phones are able to use it in first-party software, which enables the many ML features in the camera as well as the many translation features on the phone.

Comments

  • sharath.naik - Saturday, November 13, 2021 - link

    Looks like you both do not value the concept of understanding the topic before responding, so pay attention this time. I am talking about hardware binning, not software binning like everyone else does for low light. Hardware binning means the sensors NEVER produce anything other than 12MP. Do both of you understand what NEVER means? Never means these sensors are NEVER capable of 50MP or 48MP. NEVER means Pixel 3x zoom shots are 1.3MP low resolution images (yes, that is all your portrait modes). NEVER means at 10x, Pixel images are down to 2.5MP.
    Next time, both of you learn to read and learn to listen before responding like you do.
  • meacupla - Tuesday, November 2, 2021 - link

    IDK about you, but live translation is very useful if you have to interact with people who can't speak a language you can speak fluently.
  • BigDH01 - Tuesday, November 2, 2021 - link

    Agreed, this is useful in those situations where it's needed, but those situations probably aren't very common for those of us who don't do a lot of international travel. In local situations with non-native English speakers, typically enough English is still known to "get by."
  • Justwork - Friday, November 5, 2021 - link

    Not always. My in-law, who knows no English, just moved in, and I barely speak their native language. We've always relied on translation apps to get by. When I got the P6 this weekend, both our lives just got dramatically better. The experience is just so much more integrated and way faster. No more spending minutes pausing while we type responses. The live translate is literally life changing because of how improved it is. I know others in my situation, it's not that uncommon, and they are very excited for this phone because of this one capability.
  • name99 - Tuesday, November 2, 2021 - link

    Agreed, but, in the context of the review:
    - does this need to run locally? (My guess is yes: non-local is noticeably slower, and requires an internet connection you may not have.)
    - does anyone run it locally? (no idea)
    - is the constraint on running it locally and well the amount of inference HW? Or the model size? Or something else like the CPU? i.e. does Tensor the chip actually do this better than QC (or Apple, or, hell, an Intel laptop)?
  • SonOfKratos - Tuesday, November 2, 2021 - link

    Wow. You know what, the fact that the phone has a modem to compete with Qualcomm for the first time in the US is good enough for me. The more competition the better; yes, Qualcomm is still collecting royalties for their patents, but who cares.
  • Alistair - Tuesday, November 2, 2021 - link

    That's a lot of words to basically state the truth: Tensor is a cheap chip, nothing new here. Next. I'm waiting for Samsung + AMD.
  • Alistair - Tuesday, November 2, 2021 - link

    phones are cheap too, but too expensive
  • Wrs - Tuesday, November 2, 2021 - link

    Surprisingly large for being cheap. Dual X1's with 1 MB cache, 20 Mali cores. So many inefficiencies just to get a language translation block. As if both the engineers and the bean counters fell asleep. To be fair, it's a first-gen phone SoC for Google.

    Idk if I regard Samsung + AMD as much better, though. Once upon a time AMD had a low-power graphics department. They sold that off to Qualcomm over a decade ago. So this would probably be AMD's first gen on phones too. And the ARM X1 core remains a problem. It's customizable but the blueprint seems to throw efficiency out the window for performance. You don't want any silicon in a phone that throws efficiency out the window.
  • Alistair - Wednesday, November 3, 2021 - link

    Will we still be on X1 next year? I hope not. I'm hoping next year is finally the boost that Android needs for SoCs.
