Sabrent Rocket Q4 and Corsair MP600 CORE NVMe SSDs Reviewed: PCIe 4.0 with QLC
by Billy Tallis on April 9, 2021 12:45 PM EST

AnandTech Storage Bench - The Destroyer
Our AnandTech Storage Bench tests are traces (recordings) of real-world IO patterns that are replayed onto the drives under test. The Destroyer is the longest and most difficult phase of our consumer SSD test suite. For more details, please see the overview of our 2021 Consumer SSD Benchmark Suite.
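As a rough illustration of what replaying a trace involves, here is a minimal sketch that assumes a simple CSV trace of (timestamp, operation, offset, size) records; the file format and helper names are hypothetical, not AnandTech's actual test harness.

```python
# A minimal sketch of IO trace replay (illustrative only, not AnandTech's
# actual harness). Assumes a CSV trace with columns:
# timestamp_s, op ("read" or "write"), offset_bytes, size_bytes.
import csv
import time

def replay_trace(trace_path: str, target_path: str) -> list:
    """Replay each IO at its recorded time; return per-IO latencies in seconds."""
    latencies = []
    with open(trace_path, newline="") as trace, \
         open(target_path, "r+b", buffering=0) as target:
        start = time.perf_counter()
        for row in csv.DictReader(trace):
            # Wait until the trace says this IO should be issued.
            delay = float(row["timestamp_s"]) - (time.perf_counter() - start)
            if delay > 0:
                time.sleep(delay)
            target.seek(int(row["offset_bytes"]))
            t0 = time.perf_counter()
            if row["op"] == "read":
                target.read(int(row["size_bytes"]))
            else:
                target.write(bytes(int(row["size_bytes"])))  # zero-filled payload
            latencies.append(time.perf_counter() - t0)
    return latencies
```

A real replay harness would also reproduce queue depths and idle times; the sketch only captures the basic issue-and-time loop.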
[Benchmark charts: Average Data Rate; Average Latency (overall / read / write); 99th Percentile Latency (overall / read / write); Energy Usage]
The 4TB Sabrent Rocket Q4 turns in excellent scores on The Destroyer, helped greatly by the fact that the test fits entirely within the drive's SLC cache, so write latency stays minimal. The 2TB Corsair MP600 CORE still has decent overall performance, with solid 99th percentile latency scores indicating that it doesn't run into the kind of severe latency spikes that can be common with QLC NAND.
The major downside is that these are among the most power-hungry drives, consuming a bit more energy than the TLC-based Phison E16 drive and significantly more than any of the other drives in this batch.
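For context on the latency metrics above: the 99th percentile figure is simply the latency that 99% of IOs complete within, which is why a handful of slow outliers can blow it up even when the average looks fine. A minimal, generic way to compute both from a list of per-IO latencies (the sample numbers are made up):

```python
# Generic sketch of how average and 99th percentile latency are derived from a
# list of per-IO latencies (in milliseconds); not the review's actual tooling.
import math

def percentile(latencies_ms, pct):
    """Nearest-rank percentile: the value that pct% of IOs complete within."""
    ordered = sorted(latencies_ms)
    rank = math.ceil(pct / 100 * len(ordered))
    return ordered[rank - 1]

latencies_ms = [0.08, 0.09, 0.10, 0.11, 0.12, 0.15, 0.90, 5.0]  # made-up sample
avg = sum(latencies_ms) / len(latencies_ms)
p99 = percentile(latencies_ms, 99)
print(f"average: {avg:.2f} ms, 99th percentile: {p99:.2f} ms")
```

In the made-up sample, a single 5 ms outlier dominates the 99th percentile while barely moving the average, which is why the review tracks both metrics.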
AnandTech Storage Bench - Heavy
The ATSB Heavy test is much shorter overall than The Destroyer, but is still fairly write-intensive. We run this test twice: first on a mostly-empty drive, and again on a completely full drive to show the worst-case performance.
[Benchmark charts: Average Data Rate; Average Latency (overall / read / write); 99th Percentile Latency (overall / read / write); Energy Usage]
The shorter duration of the Heavy test means that smaller drives can also get good mileage out of their SLC caches, so the 4TB Sabrent Rocket Q4 loses the advantage it had on The Destroyer. The Rocket Q4 and the Corsair MP600 CORE both turn in good scores overall for low-end drives, with clear improvement over the Phison E12 QLC drives.
However, on the full-drive test runs the 2TB MP600 CORE shows some elevated latency. It's not as bad as on QLC SATA drives or some competing QLC NVMe drives, so this isn't a serious concern overall, but it does underscore how much capacity (and SLC cache) QLC SSDs need in order to stay close to the performance of TLC SSDs.
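To illustrate why capacity matters so much for QLC drives with dynamic SLC caching (a rough, hypothetical model, not these drives' actual cache specifications): QLC cells used in SLC mode hold one bit instead of four, so the available cache shrinks rapidly as the drive fills.

```python
# Rough illustration of dynamic SLC cache sizing on a QLC drive. A QLC cell
# used in SLC mode stores 1 bit instead of 4, so caching X GB costs 4X GB of
# user capacity. Numbers here are hypothetical, not these drives' actual specs.

def max_slc_cache_gb(capacity_gb, used_gb):
    """Largest SLC cache that fits in the remaining free space (4:1 capacity cost)."""
    free_gb = max(capacity_gb - used_gb, 0)
    return free_gb / 4

for used in (0, 1000, 1800):
    print(f"2TB drive, {used} GB used: up to {max_slc_cache_gb(2000, used):.0f} GB of SLC cache")
```

With this toy model, a 2TB drive that starts with roughly 500 GB of potential cache is down to about 50 GB once it is 90% full, which is when the full-drive latency penalties show up.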
AnandTech Storage Bench - Light
The ATSB Light test represents ordinary everyday usage that doesn't put much strain on an SSD. Low queue depths, short bursts of IO and a short overall test duration mean this should be easy for any SSD. But running it a second time on a full drive shows how even storage-light workloads can be affected by SSD performance degradation.
[Benchmark charts: Average Data Rate; Average Latency (overall / read / write); 99th Percentile Latency (overall / read / write); Energy Usage]
Both of the Gen4 QLC drives provide top-tier performance on the empty-drive runs of the Light test, and they still deliver acceptable performance on the full-drive runs, with no serious latency spikes. As with the other ATSB tests, they come in last place for energy efficiency.
PCMark 10 Storage Benchmarks
The PCMark 10 Storage benchmarks are IO trace based tests similar to our own ATSB tests. For more details, please see the overview of our 2021 Consumer SSD Benchmark Suite.
[Benchmark charts: Full System Drive, Quick System Drive, and Data Drive (Overall Score / Average Bandwidth / Average Latency)]
The two PCIe Gen4 QLC drives offer good performance on the Quick System Drive and Data Drive tests, which are shorter and more focused on sequential IO. The longer Full System Drive test, with more random IO, stresses these drives enough for their low-end nature to show through - in stark contrast to the Intel SSD 670p, which manages very good scores on both of the system drive tests.
Comments
ZolaIII - Friday, April 9, 2021 - link
Actually 5.6 years, but compared to the same MP600 with TLC it's 8x that, or 44.8 years, and for just a little more money. But seriously, buying a 1 TB MP600, which will be enough regarding capacity and which will last 22.4 years by the same math (vs 2.8 for the Core), makes a hell of a difference.

WaltC - Saturday, April 10, 2021 - link
In far less than 22 years your entire system will have been replaced...;) I.e., for the useful life of the drive you will never wear it out. The importance some people place on "endurance" is really weird. I have a 960 EVO NVMe with an endurance rating of 75TB: the drive is three years old this month and served as my boot drive for two of those three years, and I've written 19.6TB as of today. Rounding off, I have 55TB of write endurance remaining. That makes for an average of 6.5TB written per year--but the drive is no longer my boot/Win10-build install drive, so an average of 5TB per year as strictly a data drive is probably an overestimate, but just for fun, let's call it 5TB of writes per year. That means I have *at least* 11 years of write endurance remaining for this drive--which would mean the drive would have lasted at least 14 years in daily use before wearing out. Anyone think that 11 years from now I'll still be using that drive on a daily basis? I don't...;) The fact is that people worry needlessly about write endurance unless they are using these drives in some kind of mega heavy-use commercial setting. Write endurance estimates of 20-30 years are absurd, and when choosing a drive for your personal system such estimates should be ignored as they have no meaning--the drives will be obsolete long before they wear out. So, buy the drive performance at the price you want to pay and don't worry about write endurance, as even 75TB is plenty for personal systems.
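The arithmetic in that comment boils down to a simple remaining-endurance estimate; a quick sketch using the numbers quoted above (the ~5TB/year figure going forward is the commenter's own assumption):

```python
# Back-of-the-envelope SSD endurance math, mirroring the figures in the
# comment above (75TB rated endurance, 19.6TB written in 3 years, and an
# assumed ~5TB/year of writes going forward). Values are illustrative only.

def remaining_endurance_years(rated_tbw, written_tb, writes_per_year_tb):
    """Estimate how many years of writes remain before the rated TBW is reached."""
    remaining_tb = rated_tbw - written_tb
    return remaining_tb / writes_per_year_tb

years_left = remaining_endurance_years(rated_tbw=75.0,
                                        written_tb=19.6,
                                        writes_per_year_tb=5.0)
print(f"Remaining endurance: ~{years_left:.0f} years")  # ~11 years
```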
GeoffreyA - Sunday, April 11, 2021 - link
It would be interesting to put today's drives through an endurance experiment and see whether their actual and advertised ratings square.

ZolaIII - Sunday, April 11, 2021 - link
I have 2 TB of writes per month, using the PC for productivity, gaming and transcoding, and that's still not too much. If I used it professionally for video that number would be much higher (high-bandwidth mastering codecs). Hell, transcoding a single Blu-ray movie quickly (with the GPU, for the sake of making it HLG10+) will eat up to 150GB of writes, and that's not a rocket-science task to perform. By the way, it's not as if the PCIe interface is going anywhere, and you can mount an old NVMe drive in a new machine.

Oxford Guy - Sunday, April 11, 2021 - link
One can't choose performance with QLC. It's inherently slower. It's also inherently reduced in longevity.
Remember, it has twice as many voltage states (causing a much bigger issue with charge drift) for just a ~33% density increase.
That's diminishing returns.
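The numbers behind that point, as a generic back-of-the-envelope calculation (not specific to any drive in the review):

```python
# The trade-off behind the comment above: each extra bit per cell doubles the
# number of voltage states the controller must distinguish, but only adds one
# bit of capacity. Generic arithmetic, not tied to any specific product.

CELL_TYPES = {"SLC": 1, "MLC": 2, "TLC": 3, "QLC": 4}

for name, bits in CELL_TYPES.items():
    states = 2 ** bits              # distinct charge levels per cell
    print(f"{name}: {bits} bits/cell -> {states} voltage states")

# Going from TLC to QLC:
state_factor = 2 ** 4 / 2 ** 3      # 2x as many states to sense and keep apart
density_gain = 4 / 3 - 1            # ~33% more bits per cell
print(f"TLC -> QLC: {state_factor:.0f}x the voltage states "
      f"for a {density_gain:.0%} density increase")
```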
haukionkannel - Friday, April 9, 2021 - link
Well, soon QLC may be seen only in high-end top models, while the middle range and low end go to PLC or whatever... For SSD manufacturers it makes a lot of sense because they save money that way. Profit!
nandnandnand - Saturday, April 10, 2021 - link
5/6/8 bits per cell might be ok if NAND manufacturers found some magic sauce to increase endurance. There was research to that effect going on a decade ago: https://ieeexplore.ieee.org/abstract/document/6479...

TLC is not going away just yet, and they can just increase drive capacities to make it unlikely an average user will hit the limits.
Samus - Sunday, April 11, 2021 - link
When you consider how well perfected TLC is now that it has gone full 3D, and that SLC caching + overprovisioning eliminate most of the performance/endurance issues, it makes you wonder if MLC will ever come back. It has almost completely disappeared, even in enterprise.

Oxford Guy - Sunday, April 11, 2021 - link
3D manufacturing killed MLC. It made TLC viable. There is no such magic bullet for QLC.
FunBunny2 - Sunday, April 11, 2021 - link
"There is no such magic bullet for QLC."well... the same bullet, ver. 2, might work. that would require two steps:
- moving 'back' to an even larger node, assuming that there's sufficient machinery at such a node available at scale
- getting two or three times as many layers as TLC currently uses
I've no idea whether either is feasible, but I'm willing to bet both gonads that both, at least, are required.
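A toy version of that trade-off, with made-up numbers purely for illustration: treat die density as roughly proportional to layer count and bits per cell, and inversely proportional to cell area, then see how many extra layers a larger-node QLC part would need just to match today's TLC.

```python
# Toy model of the "larger node + more layers" idea: relative die density is
# taken as proportional to (layers * bits_per_cell) and inversely proportional
# to cell area. All numbers below are made up purely for illustration.

def relative_density(layers, bits_per_cell, cell_area_factor):
    """Density relative to a 1-layer, 1-bit, unit-area baseline."""
    return layers * bits_per_cell / cell_area_factor

tlc_today = relative_density(layers=176, bits_per_cell=3, cell_area_factor=1.0)

# Hypothetical QLC built on a larger, more endurance-friendly node whose cells
# take twice the area: how many layers does it need just to match TLC density?
for layers in (176, 264, 352):
    qlc = relative_density(layers=layers, bits_per_cell=4, cell_area_factor=2.0)
    print(f"{layers} layers: {qlc / tlc_today:.2f}x the density of 176L TLC")
```

Under these assumed numbers, doubling the cell area means QLC needs roughly 1.5x the layer count of the TLC baseline just to break even on density, before any endurance benefit is counted.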