Performance Consistency

Performance consistency tells us a lot about the architecture of these SSDs and how they handle internal fragmentation. SSDs do not deliver consistent IO latency because every controller inevitably has to do some amount of defragmentation or garbage collection in order to continue operating at high speed. When and how an SSD decides to run its defrag or cleanup routines directly impacts the user experience, as inconsistent performance results in application slowdowns.

To test IO consistency, we fill a secure-erased SSD with sequential data to ensure that all user-accessible LBAs (Logical Block Addresses) have data associated with them. Next we kick off a 4KB random write workload across all LBAs at a queue depth of 32 using incompressible data. The test runs for just over half an hour and we record instantaneous IOPS every second.
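Instantaneous IOPS can be derived from a per-second log by differencing cumulative completion counts. A minimal sketch of that reduction; the sample counter values are made up for illustration:

```python
def instantaneous_iops(cumulative):
    """Instantaneous IOPS from cumulative IO completion counts
    sampled once per second: simply the per-second delta."""
    return [b - a for a, b in zip(cumulative, cumulative[1:])]

# Hypothetical counter samples taken at t = 0, 1, 2, 3 seconds:
samples = [0, 70_000, 71_500, 141_500]
print(instantaneous_iops(samples))  # [70000, 1500, 70000]
```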

We are also testing drives with added over-provisioning by limiting the LBA range. This gives us a look into the drive’s behavior with varying levels of empty space, which is frankly a more realistic approach for client workloads.
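Limiting the LBA range raises the effective over-provisioning, because the spare area grows relative to the user-visible capacity. A quick sketch of the arithmetic; the 256GiB raw NAND figure is an assumption for illustration, and the exact accounting used for the "25% over-provisioning" runs may differ:

```python
def effective_op(raw_gb, usable_gb):
    """Spare area as a fraction of the user-visible capacity."""
    return (raw_gb - usable_gb) / usable_gb

RAW_GB = 256 * 1.073741824  # assumed 256GiB of raw NAND, expressed in GB

print(round(effective_op(RAW_GB, 240.0), 3))          # stock 240GB drive
print(round(effective_op(RAW_GB, RAW_GB / 1.25), 2))  # LBA range cut for 25% OP
```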

Each of the three graphs has its own purpose. The first covers the whole duration of the test on a log scale. The second and third zoom into the beginning of steady-state operation (t=1400s) but on different scales: the second uses a log scale for easy comparison, whereas the third uses a linear scale to better visualize the differences between drives. Click the dropdown selections below each graph to switch the source data.

For a more detailed description of the test and why performance consistency matters, read our original Intel SSD DC S3700 article.

[Graph: IO consistency over the full test duration, log scale — Corsair Neutron XT 240GB / 25% Over-Provisioning]

Performance consistency has never been Phison's biggest strength, and that continues to be the case with the S10 controller. Consistency is actually worse than with the older S8 controller (i.e. the Corsair Force LS) because the variance in performance is so high. I'm pretty sure the issue lies in Phison's garbage collection architecture: it doesn't seem to give enough priority to internal garbage collection, which results in a scenario where the drive has to stop for very short periods of time (milliseconds) to clean up some blocks for the IOs in the queue. That is why performance frequently drops to ~1,500 IOPS, yet the drive may be pushing 70K IOPS a second later. Even adding more over-provisioning doesn't produce a steady line, although the share of high-IOPS bursts is now higher.
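The point about variance can be made concrete: a trace that alternates between ~1,500 and ~70K IOPS can average the same as a perfectly flat one while being far less consistent. A sketch using the coefficient of variation as the consistency metric; both traces are synthetic, not measured data:

```python
import statistics

def consistency_cv(iops):
    """Coefficient of variation of an IOPS trace; lower means steadier."""
    return statistics.pstdev(iops) / statistics.mean(iops)

bursty = [1_500, 70_000] * 50  # stall-then-burst behavior
steady = [35_750] * 100        # same mean throughput, zero variance

print(round(consistency_cv(bursty), 2))  # ~0.96
print(consistency_cv(steady))            # 0.0
```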

For average client workloads this shouldn't be an issue, because drives never operate in steady-state and IOs tend to come in bursts, but for users who tax the storage system more heavily there are far better options on the market. I'm a bit surprised that despite having more processing power than its predecessors, the S10 can't provide better IO consistency. With three of the four cores dedicated to flash management, there should be plenty of horsepower to manage the NAND even in steady-state scenarios, although ultimately no amount of hardware can fix inefficient software/firmware.

[Graph: steady-state zoom (t=1400s onward), log scale — Corsair Neutron XT 240GB / 25% Over-Provisioning]


[Graph: steady-state zoom (t=1400s onward), linear scale — Corsair Neutron XT 240GB / 25% Over-Provisioning]

TRIM Validation

To test TRIM, I filled the drive with sequential 128KB data and proceeded with a 30-minute random 4KB write (QD32) workload to put the drive into steady-state. After that I TRIM'ed the drive by issuing a quick format in Windows and ran HD Tach to produce the graph below.

And TRIM works as expected.
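A pass/fail judgment like the one above can be expressed as a simple heuristic: post-TRIM sequential throughput should land back near the fresh-drive baseline. A hedged sketch of that check; the function name, tolerance, and throughput figures are illustrative assumptions, not measured values:

```python
def trim_recovered(post_trim_mbps, fresh_mbps, tolerance=0.10):
    """True if post-TRIM throughput is within `tolerance` of the
    fresh-drive baseline, i.e. TRIM restored performance."""
    return post_trim_mbps >= fresh_mbps * (1 - tolerance)

print(trim_recovered(505.0, 510.0))  # recovered after quick format
print(trim_recovered(300.0, 510.0))  # still degraded, TRIM not working
```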

Comments

  • magnusmundus - Monday, November 17, 2014 - link

    I think the SATA 3 SSD market is already saturated. Read/write speeds and IOPS are pretty much as good as they are going to get. The only thing left to do is increase capacity and reduce costs. Why not start releasing drives for the new SATA Express interface, or more M.2 form factor drives? Too small a Z97 market? I guess we'll have to wait another year or so.
  • sweenish - Monday, November 17, 2014 - link

    I personally vote for skipping m.2 altogether. Let's just move right on to the PCI-E drives.
  • TinHat - Monday, November 17, 2014 - link

  • hrrmph - Monday, November 17, 2014 - link

    I think you mean let's skip the M.2 drives that use the (slower) SATA protocol, and move right on to the M.2 drives that use the (faster) PCI-E protocol.
  • Samus - Monday, November 17, 2014 - link

    Right. I have a Samsung M.2 PCIE drive, and after finally getting it to boot on my H97 board (using an EFI boot manager partition on my SATA SSD to point to its Windows installation) all I can tell you is 1100MB/sec is pretty insane. It loads BF4 maps so fast I'm always waiting on the server...
  • Mikemk - Monday, November 17, 2014 - link

    So you want to lose a GPU?
  • shank15217 - Tuesday, November 18, 2014 - link

    The protocol is called NVMe, a PCI-E drive doesn't mean much.
  • r3loaded - Tuesday, November 18, 2014 - link

    Actually both the legacy AHCI and the new NVMe protocols can be used on a PCIe-attached drive. The consumer Plextor M6e and Samsung XP941 use AHCI for compatibility reasons, while the new Intel server drives use NVMe for better performance in server workloads.
  • Kristian Vättö - Monday, November 17, 2014 - link

    Every single controller house is working on a PCIe controller for SATA Express and M.2, but the development takes time.
  • warrenk81 - Monday, November 17, 2014 - link

    honest question, not trying to be snarky, but how has Apple been shipping PCIe SSDs for almost two years while no one else is?
