Ivy Bridge Testing – ATTO Disk Benchmark
Moving on from Sandy Bridge testing, Ivy Bridge is a better-suited partner for this card. I managed to bribe fellow reviewer Ed “Bobnova” Smith into letting me borrow his test setup, mostly after much discussion between the two of us about bus saturation. We were both somewhat questioning whether bus saturation was the culprit behind the numbers I saw in ATTO.
Ivy Bridge's native PCIe 3.0 support is a vast improvement in this department. It is still a step back in performance from Sandy Bridge E, which is primarily a workstation and server platform with a much wider I/O path. Ivy Bridge still shows some weakness here, but that only goes to reinforce just how strong this card really is.
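To put the bus-saturation question in rough numbers, the quick arithmetic below compares the x8 link ceilings for PCIe 2.0 and PCIe 3.0 against what five SATA SSDs can push on paper. The per-lane rates are the published PCIe figures after encoding overhead; the per-drive throughput is an assumed rating for an Intel 330-class drive rather than a measured number, so treat this strictly as a back-of-the-envelope sanity check.

```python
# Back-of-the-envelope ceilings behind the bus-saturation question.
# Per-lane rates are the published PCIe figures after encoding overhead;
# the per-SSD throughput is an assumed rating, not a measured result.

PCIE2_LANE_MBPS = 500.0    # PCIe 2.0: 5 GT/s per lane, 8b/10b encoding
PCIE3_LANE_MBPS = 984.6    # PCIe 3.0: 8 GT/s per lane, 128b/130b encoding
LANES = 8                  # the 9271-8i is an x8 card

SSD_SEQ_MBPS = 500.0       # assumed sequential rating per SATA SSD
SSD_COUNT = 5

print(f"Array (rated):  ~{SSD_SEQ_MBPS * SSD_COUNT:.0f} MB/s aggregate")
print(f"PCIe 2.0 x8:    ~{PCIE2_LANE_MBPS * LANES:.0f} MB/s ceiling")
print(f"PCIe 3.0 x8:    ~{PCIE3_LANE_MBPS * LANES:.0f} MB/s ceiling")
```

Nearly doubling the link ceiling is the short version of why the numbers below look so different from the Sandy Bridge runs.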
ATTO Disk Benchmark
RAID Controller | LSI MegaRAID 9271-8i |
Drives | Intel 330 60GB SSD x 5 |
RAID Level | RAID 0 |
Stripe Size | 16KB |
Queue Depth 4
Sticking with the smaller stripe size that proved so beneficial to small-file performance in benchmarks like AS SSD and CrystalDiskMark, you can clearly see a large jump in performance over the same setup on the Sandy Bridge test bench. While the small-file numbers are generally unchanged, read and write performance both go up significantly in the later portions of the benchmark. For the first time we really see read performance outpace write performance.
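As a minimal sketch of why the 16KB stripe helps at the small end of the sweep, the snippet below maps a single aligned transfer onto the members of a five-drive RAID 0 set. It ignores the controller's cache and any misalignment, so all it shows is how many spindles each request size can touch.

```python
# Minimal sketch of RAID 0 striping: with a 16KB stripe a small request
# already spans several drives, while with a 1MB stripe the same request
# lands on a single disk. Assumes aligned transfers and no cache effects.

from math import ceil

def drives_touched(transfer_kb: int, stripe_kb: int, n_drives: int) -> int:
    """Number of member drives a single aligned transfer is split across."""
    return min(n_drives, ceil(transfer_kb / stripe_kb))

for transfer_kb in (4, 16, 64, 256, 1024):
    d16 = drives_touched(transfer_kb, 16, 5)
    d1m = drives_touched(transfer_kb, 1024, 5)
    print(f"{transfer_kb:>5} KB transfer: {d16} drive(s) @ 16KB stripe, "
          f"{d1m} drive(s) @ 1MB stripe")
```

The trade-off is the usual one: more drives per request helps the small transfers, but it also means more per-request overhead once the transfers get large.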
Queue Depth 10
At the higher queue depth the numbers for the small file sizes increase on writes but fall off some on reads. This is mostly down to the small stripe size; a larger stripe in this situation would likely yield much better performance. We see some diminishing returns later in the benchmark as well, for the same reason.
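For anyone who has not dug into what the queue-depth setting actually changes, the rough sketch below keeps ten requests in flight at once the way a QD10 run does, using one thread per outstanding request. It is only a concept demo: the target file name is a placeholder, and plain Python reads go through the OS page cache, which a real benchmark like ATTO bypasses.

```python
# Loose illustration of the queue-depth knob: keep N requests outstanding
# at once instead of issuing them one at a time. Concept demo only --
# TARGET is a hypothetical file on the test volume, and these reads are
# not unbuffered the way a proper disk benchmark's are.

import time
from concurrent.futures import ThreadPoolExecutor

TARGET = "testfile.bin"   # hypothetical large file sitting on the RAID volume
IO_SIZE = 4 * 1024        # 4KB, the small end of the ATTO sweep
QUEUE_DEPTH = 10
READS_PER_WORKER = 1_000

def worker(worker_id: int) -> int:
    """Issue a run of 4KB reads at this worker's own offsets."""
    done = 0
    with open(TARGET, "rb", buffering=0) as f:
        for i in range(READS_PER_WORKER):
            f.seek((worker_id * READS_PER_WORKER + i) * IO_SIZE)
            done += len(f.read(IO_SIZE))
    return done

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=QUEUE_DEPTH) as pool:
    total = sum(pool.map(worker, range(QUEUE_DEPTH)))
print(f"QD{QUEUE_DEPTH}: {total / (time.perf_counter() - start) / 2**20:.1f} MB/s")
```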
RAID Controller | LSI MegaRAID 9271-8i |
Drives | Western Digital VelociRaptor 300GB 10K RPM HDD x 2 |
RAID Level | RAID 0 |
Stripe Size | 16KB |
Moving on from the SSDs, we come to the VelociRaptors, which are a good example of just what this card can do for conventional storage. Granted, these are server-grade disks with low seek times and a short head stroke, but most of what we are seeing here is the controller doing what it does best: acting as a cache.
Queue Depth 4
In comparison to the five-drive SSD setup, this is almost all controller. Since it is a two-drive setup, we see very little in the way of diminishing returns from the stripe set; in fact we are outperforming the SSDs in this situation. You can spot a few places in the middle of the benchmark where the drives haven't quite caught up to the cache, but nothing horrible.
RAID Controller | LSI MegaRAID 9271-8i |
Drives | Western Digital VelociRaptor 300GB 10K RPM HDD x 2 |
RAID Level | RAID 0 |
Stripe Size | 1MB |
Queue Depth 4
Moving on to the larger stripe size, we see the first real disk limitation of the testing. Once you hit the 64KB transfer size, the controller maxes out its write and cache capability, along with the on-disk cache. Read performance quickly takes a hit because the drives are still trying to commit everything that has been written to them, and past this point both the write and read results take major hits. You start to see uncached read values here, reflecting the performance of the physical drives themselves rather than the controller.
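A rough model of that cliff is sketched below: writes land in the controller's DRAM far faster than two spinning drives can drain them, and once the cache fills, throughput falls back to whatever the disks can actually sustain. The 1GB cache matches the 9271-8i's spec sheet, but the drive figure is a ballpark for a 10K RPM VelociRaptor pair and the burst rate is purely illustrative.

```python
# Rough model of why results fall off once the write-back cache fills.
# CACHE_MB matches the card's spec; the other two rates are assumptions.

CACHE_MB = 1024            # onboard DRAM write-back cache
BURST_IN_MBPS = 1500.0     # assumed rate at which the benchmark pushes writes in
DRAIN_MBPS = 2 * 140.0     # assumed combined sequential rate of the two HDDs

fill_rate = BURST_IN_MBPS - DRAIN_MBPS
seconds_to_full = CACHE_MB / fill_rate
print(f"Cache absorbs the burst for ~{seconds_to_full:.1f}s, "
      f"then throughput falls back to ~{DRAIN_MBPS:.0f} MB/s from the disks")
```

With numbers like these the cache only hides the disks for a second or so of sustained writes, which lines up with the uncached read and write values showing up in the later part of the run.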
Queue Depth 10
With the increased queue depth we see the controller recover better than in the previous test. You do see more ups and downs overall, though the read performance is significantly better. This is a good example of the real-world performance of a storage server under peak load. The read numbers never dip as low as in the previous test, showing the controller contributing quite a bit to the performance.
In a real deployment this is probably exactly the kind of performance you would see with this setup, since most server software provides that extra layer of I/O caching and read/write buffering.