Stripe size guidelines and performance

Brian Carp posted this 06 April 2016

I'm wondering if there are any updated guidelines or performance analyses for modern systems with respect to the Stripe Size setting for logical drives. The product manuals for the E-class series, which are copyrighted 2011 and so may not have been updated in five years, include the following text:

If you do not know the cache buffer or fixed data block sizes, choose 64 KB as your Stripe Size. Generally speaking: Email, POS, and web servers prefer smaller stripe sizes. Video and database applications prefer larger stripe sizes.

Modern disk drives typically have much larger cache buffers than any of the available options, so that will hardly ever be a factor. And the second part conflicts with conventional wisdom elsewhere on the web: many guides recommend larger stripe sizes for random access (typically email/web servers) and smaller ones for sequential access (video or large files, although fixed block sizes might be the overriding factor). The idea is that a random access will often hit one drive at a time, so you might as well read a whole chunk from it, whereas sequential access will likely hit all of the drives in sequence, so it's better to distribute the I/O and have them operating in parallel.
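To make that reasoning concrete, here's a minimal sketch (not specific to any vendor's hardware, and ignoring parity rotation) of how a logical byte range maps onto the member drives of a striped array; the drive count and request sizes are hypothetical:

```python
def drives_touched(offset: int, length: int, stripe_kb: int, num_drives: int) -> set[int]:
    """Return the set of drive indices that a request of `length` bytes
    starting at byte `offset` would touch, given the stripe (chunk) size."""
    stripe = stripe_kb * 1024
    first_chunk = offset // stripe
    last_chunk = (offset + length - 1) // stripe
    # Chunks are laid out round-robin across the member drives.
    return {chunk % num_drives for chunk in range(first_chunk, last_chunk + 1)}

# A 16 KB random read lands on a single drive with a 64 KB stripe:
print(drives_touched(offset=300_000, length=16_384, stripe_kb=64, num_drives=8))  # {4}
# A 4 MB sequential read spans all eight drives, so they work in parallel:
print(drives_touched(offset=0, length=4_194_304, stripe_kb=64, num_drives=8))     # {0, 1, ..., 7}
```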

Does Promise have more specific recommendations based on their particular product design or even better with performance tests or benchmarks? If the 64K size is recommended for typical use cases, I am wondering what the typical use cases are for larger sizes and if anyone has reported performance improvements using them.

In my case, I want to create logical drives where the typical operation is multi-megabyte file sequential access, and I'd like to know if larger stripe size chunks are advantageous or detrimental for this use case.

Richard Oettinger posted this 06 April 2016

Hi Brian,

The recommendations in that document are still 'best practices' as they are based on the architecture of our RAID engine and the method used to compute stripe parity.

To achieve the best overall performance, the main points are still important:

- For RAID5 Logical Drives, create a Disk Array with nine Physical Drives.
- For RAID6 Logical Drives, create a Disk Array with ten Physical Drives.
- Create one Logical Drive per Disk Array.
- Balance the Logical Drive assignments between the two RAID controllers [LUN Affinity].

Based on the system architecture, the default 64KB stripe is the optimal value; it takes best advantage of the parity mechanism. This configuration is currently in use in many thousands of SANs globally for large-block sequential file writing and reading.
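To put numbers on that: nine-drive RAID5 and ten-drive RAID6 arrays both leave eight data chunks per stripe, so a full stripe of data is 8 x 64KB = 512KB. Writes aligned to that boundary can be completed as full-stripe writes, with parity computed from the new data alone rather than a read-modify-write cycle. A minimal sketch of the arithmetic (the function name is just illustrative):

```python
def full_stripe_kb(total_drives: int, parity_drives: int, stripe_kb: int) -> int:
    """Data payload of one full stripe, in KB (excludes parity chunks)."""
    return (total_drives - parity_drives) * stripe_kb

print(full_stripe_kb(total_drives=9, parity_drives=1, stripe_kb=64))   # RAID5: 512
print(full_stripe_kb(total_drives=10, parity_drives=2, stripe_kb=64))  # RAID6: 512
```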

Regards, Richard
