I have an E630F which I have populated with 10TB SAS drives. I've created a single RAID 6 LUN across 15 of the drives, with one as a global hot spare. Read performance is phenomenal, running about 24MB/sec out of each drive, for an aggregate of 600-700MB/sec across dual 8G FC ports to an ESXi 7 server.
Writes, however, are atrocious. I NEVER see any cache usage, despite having the LUN set for WriteBack mode with a 128K stripe size. I only get about 100MB/sec on writes to the logical drive, which works out to only 6-8MB/sec per drive. I've tested all the possible block sizes, and 128K seemed to give the best overall write performance.
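To make the block-size sweep concrete, here's an illustrative Python sketch of that kind of test (not my exact tool or parameters): sequential O_DIRECT writes to the raw LUN from a Linux host, so the host page cache stays out of the measurement. The device path, total size, and block-size list are placeholders.

#!/usr/bin/env python3
# Illustrative block-size sweep: sequential O_DIRECT writes to the raw LUN,
# bypassing the host page cache. /dev/sdX is a placeholder device path.
import os, time, mmap

DEVICE = "/dev/sdX"                        # placeholder: the Promise LUN
TOTAL_BYTES = 4 * 1024**3                  # write 4 GiB per block size
BLOCK_SIZES_KIB = [64, 128, 256, 512, 1024]

for bs_kib in BLOCK_SIZES_KIB:
    bs = bs_kib * 1024
    buf = mmap.mmap(-1, bs)                # page-aligned buffer (required by O_DIRECT)
    buf.write(b"\xa5" * bs)
    fd = os.open(DEVICE, os.O_WRONLY | os.O_DIRECT)
    written, start = 0, time.time()
    while written < TOTAL_BYTES:
        written += os.write(fd, buf)       # sequential writes; offset advances with fd
    os.fsync(fd)
    elapsed = time.time() - start
    os.close(fd)
    buf.close()
    print(f"{bs_kib:>5} KiB blocks: {written / elapsed / 1e6:.0f} MB/s")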
I don't understand why I never see any of the cache being used. Here's a snippet of my ctrl -v output:
HWRev: A6                              WWN: 2101-0001-ffff-ffff
CmdProtocol: SCSI-3                    MemType: DDR3 SDRAM (Slot 1)
MemSize: 2 GB (Slot 1)                 MemType: DDR3 SDRAM (Slot 2)
MemSize: 2 GB (Slot 2)                 FlashType: Flash Memory
FlashSize: 2 GB                        NVRAMType: SRAM
NVRAMSize: 512 KB                      BootLoaderVersion: 0.19.0000.30
BootLoaderBuildDate: N/A               FirmwareVersion: 10.18.2270.00
FirmwareBuildDate: Feb 25, 2017        SoftwareVersion: 10.18.2270.00
SoftwareBuildDate: Feb 25, 2017        DiskArrayPresent: 1
OverallRAIDStatus: OK                  LogDrvPresent: 1
LogDrvOnline: 1                        LogDrvOffline: 0
LogDrvCritical: 0                      PhyDrvPresent: 16
PhyDrvOnline: 16                       PhyDrvOffline: 0
PhyDrvPFA: 0                           GlobalSparePresent: 1
DedicatedSparePresent: 0               RevertibleGlobalSparePresent: 1
RevertibleDedicatedSparePresent: 0     RevertibleGlobalSpareUsed: 0
RevertibleDedicatedSpareUsed: 0        WriteThroughMode: Yes
MaxSectorSize: 4 KB                    PreferredCacheLineSize: 64 KB
CacheLineSize: 64 KB                   Coercion: Enabled
CoercionMethod: GBTruncate             SMART: Enabled
SMARTPollingInterval: 10 minutes       MigrationStorage: DDF
CacheFlushInterval: 12 second(s)       PollInterval: 15 second(s)
AdaptiveWBCache: Enabled               HostCacheFlushing: Disabled
ForcedReadAhead: Disabled              ALUA: Enabled
PowerSavingIdleTime: Never             PowerSavingStandbyTime: Never
PowerSavingStoppedTime: Never          PerfectRebuildAvailable: 64
VAAIsupport: Enabled                   SSDTrimSupport: Disabled
On the same type of drives internal to a Dell PowerEdge server, I can easily sustain 700MB/sec writes to a RAID 6 volume inside the server. When reading from the Promise and writing to that internal RAID LUN (with only 12 drives), the internal volume easily absorbs whatever the Promise can send it. But when copying from the internal LUN to the Promise, I only get about 100MB/sec on writes.
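For clarity, the LUN-to-LUN copy test amounts to something like the sketch below (illustrative only, with placeholder device paths): read 1 MiB chunks from the internal LUN and write them straight to the Promise LUN with O_DIRECT on both sides, then report the throughput.

#!/usr/bin/env python3
# Illustrative LUN-to-LUN copy test: stream 1 MiB chunks from the internal
# RAID 6 LUN to the Promise LUN with O_DIRECT on both sides, then report MB/s.
# Both device paths are placeholders.
import os, time, mmap

SRC = "/dev/internal_lun"     # placeholder: 12-drive internal RAID 6 LUN
DST = "/dev/promise_lun"      # placeholder: Promise E630F RAID 6 LUN
CHUNK = 1024 * 1024           # 1 MiB per transfer
TOTAL = 8 * 1024**3           # copy 8 GiB

buf = mmap.mmap(-1, CHUNK)    # page-aligned buffer, required for O_DIRECT
src = os.open(SRC, os.O_RDONLY | os.O_DIRECT)
dst = os.open(DST, os.O_WRONLY | os.O_DIRECT)

copied, start = 0, time.time()
while copied < TOTAL:
    os.readv(src, [buf])      # fill the buffer from the internal LUN
    os.write(dst, buf)        # write it straight to the Promise LUN
    copied += CHUNK
os.fsync(dst)
elapsed = time.time() - start
os.close(src)
os.close(dst)
print(f"copied {copied / 1e9:.1f} GB at {copied / elapsed / 1e6:.0f} MB/s")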
I'd love to know what I'm missing, as I'm quite surprised that 100MB/sec is the best write throughput the Promise can handle.
Thanks!