E630f Sustained write performance seems low w/ no cache usage observed

  • Last Post 21 June 2022
  • Topic Is Solved
Chad Bersche posted this 21 June 2022

I have an E630F which I have populated with 10TB SAS drives.  I've created a single RAID 6 LUN across 15 of the drives, with one as a global hot spare.  Read performance is phenomenal, running about 24MB/sec out of each drive, for an aggregate of 600-700MB/sec across dual 8G FC ports to an ESX 7 server.

Writes, however, are atrocious.  I NEVER see any cache usage, despite having the LUN set to WriteBack mode with a 128K stripe size.  I only get about 100MB/sec on writes to the logical drive, which is only 6-8MB/sec per drive.  I've tested all the possible block sizes, and 128K gave the best overall write performance.
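As a sanity check on the per-drive arithmetic, here's a quick sketch. Nothing in it is Promise-specific; the function name and the full-stripe-write assumption (parity overhead spread evenly across all members) are mine, and the numbers are the figures from this post, not new measurements:

```python
def per_drive_write_rate(aggregate_mb_s: float, members: int, parity: int = 2) -> float:
    """Rough per-disk write rate for a RAID 6 set doing full-stripe writes.

    Each stripe carries (members - parity) data chunks plus parity, so the
    array physically writes aggregate * members / (members - parity) bytes,
    spread across all member disks.
    """
    data_drives = members - parity
    total_written = aggregate_mb_s * members / data_drives  # data + parity
    return total_written / members

# 100 MB/s of host writes across a 15-drive RAID 6:
rate = per_drive_write_rate(100, 15)
print(f"{rate:.1f} MB/s per drive")  # prints "7.7 MB/s per drive"
```

That lands right in the 6-8MB/sec-per-drive range observed above, i.e. each disk really is nearly idle.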

I don't understand why I never see any of the cache being used.  I'm putting a snippet of my ctrl -v output:

HWRev: A6                              WWN: 2101-0001-ffff-ffff
CmdProtocol: SCSI-3
MemType: DDR3 SDRAM  (Slot 1)          MemSize:   2 GB  (Slot 1)
       : DDR3 SDRAM  (Slot 2)                 :   2 GB  (Slot 2)
FlashType: Flash Memory                FlashSize: 2 GB
NVRAMType: SRAM                        NVRAMSize: 512 KB
BootLoaderVersion: 0.19.0000.30        BootLoaderBuildDate: N/A
FirmwareVersion: 10.18.2270.00         FirmwareBuildDate: Feb 25, 2017
SoftwareVersion: 10.18.2270.00         SoftwareBuildDate: Feb 25, 2017

DiskArrayPresent: 1                    OverallRAIDStatus: OK
LogDrvPresent: 1                       LogDrvOnline: 1
LogDrvOffline: 0                       LogDrvCritical: 0
PhyDrvPresent: 16                      PhyDrvOnline: 16
PhyDrvOffline: 0                       PhyDrvPFA: 0
GlobalSparePresent: 1                  DedicatedSparePresent: 0
RevertibleGlobalSparePresent: 1        RevertibleDedicatedSparePresent: 0
RevertibleGlobalSpareUsed: 0           RevertibleDedicatedSpareUsed: 0

WriteThroughMode: Yes                  MaxSectorSize: 4 KB
PreferredCacheLineSize: 64 KB          CacheLineSize: 64 KB
Coercion: Enabled                      CoercionMethod: GBTruncate
SMART: Enabled                         SMARTPollingInterval: 10 minutes
MigrationStorage: DDF                  CacheFlushInterval: 12 second(s)
PollInterval: 15 second(s)             AdaptiveWBCache: Enabled
HostCacheFlushing: Disabled            ForcedReadAhead: Disabled
ALUA: Enabled                          PowerSavingIdleTime: Never
PowerSavingStandbyTime: Never          PowerSavingStoppedTime: Never
PerfectRebuildAvailable: 64
VAAIsupport: Enabled                   SSDTrimSupport: Disabled
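For anyone hitting this later: the telltale line is easy to miss in that wall of output. A minimal sketch that pulls the key/value pairs out of a `ctrl -v` dump so the mode can be checked programmatically — the two-column parsing rule is inferred from the listing above, not from any Promise documentation:

```python
import re

def parse_ctrl_v(text: str) -> dict:
    """Parse the two-column 'Key: Value   Key: Value' lines from 'ctrl -v'.

    Assumes each key is a single word followed by ':' and that columns are
    separated by two or more spaces, as in the listing above.
    """
    settings = {}
    for key, value in re.findall(r'(\w+):\s+(\S.*?)(?=\s{2,}\w+:|\s*$)', text, re.M):
        settings[key] = value.strip()
    return settings

sample = "WriteThroughMode: Yes                  MaxSectorSize: 4 KB"
info = parse_ctrl_v(sample)
if info.get("WriteThroughMode") == "Yes":
    print("Controller is in write-through mode; writes bypass the cache.")
```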

On the same type of drives internal to a Dell PowerEdge server, I can easily sustain 700MB/sec writes to a RAID 6 volume internal to the server.  When copying from the Promise to the internal RAID LUN (with only 12 drives), the internal array easily writes at whatever rate the Promise can send it.  When copying from the internal LUN to the Promise, writes again top out at about 100MB/sec.

I'd love to know what I'm missing, as I'm quite surprised that 100MB/sec is the best on write that the Promise can handle.


R P posted this 21 June 2022

Hi Chad,

The problem is that your enclosure is in write-through mode, so nothing is cached:

WriteThroughMode: Yes

Most likely you have a bad battery or a battery with a hold time of less than 72 hours.

The two options are to replace the battery or to disable AdaptiveWBCache.

The quickest and easiest fix is to disable AdaptiveWBCache. From WebPAM...

De-select Adaptive Writeback Cache and your write performance will go back to what it had been.

From the CLI...

administrator@cli> ctrl -a mod -s "adaptivewbcache=disable"


Chad Bersche posted this 21 June 2022

Thank you so much for the response.

In a weird parallel-universe sort of way, I had already placed an order for replacement batteries, which arrived yesterday, and I installed them today.  Oddly, I never noticed any log entry saying the array had switched to WriteThrough.  Both batteries also reported Green in WebPAM, though I do recall one of them showing a hold time of 71 hours, which based on your info falls under the threshold that triggers the mode switch.

Now that I have the new batteries in the unit, you're absolutely correct and everything is back to a much better rate of about 700-800MB/s write performance, on par with what I was seeing on my internal RAID controller.

I very much appreciate this little tidbit and will certainly keep it archived, so that if the same problem resurfaces and I'm unable to source batteries, I'll still have a way to keep the array performing, albeit with a reduced hold time on the batteries.

I'm ecstatic that the array is back to running as I had long been accustomed to.  I couldn't for the life of me figure out how the batteries and the performance were related, since everything still showed green.

Kudos and appreciation!