Older R4 multiple drive failures?

  • 699 Views
  • Last Post 19 May 2017
Jeff den Broeder posted this 18 July 2016

I have an early R4 that I purchased in August of 2012. I'm now having regular drive failures, currently replacing a drive every month. The current failure happened a week after the last one. It's not always the same bay. Do the R4s have a life span? Or could it be something else?

Thanks,

Jeff

William Chan posted this 26 July 2016

I have the same problem of hard disk (drive/slot) failures, but always in the same bottom bay. It works again after swapping in a used hard disk and rebuilding... It happens whenever the R4 has been on for hours. The firmware cannot be updated either. I hope someone with a similar experience can let us know how to deal with it, or whether I should just throw it away, as if it has expired!

 

Thanks

William

Joe Engledow posted this 26 July 2016

There really isn't an expiration date for the Pegasus internals themselves. They should be good as long as the drives are compatible and in good working order.

We'd like to check each of these Pegasus storage systems out at our website, http://support.promise.com. Please register your email address, then register the product, then open a web support case; alternatively, we would be happy to help you at (408) 228-1500. If the units are out of warranty, we ask a small $59 fee for a single web support case, handled during Pacific time zone business hours. There are also options to buy support for 6 months or a year. If there is a problem with the Pegasus and it cannot be used, we will tell you.

We can also help you get the firmware updated. One hint we've found: if you start the Pegasus without drives, it will accept the firmware flash. There are other workarounds, but those really require a support case to be opened.

Not every drive will work with a Pegasus1. Please check the download center under "legacy products" and make sure your drives are on the compatibility list. The Pegasus2 compatibility list is not valid for your Pegasus1.

Jeff den Broeder posted this 19 October 2016

Updated the firmware, bought 4 new drives listed as compatible ... 1 month later (yesterday) one of the drives failed ...

Stéphane LAURENT posted this 23 October 2016

I have the same problem with my PEGASUS1 R4: bay 3 is always crashing my drives... Is there a major failure? Should I throw it away?

 

Venkatachalam Settu posted this 23 October 2016

We need to check the logs for the unit to identify the issue. Have you noticed any errors for drive 3 under the Events tab in the Promise Utility? What firmware version is the unit running? Please check the firmware version under the Subsystem Information tab in the Promise Utility.

Thanks.

William Chan posted this 23 October 2016

Sharing my experience; it may or may not be the same as other cases. It was almost always the same drive bay that crashed, but when the same hard disks were inserted into other bays of the same R4, they were fine. So, after many controlled tests, I could conclude that it was a problem with the bay. I reported the case to Promise support, and after checking the logs they helped me apply to send the R4 back to Taiwan for repair. But there was no indication of pricing, and I had to bear the delivery cost (from Hong Kong) and certain other costs (which I don't remember now...). What I still remember is that it was a very lengthy back-and-forth process to check my R4 and file the maintenance application; I asked for an indicative repair cost, but the answer was no. Then suddenly, after about two months of communication and trouble, the Taiwan team advised me not to send it back, since the repair cost, including shipment, would be close to the price of a new unit.

So now I just disable (i.e., don't use) the malfunctioning bay, using it as an "R3"!! I bought another R6. Hope this is helpful.

Henrikas Paukstys posted this 09 March 2017

I have the same problem with an R6: I changed the drive a few times in the bottom slot 6, and after a reboot I get a failed (DEAD) drive every time. Support's solution was to delete the array and create a new one. That works fine until the R6 is rebooted; after a reboot the drive in slot 6 fails again. The interesting thing is that after a factory reset and deleting and recreating the array, the drives keep their old array numbers; I can't understand why. I'll try formatting all the drives with Disk Utility, then reinserting them and creating a new array. If the drive numbers are still the same, then the only solution is to build a RAID5 array from 5 slots and leave slot 6 as a spare. But I'm nervous because I'll lose some performance and capacity on the RAID partition.
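For the capacity side, a rough estimate, assuming 3 TB drives and the usual RAID5 overhead of one drive's worth of parity (a sketch of raw figures, not exact usable space after formatting):

# Rough RAID5 usable-capacity estimate, assuming one drive's worth
# of capacity is lost to parity (standard RAID5 overhead).
DRIVE_TB = 3  # 3 TB drives, as in this unit

def raid5_usable_tb(num_drives, drive_tb=DRIVE_TB):
    # RAID5 stores data on (n - 1) drives; one drive's worth goes to parity.
    return (num_drives - 1) * drive_tb

print(raid5_usable_tb(6))  # 6-drive array: 15 TB usable
print(raid5_usable_tb(5))  # 5 drives, slot 6 left empty: 12 TB usable

So leaving slot 6 out costs roughly one drive's worth of capacity, about 3 TB.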

Hariprasad Velmurugan posted this 09 March 2017

Hi Henrikas,

It seems that the slot 6 issue needs a complete subsystem report to be analyzed in order to find out the exact cause. Please call technical support at 408-228-1500 or reach out through a support case at https://support.promise.com.

Below are the steps to save the logs:

1. Open the Promise Utility, then unlock the utility.
2. Click on Subsystem Information in the Pegasus Utility.
3. Click on Save Service Report.

Please find the Product Manual below for reference:
http://promise.com/DownloadFile.aspx?DownloadFileUID=1707

Regards

Hari

Henrikas Paukstys posted this 18 May 2017

After two months the issue with slot 6 came back. I switched PD5 with PD6 to determine whether it is a drive problem. On the third restart of the unit, after a night, I got the same red light, and guess on which slot? :) No. 6. So it is not a drive problem; something is going on with the whole unit. Two months ago (with the same slot 6 issue) I got a response from Promise Support to delete the array and create a new one. It was OK for 70 days and I was happy, thinking that had solved the problem. But now I see I was wrong. Today I sent the system log to Promise Support and am waiting for a response. After a unit reboot, randomly after about 5 minutes, PD6 is marked as DEAD due to removal. I can bring it back online with a terminal command, but it shouldn't be like this.

I'm on Mac OS 10.11.6, 6x3TB Seagate ST3000DM001. Pegasus firmware is up to date.

I don't want to recreate the array a few times a year, losing my time on copying and synchronizing.

Maybe someone else has the same issues, and has found the right solution?

Venkatachalam Settu posted this 18 May 2017

Hi Henrikas,

It might be a problem with slot 6, so you will have to run the unit without a drive in that slot. Please take a backup of your data and reconfigure the RAID with the remaining 5 drives. Please leave slot 6 empty, monitor the unit, and then let us know if you face any issues.

Thank you.

 

Henrikas Paukstys posted this 18 May 2017

Can I use slot 6 with a spare drive for temporary backups? Or shouldn't I?

Venkatachalam Settu posted this 18 May 2017

If you use a spare drive in slot 6, it may fail at any point in time.

 

 

Henrikas Paukstys posted this 18 May 2017

Can you see any reason for slot 6 failing in my attached system report?

Venkatachalam Settu posted this 18 May 2017

Hi Henrikas,

As per the report, I have noticed that multiple drives failed in the same slot (PD 6), which indicates that slot 6 is not responding to the controller. This may be an issue with the back end of that slot, which causes the drive to be kicked offline.

Please also note that this is a first-generation Pegasus unit and has been moved into the legacy products.

Thank you.

Henrikas Paukstys posted this 18 May 2017

I don't care if the spare drive loses data. As I understand it, if it's a slot 6 problem, only the spare drive in bay no. 6 can fail, not the whole RAID5 array on the other bays?

Devendra Kumar posted this 19 May 2017

Hi Henrikas,

Yes, you can use that drive separately as a pass-thru drive. This means the drive in slot 6 will not be part of the array, so if that drive fails, only the data on drive number 6 is affected.

Thank you

Henrikas Paukstys posted this 19 May 2017

OK, can you tell me what the Pegasus checks 5 minutes after boot? A controller self-test or a physical drive test? Because the unit works OK 24/7 without failing slot 6 (with PD6 forced online). It seems that slot 6 cannot pass some system test.

Another question: before creating the new 5-drive RAID5 array, I tested the read/write speed with PD6 dead and got only 400 MB/s on reads (a healthy 6-drive RAID5 reads 800). I hope this halved performance is due to the critical array status? And when I use 5 drives in a non-critical array, the read speed will be higher than 400 MB/s :)
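To sanity-check those numbers, a back-of-the-envelope estimate, assuming sequential reads scale with the number of drives and that a critical array roughly halves throughput because missing data must be rebuilt from parity (rough figures, not benchmarks of the actual unit):

# Back-of-the-envelope RAID5 sequential-read estimate.
# Assumptions: reads scale roughly with the number of drives, and a
# degraded (critical) array pays about a 2x penalty because stripes
# touching the dead member must be reconstructed from parity.
PER_DRIVE_MBPS = 800 / 6  # ~133 MB/s per drive, implied by the healthy 6-drive figure

def raid5_read_mbps(num_drives, degraded=False):
    healthy = num_drives * PER_DRIVE_MBPS
    return healthy / 2 if degraded else healthy

print(round(raid5_read_mbps(6)))                 # ~800 MB/s, healthy 6-drive array
print(round(raid5_read_mbps(6, degraded=True)))  # ~400 MB/s, critical (PD6 dead)
print(round(raid5_read_mbps(5)))                 # ~667 MB/s, healthy 5-drive array

If those assumptions hold, a healthy 5-drive array should read well above the 400 MB/s seen while the 6-drive array was critical.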
