Two new hard drives going bad at the same time is very unusual unless there is a bad batch of drives. When the drives are old, though, it's much more likely. A large datacenter published a study several years back, and one of the more interesting findings was that once one drive had failed, another drive in the same group was often close behind. I've seen this in action: I got a call from IT many years back complaining that a rebuild in progress had stopped. Looking at it, I saw that a second drive had failed during the rebuild. Luckily it was RAID6, so I manually started a rebuild on the spare and told him to replace the other failed drive after the rebuild completed.
You have a RAID5. Even if we assume both drives are not actually bad, it's still important to do things in the proper order. If one drive failed and then the other, the drive that failed first is stale and its data cannot be used. The procedure would be to force online the last drive that went offline (the one that took the LD offline), then start a rebuild onto the drive that failed first, since it is stale and can't simply be put back in the array.
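For illustration, here is a hypothetical sketch of the same procedure on Linux software RAID (mdadm). The device names (/dev/md0, /dev/sd[bcd]1) are placeholders, and a hardware controller would use its own CLI instead, but the idea is identical: identify which member is freshest, bring it back, then rebuild onto the stale one.

```shell
# 1. Compare event counts to find which member went stale first;
#    the drive with the lowest Events count failed first and is out of date.
mdadm --examine /dev/sdb1 /dev/sdc1 /dev/sdd1 | grep -E 'Events|/dev/'

# 2. Force-assemble using the freshest members. The drive that failed
#    last is only slightly behind, so --force lets md accept it.
#    (Here we assume /dev/sdc1 was the first, stale failure.)
mdadm --stop /dev/md0
mdadm --assemble --force /dev/md0 /dev/sdb1 /dev/sdd1

# 3. Add the stale drive (or its replacement) back so it rebuilds
#    from parity rather than contributing stale data.
mdadm --manage /dev/md0 --add /dev/sdc1

# 4. Watch the rebuild progress.
cat /proc/mdstat
```

The key point the sketch makes concrete: the stale drive is re-added last, as a rebuild target, never force-assembled as a data source.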
The event logs should show the order of events and allow the correct procedure to be determined. Without the event logs, you don't know what happened or what to do about it. The important question is what happened: was there a brownout? If so, both drives are probably OK. Do you see lots of bad block errors and command timeout errors on both drives? If so, they are probably genuinely bad drives and recovery is problematic.
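As a sketch of that triage, assuming smartmontools is installed and the suspect drives show up as /dev/sda and /dev/sdb (both placeholders), the SMART data will usually distinguish a power glitch from failing media:

```shell
for d in /dev/sda /dev/sdb; do
    echo "== $d =="
    # Overall SMART pass/fail verdict
    smartctl -H "$d"
    # Sector-level indicators -- nonzero reallocated or pending counts
    # point at real bad blocks, not a brownout
    smartctl -A "$d" | grep -Ei 'Reallocated_Sector|Current_Pending|Offline_Uncorrectable'
    # Device error log -- command timeouts and read failures land here
    smartctl -l error "$d"
done
```

If both drives come back clean here, the brownout theory gets much more plausible and a forced recovery is worth attempting.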