Hi,
I've been able to install ESXi 6.7 on a Mac Pro 7,1.
Everything is recognized except the Pegasus R4i 32GB.
I need drivers to build a custom ISO.
Any help will be appreciated.
Thanks in advance,
Hi All,
Back in the ESXi 4.0 days, Promise released a stex driver for VMware. It may or may not work with newer ESXi releases.
You can also still download it from the Promise downloads page.
Hi Meir,
Sorry, there is no driver available for the Pegasus R4i from Promise. As you know, it's a plug-and-play device with a
built-in driver in macOS.
Regards,
Promise Team
Do you know of any future plans?
It could be a big thing if they worked together, IMO.
For now it works with Intel or Samsung PCIe chipsets.
I hope your team will consider stepping into the ESXi area.
Regards,
Hi Meir,
Sorry, right now no, but if there is any update along the lines you suggested, we will let you know or post it on our website.
We have enterprise storage devices that work with ESXi.
Thank you,
Regards,
Promise Team
I would like to see this too.
If anyone has a workaround, please post it. For example, you could run ESXi hosted inside macOS and serve your Pegasus RAID from there, but that wastes this hardware: my 7,1 has 28 cores and 1.5 TB of RAM, while Fusion supports far less (128 GB of RAM and fewer cores).
I saw a hacky workaround in this thread.
Since you can run a Windows guest or a Mac guest, I wonder if it's possible to do a direct passthrough using the Promise Windows or Mac drivers. I've figured out a few tricky things so far.
Maybe this: use a very light macOS guest to host an iSCSI server that exposes the full capacity of the RAID.
Hi,
Just an update. This WILL work, and the ESXi feature is OLD... I was at VMware for 9 years and never heard about it, and I was a VMware Certified Professional for a while too!
VMware DirectPath I/O allows your VM to access your PCI cards DIRECTLY. Various uses (all working around a lack of ESXi drivers) include:
-Accessing 10GbE NICs from a VM because there is no ESXi driver
-Accessing high-performance GPUs from VMs for compute (everything from scientific workloads to cryptomining)
-Everything else, such as storage... which includes, and WORKS with, the Promise RAID R4i card.
DirectPath I/O Simple Video: https://tinkertry.com/vmware-vmdirectpath-and-intel-vt-d-explained-in-3-minutes
This may not solve the whole problem for you. You may want to access the RAID as VMFS so you can store ESXi virtual machines on it.
I recommend creating a virtual machine using the DirectPath I/O technique and using it as an iSCSI target. Configuring iSCSI is a pain, but there are plenty of articles on how to do it. I have used a Windows virtual machine as an iSCSI target before when I didn't have vSAN; this actually sped up ESXi's access to its own storage.
You can also run a Mac virtual machine (I'm assuming that's why you have this thing) and use iSCSI target software to share the machine's own storage back out. That software is probably a costly purchase, though I'm guessing somebody has already homebrewed something for this.
I booted a Catalina VM last night using DirectPath I/O and was able to see all my Promise RAID drives. The rest is trial and error.
Other steps:
-Share the iSCSI target using the software/VM you chose with VMDirectPath, and then configure it from ESXi.
-Automatically launch your iSCSI target VM when your ESXi server boots using the local.sh boot script:
vim-cmd vmsvc/power.on <vm number from vim-cmd vmsvc/getallvms>
-Since ESXi is already booted, you'll probably need to ask it to mount its own iSCSI datastore (whatever you have named it):
esxcfg-volume -m datastore2
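Putting the two commands above together, a local.sh sketch might look like this. The VM ID (5), the sleep duration, and the datastore name "datastore2" are placeholders you would adjust for your setup:

```shell
#!/bin/sh
# /etc/rc.local.d/local.sh -- runs at the end of ESXi boot.
# Find your iSCSI target VM's ID first with: vim-cmd vmsvc/getallvms

# Power on the iSCSI target VM (ID 5 is a placeholder).
vim-cmd vmsvc/power.on 5

# Give the target VM time to boot and start serving iSCSI.
sleep 120

# Rescan storage adapters so the iSCSI LUN is discovered.
esxcli storage core adapter rescan --all

# Mount the datastore by its label ("datastore2" is a placeholder).
esxcfg-volume -m datastore2

exit 0
```

A fixed sleep is crude; a loop that polls until the target answers would be more robust, but this is the simplest version.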
Good luck. I should have it all working this weekend.
As an FYI,
Same machine, now running great with ESXi 7.0.3. I regularly attach the Promise RAID to a Mac virtual machine using direct hardware access.
There were some challenges getting ESXi running correctly on this (Mac Pro 7,1) hardware, but as of this year it installs out of the box cleanly. It works correctly with the 10GbE Ethernet and any SATA drives you attach inside the machine, in addition to USB and Thunderbolt devices. I am leaving ESXi installed on a USB stick plugged into the motherboard.
Final update:
I did try several versions of Linux using direct hardware access on the latest ESXi version. Direct hardware access / passthrough works well for a variety of devices, so the virtual machines definitely have no problem "seeing" devices such as the Promise R4i, and I am actively using it from a Mac VM on one machine.
There are no Linux drivers that work with the R4i. The external R4's "stex" driver exists in the Linux systems I am using; I checked for its presence, verified the R4i with the "lspci" command, and manually loaded the stex driver with modprobe. It does not work, and that's understandable: a PCI card like the R4i interfaces with a host very differently than an external unit does. The driver would need to talk to the PCIe bus interface, not a Thunderbolt or high-speed USB interface.
To access the RAID, I am pursuing a light-ish Windows virtual machine with an iSCSI target, and potentially NFS or some other storage system that can be used as a share by other virtual machines. iSCSI is very useful for VMware VMFS volumes - the volumes that virtual machines store their virtual drives on.
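For anyone repeating this check inside a Linux guest with the R4i passed through, the commands I mean are roughly these (diagnostic sketch only; the driver does not end up binding to the card, as described above):

```shell
# Confirm the guest can see the R4i on the PCI bus.
lspci | grep -i promise

# Try loading the "stex" driver that ships for the external Pegasus units.
sudo modprobe stex

# Check whether the module loaded and whether it claimed the device.
lsmod | grep stex
dmesg | tail -n 20
```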
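For the ESXi side of the iSCSI setup, a sketch of the commands involved, assuming the target VM answers at 192.168.1.50 and the software adapter comes up as vmhba64 (both placeholders for your environment):

```shell
# Enable the ESXi software iSCSI initiator.
esxcli iscsi software set --enabled=true

# Find the software iSCSI adapter name (often vmhba64 or similar).
esxcli iscsi adapter list

# Point dynamic discovery at the target VM (address is a placeholder).
esxcli iscsi adapter discovery sendtarget add -A vmhba64 -a 192.168.1.50:3260

# Rescan so the new LUN shows up as a device you can format as VMFS.
esxcli storage core adapter rescan --adapter vmhba64
```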
The challenges are:
-waiting until the Windows machine comes up before mounting it (doable in local.sh)
-having one virtual machine that is always on and always booted completely before any other virtual machine
-having one virtual machine that cannot be vMotioned or powered off until the host is about to shut down.
A file-sharing VM serving the other VMs still suffers from most of these issues too, including timing (VMs using the storage must come up after this VM).
Regardless, it seems the only reliable way to access the Promise R4i from ESXi is to deploy a Windows Server VM.