
Clustering Solaris with SunCluster 3.2 on VMware ESX 3.x




We have been building and running performance/benchmark tests with Oracle RAC on ESX 3.x, and the Sun folks likewise want to demo their software on ESX.

However, VMware ESX has a feature called Raw Device Mapping (RDM), which allows the guest operating systems to have direct access to the devices, bypassing the VMware layer. More information on RDM can be found in VMware documentation. The following documents could be starting points:

http://www.vmware.com/pdf/vi3_301_201_intro_vi.pdf

http://www.vmware.com/pdf/vi3_301_201_san_cfg.pdf

RDM works only with Fibre Channel or iSCSI storage. In the setup described here, a SAN storage array connected over Fibre Channel was used to present LUNs to the physical hosts. These LUNs could then be mapped into the VMware guests using RDM. SCSI reservations work correctly with RDM (both SCSI-2 Reserve/Release and SCSI-3 persistent reservations), so RDM devices can be used as shared devices between the cluster nodes. They can, of course, also serve as local devices for a single node.
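Assuming the LUN is already visible to the ESX host, the RDM mapping file can be created from the service console with `vmkfstools` and then attached to the guest as a disk. The sketch below is illustrative only; the device path and datastore/directory names are placeholders, not values from this setup:

```shell
# List the raw LUNs visible to this ESX 3.x host (path is an example)
ls /vmfs/devices/disks/

# Create a pass-through (physical compatibility) RDM mapping file for a
# shared LUN. The -z flag maps the raw device; the .vmdk produced is
# only a small pointer file, not a copy of the data.
vmkfstools -z /vmfs/devices/disks/vmhba1:0:3:0 \
    /vmfs/volumes/datastore1/cluster-shared/quorum-rdm.vmdk
```

The same mapping file is then added to each cluster node's virtual machine, which is what allows both guests to see the identical LUN.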

One point to note here is that the guest OS must use different virtual SCSI controllers for its local disks and its shared disks; this is a VMware requirement when sharing disks between virtual machines. In addition, the RDM compatibility mode should be set to "Physical" so that the guest has direct access to the storage. For details, please refer to the VMware ESX documentation.
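The controller-separation requirement can be illustrated with a fragment of a node's `.vmx` file. This is a hedged sketch based on common ESX 3.x settings, not a complete or verified configuration; file names are placeholders:

```
# Local (boot) disk stays on the first virtual SCSI controller
scsi0.present = "true"
scsi0.virtualDev = "lsilogic"
scsi0:0.present = "true"
scsi0:0.fileName = "solaris-node1.vmdk"

# Shared RDM disks go on a separate controller, with the bus shared
# across physical hosts so both cluster nodes can attach the same LUN
scsi1.present = "true"
scsi1.virtualDev = "lsilogic"
scsi1.sharedBus = "physical"
scsi1:0.present = "true"
scsi1:0.fileName = "quorum-rdm.vmdk"        # RDM created in physical compatibility mode
scsi1:0.mode = "independent-persistent"
```

Keeping the shared disks on their own controller is what lets ESX apply the shared-bus setting to the cluster LUNs without affecting the node's private disks.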





Decent article, check it out.
