
Server virtualization drives cross-domain management

A new set of challenges
Without the tools to optimize and manage end-to-end virtualization, IT is unable to do system-wide capacity planning, lacks visibility into server and storage resource allocation and utilization, has difficulty troubleshooting and managing change, and can no longer carry out historical analysis. The impact can be severe.

Challenge #1: System-wide capacity planning becomes impossible
The data center exists to maximize application performance, and IT uses service level agreements (SLAs) to track its success. Meeting service level objectives requires IT to optimize capacity and planning, but the growing complexity of mixed physical and virtual environments has made this increasingly difficult. In an age where virtual machines can be created in minutes, manual capacity planning across the infrastructure becomes impossible, and the inability to plan accurately forces IT to over-provision resources. This is particularly true of storage, but it extends to other domains as well: IT provisions extra performance and bandwidth just to stay ahead of application demand. Without a way to correlate critical applications with the underlying infrastructure, IT is taking shots in the dark.
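The arithmetic behind capacity planning is trivial; what breaks down at scale is collecting and correlating the inputs per host. A minimal sketch (all host names, capacities, and demand figures below are invented for illustration, not drawn from any vendor tool):

```python
# Hypothetical example: compute remaining CPU headroom per host after
# reserving a safety margin, given the aggregate demand of its VMs.

def host_headroom(host_capacity_ghz, vm_demands_ghz, safety_margin=0.2):
    """CPU headroom left on a host after reserving a safety margin."""
    usable = host_capacity_ghz * (1.0 - safety_margin)
    return usable - sum(vm_demands_ghz)

hosts = {
    "esx-01": (32.0, [4.0, 6.5, 3.2]),    # (capacity GHz, per-VM demand GHz)
    "esx-02": (32.0, [12.0, 11.0, 7.5]),
}

for name, (cap, demands) in hosts.items():
    room = host_headroom(cap, demands)
    status = "OK" if room > 0 else "OVERCOMMITTED"
    print(f"{name}: {room:+.1f} GHz headroom ({status})")
```

Doing this by hand for a handful of hosts is easy; doing it continuously across hundreds of hosts, while VMs are created and moved freely, is where manual planning fails.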

Challenge #2: Utilization and relationships become murky
Even a purely physical network is a challenging entity, given its hundreds or thousands of computing objects that make up a complex web of relationships. Add in virtualization as an additional layer of abstraction, and complexity grows by leaps and bounds.

Maintaining visibility into server and storage allocation and utilization in this environment is a formidable task. Manual correlation and root-cause analysis can become impossible, even as the need for both grows across the infrastructure. Individual devices come with their own diagnostics, but device-level information alone cannot correlate computing behavior across a variety of sources.

Challenge #3: Device failure leads to SLA failure
In a virtualized infrastructure, a single failed device can trigger multiple failures downstream, resulting in a diagnostic and predictive nightmare. For example, in a small physical network a failed host bus adapter (HBA) sends out an alert, making it simple to find and replace. But in a complex network with virtualized layers, the failed HBA is not the only element alerting IT: so are the dozens (or more) of downstream devices its failure is affecting.

With all the alerts arriving at once, manually tracking down the problem ranges from time-consuming to impossible. Worse, the alerts do not reach only a single server or storage administrator: because devices and applications throughout the infrastructure are affected, specialized IT administrators and database administrators receive them as well. And because the affected applications fail or slow down, SLAs go unmet and end-user calls start pouring in.
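This is exactly the problem topology-aware correlation tools solve: given a dependency map, collapse an alert storm to the most upstream alerting device. A minimal sketch (the device names and topology below are invented, and real tools use far richer models):

```python
# Hypothetical example: pick the probable root cause of an alert storm as
# the alerting device whose failure explains the most other alerts.

downstream = {                         # device -> devices that depend on it
    "hba-7": ["switch-port-12"],
    "switch-port-12": ["lun-3", "lun-4"],
    "lun-3": ["vm-app-db"],
    "lun-4": ["vm-app-web"],
}

def affected_by(root):
    """All devices transitively downstream of `root`."""
    seen, stack = set(), list(downstream.get(root, []))
    while stack:
        dev = stack.pop()
        if dev not in seen:
            seen.add(dev)
            stack.extend(downstream.get(dev, []))
    return seen

def probable_root(alerts):
    """The alerting device whose failure explains the most other alerts."""
    return max(alerts, key=lambda a: len(affected_by(a) & set(alerts)))

storm = ["vm-app-db", "lun-3", "lun-4", "switch-port-12", "hba-7", "vm-app-web"]
print(probable_root(storm))    # the HBA explains every other alert
```

With a cross-domain dependency map, six alerts collapse to one actionable root cause; without it, each administrator chases their own symptom.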

Challenge #4: Historical analysis withers in the face of virtual mobility
In a virtualized environment, virtual server resource allocation can change at the drop of a hat. However, if historical trending is tied to the physical machine via the WWN (as most tools do), the trend data is cut loose from the VM when it moves, and storage usage analysis becomes impossible. This is a significant loss, leaving IT without a way to track SLA success rates across the virtualized infrastructure or to report on resource usage patterns. Without historical analysis, it becomes increasingly difficult to plan for growth and to protect SLAs.
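The fix is conceptually simple: key the time series to an identifier that travels with the VM rather than to the WWN of the physical path beneath it. A minimal sketch (the VM identifier and WWN values below are invented):

```python
# Hypothetical example: record utilization samples keyed by the VM's own
# identifier, so the trend line survives a migration to a new host and WWN.

from collections import defaultdict

history = defaultdict(list)    # vm_id -> [(timestamp, gb_used)]

def record(vm_id, wwn, ts, gb_used):
    # The WWN can still be logged for topology, but the series is keyed
    # to the VM, not the physical path.
    history[vm_id].append((ts, gb_used))

# Same VM sampled before and after a live migration (note the new WWN):
record("vm-4f2a", "50:06:0b:00:00:c2:62:00", 1, 120)
record("vm-4f2a", "50:06:0b:00:00:d9:11:04", 2, 135)

print(len(history["vm-4f2a"]))    # both samples land on one trend line
```

Had the samples been keyed by WWN, the migration would have split one VM's history into two unrelated series, which is precisely why WWN-keyed trending withers under virtual mobility.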


