
Server virtualization drives cross-domain management

A new set of challenges
Without the tools to optimize and manage end-to-end virtualization, IT is unable to do system-wide capacity planning, lacks visibility into server and storage resource allocation and utilization, has difficulty troubleshooting and managing change, and can no longer carry out historical analysis. The impact can be severe.

Challenge #1: System-wide capacity planning becomes impossible
The data center exists to maximize application performance, and IT tracks its success against service level agreements (SLAs). Meeting service level objectives requires IT to optimize resource capacity and planning, but growing complexity in mixed physical and virtual environments has made this increasingly difficult. In an age when virtual machines can be created in minutes, manual capacity planning across the infrastructure becomes impossible. The inability to plan capacity accurately forces IT to over-provision resources. This is particularly true of storage, but it extends to other domains as well, as IT is forced to provision extra performance and bandwidth just to stay ahead of application demand. Without the means to correlate critical applications with the underlying infrastructure, IT is taking shots in the dark.
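The correlation this paragraph calls for can be illustrated in a few lines. The sketch below is hypothetical (host names, capacities, and the GHz-based model are illustrative, not any vendor's API); it simply aggregates per-VM allocations against host capacity to flag hosts that are over-committed or sitting idle, which is the kind of system-wide view manual planning cannot keep up with.

```python
# Minimal capacity-planning sketch. All names and numbers are illustrative:
# flag hosts whose allocated VM resources exceed, or badly underuse,
# physical capacity.

def host_headroom(host_capacity_ghz, vm_allocations_ghz):
    """Return (total allocated GHz, allocation ratio) for one host."""
    allocated = sum(vm_allocations_ghz)
    return allocated, allocated / host_capacity_ghz

# Hypothetical inventory: host -> (physical CPU capacity, per-VM allocations)
hosts = {
    "esx01": (32.0, [8.0, 8.0, 12.0, 10.0]),  # over-committed
    "esx02": (32.0, [4.0, 2.0]),              # mostly idle
}

for name, (capacity, vms) in hosts.items():
    allocated, ratio = host_headroom(capacity, vms)
    if ratio > 1.0:
        status = "over-provisioned"
    elif ratio < 0.5:
        status = "under-utilized"
    else:
        status = "ok"
    print(f"{name}: {allocated:.1f}/{capacity:.1f} GHz -> {status}")
```

A real tool would pull these figures from the hypervisor and storage arrays continuously; the point is that the roll-up itself is trivial once the cross-domain data is in one place.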

Challenge #2: Utilization and relationships become murky
Even a purely physical network is a challenging entity, given its hundreds or thousands of computing objects that make up a complex web of relationships. Add in virtualization as an additional layer of abstraction, and complexity grows by leaps and bounds.

Maintaining visibility into server and storage allocation and utilization in this environment is a very challenging assignment. Manual correlation and root-cause analysis can become impossible, even as the need for correlation and visibility grows across the infrastructure. Individual devices come with their own diagnostics, but device-level information cannot solve the problem of correlating computing behavior across a variety of sources.

Challenge #3: Device failure leads to SLA failure
In a virtualized infrastructure, failed devices kick off multiple device failures downstream, resulting in a diagnostic and predictive nightmare. For example, in a compact physical network a failed host bus adapter (HBA) sends out an alert, making it simple to find and replace. But in a complex network with virtualized layers, a failed HBA is not the only element alerting IT—so are the dozens (or more) of devices that the HBA failure is affecting downstream.

With all the alerts coming in at once, manually tracking down the problem ranges from time-consuming to impossible. Worse, the alerts do not reach just a single server or storage administrator. Because devices and applications throughout the infrastructure are affected by the failure, specialized IT administrators and database administrators receive alerts as well. And because the affected applications are failing or slowing down, SLAs go unmet and end-user calls start pouring in.
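The alert storm described here is exactly what topology-aware root-cause analysis addresses: if the management layer knows which devices depend on which, the flood of downstream alerts can be collapsed onto the upstream failure. A minimal sketch, assuming a toy dependency graph with illustrative device names:

```python
# Toy root-cause sketch: given a dependency map (device -> devices it relies
# on) and a flood of alerts, report only the most upstream alerting device.

def root_causes(alerts, depends_on):
    """An alert is a symptom, not a root cause, if anything it depends on
    (directly or transitively) is also alerting."""
    alerting = set(alerts)

    def upstream_alerting(node, seen=()):
        for parent in depends_on.get(node, []):
            if parent in seen:
                continue  # guard against cycles in the dependency map
            if parent in alerting or upstream_alerting(parent, seen + (node,)):
                return True
        return False

    return {a for a in alerting if not upstream_alerting(a)}

# Hypothetical topology: an app VM on a database server, whose storage LUN
# sits behind the failed HBA from the example above.
depends_on = {
    "app-vm": ["db-server"],
    "db-server": ["lun-7"],
    "lun-7": ["hba-3"],
}
alerts = ["app-vm", "db-server", "lun-7", "hba-3"]
print(root_causes(alerts, depends_on))  # {'hba-3'}
```

Everything hinges on having the dependency map in the first place, which is precisely the cross-domain visibility the previous challenge says is missing.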

Challenge #4: Historical analysis withers in the face of virtual mobility
In a virtualized environment, virtual server resource allocation can change at the drop of a hat. However, if historical trending is tied to the physical machine via its World Wide Name (WWN), as most trending is, then the trend data is cut loose from the VM and storage usage analysis becomes impossible. This is a significant loss, leaving IT without a way to track SLA success rates across the virtualized infrastructure or to report on resource usage patterns. The lack of historical analysis makes it increasingly difficult to plan for growth and to protect SLAs.
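One way around this is to key historical samples by a stable virtual-machine identifier (for example, the hypervisor's VM UUID) rather than by the WWN of whichever physical path the VM currently uses. A minimal sketch, with the UUID, dates, and WWN values all hypothetical:

```python
# Sketch: record storage-usage samples keyed by VM identifier, not by the WWN
# of whichever physical HBA the VM happens to sit behind. Illustrative only.
from collections import defaultdict

history = defaultdict(list)  # vm_id -> [(timestamp, gb_used, wwn)]

def record_sample(vm_id, timestamp, gb_used, current_wwn):
    # The WWN is kept as metadata; the trend itself follows the VM.
    history[vm_id].append((timestamp, gb_used, current_wwn))

# The VM migrates between hosts; its history stays contiguous because the
# key never changed, even though the underlying WWN did.
record_sample("vm-42", "2009-01-01", 120, "50:06:01:60:aa:bb:cc:01")
record_sample("vm-42", "2009-02-01", 135, "50:06:01:60:aa:bb:cc:09")

samples = history["vm-42"]
growth = samples[-1][1] - samples[0][1]
print(f"vm-42 grew {growth} GB across {len(samples)} samples")
```

With WWN-keyed trending, the same migration would have split this into two unrelated histories, which is exactly the loss the paragraph above describes.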


