Showing posts with the label Performance

VMware does a performance study on AMD's RVI

Nice read, this doc. In a native system the operating system maintains a mapping of logical page numbers (LPNs) to physical page numbers (PPNs) in page table structures. When a logical address is accessed, the hardware walks these page tables to determine the corresponding physical address. For faster memory access the x86 hardware caches the most recently used LPN->PPN mappings in its translation lookaside buffer (TLB). In a virtualized system the guest operating system maintains page tables just like in a native system, but the VMM maintains an additional mapping of PPNs to machine page numbers (MPNs). In shadow paging the VMM maintains PPN->MPN mappings in its internal data structures and stores LPN->MPN mappings in shadow page tables that are exposed to the hardware. The most recently used LPN->MPN translations are cached in the hardware TLB. The VMM keeps these shadow page tables synchronized to the guest page tables. This synchronization introduces virtualization over...
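The two mapping layers described above compose naturally: the shadow page table the VMM exposes to hardware is just the guest's LPN->PPN table composed with the VMM's PPN->MPN table. Here is a toy sketch of that composition, with Python dictionaries standing in for what are really multi-level hardware page tables; all names and numbers are illustrative, not taken from the VMware study:

```python
# Toy model of shadow paging: the VMM composes the guest's
# LPN->PPN page table with its own PPN->MPN table to build
# the LPN->MPN shadow table that the hardware actually walks.
# Real page tables are multi-level hardware structures, not dicts.

def build_shadow_table(guest_pt, vmm_pt):
    """Compose guest LPN->PPN with VMM PPN->MPN into LPN->MPN."""
    shadow = {}
    for lpn, ppn in guest_pt.items():
        if ppn in vmm_pt:  # only pages the VMM has backed with machine memory
            shadow[lpn] = vmm_pt[ppn]
    return shadow

guest_pt = {0: 7, 1: 3, 2: 9}    # guest OS mapping: LPN -> PPN
vmm_pt = {7: 100, 3: 42, 9: 55}  # VMM mapping: PPN -> MPN

shadow = build_shadow_table(guest_pt, vmm_pt)
print(shadow)  # {0: 100, 1: 42, 2: 55} -- the TLB caches these LPN->MPN entries
```

Whenever the guest edits its page tables, the VMM has to redo this composition for the affected entries; that resynchronization is the shadow-paging overhead the study measures, and it is what AMD's RVI avoids by letting the hardware walk both mapping layers itself.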

VMware completes B-hive acquisition; prepares for application performance!

With this acquisition, VMware will leverage the B-hive team and technology to enhance the VMware portfolio of application and infrastructure management products by offering proactive performance management and service level reporting for applications running within virtual machines. B-hive technology brings insight into the performance of applications and the ability to automate changes using VMware’s industry-leading datacenter virtualization and management suite, VMware Infrastructure, to reallocate resources as needed to ensure service level objectives. In addition, with this acquisition, B-hive’s R&D facility and team will form the core of VMware’s new development center in Israel. Press

Virtualization may be great, but I/O, compliance and security concerns still remain!

Xsigo takes an interesting approach in addressing the I/O concerns. The solution is to virtualize server I/O. That is, turn normally fixed and static I/O channels, host bus adapters, and network interface cards into more dynamic resources whose capacity can expand and contract based on virtual server needs. If I/O virtualization could be achieved, it would resolve a persistent problem server administrators have as they stack virtualized applications on the same hardware. Until virtualized I/O becomes commonplace, applications with heavy or fluctuating I/O demands aren't being virtualized, lest they end up causing I/O backups. Two early solutions have emerged and more are sure to follow. Startup Xsigo off-loads I/O traffic to an attached appliance that virtualizes it (see diagram, p. 20). The approach requires replacing standard HBAs and NICs on the server with Xsigo custom cards and investing in the Xsigo appliance. Pricing starts at $30,000. Xsigo's appliance can generate up t...

GEAR6 TO SHOWCASE CENTRALIZED CACHING SOLUTIONS FOR BIOINFORMATICS AT BIO-IT WORLD IN BOSTON

BOSTON – April 23, 2008 – Gear6, accelerating I/O for real time application performance, will showcase the benefits of scalable caching appliances for bioinformatics at the Bio-IT World Conference and Expo, April 28-30, at the World Trade Center in Boston. The company will demonstrate how these new solutions enable significantly faster completion times for common sequence-search algorithms such as BLAST, while simultaneously lowering total storage costs by boosting the performance of existing storage systems. Gear6 will be presenting its innovative centralized storage caching solutions in Booth 202. Resource-intensive, sequence-search bioinformatics applications typically run on large server clusters with tremendous computing horsepower. Despite this computing power, the performance limitations of disk-based storage infrastructure routinely cause significant bottlenecks resulting in: * A need to over-provision memory across individual clients to compensate for suboptimal performanc...

MAX PLANCK INSTITUTE chooses Woven Systems Ethernet Fabric Switching

Santa Clara, Calif. — April 29, 2008 — Woven Systems™ Inc., the leading innovator of Ethernet Fabric switching solutions based on its patented vSCALE™ technology for data centers and high-performance computing (HPC) clusters, today announced that the prestigious Max Planck Institute for Gravitational Physics (Albert Einstein Institute, Hannover, Germany) is using Woven’s EFX 1000 10 Gigabit Ethernet (10 GE) Fabric Switch and TRX 100 Ethernet Switch in a large HPC cluster to search for gravitational waves predicted by Albert Einstein’s General Theory of Relativity. Woven’s Ethernet Fabric provides access to more than one petabyte of data supplied by a worldwide network of gravitational wave detectors. The data is distributed to compute cluster nodes via the Woven all-Ethernet solution. “Gravitational wave research is one of the most exciting fields of science. It will open a completely new window to the universe, and requires very large-scale and sophisticated computing tech...

Neterion virtualizes server-to-storage I/O with 10GbE adapters

The newly released Neterion X3100 Series adapters enable multiple guest operating systems of a virtual server environment to share one physical adapter through the use of physically separate I/O channels. The X3100 offers 17 independent hardware I/O paths directly in silicon, each with independent reset and control that can be matched to a 16-core CPU virtualized server with one extra path for management. Neterion's president and CEO, Dave Zabrowski, says having true independent I/O paths directly in silicon overcomes the severe manageability limitations of other firmware-based implementations of I/O paths, which can't perform true I/O virtualization. Neterion's silicon-based architecture, he says, enables applications on virtual machines to deliver Quality of Service (QoS) with dynamic allocation of I/O bandwidth that can instantly increase to full 10Gbps throughput when required. Here for more news

Storage Virtualization: Gear6 VP interviewed; CacheFx's Centralized Caching Mechanism discussed

This was a truly interesting briefing I had in storage. Gary discussed the most neglected part of storage: scaling by performance! OK, so if we break up the vendors we have today: Traditional model: add bulks of disks and expect the disk scalability problems to go away; this is old and very cost-prohibitive. Smarter vendors: pack all kinds of disks in a box and sell a disk-by-disk, on-demand model; this is cheaper but by no means addresses your problems in performance and caching scalability. Niche vendors: should start addressing the performance problems as a default, built-in option in their storage delivery model; this helps them grow painlessly. Anyway, check out our interview here: Tell us about yourself, why and how you got started at Gear6? I’ve spent the last decade in the network storage industry working at several systems and networking companies. One thing that always surprised me was the type of configuration acrobatics that IT administrators ofte...

Coming up: My chat with Marathon technologies; everRun VM to be hot topic!

I'll be chatting with Jerry and others tomorrow at Marathon, and we will walk through the trend of, and the path to, accepting the software appliance approach. Marathon is gaining a lot of traction and, just like Vyatta, I think these guys are part of the next big wave we should watch out for! P.S: I will soon be writing a cool article on the next wave, something I will call, and maybe present in an upcoming keynote as, the "CC 1.0" through "CC 4.0" phases. Originally I wanted to do Virtualization 5.0 through 8.0, but we may not be talking about virtualization anymore in the next decade. High availability, global delivery and high performance will be central. We'll also be calling the FastScale folks; I'll be chatting with Lynn LeBlanc, CEO at FastScale. Check out our previous interview with Jerry.

FastScale Technology Joins HP BladeSystem Solution Builder Program

We did a demo with FastScale some time back and would have loved to talk to the folks in Cannes. But not all is lost: we will be chatting with the CEO soon and asking where they are heading with this HP partnership. Santa Clara, Calif., March 10, 2008 – FastScale Technology, Inc., provider of next generation software virtualization and provisioning solutions, today announced that it has joined the HP BladeSystem Solution Builder Program. By utilizing the technologies of FastScale and the capabilities of HP Virtual Connect, which permits real-time changes between server environments and local area network (LAN) and storage area network (SAN) domains across the data center, customers can deploy enterprise-class applications on their BladeSystem servers in seconds, whether servers are physical or virtual. FastScale’s flagship products, FastScale Composer Suite and FastScale Virtual Manager, address the urgent market need for more dynamic software infrastructure in large IT org...

SMC to ship 10G Ethernet solutions, but they sure are pricey!

Doing the math: if I had to purchase 100 servers for VMware or Citrix virtualization and had to replace 6 NICs with 2 of these, I'd be spending . If I went for 2 NICs from SMC, then at the retail price of those NICs I'd probably get a lot more volume and speed as well, but I'd be spending a whopping 260,000 Euros (around $400,000) just on the NICs! Anyway, their demo is here: This demonstration highlights how the combination of Solarflare’s virtualization-optimized Solarstorm™ Ethernet controller and 10Xpress® 10GBASE-T PHY delivers true 10 gigabit performance over standard Ethernet links for the most advanced applications. Solarflare-based 10GBASE-T NICs and switches, now shipping from SMC Networks, enable enterprises and data centers to easily and cost-effectively upgrade their existing networks and deploy demanding next-generation applications such as server and storage virtualization, streaming rich-media on demand and Web 2.0 that have ever-growing bandwidth requirements...
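For what it's worth, the quoted totals above imply a per-NIC price of roughly $2,000. A quick sketch of that back-of-the-envelope math (the per-NIC figure is inferred from the quoted ~$400,000 total, not taken from any published price list):

```python
# Rough version of the NIC cost math in the post.
# Per-NIC price is inferred from the quoted ~$400,000 total
# for 100 servers with 2 NICs each; it is not a list price.
servers = 100
nics_per_server = 2
total_usd = 400_000  # quoted total spend on NICs

price_per_nic = total_usd / (servers * nics_per_server)
print(price_per_nic)  # 2000.0 USD per 10GBASE-T NIC
```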

FastScale enhances data center automation capabilities with new release of FastScale Composer Suite

This just in... Santa Clara, Calif., February 20, 2008 – FastScale® Technology, Inc., provider of next generation software virtualization and provisioning solutions, today announced the newest release of its flagship product, FastScale Composer™ Suite. FastScale Composer Suite is the first technology of its kind that fully automates the process of building, deploying and managing server software environments and delivers an end-to-end server software management solution for enterprise-class data centers and Web farms. The latest release, FastScale Composer Suite 2.0, adds new capabilities and enhancements focused on scalability and configuration management of large, multi-thousand server installations. FastScale Composer Suite completely automates the labor-intensive, error-prone task of building, streamlining and managing server software environments throughout the lifecycle. With FastScale Composer Suite, software environments average 99% smaller than traditional golden image...

Real Customer Stories: Rackspace customers suffer from "departmentalization fever"; won't share virtual infra

Well, I didn't hear that from John Engates, whom I spoke to a few days back myself. But customers indeed are the ones who are playing really tough. John does have a point about the I/O, security and performance as well. I am taking up this initiative myself, within my new employer (AtosOrigin), to have some of these optimization tools as a default design option within our data centers. Anyway, here are the stories: Engates said that in fact security was not a major concern when it came to virtual machines (VMs) - until someone finds a way of tunnelling into the hypervisor. He was clear, however, to point out that security issues still remain within VMs themselves and that they need to be managed from a security standpoint just like physical boxes. Rather, his customers were concerned about sharing a physical server because they would not have visibility into what other customers were doing on the hardware. Since it makes no sense to virtualise an application that hammers the hardwar...

xkoto grows 150% in 2007; relocates HQ to Mass., U.S.

WALTHAM, Mass., Jan. 29, 2008 – In the fourth quarter of 2007, data virtualization middleware solutions provider xkoto Inc. reported significant revenue growth of more than 150 percent over the same period in 2006. The company's growth and momentum was fueled by the increasing demand from enterprise customers requiring highly reliable and scalable solutions capable of ensuring continuous access to corporate data. Highlights of the quarter and full year include the significant expansion of xkoto’s management team, distribution channel and partner programs, as well as new customer adoption in North America and Europe and a major new release of xkoto’s flagship product, GRIDSCALE®. “Customers and prospects in virtually all data intensive segments require high performance, highly reliable, scalable access to data for mission-critical systems. As a result, we’re witnessing the rapid adoption of xkoto’s proven data virtualization technology,” said David Patrick, President and CE...

NetEx HyperIP Acceleration Software Scales to Wire Speeds

This just in from our JPR folks. We will chat with the NetEx folks this coming Wednesday! MINNEAPOLIS, MN – January 28th, 2008 – NetEx, the leader in high speed data transport over TCP, today announced a new version of its award-winning HyperIP® software technology that boosts performance by almost 50 percent to deliver near-native wire speed performance of 800 Mb/s, currently residing on a non-proprietary, off-the-shelf Intel appliance. This new speed standard enables NetEx customers to purchase the HyperIP data acceleration solution on an industry standard platform at 1.5 Mb/s and then add performance as needed up to 800 Mb/s, all in the same appliance with no hardware upgrades required. Customers can purchase the performance they need instead of being forced to purchase a specific hardware model with vendor-imposed performance limits, as offered by other WAN optimization products that lack the capabilities to compete in enterprise replication and recovery environments. As bandwidth requi...

Marathon Technologies CTO interviewed; HA in Virtual Infrastructure discussed!

I had a great conversation with Jerry, the CTO of Marathon Technologies, a couple of days back. What I really find very interesting is that they are committing to Open Source and Xen technology. They are teaming up with Citrix to bring virtualization and the HA (High Availability) to the masses. Jerry's profile (from the site): Jerald H. Melnick, Chief Technology Officer Jerry is responsible for Marathon’s technology roadmap that is driving the convergence of high availability and virtualization technologies. Under Jerry’s leadership, Marathon is developing the first-ever fault tolerant-class software solution for virtual environments. Built in partnership with XenSource, this cutting edge product was awarded “Best New Technology” at VMworld 2007. Before joining Marathon, he held executive positions at PPGx, Inc. and Belmont Research as well as management and technical roles at Digital Equipment Corporation, where he was responsible for the development and deployment of mission-cr...

Virtualization performance: What happens to Virtual I/O?

With the ever-growing adoption of server virtualization, the equilibrium between processor and I/O resources is now being disrupted. At the risk of oversimplifying, over the years, as a result of Moore's law, processor performance capacity had grown faster than other system elements resulting in a glut of CPU resources. Server virtualization is a way to leverage this overcapacity thereby improving processor efficiency albeit at the expense of increased contention over I/O resources. As a result, system designers are increasingly forced to favor servers weighted heavily toward I/O capacity -- e.g., larger, more expensive rack configurations are selected over lower cost servers having identical CPU and memory characteristics -- due solely to the fact that they can accommodate more Ethernet NICs and Fibre Channel host bus adapters (HBAs). Link

Did you try Marathon's ROI calculator?

If not yet, then I'd suggest you do. Try loading it with IE; Firefox issues are being resolved! Assumptions for the calculator: server consolidation is conservatively estimated based on the number of servers: fewer than 50 servers virtualized – 8:1 consolidation ratio; between 50 and 200 servers virtualized – 10:1 consolidation ratio; 200 or more servers virtualized – 12:1 consolidation ratio. The server consolidation ratio is halved for productivity and business-critical servers being protected with everRun VM. Downtime costs are based on industry estimates for a 350-person company. The ROI model prorates for a bigger or smaller company based on the number of servers entered. While server virtualization improves IT efficiency, the model does not assume any reduction in staff. Additional benefits such as savings from reduced provisioning time and reduced recovery time have been excluded from this model. Design and implementation cost estimates double at 200 servers to account fo...
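The tiered consolidation ratios in those assumptions are easy to restate in code. A minimal sketch (the function name and the placement of the everRun halving rule are my own; the real calculator's internals aren't published):

```python
def consolidation_ratio(num_servers, protected=False):
    """Consolidation ratio tiers as stated in the calculator's assumptions."""
    if num_servers < 50:
        ratio = 8       # fewer than 50 servers virtualized: 8:1
    elif num_servers < 200:
        ratio = 10      # 50 to 200 servers virtualized: 10:1
    else:
        ratio = 12      # 200 or more servers virtualized: 12:1
    # Ratio is halved for productivity/business-critical servers
    # protected with everRun VM.
    return ratio / 2 if protected else ratio

print(consolidation_ratio(100))        # 10
print(consolidation_ratio(250, True))  # 6.0
```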

When Virtualizing, take everything into account!

Why else do you think it seems such a scary scenario to many wanting to put virtualization into production? We all know how it goes; test and development environments are all taken rather "less seriously". Companies romance with it, IT staff has something to play around with, and production can wait until the real standard has been established. While this might take longer than the virtualization vendors would like, this is the reality. We are expecting server virtualization sales to grow from $800M to at least $7Bn. But I want to ask some serious questions at the virtualization conference, where I will be speaking at the strategy session to CxOs and IT managers; sure, IT admins should also sit in the hall. I would want to ask simple things like: VM backup: you've virtualized, did you cover the backup scenario? Security: how did you do it? Apps: have you really tested/benchmarked your apps on the virtual platforms? You are in the test and deve...