
Posts

Showing posts from November, 2008

Social Clouds

Well, you and I know it's the internet. Cloud vendors will give it all sorts of fancy names and battle it out. Steve has some interesting observations there. Sun Microsystems has been under particular pressure to realign; analysts and even Sun employees such as Tim Bray have been outspoken in their pleas for Sun's executive team to jettison unprofitable ventures in favor of some kind of cloud strategy. CEO Jonathan Schwartz has hinted in recent months at some wood behind what Sun calls its Grid effort, and will this week roll out Sun's JavaFX 1.0 front-end technology to compete with Flash/AIR and Silverlight. JavaFX could be one of the casualties if Sun decides to pare technologies along with the 18% of its employees it's trimming. Other cuts might include the NetBeans development environment, which has kept pace with or even bettered Eclipse in quality but not in uptake, and OpenOffice, the free Office replacement. Unfortunately for Sun, Google Docs has stolen some of the strategic thunder w

National Cloud Computing Day: Kill Shelfware, get Cloudware

Can't get any crazier than this. Check it out... The event is being organised by KashFlow to encourage UK small businesses to evaluate online applications and speed up the migration from traditional word processing, spreadsheet, accounting, email and contact management systems installed on computers to their web-based counterparts. Small businesses will be able to share their experiences throughout the day online. A post-event survey will reveal important small business opinions about how cloud computing will help the UK to work more efficiently. KashFlow managing director Duane Jackson revealed, "Throughout the world businesses are discovering web-based software and it's important that UK small businesses don't get left behind. They are already enjoying the benefits of a wide range of online services which are virtually indistinguishable - and often superior - from their installed counterparts. "The Cloud Computing Day challenge is simple - for small businesses t

Sun Microsystems future uncertain

Silicon Valley has a new parlour game: what's going to happen to Sun after the cuts? Rumours suggest it will merge with EMC, that HP or IBM will take it over, that Fujitsu will buy its hardware business, or even that the StorageTek storage business unit will be spun off. Do Sun's leaders want to be redeemed or to retire? The company has bitten the downsizing bullet and announced up to 6,000 job losses following on from a $1.7bn quarterly loss. It's made its big open storage announcement and there are server announcements coming too. But investors have seemingly discounted these moves, as the company's share price is at $3.17 with its market capitalisation at $2.34bn, about the same as its cash pile, effectively rendering the company worthless. This is lower than the $3.62 stock price before the cuts were announced, and before a raft of technology and product announcements like the open storage one. The Register

High Speed Cloud Computing: Startup ManjraSoft takes a shot at it!

Revolutionary new software which harnesses the power of networked computers to analyze data at high speeds is being developed by new start-up company Manjrasoft Pty Ltd and researchers within the University of Melbourne, Australia. The technology enables Cloud computing, the next generation of utility/distributed computing, supporting high-speed application processing across Windows desktops and servers. "Whether it be complex drug development problems, investment risk or conveyor belt logistics, the software "Aneka" offers versatility and cost savings to businesses by extracting greater productivity from existing infrastructure and data intensive applications," said Associate Professor Raj Buyya of the University of Melbourne's Computer Science & Software Engineering Department, who has led the research. The unique software "Aneka" (Sanskrit for 'many in one') offers the choice of multiple processin

OpenService announces Risk-Based Security Metrics in InfoCenter 5.1

A quick look at the OpenService architecture: OpenService, a provider of log and IT risk management solutions, has announced a new risk-based security metrics reporting system to help corporations monitor their IT risk trends and determine the effectiveness of their security controls. Available in the new release of its software product, InfoCenter 5.1, this solution was beta tested and first deployed on IBM BladeCenter servers. According to OpenService, InfoCenter 5.1 measures risk automatically by analyzing events reported by security and network devices, operating systems, databases and applications. The risk level of each event is scored as a function of threat, vulnerability, and asset value, an industry-standard approach for calculating risk. The company said that the algorithm it developed, called Risk-Weighted Event Scoring and Thresholding (RWEST), scores and correlates events from a wide range of servers, devices and applications and, based on this, supports the visualization
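The threat/vulnerability/asset-value formula mentioned above is the classic multiplicative risk model. Here is a minimal sketch of how such scoring and thresholding could work; all names and numbers are hypothetical illustrations, not OpenService's actual RWEST algorithm.

```python
from dataclasses import dataclass

@dataclass
class Event:
    threat: float         # 0..1: likelihood/severity of the reported threat
    vulnerability: float  # 0..1: how exposed the affected system is
    asset_value: float    # 0..10: business value of the affected asset

def risk_score(event: Event) -> float:
    """Classic multiplicative model: risk = threat x vulnerability x asset value."""
    return event.threat * event.vulnerability * event.asset_value

def alerts(events, threshold):
    """Thresholding step: keep only events whose score warrants attention."""
    return [e for e in events if risk_score(e) >= threshold]

events = [Event(0.9, 0.8, 10.0), Event(0.2, 0.1, 5.0)]
print([round(risk_score(e), 2) for e in events])  # [7.2, 0.1]
```

The same low-severity event scores very differently against a high-value asset, which is the whole point of weighting events by asset value rather than by threat alone.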

Cloud Computing Governance: Regulatory and Legal requirements shouldn't be forgotten!

An RCC, or Regulated Cloud Computing, is what we must strive for. More here... "Businesses must ensure they select only cloud computing services that enable them to avoid risk entirely or manage it to a reasonable level," says Scott. Cloud computing is still an immature business model, and issues around risk and compliance still need to be ironed out. Kelly Dempski, director of research at Accenture Technology Labs in France, says everyone is still trying to figure out how best to use the cloud computing model. "We are still in the period of learning and just beginning to come up with best practices," he says. A lack of business process management to go with the services offered by cloud computing is another reason businesses should be cautious about the model. Although companies are saving up to 40% on project costs by deploying CRM applications using the cloud computing model, the benefits could be short term, says Michael Maoz, analyst at Gartner Research. These serv

Gartner: Microsoft's Cloud to be more robust and green

The company is planning to invest heavily in its online infrastructure to meet the expected demand. Providers, including arch-rival Google as well as Adobe and Amazon, are now increasingly able to replace and augment the functionality of local PCs with web-based services running on large distributed server farms. For Microsoft, this capability represents an attack on its traditional PC operating system and office application business. To counter this, it will set up 20 new data centres over the next 20 years at a cost of a billion dollars each. Debra Chrapaty, Microsoft's VP of global foundation services, speaking to the US media, said: "We're going to reinvent the infrastructure of our industry." It took Chrapaty's team two years to set up Microsoft's first cloud data centre, which opened in Washington State last year. Because of the high energy requirement, the server farm was purposely located close to a new hydroelectric power station. It took the te

Information Discovery in the Cloud: How are my feedstats doing?

We rely on each other's brains. We freely pick stuff up on the web, "find" things and store them in our local storage (some call it stealing), and we keep listening to each other. So I was curious how my feedstats were doing, and as you can see, they're doing pretty well.

Cloud Computing APAC: SalesForce goes to India

Salesforce.com sees India as one of the potential markets for cloud computing. "Cloud computing is the next stage of SaaS. As we have had more than 10 years of experience in this foray, cloud computing was the obvious next step. When we were holding our press conference about it in San Francisco, a lot of Indian companies were very excited and willing to employ this technology," said Doug Farber, Vice President of Operations, Asia Pacific, Salesforce.com. Bandwidth is a problem in India and everyone is aware of that. So, how will a technology like this, which is dependent on bandwidth, survive in India? Farber said, "Yes, bandwidth is a problem in India. But we have found ways to work around it. For example, one of the things that we have done is something called offline PDA. A person is connected through his iPhone or laptop, in which the data is stored and synchronized as and when the device gets its connection back. So, there are mechanisms to work around su

Cloud Signaturing: Signaturing your Data Centers

Dave Graham, a member of the Cloud Computing Google Group, has been working on a neat COS (Cloud Optimized Storage) mindmap. Check it out here; he isn't done yet but has put a good amount of time into it. So I thought: we talk about signaturing at the storage level, how about signaturing the whole Data Center? With the Feds and governmental interventionism looking rather inevitable, we would do well to have some properly signatured data centers! This is my first attempt at the Xmind mindmap stuff; I guess it will keep me busy in the coming days. So say hello to the rather embryonic version of the "Cloud SIGnature Framework". The final draft will be posted on the Xmind site.

Storage Wars in tight economy: HP and IBM battle as EMC, Sun languish

So, Big Blue cranked up its PR machine, pulled some statistics out of its, er, sales database, and wanted everyone to know that more than 5,000 companies worldwide have replaced iron from HP, Sun Microsystems, and EMC since 2004 and moved to IBM alternatives. Breaking this down a little, IBM is claiming that in less than one year, more than 150 customers have moved onto its System z mainframes from HP and Sun platforms. When pressed for more details about where these customers were coming from, IBM's PR people said they would get me some answers, but all I got was static. And since the inception of the so-called Migration Factory that IBM set up "several years ago," more than 1,300 customers have been moved to Power-based servers from Sun and HP platforms, and this year alone, another 800 customers have moved onto System x iron (which presumably also includes BladeCenter blade servers). Yes, IBM is mixing different categories and different time scales, but this

SLAs in the Cloud: Disaster-Proofing the Cloud

A good read, this: It is virtually impossible for a cloud vendor to offer a strong SLA, for two reasons. First, the cost advantage of the cloud is based on shared resources, although IBM (NYSE: IBM) is now pushing the idea of creating and running private clouds for its large customers. But even if a cloud is private, the fact that many applications are running on a shared infrastructure increases the risk of catastrophe. The second reason is that offering a really strong SLA, one that covers the lost revenue from an outage, is just too risky. It means putting the vendor's entire enterprise at risk, essentially selling a form of insurance on the cheap. That's why in the cloud and elsewhere, SLAs will never be that strong. Without SLAs to provide much comfort, the real remedy for managing catastrophic outages is redundancy. If your cloud infrastructure fails for a critical system, you must be able to bring up a redundant infrastructure that performs the same funct
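The gap the article points to, SLA credits versus actual lost revenue, is easy to see with back-of-the-envelope arithmetic. A minimal sketch with entirely hypothetical figures:

```python
def expected_outage_loss(outages_per_year, hours_per_outage, revenue_per_hour):
    """Expected annual revenue lost to outages of a critical system."""
    return outages_per_year * hours_per_outage * revenue_per_hour

def sla_credit(monthly_fee, credit_fraction):
    """Typical SLA remedy: a fraction of one month's service fee, not lost revenue."""
    return monthly_fee * credit_fraction

# Hypothetical figures: half an outage a year, 8 hours each, $50k/hour at stake,
# against a $10k/month cloud bill with a generous 25% service credit.
loss = expected_outage_loss(0.5, 8, 50_000)
credit = sla_credit(10_000, 0.25)
print(loss, credit)  # 200000.0 2500.0 -- the credit covers roughly 1% of the exposure
```

Even a generous credit is tied to the service fee, not the business impact, which is why redundancy rather than the SLA is the real remedy.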

Motley Fool: "VMware blew it"

Well, according to the Fools and also VMware's own staff, Transitive was a great firm for a potential acquisition. Read on... You blew it, VMware (NYSE: VMW). Or maybe you didn't. But whatever the behind-the-scenes machinations, virtualization toolmaker Transitive is now in IBM's (NYSE: IBM) hands. (Big Blue acquired the firm last week for an undisclosed sum, Computerworld reports.) Transitive should have been yours, VMware. Here's what your team said about the company in a blog post from May, ahead of the digital VMworld.com conference: Transitive does something quite interesting -- they can dynamically translate from one machine architecture to another. This can be quite complementary to VMware and our flavor of virtualization. You can, for instance, take your apps compiled for the Solaris/SPARC platform, move them to your new x86 box running ESX and Linux, and go to town. IBM's interest is easy to understand. Transitive's code is baked i

Harrah's Entertainment bets on Salesforce Cloud

Harrah's deployment of crucial business applications on Salesforce's platform is surely one of the biggest bets to date by a major enterprise on so-called cloud computing, an information technology architecture in which third-party vendors deliver software to customers over the Internet. Harrah's will rely on Force.com for applications, such as room reservations, that are directly linked to revenue generation. The applications will be used by agents based at casinos, at Harrah's 50 branch offices and by about 250 independent reps who work on behalf of the company. Among other things, the new architecture will allow reps to make room requests over the Web and receive confirmations back within hours. Additionally, airline schedules will be integrated directly into Harrah's travel management system, which will make it easier for the casino operator to schedule travel for VIP guests. Source

VMware loses top security researcher as well!

Well, this is rather grim news from a security standpoint. VMware's acquisition of Determina and its rather dubious Bluelane acquisition, from which apparently no one made any profit, are putting VMware under solid pressure. I really don't know if all this is because of the restructuring within VMware or whether some other shift is about to happen. VMware is surely on some rather contradictory paths, given that it is under tremendous pressure from the security community to provide solid answers around Security and Compliance. VMware has made progress around PCI DSS participation, but strategically it is on the losing side. The Bluelane acquisition has left a lot of people with a bitter taste in their mouths. Speaking to a Bluelane employee, who wished to remain anonymous, there was a lot at play since Feb 2008: during the Cannes VMworld, some sort of deal was struck to buy Bluelane; VMware too placed its bets but didn't end up committing till the very end. Bluelane languis

Skytap's 10 reasons for Cloud Adoption madness in 2009

-- Eliminates infrastructure constraints: cloud-based services allow companies to dynamically scale virtual environments quickly and cost-effectively based on business demand.
-- Turns upfront CapEx into OpEx: cloud services provide the same powerful computing resources without the large upfront investment. Additionally, usage is billed hourly, enabling organizations to pay for what they use without investing in unused capacity.
-- Brings software to market faster: development organizations can support critical business initiatives by delivering applications faster using the cloud. A virtual lab enables development and testing environments to be provisioned immediately, and built-in collaboration tools cut unproductive cycle time, resulting in shortened delivery schedules.
-- Applications run unchanged in a virtual lab: companies do not need to modify or rewrite applications for the cloud using a virtual lab, so they can use cloud resources as an
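The CapEx-to-OpEx point in the list above is simple arithmetic. A quick sketch with entirely hypothetical numbers shows why hourly billing matters for bursty workloads like test labs:

```python
def capex_cost(servers, price_per_server):
    """Upfront purchase: you pay for peak capacity whether it's used or not."""
    return servers * price_per_server

def opex_cost(instance_hours, rate_per_hour):
    """Hourly cloud billing: pay only for the instance-hours actually consumed."""
    return instance_hours * rate_per_hour

# Hypothetical test-lab workload: bursts totalling ~2,000 instance-hours a quarter.
print(capex_cost(10, 5_000))   # 50000 -- ten servers bought up front
print(opex_cost(2_000, 0.40))  # 800.0 -- the same burst capacity billed hourly
```

The comparison flips, of course, for workloads that run flat-out around the clock, which is why the build-versus-rent question keeps coming up in these posts.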

A smarter CC strategy

Fool says: Impressive, yes? I'll say, but for cloud computing to endure, it'll have to give IT managers and other industry participants more than just a good do-it-yourself, zero-infrastructure starter kit. NetSuite (NYSE: N) just booked a deal with Hewlett-Packard (NYSE: HPQ) that should help to achieve that goal. HP will grant NetSuite access to its channel of 15,000 resellers in exchange for support resources. The result will presumably be a broader embrace of cloud computing among traditional systems integrators and value-added resellers -- firms that profit by customizing packaged software to fulfill a business need. Think of what Accenture (NYSE: ACN) and India's Infosys (Nasdaq: INFY) do, but on a smaller, more specialized scale. Cloud computing has been considered a bane for VARs and SIs because there's no "software" to customize; all the code resides on a distant web server. Source

IBM launches a Resilient Cloud Validation program

IBM calls it the Resilient Cloud Validation program. Big Blue hopes to work with cloud providers to offer a program that reassures businesses that a cloud doesn't go down often, as well as helping answer other questions that keep businesses from trusting the cloud model. Coincidentally (yeah, right), companies hoping to gain that seal of approval will need to work with IBM's cloud consulting practice. IBM is also announcing as part of that practice that it can help answer a question I've long bothered cloud providers with -- when is it most cost effective to outsource your application to a cloud, and when should you build your own, or at least buy your own, servers? IBM has been pretty quiet about its cloud efforts, in part because it didn't want to hack off large customers buying a ton of IBM servers by competing with them. The computing giant hadn't been pushing its own cloud business until a half-hearted announcement at the end of July, about a month and a half after a company exec had

Microsoft's Cloud Strategy: How they plan to rout Google

This could come as a massive surprise to Google. The economy is under pressure. A lot of Microsoft's customers will be forced to consider moving their applications, if not their whole data centers, to the clouds. Google may have had luck with its silent rise in the search/query 1.0 world, but this is more complex. Microsoft is investing heavily in its 20-odd hi-fi data centers, and it surely wants to get into the data crunching game as well. Will the consumers listen? And who they will turn to is the big question. The rush for gold has begun, but I hope that those billions of dollars haven't been thrown in for nothing. What if no one comes to your data center? We'll soon do a detailed SWOT analysis of the Cloud Data Center build-up. For now, check out the BW article: Corporate America is increasingly leaving computing to the experts. Why go to the trouble and expense of building and managing complex systems to handle your spiraling data-crunching needs when another comp

Google laying off 10,000 workers; under-reporting employee head count!

Google has been quietly laying off staff, and up to 10,000 jobs could be on the chopping block, according to sources. Since August, hundreds of employees have been laid off, and there are reports that about 500 of them were recruiters for Google. By law, Google is required to report layoffs publicly and with the SEC; however, Google has managed to get around the legal requirement. In fact, one of the ways Google was able to meet Wall Street's Q3 earnings expectations was by trimming "operational" expenses. Google reports to the SEC that it has 20,123 employees, but in reality it has 30,000. Why the discrepancy? Google classifies 10,000 of the employees as temporary operational expenses or "workers". Google co-founder Sergey Brin said, "There is no question that the number of workers is too high". Source

VMware loses another high-profile executive; Security Chief Nand to run OpenDNS

The head of VMware's security group has left to join San Francisco's OpenDNS, a startup that provides Internet infrastructure services. Nand Mulchandani took over as CEO of the DNS (domain name system) service provider on Nov. 5, replacing founder David Ulevitch, who will remain as the company's chief technology officer, according to a company spokeswoman. Mulchandani is the latest VMware executive to depart after company co-founder and CEO Diane Greene was ousted in July of this year. In September another VMware co-founder, Chief Scientist Mendel Rosenblum, resigned. Richard Sarwal, who led the company's research and development efforts, also left around the same time. Mulchandani had been with VMware just over a year, after the virtualization software vendor acquired his security company, Determina. As VMware's senior director for security products, Mulchandani was in charge of VMware's security strategy, considered critical to the company's future success.

Open Source Storage Management with Aperi, an IBM-driven project

With the U.S. and many other nations in deep recession, we should not fool ourselves or our consumers. We ought to help them also take a look at the storage management capabilities and strengths of this industry-leader-sponsored initiative. Surprisingly enough, HP and EMC aren't yet on the board. Current members: Brocade, CA, Cisco, Emulex, Fujitsu, IBM, LSI Logic, NetApp, Novell, YottaYotta. Check out their demos here and here; the installation demo is here. I do see that their blog is not frequently maintained; the project schedule says it should be going RTM by Jan 2009.

My next Cloud Ideation lecture for a University MBA class

I will lay emphasis on HPC, Genome, Research and Neural implants. Sergey's wife Anne is on to something which has been my childhood dream, both scary and exciting. One of those heady things that will happen to us, and maybe to the next generations, as we evolve further into super human beings. The Ideation Age is upon us. Cloud will be the vehicle that takes us there. Anyway, a couple of shots I may speak about: I have been asked by several researchers and academics to share some thoughts and on some occasions to co-create stuff. There will be many such activities in 2009.

Data Center Predictions 2009 - Part 2: Cloud Computing, MNSPs and Mobility will have long term impact

Today we have some great use cases (possible input for eventual practices) and amazing (Amazoning?) models coming out of the cloud. Clearly we see that the years of crunching experience that firms like Google (search, which I have often called a Query 1.0 Framework) and Amazon (which has moved to, or should I say, had its foundations laid in, the Query 2.0 model) have accumulated will put them in a pioneer's role. So these crunch gods are today leading the show; we do have many players such as Microsoft, IBM, HP etc. going after this model, and many may seem to make a quick and early tie with Google and Amazon, but is it enough for them to succeed as Cloud gods? We know one thing for sure: Amazon, due to its accidental choice of the Query 2.0 platform, may have an edge over the rest of the parties. For now. But what does the future hold for data crunching at minimal cost? Obviously there is a dire need to think up a business model which can seamlessly help eke out a plan that will help firm

VMware does a performance study on AMD's RVI

Nice read, this doc. In a native system the operating system maintains a mapping of logical page numbers (LPNs) to physical page numbers (PPNs) in page table structures. When a logical address is accessed, the hardware walks these page tables to determine the corresponding physical address. For faster memory access the x86 hardware caches the most recently used LPN->PPN mappings in its translation lookaside buffer (TLB). In a virtualized system the guest operating system maintains page tables just like in a native system, but the VMM maintains an additional mapping of PPNs to machine page numbers (MPNs). In shadow paging the VMM maintains PPN->MPN mappings in its internal data structures and stores LPN->MPN mappings in shadow page tables that are exposed to the hardware. The most recently used LPN->MPN translations are cached in the hardware TLB. The VMM keeps these shadow page tables synchronized to the guest page tables. This synchronization introduces virtualization over
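The LPN -> PPN -> MPN composition described above is easy to model. A toy sketch (page numbers only, all mappings hypothetical) of how a VMM's shadow page table precomposes the guest and VMM mappings, and how the TLB caches the result:

```python
# Page numbers only; real page tables are multi-level radix trees walked by hardware.
guest_pt = {0: 10, 1: 11, 2: 12}        # guest OS: LPN -> PPN
vmm_map = {10: 100, 11: 101, 12: 102}   # VMM: PPN -> MPN

# Shadow paging: the VMM precomposes LPN -> MPN and exposes that table to the
# hardware, resynchronizing it whenever the guest edits its own page tables.
shadow_pt = {lpn: vmm_map[ppn] for lpn, ppn in guest_pt.items()}

tlb = {}  # hardware TLB: caches the most recently used LPN -> MPN translations

def translate(lpn):
    """One memory access: a TLB hit returns at once, a miss walks the shadow table."""
    if lpn in tlb:
        return tlb[lpn]
    mpn = shadow_pt[lpn]
    tlb[lpn] = mpn
    return mpn

print(translate(1))  # 101: LPN 1 -> PPN 11 -> MPN 101, now cached in the TLB
```

Under nested paging (AMD's RVI, which the study evaluates), the hardware instead walks both the guest and VMM tables itself on a TLB miss, removing the synchronization overhead at the cost of a longer walk.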

MSFT or VMware, Dell/EqualLogic wins anyway!

When the decision was made to go with VMware at ManageNet, Hyper-V was not ready for production environments. The company had also considered Virtual Iron, but was not confident in the level of local support compared with VMware. Dell wins in the end. Despite their differences in virtualisation software selection, both companies at least have one thing in common: Dell. UXC purchased Dell rack-mounted servers and blades for its new infrastructure build-out, and ManageNet commissioned iSCSI storage systems from the Dell-owned EqualLogic. Rick Becker, Dell's vice president for Software and Solutions, said that internally the company migrated 326 PowerEdge 2650 servers to 21 blade servers, saving an estimated US$800,000 in operating expenses. "We believe by adopting technology and negating the need for server space we will never need to build another data centre," Becker said. Source

IBM buys Transitive, a multi-platform virtualization vendor

ARMONK, NY - 18 Nov 2008: IBM (NYSE: IBM) today announced it plans to acquire Transitive Corporation, a privately held technology company headquartered in Los Gatos, California, with a research and development team in Manchester, United Kingdom. Financial terms were not disclosed. Transitive is a leader in cross-platform virtualization and a pioneer in developing technologies that allow applications written for one type of microprocessor and operating system to run on multiple platforms -- with little or no modification. As a result, the technology will enable customers to consolidate their Linux-based applications onto the IBM systems that make the most sense for their business needs. Transitive's breakthrough technology has earned the company 48 worldwide patents and numerous industry awards. This acquisition is part of IBM's strategy to help clients optimize the efficiency and productivity of their computing infrastructure and improve the utilization of the servers that ru

CapGemini and Amazon sign agreement to sell stuff in the Cloud

Capgemini has announced an agreement signed between Capgemini UK plc and Amazon Web Services, extending its Outsourcing portfolio with cloud computing services. Capgemini's new Center of Excellence, focused on cloud computing, will help its enterprise clients take full advantage of integrating cloud computing into their IT and business strategy. Capgemini's Cloud Computing Center of Excellence will initially have a team of Amazon Web Services-trained professionals located in North America, Europe and India to help clients evaluate and implement Amazon's current offering; this will evolve as other providers' cloud computing capabilities mature. The centre will also offer Cloud Consulting, Development, Migration and Back-up Services. Under the right circumstances, leveraging the fast developing cloud computing market has the potential to create significant value for enterprise customers in terms of scale, flexibility, reducing time to market, environmental efficiency and

Amazon unleashes CloudFront, a content delivery web service

Amazon is beginning to get my attention and coming closer to what I would like in a typical Cloud Services vendor. Why? These are all what I call prerequisites for a Global Sustainable Cloud. Service highlights:
Fast – Using a network of edge locations around the world, Amazon CloudFront caches copies of your content close to end users, lowering latency when they download your objects. The service also gives you the high, sustained data transfer rates needed to deliver large popular objects to end users at scale.
Simple – A single API call lets you get started distributing content from your Amazon S3 bucket through the CloudFront network. And, since there's no need to negotiate with a sales person, anyone can get started in just minutes.
Designed for use with other Amazon Web Services – Amazon CloudFront is tightly integrated with Amazon S3, which holds the definitive versions of your content. CloudFront also integrates with other AWS Services, like Amazon Elastic C
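The "Fast" highlight above is classic edge caching: serve repeat requests from a location near the user and fall back to the origin (the S3 bucket) only on a miss. A toy sketch of that behavior follows; this is not the CloudFront API, just the caching idea, with a hypothetical in-memory origin:

```python
# Hypothetical origin store standing in for an S3 bucket.
origin = {"/logo.png": b"...image bytes...", "/app.js": b"...script..."}

class EdgeLocation:
    """One CDN edge: serves cached copies locally, pulls misses from the origin."""

    def __init__(self, origin_store):
        self.origin = origin_store
        self.cache = {}

    def get(self, path):
        if path in self.cache:
            return self.cache[path], "HIT"    # served near the user: low latency
        body = self.origin[path]              # long-haul fetch back to the origin
        self.cache[path] = body               # keep a copy close to nearby users
        return body, "MISS"

edge = EdgeLocation(origin)
print(edge.get("/logo.png")[1])  # MISS -- the first request goes back to the origin
print(edge.get("/logo.png")[1])  # HIT  -- repeats are served at the edge
```

Real CDNs add expiry and invalidation on top of this, which is why S3 remains the "definitive version" of the content while the edges hold disposable copies.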

EU wants Data Centers to comply with CoC

My talk today at a Dutch congress was about security and compliance. The funny thing to see was the bewilderment of the audience. This initial nudge by the EU is a mere reminder that resounding governmental interventionism is definitely in the offing. Compliance will come in many ways:
- There will be a strong focus on an organization's CSER policy; one may be asked to demonstrate it periodically
- GRC (Governance, Risk and Compliance) will play a major role in every strategy, and organizations will be tested and funded (or under-funded) accordingly
A snippet on the expected compliance w.r.t. Data Management and Security: The EU is asking data centre owners and operators to "voluntarily" sign up to a Code of Conduct (CoC), which will include oversight of their energy efficiency, in what could be green regulation through the back door. The European Commission has issued its Code of Conduct for Data Centres Energy Efficiency and invited data centre owners and operators to sign u

Akorri plugs BalancePoint in VMware vCenter

LITTLETON, MA--(Marketwire - November 19, 2008) - Akorri, Inc., the leader in performance and capacity management for the virtualized data center, today announced the availability of the BalancePoint™ Plug-In for VMware vCenter -- a new capability sold with its award-winning BalancePoint software solution that allows VMware administrators to use BalancePoint directly from their VMware vCenter console, simplifying the management of virtualized environments. "The BalancePoint Plug-In for VMware vCenter provides 'single pane of glass' management of a virtualized data center from within the vCenter console," said Jeff Boles, Sr. Analyst, Taneja Group. "BalancePoint complements the element management aspects of vCenter by providing end-to-end performance management for all virtual and physical infrastructure components. This type of detailed insight goes well beyond basic utilization monitoring, and is absolutely critical to companies when they virtualize busines

First look at Microsoft's SCVMM

It was easy to pick up virtual machines we had installed earlier, and also point to another host system running Virtual Server 2005 R2 under Windows Server 2003 and manage those VMs. SCVMM 2008 also provides integration for WS 2008’s new clustering support, and can be used to set up fault-tolerant VMs, as well as VMs which will preferentially attach to hosts which are part of a cluster. SCVMM 2008 can also now manage a set of clustered VMs together as a single unit. Apart from the expanded feature set, the GUI has been slightly enhanced, but still looks pretty similar to SCVMM 2007. Because Microsoft has written the 2008 version around its PowerShell (PS) command shell and scripting language, scripts and command files containing scripts can be executed to speed up tasks like migrating a VMware VM using VMotion. We could use PS scripting to shortcut a lot of tasks, although one problem was how SCVMM 2008 would deal with VMs that needed patching. Currently the

Storwize Moves Corporate Headquarters in Response to Rapid Adoption of its Cost-Saving Capacity Optimization Solutions

Los Gatos, Calif., November 18, 2008 – Storwize Inc., the leading provider of real-time capacity optimization solutions, today announced that it has relocated its corporate headquarters to a larger facility in Los Gatos, California. The move supports the company’s rapid growth and continued investment in sales, marketing, and support. Storwize solutions help organizations dramatically reduce costs by providing up to 15-times more capacity from their existing storage infrastructure, creating cost and operational benefits throughout the data lifecycle. “Even with IT budgets down almost 30% over the past two years, organizations still have to support continued data growth without increasing cost, and they’re turning to capacity optimization as a solution,” said Peter Smails, Vice President of Worldwide Marketing at Storwize. “The phenomenal success we’ve experienced based upon corporations’ aggressive cost-saving measures necessitated our move to a larger facility to support our rapid

Visual Studio 2010 Lab Management and Virtualization

Good to see some serious effort being undertaken on lab management and automation. You (or someone you know) may be asking yourself: why is this a good thing? Here's what the Visual Studio guys told me:

* 30% of testing time is spent setting up machines and labs
* Test and dev assets run at under 30% utilization
* "No repro" bugs often slip into production, impacting project success

The guys also told me: Unlike other tools, Microsoft's lab management capabilities are fully integrated into Visual Studio Team System, allowing teams to collaborate more effectively without having to deal with disparate tools. Lab management is fully integrated with the testing capabilities, allowing generalist testers to take quick checkpoints on failures and record rich bugs with links to the environment in the bug, which the developer can then open. It is also integrated into the build process, allowing customers to automatically trigger a virtual environment provisioning, buil
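The "checkpoint on failure, record a rich bug" idea described above can be sketched as a tiny test harness. This is a generic illustration only: the class and function names below are invented for the sketch and are not part of Visual Studio Team System.

```python
# Hypothetical sketch: when a test fails, snapshot the (virtual) lab
# environment and file a bug that links back to that snapshot, so a
# developer can reopen the exact failing state instead of chasing a
# "no repro" bug. All names here are assumptions for illustration.

class LabEnvironment:
    """Stand-in for a virtual lab environment that supports snapshots."""

    def __init__(self, name):
        self.name = name
        self.snapshots = []

    def checkpoint(self, label):
        snap = f"{self.name}/{label}"
        self.snapshots.append(snap)
        return snap


def run_test(env, test_name, test_fn, bug_tracker):
    """Run one test; on failure, checkpoint the environment and record a
    bug carrying a link to the snapshot."""
    try:
        test_fn()
        return True
    except AssertionError as exc:
        snap = env.checkpoint(f"fail-{test_name}")
        bug_tracker.append({
            "test": test_name,
            "error": str(exc),
            "environment_snapshot": snap,  # state the developer can reopen
        })
        return False


def failing_login_test():
    assert 1 == 2, "login page returned 500"


env = LabEnvironment("web-lab-01")
bugs = []
run_test(env, "login", failing_login_test, bugs)
```

After the run, `bugs` holds one entry whose `environment_snapshot` points at the checkpoint taken at the moment of failure.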

Microsoft, HP very serious about SME's in EMEA

Vendors are gunning for diminutive businesses in the hope that as some build their small start-ups into beefy companies, the bosses will stay loyal to the tech they started out with. Just don't mention the economy, stupid. Microsoft and HP will offer US customers products, training and deployment services for storage, networking and something they have dubbed "server consolidation". The tech beasts claimed that the tie-in will help customers decrease costs, increase biz agility, improve data access and protection, and raise employee productivity. Which is a big ask. Meanwhile, Hyper-V and MS System Centre support has been added to the industry-standard HP Virtual Connect 10Gb Flex-10 module that works with the Flex-10 NIC found, handily enough for HP, in its sparkly new ProLiant BL495c virtualisation blade server. There's also an HP and Microsoft partner program, "Frontline", aimed at Europe, the Middle East and Africa. Microsoft claimed that it has already registered 60 per c

PanoLogic and AVNET to co-sell Pano Stuff!

Pano Virtual Desktop Solution (VDS) provides value-added resellers with an easy-to-deploy desktop virtualization solution MENLO PARK, Calif. – November 18, 2008 – Pano Logic™, a developer of a server-based desktop virtualization solution, today announced a distribution agreement with Avnet Technology Solutions, an operating group of Avnet, Inc. (NYSE: AVT). Under the agreement, Avnet will distribute the Pano Virtual Desktop Solution (VDS) to its value-added reseller partners in the U.S. and Canada, and provide delivery and installation support for partners through Avnet OneTech™ Services. Pano VDS provides partners with a unique desktop virtualization product that enables the desktop to be managed virtually by a customer's IT support team. Partners can pair Pano VDS with industry-leading servers, storage, software and services, also available through Avnet, to create complete end-to-end solutions to meet all of their customers' virtual desktop infrastructure needs.

CA unveils its Cloud strategy

CA EITM solutions address this need for virtualization management and deliver the following benefits, among others:

Improved Agility

* Provision and monitor resources in the cloud: CA Data Center Automation Manager helps consumers and providers of cloud services to deliver, scale, and manage dynamic computing resources on demand. The solution enables enterprises, infrastructure utility providers, business process outsourcers, and cloud computing providers to seamlessly provision and monitor cloud computing resources to allow for overflow capacity during peak demands, rapid policy-based response to business demands, more dynamic failover, and highly efficient infrastructure.
* Accelerate and automate virtualized data center provisioning: CA Data Center Automation Manager empowers IT organizations to integrate and automate virtual and physical server provisioning cycles. By automatically allocating resources in real time based on business policies, customers can accelerate the
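The "overflow capacity during peak demands" idea above boils down to a simple policy calculation: when on-premise utilization crosses a threshold, provision enough cloud capacity to bring it back under. The sketch below is a generic illustration of that policy logic; the threshold, unit size, and function name are assumptions for the sketch, not CA's actual API.

```python
# Generic policy-based "overflow to the cloud" sketch. Given current
# on-premise usage, decide how many cloud capacity units to provision so
# utilization drops back under the policy threshold during a demand peak.
import math


def cloud_units_needed(used, total, threshold=0.85, unit=10):
    """Return the number of cloud units (each of size `unit`) required to
    bring on-premise utilization back under `threshold`; 0 if already under."""
    utilization = used / total
    if utilization <= threshold:
        return 0
    # Capacity that must move off-premise to get back under the threshold.
    excess = used - threshold * total
    # Round up to whole cloud units.
    return math.ceil(excess / unit)
```

For example, with 100 units of on-premise capacity and an 85% threshold, a spike to 95 units in use would trigger provisioning of one 10-unit cloud block; a spike to 96 would trigger two.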

Veeam Configurator has Host Profiles ready!

Setting up and properly configuring your ESX and ESXi hosts isn't difficult. But over time, things can change. How do you know which hosts are still configured as intended, and which have "drifted"? Veeam Configurator helps ensure that your ESX server configuration complies with corporate policies and standards across your entire VMware Virtual Infrastructure 3 (VI3), boosting administrator productivity in host configuration and configuration management. Veeam Configurator 2.0 automatically discovers ESX and ESXi configurations across the enterprise and creates Veeam host profile templates. These templates can then be applied to groups of VMware hosts, and periodic scans can uncover inconsistencies and let administrators enforce the defined templates to ensure policy compliance. The templates can also be used to quickly provision a new ESX host or rebuild an existing one. With Veeam Configurator, administrators can quickly and easily
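The discover/compare/enforce workflow described above is, at its core, a diff between a golden template and each host's live settings. Here is a minimal generic sketch of that idea; the setting names are invented for illustration, and real ESX host profiles cover far more than two keys.

```python
# Minimal drift-detection sketch: compare a host's live configuration
# against a "host profile" template, report every setting that differs,
# and optionally enforce the template. Setting names are hypothetical.

def find_drift(template, host_config):
    """Return {setting: (expected, actual)} for every setting that differs
    from the template; settings missing from the host count as drifted."""
    return {
        key: (expected, host_config.get(key))
        for key, expected in template.items()
        if host_config.get(key) != expected
    }


def enforce(template, host_config):
    """Reset drifted settings back to the template values."""
    host_config.update(template)
    return host_config


template = {"ntp_server": "ntp.corp.local", "lockdown_mode": True}
host = {"ntp_server": "pool.ntp.org", "lockdown_mode": True}
drift = find_drift(template, host)
```

A periodic scan would simply run `find_drift` against every host in the group and flag any non-empty result; `enforce` is the "apply template" step.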

Pano Logic delivers first Windows-native VDI solution

New Pano VDS release delivers users a fully native Windows user experience without using RDP, providing optimized audio, video and native support for PC-compatible USB devices, purpose-built for virtualization MENLO PARK, Calif. – November 17, 2008 – Pano Logic™, developer of a server-based desktop virtualization solution, today announced Pano Virtual Desktop Solution (VDS) 2.5. The new release is the first desktop virtualization solution to optimize the Windows user experience without using the Microsoft Remote Desktop Protocol (RDP), eliminating the dependence on Microsoft Terminal Services technology and providing native session support for Windows applications, video and audio interfaces, and native USB device drivers. These new capabilities are available through Console Direct™, a new technology in Pano VDS 2.5 that plugs directly into the Windows operating system to deliver a fully native experience. In addition, new capabilities for Pano Manager improve IT's abi

Microsoft's CloudApps arrive: Exchange Online, Sharepoint online!

With more and more stuff going into the cloud, we will soon see on-site virtualization efforts slow down dramatically by next year. The companies that have been banking on selling on-site virtualization knowledge and software will soon see a decline in demand. With an ailing global economy threatening the very existence of several economies, such as Iceland and Estonia, you can forget about selling:

- Expensive on-site virtualization projects (unless they can align their consulting with a strategy of seamless migration of the firm to "some" interoperable cloud, be it Microsoft, Google or Amazon).
- As CloudApps emerge, shedding their older shelfware skin, it will be time to move into the next phase.
- The rest of the infrastructures that have not adopted any form of production virtualization will automatically default to the clouds. Why? They will be horrendously cheaper! (I know, security and all that, but in times like these, if it's cheap

Cloud Computing Echo: EMC attempts to woo consumers with Decho

In my upcoming keynote on Cloud Computing and Privacy, I will take you all down the road of privacy: how privacy has evolved through the centuries and where we stand today. I have repeatedly stressed the RCC, a Regulated Cloud with its Keynesian touch, offering acceptable levels of freedom with some stentorian control where necessary. Cloud security concerns are huge and out there in the open, and governmental interventionism has moved on to rescuing other parts of the industry, such as the auto business, with Opel and GM (GM has been Opel's parent since 1929) pleading for the state's help. Anyways, all I am saying is that, yes, there is a great opportunity, but there also lurks a massive danger. Anyways, here's the news: He was Microsoft's general manager of platform strategy but left earlier this year to join Maritz at Pi. At Microsoft he wrote a platform economics blog. In his review of Nicholas Carr's The Big Switch, Fitzgerald wrote: "Victory will go to those who best exploit both the clo