When choosing computer models, we set out to pick not the fastest, latest machines, but ones representative of what most people actually have. Certainly, faster models of these computers will perform even better.
Similarly, we focused more on XP simply because it is more prevalent at this point, but we also wanted an understanding of how Vista performed.
The baseline PC we used was a brand new Fujitsu Lifebook A6025 with a 1.86 GHz Intel Core Duo processor and 1 GB of RAM, running Windows XP SP2.
We chose three Mac models to compare alongside a name-brand PC: a MacBook, a MacBook Pro, and a Mac Pro.
The MacBook had 2 GB of RAM and a 1.83 GHz Core Duo processor. The MacBook Pro had 4 GB of RAM and a 2.16 GHz Core 2 Duo processor. And the Mac Pro had 4 GB of RAM in a quad-core configuration: two 2.66 GHz Dual-Core Intel Xeon processors.
We ran three kinds of tests: one-step task tests, multi-step task tests, and quantitative benchmarks.
The first set of tests focused on the time to complete one-step tasks in Microsoft Office 2007, Internet Explorer, and file/network I/O. A one-step task means that, from the human point of view, once you click the mouse or press a key, the test runs to completion without further human interaction: launching an application, scrolling a large document, printing, and so on.
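The one-step timing approach can be sketched in code. This is a minimal illustration of the general technique, not the tool used for the article's tests; the workload passed to the timer below is a hypothetical stand-in, since real tasks like application launch or printing require platform-specific hooks.

```python
# Minimal sketch of timing a "one-step" task: start the clock at the
# simulated click, stop when the operation completes with no further
# interaction, repeat several trials, and report the median to damp
# outliers such as disk-cache warm-up on the first run.
import statistics
import time

def time_one_step_task(task, trials=5):
    """Run `task` several times; return (median_seconds, all_timings)."""
    timings = []
    for _ in range(trials):
        start = time.perf_counter()   # the "mouse click"
        task()                        # runs to completion, hands-off
        timings.append(time.perf_counter() - start)
    return statistics.median(timings), timings

if __name__ == "__main__":
    # Hypothetical stand-in workload for a real one-step operation.
    median, timings = time_one_step_task(lambda: sum(range(1_000_000)))
    print(f"median: {median:.4f}s over {len(timings)} trials")
```

Reporting the median rather than a single run matters in practice, since the first trial often pays one-time costs (caches, lazy loading) that a typical user would not see on every click.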
The second set of tests was "task tests". These were primarily cross-platform and required multiple steps from beginning to end, focusing on the interaction between Mac OS X and the virtualization environment. For example: receiving a PDF attachment in Outlook and opening it in Mac OS X Preview, or clicking an email link in Safari to create a new email message in Outlook.
The third set of tests was quantitative benchmarks using a utility to measure CPU, graphics, disk, and so on. In our case, we experimented with Sandra 2007 benchmarks, but found the results to be erroneous and unusable in the virtualization environments. We threw out these results and focused on the first two sets.
For those interested in the benchmarking methodologies, see the more detailed testing information in Appendix A. For the detailed results of the tests used for the analysis, see Appendix B. Both appendices are available on the MacTech web site.