Multiple servers vs 1 big server performance

My development team has proposed a server structure for an upcoming project. The structure is "logical", meaning that the various logical components of the application (it is a distributed one) run on different servers. Some components are more critical than others and will be subjected to more load.

Our proposal was to have one server per component, but the hardware guys suggested replacing the various machines with a single, bigger one running virtual servers. They're going to use blade servers.

Now, I'm not an expert at all, but my question to them was: if we need, for example, three 2 GHz CPU / 2 GB RAM machines and you give me one machine with three 2 GHz CPUs and 6 GB of RAM, is that the same thing? They told me it is.

Is this accurate? What are the advantages and disadvantages of each solution? What are the generally accepted best practices? Could you point me to some references dealing with this problem?


Some more info: the (internet/intranet) application is already layered. We have some servers in the DMZ that expose pages to the internet, and the databases are on their own machines. What we want to split (and they want to join) are some web servers that mainly expose web services: one is a DAL that communicates with the database layer, one is our Single Sign-On / user profile application that gets called once per page, and one is a clone of the internet-facing site to be used on our LAN.

Best Answer

Given that their requirements sound a bit "woolly" and are actually quite low, I'd be strongly tempted to virtualise this. I'd start with just two blades and some shared storage; then you can create, modify, and delete VMs as required. You'll lose very little performance and gain a huge degree of flexibility, plus you can scale out linearly and with no user impact.
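To make the asker's "is it the same?" question concrete: under virtualisation you can cap each VM to mirror one of the original per-machine specs. As a sketch only — the answer names no hypervisor, so KVM/libvirt here is an assumption, and the domain name `dal-ws` is made up for illustration — one of the three web-service VMs might be defined like this:

```xml
<!-- Hypothetical libvirt domain definition for one of the three VMs -->
<domain type='kvm'>
  <name>dal-ws</name>                <!-- illustrative name: the DAL web-service VM -->
  <memory unit='GiB'>2</memory>      <!-- 2 GB RAM, matching the original per-machine spec -->
  <vcpu placement='static'>1</vcpu>  <!-- one vCPU, pinned below to a dedicated core -->
  <cputune>
    <vcpupin vcpu='0' cpuset='1'/>   <!-- pin to physical core 1 to limit contention -->
  </cputune>
</domain>
```

One caveat the "it is the same" claim glosses over: three VMs on one host still share memory bandwidth, disk I/O, and NICs, so pinning CPUs and capping RAM makes the CPU/RAM envelope roughly equivalent, but a shared I/O path can still become the bottleneck under load.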
