
Blades vs. standard racked servers for virtualization

There are some battles that are not worth fighting. Blades vs. standard architecture servers is one of those battles, yet it rages on.
Written by Ken Hess, Contributor

I've seen some debate about blade servers versus standard architecture server systems for virtualization, but there are no definitive answers from any of the so-called experts. I can see the arguments on both sides, and as someone who works with both architectures, I know there are significant pros and cons to each technology. I think the question deserves an unbiased analysis of the two technologies. And, before you decide or believe that I'm going to try to sway your opinion, I don't think that one has clear superiority over the other. They're different.

Everyone has his preferences when it comes to operating systems, hardware and software but it seems that the industry trend is galloping headlong into blade server-filled data centers. Many industry watchers question this practice while others laud the paradigm shift from standard architecture hardware toward blades.

Blades

Blades have a real density advantage and, with rack space at a premium, you can see why higher density is a good thing. Many pundits cite centralized management as a tick in the blade column's plus side. It's true that blades do have centralized management. From a single interface, you can see and work with all of your blades. You can create server profiles, add network and SAN connectivity, set up VLANs, manage power and generally work with your systems in a very powerful way. You can also connect to your blade management system via SSH (Secure Shell) and issue keyboard-interactive commands to your systems. For example, you can issue a reset command to a single blade that has virtually the same effect as physically reseating it.
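
As a minimal sketch of what that SSH access makes possible, here's how you might script a blade reset with Python's Paramiko library. The hostname, credentials and the exact "reset server" command string are hypothetical; every vendor's management CLI has its own syntax, so check your chassis documentation.

```python
# A minimal sketch of scripting a blade reset over SSH with the Paramiko
# library. The hostname, credentials and the "reset server" command string
# are hypothetical -- every vendor's management CLI has its own syntax.
import paramiko

def reset_blade(mgmt_host: str, user: str, password: str, bay: int) -> str:
    """Connect to the chassis management module and reset one blade bay."""
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(mgmt_host, username=user, password=password)
    try:
        # Illustrative command only; consult your chassis CLI reference.
        _stdin, stdout, _stderr = client.exec_command(f"reset server {bay}")
        return stdout.read().decode()
    finally:
        client.close()

if __name__ == "__main__":
    print(reset_blade("blade-mgmt.example.com", "admin", "secret", bay=3))
```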

These are some very big blade server advantages.

On the downside, blades are expensive. Really expensive. They also require special power connections. You're also locked into a single vendor when you buy a chassis or enclosure. You can't buy a Sun blade, an HP blade and an IBM blade and expect to plug them into the same enclosure. Hopefully, some third-party company will see a need for this type of interoperability and create an enclosure that can house any blade mash-up. Blades also have an interesting quirk: You can't add or remove network or SAN connectivity with the blade powered on. That's a big negative.

Contrary to popular belief (probably held by those who write about this stuff but have never actually touched a blade or worked in a data center), blades aren't plug-n-play. Their setup is very manual. In fact, I can set up a standard architecture system far faster than I can set up a blade. Further, I can bring a standard server to Production Ready status faster than I can the equivalent blade. Why? Blade setup is more complex and requires that you power off the blade for certain tasks. Once I've configured a blade and it's operational, though, it's pretty much the same animal as a standard server system.

Though blade system density is higher, you'll often see a maximum of two enclosures per rack because of the weight of a full enclosure. Two full enclosures can weigh in at over 900 pounds. That's a lot of stress in a small area. So, the density advantage is offset by the inability to stack four enclosures into a single rack.
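
To put rough numbers on that trade-off, here's a back-of-the-envelope sketch. Every figure in it (a 42U rack, 10U enclosures holding 16 half-height blades, 1U standard servers) is an illustrative assumption, not any vendor's spec.

```python
# Back-of-the-envelope rack density comparison. Every figure here is an
# illustrative assumption, not a vendor specification.
RACK_UNITS = 42               # typical full-height rack
ENCLOSURE_UNITS = 10          # assumed height of one blade enclosure
BLADES_PER_ENCLOSURE = 16     # assumed half-height blade count
MAX_ENCLOSURES_BY_WEIGHT = 2  # the practical limit cited above

# What rack space alone would allow vs. what the weight limit allows
space_limited_blades = (RACK_UNITS // ENCLOSURE_UNITS) * BLADES_PER_ENCLOSURE
weight_limited_blades = MAX_ENCLOSURES_BY_WEIGHT * BLADES_PER_ENCLOSURE

# A rack filled with 1U standard servers for comparison
standard_servers = RACK_UNITS

print(f"Blades, space-limited:  {space_limited_blades}")   # 64
print(f"Blades, weight-limited: {weight_limited_blades}")  # 32
print(f"1U standard servers:    {standard_servers}")       # 42
```

Under these assumed numbers, the weight limit drops blade density below that of a rack full of plain 1U servers, which is exactly the offset described above.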

Blades can also be a little quirky. That is to say that they're a little flakier than standard systems. You'll find yourself having to update firmware more often, reseat individual systems fairly often and deal with some downtime if you need to make what most would consider to be minor changes.

Standard Systems

By standard system, I mean a 19" rack-mounted 1U, 2U, 4U, etc. server with its own distinct power connections, network connections, local disks and HBA (SAN) cards that plugs into a KVM or console server. Some of the advantages of a standard system are the disadvantages of a blade system and vice versa. Standard systems have hot-pluggable components such as network, disk and power, so there's no need to power down a system to make those kinds of changes. Having a local DVD drive is also a big advantage. I like having the ability to pop in a DVD, install software or an OS and go on my way. Sure, you can do this remotely via iLO (HP's Integrated Lights-Out management), but there are some reliability problems that you just don't experience when you kick it old school.

Standard servers are independent, which means that I can insert them into any 19" rack anywhere. I don't need any special wiring, power or enclosure. If I need to, I can put the server on the data center floor and make it work--no rack required. Standard architecture systems are also cheaper--or, I should say, more affordable--than their blade counterparts.

The bummer about rack systems is that they're cumbersome to work with. If you have a full rack of systems, it's very difficult to maneuver: remove a top cover, unplug everything, slide out the system, do your work and then reverse the process. It's a lot of walking from the back of the rack to the front of the rack, which in a large data center can give you your cardio exercise for a week*.

Standard architecture systems also generate a lot of unnecessary heat and pull a lot of power compared to blade systems. Although they're getting better, rack systems need a lot of cooling and high volumes of air flow to keep their components at a good working temperature--even when idling, they generate a lot of heat. SSDs are helping to cure some of the heat problem as are low voltage CPUs but there's still work to be done in energy efficiency.

A Similar Argument: Physical vs. Virtual

It's kind of funny to have an argument like this at such a late date, but I still hear the physical versus virtual argument among some of my colleagues. From my experience, if it can be virtualized, then it should be virtualized. There are very few workloads that can't be virtualized. But those few exceptions don't discount physical systems as a viable option.

The same goes for blades versus standard systems. They perform the same functions. They support the same operating systems. They have their advantages. And, they have their disadvantages. There's no single correct answer for every situation.

However, there are some right answers:

Use blade systems if:

  • You can afford them and are set up for them.
  • You need higher density and can support the associated weight.
  • You have skilled, trained blade administrators.
  • You don't mind dealing with their quirks.
  • You're OK with vendor lock-in.

Use standard architecture systems if:

  • You're on a tighter budget.
  • You don't want vendor lock-in.
  • You want rapid deployment.
  • You don't mind standard (old school) management procedures.
  • You have sufficient cooling, power and air flow to accommodate standard systems.

And, trust me, no data center has just one architecture in it. Most large data centers have every kind of system imaginable, so don't be afraid to mix and match yours. Remember, just like physical or virtual, there are advantages and disadvantages to both. Don't let anyone tell you that there's only one right answer or a single ultimate architecture for your applications, systems or data centers.

*Granted, most of us could use it. Myself included.
