VDI and its future

By Matthias, January 15, 2010 19:13

Today I had a very interesting talk (as always!) with Dr. Bernhard Tritsch.

We were discussing Cloud Computing and Virtualization, and came to the conclusion that, because of the massive speed bump of CPUs (versus the much smaller bump in the speed that software actually needs), because memory is getting cheaper and cheaper, because PDAs are getting better and better (power saving, more CPU cycles, better screen resolutions), and finally because hypervisor technology keeps improving as well, we will see far more client virtualization than VDI-like solutions based in datacenters. With that will also come the death of the “Cloud” as we know it today. Currently there are discussions about whether “Private Cloud” should better be called “Internal Cloud” (I agree!), and about what the real “features” of Cloud Computing are (besides those we already know from traditional IT evolution).

So, what are the drivers and technologies that might enable such a change?

Classical VDI solutions were, and still are, introduced for a couple of reasons:

  1. Save on people resources for system management
  2. Save energy, reduce consumption
  3. Have greater flexibility in moving to newer hardware
  4. Have a better utilization of systems
  5. And many more

Each of these needs to be re-examined to see whether it still holds up as a reason for doing VDI. Let’s look at them in detail:

Save on people resources for system management

This requirement was translated into re-centralization and consolidation, because the systems management software of that time was not capable of distributed and efficient system provisioning and management. This is changing rapidly, supported by virtualization: virtualization provides a “unique” or “unified” platform, and “special configurations” therefore tend to vanish.

Save energy, reduce consumption

In order to achieve this, the idea was to get rid of desktop systems that only worked 8 hours a day, but consumed energy over 24 hours, and were way oversized for the ordinary task at hand. But if you look at the power consumption of, for example, the Intel Atom series, combined with the ability to physically (but software-controlled) switch off unused parts of the system, even down to individual CPU cores, most of these systems nowadays use even less power than the so-called thin clients used in VDI infrastructures.
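To make the “switch off unused parts” point concrete: on Linux, individual CPU cores can be taken offline at runtime through the CPU-hotplug interface in sysfs. A minimal sketch, assuming a Linux kernel with CPU hotplug support and root privileges (core numbering is machine-specific, and cpu0 usually cannot be offlined):

```python
import time

def set_cpu_online(cpu: int, online: bool) -> None:
    """Toggle a CPU core via the Linux CPU-hotplug sysfs interface (root required)."""
    with open(f"/sys/devices/system/cpu/cpu{cpu}/online", "w") as f:
        f.write("1" if online else "0")

set_cpu_online(1, False)   # take core 1 offline while the machine is idle
time.sleep(60)             # ... idle period ...
set_cpu_online(1, True)    # bring it back when the workload returns
```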

Have greater flexibility in moving to newer hardware

This requirement is now solved by hypervisors, which provide exactly the same hardware abstraction regardless of the underlying hardware. And with the trend of putting the hypervisor directly into the BIOS of the system, virtualization will become even more widely adopted.
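As an illustration of that stable hardware abstraction: with the libvirt Python bindings, the guest’s “hardware” (vCPU count, memory size, disk bus, NIC model) is declared in the domain definition itself, so every host that applies the same definition presents the same virtual machine to the guest. A minimal sketch; the domain name, paths, and sizes are made-up placeholders:

```python
import libvirt  # Python bindings for the libvirt virtualization API

# The virtual hardware is fixed by this definition, not by the physical
# machine: any host applying it presents the same vCPUs, memory, virtio
# disk, and virtio NIC to the guest. All names and paths are placeholders.
DOMAIN_XML = """
<domain type='kvm'>
  <name>my-desktop</name>
  <memory unit='MiB'>2048</memory>
  <vcpu>2</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
  <devices>
    <disk type='file' device='disk'>
      <source file='/var/lib/libvirt/images/my-desktop.img'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='default'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
"""

conn = libvirt.open("qemu:///system")
dom = conn.defineXML(DOMAIN_XML)  # register the identical definition on any host
dom.create()                      # boot the guest with the same virtual hardware
conn.close()
```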

Have a better utilization of systems

This is driven by two main aspects: First, most systems are very expensive, and you therefore get a better TCO or ROI if you really use them to their full capability. Second, it was also perceived to help with the first aspect on this list: putting more tasks onto a single system reduces the overall number of systems that need to be managed. But: because CPU speed bumps come way faster than software needs them, the cost of acquisition (CAPEX) goes down from generation to generation of those systems. And, combined with the fact that newer systems are capable of switching off unused parts during operation, this also reduces the operating expenses (OPEX), at least on the energy side.
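A back-of-the-envelope sketch of that CAPEX/OPEX argument; every number below is a made-up placeholder, chosen purely to illustrate the mechanics of the comparison:

```python
HOURS_PER_YEAR = 24 * 365
ENERGY_PRICE = 0.20  # assumed price per kWh (placeholder)

def yearly_cost(capex: float, lifetime_years: int,
                watts: float, duty_cycle: float) -> float:
    """Amortized acquisition cost per year plus energy cost for the active share of the year."""
    energy_kwh = watts / 1000 * HOURS_PER_YEAR * duty_cycle
    return capex / lifetime_years + energy_kwh * ENERGY_PRICE

# A classic desktop drawing full power around the clock ...
old_desktop = yearly_cost(capex=800, lifetime_years=4, watts=120, duty_cycle=1.0)
# ... versus a newer low-power client that switches off unused parts
# outside the working day (modeled here as a reduced duty cycle).
new_client = yearly_cost(capex=400, lifetime_years=4, watts=25, duty_cycle=0.4)

print(f"classic desktop: {old_desktop:.0f} per year")
print(f"low-power client: {new_client:.0f} per year")
```

With these invented numbers, both the amortized CAPEX and the energy OPEX shrink for the newer system, which is exactly the effect described above.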

And many more

So, all of this combined leads to the question of whether those solutions to the original problems still apply, or will still apply, going into the future.

If we now combine this with technologies like “system transportation” or “live migration” (from the end-user’s perspective, even a cold migration might be sufficient), the picture changes. Imagine moving my desktop (not only the “GUI”, but the “whole system”, by using its “image”) onto a PDA when I leave the office, because the PDA runs a hypervisor and is capable of running a complete VMware image, for example; and transferring it back to the system on my desk when I am at home or in the office (all these PDAs have plenty of memory and WLAN). Why, then, should I host that image on a centralized hypervisor in a big datacenter?
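A hedged sketch of such a cold migration, using the virsh CLI (KVM/libvirt here rather than VMware; the hostnames, domain name, and paths are made-up placeholders, and a real move would also have to ship or share the disk image):

```python
import subprocess

DOMAIN = "my-desktop"
STATE_FILE = "/tmp/my-desktop.state"
TARGET = "pda.example.org"  # the hypervisor-capable PDA, reachable over WLAN

def run(cmd: list) -> None:
    subprocess.run(cmd, check=True)

# 1. Freeze the running desktop and dump its memory state to a file.
run(["virsh", "save", DOMAIN, STATE_FILE])
# 2. Ship the state file to the PDA (the disk image must be there too).
run(["scp", STATE_FILE, f"{TARGET}:{STATE_FILE}"])
# 3. Resume the very same desktop on the PDA's hypervisor.
run(["ssh", TARGET, "virsh", "restore", STATE_FILE])
```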

Let’s transfer all of this onto Cloud Computing. Most of the arguments above also apply to the hopes and expectations behind the current trend to “cloudify” things. Most of the underlying technologies can be seen as an evolution of current technology, and therefore need to be put into context. Read http://blogs.sun.com/IFR_in_Clouds/ for some more ideas around this topic.

So, thanks to Benny for a vivid and entertaining discussion this morning!

Matthias
