Open? OpenStorage

By , June 6, 2006 09:40

Folks,

this was set up a long time ago, but it is still not widely known… ;-) So I need to remind you:


Open Storage Program for SunCluster (OSP)

Around Christmas 2001 (yes, really, that long ago), I asked a simple question on a Sun-internal technical (not sales!) alias dealing with SunCluster. The question was something like:


How many more Sun servers could you sell if we simply said “YES” instead of “NO” to the question of using SunCluster with non-Sun storage?

Within a week, I got lists of opportunities adding up to a three-digit million-dollar number (and remember: servers were more expensive in those days).

That simple question started the OSP. Today the OSP covers nearly every major storage vendor, so SunCluster can be used in conjunction with nearly every conceivable storage solution.

And, what’s more: these configurations have higher value, because all involved parties have done the testing and certification, and they support these configurations.

So, again: Just check:


Open Storage Program for SunCluster (OSP)

Provisioning: Sun Grid, Outsourcing and Outtasking…

By , June 6, 2006 06:05

Folks,

continuing the last entry on provisioning: there are also different currents in the field.

Some time ago I visited the IT service provider of a large bank and insurance company. Their idea is to outtask everything that does not add value and does not contain their IP. The interface to the delivering partners is to be handled via agreed-upon blueprints. So the pieces delivered to the datacenter will be preconfigured systems, including software from different parties (up the stack from the OS through the middleware to the app-server tier), configured to the specifications of the IT service provider.

These blueprints shall be defined and updated on a regular basis by the IT service provider and the delivering parties.
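To make “blueprint” concrete: at its core it is a structured description of a complete system stack. Here is a minimal sketch in Python; every name, product string, and version is hypothetical and purely for illustration:

```python
# A hypothetical blueprint: a machine-readable description of one
# preconfigured system, from the OS up to the app-server tier.
blueprint = {
    "name": "appserver-node-v12",  # hypothetical blueprint name
    "os": {"product": "Solaris", "release": "10"},
    "middleware": [
        {"product": "directory-server", "version": "5.2"},
    ],
    "appserver": {"product": "application-server", "version": "8.1"},
    # Site-specific settings agreed upon between the IT service
    # provider and the delivering parties.
    "config": {"timezone": "Europe/Berlin", "loghost": "log01"},
}
```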

So far, this sounds highly reasonable, and it has been best practice for a couple of years (across the whole industry, not only at this customer).

Still, I see a couple of problems with this approach:

  • The delivering party gains considerable control over what the IT service provider can do, because the delivering party can simply state: “Cannot be done.”
  • The process of defining the blueprints is repetitive, and every single repetition gets more and more boring. And it could be automated, too… ;-)
  • It does not reflect the current market trends (if you believe our Sun Grid vision): these blueprints define the building blocks of an internal grid, but the process of putting the IP-laden apps onto these building blocks is done partially by the delivering party and partially by the IT service provider (and is not part of the blueprint). That does not give a consistent, exchangeable, replaceable, repeatable way of operation.

If we look at the trends, I would recommend a different approach (one that other parties already started on years ago; for example, I was hired as a technical consultant by a different bank to help them keep their RCM (Reliable Configuration Management) system up to date, and that RCM is still in place and works perfectly for this customer even after nearly a decade of operation):

  • For the hardware, use so-called staging centers. These are facilities of the delivering party where all components are pre-assembled, put into racks, and loaded with the delivering party’s software (mainly the OS).
  • If you want more, you can replace these by getting fully configured application environments, using Sun’s DCRI (Data Center Reference Implementation) with the accompanying SRDCRI (Sun Ready DCRI) services. Here, complete environments are preconfigured, including custom apps.

These two solutions underline the classical approach of outsourcing/outtasking, but they still carry the risk of being “not flexible enough” or subject to “vendor influence”. Still, they add a lot of value: they take away the risk of doing everything yourself, and the vendors support these setups. And since the vendors do this often, the benefit from automation pays off; that is, it is cheaper than doing it yourself.

If you want more control or more flexibility, you need to take the wheel into your own hands and might consider provisioning. This can be additive to the above, or it can be a replacement (we might offer DCRI-capable modules for N1 SPS, for example; JET, our utility for augmenting the JumpStart network installation method, is already part of N1 SPS, so your investments there are preserved and can be re-used).

This way you achieve the following:

  • Control of the definition process, because provisioning can be seen as a tool that maps definitions onto hardware. The blueprints then exist in machine-readable and executable form, because they are part of the provisioning system (see the sketch after this list).
  • Flexibility, because you can easily replace the underlying hardware: the upper-level configurations and provisioning tasks do not change (putting an EAR into an app server is an app-server task and has no hardware or OS specifics to it).
  • And, last but not least, you set yourself up for what is coming with offerings like the Sun Grid, because you surround your processes with tools. The target of the provisioning tasks can also be changed easily, so whether it is a landscape of servers in your datacenter or a landscape of servers on the Sun Grid is irrelevant to the provisioning itself.
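To illustrate the last two points, here is a small sketch that continues the hypothetical blueprint from above. The Target class and its run() method are placeholders, not any real N1 SPS interface; they only show how a machine-readable blueprint is mapped onto interchangeable targets:

```python
class Target:
    """A provisioning target: a server in your datacenter or a node
    on the Sun Grid. Only the name differs; the tasks do not.
    (Hypothetical placeholder, not a real N1 SPS interface.)"""

    def __init__(self, hostname):
        self.hostname = hostname

    def run(self, step):
        # A real provisioning system would drive the installer,
        # package tooling, or app-server deploy tool here.
        print(f"[{self.hostname}] {step}")


def provision(blueprint, target):
    """Map the machine-readable blueprint onto one target."""
    os_spec = blueprint["os"]
    target.run(f"install {os_spec['product']} {os_spec['release']}")
    for pkg in blueprint["middleware"]:
        target.run(f"install {pkg['product']} {pkg['version']}")
    app = blueprint["appserver"]
    target.run(f"install {app['product']} {app['version']}")
    target.run(f"apply config {blueprint['config']}")
    # Deploying an EAR is an app-server task with no hardware or
    # OS specifics, so it is identical on every target.
    target.run("deploy shop.ear")  # hypothetical application archive


# Datacenter or grid: irrelevant to the provisioning itself.
for host in (Target("dc-node-01"), Target("grid-node-17")):
    provision(blueprint, host)
```

The point is not the toy code but the shape: the blueprint is data, the provisioning tasks are tools, and the target is just a parameter.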

I hope I have given some insight into the thinking we at Sun (at least I myself) have with respect to what provisioning is meant to achieve, and that these small snippets help you put all the market hype around provisioning into a context that can be used in your environment. And if you start considering re-thinking former decisions and aligning your IT processes to the ever-changing world, feel free to get in contact with me.

Provisioning should not force you to adapt to a specific vendor’s ideas. It needs to be able to adapt to your needs, and still be capable of adapting to market-driven demands (thereby enabling you to benefit from the things that are coming).

And to answer one of the comments/questions on my last entry: a sandbox is not as easy to define as “only in development” or “only a small project, but covering the complete lifecycle”.

The reason for this not-so-easy answer is simple: every customer, every prospect, every project is different. The most significant indicator of success is: the fewer interfaces a project has, the easier it is to use as a sandbox. So if you have a project with a well-defined scope that does not influence other departments, such a project can be used as a sandbox. If the people on the customer side involved in that project like it and subscribe to the benefits, this becomes an injection point within the customer, because they will do the word-of-mouth kind of internal advertising. This is even more the case if the project has high external visibility (for example, lifecycle management of a web-server infrastructure, where provisioning can start out being used as a CMS (content management system)).

But most important is: you need the buy-in of the people involved, because in the end they have to maintain the system. And as the introduction of provisioning aims at simplifying processes, it is imperative that the people who are to use these systems “love” them.

Makes sense?
