Category: General

Support? What’s that?

By , June 20, 2006 08:41

All of us in IT have a phrase that creates lots of headaches:



Not supported!

Still, no one really knows what this implies or means. Or rather: everybody has their own interpretation of the term. So let's try to define what the term "supported" means, what it implies, and what we all really want. Because clarification helps communication, and that leads to more relaxed conversations.

Support, as defined (today) in Wikipedia:


Support may refer to the following:

  • Support (mathematics)
  • Support (mobile framework), in mobile computing
  • Support (technical analysis), in security trading
  • Military combat support (see combat engineers, anti-tank, artillery)
  • Military service support (see combat medic, military intelligence, military logistics)
  • Sympathy (emotional support)
  • Supports in engineering and construction include arch, beam (structure), column, balcony


From that list, we can see that the meaning we have in mind isn't even mentioned!

Let’s start with mathematics (because that is the only exact science, right?):


In mathematics, the support of a real-valued function f on a set X is sometimes defined as the subset of X on which f is nonzero. The most common situation occurs when X is a topological space (such as the real line) and f is a continuous function. In this case, the support of f is defined as the smallest closed subset of X outside of which f is zero. The topological support is the closure of the set-theoretic support.

In particular, in probability theory, the support of a probability distribution is the closure of the set of possible values of a random variable having that distribution.

Now we know even less than when we started, right? So let's try to find a different one:

Webster states:


Entry Word: support
Function: noun
Text: 1 something that holds up or serves as a foundation for something else (if you don’t add a couple more supports to that tower of blocks, it’s going to fall down)
Synonyms brace, bulwark, buttress, mount, mounting, shore, stay, underpinning
Related Words column, pedestal, pilaster, pillar; arch, bracket, cantilever; crutch, mainstay, peg, post, stake, stanchion, stand, stilt, truss; base, foundation, frame
2 an act or instance of helping (the team’s victory owes a lot to Joe’s strong support in left field) — see HELP 1
3 something or someone to which one looks for support (Grandfather has long been the extended family’s emotional and financial support in times of trouble) — see DEPENDENCE 2

Entry Word: support
Function: verb
Text: 1 to promote the interests or cause of (my parents support the local schools both by volunteering and by fiercely opposing funding cuts at town meetings)
Synonyms advocate, back, champion, endorse (also indorse), patronize
Related Words adopt, embrace, espouse; abet, aid, assist, prop (up), second; bolster, boost, buttress, reinforce; bail out, deliver, rescue, save
Phrases stand up for
Near Antonyms baffle, foil, frustrate, interfere, oppose, sabotage, thwart; desert, disappoint, fail, let down
2 to pay the living expenses of (a young widow supporting a sick mother as well as two small children on a teacher’s salary)
Synonyms maintain, provide (for)
Related Words finance, fund, stake
Phrases foot the bills for, take care of
3 to hold up or serve as a foundation for (pillars supporting the bridge)
Synonyms bear, bolster, brace, buttress, carry, prop (up), shore (up), stay, underpin, uphold
Related Words steady, truss, underlie
4 to continue to declare to be true or proper despite opposition or objections (we support the students’ right to speak out on local issues that affect them) — see MAINTAIN 2
5 to give evidence or testimony to the truth or factualness of (her grades don’t support her claim that her after-school job isn’t affecting her grades) — see CONFIRM
6 to provide (someone) with what is useful or necessary to achieve an end (sent reinforcements to support the troops already in the thick of battle) — see HELP 1
7 to put up with (something painful or difficult) (he could never support the thought of having to go on living without his beloved wife at his side) — see BEAR 2

So, just for a starter, let's use definition 6 of Webster's verb:



to provide (someone) with what is useful or necessary to achieve an end (sent reinforcements to support the troops already in the thick of battle) — see HELP

So, it boils down to "help". Could it really be that simple? No, it is not, because there are a few constraints attached to that:

  • In order to help, you need control
  • In order to help, you need resources

In IT speak, this means: you need to "own" the stuff, otherwise you cannot support it. And you need a contract, because otherwise you cannot get resources.

The most critical part is the "control" part, because this is where most misunderstandings occur.

Some (mostly software) companies claim support if they merely allow usage with something they do not control. That's clear and fine, because otherwise they would never achieve broad adoption (example: Microsoft Windows). Other, more system-oriented companies claim support only if they control the complete stack (example: Apple Mac OS X runs only on Apple hardware; they even considered adding hardware to PREVENT misuse).

Still, for the average end-user these differences are not apparent, because everyone uses the same word: "support".

So, we should all be more precise in USING that term. It might be better to replace the phrase appropriately with things like:

  • will (not) work
  • (not) allowed
  • might (not) work
  • (not) tested
  • (not) certified

And always state which pieces will be covered by a support contract, because those are the only pieces that are "owned" and can therefore be "patched" (which opens a different can of worms) or maintained.

With that: Floor open for discussion!

Consolidation, a challenge!

By , June 19, 2006 04:55

Last week I was talking to a customer. He has a big problem.

The topic of the meeting was consolidation, and benefiting from Oracle's licensing with respect to Sun. So far the meeting went well; then, during a short break, the customer described his problem and asked for solutions to the following:

In his datacenter, and under the desks in the offices, he keeps finding (don't we all?) small systems that have been running for years, some even for decades, doing really important work. These systems have never been part of the "real" datacenter, but they cannot be replaced, because they are vital parts of the overall environment.

Now, he faces the following problem:

These systems, as stated, are old; some are still Intel 386 boxes (mostly running Linux apps!). These systems are not fully loaded, running at maybe 20-30% utilization (which, as we all know, is about average even today!). So replacing these systems one-by-one with current boxes would not really improve utilization, because the new systems would then run in the below-1% range. That's a waste of resources, regardless of the fact that such systems cost well below $1000 today.

So he is looking for a solution to consolidate these systems, with a proper way of accounting for the different workloads placed onto them.

I simply stated that we can do that, and he agreed to a follow-up meeting later this summer. Still, I'd like to outline what we will be proposing:

  • With Solaris 10 we have Zones/Containers.
  • These can be put under the control of the Solaris Resource Manager and the Fair Share Scheduler.
  • With Solaris accounting tools, augmented by tools like TeamQuest, there are really flexible ways to generate accounting infos that can be processed by standard billing systems.
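For illustration, here is a minimal sketch of what such a consolidation target could look like: a zonecfg command file that creates a blank zone for one of those legacy workloads and assigns it CPU shares under the Fair Share Scheduler. The zone name, path, and share count are hypothetical examples, not from the customer case.

```
# Hypothetical zonecfg command file, e.g. legacy-app.cfg
# (zone path and share count are illustrative assumptions)
create -b
set zonepath=/zones/legacy-app
set autoboot=true
# Entitle this workload to 10 FSS shares; the accounting tools can
# later bill against this entitlement.
add rctl
  set name=zone.cpu-shares
  add value (priv=privileged,limit=10,action=none)
end
commit
```

This would be fed to `zonecfg -z legacy-app -f legacy-app.cfg`, followed by `zoneadm -z legacy-app install` and `boot`; FSS itself is made the default scheduler with `dispadmin -d FSS`.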

So, bear in mind: consolidation does not always mean putting smaller systems onto big systems. It can also mean making small systems even smaller!

With that, I leave you with my initial answer to this customer:

“I do have an answer, but the initial statement might not be, what you expect:

The problem is not the technical solution; the problem is the acceptance of 'partitioning' open systems. In the mainframe world, no one really worries or even thinks about "sharing" pieces of the OS or the hardware with a different business unit. In the open-systems world, no one today really trusts this. With the certification of Solaris 10 as trusted, we at least have the necessary technology and approvals to do just that."

So, it’s up to you to start consolidating all your “long-forgotten” iron… ;-)

Happy Birthday, OpenSolaris!

By , June 14, 2006 12:45

OpenSolaris 1 Year Anniversary

What shall I say?

HAPPY BIRTHDAY, OpenSolaris!

I've now been working at Sun for more than 8 years, and have ALWAYS used Solaris/x86 on my laptops. Starting with Solaris 2.5.1, quickly updated to Solaris 2.6, on a Toshiba 480 CDT (still running on it!); then Solaris 2.6, 7, 8 AND 10 on my Toshiba Libretto 110CT (yes, I have Solaris 10 03/05 up and running on that small box, with only 64 MB of RAM and only 233 MHz! When I find more time, I will report here how I did it. Thanks, Casper!). Then Solaris 8, 9, and 10 on a Dell Inspiron 5000, and now Solaris 10 and OpenSolaris (dual boot) on my Toshiba Tecra M2.

Before I joined Sun I also used Solaris/x86 on a couple of servers at the company I was working for. So, honestly, Solaris was one of the reasons I joined Sun.

During all these years I bought several copies of SuSE Linux, but never really used them. I found Linux too much of a hassle to patch and keep up-to-date. So SuSE got "money for nothing" from me… ;-)

Although it was sometimes difficult to get all my work done under Solaris, today I don't really miss anything (OK, I have to admit, we all MISS Adobe's Acrobat Reader for Solaris/x86! Johnny L., fix that!). That is easy to see from the messages I get when, just for fun, I sometimes boot the laptop into Windows: I always get the "your virus protection definition file has been out of date for more than 2 weeks, please update" nag…

So, please all, let’s get up, and sing:

Happy Birthday to You, OpenSolaris!


Open? OpenStorage

By , June 6, 2006 09:40

Folks,

this started a long time ago, but is still not widely known… ;-) So I need to remind you:


Open Storage Program for SunCluster (OSP)

Around Christmas 2001 (yes, really, that long ago), I asked a simple question on a Sun-internal technical (not sales!) alias dealing with SunCluster. The question was something like:


How many more Sun servers could you sell if we simply said "YES" instead of "NO" to the question of SunCluster in combination with non-Sun storage?

Within a week, I received listings of opportunities summing up to a three-digit million-dollar number (and remember: servers were more expensive in those days).

That question simply started the OSP. Today the OSP covers nearly every major storage vendor, so SunCluster can be used in conjunction with nearly every conceivable storage solution.

And what's more: these configs have higher value, because all involved parties have done the testing and certification, and do support these configs.

So, again: Just check:


Open Storage Program for SunCluster (OSP)

Provisioning: Sun Grid, Outsourcing and Outtasking…

By , June 6, 2006 06:05

Folks,

continuing the last entry on provisioning, there are also different currents in the field.

Some time ago I visited the IT service provider of a large bank and insurance company. Their idea is to outtask everything that does not add value and does not contain their IP. The interface to the delivering partners is to be handled via agreed-upon blueprints. The pieces delivered to the datacenter will be preconfigured systems, including software from different parties (up the stack from the OS through the middleware to the app-server tier), configured to the IT service provider's specifications.

These blueprints shall be defined and updated on a regular basis by the IT service provider and the delivering parties.

So far, this sounds highly reasonable, and has been best-practice for a couple of years (even across the whole industry, not only this customer).

Still, I see a couple of problems with this approach:

  • The delivering party gains considerable control over what the IT service provider can do, because the delivering party can simply state: "Cannot be done."
  • The process of defining the blueprints is repetitive, and every single repetition gets more and more boring. It could also be automated… ;-)
  • It does not reflect current market trends (if you believe our Sun Grid vision): these blueprints define the building blocks of an internal grid, but the process of putting the IP-laden apps onto these building blocks is done partially by the delivering party and partially by the IT service provider (and is not part of the blueprint), which does not give a consistent, exchangeable, replaceable, repeatable way of operating.

Looking at the trends, I would recommend a different approach (which other parties started years ago. I, for example, was hired as a technical consultant by a different bank to help keep their RCM (Reliable Configuration Management) system up to date. This RCM is still in place and works perfectly for this customer, even after nearly a decade of operation):

  • For the hardware, use so-called staging centers. These are facilities of the delivering party where all components are pre-assembled, put into racks, and loaded with the delivering party's software (mainly the OS).
  • If you want more, you can instead get fully configured application environments, using Sun's DCRI (Data Center Reference Implementation) with the accompanying SRDCRI (Sun Ready DCRI) services. Here, complete environments are preconfigured, including custom apps.

These two solutions underline the classical approach of outsourcing/outtasking, but still carry the risk of not being "flexible enough", or of "vendor influence". Still, they add a lot of value, as they take away the risk of doing everything yourself, and the vendors do support these setups. And as the vendors do this often, the benefit from automation pays off; that is, it's cheaper than doing it yourself.

If you want more control or more flexibility, you need to take the wheel into your own hands, and might consider provisioning. This can be additive to the above, or could also be a replacement (we might offer DCRI-capable modules for N1 SPS, for example; JET, our utility for augmenting the JumpStart network installation method, is already part of N1 SPS, so your investments there are preserved and can be re-used).
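To make the "machine-readable blueprint" idea concrete: JET drives the standard JumpStart mechanism, whose profiles are already small declarative blueprints. A minimal sketch of such a profile (the disk device and software cluster are illustrative assumptions, not part of any real blueprint discussed here):

```
# Minimal custom JumpStart profile (illustrative only)
install_type    initial_install
system_type     standalone
partitioning    explicit
filesys         c0t0d0s1    1024    swap
filesys         c0t0d0s0    free    /
cluster         SUNWCreq
```

The point is that a file like this can be versioned, compared, and executed by the provisioning system, instead of living as prose in a blueprint document.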

This way you achieve the following:

  • Control of the definition process, because provisioning can be seen as a tool that maps definitions onto hardware. And the blueprints exist in machine-readable, executable format, because they are part of the provisioning system.
  • Flexibility, because you can easily replace the underlying hardware: upper-level configs and provisioning tasks do not change (putting an EAR into an app server is an app-server task and has no hardware or OS specifics to it).
  • And, last but not least, you set yourself up for what's coming with offerings like the Sun Grid, because you surround your processes with tools. The target of a provisioning task can also be changed easily: whether it is a landscape of servers in your datacenter or on the Sun Grid is irrelevant to the provisioning itself.

I hope I have given some insight into the thinking we at Sun (at least I myself) have with respect to what provisioning is meant to achieve. I hope these small snippets help you put all the market hype around provisioning into a context that is useful in your environment. And if you start considering re-thinking former decisions and aligning your IT processes to the ever-changing world, feel free to get in contact with me.

Provisioning should not force you to adapt to a specific vendor's ideas. It needs to adapt to your needs, while still being capable of adapting to market-driven demands (thereby enabling you to benefit from the things that are coming).

And to answer one of the comments/questions on my last entry: a sandbox is not as easy to define as "development only" or "just a small project, but covering the complete lifecycle".

The reason this answer is not so easy is simple: every customer, every prospect, every project is different. The most significant indicator of success is this: the fewer interfaces, the easier it is to use something as a sandbox. So if you have a project with a well-defined scope that does not influence other departments, such a project can serve as a sandbox. If the people on the customer side involved in that project like it and subscribe to its benefits, it becomes an injection point within the customer, because they will do the "word-of-mouth" type of internal advertising. This is even more the case if the project has high external visibility (for example, life-cycle management of a web-server infrastructure, where the provisioning can start out being used as a CMS (content management system)).

But most important is this: you need the buy-in of the people involved, because in the end they have to maintain the system afterwards. And as the introduction of provisioning aims at simplifying processes, it is imperative that the people who are to use these systems "love" them.

Makes sense?

Provisioning: Why customers have difficulties to really go that route…

By , June 2, 2006 07:08

I'm deeply involved in provisioning engagements (we call all of that "N1") in the German market, and what I have learned over the last 3-4 years from talking to customers and doing PoCs in different accounts can be summed up as follows:

  • Larger accounts tend to not decide, because a decision towards one single product puts them into the hands of a single vendor.
  • Larger accounts also tend to not decide, because their organisations are larger, and deciding to do provisioning also might need to be prepared by internal organisational “adoptions” or “adaptations”.
  • Larger accounts also tend to have a "dual-vendor" strategy, which prohibits a decision for a single vendor in the arena of datacenter provisioning. As long as there are no truly interchangeable products (and I think we will all be waiting a long time for those!), a change from one provisioning tool to another will always be a large re-implementation. The sad thing is that these companies still live the old sneaker-net way and will not benefit from the advantages of provisioning. Remember: even doing it with one vendor yields overall cost savings, and even a later change would not be as expensive as perceived, because the structural and organizational changes are re-usable!

But this only applies to companies that want to start by provisioning the complete datacenter. We are doing provisioning with many large accounts, but for smaller pieces of their environments. And that works very well when it is contained in small "sandboxes".

  • Smaller companies, on the other hand, are normally more willing to go this route, as they can adapt more quickly to change, so we see greater adoption here. And these smaller companies are willing to go, and are going, all the way. Remember: although these companies are smaller, the amount of work to be done is not, if we look at the configuration and setup of the tools.

What’s your feeling w.r.t. these topics?

Are you willing to start provisioning today?

Feedback welcome!

Provisioning and ITIL

By , June 1, 2006 09:36

We recently had a longer email discussion about the influence of provisioning tools like our N1 portfolio on the ITIL disciplines. (N.B.: I'm not ITIL-certified, so I might seem a little biased.) The discussion started around the following email regarding an announcement:


From the itSMF International Board Report:

"itSMFI has agreed to endorse an initiative by a group of software vendors (BMC Software, CA, Fujitsu, HP and IBM) which plans to develop an open, industry-wide specification for sharing information between Configuration Management Databases (CMDBs) and other data repositories. The group plans to submit a draft specification to an industry standards organisation later this year."

Does anyone know more about this?

This discussion led to the role of N1 P1 and N1 P2 (check Dave Levy's blog for more info on these upcoming versions of the N1 product portfolio) and to the role of CMDBs in these environments. Let me quote my colleague Jorgen Skogstad:


In my simple world: N1 SM, N1 SPS, P1, P2.. and similar ‘stuff’ are all CMDBs (aka Configuration Management DataBases).

Continuing later on, he raised the following question:


IMHO technology will always be just technology and can be applied in many ways. Some good, some bad. The same applies when using technology to achieve an efficient IT operation; easier said than done. If it were easy, why are our customers struggling to deliver business-aligned services at an efficient cost? Why are they even going down the outsourcing route.. and so on. :)

In simple terms; what would happen if you in _theory_ had 100 Sun systems running 100*4096 containers? That equals 409600 “systems” that each individually demand maintenance and support. Couple that with the service complexities running on top of it; for instance full blown application life-cycle management. Without a properly defined model to approach that problem you will inevitably run into problems.

Didier Kirszenberg then made a swing and stated:


Most customers launch their ITIL initiative focusing on "Help Desk" and the CMDB. They try to fill their CMDB via an inventory process. This can work for infrastructure, but not for fast-moving applications (like using a photo to capture a movie). N1 SPS is maybe the only tool on the market able to do "application release management", and with its central database it can give whichever product is chosen as the unique, central CMDB an exact view of the application assets at all times.

Jorgen then replied:


Didier,

I agree with you; SPS is not a CMDB per se. It performs more the function of the DSL in ITIL terms, but then again the DSL is a functional part of the CMDB. In any case, I've written two papers on the topics of 'applied provisioning' and 'the ITIL DSL in an agile infrastructure environment'. They state much the same as you mention below. If you're interested, have a look at them. You may find them here:

http://www.skogstad.com/papers/AUUG-2005-Applied-application-and-service-provisioning-0210905-FINAL.pdf

http://www.skogstad.com/papers/The-ITIL-DSL-in-an-agile-infrastructure-environment.pdf

Then I jumped in:


Folks,

but still, I do not like us to “reduce” the value of N1 SPS (and the upcoming N1 P2 product) to the “Release Management” part.

If you use a release management tool, and the release management keeps a record, it automagically becomes a CMDB, right?

Doing it the "back way", trying to "gather what's there" in order to "populate an inventory", might be a way to "handle the past", but is this really what we want?

Or, to turn it around:

Does a car manufacturer really want to keep a record of every lightbulb in every switch in every dashboard in order to service the “light switch”? Or is the “light switch” the thing he wants to be able to “replace” as a total, or even the “switch board” as a complete unit?

As such, the “gathering, what’s there” approach might be wrong…

Thought-provoking?

Jorgen answered:


Agree; in ITIL terms SPS fills a number of roles. It facilitates Release Management, Configuration Management, Change Management.. but it is also central to the CMDB + DSL. It does not "implement" either one in its entirety, but it provides an engine to manage them efficiently. If you look at my 'applied' paper, that's what it states. The other paper states how the DSL in ITIL terms should function in an agile infrastructure environment, which is what SM, SPS, P2 … etc. provide, right.

We then turned to the definition of the CIs (configuration items), and Jorgen continued:


Hence, if you are provisioning systems, services, applications etc. through SPS, SM etc., those _are_ in fact CIs that can fail, be patched, be brought down, be requested for 'change' and so forth, so you would want the CMDB (for example Remedy) to have state information about each CI. Without a proper history of that, the service desk staff and other users of the ITIL processes will not be able to perform their work ..

In simple terms: to have an efficient 'enabling ITSM' CMDB system, it has to know about all CIs out there. This includes servers, switches, routers and so forth, but more importantly applications, services and aggregate services. They fail, they are managed, changed, developed and so forth. So, unless state information and history are available within the ITIL processes, you immediately have a problem of communication and ownership. :)

.. hence the importance of the itSMFI initiative. If it were possible to tie "all" systems together, this would alleviate the problem and enable a more consistent way of managing an IT environment.

My main idea was:

If we use provisioning tools to provision complete infrastructures, and if these tools have a "compare" feature that can simply find "deviations" from the "specifications", do we then really need a tool to "gather what's there"? Because if we determine that there are deviations, we can simply "re-provision", right?
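The idea can be sketched in a few lines of shell. This is purely illustrative (the file names and package lists are made up, and a real tool would compare far richer configuration items than package names): compute the deviation between the desired specification and the actual state, and flag what would be re-provisioned.

```shell
#!/bin/sh
# Illustrative only: "desired" is the blueprint, "actual" is what an
# inventory of the system reports. Both are sorted package lists.
printf 'db\nssh\nweb\n' > /tmp/desired.list
printf 'ssh\nweb\n'     > /tmp/actual.list

# comm -23 prints lines present only in the first file, i.e. the
# deviations from the specification.
comm -23 /tmp/desired.list /tmp/actual.list > /tmp/deviations.list

while read pkg; do
  echo "deviation: $pkg missing - would re-provision"
done < /tmp/deviations.list
```

For the two sample lists above, the only deviation reported is `db`.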

I have to leave out the further discussion, because it got too specific about upcoming products, but rest assured, we are addressing these issues!

Simply put: “Our goal is to offer a pragmatic implementation of a CMDB.” (Doan Nguyen)

Feedback is welcome!

My Birthday

By , May 24, 2005 08:02

Today is my Birthday, so I thought, I might use it to start a Blog. You might be wondering, why that?

The simple reason is that I'm trying to work, but too many people keep passing by to congratulate me (and that on a non-special birthday like the 41st (mathematically speaking it's the 42nd, because the day I was born is the first birthday; finishing the first year marks the second, and so on)), so I cannot get any real work done. So, in order to at least do something, I thought about starting my blog.

It's a nice fact that, after more than 7 years with Sun, so many people want to come by and shake hands.

And after that long a period of living on this planet, Sun is the company I have worked at the longest. Tells me something about this company (or about the others before Sun).

Before I dive into more specific things later on in this blog, I’d like to give you an idea of the person behind this blog.

As you can derive, I'm 41 now. I was born in Frankfurt/Main, Germany, and raised in a suburb of Wiesbaden, where I finished school in 1983. I studied Mathematics and Computer Science at the University of Darmstadt, where I finished with my Diploma in January 1990. I then joined a spin-off of the university, the so-called Zentrum fuer Graphische Datenverarbeitung. Some of you might know that the University of Darmstadt is famous for its contributions to the world of computer graphics; Prof. Encarnacao is one of the three most renowned professors in the field worldwide. I worked on projects with multiple companies, the most notable being Honeywell, where I helped build the building automation system for the then-new airport in Munich. After five years I was transferred to another spin-off, the Fraunhofer Gesellschaft, where I joined the datacenter operations team. I was responsible for setting up the 3rd WWW conference (the one that gave birth to Java!), and handled everything from web-master and post-master duties to all the Sun systems there. As part of my job I participated in several Sun beta tests, most notably the Solaris 2.6 beta. I wrote an article about that in Germany's iX periodical, which led to Sun's interest in me. I took my chance and joined Sun on Feb 1st, 1998, as a project engineer in the Professional Services Organisation.

Inside Sun I've worked on far too many projects to list here. The main topics were (and still are): HA (aka SunCluster) and, lately, N1.

I'm a Datacenter Ambassador, Solution Architect, and member of the CETC (Customer Engineering Technical Council) in the Datacenter Practice of what is now called the Client Solutions Organisation in Germany.
