
Functionless

By Matthias Pfützner, September 26, 2017 09:41

In the microservices world, terms like serverless and function are the newest kids on the block.
But they are only evolutionary steps towards what comes next.
And that next step found me by accident last week: when I wanted to describe functions and serverless, I accidentally talked about “functionless”. So we now need to create the term “functionless” as the next evolutionary step after functions and serverless.

Just as Burr Sutter said in his keynote today at Red Hat’s Tech Exchange conference: serverless is a misnomer, because what serverless really means is that the servers are still there, but you can ignore them.

So functionless is when some other logic takes care of creating services from functions, and you don’t need to worry about the how.

So, basically, functionless is the Ansible of microservices:
Describe the end-state, and some cool tool builds it for you…
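
Just to illustrate the idea (and nothing more), here is a tiny, purely hypothetical Python sketch of what a “functionless” developer experience could feel like: you only write the function and declare the desired end-state, and some imagined platform logic turns it into a deployed service. The service decorator and its parameters are invented for this post, not any real product’s API.

# Hypothetical sketch only: the developer writes a plain function and declares
# the desired end-state; an imagined platform turns it into a running service.
def service(route, memory="128M", scale_to_zero=True):
    def register(func):
        # A real platform would package, deploy and wire up the function here.
        print(f"deploying {func.__name__} at {route} (memory={memory}, scale_to_zero={scale_to_zero})")
        return func
    return register

@service(route="/greet")
def greet(name):
    return f"Hello, {name}!"

print(greet("world"))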

Leaving Oracle/Sun

By Matthias Pfützner, June 25, 2012 22:51

Today I wrote the following email internally to Oracle:

14.5 years of Sun/Oracle, and now it’s time for me to say good-bye. In Germany we had the opportunity to participate in a voluntary reduction program, so this time I decided to “grab a slot” in that program.

I’ve seen people come, and I’ve seen people go during these 14.5 years.

Even worse, others have seen me come and now will see me go, among them those who had been my first contacts at Sun (XXX YYY, our really great chef de reception in our Langen office, XXX YYY, who was the one hiring me into Sun, as well as XXX YYY, who was my first boss’s assistant in those days. These are the three I had contact with even before I was a “Sunnie”, and who are still here at Oracle. Thanks for the warm welcome!). Over the years I then added many new contacts and many new friends through the myriad communication channels we had and the jobs I held, be it in Professional Services, be it as an ordinary SE, or later as an ambassador and finally a PFT. Even better, meeting all of you in person was really worth all the trouble and expense, be it at “champion meetings”, through the ambassador program and its meetings, or later via the multiple CECs. I’ll remember all that as one of the best (if not the best!) work experiences I ever had, and I’m glad to state that you all influenced and shaped the Matthias Pfützner we all know today. This place was the place I loved, because of you all!

But times change, and the industry changes with them, and today my interpretation of Moore’s Law is: every 2 years you need to reduce your workforce by a factor of two (because that’s what’s happening to HW revenues, due to the fact that software’s demands no longer really grow). So I decided to leave the HW-only (even server-only) part of Oracle, and am going to look for some new challenges.

I don’t know yet where the tides will drift me ashore, but you can rest assured it will be something where I will again have the opportunity to shape this industry and participate in bleeding-edge technology roll-out experiences.

There are so many people I would like to thank in person for the chance to work with and meet them that it’s impossible to do so here. I hope I have added all of you over the last years to my Xing, LinkedIn, Twitter, Google+ or even Facebook friend lists. If not, you can find me there and remind me to add you.

My last working day at Oracle will be this Friday, the 29th of June, 2012.

All the best to all of you whom I will be leaving behind. I’m sure we will all meet again sometime, somewhere, and as Paul McCartney said in his Unplugged session: “You’ve been a great bunch …”

Matthias

Nostalghia…

By Matthias Pfützner, April 20, 2011 13:33

Two days ago, so it seems (if it’s not a fake!), my former boss, Scott McNealy, joined Twitter as http://twitter.com/SMcNealy.

That poured some pieces of Nostalghia into my veins, and reminded me of a wish and dream I always had during my time with Sun.

I always wanted to stay at Sun long enough to have been part of Sun for more than half of Sun’s existence. That dream never came true… :-(

Sun was founded on February 24th, 1982, and was bought by Oracle on January 27th, 2010. That adds up to 10199 days.

I joined Sun on February 1st, 1998, and am still with Oracle today. So my days at Sun came to 4378.

My dream would have come true on January 10th, 2014, but Sun missed that date by 1444 days… On January 10th, 2014, Sun would have existed for 11643 days, and I would have been there for 5822 days, just over half of them.
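
For anyone who wants to check the arithmetic, the same numbers fall out of a few lines of Python (the dates are exactly the ones mentioned above):

from datetime import date

founded   = date(1982, 2, 24)   # Sun founded
acquired  = date(2010, 1, 27)   # Sun bought by Oracle
joined    = date(1998, 2, 1)    # my first day at Sun
milestone = date(2014, 1, 10)   # the day the dream would have come true

print((acquired - founded).days)    # 10199 days of Sun
print((acquired - joined).days)     # 4378 days for me at Sun
print((milestone - founded).days)   # 11643 days Sun would have existed
print((milestone - joined).days)    # 5822 days I would have been there
print((milestone - acquired).days)  # 1444 days short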

Performing those calculations also brought back another piece of nostalghia: my good old HP 41CX, now re-incarnated as an iPhone app!

Matthias

One more time: Lecture again

By Matthias Pfützner, April 30, 2010 15:54

Last summer, Ulrich Gräf and I gave a lecture at Technical University Darmstadt on “Innovative Operating System Elements”. The corresponding blog entry from last year can be found here: “It’s official: Ulrich and myself will be giving lecture at TU Darmstadt on Operating Systems”. So this summer we’ll do it again: “Innovative Operating System Elements”.

As before, the slides will show up under: “Innovative Operating System Elements SS 2010”.

Our current plan for the content is:

  • 16.04.: OS – What’s that? (UG)
  • 23.04.: IO (UG)
  • 30.04.: Storage (UG)
  • 07.05.: CPU and scheduling (UG)
  • 14.05.: Posix (UG)
  • 21.05.: High Availability (UG)
  • 28.05.: Cluster Methods (DU)
  • 04.06.: Networkfeatures in OSes (MP)
  • 11.06.: Security (UG)
  • 18.06.: Management / SAN / Filesystems (MP)
  • 25.06.: Filesystems (UG)
  • 02.07.: Virtualisation / VM (MP)
  • 09.07.: Virtualisation / OS (UG)
  • 16.07.: OS Generation 5 (UG)

You’ll notice a “DU” as lecturer in there. On May 28th, both Ulrich and I will be at the “GUUG” in Köln, so neither of us is able to give the lecture. Therefore we found Detlef Ulherr, one of Sun’s Solaris Cluster engineers, who will give that lecture.

And again, as last year, we intend to have an excursion to the “Heinz-Nixdorf MuseumsForum” in Paderborn, the largest computer museum in the world. Let Ulrich know if you’re interested in joining us on that excursion.

We’re looking forward to another interesting, fun, and hopefully useful lecture. Enjoy!

Matthias

Answer to Brian Madden’s 2015 Desktop article

By Matthias Pfützner, April 29, 2010 15:44

In “What the Windows desktop will look like in 2015: Brian’s vision of the future”, Brian describes his vision of the desktop in 2015.

A couple of small comments:

  1. Three months ago I posted something similar; see here: “VDI and its future”
  2. With the backend DC apps becoming “virtualized” (no more need for an OS, running natively on a hypervisor; see here: “Oracle WebLogic Suite Virtualization Option”), why not simply assume that every desktop app will also run natively on the hypervisor?
  3. As these apps then no longer require an OS, there might no longer be a need for Windows apps.
  4. The “old OS discussion” will then become a “hypervisor discussion”.
  5. If we also assume that APIs for deploying apps “into the cloud” will become homogeneous and compatible with “deploying to run on a hypervisor” (we could call a cloud a “hypervisor of OS functions”), we might then enter paradise, as we no longer need to care which app runs where or how to access it.

So, from all these thoughts, OSes will become irrelevant. That’s why MS et al. are starting to invest in the cloud game, as they need to transform their business model from “paying for owning apps” to “paying for getting a specific service”.

Even your “user workspace” idea might shift, as it might become a part of: “IT Futurology and the Terabyte iPod”.

Yes, your article might be “real” in that it keeps in mind that things change slowly; still, other posts of yours miss opportunities, as you seem to be too Windows-bound… ;-)

Still, it’s a good read! Thanks for that!

Matthias

Updates again: Small Home NAS Server…

By Matthias Pfützner, March 31, 2010 16:54

A while ago I wrote about the ongoing changes to my small NAS-Home-Server and promised pictures and updates.

So, now that the Chenbro case has arrived, I set up the whole system. The first task was to remove the PSU from the Chenbro case, as I intend to use a real external 120W PSU. That removes a heat source from inside the case and allows the fans to spin more slowly… The case fans aren’t as silent as the CPU fan, so, yes, sadly, they’re audible…

The second task was to remove the chassis-intrusion switch, as the Jetway board doesn’t make use of that signal.

You can see the missing parts in image one below… ;-)

After that, I had to build some mounting kits for the 1.8″ Toshiba disks, as the 2.5″ adapter had its mounting holes on the top and bottom, whereas the case expected them on the sides. So I bought a 7.5mm x 7.5mm (1mm thick) L-shaped aluminum profile, cut off 2 x 10cm pieces, drilled holes into them, and tapped 2.5mm threads. As the adapter has some protruding resistors, I needed to add some distance between the aluminum and the adapter, which I did with some plastic spacers. You can see that in the following images.

Then I found that the board has two fan headers, but only one of them is temperature-controlled in the BIOS. So I needed a Y-cable to connect both fans to the controlled header. More on that later…

The next disappointment: the external 80W PSU died during power-on, as the 5 disks drew too much power. So for now I have to pull out one of the 3.5″ disks during power-on and re-plug it a bit later. A new 120W PSU is on its way to me right now…

The next disappointment: the Jetway board was not able to present the 2 disks attached to the NM10 chipset as SATA (AHCI) disks; they show up as IDE. OK, no problem for ZFS, it works, but it’s a bit slower.

So, with these three disappointments, I wrote an email to Jetway asking for a remedy. To my great pleasure they responded immediately, and less than a week later I now have a pre-production version of a new BIOS that allows setting the onboard SATA ports to either IDE or AHCI, and also allows setting parameters for the second fan header. So I switched back to two separate cables for the two fans, so that they can be controlled individually. The only thing they did not implement was a “delayed power-on” for the individual disks. But I guess that will be solved once I have a more powerful external PSU.

Power consumption now is at 43W when idle and ~55W when all disks are active. The nice thing is: hot-swap of disks is doable, so once one dies, it’s easy to replace it with a bigger one.

Now, I’m only waiting for the OpenSolaris 2010.03 version to appear, so that I can do a clean OS re-install.

So, with that, here are the images. Enjoy!

Matthias

Cloud, DevOps, ITIL, Provisioning – A reflection on James Urquhart’s and Dan Wood’s articles

By Matthias Pfützner, March 30, 2010 16:57

James Urquhart (@jamesurquhart) posted a series of articles on operational management in the cloud on his blog: The Wisdom of Clouds.

Following are my comments on his series and the discussions that followed on Twitter.

But first, the links to James’ articles, entitled “Understanding cloud and ‘devops’”.

The series also refers to his articles on Payload description and Application packaging, as well as to Dan Woods’ article on Virtualization’s Limits.

Dan states:

“But automated provisioning and management of cloud computing resources is just one of three elements needed for intelligent workload management. The other two are the ability to set up and configure an application that is going to run on a cloud or virtualized server, and then making sure the data ends up where it is needed. Here’s where the hard problems arise.”

Dan is right, but also wrong in his arguments.

Let’s look back a bit in IT history: 5-10 years ago, the notion of “provisioning” tried to shape the way DCs should be managed. Terms like SODC (service-oriented datacenter) and OM (operational maturity) were hip. Still, they neglected a couple of seemingly trivial things, like inconsistent upgrade paths of software stacks and the inherent “need” of app users to tweak the apps according to their perceived needs.

Let’s look at the latter first: why did that culture of “tweaking” or “tuning” apps happen? Because in many cases the HW was not fast enough to fulfill the needs of the end users. That’s why tuning was very popular and happened almost always. But there’s a side effect to that:

R. Needleman, Editor in Chief of Byte Magazine decades ago, once wrote on this topic in an editorial:

“And no matter what hardware you have, it’s really hard to learn to play piano.”

This might be proof of Dan’s statement, but it is also proof of a dilemma that many hardware-building and hardware-selling companies have today: software’s demand for CPU cycles didn’t keep up with Moore’s Law. That’s why we see more and more underutilized systems, and why we experience a shift towards appliances. This seems to be the only way for a hardware creator and vendor to survive: create the margin from something other than the hardware; add stuff to the stack, so that a competitive advantage emerges across the whole stack. That’s, for example, why Oracle bought Sun. From this also comes a second thing: standardization. In order to be able to exchange the underlying hardware for cheaper and more powerful hardware, app deployers and users now tend to tweak and tune far less than they did decades ago. Today we see far more “standardized” deployments of software stacks than we saw back then. This is also driven by the broad acceptance of virtualization: V12N at least provides a standardized layer for the operating system, so that no tweaking or tuning is needed there any longer. That in turn led to applying the same methods to the apps on top of the OS, and we see so-called “images” becoming the unit of deployment in virtualized environments.

Back to Dan’s argument, and his problem statement:

I’ve been in provisioning for more than a decade now, and I’ve seen 100% automated setups, from Deutsche Bank’s working RCM (Reliable Configuration Management) and its successor RCMNG (Next Generation), to the never-deployed APE (A Provisioning Environment) at DaimlerChrysler, to the things that are in production at BG-Phoenics or Deutsche Bahn. These things do work, and, yes, they do a 100% automated bare-metal install, all the way up to app deployment, app configuration management, and even content provisioning.

So, back to James’ points, which also address the first of the two pain points mentioned above!

The main problem of all these environments is the fact that the “meta data” James refers to needs to be adapted and kept up to date, over the lifetime of an environment, to the ever-changing pieces it is built of. Never assume that the install for version X of app C can also be used for version Y of app C. A big maintenance effort is required here, and with the diversity of the apps themselves, even across versions, this is something that can’t be neglected. And in an environment where time-to-market and a fine-tuned setup are key, spending time on shaping the meta-data handling simply didn’t happen or wasn’t deemed worthwhile.

So, with the advent of V12N and the term “cloud computing”, we now enter an era where, due to the more standardized deployments of OSes as well as apps, and due to the fact that most of the “configuration” of the apps can already be done during installation, the amount of work needed to manage the “meta data” shrinks. That in turn allows us to think about provisioning on a broader scale again.

In his “Payload description” article and its predecessor, James describes exactly the things that drove companies like TerraSpring or CenterRun to create their provisioning tools. James calls the envelope a pCard. CenterRun, over a decade ago, called this a resource. In CenterRun, resources can inherit capabilities (parameters, deployment routines, et al.; it’s a really object-oriented approach!) from other resources, and can also ask their installation targets (called hosts, which can be physical or virtual; a virtual host in turn can be an “entity” like a web-server farm into which you can deploy content or “apps”) for their specific capabilities, like spare room for the payload, OS version, CPU type, or you name it.
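
To make that model a bit more tangible, here is a minimal, purely illustrative Python sketch of the resource/host idea described above: resources inheriting capabilities from other resources, and asking a host about its capabilities before deploying. The class names and attributes are my own invention for this post, not CenterRun’s (or James’) actual API.

# Toy model only: "resources" inherit capabilities and check a "host" before deploying.
class Host:
    def __init__(self, name, **capabilities):
        self.name = name
        self.capabilities = capabilities          # e.g. os_version, cpu_type, free_disk_gb

class Resource:
    def __init__(self, name, parent=None, **capabilities):
        self.name = name
        self.parent = parent
        self.own = capabilities

    def capability(self, key):
        # Inheritance: fall back to the parent resource if we don't define the value ourselves.
        if key in self.own:
            return self.own[key]
        return self.parent.capability(key) if self.parent else None

    def deploy(self, host):
        # Ask the target host for its capabilities before deploying onto it.
        needed = self.capability("required_os")
        if needed and host.capabilities.get("os_version") != needed:
            print(f"{self.name}: host {host.name} does not offer {needed}, skipping")
            return
        print(f"{self.name}: deploying to {host.name}")

base_app = Resource("generic-web-app", required_os="Solaris 10")
my_app = Resource("my-shop-frontend", parent=base_app)       # inherits required_os
web_farm = Host("web-farm-1", os_version="Solaris 10", cpu_type="x86", free_disk_gb=120)
my_app.deploy(web_farm)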

So, what was needed in order to successfully use tools like CenterRun (and, yes, that was not the only tool of its time; there were way more!) was a model of the overall stack, breaking it down into generic, yet specific enough, resources and hosts, so that deployments could be done over a longer period of time. The main pitfall was that thinking in “hosts” led people to believe that a host is a “physical machine”.

Now that we see that James’ ideas are nothing new, and had already been proven to work close to a decade ago, why weren’t they a great success over time, and why aren’t they even seen by James as part of the solution to his problem statement, or as part of Dan’s idea of the need for “systems management” at a higher level?

I see mainly two reasons for that, both already mentioned above:

  • It’s tedious to manage all of the needed meta-data of the whole stack.
  • The stack changed too often to make it worthwhile to use “provisioning” or “automation” of the stack. I once stated: “If you want to automate chaos, you’ll get chaos automatically!”

So, why do people like Dan or James believe, and why do I agree, that now, with the notion of “Cloud Computing”, it’s time again to think about “provisioning”?

First, as mentioned above, the complexity of the stack is shrinking because V12N is helping with standardization: fewer options, easier to manage!

Second, many of the former after-the-fact config and tuning options are now installer options, or are simply never applied. There are a couple of reasons for that: again, CPU cycles are now more easily available, so fine-grained tuning is no longer a necessity. Many config options are now install-time options, which also makes the handling easier, because the steps to achieve a given goal are reduced. And many customers learned the hard way that tweaking a piece of software to its limits killed a possible upgrade path to newer versions, as some features or “tweaks” had simply disappeared in newer versions. So customers now tend to stick to more off-the-shelf installs, hoping to be able to upgrade to newer versions more quickly. This in turn also reduces the complexity of the pCard (in James’ terms) or of the meta-data modeling, making it feasible to perform such tasks.

Third, we see a “reduction” in the number of options for a given task or problem. There’s a concentration going on in the IT industry, which some publications call the “industrialization of IT” or “commoditization”. With it comes a reduction in, for example, the number of software solutions for a given task, and also a concentration in the hands of single companies. That leads to more integrated software stacks, which in turn simplifies the meta-data and makes it feasible to start looking again at provisioning the whole stack. As in the car industry: you’re no longer looking for the individual parts to build a car from, you’re buying it “off the shelf”; or, on the manufacturing side of the story, you’re no longer looking for people to build the car, because since Ford’s invention of the assembly line you’re looking at automating the building of the car.

So, what is James saying in the (so far) two-part DevOps series?

He’s going back to what I referred to above as “operational maturity” (ITIL speak): no longer managing individual pieces and being forced to react to changes in those resources, but “designing” things so that they can benefit from whatever underlying layers are available.

In my world, there are also constraints that need to be acknowledged: in order to design things, you need at least two ingredients: the freedom (and the capabilities!) to “implement” your dreams, and simple enough elements to build the implementation of those “dreams” from. If you were forced to create a “one-off” to implement your dream (or design), then some basic requirements, like “elasticity” or “rapid deployment”, might be difficult to achieve.

So here, too, the basic rule of “managing constraints” still applies. Yes, James is right in that the focus shifts from OSes and servers to applications. That’s why the term “appliance” was coined a while ago, and why all vendors today are shifting their focus to easily providing “services” in the form of an appliance. A current example from the company I work for is the Exadata 2 database machine: order it, get it, and use it at the latest two days after delivery. No more tweaking, configuring, and exception handling if the pieces don’t work as expected. You get what you want and what you need.

This appliance approach, when brought to the “cloud”, needs rules so that these appliances can happily live together in the cloud. That’s what James describes in the second article of the series.

Still, my mantra from years ago applies: “If you automate chaos, you’ll get chaos automatically!”

But: today it gets easier to manage the chaos, as there are fewer switches and glitches to handle, due to more standardized elements of the stack. That in turn makes it easier for the provisioning tool providers, as the tools themselves no longer need to be over-sophisticated, but can be stripped down to simpler approaches. That’s why, for example, the provisioning part of Oracle Enterprise Manager Grid Control gets more important over time, and will be an important part of the systems management portfolio. Without elasticity management and deployment capabilities, you can no longer manage, and therefore sell, software.

But let’s not forget: here we’re talking about the “back-end” side of things! The “front-end” side, the “desktop computing” part, I covered in my earlier post: VDI and its future

Finally, I’ll leave you with Tim O’Reilly, who published his thoughts on the Internet Operating System, which Sam Johnston calls the cloud… ;-)

Enjoy!

Matthias

Another one joins the forces…

By Matthias Pfützner, March 29, 2010 17:22

…of OpenSolaris-based home NAS solutions…

Read: Joining the ZFS Revolution

And yet another one; read: ZFS Home NAS

So, we see: ZFS is the trigger for many to select OpenSolaris as the basis for home-built NAS solutions. There must be a reason for that, don’t you think?

Matthias

Small Home Server, Update…

By Matthias Pfützner, March 15, 2010 11:00

In http://blogs.pfuetzner.de/matthias/?p=495 I wrote that I bought a new motherboard. It arrived on Friday, and as I still don’t have a new case (I ordered the Chenbro ES34069 today; sadly, in Europe it’s only available with a PSU, a waste of money, but I’m willing to spend it for the size and features of the case), I simply put the board into the old case and left the case open. I also did not yet connect the external USB disks via SATA; that will be done when the new case arrives.

So, did my feelings mislead me? No! I simply unplugged the boot disk from the old board, removed an additional IDE converter (the new board has the smaller IDE connector that includes power, so there is no need for extra power cables to the boot disk!), plugged the disk into the board, connected the power switch and external PSU to the board, pressed the power button, quickly checked the BIOS settings (no need to change anything, the defaults were good, but I will later turn on the “shutdown on overheating” option), pressed “e” when GRUB came up, added a “-r” to the boot line, and pressed “b”. For this first boot I didn’t have the USB disks connected, but I could have; everything went straightforwardly, and the system came up. On the second reboot I added the USB disks, and again all went well. The ONLY two small changes I needed to make were:

mv /etc/hostname.rge0 /etc/hostname.rge1   # rename the network interface config to the new device name
vi /etc/power.conf                         # adjust the device paths for power management by hand

Reason for that?

The new board has a slightly different RealTek network adapter, so it could not re-use the old settings and created a new device, namely rge1. And, of course, the hardware paths to the boot disk and the USB disks are slightly different, so I needed to adapt the entries in the /etc/power.conf file to the new values.

That was it! And although the board has a CPU fan, that fan is so quiet that I don’t really hear it, even though the current case is open. I can highly recommend this board!

Stay tuned for the next update sometime this week, when the new case arrives!

Matthias

Small power efficient home NAS server, revisited

By , March 9, 2010 15:41

Two weeks ago, my small home server died (I assume of heat, but I am not sure, as it had been running for more than a year without glitches; still, it’s dead).

As I wanted to upgrade it anyway, this is a good time to do so… ;-)

So, my new preferences are:

  • Disks no longer attached externally via USB. (USB 2.0 isn’t as fast as SATA)
  • Less power draw (5 external PSUs aren’t that efficient)
  • All in one chassis (less waste of space, possibly also a bit better noise reduction)
  • Still IDE for the boot disk

So, in order to have the 4 disks directly attached via SATA (all 4 of my USB disks are internally SATA, so I will remove them from their enclosures and attach them directly to the SATA ports on a new motherboard), I need a board that has at least 4 SATA ports.

I also wanted to stick with Intel Atom (although, if they added VT-x support, I would really appreciate it! But that’s not yet a real requirement, so I can live without that feature).

Of the new Atoms, the N450 and N470 aren’t useful to me, as these CPUs are 32-bit only. So, in order to be able to do a bit more than pure NAS, I want the new D510 CPU.

To that end, I was scanning the new Intel Atom D510 boards. There aren’t many yet (the Asus is not yet available to buy); the Gigabyte, the Jetway, and the Zotac are the ones that fulfill my current needs.

When I ordered my new board, I was not aware of the Gigabyte board. And the Zotac has WiFi, which I was not willing to pay for, plus the fact that it uses Mini-DTX and not Mini-ITX (OK, not a big issue!). About the Jetway, I love the fact that it does not need an internal PSU, as I can run it with a standard external 12V laptop PSU, which I already have from my old setup.

So, yesterday, I ordered the Jetway NC96-510-LF board. The nice thing is: I can re-use the RAM from the old board as well as the external PSU. I also hope not to need to re-install the OS disk, as this board seems very close to the old Intel board. I hope a reconfigure boot will take care of that…

So, now, my only concern is a new chassis… ;-)

Requirements for that are:

  • Mini-ITX capable
  • No PSU
  • Hosting 4 x 3.5″ disks
  • Hosting 1 x 2.5″ disk (my 1.8″ disk is mounted to a 2.5″ adapter)
  • Low noise fan (if at all!)

Any recommendations for such a case?

Matthias
