Updates again: Small Home NAS Server…

By , March 31, 2010 16:54

A while ago I wrote about the ongoing changes to my small NAS home server and promised pictures and updates.

So, now that the Chenbro case has arrived, I set up the whole system. The first task was to remove the PSU from the Chenbro case, as I intend to use a real external 120W PSU. That removes one heat generator from inside the case and allows the fans to spin more slowly… The case fans aren’t as silent as the CPU fan, so, yes, sadly, they’re audible…

The second task was to remove the electric case-intrusion switch, as the Jetway board doesn’t make use of that signal.

You can see the missing parts in image one below… ;-)

After that, I had to build mounting brackets for the 1.8″ Toshiba disks, as the 2.5″ adapter has its mounting holes on the bottom, whereas the case expects them on the sides. So I bought a 7.5mm x 7.5mm (1mm thick) L-shaped aluminum profile, cut off two 10cm pieces, drilled holes into them and tapped 2.5mm threads. As the adapter has some protruding resistors, I needed to add some spacing between the aluminum and the adapter, which I did with a bit of plastic. You can see that in the following images.

Then I found that the board does have two fan sockets, but only one of them is temperature-controlled in the BIOS. So, you see, I needed a Y-cable to connect both fans to the controlled outlet. More on that later…

The next disappointment: the external 80W PSU gave up during power-on, as the 5 disks drew too much power while spinning up. So, for now, I need to pull out one of the 3.5″ disks during power-on and re-plug it a bit later. A new 120W PSU is on its way to me right now…

The next disappointment: the Jetway board was not able to run the 2 disks attached to the NM10 chipset as SATA (AHCI) disks; they show up as IDE. OK, no problem for ZFS, it works, but it’s a bit slower.
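
If you want to double-check which mode you ended up with, a quick look at the driver bindings is enough. A small sketch (the exact output obviously depends on the box):

# which driver is the SATA controller bound to? pci-ide means legacy IDE mode
prtconf -D | grep -i -e ahci -e pci-ide
# with AHCI active, the ports also show up as hot-pluggable attachment points
cfgadm -al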

So, with these three disappointments I wrote an email to Jetway, asking for a remedy. To my great pleasure they responded immediately, and less than a week later I now have a pre-production version of a new BIOS that allows the onboard SATA ports to be set to either IDE or AHCI, and also allows setting parameters for the second fan socket. So, I switched back to two separate cables for the two fans, so that they can be controlled individually. The only thing they did not implement was the “delayed power-on” (staggered spin-up) for the individual disks. But I guess that will be solved once I have a more powerful external PSU.

Power consumption is now at 43W when idle, and ~55W when all disks are active. The nice thing is: hot-swapping disks works, so once one dies, it’s easy to replace it with a bigger one.
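
For reference, a minimal sketch of what such a swap looks like on the ZFS side (pool and device names below are made up, and the autoexpand property needs a reasonably recent build):

# after hot-plugging the new disk into the slot of the dead one:
zpool replace tank c4t1d0      # resilver onto the new disk in the same slot
zpool status tank              # watch the resilver progress
zpool set autoexpand=on tank   # let the pool grow once all disks of the vdev are bigger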

Now, I’m only waiting for the OpenSolaris 2010.03 version to appear, so that I can do a clean OS re-install.

So, with that, here are the images, enjoy!

Matthias

Cloud, DevOps, ITIL, Provisioning – A reflection on James Urquhart’s and Dan Wood’s articles

By , March 30, 2010 16:57

James Urquhart (@jamesurquhart) posted a series of articles on operational management in the cloud on his blog: The Wisdom of Clouds.

Following are my comments on his series and on the discussions that followed on Twitter.

But first, for context: James’ articles are entitled “Understanding cloud and ‘devops’” (a two-part series so far).

That series also refers to his earlier articles around Payload description and Application packaging, as well as to Dan Woods’ article on Virtualization’s Limits.

Dan states:

“But automated provisioning and management of cloud computing resources is just one of three elements needed for intelligent workload management. The other two are the ability to set up and configure an application that is going to run on a cloud or virtualized server, and then making sure the data ends up where it is needed. Here’s where the hard problems arise.”

Dan is right, but also wrong in his arguments.

Let’s look back a bit in IT history: 5-10 years ago, the notion of “provisioning” tried to shape the way data centers should be managed. Terms like SODC (service-oriented data center) and OM (operational maturity) were hip. Still, they neglected a couple of seemingly trivial things, like inconsistent upgrade paths of software stacks, and the inherent “need” of app users to tweak the apps according to their perceived needs.

Let’s look at the latter first: why did that culture of “tweaking” or “tuning” apps arise? Because in many cases the hardware was not fast enough to fulfill the needs of the end users. That’s why tuning was very popular, and happened close to always. But there’s a side effect to that:

R. Needleman, Editor in Chief of Byte Magazine decades ago, once wrote on this topic in an editorial:

“And no matter what hardware you have, it’s really hard to learn to play piano.”

This might be proof of Dan’s statement, but it is also proof of a dilemma that many hardware-selling and hardware-creating companies have today: the software’s need for CPU cycles didn’t keep up with Moore’s Law. That’s why we see more and more underutilized systems, and why we experience a shift towards appliances. This seems to be the only way for a hardware creator and vendor to survive: create the margin from something other than the hardware. Add stuff to the stack, so that a competitive advantage arises across the whole stack. That’s, for example, why Oracle bought Sun.

From this also comes a second thing: standardization. In order to be able to exchange the underlying hardware for cheaper and more powerful hardware, app deployers and users now tend to no longer tweak and tune as much as they did decades ago. Today, we see far more “standardized” deployments of software stacks than we saw back then. This is also driven by the broad acceptance of virtualization. V12N at least provides a standardized layer for the operating system, so that no tweaking or tuning is needed there any longer. That in turn led to the notion of applying such methods to the apps on top of the OS as well, and we see so-called “images” becoming the unit of access in virtualized environments.

Back to Dan’s argument, and his problem statement:

I’ve been in provisioning for more than a decade now, and I’ve seen 100% automated setups: from Deutsche Bank’s working RCM (Reliable Configuration Management) over its next version, RCMNG (Next Generation), to the never-deployed APE (A Provisioning Environment) at DaimlerChrysler, over to the things that are in production at BG-Phoenics or Deutsche Bahn. These things do work, and, yes, they do a 100% automated bare-metal install, up to app deployment and app configuration management, even up to content provisioning.

So, back to James’ points, which also address the former pain point mentioned above, the inconsistent upgrade paths!

The main problem of all these environments is the fact that the “meta data” James refers to needs to be adapted and kept up to date, over the lifetime of an environment, to the ever-changing pieces it is built from. Never assume that the install for version X of app C can also be used for version Y of app C. Here, a big maintenance effort has to be made, and with the diversity of the apps themselves, even across versions, this is something that can’t be neglected. And in an environment where time-to-market and a fine-tuned setup are key, spending time on shaping the meta-data handling simply didn’t happen or wasn’t deemed worthwhile.

So, with the advent of V12N and the term “Cloud Computing”, we now get into an era where, due to the more standardized deployments of OSes as well as apps, and due to the fact that most of the “configuration” of the apps can already be done during installation, the amount of work needed to manage the “meta data” changes and gets smaller. That in turn allows us to think about provisioning on a broader scale again.

James describes in his “Payload description” article and its predecessor exactly the things that had been the drivers for companies like TerraSpring or CenterRun to create their provisioning tools. James calls the envelope a pCard. CenterRun, over a decade ago, called this a resource. In CenterRun, resources can inherit capabilities (parameters, deployment routines, and so on; it’s a really object-oriented approach!) from other resources, and can also ask their installation targets (called hosts, which can be physical or virtual; a virtual host in turn can be an “entity” like a web-server farm, into which you can deploy content or “apps”) for their specific capabilities, like free space for the payload, or OS version, or CPU type, or you-name-it.

So, what was needed in order to successfully use tools like CenterRun (and, no, that was not the only tool of its time! There were way more!) was a modeling of the overall stack, breaking it down into generic, but specific enough, resources and hosts, so that deployments could be done over a longer period of time. The main pitfall was that thinking in “hosts” led people to believe that a host is a “physical machine”.

Now that we see that James’ ideas are nothing new, and had already been proven to work close to a decade ago, why haven’t they been a great success over time, and why do they still appear to James as part of the solution to his problem statement, or to Dan as the needed “systems management” at a higher level?

I do see mainly two reasons for that, both already being mentioned above:

  • It’s tedious to manage all of the needed meta-data of the whole stack.
  • The stack changed too often to make it worthwhile to use “provisioning” or “automation” of the stack. I once stated: “If you want to automate chaos, you’ll get chaos automatically!”

So, why do people like Dan or James believe, and why do I agree, that now, with the notion of “Cloud Computing”, it’s time again to think about “provisioning”?

First, as mentioned above, the complexity of the stack is shrinking, because V12N helps with standardization: fewer options, easier to manage!

Second, many of the after-the-fact config and tuning options are now options to the installer, or will simply never be exercised. There are a couple of reasons for that: again, CPU cycles are now more easily available, so fine-grained tuning is no longer a necessity. Many config options are now install-time options, which makes the handling easier, because the number of steps to reach a given goal is reduced. And then many customers learned the hard way that tweaking a software stack to its limits killed a possible upgrade path to newer versions, as some features or “tweaks” had simply disappeared in newer releases. So, customers now tend to stick to more off-the-shelf installs, hoping to be able to upgrade to newer versions more quickly. This in turn also reduces the complexity of the pCard (in James’ terms) or the meta-data modeling, making it possible to perform such tasks.

Third, we see a “reduction” of options for tasks or problems. There’s a consolidation going on in the IT industry, which some publications call the “industrialization of IT” or “commoditization”. With that comes a reduction of, for example, the number of software solutions for a given task, and also a concentration in the hands of single companies. That leads to more integrated software stacks, which in turn also simplifies the meta data, and makes it feasible to start looking at provisioning of the whole stack again. Like in the car industry: you’re no longer looking for the individual parts to build a car from, you’re buying it “off the shelf”; or, on the car-manufacturing side of the story, you’re no longer looking for people to build the car, but, since Ford’s introduction of the assembly line, you’re looking at automating the building of the car.

So, what now is James saying in the so-far 2-part DevOps series?

He’s going back to what I called “Operational Maturity” (ITIL speak) above. No longer managing individual pieces and being forced to react to changes in those resources, but “designing” stuff, so that it can benefit from whatever underlying layers are available.

In my world, there are also constraints that need to be acknowledged: in order to design stuff, you need at least two things: freedom (and capabilities!) to “implement” your dreams, and simple enough elements to build the implementation of those “dreams” from. If you were forced to create a “one-off” for the implementation of your dream (or design), then some basic requirements might be difficult to achieve, like “elasticity” or “rapid deployment”.

So, also here, the basic rule of “managing constraints” is still in place. Yes, James is right in that the focus shifts from OSes and servers to applications. That’s why the term “appliance” was created a while ago, and why all vendors today start shifting their focus to easily providing “services” in the form of an appliance. One current example from the company I work for is the Exadata 2 Database Machine. Order it, get it, and use it at the latest two days after delivery. No more tweaking, configuring, and exception handling when the pieces don’t work as expected. You get what you want and what you need.

This appliance approach, when brought to the “Cloud” needs rules, so that these appliances can happily live together in the cloud. That’s what James describes in his second article of the series.

Still, my mantra from years ago, applies: “If you automate chaos, you’ll get chaos automatically!”

But: today it gets easier to manage the chaos, as there are fewer switches and glitches to deal with, due to the more standardized elements of the stack. That in turn also makes it easier for the “provisioning tool providers”, as the tools themselves no longer need to be over-sophisticated, but can be stripped down to simpler approaches. That’s why, for example, in Oracle Enterprise Manager Grid Control the provisioning part gets more important over time, and will be an important part of the systems-management portfolio. Without elasticity-management and deployment capabilities, you can no longer manage, and therefore sell, software.

But let’s not forget: here we’re talking about the “back-end” side of things! The “front-end” side, the “desktop computing” part, I covered in my earlier post: VDI and its future

Finally, I’ll leave you with Tim O’Reilly, who published his thoughts on the Internet Operating System, which Sam Johnston calls the cloud… ;-)

Enjoy!

Matthias

Another one joins the forces…

By , March 29, 2010 17:22

…of OpenSolaris based home-NAS solutions…

Read: Joining the ZFS Revolution

And again another one, read: ZFS Home NAS

So, we see: ZFS is the trigger for many to select OpenSolaris as the basis for home-built NAS solutions. There must be a reason for that, don’t you think?
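
Just to illustrate the appeal with a minimal sketch (pool name and disk devices below are made up): a redundant, checksummed pool with compression is only a couple of commands away:

# create a raidz pool out of four disks (device names are hypothetical)
zpool create tank raidz c4t0d0 c4t1d0 c4t2d0 c4t3d0
# carve out a filesystem and switch on compression
zfs create tank/media
zfs set compression=on tank/media
zpool status tank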

Matthias

Small Home Server, Update…

By , March 15, 2010 11:00

In http://blogs.pfuetzner.de/matthias/?p=495 I wrote that I bought a new motherboard. It arrived on Friday, and as I still don’t have a new case (I ordered the Chenbro ES34069 today; sadly, in Europe it’s only available with a PSU, a waste of money, but I’m willing to spend it for the size and features of the case), I simply put it into the old case and left the case open. I also did not yet connect the external USB disks via SATA; that will be done when the new case arrives.

So, did my feelings mislead me? No! I simply unplugged the boot disk from the old board, removed an additional IDE converter (the new board has the smaller IDE connector that also carries power, so no need for additional power cables to the boot disk!), plugged the disk into the board, connected the power switch and external PSU to the board, pressed the power button, quickly checked the BIOS settings (no need to change anything, the default values were good, but I will later turn on the “shutdown on overheating” option), pressed “e” when grub came up, added a “-r” (reconfigure) to the boot line, and pressed “b”. On the first boot I didn’t have the USB disks connected, but I could have; things went straightforward, and the system came up. On the second reboot I added the USB disks, and again, all went well. The ONLY two small changes I needed to make were:

mv /etc/hostname.rge0 /etc/hostname.rge1
vi /etc/power.conf

Reason for that?

The new board has a slightly different Realtek network adapter, so the OS could not re-use the old settings and created a new device, namely rge1. And, of course, the hardware paths to the boot disk and the USB disks are slightly different, so I needed to adapt the values in the /etc/power.conf file to the new paths.
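
For illustration only: the entries in question in /etc/power.conf are the device-thresholds lines, keyed by the physical device path. The path below is a made-up example; the real one can be read off the /dev/dsk symlinks:

# find the new physical path behind a disk device (names are examples only)
ls -l /dev/dsk/c0d0s0
# then point the spin-down entry in /etc/power.conf at that new path, e.g.:
# device-thresholds   /pci@0,0/pci-ide@1f,1/ide@0/cmdk@0,0   300s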

That’s been it! And although the board has a CPU fan, that fan is so quiet that I don’t really hear it, even though the current case is open. I can highly recommend that board!

Stay tuned for the next update sometime this week, when the new case arrives!

Matthias

Naked, but secure? How much security can a democracy endure?

By , March 10, 2010 14:13

Yesterday, I watched (as usual) the latest edition of Quarks & Co. This edition was titled “Naked, but secure? How much security can a democracy endure?”

As usual with this format, Ranga Yogeshwar, its host, did not make any ostentatious claims, but listed simple facts. He had two guests in the studio: one who had once been perceived as a terrorist and had been quietly shadowed by Germany’s security agencies, and a social psychologist, who explained the mechanisms of perceived “Angst” and the irrational reactions to it by the state and its people. One example: after 9/11, US citizens no longer wanted to travel by plane, because they feared becoming victims of another terror attack. So they travelled by car. The accident rate on the road is way higher than in the air, so that irrational shift led to some 1,500 additional deaths in the year after 9/11. Another example: Frankfurt airport alone pays 2 million euros PER WEEK for disposing of the water bottles and other items confiscated at the security checks. The psychologist simply said: “Use that money to develop Afghanistan, and Al-Qaeda is history!”

These are just two small examples of misguided reactions to terror events, whether induced by the government or arising from the irrational reactions of its citizens.

Think about it! And watch the replay on Saturday at noon on WDR3! Or download the feature as a podcast from: http://medien.wdr.de/download/1268164800/quarks/wdr_fernsehen_quarks_und_co_20100309.mp4. Sadly, it’s in German only… That’s the only caveat I have…

Thanks, Ranga, for a great edition (as usual!)!

Matthias

Small power efficient home NAS server, revisited

By , March 9, 2010 15:41

Two weeks ago, my small home server died (of heat, I assume, but I’m not sure, as it had been running for more than a year without glitches; still, it’s dead).

As I wanted to upgrade it anyway, this is a good time to do so… ;-)

So, my new preferences are:

  • Disks no longer attached externally via USB. (USB 2.0 isn’t as fast as SATA)
  • Less power draw (5 external PSUs aren’t that efficient)
  • All in one chassis (less waste of space, possibly also a bit better noise reduction)
  • Still IDE for the boot disk

So, in order to have the 4 disks directly attached via SATA (all my 4 USB disks are internally SATA, so I will remove them from their enclosures and attach them directly to the SATA ports on a new motherboard), I need a board that has at least 4 SATA ports.
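
Moving the disks out of their enclosures shouldn’t confuse ZFS, by the way, as the pool metadata lives on the disks themselves. A sketch of what I expect the procedure to be (the pool name is made up, and -f will likely be needed, since the dead box never exported the pool):

# after cabling the disks to the onboard SATA ports:
zpool import            # scan the attached disks for importable pools
zpool import -f tank    # force the import, as the old box never exported the pool
zpool status tank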

I also wanted to stick with Intel Atom (although, if they added VT-x support, I would really appreciate that! But that’s not yet a real requirement, so I can live without that feature).

Of the new Atoms, the N450 and N470 aren’t useful to me, as those are single-core parts aimed at netbooks. So, in order to be able to do a bit more than pure NAS, I want the new dual-core D510 CPU.

To that end, I was scanning the new Intel Atom D510 boards. There aren’t many yet (the Asus isn’t buyable yet); the Gigabyte, the Jetway, and the Zotac are the ones that fulfill my current needs.

When I ordered my new board, I was not aware of the Gigabyte board. And the Zotac has WiFi, which I was not willing to pay for, and it uses Mini-DTX, not Mini-ITX (OK, not a big issue!). What I love about the Jetway is that it does not need an internal PSU, as I can run it from a standard external 12V laptop PSU, which I already have from my old setup.

So, yesterday, I ordered the Jetway NC96-510-LF board. The nice thing is: I can re-use the RAM from the old board, as well as the external PSU. I also hope not to have to re-install the OS disk, as this board seems very close to the old Intel board. I hope a reconfigure boot will solve that…
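
For reference, a reconfigure boot is nothing magic; a quick sketch of the two usual ways to trigger it:

# flag the next boot as a reconfigure boot, then reboot:
touch /reconfigure
reboot
# alternatively, press "e" in grub and append -r to the kernel line for a one-off reconfigure boot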

So, now, my only concern is a new chassis… ;-)

Requirements for that are:

  • Mini-ITX capable
  • No PSU
  • Hosting 4 x 3.5″ disks
  • Hosting 1 x 2.5″ disk (my 1.8″ disk is mounted to a 2.5″ adapter)
  • Low noise fan (if at all!)

Any recommendations for such a case?

Matthias
