This blog has been moved over…

By Matthias Pfützner, October 31, 2009 18:09

This blog has been moved over from http://blogs.sun.com/pfuetz. I still need to clean up some of the old entries; in particular, some links look a bit odd. So bear with me until I have fixed those. I will!

Matthias

Founding an OpenSolaris User Group in the Rhine-Main region

By Matthias Pfützner, October 27, 2009 10:20

When:
November 17, 2009, from 17:45

Where:
Sun Microsystems
Frankfurt office
Amperestraße 6
63225 Langen

Dear Unix, Linux, Solaris, and OpenSolaris friends,

There are now several OpenSolaris user groups in Germany. We would therefore like to find out whether there are enough interested people for such a group in the greater Rhine-Main region (Frankfurt, Wiesbaden, Mainz, Darmstadt, Offenbach, …) as well.

We are therefore inviting everyone interested in UNIX/Solaris to a first meeting at the Sun office in Langen.

We already have an agenda for this meeting; if there are further suggestions, we will gladly include them…

17:45 Start and welcome
18:00 OpenSolaris user groups: what are they?
* Introductions
* Why?
* Where to?
* Collecting ideas
* Discussion
19:00 How do I build an environmentally friendly home NAS server with OpenSolaris? (talk by Matthias Pfützner)
19:30 Slot for another talk (volunteers, step forward!)
20:00 Open discussion
21:00 End of the event

To give you some idea up front of what an OSUG is, here is a short description:

An OSUG is an informal gathering of people interested in OpenSolaris who meet more or less regularly to exchange experiences and to work with OpenSolaris. All of this happens in a relaxed atmosphere, possibly over a beer and a snack, and always with a few short but snappy talks.

To be clear: what is an OSUG not?

An OSUG is NOT a Sun marketing event. It is an event that thrives on as many people as possible getting involved.

For more details, see: http://www.opensolaris.org/os/usergroups/

To make the start easier, we want to hold the first meeting, as mentioned above, at the Sun office in Langen. After that, however, we would also like to use other venues.

At the first meeting we also want to discuss the organizational questions:

* How often & where do we want to meet?
* What suggestions do the participants have for the OSUG in the Rhine-Main region?
* What should the OSUG in the Rhine-Main region be called?
* Are there volunteers for talks?

* Who would like to set up a XING group for it?
* Who would like to set up a dedicated website for it?
* Do we need a mailing list? If so, who will take care of it?

Because the event takes place at the Sun office, we need you to register for security reasons; without registration you are not allowed to enter the building. Please therefore send your reply to:

mailto:Michael.Gottwald@Sun.COM or mailto:Ulrich.Graef@Sun.COM or mailto:Matthias@Pfuetzner.DE

We look forward to a large turnout and to many more exciting and entertaining meetings in the future.

Michael Gottwald, Ulrich Gräf, Matthias Pfützner

Update: PS: Information is also available from IT-Connection on Xing and OpenSolaris on Xing, as well as in Michael’s blog and on Twitter: http://twitter.com/osugrm
P.P.S.: And the Solarium now mentions it, too!

Cloud and Desktop Virtualization

By Matthias Pfützner, October 26, 2009 08:27

Benny’s Blog has an interesting article about virtualization (specifically desktop virtualization) and the cloud.

Benny is not so sure that the cloud will be the future.

I tend to agree. Benny, perhaps we need to shed a bit more light on what the cloud is and what roles the different corners of the industry will play in it.

First, it seems obvious that there is a lot of hype around cloud computing.

Second, it seems obvious that many of the people one meets at cloud conferences, CloudCamps, you name it, are highly enthusiastic about the cloud. So it is not surprising to feel a bit out of place in such a setting.

But first things first. It is always important to segment the market. With cloud computing we have: a) users of the cloud, b) providers of apps in the cloud, c) providers of the cloud, and d) providers of tools to build a cloud.

Many, many years ago it was predicted that the mainframe would die. IBM still makes a good living building and enhancing that piece of dinosaur technology, so a statement like “in five years’ time, all desktop apps will be replaced by apps from the cloud” is about as correct as the statement that the mainframe is dead.

We also know that putting things into a cloud brings new challenges: 1) latency, 2) trust, and 3) security.

You don’t want to have to transmit every single bit via handshakes back and forth, and some apps really do require such short handshake cycles. They are simply not easy (if possible at all) to put into the cloud. (Example: sending data via a pigeon can still be faster than sending it via a high-speed network; do the math!)
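
Just to illustrate that, here is a rough back-of-the-envelope comparison; all numbers are illustrative assumptions, not measurements:

# Throughput vs. latency, with made-up but plausible numbers:
awk 'BEGIN {
    card_mb   = 64 * 1024     # a 64 GB memory card strapped to the pigeon
    flight_s  = 30 * 60       # a 30 minute flight
    link_mbit = 100           # a 100 Mbit/s network link
    printf("pigeon : ~%.0f MB/s average throughput, latency ~30 minutes\n", card_mb / flight_s)
    printf("network: ~%.1f MB/s throughput, latency in milliseconds\n", link_mbit / 8)
}'

The pigeon wins on raw throughput, but anything that needs a short request/response cycle is stuck with a 30-minute round trip, which is exactly why such apps are hard to move into a remote cloud.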

You don’t want to hand over all your information and data to somebody you do not trust. Just google the Sidekick disaster Microsoft had with T-Mobile and Danger.

And you also have to segment the user population: I) end users and II) corporate users. They have different buying patterns as well, and therefore I also agree that a statement like “in five years’ time, all apps will be coming from the cloud” is simply wrong.

Still, cloud computing offers options, and they will be pursued. That will have an effect on the classical desktop, that’s for sure. But I doubt that this alone will change how people use their machines. I think more change will come from the advances in mobile computing, especially on PDAs and mobile phones. You have less screen real estate and a worse keyboard, but many tasks can still be done very well on such a device. Classical things like pure web front ends for email might die, because most people will read their email directly on the phone.

So what does that mean for Microsoft and their classical desktop approach? They will survive quite nicely, and for way more than five or ten years, because changes in corporate infrastructures don’t happen overnight… And people are lazy…

The cloud idea itself will help shape the way datacenters are built and the way services in datacenters are deployed.

On a mainframe, there never was the notion of “I need my own hardware for my services”. Mainframes have always been shared, and the trust has been that this can be done securely.

In the open-systems world, that notion changed to: “I need my own hardware for my jobs.” Hardware was cheap, so that was doable. Nowadays we see that operating cheap hardware costs more than the hardware itself, so that trend is changing. We see (see also my last blog entry) that systems are heavily underutilized, and that there is therefore a trend to consolidate workloads onto single systems. With virtualization underneath, that works quite well.

So cloud computing is more about a change in how assets and services are managed and billed than about a change in the tools the individual end user works with.

So rest assured, your job will still be there in five years’ time… ;-) But you might be targeting different types of customers…

Matthias

The state of the IT industry as I see it… (Part 1)

By Matthias Pfützner, October 14, 2009 04:24

During my summer vacation, and driven by the fact that Oracle is acquiring Sun, I have lately been thinking about the state of the IT industry.

Then, last week, some colleagues forwarded a link to a fake interview with Bjarne Stroustrup about the invention of C++ and the drivers behind it. That interview triggered my intent to write down some thoughts and publish them for discussion.

Additionally, the new hype word “cloud” made me want to add some comments on that as well, specifically since my possible new boss, Larry Ellison, has called it “water vapor”.

So, let’s start with some basic facts that I think are important for understanding what I’m trying to explain:

  • Moore’s Law
  • Software needs

The first point, I guess, needs no further comment; the second one does. I have not performed a precise analysis, but it seems obvious (discussion point one!) that the additional resource needs of each new software version are now outpaced by Moore’s Law. That is to say: hardware gets faster more quickly than software needs grow. One simple indication is the fact that most servers in datacenters nowadays are utilized at only around or below 15%; another important one is the rapid uptake of virtualization techniques.

As a side effect of this rapid growth in compute power, many companies have gone out of business or are close to it. Think DEC, Data General, Compaq, SGI, Cray, Apollo Computer, and others like them. Now Sun Microsystems will also become history.

The question behind all this is simply: why?

To start an answer, I will first digress a bit… ;-)

In September 1990, Byte magazine published a famous 15th-anniversary edition titled “Byte Magazine 15th anniversary summit, setting the standards”, which tried to give an outlook on the next 15 years of the computer industry and its development by asking the 63 most influential people of the time to look back and, from that, predict the future. Sadly, Byte was shut down in 1998, and although it continued as a web presence for some years, it is no longer available and was removed from the web in February 2009. So the link above no longer works either.

At least a link to the table of contents of that edition can be found here. The cover page is here. And some of the predictions published in that issue can be found here (that page calls it a brief excerpt; sadly, I dumped all my paper copies of Byte into the bin roughly 10 years ago, in the hope that it would stay online forever. How naive, wrong, and mistaken I have been… ;-( ) (discussion point two) (and, by the way: I’d like to get a copy, if anyone can spare theirs).

In that discussion, many diverse topics were raised, and many, but not all, of the 63 people answered. One of them was Donald Knuth, author of The Art of Computer Programming and creator of TeX. He is quoted as saying:

Donald Knuth: …computers are going to double in speed every year until 1995, and then they’re going to run out of ideas.

How wrong he has been… ;-)

And as a proof point for my second topic above, I’d like to quote Brian Kernighan, co-author of the famous book The C Programming Language:

What about the software side of the equation? Or are all the changes coming in hardware?

Brian Kernighan: Software, unfortunately, is not nearly as easy to make go better as hardware seems to be. And the software will not get better fast enough, and so you’ll piddle away more and more of [the power] on stuff that doesn’t quite work the way you want it to….

And then there was the question:

What is the biggest obstacle to major new breakthroughs in computing?

In a word, software, not hardware.

Suffice it to say that I consider these predictions from close to 20 years ago a valid proof point for my thesis that software demand did not and does not grow as quickly as hardware grew and still grows in speed and capabilities.

After this short excursion into the past, via a look at the most important publication of that era, back to the question of why the IT industry is in the state it is in today.

It seems obvious that some (shall I say: many?) miscalculated the difference in development speed between software and hardware, or assumed that the overall demand for compute power would keep ramping up and absorb the growth in produced CPU capacity, so that they could keep selling as much (in dollars) as before (which would have required selling far more units than before). That did work for quite a while during the gold-rush era of the early 21st century and the dot-com bubble, when many new enterprises were created that used the internet and needed CPUs like crazy. But it also seems obvious that hardware is becoming more and more of a commodity, because the average cost per CPU cycle needed to solve a specific problem keeps going down. My favorite statement here, which I often use in virtualization talks and now in cloud talks, is: “Take your mobile phone out of your pocket, look at it, and remember: NASA had LESS CPU power (in total) to safely bring mankind to the moon and back than what you are looking at right now.”

That sets the stage for open and candid discussions on the state of virtualization, cloud computing, et al. But, again, more on that later.

Matthias

Eco responsible power friendly small home server (part 2)

By Matthias Pfützner, October 5, 2009 04:37

A long time ago I wrote about my small new home server. You can read all about it at: http://blogs.pfuetzner.de/matthias/?p=299

Meanwhile, some changes have been made to the system. Yes, I did need to buy the 90 W PSU. (Now that I know how many watts the system really draws, I’m curious why that was necessary, but the system sometimes crashed with the 60 W PSU, and those crashes are gone now that I have the 90 W PSU, so I assume it had to do with the 60 W PSU in general, or with that specific unit; the dealer refused to exchange it, since it works. More on wattage later.)

I also changed the passive heat sink on the CPU to a ZALMAN as well, better safe than sorry (see image). With that, I now stand the case on its small side, which in the image is at the bottom, so that the heat can exit through the two case openings, which are now at the top. No fans in the case any more… Quiet!

More importantly, I also replaced the CF card with a real disk, which uses only 1.8 W under load and less when idle. I bought a 1.8-inch Toshiba disk (the kind used inside iPods, in my case a TOSHIBA MK2006GA) plus two converters (3.5 -> 2.5 and 2.5 -> 1.8) and placed the disk inside the cabinet. (No part numbers and no distributor here, as I bought them on German eBay; the disk plus the two converters came to around 40 euros.) The main reason for this change is that the CF card was really slow, and that 8 GB wasn’t enough to perform Live Upgrade (and now to maintain different boot environments). I now have 30 GB in the rpool, for swap, crash dumps, and all the rest.

I also updated the OS… ;-)

I installed OpenSolaris 2009.06 from an image on a USB stick; that went really smoothly, no changes needed. After that, I upgraded to the current dev build, which this weekend was build 124…

One more hardware change: I added two more 1 TB disks, because some months ago I had a nasty failure of BOTH external power supplies of the WD disks over the same weekend… leaving me without storage… So I decided to mirror across different types of disks and bought two no-name 1 TB external USB disks (which have Hitachi SATA disks inside). WD quickly sent new PSUs, great service, and they admitted that those PSUs were faulty… They had seen many such failures…

With the new disk setup I had a different problem: the new disks did not enter power-save mode by themselves, whereas the WD disks did. ZFS has no clue about the power management of the disks; it assumes that this is handled by the underlying layers. The result was a DEGRADED zpool, as one disk of the mirror was not available…

The solution is simple: have an entry for every disk in /etc/power.conf so that powerd takes care of it. So here’s my /etc/power.conf:


device-dependency-property removable-media /dev/fb
autopm enable
autoS3 default
system-threshold 900s
cpu-threshold 1s
# Auto-Shutdown Idle(min) Start/Finish(hh:mm) Behavior
autoshutdown 30 9:00 9:00 noshutdown
device-thresholds /pci@0,0/pci8086,464c@1d,7/storage@1/disk@0,0 900s
device-thresholds /pci@0,0/pci8086,464c@1d,7/storage@2/disk@0,0 900s
device-thresholds /pci@0,0/pci8086,464c@1d,7/storage@3/disk@0,0 900s
device-thresholds /pci@0,0/pci8086,464c@1d,7/storage@4/disk@0,0 900s
device-thresholds /pci@0,0/pci-ide@1f,1/ide@0/cmdk@0,0 900s
cpupm enable
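
The long device-thresholds paths are the physical device paths of the four USB disks and the internal disk. If you need to find those paths on your own system, and to make powerd pick up the edited file, something like the following should work (a sketch; the c1t0d0 name is just an example, and I assume the standard pmconfig tool here):

# The /dev/dsk entries are symlinks into the /devices tree; the link target
# (minus the /devices prefix and the trailing slice suffix such as ":a") is
# the physical path to use in the device-thresholds lines:
ls -l /dev/dsk/c1t0d0s0
# After editing /etc/power.conf, have the power framework re-read it:
pfexec pmconfig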

Then there was the problem that the WD disks did not present distinct devid strings (a colleague of mine brought that problem to my attention); the string was the same for all WD disks. You can check that by running (as root):

bash-3.2# echo '::spa -c' | mdb -k

and comparing the lines that contain devid. In my case these were:

devid='id1,sd@TWD______10EAVS_External_/a'
devid='id1,sd@f004fa5244aa68827000dc26e0004/a'
devid='id1,sd@TWD______10EAVS_External_/a'
devid='id1,sd@f004fa5244aa76c85000a3a89000a/a'

So I opened a bug and got a pre-release fix that will come to OpenSolaris and Solaris soon. The fix changes the way the scsa2usb driver handles disks that do not correctly report page 80 or page 83 SCSI inquiries (which many USB disks get wrong!). With the new scsa2usb driver, the output now looks like:

devid='id1,sd@f000665644ab34259000aa2b30004/a'
devid='id1,sd@f004fa5244aa68827000dc26e0004/a'
devid='id1,sd@f004fa52449bea5b4000f1f610000/a'
devid='id1,sd@f004fa5244aa76c85000a3a89000a/a'

That makes four uniquely identifiable USB disks. Which is a good thing: should other identifying information get lost, the ZFS stack can still uniquely address the disks and not mistake one for the other, which could otherwise corrupt the data. Rest assured, ZFS does not rely on the devid string alone; the devid string is the last resort if all else fails…

Until the final fix is out (the bug is in fix-delivered state!), there is a workaround. Quoted from the workaround section of bug 6881590:

For the issue of devid not unique due to page 83 data, it can be a workaround
by setting the similar lines as below in sd.conf, which enforces sd to ignore
the vpd page data and fabricate a devid.
for x86:
sd-config-list="VendorID+ProductID","sd-ver1-tst-data";sd-ver1-tst-data= 1,0x00004,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0;
for sparc:
sd-config-list="VendorID+ProductID","sd-ver1-tst-data";sd-ver1-tst-data= 1,0x00008,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0;
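
To make that a bit more concrete: the VendorID+ProductID placeholder has to be replaced with the exact, space-padded inquiry strings of the affected disks, which can be read with iostat -En. The following is only a sketch; the vendor/product strings are an assumption derived from the devid shown above, so verify them before touching /kernel/drv/sd.conf:

# Read the Vendor/Product inquiry strings of the attached disks:
iostat -En | grep -i 'Vendor:'
# Then an x86 entry in /kernel/drv/sd.conf might look like this
# (vendor padded to 8 characters; strings below are an assumption):
sd-config-list="WD      10EAVS External","sd-ver1-tst-data";
sd-ver1-tst-data= 1,0x00004,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0;
# A reboot is the safe way to have sd re-read its configuration.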

For the rare case that I need to reinstall my system, I keep a log of all steps performed, so that if I had to install from scratch I would end up at the same state. Here’s that script:


# Install OpenSolaris 2009.06 from media (CD, USB stick, whatever)
# After install, perform:
pfexec su -
svcadm disable nwam
svcadm enable network/physical:default
svcadm disable sendmail
vi /etc/hosts
vi /etc/defaultrouter
vi /etc/hostname.rge0
vi /etc/ethers
vi /etc/power.conf
vi /etc/inet/ntp.conf
svcadm enable ntp
crontab -e # (to add: 0 3 * * 3 /usr/sbin/zpool scrub rpool)
pkg install SUNWsmba SUNWsmbfskr SUNWsmbs SUNWsmbskr
svcadm enable smb/server
vi /etc/nsswitch.conf
hosts: files dns mdns
ipnodes: files dns mdns
smbadm join -w PFUETZNER
vi /etc/pam.conf
# Manually added for kernel CIFS server in workgroup mode
other password required pam_smb_passwd.so.1 nowarn
passwd pfuetz
passwd root
svccfg -s idmap setprop config/default_domain = astring: PFUETZNER
svccfg -s idmap setprop config/ds_name_mapping_enabled=boolean: false
svcadm refresh idmap
# install new scsa2usb driver
******************************************************************************
# Then, for the upgrade to work, I prefer to do it by hand, so:
# ONCE:
pkg set-publisher -O http://pkg.opensolaris.org/dev opensolaris.org
pkg set-property flush-content-cache-on-success true
# Every Update:
pkg refresh --full
pkg install SUNWipkg
pkg image-update --be-name buildXXX
beadm activate buildXXX
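
After the image-update I like to double-check that the new boot environment is really the one that will come up before rebooting; a minimal sketch, with buildXXX as used in the script above:

# The new BE should be listed and flagged "R" (active on reboot):
beadm list
# Then reboot into it:
pfexec init 6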

And for setting up the CIFS/NFS shares on ZFS, here’s what needs to be done (assuming you have a zpool named “pfuetz” ;-) ):

zfs create -o casesensitivity=mixed pfuetz/smb-share
zfs set sharenfs=root=@192.168.2 pfuetz/smb-share
zfs set sharesmb=on pfuetz/smb-share
zfs set sharesmb=name=smb-share pfuetz/smb-share
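
A quick way to verify the result afterwards; a small sketch using the pool and share names from above and a placeholder host name "server":

# Check the share properties on the dataset:
zfs get sharenfs,sharesmb pfuetz/smb-share
# Check that the CIFS server has joined the workgroup:
smbadm list
# From another (Open)Solaris box the CIFS share can then be mounted, e.g.:
pfexec mount -F smbfs //server/smb-share /mnt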

Power usage:

I measured the power consumption of the system alone, with no external USB disks: it uses 39 W. With the four external 1 TB disks, the power usage goes up to 72 W when the disks are awake but idle. Under load, it rises to 80 W. When the disks fall asleep, it drops to 63 W.
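
To get a feeling for what those numbers mean over a year, here is a quick calculation; the electricity price of 0.20 EUR per kWh is an assumption, the wattages are the measured ones from above:

awk 'BEGIN {
    kwh_sleep = 63 / 1000 * 24 * 365;  kwh_load = 80 / 1000 * 24 * 365
    printf("disks asleep: ~%.0f kWh/year (~%.0f EUR at 0.20 EUR/kWh)\n", kwh_sleep, kwh_sleep * 0.20)
    printf("under load  : ~%.0f kWh/year (~%.0f EUR at 0.20 EUR/kWh)\n", kwh_load, kwh_load * 0.20)
}'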

Lessons learned so far:

There is one additional lesson: the “zpool scrub” for the 2 TB of mirrored disks takes 16 hours. So, if only for the power savings, it might be a good idea to use SATA disks internal to the server rather than external ones. That would also solve the devid problem, and the power management problem, and would even save money, as the scrub would finish in far less time.
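
The 16 hours are plausible if you do the math; a rough estimate, treating the pool as roughly 2 TB of mirrored data to be read (the figures are approximations):

# ~2 TB read in 16 hours is roughly 35 MB/s in aggregate, i.e. the scrub is
# most likely limited by the shared USB 2.0 path rather than by the disks:
awk 'BEGIN { printf("~%.0f MB/s aggregate scrub rate\n", 2 * 1000 * 1000 / (16 * 3600)) }'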

So here, finally, is the image of the “actual interior”:

I hope this additional information helps you decide what to build as a small home server. You might also check Jan’s or Constantin’s blogs; they have some insights as well!

Matthias
