Category: General

Sun’s VDI in discussion

By , November 12, 2009 19:33

Brian Madden, Claudio Rodrigues, and others are discussing Sun’s VDI.

I left the following comment on CR’s blog, as I am unable to get a login at brianmadden.com; the email with the initial password simply does not reach me…

Taking the overall losses of Sun as an indicator of the fitness of a specific product for a given problem is simply bullshit… ;-) It only tries to discredit one thing by means of another… What is it about Sun’s VDI solution that you are jealous of?

OK, with that off my mind ( ;-) ), let’s state:

Sadly, Sun’s annual report does not break out numbers for VDI, desktops, or Sun Rays.

But, now, back to the question at hand:

There’s more to check when comparing VDI solutions than just the underlying OS, which is only a small, although important, part of the overall solution/offering.

Some of these additional topics include:

1.) Costs of acquisition (what does it cost to build an environment: HW, SW, and installation time)
2.) Operating expenses (and yes, that includes costs for admins, if you need to support a “new” OS platform)
3.) Security (overall, from separation of users to separation of processes to protection against intruders, across the whole chain of devices and software stacks)
4.) Efficiency (where do I get the most power for the buck)
5.) Point of contact in case of problems (one-stop shopping, one-stop service?)

I’m sure you read about http://www.projectvrc.com/. Sadly, that did not include Sun’s VDI. Still, there is something to learn there. The differences between the individual solutions (TS, or Xen, or …) are not the decision-making points, as they are not that big. A main finding is that memory per server is a limiting factor; that’s why, for example, in former times the Sun X4600 M2 was a very attractive system for large VDI environments. And, as a further answer to 4.) above, Sun’s VDI allows you to run either VBox or VMware as the basis for user sessions, which also influences points 1.) and 2.)
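
To put a number on the memory point, here is a back-of-the-envelope sketch; all figures (512 MB per desktop VM, 8 GB reserved for the host, a 64 GB server) are assumptions for illustration, not measurements:

# All numbers assumed: ~512 MB per desktop VM, 8 GB kept for host and hypervisor.
# A 64 GB server then caps out near:
echo $(( (64 - 8) * 1024 / 512 ))   # -> 112 desktop sessions per host

So RAM, not CPU, usually runs out first, which is exactly what made big-memory boxes like the X4600 M2 attractive.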

For point 5.), only MS and Sun can offer a “two-stop shopping” VDI solution: MS by adding a single HW supplier, and Sun by adding MS licenses. All other VDI vendors need a three-stop offer, as VMware owns neither MS licenses nor hardware, and Citrix likewise owns neither HW nor MS licenses. That is a point not to be underestimated! And here Sun is the only one who can offer two-stop shopping for HETEROGENEOUS environments.

Now, let’s look at point 3.), security.

We all know that Solaris is the most advanced and efficient OS on the planet (scaling nearly linearly with the addition of CPUs, way beyond 100 CPUs). This helps immensely when designing large-scale environments, because consolidating onto large systems becomes possible: the OS does not simply manage itself and the underlying resources, but leaves most resources available for the applications. By putting, for example, every single VBox instance into a separate Solaris 10 Zone/Container, you additionally get the benefit of fine-grained resource control AND security, as that environment simply is not able to break into a different Zone/Container. And Zones/Containers are EAL4+ certified… ;-) (afaik).
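
As a minimal sketch of that pattern (zone name, path, and resource caps are invented for illustration), one Zone per VBox instance could be set up like this:

# Hypothetical zone per VBox instance, with memory and CPU caps:
pfexec zonecfg -z vbox-user1 <<EOF
create
set zonepath=/zones/vbox-user1
add capped-memory
set physical=2g
end
add capped-cpu
set ncpus=1
end
commit
EOF
pfexec zoneadm -z vbox-user1 install
pfexec zoneadm -z vbox-user1 boot

The caps give you the fine-grained resource control, and the zone boundary provides the separation between user sessions.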

And, an additional topic for 4.) is putting the VDI images onto ZFS. Cloning gets easy and quick… But that’s another topic…
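
A quick sketch of that (dataset names invented): a golden image is snapshotted once, and each user session gets a nearly instant clone that shares blocks with the original until it diverges:

# Dataset names are invented for illustration:
zfs snapshot pool/vdi/golden@base
zfs clone pool/vdi/golden@base pool/vdi/user1   # near-instant, space-efficient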

So, I would love to see projectvrc results for Sun’s VDI… And a livelier discussion about the pros and cons of VDI solutions overall…

Matthias

Lecture again…

By , November 5, 2009 13:13

Also during this winter semester, Ulrich Gräf and I will be giving a lecture at the University in Darmstadt. This time it’s about

Persistent Storage – Data Structures and Algorithms

The plan, which can still change due to unforeseen events, is:

16.10.09: L1: Intro and sequential Datasets
  Notation of Information (writing)
	Drawings
	Writing (Charcoal, Paint, Clay tablets, Knots, Papyrus, Printing)
	
  Data-Handling (Reading and Writing)
	punched tape
	punched card
	tape
	disk (magnetic, optical, MO)
	Flash

  Dataset
	Structure
		Datasets/Extents
		Types: F, V, FB/VB, FBS/VBS
		Set-oriented processing
	Algorithms
		FCB
			Extents
			Position
		open/close
		read/write
		Buffering through Application
	Catalog
	PDS
	Consistency

23.10.09: L2: Indices
	B-Tree, B*-Tree
	Bitmap-Tables
	Index-Datasets
	Performance-Aspects
	Space-Aspects
	Consistency

30.10.09: L3: FAT (Matthias Pfützner)

06.11.09: L4: Encodings (Matthias Pfützner)
	EBCDIC
	ASCII 6, 7, 8 Bit
	UTF Variants

13.11.09: L5: Simple Databases
	MySQL Tables
		(InnoDB)
	Indices
	Consistency

20.11.09: L6: Datastructures for Databases
	Shared Memory
	Multi-Process vs. Multi-Thread
	Source?
	
27.11.09: L7: Hash-Methods
	Principle
	Overflow
	Perfect Hash
	Minimum Perfect Hash
	Sparse Hash

04.12.09: L8: DBM
	Structure
	File holes - Problems?
	
	VSAM, etc.
		sequential
		ISAM
		hash
		index

11.12.09: L9: UFS Structure
	as Berkeley FFS, NTFS
	ext2, ext3, VxFS differences

18.12.09: L10: Recovery and Consistency (Matthias Pfützner)
	File System Check
	Table Consistency Check
	Backup / Restore

	Log
	COW
	Snapshot
	Checksums
	Self Healing

15.01.10: L11: UFS in the OS
	Datastructures
	Memory Walk
	
22.01.10: L12: ZFS
	Features
	Datastructures
	
29.01.10: L13: ZFS in the OS
	Datastructures
	ZIL
	ARC Cache
	L2ARC
	Memory Walk
	
05.02.10: L14: Oracle
	Features
	Log
	Redo-Log
	Transactions
	Read Transaction

12.02.10: L15: Wrapup	

Hope to see many enthusiastic students…

Matthias

Update (30. November 2009): Slides are available at: http://www.dvs.tu-darmstadt.de/teaching/storage/2009/

Cleanup of blog done…

By , November 2, 2009 19:55

So, now, after approx. 2 hours in total, the move to the new home is done. What kept me busy was the fact that WordPress, or the newer Firefox, seems to distinguish between lower-case and upper-case characters in HTML markup… so a <br> is something different than a <BR>. That’s not standard, but it is what it is…

Matthias

This blog has been moved over…

By , October 31, 2009 18:09

This blog has been moved over from http://blogs.sun.com/pfuetz. I still need to clean up some of the old entries; especially some links seem to be a bit bizarre. So stay with me until I have fixed those. I’ll do that!

Matthias

Founding an OpenSolaris User Group in the Rhein-Main region

By , October 27, 2009 10:20

When:
November 17, 2009, starting at 17:45

Where:
Sun Microsystems
Frankfurt office
Amperestraße 6
63225 Langen

Dear Unix, Linux, Solaris, and OpenSolaris friends,

There are by now quite a few OpenSolaris User Groups in Germany. So we now want to find out whether there are enough interested people for such a group in the greater Rhein-Main region (Frankfurt, Wiesbaden, Mainz, Darmstadt, Offenbach, …) as well.

We therefore invite everyone interested in UNIX/Solaris to a first meeting at the Sun office in Langen.

We already have an agenda for this meeting; if there are further suggestions, we are happy to take them up…

17:45 Start and welcome
18:00 OpenSolaris User Groups, what are they?
* Introduction
* Why?
* Where to?
* Collecting ideas
* Discussion
19:00 How do I build an eco-friendly home NAS server with the help of OpenSolaris? (talk by Matthias Pfützner)
19:30 Room for another talk (volunteers, step forward!)
20:00 Open discussion
21:00 End of the event

To give you an idea of what an OSUG is, here is some brief information:

An OSUG is an informal gathering of people interested in OpenSolaris who meet regularly or irregularly to exchange experiences and to work with OpenSolaris. All of that in a casual atmosphere, possibly over a beer and a snack, and always with a few short but snappy talks.

To set it apart: what is an OSUG not?

An OSUG is NOT a Sun marketing event. It is an event that lives from as many people as possible getting involved.

For more details, see: http://www.opensolaris.org/os/usergroups/

To make the start easier, we want to hold the first meeting, as mentioned above, at the Sun office in Langen. After that, though, we would also like to use other venues.

At the first meeting, we also want to discuss the organizational questions:

* How often & where do we want to meet?
* What are the participants’ suggestions for the OSUG in the Rhein-Main region?
* What should the OSUG in the Rhein-Main region be called?
* Are there volunteers for talks?

* Who would like to set up a XING group for it?
* Who would like to build a dedicated website for it?
* Do we need a mailing list? If so, who will take care of it?

Because the event takes place at the Sun office, we need a registration for security reasons, as you may not enter the building without one. So please reply to:

mailto:Michael.Gottwald@Sun.COM or mailto:Ulrich.Graef@Sun.COM or mailto:Matthias@Pfuetzner.DE

We are looking forward to a large turnout and to many more exciting and entertaining meetings in the future.

Michael Gottwald, Ulrich Gräf, Matthias Pfützner

Update: P.S.: Info is also available at IT-Connection on Xing and at OpenSolaris on Xing, and also in Michael’s blog as well as on Twitter: http://twitter.com/osugrm
P.P.S.: And the Solarium mentions it now, too!

Cloud and Desktop Virtualization

By , October 26, 2009 08:27

Benny’s Blog has an interesting article about virtualization (specifically desktop virtualization) and the cloud.

Benny is not so sure that the cloud will be the future.

I tend to agree. Benny, perhaps we need to shine a bit more light on what the cloud is, and what the roles of the different corners of the industry will be in it.

First, it seems obvious that cloud computing is hyped.

Second, it seems obvious that many of those one meets at cloud conferences, CloudCamps, you-name-it, are highly enthusiastic about the cloud. So it’s not strange to feel a bit out of place in such a setting.

But: first things first. It’s always important to segment the market. Here, with cloud computing, we have: a.) users of the cloud, b.) providers of apps in the cloud, c.) providers of the cloud, d.) providers of tools to build a cloud.

Many, many years ago, it was predicted that the mainframe would die. IBM still lives quite well by building and enhancing that piece of dinosaur technology, so a statement like “in five years’ time, all desktop apps will be replaced by apps from the cloud” is about as correct as the statement that the mainframe is dead.

We also know that putting things into a cloud brings new challenges: 1.) latency, 2.) trust, 3.) security.

You don’t want to have to transmit every single bit via handshake back and forth. Some apps really do require short handshake cycles; they simply are not easy (if at all possible) to put into the cloud. (Example: sending data via a pigeon can still be faster than sending it via a high-speed network; do the math!)
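
Here is that math as a rough back-of-the-envelope (all numbers assumed: a 32 GB memory card, a 40 km trip, a pigeon flying 80 km/h):

# 40 km at 80 km/h = 1800 s of "transfer time" for 32 GB:
echo 'scale=1; 32 * 8 * 1000 / 1800' | bc   # ~142.2 Mbit/s effective bandwidth
# That beats a 100 Mbit/s line on throughput, while the 30-minute latency
# for the first bit makes it useless for anything handshake-heavy.

Which is exactly the point: raw bandwidth is not the problem, round-trip latency is.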

You don’t want to hand over all your information, data, whatever, to somebody you do not trust. Just google the Sidekick disaster that Microsoft had with T-Mobile and Danger.

And you also have to segment the user population: I.) end users and II.) corporate users. They have differing buying patterns, and therefore I also agree that a statement like “in five years’ time, all apps will be coming from the cloud” is simply wrong.

Still, cloud computing offers options, and they will be pursued. That will have an effect on the classical desktop, that’s for sure. But I doubt that this alone will change usage. I think more change will come from the advances in mobile computing, especially on the PDA/mobile phone. You have less screen real estate and a worse keyboard, but many tasks can still be done very well on such a device. Classical things, like pure email web front ends, might die, because most people will read their email directly on the phone.

So, what does that mean for Microsoft and their classical desktop approach? They will still survive, quite nicely, and for way more than five or ten years. Change in corporate infrastructures doesn’t happen overnight… And people are lazy…

The cloud idea itself will help shape the way datacenters are built, and the way services in datacenters are deployed.

On a mainframe, there never was the notion of “I need my own hardware for my services”. Hardware has always been shared, and the trust was there that this could be done securely.

In the Open Systems world, that notion changed to: “I need my own hardware for my jobs.” Hardware was cheap, so that was doable. Nowadays we see that operating cheap hardware costs more than the hardware itself, so that trend is reversing. We see (see also my last blog entry) that systems are way underutilized, and that there is therefore a trend to consolidate workloads onto single systems. With virtualization underneath, that works quite well.

So, cloud computing is more about a change in the management of assets and services, and in the billing for them, than about a change in the tools the individual end user works with.

So, rest assured, your job will still be there in five years’ time… ;-) But you might be targeting different types of customers…

Matthias

The state of the IT industry as I see it… (Part 1)

By , October 14, 2009 04:24

During my summer vacation, and driven by the fact that Oracle is acquiring Sun, I have lately been giving some thought to the state of the IT industry.

Then, last week, some colleagues forwarded a link to a fake interview with Bjarne Stroustrup about the invention of C++ and the drivers behind it. That interview triggered my intent to write down some thoughts and publish them for discussion.

Additionally, the new hype word “Cloud” made me want to add some comments on that as well. Specifically, as my possible new boss, Larry Ellison, calls it “water vapor”.

So, let’s start with some basic facts that I think are important for understanding what I’m trying to explain:

  • Moore’s Law
  • Software needs

The first point, I guess, needs no further comment. The second one does. I did not perform a precise analysis, but it seems obvious (discussion point one!) that the additional demands each software version places on hardware, compared to its previous version, are now outpaced by Moore’s Law. That is to say: hardware gets faster more quickly than software’s needs grow. One simple proof is the fact that most servers in datacenters nowadays are utilized only around or below 15%; another important one is the rapid uptake of virtualization techniques.

As a side effect of this rapid growth in compute power, many companies have gone out of business or are close to it. Think DEC, Data General, Compaq, SGI, Cray, Apollo Computer, and others like them. Now Sun Microsystems will become history as well.

The question behind all this simply is: Why?

In order to start an answer, I will first digress a bit… ;-)

In September 1990, there was a famous 15th-anniversary edition of Byte magazine, titled “Byte Magazine 15th anniversary summit, setting the standards”, which tried to give an outlook on the next 15 years of the computer industry and its development by asking the 63 most influential people of the time to look back and, from that, predict the future. Sadly, Byte was shut down in 1998, and although it continued as a web presence for some years, it is no longer available, having been removed from the web in February 2009. So the link above no longer works, either.

A link to the table of contents of that edition can at least be found here. The cover page is here. And some of the predictions published in that issue can be found here (that page calls it a brief excerpt; sadly, I dumped all my paper copies of Byte into the bin approx. 10 years ago, in the hope of them being online forever. How naive, wrong, and mistaken I have been… ;-( ) (discussion point two) (and, btw: I’d like to get a copy, if anyone can spare his/hers).

In that discussion, many diverse topics were raised and answered by many, but not all, of the 63 people. One of them was Donald Knuth, creator of TeX and author of “The Art of Computer Programming”. He is cited with:

Donald Knuth: …computers are going to double in speed every year until 1995, and then they’re going to run out of ideas.

How wrong he has been… ;-)

And as a proof point for my second topic above, I’d like to quote Brian Kernighan, co-author of the famous book on the programming language C:

What about the software side of the equation? Or are all the changes coming in hardware?

Brian Kernighan: Software, unfortunately, is not nearly as easy to make go better as hardware seems to be. And the software will not get better fast enough, and so you’ll piddle away more and more of [the power] on stuff that doesn’t quite work the way you want it to….

And then there was the question:

What is the biggest obstacle to major new breakthroughs in computing?

In a word, software, not hardware.

Suffice it to say that I consider these predictions from close to 20 years ago a valid proof point for my thesis that software’s demands didn’t grow, and don’t grow, as quickly as hardware grew, and still grows, in speed and capabilities.

After this short detour into the past, via a look at the most important publication of that era, back to the question of why the IT industry is in the state it is in today.

It seems obvious that some (shall I say: many?) miscalculated the difference in development speed between software and hardware. Or they assumed that the overall demand for compute power would keep ramping up and make up for the growth in produced CPU capacity, so that they could still sell as much (in dollars) as before (which would have required selling way more units than before). It did work for quite a while during the gold-rush era of the early 21st century and the dot-com bubble, when many new enterprises were created that used the internet and needed CPUs like crazy. But it also seems obvious from this that hardware is becoming more and more of a commodity, because the average cost per CPU cycle needed to solve a specific problem goes down and down. My favorite statement here, which I often use in virtualization talks, and now in cloud talks, is: “Take your mobile phone out of your pocket, look at it, and remember: NASA had LESS CPU power (in total) to safely bring mankind to the moon and back than what you are looking at right now.”

That sets the stage for open and candid discussions on the state of virtualization, cloud computing, et al. But, again, more on that later.

Matthias

Eco responsible power friendly small home server (part 2)

By , October 5, 2009 04:37

A long time ago, I wrote about my small new home server. You can read all about it at: http://blogs.pfuetzner.de/matthias/?p=299

Meanwhile, some changes have been made to the system. Yes, I needed to buy the 90W PSU (although I now know how many watts the system really uses, I’m curious why that was needed at all; it sometimes crashed with the 60W PSU, and those crashes are gone now that I have the 90W PSU, so I assume it must have had to do with the 60W PSU in general, or with that specific unit, but the dealer refused to exchange it, as it runs. More on wattage later).

I also changed the passive heat sink on the CPU to a ZALMAN as well, safe is safe (see image). And with that, I have now stood the case on its small side, which in the image is at the bottom, so that the heat can exit through the two case openings, which are now at the top. No more fans in the case… Quiet!

More importantly, I also replaced the CF card with a real disk, as that uses only 1.8W under load, and less when idle. I bought a 1.8-inch Toshiba disk (the kind used inside iPods, in my case a TOSHIBA MK2006GA) plus two converters (3.5 -> 2.5 and 2.5 -> 1.8) and placed the disk inside the cabinet (no part numbers here, and no distributor, as I bought them on German eBay; altogether, the disk plus two converters were around 40 Euro). The main reason for this change was that the CF card was really slow, and that 8 GB wasn’t enough to perform Live Upgrade (and now maintain different boot environments); I now have 30 GB in the rpool, for swap, crash dumps, and all the rest.

I also updated the OS… ;-)

I installed OpenSolaris 2009.06 from an image on a USB stick, which went really smoothly, no changes needed. After that, I upgraded to the current dev build, which this weekend was build 124…

One more hardware change: I also added two more 1 TB disks, because some months ago I had a nasty failure of BOTH external power supplies of the WD disks over the same weekend… rendering me without storage… So I decided to mirror across different types of disks and bought two no-name 1 TB external USB disks (which have Hitachi SATA disks inside). WD quickly sent new PSUs, great service, and they admitted that those PSUs were faulty… They had many such failures…
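
As a sketch of that layout (device names invented), each no-name disk gets attached as the second half of a mirror with one WD disk, so no mirror depends on a single PSU model:

# Device names invented; pair each existing WD disk with a no-name disk:
pfexec zpool attach pfuetz c5t0d0 c6t0d0
pfexec zpool attach pfuetz c5t1d0 c6t1d0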

With the new disk setup, I had a different problem: the new disks did not enter power-save mode by themselves, whereas the WD disks did. ZFS has no clue about the power management of disks; it assumes that is handled by the underlying layers. The result here was a DEGRADED zpool, as one disk of the mirror was not available…

The solution is simple: have an entry for every disk in /etc/power.conf so that powerd takes care of it. So here’s my /etc/power.conf:


device-dependency-property removable-media /dev/fb
autopm enable
autoS3 default
system-threshold 900s
cpu-threshold 1s
# Auto-Shutdown Idle(min) Start/Finish(hh:mm) Behavior
autoshutdown 30 9:00 9:00 noshutdown
device-thresholds /pci@0,0/pci8086,464c@1d,7/storage@1/disk@0,0 900s
device-thresholds /pci@0,0/pci8086,464c@1d,7/storage@2/disk@0,0 900s
device-thresholds /pci@0,0/pci8086,464c@1d,7/storage@3/disk@0,0 900s
device-thresholds /pci@0,0/pci8086,464c@1d,7/storage@4/disk@0,0 900s
device-thresholds /pci@0,0/pci-ide@1f,1/ide@0/cmdk@0,0 900s
cpupm enable
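
To have powerd pick up changes to /etc/power.conf without a reboot, you can re-read the file as root:

pfexec pmconfig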

Then there was the problem that the WD disks did not present distinct devid strings (a problem a colleague of mine brought to my attention); the string was the same for all WD disks. You can check that by running (as root):

bash-3.2# echo '::spa -c' | mdb -k

and comparing the lines that contain devid. In my case that was:

devid='id1,sd@TWD______10EAVS_External_/a'
devid='id1,sd@f004fa5244aa68827000dc26e0004/a'
devid='id1,sd@TWD______10EAVS_External_/a'
devid='id1,sd@f004fa5244aa76c85000a3a89000a/a'

So I filed a bug, and got a pre-fix that will come to OpenSolaris and Solaris soon. The fix changes the way the scsa2usb driver handles disks that do not correctly report page 80 or page 83 SCSI inquiries (which many USB disks don’t do right!). So, now, with the new scsa2usb driver, the output looks like:

devid='id1,sd@f000665644ab34259000aa2b30004/a'
devid='id1,sd@f004fa5244aa68827000dc26e0004/a'
devid='id1,sd@f004fa52449bea5b4000f1f610000/a'
devid='id1,sd@f004fa5244aa76c85000a3a89000a/a'

That makes four uniquely identifiable USB disks. Which is a good thing, as the ZFS stack can now, in case additional info gets lost, still uniquely address the disks, and not mistake one for the other! That could corrupt the data, if all other info were lost. Rest assured, ZFS does not only check the devid string; the devid string is the last resort if all else fails…
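
An alternative way to eyeball the per-disk device IDs, assuming your build’s iostat prints the Device Id line in its extended error statistics:

iostat -En | grep -i 'device id'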

Until the final fix is out (the bug is in fix-delivered state!), there’s a workaround. Quoted from the workaround section of bug 6881590:

For the issue of devid not unique due to page 83 data, it can be a workaround
by setting the similar lines as below in sd.conf, which enforces sd to ignore
the vpd page data and fabricate a devid.
for x86:
sd-config-list="VendorID+ProductID","sd-ver1-tst-data";sd-ver1-tst-data= 1,0x00004,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0;
for sparc:
sd-config-list="VendorID+ProductID","sd-ver1-tst-data";sd-ver1-tst-data= 1,0x00008,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0;

For the rare case that I need to reinstall my system, I keep a log of all steps performed, so that if I had to install from scratch, I would end up at the same stage. Here’s that script:


# Install OpenSolaris 2009.06 from media (CD, USB stick, whatever)
# After install, perform:
pfexec su -
svcadm disable nwam
svcadm enable network/physical:default
svcadm disable sendmail
vi /etc/hosts
vi /etc/defaultrouter
vi /etc/hostname.rge0
vi /etc/ethers
vi /etc/power.conf
vi /etc/inet/ntp.conf
svcadm enable ntp
crontab -e # (to add: 0 3 * * 3 /usr/sbin/zpool scrub rpool)
pkg install SUNWsmbfs SUNWsmbfskr SUNWsmbs SUNWsmbskr
svcadm enable smb/server
vi /etc/nsswitch.conf
hosts: files dns mdns
ipnodes: files dns mdns
smbadm join -w PFUETZNER
vi /etc/pam.conf
# Manually added for kernel CIFS server in workgroup mode
other password required pam_smb_passwd.so.1 nowarn
passwd pfuetz
passwd root
svccfg -s idmap setprop config/default_domain = astring: PFUETZNER
svccfg -s idmap setprop config/ds_name_mapping_enabled=boolean: false
svcadm refresh idmap
# install new scsa2usb driver
******************************************************************************
# Then, for the upgrade to work, I prefer to do it by hand, so:
# ONCE:
pkg set-publisher -O http://pkg.opensolaris.org/dev opensolaris.org
pkg set-property flush-content-cache-on-success true
# Every Update:
pkg refresh --full
pkg install SUNWipkg
pkg image-update --be-name buildXXX
beadm activate buildXXX
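
A quick sanity check after the image-update, before rebooting (buildXXX as above):

beadm list   # the new BE should show "R" in the Active column (active on reboot)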

And for the setup of the CIFS/NFS shares on ZFS, here’s what needs to be done (assuming you have a zpool named “pfuetz” ;-) ):

zfs create -o casesensitivity=mixed pfuetz/smb-share
zfs set sharenfs=root=@192.168.2 pfuetz/smb-share
zfs set sharesmb=on pfuetz/smb-share
zfs set sharesmb=name=smb-share pfuetz/smb-share
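
To verify that both shares are actually exported, something like this should do:

sharemgr show -vp   # lists the active NFS and SMB shares and their options
# and from a Solaris client (server name invented):
# mount -F smbfs //server/smb-share /mnt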

Power usage:

I’ve been testing the power consumption of the system alone, with no external USB disks: it uses 39 W. With the four external 1 TB disks, power usage goes to 72 W when the disks are awake but idle. If the disks are under load, consumption goes up to 80 W. If the disks fall asleep, it goes down to 63 W.

Lessons learned so far:

There’s one additional lesson: the “zpool scrub” of the 2 TB of mirrored disks takes 16 hours. So it might be a good idea, just for the power savings, to use SATA disks internal to the server rather than external ones. That would also solve the devid problem, and the power management problem, and even save money, as the scrub would finish in far less time.
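
For reference, kicking off such a scrub by hand and watching it (pool name as above):

pfexec zpool scrub pfuetz
zpool status pfuetz   # shows scrub progress and what remains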

So here, finally, is the image of the “actual interior”:

I hope this additional info helps you in deciding what to build as a small home server. You might also check Jan’s or Constantin’s blogs; they have some insights, too!

Matthias

CloudCamp Frankfurt

By , September 29, 2009 05:03

Yesterday I visited CloudCamp Frankfurt. You can follow what many of the attendees were saying via the #cloudcampfra Twitter hashtag. Some photos can be found at Flickr. The list of participants is at: http://cloudcamp-frankfurt-09-rss.eventbrite.com/.

So, what were my impressions?

First: I like the Museum of Moving Images (Filmmuseum) in Frankfurt. It’s a really nice place and has a very attractive cinema with a really good program.

So, this location was selected to host CloudCampFRA. It’s obviously not a congress center, but given the relatively small number of participants, the selection was a good one. A small caveat: the main presentations should have been given in the downstairs cinema. That would have solved the heat problem and the L-shaped audience…

The lightning talks were nothing new for me, as I had already given a lot of thought to clouds. Some were entertaining; some simply didn’t contain anything new. For more details, check Philipp Strube’s feedback, which sadly is in German only.

So, what’s the overall feedback?

I missed discussions and lightning talks on the “trust” aspect, which currently (IMHO) prevents a widespread adoption of clouds.

Security is mostly seen as just a technical problem, not also as a trust problem.

Google withdrew its participation on very short notice; I would have loved to discuss Google Wave and its influence on the future shape of the cloud.

The discussions in the workshops, and the networking at the start and during the lunch and snack breaks, were very good!

Matthias

P.S.: Thanks to Dirk Schneider from Salesforce.com, I learned that the trust problem at least seems solvable; they don’t get those discussions any more. That’s a good sign!

P.P.S.: Next #cloudcampfra will be in approx 6 months.

P.P.P.S.: Thanks for organizing, it was worth spending the time!

UPDATE: P.P.P.P.S.: Mark Masterson, the organizer, wrote up his feedback here.
P.P.P.P.P.S.: Slides are here.

Petabytes and the Backblaze Pod… ;-(

By , September 3, 2009 04:50

A new company named Backblaze announced the petabyte storage pod:

http://blog.backblaze.com/2009/09/01/petabytes-on-a-budget-how-to-build-cheap-cloud-storage/

I’m curious as to WHY any data-security-conscious person would want to hand over petabytes of storage to a NON-ZFS-based storage solution.

But there’s hope: you could run the solution that Nexenta uses, straight from the OpenSolaris download page, and use that instead. I did not try it, but to me it seems feasible.
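
As a hedged sketch (device names invented; not Backblaze’s actual layout), one raidz2 group of such a pod’s disks under OpenSolaris would give double parity plus end-to-end checksums:

# Device names invented; one 9-disk raidz2 vdev as an example building block:
pfexec zpool create pod raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 \
    c1t5d0 c1t6d0 c1t7d0 c1t8d0

The checksums are what catch the silent corruption mentioned below.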

I personally would not trust Linux with that much storage! Silent data corruption on such a box is a disaster waiting to happen!

Matthias
