Thursday, September 21, 2006

Home fileserver on Solaris Express/ZFS

(Mirroring this for posterity in case Svens ever forgets to pay his yahoo hosting bill :-) )

I. Hardware:
Basically I bought everything at Fry's. Going online, a lot of this stuff might be even cheaper, but probably not by a lot. With the exception of the RAID controllers, all of this stuff was available at every store; the RAID controller I finally found at the Brokaw Fry's. The only thing I re-used was an old 20x IDE CD-ROM. In retrospect, since I ended up re-installing three times, it would have been worth $30 for a fast CD-ROM to get those 2 hours of my life back.

Case: Aluminus Ultra: $130
This is a little flashy, what with the side window and high-gloss finish and everything, but it did have the advantage of a neat internal 3.5-inch hard drive mounting rail system. It also has five 5.25-inch bays, which is important for the other disks plus the CD-ROM. Since it uses mostly 120mm fans it's relatively quiet -- still a lot noisier than my Shuttle, but quieter than my old tower.

SATA Cage: "random fry's brand": $120
This thingie lets you mount 4 SATA drives in a neat hot-swap cage that takes up three 5.25-inch bays.

2 x SATA Controller: "SIIG 4-port RAID": $80/ea
These are Silicon Image 3114-chipset RAID cards. They come with all of the power and data cables you'll need.

8 x SATA disks: "Maxtor 250G Maxline Plus II" $100/ea
These are plain SATA-I disks, but the controller only does SATA-I anyway.

Motherboard: "Asus K8N socket 754" $47 - open stock
A piece-of-crap open-box motherboard. I got it because it has an AGP slot, 3-4 PCI slots, and built-in gigabit Ethernet.

CPU: "AMD Sempron 2800+ 64bit" $41
Retail box -- a slower CPU than I expected, but it's good enough to run the RAID calcs, Samba, httpd, etc., and it's 64-bit :-)

RAM: "Patriot 1G value Ram" $90
With a $15 rebate that I'll probably forget about (edit 02/10/2007 -- I forgot).

Video: "No-Name GeForce MX400" $50
I bought it because it was cheap and fanless; there's enough crap spinning in there already.

I did not buy a floppy drive
If you do this build, buy a floppy drive, or at least make sure you have one handy that you can use temporarily.

The total damage was around $1500 w/ tax and everything. The total space after RAID is 1.8T, less than a dollar a gig.

II. Trials and Tribulations:
Everything snapped into this case pretty well. I spent extra time making sure that the SATA and power cables were tied down nicely to promote good airflow.

I started installing the latest build of OpenSolaris, Nevada Build 31, because I wanted to play with ZFS and I wanted the most mature SATA/network driver support available. It's 4 CDs, not counting the language or "add-ons" (/opt/sfw) pack. The main problem I had was that Solaris 10 does not recognize these RAID cards out of the box, and Solaris still does not recognize any SATA hardware that is acting as a RAID card. The way to "fix" these cards is to remove their RAID functionality by loading a straight IDE BIOS. Download the IDE BIOS here. Also download the "BIOS Updater Utilities".

You'll need a DOS boot disk to copy these files to. If you do not have one, there are instructions in the .ZIP file that tell you where to download a FreeDOS boot disk image, to which you can then copy the BIOS updater and the IDE BIOS .bin file. Getting a floppy drive hooked up is the hard part. Once you're booted to DOS, the command I used was:
A:\> UPDFLASH B5304.BIN -v 

The command will carp about various things and then go about its business updating your flash BIOS. For this specific RAID card, I think I had to tell it that the flash memory was compatible with STT 39??010 1M flash. The command updated both of the cards in my system at the same time; it did not require me to run it twice or use special command-line flags.

Thus updated, you can reboot your system. You may notice that during POST the cards are now called "Silicon Image 3114 SATALink" instead of "3114 SATARaid" and no longer offer the <ctl-s> prompt to enter their BIOS. You can now install Solaris from CD as normal.

I do not care for installing Solaris off of CD. It gets to the "Using RPC for sysid configs" (or something) step of the boot and then just hangs there for 5+ minutes. There's really no way to tell whether your machine is horked, or your CD drive froze up, or what; it just sits there not doing anything.
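When it finally comes back and you have a shell, it's easy to confirm that the ata driver claimed all eight disks. A quick check (device names here are from my controller layout):

# echo | format

The AVAILABLE DISK SELECTIONS list should show all 8 disks, c1d0 through c4d1.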

The installer could now see all 8 of my disks. I chose to put a small ~4G partition on c1d0s0 and 512M for swap on c1d0s1. I left slice 6 empty and a 1-cylinder slice at s7 for the metadb. Once the operating system was installed, I mirrored onto the first disk of the second controller (disk 4) by giving it an identical partitioning scheme to disk 0 and using the standard Solaris meta-commands:
# metadb -fa c1d0s7 c3d0s7    # metadb replicas on both disks
# metainit -f d1 1 1 c1d0s0   # root submirror on disk 0
# metainit -f d2 1 1 c3d0s0   # root submirror on disk 4
# metainit -f d6 1 1 c1d0s1   # swap submirror on disk 0
# metainit -f d7 1 1 c3d0s1   # swap submirror on disk 4
# metainit d0 -m d1           # one-way mirror for root
# metainit d5 -m d6           # one-way mirror for swap
# metaroot d0                 # points / at d0 in /etc/vfstab and /etc/system
# (edit vfstab to use d5 for swap)
# lockfs -fa
# reboot
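Two asides on the above. First, "identical partitioning scheme" is easiest done by copying the label from disk 0 -- the standard prtvtoc/fmthard idiom:

# prtvtoc /dev/rdsk/c1d0s2 | fmthard -s - /dev/rdsk/c3d0s2

Second, the vfstab edit: the swap line ends up pointing at the metadevice, like so:

/dev/md/dsk/d5   -   -   swap   -   no   -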

Once the system came back up, I attached the metadevices:
# metattach d0 d2   # kicks off the resync for root
# metattach d5 d7   # and for swap
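metastat will show the resync progress if you're curious; the output includes a line along these lines (percentage made up):

# metastat d0 | grep -i resync
    Resync in progress: 12 % done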

The sync went fast since these are small slices. At this point I did some additional configuration. You'll want to use the "svcadm" command to turn off things like autofs, telnet, ftp, etc:
# svcadm disable telnet
# svcadm disable autofs
# svcadm disable finger
(etc. -- I did not document exactly what I disabled, but autofs has to be turned off if you want home directories to work :-))

Also, if you have this specific motherboard, you'll need to add the following to your /etc/driver_aliases for the system to find your network card:

nge "pci10de,df"

(This tells the system to bind the NVIDIA Gigabit Ethernet driver to the PCI device with vendor ID 10de and device ID 00df.) Do the usual editing of /etc/hostname.nge0, /etc/inet/ipnodes, /etc/hosts, /etc/netmasks, and /etc/defaultrouter to get your network up and running.
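For a concrete (entirely made-up) example, assuming you give the box a static address of 192.168.1.50, those files end up looking something like:

# cat /etc/hostname.nge0
fileserver
# grep fileserver /etc/hosts /etc/inet/ipnodes
/etc/hosts:192.168.1.50 fileserver
/etc/inet/ipnodes:192.168.1.50 fileserver
# cat /etc/netmasks
192.168.1.0 255.255.255.0
# cat /etc/defaultrouter
192.168.1.1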

If you had hardware that Solaris just saw out of the box, you probably set this up already during the install. If not, you might have to add something different to your /etc/driver_aliases -- there is pretty good Google juice on the various Ethernet cards out there. I rebooted at this point for the changes to take effect.
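If you are not sure what IDs your card has, prtconf will tell you; the "compatible" property on the device node carries the vendor,device pairs. Something like (output heavily trimmed):

# prtconf -pv | grep compatible
        compatible: 'pci10de,df' + ...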

Now for the fun part: ZFS. For the non-root disks, I created another slice using almost all of each disk (I saved the first three cylinders, since it looked like there was some boot information or something on there). ZFS was a lot easier than I thought it would be. Do a man on zfs and zpool. Creating the pool was easy:
zpool create -f u01 raidz c1d0s6 c1d1s6 c2d0s6 c2d1s6 c3d0s6 c3d1s6 c4d0s6 c4d1s6

The -f is to force it, because c1d0s6 and c3d0s6 are smaller than all of the other partitions. This command returned in about 4 seconds; it literally takes longer to type out the command than it does for ZFS to create you a 1.78TB filesystem. The filesystem will auto-magically be mounted at /u01 (because the pool name is u01, as specified on the command line above -- it could be any arbitrary name). From here, you can use the "zfs" command to create new filesystems that share space from this pool:
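A quick way to sanity-check the new pool is zpool status; the output looks roughly like this (trimmed):

# zpool status u01
  pool: u01
 state: ONLINE
config:

        NAME        STATE     READ WRITE CKSUM
        u01         ONLINE       0     0     0
          raidz     ONLINE       0     0     0
            c1d0s6  ONLINE       0     0     0
            c1d1s6  ONLINE       0     0     0
            ...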

zfs create u01/home
zfs create u01/home/amiller
zfs set mountpoint=/home/amiller u01/home/amiller

None of this has to go into /etc/vfstab; the system just knows about it and mounts everything at boot with "zfs mount -a".
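The mountpoint is stored as a property on the filesystem itself, which is how the system "just knows". You can check it with zfs get; the output is roughly:

amiller$ zfs get mountpoint u01/home/amiller
NAME              PROPERTY    VALUE          SOURCE
u01/home/amiller  mountpoint  /home/amiller  local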
So now on my box, I have:


amiller$ df -kh |grep u01
u01 1.8T 121K 1.7T 1% /u01
u01/home 1.8T 119K 1.7T 1% /u01/home
u01/home/amiller 1.8T 3.7M 1.7T 1% /home/amiller
