Post by Colin Fee
> The box is based around a Gigabyte GA-880GM-USB3 mobo, with an AMD
> Phenom II X6 1055T cpu and buckets of RAM and the OS on a 240GB SSD.
more than adequate.
my main zfs box here is a Phenom II x6 1090T with 16GB RAM and a pair
of 240GB SSDs for OS/L2ARC/ZIL. it's also my desktop machine, internet
gateway, firewall, virtualisation server, and a box for everything and
anything I want to run or experiment with.
so a 1055T dedicated to just a ZFS fileserver will have no problem
coping with the load.
Post by Colin Fee
> So I'm looking for a strategy re the implementation of ZFS.
0. zfsonlinux is pretty easy to work with, easy to learn and to use.
i'd recommend playing around with it to get a feel for how it works and
to experiment with some features before putting it into 'production' use
- otherwise you may find later on that there was a more optimal way of
doing what you want.
e.g. once you've got a pool set up, get into the habit of creating
separate datasets ("subvolumes") on the pool for different purposes
rather than just sub-directories. You can set different quotas on each
dataset ('quota' counts snapshots and descendants too, 'refquota' just
the data itself), and have different attributes (e.g. compression makes
sense for text documents, but not for already-compressed video files).
my main pool is mounted as /export (a nice, traditional *generic*
name), and i have /export/home, /export/src, /export/ftp, /export/www,
/export/tftp and several other subvols on it. as well as zvols for VMs.
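creating datasets and setting per-dataset attributes looks something
like this (the pool/dataset names and sizes here are just examples, not
my actual settings):

```shell
# hypothetical datasets on a pool named 'export'
zfs create export/home
zfs create export/src
zfs create export/video

# per-dataset limits and attributes
zfs set quota=100G export/home        # limit including snapshots/descendants
zfs set refquota=90G export/home      # limit on the data itself only
zfs set compression=on export/src     # text/source compresses well
zfs set compression=off export/video  # already-compressed files don't
```

each dataset gets its own mountpoint automatically (e.g. /export/home),
and properties are inherited by child datasets unless you override them.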
if performance is important to you then do some benchmarking on your own
hardware with various configurations. Russell's bonnie++ is a great
tool for this.
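a typical bonnie++ run against a directory on the pool might look like
the following (path, size, and user are just examples - the -s size
should be at least double your RAM so results aren't just measuring the
ARC):

```shell
# create a scratch directory on the pool for the benchmark
mkdir -p /export/benchtest
chown nobody /export/benchtest

# -d test directory, -u user to run as, -s total file size,
# -n number of files for the create/stat/delete tests
bonnie++ -d /export/benchtest -u nobody -s 32g -n 128
```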
1. if your disks are new and have 4K sectors OR if you're likely to add
4K-sector drives in future, then create your pool with 'ashift=12'
4K sector drives, if not quite the standard right now, are pretty close
to it, and will inevitably replace the old 512-byte sector standard in
a very short time.
(btw, ashift=12 works even with old 512-byte sector drives, because 4096
is a multiple of 512. there is an insignificantly tiny reduction in
usable space when using ashift=12 on 512-byte sector drives).
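creating the pool with ashift set looks like this (pool name and device
paths are placeholders - using /dev/disk/by-id/ paths is a good habit so
the pool survives device renumbering):

```shell
# 4-drive raidz1 pool with 4K-sector alignment forced via ashift=12
zpool create -o ashift=12 export raidz1 \
    /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
    /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4
```

note that ashift is per-vdev and fixed at creation time - you can't
change it later without destroying and recreating the pool.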
2. The SSD can be partitioned so that some of it is for the OS (50-100GB
should be plenty), some for a small ZIL (write intent log - 4 or 8GB is
heaps), and the remainder for L2ARC read cache.
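adding the SSD partitions to an existing pool would look something like
this (device names and the partition layout are just an example):

```shell
# assumed layout: sda1 = OS, sda2 = 8GB for ZIL, sda3 = rest for L2ARC
zpool add export log /dev/sda2     # separate intent log (SLOG)
zpool add export cache /dev/sda3   # L2ARC read cache
```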
3. if you're intending to use some of that SSD for ZIL (write intent
log) then it's safer to have two ZIL partitions on two separate SSDs
in a mirror configuration, so that if one SSD dies, you don't lose
recent unflushed writes. this is one of those things that is low risk
but high damage potential.
in fact, an mdadm raid-1 for the OS, two non-raid L2ARC cache
partitions, and mirrored ZIL is, IMO, an ideal configuration.
if you can only fit one SSD in the machine, then obviously you can't do
this.
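with two SSDs partitioned identically, the mirrored-ZIL-plus-two-caches
setup is just (device names are placeholders):

```shell
# mirrored intent log across both SSDs - survives losing one SSD
zpool add export log mirror /dev/sda2 /dev/sdb2

# cache vdevs can't be mirrored (no point - L2ARC contents are
# disposable), so two independent cache partitions are simply striped
zpool add export cache /dev/sda3 /dev/sdb3
```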
4. if performance is far more important than size, then create your pool
with two mirrored pairs (i.e. the equivalent of RAID-10) rather than
RAID-Z1. This will give you the capacity of two drives, whereas RAID-Z1
would give you the capacity of 3 of your 4 drives.
It also has the advantage of being easier/cheaper to expand: just add
another mirrored pair of drives to the pool. expanding a RAID-Z
involves either adding another complete RAID-Z vdev to the pool (i.e. 4
more drives, so that you have a pool consisting of two RAID-Zs) or
replacing each individual drive one after the other.
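the RAID-10-equivalent setup and its expansion look like this (pool
name and device names are placeholders):

```shell
# two mirrored pairs, striped together (the RAID-10 equivalent)
zpool create export mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd

# expanding later is just another pair
zpool add export mirror /dev/sde /dev/sdf
```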
e.g. 4 x 2TB drives would give 4TB total in RAID-10, compared to 6TB if
you used RAID-Z1.
I use RAID-Z1 on my home file server and find the performance to be
acceptable. The only time I really find it slow is when a 'zpool scrub'
is running (i have a weekly cron job on my box)...my 4TB zpools are now
about 70% full, so it takes about 6-8 hours for scrub to run. It's only
a problem if i'm using the machine late at night.
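the weekly scrub is just a one-line cron entry, e.g. (filename, pool
name, and schedule here are examples, not my exact setup):

```shell
# /etc/cron.d/zfs-scrub - run a scrub every sunday at 3:30am
30 3 * * 0  root  /sbin/zpool scrub export
```

'zpool status' shows scrub progress and any errors it found.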
I also use RAID-Z1 on my mythtv box. I've never had an issue with
performance on that, although I do notice if I transcode/cut-ads more
than 3 or 4 recordings at once (but that's a big IO load for a home
server, read 2GB or more - much more for HD recordings - and write
1GB or so for each transcode, all simultaneous). It's mostly large
sequential files, which is a best-case scenario for disk performance.
Post by Colin Fee
> I can install up to 4 SATA disks onto the mobo (5 in total with one
> slot used by the SSD)
if you ever plan to add more drives AND have a case that can physically
fit them, then I highly recommend LSI 8-port SAS PCI-e 8x controllers.
They're dirt cheap (e.g. IBM M1015 can be bought new off ebay for under
$100), high performance, and can take 8 SAS or SATA drives. They're
much better and far cheaper than any available SATA expansion cards.
craig
--
craig sanders <***@taz.net.au>
BOFH excuse #188:
..disk or the processor is on fire.