Hi all,
we have just bought a Sun X2200 M2 (4 GB RAM / 2 Opteron 2214 / 2 x 250 GB
SATA2 disks, Solaris 10 Update 4)
and a Sun StorageTek 2540 FC array (8 x 146 GB SAS disks, 1 RAID controller).
The server is attached to the array with a single 4 Gb Fibre Channel link.
I want to build a ZFS mirror on this array.
I have created 2 volumes on the array
in RAID0 (128 KB stripe), presented to the host as LUN 0 and LUN 1.
So, on the host :
bash-3.00# format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c1d0 <DEFAULT cyl 30397 alt 2 hd 255 sec 63>
/pci@0,0/pci-ide@5/ide@0/cmdk@0,0
1. c2d0 <DEFAULT cyl 30397 alt 2 hd 255 sec 63>
/pci@0,0/pci-ide@5/ide@1/cmdk@0,0
2. c6t600A0B800038AFBC000002F7472155C0d0 <DEFAULT cyl 35505 alt 2
hd 255 sec 126>
/scsi_vhci/disk@g600a0b800038afbc000002f7472155c0
3. c6t600A0B800038AFBC000002F347215518d0 <DEFAULT cyl 35505 alt 2
hd 255 sec 126>
/scsi_vhci/disk@g600a0b800038afbc000002f347215518
Specify disk (enter its number):
bash-3.00# zpool create tank mirror
c6t600A0B800038AFBC000002F347215518d0 c6t600A0B800038AFBC000002F7472155C0d0
bash-3.00# df -h /tank
Filesystem size used avail capacity Mounted on
tank 532G 24K 532G 1% /tank
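Just to confirm the layout, zpool status reports the expected two-way mirror
(output trimmed to the config section, roughly) :
bash-3.00# zpool status tank
  pool: tank
 state: ONLINE
config:
        NAME                                       STATE
        tank                                       ONLINE
          mirror                                   ONLINE
            c6t600A0B800038AFBC000002F347215518d0  ONLINE
            c6t600A0B800038AFBC000002F7472155C0d0  ONLINE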
I have tested the performance with a simple dd
[
time dd if=/dev/zero of=/tank/testfile bs=1024k count=10000
time dd if=/tank/testfile of=/dev/null bs=1024k count=10000
]
command and it gives :
# local throughput
STK 2540 - ZFS mirror (/tank) :
read 232 MB/s
write 175 MB/s
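(For anyone who wants to reproduce the numbers : watching the per-LUN
throughput while the dd runs is easy with something like
bash-3.00# zpool iostat -v tank 5
in a second terminal.)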
# just to test the max perf I did:
zpool destroy -f tank
zpool create -f pool c6t600A0B800038AFBC000002F347215518d0
And the same basic dd gives me :
STK 2540 - single-LUN ZFS pool (/pool) :
read 320 MB/s
write 263 MB/s
Just to give an idea, the SVM mirror on the two local SATA2 disks
gives :
read 58 MB/s
write 52 MB/s
So, in production the ZFS mirror /tank will hold
our home directories (10 users, 10 GB each),
our project files (200 GB, mostly text files and a CVS repository),
and some vendor tools (100 GB).
People will access the data (/tank) over NFSv4 from their
workstations (Sun Ultra 20 M2 running CentOS 4 Update 5).
On the Ultra 20 M2, the same basic test over NFSv4 gives :
read 104 MB/s
write 63 MB/s
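(For reference, the setup I have in mind is just the standard ZFS NFS share on
the server and a plain NFSv4 mount on the client, something along these lines,
where the hostname is a placeholder and the rsize/wsize values are only a first
guess, not tuned :
bash-3.00# zfs set sharenfs=on tank
# and on the CentOS workstation :
mount -t nfs4 -o rsize=32768,wsize=32768 myserver:/tank /tank
)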
At this point, I have the following questions :
-- Does anyone have similar figures for the STK 2540 with ZFS ?
-- Instead of creating only 2 volumes on the array,
what do you think about creating 8 volumes (one per disk)
and building a pool of 4 two-way mirrors (spelled out in full below) :
zpool create tank mirror c6t6001.. c6t6002.. mirror c6t6003..
c6t6004.. {...} mirror c6t6007.. c6t6008..
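(With placeholder device names, since the real WWNs would of course differ :
zpool create tank mirror <lun0> <lun1> mirror <lun2> <lun3> \
    mirror <lun4> <lun5> mirror <lun6> <lun7>
)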
-- I will add 4 disks to the array next summer.
Do you think I should create 2 new LUNs on the array
and do a :
zpool add tank mirror c6t6001..(lun3) c6t6001..(lun4)
or rebuild the 2 LUNs from scratch (6-disk RAID0 each) along with the pool tank
(ie : back up /tank - zpool destroy - add disks - reconfigure the array
- zpool create tank ... - restore the backed-up data) ?
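(If the expand-in-place route makes sense, a dry run with zpool add -n on the
two new LUNs should show the resulting layout before committing anything;
device names below are placeholders :
bash-3.00# zpool add -n tank mirror <new-lun3> <new-lun4>
)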
-- I am thinking of scrubbing the pool once a month.
Is that sufficient ?
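(If monthly is enough, I would just put it in root's crontab, something like :
# scrub tank at 03:00 on the first day of each month
0 3 1 * * /usr/sbin/zpool scrub tank
)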
-- Have you got any comments on the performance seen from the NFSv4 client ?
If you have any other advice / suggestions, feel free to share.
Thanks,
Benjamin