hi all,

I'd like to build a solid storage server using ZFS and OpenSolaris. The server should more or less play a NAS role, using NFSv4 to export the data to other nodes.

Since I am more of an engineer/sysadmin than a hardware guru, I was wondering what the best configuration is in terms of both price and performance. The constraint is to use common SATA-II disks. From many discussions I gathered that the controller should be configured as a plain pass-through (or single-disk RAID-0 volumes), so that raidz is built directly on top of the individual disks.

So what are your best-practice recommendations regarding hardware?

-> How many SATA controllers per set of disks give the best performance? E.g. is there a performance improvement if multiple cheap controllers are used to access N disks, or is DMA a limitation anyway?

-> Memory/RAM: I've read that ZFS can use more cache on a 64-bit processor than on a 32-bit one. What about RAM speed? If I use commodity (e.g. 7200 rpm) SATA-II disks, I guess performance is largely determined by the speed of the disks, so is DDR2 a plus? Is there a rule of thumb for sizing the ZFS cache? Without a special power-failure setup, write caches are problematic anyway.

-> Processor/bus: a NAS server node is I/O-centric. Is there a feature of the AMD Opteron and its chipsets that would favour such a chip, for instance in the way memory is accessed?

-> NIC: a single disk has a read/write transfer rate of around 60 MB/s, which is roughly 500 Mbit/s. So I guess that for such a box a server-class NIC would not yield much performance improvement anyway.

What would be your reasonable advice?

--Jakob
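To make the pool layout concrete, a minimal sketch of the setup described above, assuming the controllers present the disks as plain devices; the pool name "tank" and the c#t#d# device names are placeholders:

    # Eight SATA disks split across two controllers (c1 and c2), grouped into
    # two raidz vdevs so that each vdev spans both controllers.
    zpool create tank \
        raidz c1t0d0 c1t1d0 c2t0d0 c2t1d0 \
        raidz c1t2d0 c1t3d0 c2t2d0 c2t3d0

    # Export the pool over NFS using the built-in share property.
    zfs set sharenfs=on tank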
On 12/10/06, Jakob Praher <jp at hapra.at> wrote:
> hi all,
>
> I'd like to build a solid storage server using ZFS and OpenSolaris. The
> server should more or less play a NAS role, using NFSv4 to export the
> data to other nodes.
>
> Since I am more of an engineer/sysadmin than a hardware guru, I was
> wondering what the best configuration is in terms of both price and
> performance.

We would need some idea of your budget and how much disk space you want. Given what you have said so far, the best deal on the market is the x4500 ("Thumper"): 48 250 GB or 500 GB drives in a 4 RU case along with two dual-core Opteron CPUs, and just $2/GB for 24 TB of storage.

http://www.sun.com/servers/x64/x4500/index.xml

James Dickens
uadmin.blogspot.com
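The $2/GB figure works out roughly like this (exact list price varies):

    48 drives x 500 GB  =  24,000 GB raw
    24,000 GB x $2/GB  ~=  $48,000 for the fully populated box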
Jakob Praher wrote:
> hi all,
>
> I'd like to build a solid storage server using ZFS and OpenSolaris. The
> server should more or less play a NAS role, using NFSv4 to export the
> data to other nodes.
>
> ...
>
> What would be your reasonable advice?

First of all, figure out what you need in terms of capacity and IOPS. This will determine the number of spindles, CPUs, network adaptors, etc.

Keep in mind that for large sequential reads and large writes you can get a significant fraction of the max throughput of the drives; my 4 x 500 GB RAIDZ configuration does 150 MB/s pretty consistently. If you're doing small random reads or writes, you'll be much more limited by the number of spindles and the way you configure them.

- Bart

--
Bart Smaalders                  Solaris Kernel Performance
barts at cyber.eng.sun.com      http://blogs.sun.com/barts
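As a rough back-of-the-envelope for those numbers, assuming ~50-60 MB/s per 7200 rpm SATA drive and a pool named "tank" (placeholder):

    # Streaming: a raidz vdev of N disks reads/writes large blocks at roughly
    # (N - 1) x the per-disk rate, so a 4-disk raidz gives about
    # 3 x 50-60 MB/s = 150-180 MB/s, in line with the figure above.
    #
    # Small random I/O: each raidz vdev delivers roughly the random IOPS of a
    # single disk (~75-100 IOPS for a 7200 rpm drive), so random workloads
    # scale with the number of vdevs rather than the number of disks.

    # Watch the per-vdev and per-disk rates on a live pool with:
    zpool iostat -v tank 5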