Hello zfs-discuss,
The server is a V440 running Solaris 10U2 + patches. Each test was repeated
at least twice and two results are posted. The server is connected through a
dual-ported FC card with MPxIO, using FC-AL (DAS).
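(MPxIO on Solaris 10 is typically enabled with stmsboot; a minimal sketch,
not necessarily the exact steps used here:)

    # Enable Solaris I/O multipathing (MPxIO) on the FC HBAs, then reboot
    stmsboot -e
    # After the reboot, list the non-MPxIO to MPxIO device name mappings
    stmsboot -L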
1. 3510, RAID-10 using 24 disks from two enclosures, random
optimization, 32KB stripe width, write-back, one LUN
1.1 filebench/varmail for 60s
a. ZFS on top of LUN, atime=off
IO Summary: 490054 ops 8101.6 ops/s, (1246/1247 r/w) 39.9mb/s,
291us cpu/op, 6.1ms latency
IO Summary: 492274 ops 8139.6 ops/s, (1252/1252 r/w) 40.1mb/s,
303us cpu/op, 6.1ms latency
b. ZFS on top of LUN, atime=off
WRITE CACHE OFF (write-thru)
IO Summary: 281048 ops 4647.0 ops/s, (715/715 r/w) 22.8mb/s, 298us
cpu/op, 10.7ms latency
IO Summary: 282200 ops 4665.3 ops/s, (718/718 r/w) 23.0mb/s, 298us
cpu/op, 10.6ms latency
c. UFS on top of LUN, noatime, maxcontig set to 48
IO Summary: 383262 ops 6337.1 ops/s, (975/975 r/w) 31.2mb/s, 566us
cpu/op, 7.9ms latency
IO Summary: 381706 ops 6310.4 ops/s, (971/971 r/w) 31.1mb/s, 560us
cpu/op, 7.9ms latency
d. UFS on top of LUN, noatime, maxcontig set to 48,
WRITE CACHE OFF (write-thru)
IO Summary: 148825 ops 2460.0 ops/s, (378/379 r/w) 12.1mb/s, 772us
cpu/op, 20.9ms latency
IO Summary: 151152 ops 2498.4 ops/s, (384/385 r/w) 12.4mb/s, 758us
cpu/op, 20.5ms latency
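A minimal sketch of how the ZFS and UFS setups in #1 could have been created,
assuming the array exports a single LUN seen as c4t0d0 (device name and mount
point are hypothetical); the write-back/write-thru cache setting is toggled on
the array itself, not from the host:

    # ZFS on the single HW RAID-10 LUN, atime disabled
    zpool create tank c4t0d0
    zfs set atime=off tank

    # UFS on the same LUN, maxcontig raised to 48, mounted noatime
    newfs /dev/rdsk/c4t0d0s0
    tunefs -a 48 /dev/rdsk/c4t0d0s0
    mount -F ufs -o noatime /dev/dsk/c4t0d0s0 /ufs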
2. 3510, 2x (4x RAID-0 of 3 disks each), 32KB stripe width,
random optimization, write-back. Four R0 groups are in one enclosure
and assigned to the primary controller; the other four R0 groups are in
the second enclosure and assigned to the secondary controller. A RAID-10
is then created in ZFS by mirroring groups across the controllers (see
the pool sketch after these results). 24 disks total, as in #1.
2.1 filebench/varmail 60s
a. ZFS RAID-10, atime=off
IO Summary: 379284 ops 6273.4 ops/s, (965/965 r/w) 30.9mb/s, 314us
cpu/op, 8.0ms latency
IO Summary: 383917 ops 6346.9 ops/s, (976/977 r/w) 31.4mb/s, 316us
cpu/op, 7.8ms latency
b. ZFS RAID-10, atime=off
WRITE CACHE OFF (write-thru)
IO Summary: 275490 ops 4549.9 ops/s, (700/700 r/w) 22.3mb/s, 327us
cpu/op, 11.0ms latency
IO Summary: 276027 ops 4567.8 ops/s, (703/703 r/w) 22.5mb/s, 319us
cpu/op, 11.0ms latency
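A minimal sketch of the pool layout in #2, assuming the eight 3-disk R0 LUNs
appear as c4t0d0-c4t3d0 (primary controller) and c4t4d0-c4t7d0 (secondary
controller); all device names are hypothetical:

    # Mirror each primary-controller R0 LUN with one on the secondary
    # controller; ZFS then stripes across the four mirrors (RAID-10).
    zpool create tank \
        mirror c4t0d0 c4t4d0 \
        mirror c4t1d0 c4t5d0 \
        mirror c4t2d0 c4t6d0 \
        mirror c4t3d0 c4t7d0
    zfs set atime=off tank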
--
Best regards,
Robert mailto:rmilkowski at task.gda.pl
http://milek.blogspot.com
So this is the interesting data, right?
1. 3510, RAID-10 using 24 disks from two enclosures, random
optimization, 32KB stripe width, write-back, one LUN
1.1 filebench/varmail for 60s
a. ZFS on top of LUN, atime=off
IO Summary: 490054 ops 8101.6 ops/s, (1246/1247 r/w) 39.9mb/s,
291us cpu/op, 6.1ms latency
2. 3510, 2x (4x RAID-0 of 3 disks each), 32KB stripe width,
random optimization, write-back. Four R0 groups are in one enclosure
and assigned to the primary controller; the other four R0 groups are in
the second enclosure and assigned to the secondary controller. A RAID-10
is then created in ZFS by mirroring groups across the controllers.
24 disks total, as in #1.
2.1 filebench/varmail 60s
a. ZFS RAID-10, atime=off
IO Summary: 379284 ops 6273.4 ops/s, (965/965 r/w) 30.9mb/s,
314us cpu/op, 8.0ms latency
Have you tried 1M stripes, especially in case 2?
-r
Hello Roch,
Thursday, August 24, 2006, 3:37:34 PM, you wrote:
R> So this is the interesting data, right?
R> 1. 3510, RAID-10 using 24 disks from two enclosures, random
R> optimization, 32KB stripe width, write-back, one LUN
R> 1.1 filebench/varmail for 60s
R> a. ZFS on top of LUN, atime=off
R> IO Summary: 490054 ops 8101.6 ops/s, (1246/1247 r/w)
R> 39.9mb/s, 291us cpu/op, 6.1ms latency
R> 2. 3510, 2x (4x RAID-0 of 3 disks each), 32KB stripe width,
R> random optimization, write-back. Four R0 groups are in one enclosure
R> and assigned to the primary controller; the other four R0 groups are
R> in the second enclosure and assigned to the secondary controller. A
R> RAID-10 is then created in ZFS by mirroring groups across the
R> controllers. 24 disks total, as in #1.
R> 2.1 filebench/varmail 60s
R> a. ZFS RAID-10, atime=off
R> IO Summary: 379284 ops 6273.4 ops/s, (965/965 r/w)
R> 30.9mb/s, 314us cpu/op, 8.0ms latency
R> Have you tried 1M stripes, especially in case 2?
R> -r
I did try with 128KB and 256KB stripe widths - the same results
(difference less than 5%).
I haven't tested 1MB because the maximum for the 3510 is 256KB.
--
Best regards,
Robert mailto:rmilkowski at task.gda.pl
http://milek.blogspot.com
Hello Robert,
Thursday, August 24, 2006, 4:25:16 PM, you wrote:
RM> I did try with 128KB and 256KB stripe widths - the same results
RM> (difference less than 5%).
RM> I haven't tested 1MB because the maximum for the 3510 is 256KB.
I've just tested with 4KB - the same result.
--
Best regards,
Robert mailto:rmilkowski at task.gda.pl
http://milek.blogspot.com
Hello Robert,
Thursday, August 24, 2006, 4:44:26 PM, you wrote:
RM>> I did try with 128KB and 256KB stripe widths - the same results
RM>> (difference less than 5%).
RM>> I haven't tested 1MB because the maximum for the 3510 is 256KB.
RM> I've just tested with 4KB - the same result.
And now I tried creating two stripes on the 3510, each with 12 disks
and a 32KB stripe width, each on a different controller.
Then I mirrored them using ZFS.
The result is ~6300 IOPS for the same test.
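Roughly, assuming each 12-disk stripe is exported as one LUN per controller
(device names hypothetical), this amounts to:

    # One 12-disk HW RAID-0 LUN per controller, mirrored by ZFS
    zpool create tank mirror c4t0d0 c4t1d0
    zfs set atime=off tank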
Looks like I'll go with HW RAID and ZFS as the file system, for several
reasons.
--
Best regards,
Robert mailto:rmilkowski at task.gda.pl
http://milek.blogspot.com
1. ZFS on top of LUN, atime=off, nthreads set to 64
IO Summary: 1008724 ops 16698.4 ops/s, (2569/2570 r/w) 82.0mb/s, 286us cpu/op, 9.8ms latency
IO Summary: 1004646 ops 16617.3 ops/s, (2556/2557 r/w) 81.5mb/s, 283us cpu/op, 10.0ms latency
2. UFS on top of LUN (the same as in #1), noatime, nthreads set to 64
IO Summary: 641008 ops 10606.0 ops/s, (1632/1632 r/w) 51.8mb/s, 417us cpu/op, 16.5ms latency
IO Summary: 631543 ops 10452.8 ops/s, (1608/1609 r/w) 51.0mb/s, 410us cpu/op, 16.2ms latency
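For reference, the thread count was presumably raised in filebench's
interactive shell along these lines (the target directory is hypothetical):

    filebench> load varmail
    filebench> set $dir=/tank/fbtest
    filebench> set $nthreads=64
    filebench> run 60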
nthreads set to 256
1. ZFS, atime=off, HW RAID-10, 24 disks
IO Summary: 1545154 ops 25594.2 ops/s, (3935/3939 r/w) 122.7mb/s, 288us cpu/op, 20.0ms latency
2. UFS, noatime, HW RAID-10, 24 disks
IO Summary: 716377 ops 11862.4 ops/s, (1823/1826 r/w) 56.1mb/s, 432us cpu/op, 23.2ms latency
So it looks like the more concurrency there is, the better ZFS performs compared to UFS.