Bob Evans
2006-Aug-16 13:55 UTC
[zfs-discuss] Why is ZFS raidz slower than simple ufs mount?
Hi, this is a follow-up to "Significant pauses to zfs writes". I'm getting about 15% slower performance using ZFS raidz than if I just mount the same type of drive using UFS. Based on some of the suggestions I received, I tried a bit more testing.

First, here is my setup:

SunBlade 1000
750 MHz CPU
2 GB memory
2x36 GB FC internal drives
Antares dual channel SCSI card (80 MB/sec)
Compaq SCSI drive chassis, 14x18.3 GB, 15K RPM drives

I configured a ZFS raidz file system on one half of the drive array, using one channel of the SCSI card; the other channel went to the other half of the array, where I mounted a single drive with UFS. This lets me use the same SCSI card, hardware and disk drives for both tests.

I copied (using cp) a 10 GB test file to both the zfs drive and the ufs drive. On average, it took 6 minutes for the ufs drive and 7.5 minutes for the zfs drive.

Earlier, I had made the observation that during the zfs writes, there was a "cycling" of writes. This involved about 2 seconds of disk writes, followed by about 2 seconds of idle time, followed by another 2 seconds of disk writes (observed from the disk activity lights on the unit). I finally brought up sdtperfmeter to view what was happening during those writes. I could definitely see the CPU repeatedly hit 100%, then drop off. This corresponded with the observed writes and pauses on the drive unit.

Attached are snapshots of the sdtperfmeter during these tests:
nfs_write_3 is the write to the ufs drive (10 GB test file)
zfs_write_3 is the write to the zfs drive
zfs_write_2cpu is the same test, but with an additional CPU installed on the server (2x750)

I still don't understand why the zfs file system is slower than ufs. The zfs writes were still slower even with 2 CPUs in the server. Does anybody have any ideas?

This message posted from opensolaris.org

[Attachments: nfs_write_3.png, zfs_write_3.png, zfs_write_2cpu.png]
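For repeatability, this kind of comparison is easier to script than to time by hand. A minimal sketch, assuming placeholder paths and mount points (`/z` for the raidz pool, `/foo` for the UFS disk, as used later in this thread); `mkfile` is Solaris-specific, `dd` works elsewhere:

```shell
# Create the 10 GB test file once (Solaris mkfile, or the equivalent dd):
mkfile 10g /var/tmp/testfile
# dd if=/dev/zero of=/var/tmp/testfile bs=1048576 count=10240

# Time the identical copy onto each filesystem:
time cp /var/tmp/testfile /z/testfile     # ZFS raidz pool
time cp /var/tmp/testfile /foo/testfile   # single-disk UFS

# In a second terminal, watch per-disk throughput at 1-second intervals;
# this shows the 2s-write / 2s-idle cycling numerically instead of via LEDs:
iostat -xnz 1
```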
Robert Milkowski
2006-Aug-17 06:04 UTC
[zfs-discuss] Why is ZFS raidz slower than simple ufs mount?
Hello Bob,

Wednesday, August 16, 2006, 3:55:26 PM, you wrote:

BE> Hi, this is a follow up to "Significant pauses to zfs writes".
BE> I'm getting about 15% slower performance using ZFS raidz than if
BE> I just mount the same type of drive using ufs.
[...]
BE> I configured a ZFS raidz file system on one half of the drive
BE> array, using one channel of the scsi card, the other channel went
BE> to the other half of the array where I mounted a single drive UFS.
BE> This lets me use the same scsi card, hardware and disk drives for both tests.

What do you mean that you configured a ZFS raidz file system? For raid-z you need at least two disks/partitions.

Can you send zpool status output and give the exact config for UFS? Can you also give details on which slices (starting and ending block) you used for UFS and for ZFS?

-- 
Best regards,
Robert                          mailto:rmilkowski at task.gda.pl
                                http://milek.blogspot.com
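The details Robert asks for can be gathered with standard Solaris commands (device names below follow the ones quoted later in the thread; treat them as placeholders):

```shell
zpool status z               # pool layout: confirms the raidz vdev and its member disks
prtvtoc /dev/rdsk/c3t0d0s0   # slice table (first/last sector) for one raidz member
prtvtoc /dev/rdsk/c4t8d0s0   # slice table for the UFS target disk
df -k /foo                   # mount and capacity details for the UFS filesystem
```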
Bob Evans
2006-Aug-17 12:56 UTC
[zfs-discuss] Re: Why is ZFS raidz slower than simple ufs mount?
Robert,

Sorry about not being clearer. The storage unit I am using is configured as follows:

X X X X X X X X X X X X X X
                          \
                           \-- (Each X is an 18 GB SCSI disk)

The first 7 disks have been used for the ZFS raidz; I used the last disk (#14) for my UFS target. The first 7 are on one SCSI channel, the next 7 are on the other channel.

Here is the output of zpool status:

  pool: z
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        z           ONLINE       0     0     0
          raidz     ONLINE       0     0     0
            c3t0d0  ONLINE       0     0     0
            c3t1d0  ONLINE       0     0     0
            c3t2d0  ONLINE       0     0     0
            c3t3d0  ONLINE       0     0     0
            c3t4d0  ONLINE       0     0     0
            c3t5d0  ONLINE       0     0     0
            c3t8d0  ONLINE       0     0     0

errors: No known data errors

Here is the format of each of the 14 disks in the array:

partition> print
Current partition table (original):
Total disk sectors available: 35548662 + 16384 (reserved sectors)

Part      Tag    Flag     First Sector        Size        Last Sector
  0        usr    wm                34     16.95GB           35548662
  1 unassigned    wm                 0           0                  0
  2 unassigned    wm                 0           0                  0
  3 unassigned    wm                 0           0                  0
  4 unassigned    wm                 0           0                  0
  5 unassigned    wm                 0           0                  0
  6 unassigned    wm                 0           0                  0
  8   reserved    wm          35548663      8.00MB           35565046

I ufs mounted the target disk by doing the following:

newfs /dev/rdsk/c4t8d0s0
mount /dev/dsk/c4t8d0s0 /foo

Thanks!
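From the zpool status output, the two test configurations could be reproduced roughly as follows. This is a sketch only: the device names come from this thread, and both `zpool create` and `newfs` destroy existing data, so double-check targets before running anything like it:

```shell
# 7-disk raidz on one SCSI channel:
zpool create z raidz c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 c3t8d0

# Single-disk UFS on the other channel
# (mount takes the block device first, then the mount point):
newfs /dev/rdsk/c4t8d0s0
mkdir -p /foo
mount /dev/dsk/c4t8d0s0 /foo
```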
Victor Latushkin
2006-Aug-17 14:39 UTC
[zfs-discuss] Re: Why is ZFS raidz slower than simple ufs mount?
Hi Bob,

you are using a non-Sun SCSI HBA. Could you please be more specific about the HBA model and driver?

You are getting pretty much the same high CPU load with writes to the single-disk UFS as with raid-z. This may mean that the problem is not with ZFS itself.

Victor

Bob Evans wrote:
> Robert,
>
> Sorry about not being clearer.
>
> The storage unit I am using is configured as follows:
> [...]
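One way to test Victor's suggestion, that CPU saturation rather than ZFS is the bottleneck, is to watch per-CPU and per-process usage while the copy runs. These are standard Solaris 10 observability tools; the 1-second interval is arbitrary:

```shell
mpstat 1       # per-CPU breakdown: usr/sys/idle time, cross-calls, interrupts
prstat -m 1    # per-process microstate accounting; high SYS time during cp
               # points at the I/O path rather than user code
intrstat 1     # interrupt activity per driver, useful for spotting an HBA
               # driver consuming the CPU
```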
Bob Evans
2006-Aug-17 16:31 UTC
[zfs-discuss] Re: Re: Why is ZFS raidz slower than simple ufs mount?
First, I apologize: I listed the Antares in my original post, but it was one of two SCSI cards I tested with. The posted CPU snapshots were from the LSI 22320 card (mentioned below).

I've tried this with two different SCSI cards. As far as I know, both are standard SCSI cards used with Suns; Sun lists the LSI 22320 as a supported HBA (I believe). They are:

LSI 22320 64-bit PCI-X Ultra 320 SCSI Dual Channel HBA (newer card)
Antares PCI SCSI-2U2WL dual 80 MB/sec SCSI (older card)

I get the same results with both cards. I use the default drivers that Solaris 10 Update 2 provides. I think we got the newer card through Sun, but I don't have the original box. The older card had been used in an Enterprise 450 box.

Bob
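To see which driver each HBA actually binds to under Solaris 10 (the LSI 22320 family typically attaches to the mpt driver, though that and the grep patterns below are assumptions worth verifying):

```shell
prtconf -D | grep -i scsi          # driver name bound to each SCSI device node
modinfo | grep -i -e mpt -e glm    # loaded SCSI HBA driver modules and versions
prtdiag -v                         # slot-by-slot inventory of installed cards
```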
Richard Elling - PAE
2006-Aug-17 18:02 UTC
[zfs-discuss] Why is ZFS raidz slower than simple ufs mount?
Bob Evans wrote:
> Hi, this is a follow up to "Significant pauses to zfs writes".
>
> I'm getting about 15% slower performance using ZFS raidz than if I just mount the same type of drive using ufs.

What is your expectation?
 -- richard