William D. Hathaway
2006-Mar-31 02:29 UTC
[zfs-discuss] gtar on zfs vs gtar on ufs -- build 36
Hi,
As part of my "many small files" benchmarking effort, I did the following test:

* Create 500k files of approx. 13k in size (2-24k random distribution), spread
  across 1000 directories, on a ZFS filesystem and a UFS filesystem (the only
  mount/set option for both was disabling atime).

* Use the Solaris bundled gtar to do runs of:
      time /usr/sfw/bin/gtar -cf - $fs/fstest | cat > /dev/null
  (where $fs == ufs or zfs; I had to put the cat in there because otherwise
  gtar doesn't actually read file contents if it detects that /dev/null is the
  output device).

I've completed 3 serial runs on UFS with times of 1:51:23, 1:52:32, 1:52:17.
I've completed 2 serial runs on ZFS with times of 2:43:41 and 2:40:06
(a 3rd ZFS run is in progress).

I'm running:
SunOS spaz 5.11 snv_36 i86pc i386 i86pc
on an 1800 MHz W110Z with a single disk.

df -k /ufs/fstest /zfs/fstest
Filesystem            kbytes    used     avail    capacity  Mounted on
/dev/dsk/c0d0s3     20170273  6546310  13422261     33%     /ufs/fstest
zfs/fstest          48637952  6574605  42058168     14%     /zfs/fstest

While I am thrilled with "zfs backup" performance, I was surprised that UFS
still came out roughly 40% faster when I ran the same gtar test on each
filesystem.

This message posted from opensolaris.org
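For reference, the file-creation step above can be scripted in a few lines. This is only an illustrative sketch, not the exact tool used in the test: it assumes ksh on Solaris and uses mkfile, which writes zero-filled files rather than real data, so compression-related effects are not represented.

    #!/bin/ksh
    # Illustrative generator: 1000 directories, 500 files each,
    # sizes roughly uniform between 2k and 24k.
    # Usage: mkfiles.ksh /zfs/fstest   (or /ufs/fstest)
    base=$1
    dirs=1000
    files_per_dir=500

    d=0
    while [ $d -lt $dirs ]; do
        mkdir -p $base/dir$d
        f=0
        while [ $f -lt $files_per_dir ]; do
            # size in bytes, between 2048 and ~24575
            size=$((2048 + RANDOM % 22528))
            /usr/sbin/mkfile $size $base/dir$d/file$f
            f=$((f + 1))
        done
        d=$((d + 1))
    done

The timed gtar run itself is exactly as shown above: pipe through cat so gtar is forced to read the file data even though the output is ultimately discarded.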
William D. Hathaway wrote:
> Hi,
> As part of my "many small files" benchmarking effort, I did the following test:
>
> * Create 500k files of approx. 13k in size (2-24k random distribution),
>   spread across 1000 directories, on a ZFS filesystem and a UFS filesystem
>   (the only mount/set option for both was disabling atime).
>
> * Use the Solaris bundled gtar to do runs of:
>       time /usr/sfw/bin/gtar -cf - $fs/fstest | cat > /dev/null
>   (where $fs == ufs or zfs; I had to put the cat in there because otherwise
>   gtar doesn't actually read file contents if it detects that /dev/null is
>   the output device).
>
> I've completed 3 serial runs on UFS with times of 1:51:23, 1:52:32, 1:52:17.
> I've completed 2 serial runs on ZFS with times of 2:43:41 and 2:40:06
> (a 3rd ZFS run is in progress).
>
> I'm running:
> SunOS spaz 5.11 snv_36 i86pc i386 i86pc
> on an 1800 MHz W110Z with a single disk.
>
> df -k /ufs/fstest /zfs/fstest
> Filesystem            kbytes    used     avail    capacity  Mounted on
> /dev/dsk/c0d0s3     20170273  6546310  13422261     33%     /ufs/fstest
> zfs/fstest          48637952  6574605  42058168     14%     /zfs/fstest
>
> While I am thrilled with "zfs backup" performance, I was surprised that UFS
> still came out roughly 40% faster when I ran the same gtar test on each
> filesystem.

Hmmm, you're running 64-bit, right?

I just did this on today's non-debug bits (3/31), and saw ZFS faster.

For ZFS (uncaching data via export)...

fsh-mullet# zpool export z
fsh-mullet# zpool import z
fsh-mullet# cd /z
fsh-mullet# /bin/time /usr/sfw/bin/gtar -cf - . | cat > /dev/null

real     3:26.7
user       11.4
sys      1:01.0
fsh-mullet#

Now for UFS (uncaching data via umount)...

fsh-mullet# umount /ufs_gtar
fsh-mullet# mount -F ufs /dev/dsk/c0t1d0s0 /ufs_gtar
fsh-mullet# cd /ufs_gtar
fsh-mullet# /bin/time /usr/sfw/bin/gtar -cf - . | cat > /dev/null

real     4:07.2
user       11.5
sys        59.4
fsh-mullet#

Note, I did mine on a whole separate disk, and used the exact same disk for
both UFS and ZFS (so I don't have to worry about which partition the
filesystem is on). I did the gtar on 1000 directories of 500 2k files. Since
you only have one disk, perhaps the ZFS filesystem is on an inner track and
not getting the same throughput as the UFS filesystem.

eric
William D. Hathaway
2006-Apr-01 01:07 UTC
[zfs-discuss] gtar on zfs vs gtar on ufs -- build 36
Hi Eric,
Good call on the possibility of the inner track causing the speed
difference. I am re-creating the ZFS pool on the partition previously used
by UFS and repeating the tests.
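A rough sketch of the steps involved, assuming the pool is named "zfs" and the UFS slice is c0d0s3 as in the df output earlier in the thread (illustrative only; exact commands may differ on another layout):

    umount /ufs/fstest              # release the slice UFS was using
    zpool destroy zfs               # tear down the existing pool
    zpool create -f zfs c0d0s3      # rebuild the pool on the ex-UFS slice
    zfs create zfs/fstest           # filesystem for the test data
    zfs set atime=off zfs/fstest    # keep atime disabled, as in the first run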
eric kustarz wrote:
> William D. Hathaway wrote:
>
>> Hi,
>> As part of my "many small files" benchmarking effort, I did the
>> following test:
>>
>> * Create 500k files of approx. 13k in size (2-24k random distribution),
>> spread across 1000 directories, on a ZFS filesystem and a UFS
>> filesystem (the only mount/set option for both was disabling atime).
>>
>> * Use the Solaris bundled gtar to do runs of:
>>       time /usr/sfw/bin/gtar -cf - $fs/fstest | cat > /dev/null
>> (where $fs == ufs or zfs; I had to put the cat in there because
>> otherwise gtar doesn't actually read file contents if it detects
>> that /dev/null is the output device).
>>
>> I've completed 3 serial runs on UFS with times of 1:51:23, 1:52:32,
>> 1:52:17.
>> I've completed 2 serial runs on ZFS with times of 2:43:41 and 2:40:06
>> (a 3rd ZFS run is in progress).
>>
>> I'm running:
>> SunOS spaz 5.11 snv_36 i86pc i386 i86pc
>> on an 1800 MHz W110Z with a single disk.
>>
>> df -k /ufs/fstest /zfs/fstest
>> Filesystem            kbytes    used     avail    capacity  Mounted on
>> /dev/dsk/c0d0s3     20170273  6546310  13422261     33%     /ufs/fstest
>> zfs/fstest          48637952  6574605  42058168     14%     /zfs/fstest
>>
>> While I am thrilled with "zfs backup" performance, I was surprised
>> that UFS still came out roughly 40% faster when I ran the same gtar
>> test on each filesystem.
>>
>
> Hmmm, you're running 64-bit, right?
>
> I just did this on today's non-debug bits (3/31), and saw ZFS faster.
>
> For ZFS (uncaching data via export)...
>
> fsh-mullet# zpool export z
> fsh-mullet# zpool import z
> fsh-mullet# cd /z
> fsh-mullet# /bin/time /usr/sfw/bin/gtar -cf - . | cat > /dev/null
>
> real     3:26.7
> user       11.4
> sys      1:01.0
> fsh-mullet#
>
> Now for UFS (uncaching data via umount)...
>
> fsh-mullet# umount /ufs_gtar
> fsh-mullet# mount -F ufs /dev/dsk/c0t1d0s0 /ufs_gtar
> fsh-mullet# cd /ufs_gtar
> fsh-mullet# /bin/time /usr/sfw/bin/gtar -cf - . | cat > /dev/null
>
> real     4:07.2
> user       11.5
> sys        59.4
> fsh-mullet#
>
> Note, I did mine on a whole separate disk, and used the exact same disk
> for both UFS and ZFS (so I don't have to worry about which partition the
> filesystem is on). I did the gtar on 1000 directories of 500 2k
> files. Since you only have one disk, perhaps the ZFS filesystem is on
> an inner track and not getting the same throughput as the UFS filesystem.
>
> eric
>
--
William D. Hathaway email: william.hathaway at versatile.com
Solutions Architect aim: wdhPO
Versatile, Inc. cell: 717-314-5461
> Good call on the possibility of the inner track causing the speed
> difference. I am re-creating the ZFS pool on the partition previously
> used by UFS and repeating the tests.

FYI, on current 500G Hitachi drives, bandwidth goes from 64.8 MB/sec down to
31.0 MB/sec as you go from zone 0 to zone 29 (outer to inner).

Jeff
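One way to see this zone effect on a single-disk box is to time a raw sequential read near the start of the disk and another one further in. A hedged sketch, assuming the disk from this thread (c0d0) and that slice 2 spans the whole disk; raw-device reads only approximate what a filesystem will see, and the commands need root:

    # Outer tracks: read the first 1 GB of the disk
    /bin/time dd if=/dev/rdsk/c0d0s2 of=/dev/null bs=1024k count=1024

    # Inner tracks: read 1 GB starting ~18 GB into the disk
    # (iseek skips 18432 one-megabyte input blocks before reading)
    /bin/time dd if=/dev/rdsk/c0d0s2 of=/dev/null bs=1024k iseek=18432 count=1024

Comparing the two elapsed times gives a rough feel for how much throughput drops toward the inner zones.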
William D. Hathaway
2006-Apr-02 15:57 UTC
[zfs-discuss] Re: gtar on zfs vs gtar on ufs -- build 36
I repeated the tests using the same partition for the ZFS pool as I had used
for UFS, and ZFS now came out ahead, 1:41 to 1:52.

FWIW, I also noticed that zpool create warns about slices that overlap with
s2, which I don't remember seeing until b36:

# zpool create zfs c0d0s3
invalid vdev specification
use '-f' to override the following errors:
/dev/dsk/c0d0s3 overlaps with /dev/dsk/c0d0s2
# zpool create -f zfs c0d0s3

This message posted from opensolaris.org
Eric Schrock
2006-Apr-03 16:11 UTC
[zfs-discuss] Re: gtar on zfs vs gtar on ufs -- build 36
On Sun, Apr 02, 2006 at 08:57:03AM -0700, William D. Hathaway wrote:
>
> FWIW, I also noticed that zpool create warns about slices that overlap with
> s2, which I don't remember seeing until b36:
>
> # zpool create zfs c0d0s3
> invalid vdev specification
> use '-f' to override the following errors:
> /dev/dsk/c0d0s3 overlaps with /dev/dsk/c0d0s2
> # zpool create -f zfs c0d0s3

Hmmm, this is definitely a bug. Overlapping with the backup slice is
definitely "expected" ;-) I'll see if I can reproduce it and track down
what's happening.

- Eric

--
Eric Schrock, Solaris Kernel Development       http://blogs.sun.com/eschrock
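For anyone who hits the same warning, one way to confirm that the only overlap is with the backup slice is to inspect the disk label. A quick sketch against the disk used in this thread (c0d0); the device name is taken from the earlier df output:

    # Print the VTOC for the whole disk.  Slice 2 ("backup") normally
    # spans every cylinder, so every other slice "overlaps" it by design.
    prtvtoc /dev/rdsk/c0d0s2

    # Compare the First/Last sector columns in the output: only an overlap
    # between two data slices (e.g. s3 and s4 sharing cylinders) indicates
    # a real layout problem.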