Anantha N. Srirama
2007-Jan-13 16:05 UTC
[zfs-discuss] Extremely poor ZFS perf and other observations
I'm observing the following behavior in our environment (Sol10U2, E2900, 24x96, 2x2Gbps, ...):

- I've a compressed ZFS filesystem where I'm creating a large tar file. I notice that the tar process is running fine (accumulating CPU, truss shows writes, ...), but for whatever reason the timestamp on the file doesn't change, nor does the file size. The same is true for 'zpool list' output; the usage numbers don't change for minutes at a time.

- I started a tar job writing to the compressed ZFS filesystem, reading from another compressed ZFS filesystem. At the same time I started copying files from another ZFS filesystem (same pool & same attributes) to a remote server (GigE connection) using scp, writing to a UFS filesystem. Guess what? My scp over the wire beat the pants off of the local ZFS tar session writing to a 2x2Gbps SAN and EMC disks!

I'm beginning to develop serious reservations about ZFS performance, especially with the compression feature turned on.
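One quick way to tell whether the size and usage numbers are updating in bursts (batched by ZFS's transaction-group syncs) rather than not at all is to watch them at one-second intervals. A rough sketch; the pool name 'tank' and the path /tank/comp/big.tar are hypothetical stand-ins for the real names:

    #!/bin/sh
    # Sample the file size and pool usage once a second while the tar
    # job runs. If the numbers jump every few seconds rather than
    # never, writes are being batched on sync, not stalled.
    while true; do
        ls -l /tank/comp/big.tar
        zpool list tank
        sleep 1
    done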
Eric Kustarz
2007-Jan-16 23:44 UTC
[zfs-discuss] Extremely poor ZFS perf and other observations
Anantha N. Srirama wrote:

> I'm observing the following behavior in our environment (Sol10U2, E2900, 24x96, 2x2Gbps, ...)

In general, I would recommend upgrading to S10U3 (if you can).

> - I've a compressed ZFS filesystem where I'm creating a large tar file. I notice that the tar process is running fine (accumulating CPU, truss shows writes, ...), but for whatever reason the timestamp on the file doesn't change, nor does the file size. The same is true for 'zpool list' output; the usage numbers don't change for minutes at a time.
>
> - I started a tar job to the compressed ZFS filesystem, reading from another compressed ZFS filesystem. At the same time I started copying files from another ZFS filesystem (same pool & same attributes) to a remote server (GigE connection) using scp, writing to a UFS filesystem. Guess what? My scp over the wire beat the pants off of the local ZFS tar session writing to a 2x2Gbps SAN and EMC disks!

Can you send the actual command you ran (is the tar job creating a tar archive or extracting files)? Is this only a problem when compression is turned on? If so, I suspect it's this bug:

6460622 zio_nowait() doesn't live up to its name
http://bugs.opensolaris.org/view_bug.do?bug_id=6460622

Which should be putback very shortly.

eric
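Eric's question (does the slowdown track compression?) lends itself to a simple A/B test between two otherwise-identical filesystems. A rough sketch; the pool name 'tank' and the source directory /data are hypothetical:

    #!/bin/sh
    # Create two filesystems in the same pool, one compressed, one not.
    zfs create tank/comp
    zfs set compression=on tank/comp
    zfs create tank/nocomp
    zfs set compression=off tank/nocomp

    # Time the same archive job into each; a large gap between the two
    # implicates the compressed write path (e.g., bug 6460622).
    time gtar cf /tank/comp/test.tar /data
    time gtar cf /tank/nocomp/test.tar /data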
Anantha N. Srirama
2007-Jan-17 13:43 UTC
[zfs-discuss] Re: Extremely poor ZFS perf and other observations
U3 is under consideration; we're going through some rudimentary testing of the update.

I ran the following command: gtar cf <file in a new ZFS pool with compression>. I was just creating a tar file. The remote copying was done as follows: scp -c arcfour . user@remotehost:/<UFS filesystem>

BTW, the reverse operation of repopulating my FS (by untarring the local tar file) was extremely slow. Methinks it was 2x slower; I averaged 5MB/s.

Let me run some more experiments before I conclusively say the problems are related to compression. Initial observations suggest that it is, for the following reasons: I ran 4 parallel gtar sessions reading from 4 ZFS filesystems with compression, writing to a new ZFS filesystem with compression on. My aggregate I/O never changed whether I ran 1 stream or 4 streams; it never exceeded around 20MB/s. My I/O sub-system was idling most of the time, with sub-10ms write response times per 'iostat'. If compression is not the issue, how else can I explain the magical 20MB/s ceiling no matter how many write streams I had going?
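For the record, the parallel-stream experiment was along these lines; the pool name 'tank', the four source filesystems, and the destination are all hypothetical placeholders:

    #!/bin/sh
    # Launch four gtar streams into the compressed filesystem in
    # parallel, then watch aggregate pool throughput. If the write
    # path is serialized (as with a compression bottleneck), total
    # bandwidth should stay flat as the stream count grows.
    for src in /tank/fs1 /tank/fs2 /tank/fs3 /tank/fs4; do
        gtar cf /tank/comp/`basename $src`.tar $src &
    done

    # Sample aggregate pool throughput every 5 seconds, 12 samples,
    # then wait for the background tar jobs to finish.
    zpool iostat tank 5 12
    wait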