Jim Mauro
2009-Mar-23 23:44 UTC
[zfs-discuss] Copying thousands of small files on an expanded ZFS pool crawls to poor performance - not on other pools.
Cross-posting to the public ZFS discussion alias. There's nothing here
that requires confidentiality, and the public alias is a much broader
audience with a larger number of experienced ZFS users...

As to the issue - what is the free space disparity across the pools?
Is the one particular pool significantly tighter on free space than
the other pools ("zpool list")?

Thanks,
/jim

Nobel Shelby wrote:
> Customer has many large zfs pools. He does the same on all pools:
> copying large numbers of small files (1-5K) overnight.
> All pools behave fine except one particular pool (which has been
> expanded); on that one:
> -- the copying crawls within a few minutes and the zpool looks
> unresponsive.
>
> Background:
> He had to grow this particular pool twice over a period of time (it
> was 6TB and grew by 4TB twice - it is now 14TB).
> Solaris was U4 but is now U6.
>
> They have limited the ARC:
> set zfs:zfs_arc_max=0x100000000
> and
> zfs:zfs_nocacheflush=1 (they have a 6540 array).
>
> Does expanding the pool affect performance, and if so, what is the
> best way to recover (other than rebuilding the pool)?
>
> Thanks,
> -Nobel
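For reference, the checks Jim suggests could be run as follows (a
minimal sketch; "tank" stands in for the actual pool name):

    # zpool list
    # zfs list -o name,used,avail,refer -r tank

The first shows each pool's overall size, usage, and capacity; the
second shows how the space is spread across the datasets in a given
pool.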
Roch
2009-Mar-24 10:08 UTC
[zfs-discuss] Copying thousands of small files on an expanded ZFS pool crawls to poor performance - not on other pools.
Hi Nobel,

    zpool iostat -v

for a working pool and for a problem pool would help to see the type
of pool and its capacity.

I assume the problem is not the source of the data. Reading a large
number of small files typically requires lots and lots of threads
(say 100 per source disk).

Is data coming into the pool through NFS/CIFS/direct?

-r

Jim Mauro writes:
> Cross-posting to the public ZFS discussion alias. There's nothing
> here that requires confidentiality, and the public alias is a much
> broader audience with a larger number of experienced ZFS users...
>
> As to the issue - what is the free space disparity across the pools?
> Is the one particular pool significantly tighter on free space than
> the other pools ("zpool list")?
>
> Thanks,
> /jim
>
> Nobel Shelby wrote:
> > Customer has many large zfs pools. He does the same on all pools:
> > copying large numbers of small files (1-5K) overnight.
> > All pools behave fine except one particular pool (which has been
> > expanded); on that one:
> > -- the copying crawls within a few minutes and the zpool looks
> > unresponsive.
> >
> > Background:
> > He had to grow this particular pool twice over a period of time (it
> > was 6TB and grew by 4TB twice - it is now 14TB).
> > Solaris was U4 but is now U6.
> >
> > They have limited the ARC:
> > set zfs:zfs_arc_max=0x100000000
> > and
> > zfs:zfs_nocacheflush=1 (they have a 6540 array).
> >
> > Does expanding the pool affect performance, and if so, what is the
> > best way to recover (other than rebuilding the pool)?
> >
> > Thanks,
> > -Nobel
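To illustrate Roch's two points (a sketch only; pool names and paths
below are placeholders): the per-vdev view he asks for is

    # zpool iostat -v workingpool 5
    # zpool iostat -v problempool 5

and one crude way to get many concurrent copy streams is to background
one tar pipeline per top-level source directory, roughly:

    #!/bin/ksh
    # Rough sketch: one backgrounded copy stream per top-level
    # directory under /source.  /source and /dest are assumed paths;
    # adjust the fan-out to taste - Roch suggests on the order of
    # 100 threads per source disk.
    for d in /source/*; do
            b=`basename "$d"`
            mkdir -p "/dest/$b"
            ( cd "$d" && tar cf - . | ( cd "/dest/$b" && tar xf - ) ) &
    done
    wait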
Nobel Shelby
2009-Mar-24 16:02 UTC
[zfs-discuss] Copying thousands of small files on an expanded ZFS pool crawls to poor performance - not on other pools.
Jim,

There are no space constraints and no quotas...

Thanks,
-Nobel

Jim Mauro wrote:
> Cross-posting to the public ZFS discussion alias. There's nothing
> here that requires confidentiality, and the public alias is a much
> broader audience with a larger number of experienced ZFS users...
>
> As to the issue - what is the free space disparity across the pools?
> Is the one particular pool significantly tighter on free space than
> the other pools ("zpool list")?
>
> Thanks,
> /jim
>
> Nobel Shelby wrote:
>> Customer has many large zfs pools. He does the same on all pools:
>> copying large numbers of small files (1-5K) overnight.
>> All pools behave fine except one particular pool (which has been
>> expanded); on that one:
>> -- the copying crawls within a few minutes and the zpool looks
>> unresponsive.
>>
>> Background:
>> He had to grow this particular pool twice over a period of time (it
>> was 6TB and grew by 4TB twice - it is now 14TB).
>> Solaris was U4 but is now U6.
>>
>> They have limited the ARC:
>> set zfs:zfs_arc_max=0x100000000
>> and
>> zfs:zfs_nocacheflush=1 (they have a 6540 array).
>>
>> Does expanding the pool affect performance, and if so, what is the
>> best way to recover (other than rebuilding the pool)?
>>
>> Thanks,
>> -Nobel
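For completeness, the absence of quotas and reservations can be
confirmed with something like (a sketch; "tank" is a placeholder pool
name):

    # zfs get -r quota,reservation tank
    # zpool list tank

If every dataset reports "none" for both properties and zpool list
shows ample free space, the slowdown is unlikely to be a space or
quota issue.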