Chris Linton-Ford
2008-Feb-05 14:53 UTC
[zfs-discuss] ls on directory in ZFS f/s unusably slow
Hi all,
I posted to osol-discuss with this but got no resolution - I'm hoping a
more focused mailing list will yield better results.
I have a ZFS filesystem (zpool version 8, on SXDE 9/07) on which there
was a directory that contained a large number of files (~2 million).
After deleting these files, an ls of the directory still takes on the
order of 2 minutes, and iostat shows lots of disk reads across the 3
disks in the raidz. All snapshots of the filesystem have been deleted.
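For reference, this is roughly how I'm timing the listing. (DIR below is
a placeholder for the affected directory's mountpoint; it defaults to
"." here just so the snippet is runnable as-is.)

```shell
# Placeholder for the affected directory; defaults to "." for illustration.
DIR=${DIR:-.}

# Time the listing itself; -f avoids sorting, so this measures the
# directory read rather than the sort.
time ls -f "$DIR" > /dev/null

# In another terminal, watch per-disk activity while the listing runs
# (Solaris extended iostat, 5-second intervals):
# iostat -xn 5
```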
A scrub of the zpool comes back with no errors, but zdb reports checksum
errors:
Error counts:
errno count
50 8
leaked space: vdev 0, offset 0x4d0b2ae000, size 137216
leaked space: vdev 0, offset 0x4d0be6f800, size 121856
leaked space: vdev 0, offset 0x4de9f00000, size 196608
leaked space: vdev 0, offset 0x4df2986800, size 161792
leaked space: vdev 0, offset 0x4df2ca8400, size 134144
leaked space: vdev 0, offset 0x4df2b03800, size 104448
leaked space: vdev 0, offset 0x4df2560000, size 196608
leaked space: vdev 0, offset 0x4db94ea000, size 25600
block traversal size 622518934528 != alloc 622520012800 (leaked 1078272)
        bp count:            7157530
        bp logical:     495490334208   avg: 69226
        bp physical:    412306654208   avg: 57604   compression: 1.20
        bp allocated:   622518934528   avg: 86973   compression: 0.80
        SPA allocated:  622520012800   used: 83.30%
                        capacity    operations   bandwidth   ---- errors ----
description            used avail  read  write  read write   read write cksum
data                   580G  116G   234      0 11.5M     0      0     0    27
  raidz1               580G  116G   234      0 11.5M     0      0     0    27
    /dev/dsk/c0t1d0s0               228      0 5.76M     0      0     0     0
    /dev/dsk/c0t2d0s0               228      0 5.76M     0      0     0     0
    /dev/dsk/c0t3d0s0               228      0 5.76M     0      0     0     0
Is this a known bug in this version of ZFS? Is there anything I can do
about it, or do I need to rebuild the pool?
Also, will these errors persist if I do a zpool upgrade?
Thanks in advance for any help,
Chris