Hello,

I ran a zpool scrub on two zpools: one located on internal SAS drives, the second on external USB SATA drives.

The internal pool finished scrubbing in no time, while the external pool is taking incredibly long. Typical data transfer rate to this external pool is 80 MB/s.

Any help would be greatly appreciated.

justin

# zpool status external
  pool: external
 state: ONLINE
 scrub: scrub in progress, 0.01% done, 161h29m to go
config:

        NAME          STATE     READ WRITE CKSUM
        external      ONLINE       0     0     0
          raidz2      ONLINE       0     0     0
            c5t0d0s0  ONLINE       0     0     0
            c6t0d0s0  ONLINE       0     0     0
            c7t0d0s0  ONLINE       0     0     0

errors: No known data errors
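In case it helps, the 161h29m estimate at 0.01% done is usually very rough that early in a scrub; a minimal loop to watch how it settles (plain shell, assuming only the pool name from the output above):

    # re-check scrub progress every 10 minutes
    while true; do
        zpool status external | grep scrub
        sleep 600
    done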
On 06 March, 2008 - Justin Vassallo sent me these 12K bytes:

> I ran a zpool scrub on 2 zpools. one located on internal sas drives, the
> second on external, USB SATA drives.
>
> The internal pool finished scrubbing in no time, while the external pool is
> taking incredibly long.

Are you taking periodic snapshots? Currently that will restart scrubs.

/Tomas
--
Tomas Ögren, stric at acc.umu.se, http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se
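A quick way to check whether snapshots are in fact being created while the scrub runs (a sketch; the crontab check assumes any snapshot job would be driven from root's cron):

    # newest snapshots last - were any created after the scrub started?
    zfs list -t snapshot -o name,creation -s creation | tail

    # look for a scheduled snapshot job
    crontab -l | grep -i snapshot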
Insufficient data. How big is the pool? How much is stored? Are the external drives all on the same USB bus?

I am switching to eSATA for my next external drive setup, as both USB 2.0 and FireWire are just too fricking slow for the large drives these days.
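For reference, a sketch of the commands that would answer those questions (the grep pattern is only an assumption about how the USB attachment points are named on the box):

    zpool list external       # total size and allocated space for the pool
    zfs list -r external      # per-dataset usage
    cfgadm -al | grep usb     # which USB controllers/ports the disks hang off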
Each partition in the pool is 320G; the disks have only one partition each. Each disk is connected to a separate USB 2.0 port on an X4200 M2.

The scrub took around 6 hrs to complete, which I am told is acceptable (I was not aware it takes so long when I first posted; thanks to those who replied).

What I must note is that the file systems on these pools were terribly slow and unusable during the whole scrub, which I understand is not normal. During this time, the disks were 30% busy (which is normal for these disks), LWP switching was quite low at 10k/second, and the CPU was relaxed at <10%.

Should I think that I have an I/O bottleneck, or would this fs locking be considered weird ZFS behavior?

Thanks
justin
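One way to separate the two would be to watch per-vdev throughput and per-device service times while a scrub is running; a sketch (the interpretation of the columns is my own, not something confirmed in the thread):

    # per-vdev read/write bandwidth, sampled every 5 seconds
    zpool iostat -v external 5

    # per-device latency; high asvc_t with modest %b points at the USB path,
    # low latency everywhere points back at ZFS itself
    iostat -xn 5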
I am using b68 or b69 (can't remember) and the scrubs take forever; they never finish. It turned out to be a bug in this OpenSolaris version. I posted that question here somewhere, and it was a confirmed bug.
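If you are unsure which build a box is running, a quick check (the exact wording of /etc/release varies by release, so the snv_68 example string is an assumption):

    cat /etc/release    # e.g. "Solaris Express Community Edition snv_68 X86"
    uname -v            # kernel build string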