I am running ZFS on 3 x 300GB HDDs and I see my disk activity going crazy all the time. Is there any reason for it? I have nothing running on this system; I just set it up for testing purposes. I do replicate data from a different system once a day through rsync, but that is a quick process and I am not sure why I am getting this I/O activity on the system.

                     extended device statistics
     r/s    w/s    kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
   351.3    0.0 41312.2    0.0  0.0 24.1    0.0   68.5   1  98 c1t1d0
   351.4    0.0 41312.3    0.0  0.0 24.1    0.0   68.5   1  98 c1t1d0s0
   340.3    0.0 41384.7    0.0  0.0 21.4    0.0   62.8   1  85 c1t2d0
   340.3    0.0 41384.8    0.0  0.0 21.4    0.0   62.8   1  85 c1t2d0s0
   355.4    0.0 41825.8    0.0  0.0 24.8    0.0   69.9   1 100 c1t3d0
   355.4    0.0 41825.9    0.0  0.0 24.8    0.0   69.9   1 100 c1t3d0s0
                     extended device statistics
     r/s    w/s    kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
   317.9    0.0 38718.9    0.0  0.0 30.8    0.0   97.0   1 100 c1t1d0
   317.9    0.0 38718.9    0.0  0.0 30.8    0.0   97.0   1 100 c1t1d0s0
   410.2    0.0 50768.9    0.0  0.0 25.7    0.0   62.6   1  96 c1t2d0
   410.2    0.0 50768.8    0.0  0.0 25.7    0.0   62.6   1  96 c1t2d0s0
   409.2    0.0 51087.7    0.0  0.0 34.1    0.0   83.3   1 100 c1t3d0
   409.2    0.0 51087.7    0.0  0.0 34.1    0.0   83.3   1 100 c1t3d0s0
                     extended device statistics
     r/s    w/s    kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
     0.0    9.0     0.0   32.5  0.0  0.4    0.0   41.3   0  14 c1t0d0
     0.0    9.0     0.0   32.5  0.0  0.4    0.0   41.3   0  14 c1t0d0s5
   432.0    0.0 53225.5    0.0  0.0 27.3    0.0   63.2   1  93 c1t1d0
   432.0    0.0 53225.5    0.0  0.0 27.3    0.0   63.2   1  93 c1t1d0s0
   306.0    0.0 36914.6    0.0  0.0 25.9    0.0   84.7   1  95 c1t2d0
   306.0    0.0 36914.6    0.0  0.0 25.9    0.0   84.7   1  95 c1t2d0s0
   336.0    0.0 40197.2    0.0  0.0 18.6    0.0   55.5   1  82 c1t3d0
   336.0    0.0 40197.2    0.0  0.0 18.6    0.0   55.5   1  82 c1t3d0s0
                     extended device statistics
     r/s    w/s    kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
   350.0    0.0 41241.2    0.0  0.0 26.5    0.0   75.8   1  94 c1t1d0
   350.0    0.0 41241.2    0.0  0.0 26.5    0.0   75.8   1  94 c1t1d0s0
   367.0    0.0 43291.3    0.0  0.0 28.4    0.0   77.3   1 100 c1t2d0
   367.0    0.0 43291.4    0.0  0.0 28.4    0.0   77.3   1 100 c1t2d0s0
   363.0    0.0 43679.1    0.0  0.0 26.3    0.0   72.4   1  96 c1t3d0
   363.0    0.0 43679.2    0.0  0.0 26.3    0.0   72.4   1  96 c1t3d0s0

I tried to see what is going on with the disks and killed any processes that might be doing anything (there was an rsync server set up, so I killed that). Anyway, when I do:

# fuser -c /d
/d:

nothing has any locks on it.

[09:33:34] root at adas: /root > zpool list
NAME                    SIZE    USED   AVAIL    CAP  HEALTH     ALTROOT
d                       832G    515G    317G    61%  ONLINE     -
[09:33:47] root at adas: /root > zpool status
  pool: d
 state: ONLINE
 scrub: scrub in progress, 35.37% done, 1h1m to go
config:

        NAME        STATE     READ WRITE CKSUM
        d           ONLINE       0     0     0
          raidz     ONLINE       0     0     0
            c1t1d0  ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0
            c1t3d0  ONLINE       0     0     0

[09:34:02] root at adas: /root > zpool iostat 1
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
d            515G   317G     93      7  7.25M   667K
d            515G   317G    714      0  87.6M      0
d            515G   317G    717      0  87.8M      0
d            515G   317G    671      0  83.5M      0
d            515G   317G    856      0   106M      0
d            515G   317G    699      0  85.5M      0
d            515G   317G    782      0  96.1M      0
d            515G   317G    718      0  88.3M      0
^C

Any idea what is going on and why there is so much reading going on? Thanks for any help or suggestions.

Chris
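One way to see which process is issuing reads like these, assuming DTrace is available (as it is on any Solaris 10 system), is to aggregate disk I/O by process name; scrub traffic is issued by the kernel, so it shows up as sched rather than as any user process:

    # Count disk I/O requests by the name of the issuing process; press
    # Ctrl-C to print the totals. Scrub reads are attributed to "sched".
    dtrace -n 'io:::start { @[execname] = count(); }'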
On Tue, 2006-05-09 at 09:44, Krzys wrote:
>  scrub: scrub in progress, 35.37% done, 1h1m to go

> Any idea what is going on and why there is so much reading going on?

See above: someone must have done a "zpool scrub" recently. (It's unfortunate that it doesn't tell you when the scrub started.)

I/O should stop abruptly when the scrub completes.

                                        - Bill
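If you would rather wait for completion than poll by hand, a minimal sketch (the pool name d is taken from the zpool output above) could be:

    # Poll once a minute until zpool status no longer reports an
    # in-progress scrub, then print the final pool status.
    while zpool status d | grep -q 'scrub in progress'; do
            sleep 60
    done
    zpool status d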
Yes, I did run that command, but it was quite a few days ago... :( Would it take that long to complete? I would never imagine it would... Is there any way to stop it?

Chris

On Tue, 9 May 2006, Bill Sommerfeld wrote:
> On Tue, 2006-05-09 at 09:44, Krzys wrote:
>>  scrub: scrub in progress, 35.37% done, 1h1m to go
>
>> Any idea what is going on and why there is so much reading going on?
>
> See above: someone must have done a "zpool scrub" recently. (It's
> unfortunate that it doesn't tell you when the scrub started.)
>
> I/O should stop abruptly when the scrub completes.
>
> - Bill
On Tue, May 09, 2006 at 11:04:05AM -0400, Krzys wrote:
> Yes, I did run that command, but it was quite a few days ago... :( Would
> it take that long to complete? I would never imagine it would... Is there
> any way to stop it?

Are you taking regular snapshots? There is currently a bug whereby snapshot creation/deletion will cause scrubs/resilvers to restart. You can stop the scrub with 'zpool scrub -s'.

- Eric

--
Eric Schrock, Solaris Kernel Development       http://blogs.sun.com/eschrock
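For the pool in this thread, stopping the scrub would look like this (pool name d from the earlier output):

    # Stop the in-progress scrub on pool "d". This discards the scrub's
    # progress; the data on the pool is not affected.
    zpool scrub -s d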
> Yes, I did run that command, but it was quite a few days ago... :( Would
> it take that long to complete? I would never imagine it would... Is there
> any way to stop it?
>
> Chris

The really odd part, Chris, is that the scrub indicates it's at 35.37% with 1 hour and 1 minute left to finish the other 64.63%. Are you sure you last started the scrub a few days ago, and don't have some script or someone else restarting a scrub?
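A quick way to rule out a scheduled scrub, assuming it would live in cron, is to search the crontab entries on the box:

    # Look for any cron entry mentioning a scrub, across all users
    # (Solaris keeps per-user crontabs under /var/spool/cron/crontabs).
    grep -il scrub /var/spool/cron/crontabs/* 2>/dev/null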
This did work and all activity has stopped. :) Thank you.

Chris

On Tue, 9 May 2006, Eric Schrock wrote:
> On Tue, May 09, 2006 at 11:04:05AM -0400, Krzys wrote:
>> Yes, I did run that command, but it was quite a few days ago... :( Would
>> it take that long to complete? I would never imagine it would... Is
>> there any way to stop it?
>
> Are you taking regular snapshots? There is currently a bug whereby
> snapshot creation/deletion will cause scrubs/resilvers to restart. You
> can stop the scrub with 'zpool scrub -s'.
>
> - Eric
>
> --
> Eric Schrock, Solaris Kernel Development    http://blogs.sun.com/eschrock
Well, I did manually start it a few days ago, unless there is some automatic way to start it... Where could I find out whether it starts automatically or not? Thanks for the info. I like having a system to play with and experiment on; I would hate for it to be in production and behave like this while I don't know what's going on, but I am so glad you guys are here to help. :) I really do appreciate your help and support... ZFS is amazing :) and, what's fun, it's getting better and better with each update. :))) Thanks for that.

Chris

On Tue, 9 May 2006, Wes Williams wrote:
>> Yes, I did run that command, but it was quite a few days ago... :( Would
>> it take that long to complete? I would never imagine it would... Is
>> there any way to stop it?
>>
>> Chris
>
> The really odd part, Chris, is that the scrub indicates it's at 35.37%
> with 1 hour and 1 minute left to finish the other 64.63%. Are you sure
> you last started the scrub a few days ago, and don't have some script or
> someone else restarting a scrub?
Actually, this would explain the behavior: because I have regular snapshots taken every hour, the scrub keeps restarting, and that is why I am seeing it. Is there any way to disable scrubs until this is fixed, or somehow prevent them from starting? I can certainly add an additional line to the script that takes my snapshots to just kill the scrub.

Thanks for the help.

Chris

On Tue, 9 May 2006, Krzys wrote:
> This did work and all activity has stopped. :) Thank you.
>
> Chris
>
> On Tue, 9 May 2006, Eric Schrock wrote:
>> On Tue, May 09, 2006 at 11:04:05AM -0400, Krzys wrote:
>>> Yes, I did run that command, but it was quite a few days ago... :(
>>> Would it take that long to complete? I would never imagine it would...
>>> Is there any way to stop it?
>>
>> Are you taking regular snapshots? There is currently a bug whereby
>> snapshot creation/deletion will cause scrubs/resilvers to restart. You
>> can stop the scrub with 'zpool scrub -s'.
>>
>> - Eric
>>
>> --
>> Eric Schrock, Solaris Kernel Development   http://blogs.sun.com/eschrock
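A minimal sketch of that workaround is below; the dataset name d/data and the snapshot naming scheme are hypothetical, and only the zpool scrub -s call is the fix being discussed:

    #!/bin/sh
    # Hypothetical hourly snapshot script with a workaround for the bug
    # where snapshot creation restarts an in-progress scrub.
    SNAP=d/data@hourly-`date +%Y%m%d%H%M`
    zfs snapshot $SNAP
    # Cancel any scrub the snapshot may have (re)started; ignore the
    # error printed when no scrub is running.
    zpool scrub -s d 2>/dev/null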