Displaying 20 results from an estimated 2000 matches similar to: "resilver speed."
2009 Jun 19
8
x4500 resilvering spare taking forever?
I've got a Thumper running snv_57 and a large ZFS pool. I recently noticed a drive throwing some read errors, so I did the right thing and zfs replaced it with a spare.
Everything went well, but the resilvering process seems to be taking an eternity:
# zpool status
pool: bigpool
state: ONLINE
status: One or more devices has experienced an unrecoverable error. An
attempt was
2010 Sep 29
10
Resilver making the system unresponsive
This must be resilver day :)
I just had a drive failure. The hot spare kicked in, and access to the pool over NFS was effectively zero for about 45 minutes. Currently the pool is still resilvering, but for some reason I can access the file system now.
Resilver speed has been beaten to death, I know, but is there a way to avoid this? For example, is more enterprisey hardware less susceptible to
2010 Jul 05
5
never ending resilver
Hi list,
Here's my case:
pool: mypool
state: DEGRADED
status: One or more devices is currently being resilvered. The pool will
continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
scrub: resilver in progress for 147h19m, 100.00% done, 0h0m to go
config:
NAME STATE READ WRITE CKSUM
filerbackup13
2009 Jul 10
5
Slow Resilvering Performance
I know this topic has been discussed many times... but what the hell
makes zpool resilvering so slow? I'm running OpenSolaris 2009.06.
I have had a large number of problematic disks due to a bad production
batch, leading me to resilver quite a few times, progressively
replacing each disk as it dies (and now preemptively removing disks.)
My complaint is that resilvering ends up
2010 Sep 02
5
what is zfs doing during a log resilver?
So, when you add a log device to a pool, it initiates a resilver.
What is it actually doing, though? Isn't the slog a copy of the
in-memory intent log? Wouldn't it just simply replicate the data that's
in the other log, checked against what's in RAM? And presumably there
isn't that much data in the slog so there isn't that much to check?
Or
2010 Oct 16
4
resilver question
Hi all
I'm seeing some rather bad resilver times for a pool of WD Green drives (I know, bad drives, but leave that). Does a resilver go through the whole pool or just the vdev in question?
--
Vennlige hilsener / Best regards
roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
roy at karlsbakk.net
http://blogg.karlsbakk.net/
--
In all pedagogy it is essential that the curriculum is presented
2010 Oct 17
10
RaidzN blocksize ... or blocksize in general ... and resilver
The default blocksize is 128K. If you are using mirrors, then each block on
disk will be 128K whenever possible. But if you're using raidzN with a
capacity of M disks (M disks useful capacity + N disks redundancy) then the
block size on each individual disk will be 128K / M. Right? This is one of
the reasons the raidzN resilver code is inefficient. Since you end up
waiting for the
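The arithmetic in that snippet can be checked directly. A minimal sketch, with an illustrative vdev geometry (the 10-disk raidz2 and device counts below are assumptions, not from the thread):

```shell
#!/bin/sh
# A 128K record is striped across the M data disks of a raidzN vdev,
# so each disk holds roughly recordsize / M bytes of the block.
RECORDSIZE=131072   # default 128K recordsize, in bytes
M=8                 # hypothetical: 10-disk raidz2 -> 8 data disks
echo $((RECORDSIZE / M))   # per-disk chunk: 16384 bytes (16K)
```

Smaller per-disk chunks mean more, smaller I/Os per reconstructed block, which is one reason wide raidzN vdevs resilver slowly on seek-bound disks.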
2009 Jul 13
7
OpenSolaris 2008.11 - resilver still restarting
Just look at this. I thought all the restarting resilver bugs were fixed, but it looks like something odd is still happening at the start:
Status immediately after starting resilver:
# zpool status
pool: rc-pool
state: DEGRADED
status: One or more devices has experienced an unrecoverable error. An
attempt was made to correct the error. Applications are unaffected.
action: Determine
2010 Apr 14
1
Checksum errors on and after resilver
Hi all,
I recently experienced a disk failure on my home server and observed checksum errors while resilvering the pool and on the first scrub after the resilver had completed. Now everything seems fine but I''m posting this to get help with calming my nerves and detect any possible future faults.
Lets start with some specs.
OSOL 2009.06
Intel SASUC8i (w LSI 1.30IT FW)
Gigabyte
2010 Nov 01
6
Excruciatingly slow resilvering on X4540 (build 134)
Hello,
I'm working with someone who replaced a failed 1TB drive (50% utilized),
on an X4540 running OS build 134, and I think something must be wrong.
Last Tuesday afternoon, zpool status reported:
scrub: resilver in progress for 306h0m, 63.87% done, 173h7m to go
and, a week being 168 hours, that put completion at sometime tomorrow night.
However, he just reported zpool status shows:
2008 May 27
6
slog devices don't resilver correctly
This past weekend, my holiday was ruined due to a log device
"replacement" gone awry.
I posted all about it here:
http://jmlittle.blogspot.com/2008/05/problem-with-slogs-how-i-lost.html
In a nutshell, a resilver of a single log device with itself (forced by
the fact that one can't remove a log device from a pool once defined) caused
ZFS to fully resilver but then attach the log
2010 Dec 20
3
Resilvering vs. scrubbing - what's the difference?
Hello All
I read the thread "Resilver/scrub times?" for a few minutes
and realized that I don't know the difference between
resilvering and scrubbing. Shame on me. :-(
I can't find an explanation in the man pages. I know the command
to start scrubbing ("zpool scrub tank"),
but what is the command to start a resilver, and what is the difference?
--
Best Regards
Alexander
December, 20
2009 Oct 30
1
internal scrub keeps restarting resilvering?
After several days of trying to get a 1.5TB drive to resilver and it
continually restarting, I eliminated all of the snapshot-taking
facilities which were enabled and
2009-10-29.14:58:41 [internal pool scrub done txg:567780] complete=0
2009-10-29.14:58:41 [internal pool scrub txg:567780] func=1 mintxg=3
maxtxg=567354
2009-10-29.16:52:53 [internal pool scrub done txg:567999] complete=0
2009 Mar 30
3
Data corruption during resilver operation
I'm in well over my head with this report from zpool status saying:
root # zpool status z3
pool: z3
state: DEGRADED
status: One or more devices has experienced an error resulting in data
corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
entire pool from backup.
see: http://www.sun.com/msg/ZFS-8000-8A
2009 Nov 22
9
Resilver/scrub times?
Hi all!
I've decided to take the "big jump" and build a ZFS home filer (although it
might also do "other work" like caching DNS, mail, usenet, bittorrent and so
forth). YAY! I wonder if anyone can shed some light on how long a pool scrub
would take on a fairly decent rig. These are the specs as-ordered:
Asus P5Q-EM mainboard
Core2 Quad 2.83 GHZ
8GB DDR2/80
OS:
2 x
2006 Nov 03
27
# devices in raidz.
for s10u2, documentation recommends 3 to 9 devices in raidz. what is the
basis for this recommendation? i assume it is performance and not failure
resilience, but i am just guessing... [i know, recommendation was intended
for people who know their raid cold, so it needed no further explanation]
thanks... oz
--
ozan s. yigit | oz at somanetworks.com | 416 977 1414 x 1540
I have a hard time
2010 Sep 29
7
Is there any way to stop a resilver?
Is there any way to stop a resilver?
We gotta stop this thing - at minimum, completion time is 300,000 hours, and
maximum is in the millions.
Raidz2 array, so it has the redundancy, we just need to get data off.
2011 Jun 01
11
SATA disk perf question
I figure this group will know better than any other I have contact
with, is 700-800 I/Ops reasonable for a 7200 RPM SATA drive (1 TB Sun
badged Seagate ST31000N in a J4400) ? I have a resilver running and am
seeing about 700-800 writes/sec. on the hot spare as it resilvers.
There is no other I/O activity on this box, as this is a remote
replication target for production data. I have a the
2010 Oct 20
5
Myth? 21 disk raidz3: "Don't put more than ___ disks in a vdev"
In a discussion a few weeks back, it was mentioned that the Best Practices
Guide says something like "Don't put more than ___ disks into a single
vdev." At first, I challenged this idea, because I see no reason why a
21-disk raidz3 would be bad. It seems like a good thing.
I was operating on the assumption that resilver time was limited by sustainable
throughput of disks, which
2007 Apr 10
15
Poor man's backup by attaching/detaching mirror drives on a _striped_ pool?
Hi,
one quick&dirty way of backing up a pool that is a mirror of two devices is to
zpool attach a third one, wait for the resilvering to finish, then zpool detach
it again.
The third device then can be used as a poor man's simple backup.
Has anybody tried it yet with a striped mirror? What if the pool is
composed of two mirrors? Can I attach devices to both mirrors, let them
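The attach/resilver/detach procedure that question describes could be sketched as follows. Pool and device names are hypothetical, and this is an untested illustration of the idea, not a recommended backup recipe:

```shell
#!/bin/sh
# Attach a third disk to an existing two-way mirror; ZFS starts a resilver
# automatically (pool "tank" and devices c0t1d0/c0t9d0 are made up).
zpool attach tank c0t1d0 c0t9d0

# Wait until the resilver completes before detaching, or the copy is partial.
while zpool status tank | grep -q "resilver in progress"; do
    sleep 60
done

# Detach the freshly synced disk; it now holds a point-in-time copy.
zpool detach tank c0t9d0
```

Note that a detached disk is no longer a pool member, so reading it back later means re-attaching (and re-resilvering); `zpool split`, where available, is the cleaner way to turn a mirror half into an importable pool.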