Displaying 20 results from an estimated 1000 matches similar to: "x4500 resilvering spare taking forever?"
2010 Nov 01
6
Excruciatingly slow resilvering on X4540 (build 134)
Hello,
I'm working with someone who replaced a failed 1TB drive (50% utilized)
on an X4540 running OS build 134, and I think something must be wrong.
Last Tuesday afternoon, zpool status reported:
scrub: resilver in progress for 306h0m, 63.87% done, 173h7m to go
and, a week being 168 hours, that put completion at some time tomorrow night.
However, he just reported zpool status shows:
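For anyone tracking a resilver like this, the progress line can be polled from the shell; a minimal sketch, assuming the pool is named "tank" (the name is illustrative):
  # print the scan progress line once an hour to watch the ETA drift
  while true; do
      zpool status tank | grep "resilver in progress"
      sleep 3600
  done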
2008 Sep 05
6
resilver speed.
Is there any way to control the resilver speed? Having attached a third disk to a mirror (so I can replace the other disks with larger ones), I find the resilver goes at a fraction of the speed of the same operation using DiskSuite. However, it still renders the system pretty much unusable for anything else.
So I would like to control the rate of the resilver: either slow it down a lot so that the
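On later OpenSolaris/illumos builds the resilver rate can be nudged with kernel tunables; a sketch, assuming a build with the rewritten scan code (the names below are the illumos tunables, set live with mdb):
  # make resilver more aggressive: no delay between I/Os and a
  # longer time slice per txg (defaults are 2 ticks and 3000 ms)
  echo zfs_resilver_delay/W0t0 | mdb -kw
  echo zfs_resilver_min_time_ms/W0t5000 | mdb -kw
  # to slow it down instead, raise the delay:
  echo zfs_resilver_delay/W0t10 | mdb -kw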
2010 Sep 02
5
what is zfs doing during a log resilver?
So, when you add a log device to a pool, it initiates a resilver.
What is it actually doing, though? Isn't the slog a copy of the
in-memory intent log? Wouldn't it just simply replicate the data that's
in the other log, checked against what's in RAM? And presumably there
isn't that much data in the slog so there isn't that much to check?
Or
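For context, the operation in question is the one-liner below; a sketch, with device names invented for illustration:
  # adding a dedicated log device to an existing pool
  zpool add tank log c3t0d0
  # or a mirrored pair, which is the safer configuration
  zpool add tank log mirror c3t0d0 c3t1d0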
2010 Dec 20
3
Resilvering - Scrubbing: what's the difference?
Hello All
I read the thread "Resilver/scrub times?" for a few minutes
and realized that I don't know the difference between
resilvering and scrubbing. Shame on me. :-(
I can't find an explanation in the man pages. I know the command
to start scrubbing, "zpool scrub tank",
but what is the command to start a resilver, and what is the difference?
--
Best Regards
Alexander
December, 20
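The short answer, sketched below: a scrub is started by hand, while a resilver starts automatically whenever a device is replaced, attached, or brought back online (pool and device names are illustrative):
  # scrub: explicit, reads and verifies every copy of every block
  zpool scrub tank
  # resilver: implicit, triggered by a topology change such as
  zpool replace tank c1t2d0            # swap in a new disk
  zpool attach tank c1t0d0 c1t3d0      # grow a mirror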
2009 Jul 10
5
Slow Resilvering Performance
I know this topic has been discussed many times... but what the hell
makes zpool resilvering so slow? I'm running OpenSolaris 2009.06.
I have had a large number of problematic disks due to a bad production
batch, leading me to resilver quite a few times, progressively
replacing each disk as it dies (and now preemptively removing disks).
My complaint is that resilvering ends up
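One way to see why a resilver is slow is to check whether the disks are seek-bound: high %b with low kr/s and kw/s means lots of small random I/O. A sketch using standard Solaris iostat:
  # extended per-device stats, 5-second intervals; watch asvc_t and %b
  iostat -xn 5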
2010 Oct 16
4
resilver question
Hi all
I'm seeing some rather bad resilver times for a pool of WD Green drives (I know, bad drives, but leave that). Does resilver go through the whole pool or just the VDEV in question?
--
Vennlige hilsener / Best regards
roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
roy at karlsbakk.net
http://blogg.karlsbakk.net/
--
In all pedagogy it is essential that the curriculum is presented
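One empirical way to answer this is to watch per-vdev activity while the resilver runs; a sketch, assuming a pool named "tank":
  # -v breaks the numbers out per vdev and per leaf device, so it is
  # visible whether only the degraded vdev is being written to
  zpool iostat -v tank 5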
2010 Jul 05
5
never ending resilver
Hi list,
Here's my case:
pool: mypool
state: DEGRADED
status: One or more devices is currently being resilvered. The pool will
continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
scrub: resilver in progress for 147h19m, 100.00% done, 0h0m to go
config:
NAME STATE READ WRITE CKSUM
filerbackup13
2009 Oct 30
1
internal scrub keeps restarting resilvering?
After several days of trying to get a 1.5TB drive to resilver, with it
continually restarting, I eliminated all of the snapshot-taking
facilities that were enabled, and
2009-10-29.14:58:41 [internal pool scrub done txg:567780] complete=0
2009-10-29.14:58:41 [internal pool scrub txg:567780] func=1 mintxg=3
maxtxg=567354
2009-10-29.16:52:53 [internal pool scrub done txg:567999] complete=0
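For others hitting this: on OpenSolaris the usual suspects are the zfs-auto-snapshot SMF instances, and zpool history -i (the source of the log above) shows whether the scrub keeps being reset; a sketch, with an illustrative pool name:
  # list and disable the automatic snapshot services
  svcs | grep auto-snapshot
  svcadm disable auto-snapshot:frequent
  svcadm disable auto-snapshot:hourly
  # then watch for further internal "scrub done complete=0" events
  zpool history -i mypool | tail -20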
2009 Jan 27
5
Replacing HDD in x4500
The vendor wanted to come in and replace an HDD in the 2nd X4500, as it
was "constantly busy", and since our X4500 has always died miserably in
the past when an HDD dies, they wanted to replace it before the HDD
actually died.
The usual was done: HDD replaced, resilvering started and ran for about
50 minutes. Then the system hung, same as always; all ZFS-related
commands would just
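For reference, the usual X4500 replacement flow is roughly the following; a sketch, with the SATA attachment point and device name invented for illustration:
  # find the attachment point of the failed disk
  cfgadm | grep sata
  # unconfigure it before pulling the drive
  cfgadm -c unconfigure sata1/3
  # after inserting the new drive, configure it and replace in ZFS
  cfgadm -c configure sata1/3
  zpool replace tank c1t3d0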
2008 May 27
6
slog devices don't resilver correctly
This past weekend, my holiday was ruined by a log device
"replacement" gone awry.
I posted all about it here:
http://jmlittle.blogspot.com/2008/05/problem-with-slogs-how-i-lost.html
In a nutshell, a resilver of a single log device with itself (forced by
the fact that one can't remove a log device from a pool once defined) caused
ZFS to fully resilver but then attach the log
2007 Dec 12
0
Degraded zpool won't online disk device, instead resilvers spare
I've got a zpool that has 4 raidz2 vdevs, each with 4 disks (750GB), plus 4 spares. At one point 2 disks failed (in different vdevs). The messages in /var/adm/messages for the disks were 'device busy too long'. Then SMF printed this message:
Nov 23 04:23:51 x.x.com EVENT-TIME: Fri Nov 23 04:23:51 EST 2007
Nov 23 04:23:51 x.x.com PLATFORM: Sun Fire X4200 M2, CSN:
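For what it's worth, once the original device is healthy again the spare has to be detached by hand before it returns to the spares list; a sketch with illustrative device names:
  # bring the repaired disk back into service
  zpool online tank c2t4d0
  # after the resilver completes, return the spare to the pool's
  # spare list by detaching it from the vdev it jumped into
  zpool detach tank c5t0d0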
2013 Feb 17
13
zfs raid1 error resilvering and mount
Hi, I have a ZFS RAID-1 with 2 devices in the pool.
The first device died, and booting from the second is not working...
I wrote http://mfsbsd.vx.sk/ to a flash drive and booted from it to try zpool import
http://puu.sh/2402E
When I load zfs.ko and opensolaris.ko I see this message:
Solaris: WARNING: Can't open objset for zroot/var/crash
Solaris: WARNING: Can't open objset for zroot/var/crash
zpool status:
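For the import itself, the usual rescue-media incantation is along these lines; a sketch, assuming the pool really is named zroot:
  # -f: pool was last used by another system (the dead install)
  # -R /mnt: mount everything under an alternate root
  zpool import -f -R /mnt zroot
  zpool status zroot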
2010 Oct 17
10
RaidzN blocksize ... or blocksize in general ... and resilver
The default blocksize is 128K. If you are using mirrors, then each block on
disk will be 128K whenever possible. But if you're using raidzN with a
capacity of M disks (M disks useful capacity + N disks redundancy) then the
block size on each individual disk will be 128K / M. Right? This is one of
the reasons the raidzN resilver code is inefficient. Since you end up
waiting for the
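A quick worked example of that arithmetic (numbers illustrative): with a raidz2 of 12 disks, M = 10 data disks, so each disk holds about 13K of every 128K block, and a resilver has to reassemble each block from many small per-disk reads:
  # 128K recordsize split across M = 10 data disks
  echo $((131072 / 10))    # => 13107 bytes (~13K) per disk per block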
2010 Apr 24
6
Extremely slow raidz resilvering
Hello everyone,
As one of the steps of improving my ZFS home fileserver (snv_134) I wanted
to replace a 1TB disk with a newer one of the same vendor/model/size because
this new one has 64MB cache vs. 16MB in the previous one.
The removed disk will be used for backups, so I thought it's better off to
have a 64MB cache disk in the on-line pool than in the backup set sitting
off-line all
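For the mechanics: with both disks connected at once, zpool replace keeps the old disk in the pool until the resilver finishes, so redundancy is never reduced; a sketch with illustrative device names:
  # old and new disk both attached; pool stays fully redundant
  zpool replace tank c1t2d0 c2t0d0
  zpool status tank    # old disk shows as "replacing" until done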
2008 Sep 05
3
Snapshots during a scrub
I have a weekly scrub setup, and I've seen at least once now where it
says "don't snapshot while scrubbing"
Is this a data integrity issue, or will it make one or both of the
processes take longer?
Thanks
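For context, a weekly scrub setup like the one described is typically just a crontab line; a sketch, assuming a pool named tank:
  # root's crontab: scrub every Sunday at 03:00
  0 3 * * 0 /usr/sbin/zpool scrub tank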
2009 Jul 11
5
Resilvering Loop
I have a situation where my zpool (with two raidz2s) is resilvering
and reaches a certain point, then starts over.
There are no read, write, or checksum errors. The disks do have a fair
amount of resilvering to do, as I've had a variety of disk failures.
But at the core of things, there's enough parity data to avoid data
loss, and the zpool is functioning fine, albeit in a
2011 Jun 01
11
SATA disk perf question
I figure this group will know better than any other I have contact
with: is 700-800 IOPS reasonable for a 7200 RPM SATA drive (1 TB Sun-
badged Seagate ST31000N in a J4400)? I have a resilver running and am
seeing about 700-800 writes/sec. on the hot spare as it resilvers.
There is no other I/O activity on this box, as this is a remote
replication target for production data. I have the
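A rough sanity check on those numbers: a 7200 RPM spindle turns 120 times a second, which caps purely random I/O somewhere around 100-150 IOPS, so a sustained 700-800 writes/sec implies the resilver writes are largely sequential (or absorbed by the drive's write cache):
  # revolutions per second for a 7200 RPM drive
  echo $((7200 / 60))    # => 120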
2008 Jan 23
4
Synchronous scrub?
Say I'm firing off an at(1) or cron(1) job to do scrubs, and say I want to scrub two pools sequentially
because they share one device. The first pool, BTW, is a mirror comprising a smaller disk and a subset of a larger disk. The other pool is the remainder of the larger disk.
I see no documentation mentioning how to scrub, then wait until completed. I'm happy to be pointed
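There is no built-in wait flag at this point, but a poll loop over zpool status does the job; a sketch, with pool names invented for illustration:
  # scrub the first pool and block until it finishes
  zpool scrub pool1
  while zpool status pool1 | grep "in progress" > /dev/null; do
      sleep 300
  done
  # the shared device is now idle; scrub the second pool
  zpool scrub pool2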
2008 Jul 25
18
zfs, raidz, spare and jbod
Hi.
I installed Solaris Express Developer Edition (b79) on a Supermicro
quad-core Harpertown E5405 with 8 GB RAM and two internal SATA drives.
I installed Solaris onto one of the internal drives. I added an Areca
ARC-1680 SAS controller and configured it in JBOD mode. I attached an
external SAS cabinet with 16 1 TB SAS drives (931 binary GB). I
created a raidz2 pool with ten disks and one spare.
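For reference, a layout like that is created in one command, and the autoreplace property lets the spare kick in without operator action; a sketch with illustrative device names:
  # ten-disk raidz2 plus one hot spare
  zpool create tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 \
      c2t5d0 c2t6d0 c2t7d0 c2t8d0 c2t9d0 spare c2t10d0
  # let the spare take over automatically on device failure
  zpool set autoreplace=on tank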
2010 Oct 20
5
Myth? 21 disk raidz3: "Don't put more than ___ disks in a vdev"
In a discussion a few weeks back, it was mentioned that the Best Practices
Guide says something like "Don't put more than ___ disks into a single
vdev." At first, I challenged this idea, because I see no reason why a
21-disk raidz3 would be bad. It seems like a good thing.
I was operating on the assumption that resilver time was limited by sustainable
throughput of disks, which
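The back-of-envelope version of that assumption (numbers illustrative): if a resilver could stream at a disk's sequential rate, rebuilding 1 TB at 100 MB/s would take under three hours; real raidz resilvers run far longer because they are bound by random IOPS, not throughput:
  # 1 TB at a sequential 100 MB/s: 10^6 MB / 100 MB/s = 10^4 s
  echo $((1000000 / 100 / 3600))    # => 2 (i.e. ~2.8 hours)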