Displaying 20 results from an estimated 10000 matches similar to: "RaidzN blocksize ... or blocksize in general ... and resilver"
2012 Jan 15
22
Does raidzN actually protect against bitrot? If yes - how?
"Does raidzN actually protect against bitrot?"
That's a rather radical, possibly offensive, question
that I have been asking myself lately.
Reading up on the theory of RAID5, I grasped the idea of the write
hole (where one of the sectors of a stripe, such as the parity
data, doesn't get written, leading to invalid data upon read).
In general, I think the same applies to bitrot of
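To make the write-hole argument concrete, here is a toy sketch in plain bash arithmetic (nothing ZFS-specific; the byte values are invented) of how stale RAID5 parity silently corrupts a rebuild:

    #!/bin/bash
    # A toy RAID5 stripe: three one-byte "sectors" plus their parity.
    D0=170; D1=204; D2=240                   # 0xAA 0xCC 0xF0
    P=$(( D0 ^ D1 ^ D2 ))                    # parity = XOR of the data
    echo "rebuilt D1: $(( D0 ^ D2 ^ P ))"    # prints 204 (correct)
    # The write hole: D0 is rewritten, but the matching parity write is
    # lost (say, a power cut between the two writes). Parity is now
    # stale, and a later rebuild returns garbage with no error at all:
    D0=85
    echo "bad rebuild: $(( D0 ^ D2 ^ P ))"   # prints 51, silently wrong

raidz sidesteps the hole itself by making every write a full-stripe copy-on-write, and bitrot is caught not by the parity but by the block checksum stored in the parent block pointer, which is verified on every read and tells ZFS which reconstruction is the correct one.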
2012 Jan 07
14
zfs defragmentation via resilvering?
Hello all,
I understand that relatively high fragmentation is inherent
to ZFS due to its COW and possible intermixing of metadata
and data blocks (of which metadata path blocks are likely
to expire and get freed relatively quickly).
I believe it was sometimes implied on this list that such
fragmentation for "static" data can currently be combated
only by zfs send-ing existing
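A minimal sketch of that rewrite, with placeholder dataset names (tank/data, tank/data2); the receive writes the stream back out sequentially, which is what undoes the fragmentation. A plain one-snapshot send drops intermediate snapshots, so use zfs send -R if you need to keep them:

    zfs snapshot tank/data@defrag
    zfs send tank/data@defrag | zfs receive tank/data2
    # after verifying the copy, swap the datasets:
    zfs destroy -r tank/data
    zfs rename tank/data2 tank/data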
2010 Oct 20
5
Myth? 21-disk raidz3: "Don't put more than ___ disks in a vdev"
In a discussion a few weeks back, it was mentioned that the Best Practices
Guide says something like "Don't put more than ___ disks into a single
vdev." At first, I challenged this idea, because I see no reason why a
21-disk raidz3 would be bad. It seems like a good thing.
I was operating on the assumption that resilver time was limited by sustainable
throughput of disks, which
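For the arithmetic behind that assumption (numbers invented for illustration): a purely sequential rebuild of a 1TB disk at 100MB/s would take under three hours, but a resilver walks the block tree in transaction-group order, so on a fragmented pool it degenerates into small random reads:

    echo "1000000 / 100 / 3600" | bc -l        # ~2.8 h if sequential
    # at ~200 random IOPS and 8 KB blocks, ~1.6 MB/s effective:
    echo "1000000 / 1.6 / 3600 / 24" | bc -l   # ~7.2 days

which is why wide vdevs can resilver far slower than raw disk throughput suggests.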
2010 Oct 16
4
resilver question
Hi all
I'm seeing some rather bad resilver times for a pool of WD Green drives (I know, bad drives, but leave that). Does a resilver go through the whole pool or just the vdev in question?
--
Vennlige hilsener / Best regards
roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
roy at karlsbakk.net
http://blogg.karlsbakk.net/
--
In all pedagogy, it is essential that the curriculum is presented
2010 Jul 05
5
never ending resilver
Hi list,
Here's my case:
  pool: mypool
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress for 147h19m, 100.00% done, 0h0m to go
config:

        NAME            STATE     READ WRITE CKSUM
        filerbackup13
2010 Sep 09
37
resilver = defrag?
A) Resilver = Defrag. True/false?
B) If I buy larger drives and resilver, does defrag happen?
C) Does zfs send | zfs receive mean it will defrag?
--
2010 Apr 14
1
Checksum errors on and after resilver
Hi all,
I recently experienced a disk failure on my home server and observed checksum errors while resilvering the pool and on the first scrub after the resilver had completed. Now everything seems fine, but I'm posting this to get help calming my nerves and detecting any possible future faults.
Let's start with some specs.
OSOL 2009.06
Intel SASUC8i (w LSI 1.30IT FW)
Gigabyte
2009 Jul 13
7
OpenSolaris 2008.11 - resilver still restarting
Just look at this. I thought all the restarting resilver bugs were fixed, but it looks like something odd is still happening at the start:
Status immediately after starting resilver:
# zpool status
  pool: rc-pool
 state: DEGRADED
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine
2010 Nov 01
6
Excruciatingly slow resilvering on X4540 (build 134)
Hello,
I'm working with someone who replaced a failed 1TB drive (50% utilized),
on an X4540 running OS build 134, and I think something must be wrong.
Last Tuesday afternoon, zpool status reported:
scrub: resilver in progress for 306h0m, 63.87% done, 173h7m to go
and a week being 168 hours, that put completion at sometime tomorrow night.
However, he just reported zpool status shows:
2008 Sep 05
6
resilver speed.
Is there any way to control the resilver speed? Having attached a third disk to a mirror (so I can replace the other disks with larger ones), the resilver goes at a fraction of the speed of the same operation using DiskSuite. However, it still renders the system pretty much unusable for anything else.
So I would like to control the rate of the resilver. Either slow it down a lot so that the
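On OpenSolaris-era builds, the knobs usually suggested for this were the resilver throttle tunables, poked live with mdb; treat the variable names and values below as a sketch, since they vary by build and are not a stable interface:

    # slow the resilver: raise the per-I/O delay (in ticks) and shrink
    # the time slice it is given per txg (in ms)
    echo "zfs_resilver_delay/W0t4" | mdb -kw
    echo "zfs_resilver_min_time_ms/W0t500" | mdb -kw
    # or speed it up instead, at the cost of foreground I/O:
    echo "zfs_resilver_delay/W0t0" | mdb -kw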
2010 Jul 09
4
resilver of older root pool disk
This is a hypothetical question that could actually happen:
Suppose a root pool is a mirror of c0t0d0s0 and c0t1d0s0
and for some reason c0t0d0s0 goes offline, but comes back
online after a shutdown. The primary boot disk would then
be c0t0d0s0, which would have much older data than c0t1d0s0.
Under normal circumstances ZFS would know that c0t0d0s0
needs to be resilvered. But in this case
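For background, the mechanism ZFS relies on here is the transaction-group (txg) number stamped into every disk label and uberblock: the mirror side with the higher txg wins, and the stale disk is resilvered forward. You can inspect this yourself; a sketch using the device names from the post:

    zdb -l /dev/rdsk/c0t0d0s0 | grep -w txg    # stale side: lower txg
    zdb -l /dev/rdsk/c0t1d0s0 | grep -w txg    # current side: higher txg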
2010 Dec 10
5
Large Drives
The time has come to expand my OpenSolaris NAS.
Right now I have 6 1TB Samsung Spinpoints in a Raidz2 configuration. I also have a mirrored root pool.
The Raidz2 configuration should be for my most critical data - but right now it is holding everything so I need to add some more pools and move some data around.
To start I need a vdev I will call "temp" that acts as a networked bit
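A scratch pool like that is a couple of commands; device names here are hypothetical, and note a plain stripe has no redundancy, so it is only suitable for data you can afford to lose:

    zpool create temp c6t0d0 c6t1d0    # plain stripe, no redundancy
    zfs set sharenfs=on temp           # share it over NFS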
2010 Apr 24
6
Extremely slow raidz resilvering
Hello everyone,
As one of the steps of improving my ZFS home fileserver (snv_134) I wanted
to replace a 1TB disk with a newer one of the same vendor/model/size because
this new one has 64MB cache vs. 16MB in the previous one.
The removed disk will be used for backups, so I thought it's better to
have the 64MB-cache disk in the online pool than in the backup set sitting
offline all
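The swap itself is one replace; the device names below are invented. The old disk detaches automatically once the resilver completes:

    zpool replace tank c5t0d0 c5t1d0   # <pool> <old disk> <new disk>
    zpool status tank                  # watch the resilver progress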
2010 May 18
25
Very serious performance degradation
Hi,
I'm running OpenSolaris 2009.06, and I'm facing a serious performance loss with ZFS! It's a raidz1 pool made of 4 x 1TB SATA disks:
        zfs_raid    ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            c7t2d0  ONLINE       0     0     0
            c7t3d0  ONLINE       0     0     0
            c7t4d0  ONLINE       0     0
2012 Nov 30
13
Remove disk
Hi all,
I would like to know whether it's possible to do something like this with ZFS:
http://tldp.org/HOWTO/LVM-HOWTO/removeadisk.html
meaning:
I have a zpool with 48 disks arranged as 4 raidz2 vdevs (12 disks each). Of those 48
disks, 36 are 3TB and 12 are 2TB.
Can I buy 12 new 4TB disks, put them in the server, add them to the zpool, and ask zpool
to migrate all the data from the 12 old disks onto the new ones and
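Short answer for ZFS of this vintage: there is no pvmove-style evacuation of a top-level vdev. What does work is growing one raidz2 vdev in place by replacing its twelve disks one at a time; a sketch with invented device names:

    zpool set autoexpand=on tank
    zpool replace tank c1t0d0 c9t0d0   # one old 2 TB for one new 4 TB
    zpool status tank                  # wait for the resilver to finish
    # repeat for the remaining 11 disks; the vdev only grows once all
    # 12 members are the larger size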
2009 Nov 22
9
Resilver/scrub times?
Hi all!
I've decided to take the "big jump" and build a ZFS home filer (although it
might also do "other work" like caching DNS, mail, Usenet, BitTorrent and so
forth). YAY! I wonder if anyone can shed some light on how long a pool scrub
would take on a fairly decent rig. These are the specs as-ordered:
Asus P5Q-EM mainboard
Core2 Quad 2.83 GHZ
8GB DDR2/80
OS:
2 x
2010 Sep 02
5
what is zfs doing during a log resilver?
So, when you add a log device to a pool, it initiates a resilver.
What is it actually doing, though? Isn't the slog a copy of the
in-memory intent log? Wouldn't it just simply replicate the data that's
in the other log, checked against what's in RAM? And presumably there
isn't that much data in the slog so there isn't that much to check?
Or
2010 Sep 29
10
Resilver making the system unresponsive
This must be resilver day :)
I just had a drive failure. The hot spare kicked in, and access to the pool over NFS was effectively zero for about 45 minutes. Currently the pool is still resilvering, but for some reason I can access the file system now.
Resilver speed has been beaten to death, I know, but is there a way to avoid this? For example, is more enterprisey hardware less susceptible to
2009 Jul 10
5
Slow Resilvering Performance
I know this topic has been discussed many times... but what the hell
makes zpool resilvering so slow? I'm running OpenSolaris 2009.06.
I have had a large number of problematic disks due to a bad production
batch, leading me to resilver quite a few times, progressively
replacing each disk as it dies (and now preemptively removing disks).
My complaint is that resilvering ends up
2010 Sep 29
7
Is there any way to stop a resilver?
Is there any way to stop a resilver?
We gotta stop this thing - at minimum, completion time is 300,000 hours, and
maximum is in the millions.
Raidz2 array, so it has the redundancy, we just need to get data off.
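There is no direct stop command for a resilver; a sketch of the two levers that do exist, with an invented disk name:

    # if the resilver is rebuilding onto a replacement or hot spare,
    # detaching that target cancels it (and abandons the repair):
    zpool detach tank c3t5d0
    # a scrub, unlike a resilver, can simply be stopped:
    zpool scrub -s tank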