Displaying 20 results from an estimated 2000 matches similar to: "6344108 snapshot create/delete interlock with scrub/resilver must sync txg"
2009 Oct 30
1
internal scrub keeps restarting resilvering?
After several days of trying to get a 1.5TB drive to resilver, with the
resilver continually restarting, I eliminated all of the snapshot-taking
facilities that were enabled and
2009-10-29.14:58:41 [internal pool scrub done txg:567780] complete=0
2009-10-29.14:58:41 [internal pool scrub txg:567780] func=1 mintxg=3
maxtxg=567354
2009-10-29.16:52:53 [internal pool scrub done txg:567999] complete=0
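If auto-snapshots are suspected of restarting the scrub/resilver, a minimal way to rule them out on an OpenSolaris-era system is to disable the Time Slider auto-snapshot SMF instances; a sketch, assuming the stock zfs-auto-snapshot service names:

    # List the auto-snapshot service instances.
    svcs -a | grep auto-snapshot
    # Disable each interval so nothing creates snapshots mid-resilver.
    for ivl in frequent hourly daily weekly monthly; do
        svcadm disable svc:/system/filesystem/zfs/auto-snapshot:$ivl
    done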
2010 Oct 16
4
resilver question
Hi all
I'm seeing some rather bad resilver times for a pool of WD Green drives (I know, bad drives, but leave that aside). Does a resilver go through the whole pool or just the vdev in question?
--
Vennlige hilsener / Best regards
roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
roy at karlsbakk.net
http://blogg.karlsbakk.net/
--
In all pedagogy, it is essential that the curriculum be presented
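Resilver progress (and which top-level vdev it is touching) can be watched from zpool status; a minimal polling sketch, with 'tank' as a placeholder pool name (older releases label the line "scrub:", newer ones "scan:"):

    # Poll resilver progress once a minute.
    while sleep 60; do
        zpool status tank | egrep 'scrub:|scan:|resilver'
    done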
2012 Jun 18
1
Restore destroyed snapshot ???
OK, I am a butt-head and accidentally destroyed my last snapshot of a
replicated ZFS dataset. The dataset is NOT mounted, and other than an
ongoing resilver there is no I/O to this dataset. Is there any way to
roll back and get my latest snapshot back?
from zpool history -i:
2012-06-18.10:34:00 zfs destroy xxx@1339668001
2012-06-18.10:34:00 [internal destroy txg:2213852] dataset =
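Before anything drastic, it is worth checking whether any older snapshot survives on the receive side to serve as a rollback target; a sketch, with 'tank/replica' as a placeholder for the replicated dataset:

    # Any surviving snapshot can be a rollback target.
    zfs list -t snapshot -r tank/replica
    # The destroy txg in the history narrows down exactly when it happened.
    zpool history -i tank | grep 'internal destroy'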
2008 May 27
6
slog devices don''t resilver correctly
This past weekend, my holiday was ruined due to a log device
"replacement" gone awry.
I posted all about it here:
http://jmlittle.blogspot.com/2008/05/problem-with-slogs-how-i-lost.html
In a nutshell, a resilver of a single log device with itself, forced by
the fact that one can't remove a log device from a pool once defined,
caused ZFS to fully resilver but then attach the log
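For reference, the replacement step that triggers this kind of resilver looks like the sketch below (device names are placeholders); on pool version 19 and later, a log device can instead simply be removed:

    # Replace a slog device; both paths are placeholders.
    zpool replace tank c1t5d0 c1t6d0
    # Pool version >= 19 supports outright log removal.
    zpool remove tank c1t5d0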
2006 Mar 17
1
acquiring duplicate lock of same type: "vnode interlock"
I think I've read somewhere about panics during early root mount, fsck,
etc. Perhaps this might be related:
Full dmesg: http://people.freebsd.org/~ariff/misc/dmesg.boot.amd64
[....]
acquiring duplicate lock of same type: "vnode interlock"
1st vnode interlock @ kern/vfs_vnops.c:791
2nd vnode interlock @ kern/vfs_subr.c:2018
KDB: stack backtrace:
witness_checkorder() at
2007 Dec 13
0
zpool version 3 & Uberblock version 9 , zpool upgrade only half succeeded?
We are currently experiencing a very large performance drop on our ZFS storage server.
We have 2 pools: pool 1, stor, is a raidz out of 7 iSCSI nodes; home is a local mirror pool. Recently we had some issues with one of the storage nodes, and because of that the pool was degraded. Since we did not succeed in bringing this storage node back online (on the ZFS level), we upgraded our NAS head from OpenSolaris b57
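Whether the upgrade really only half-succeeded can be checked directly: zpool upgrade reports any pools still at an older on-disk version, and zdb -l dumps a vdev label, which carries the version field actually written to disk (the device path below is a placeholder):

    # Show pools not yet at the current on-disk version.
    zpool upgrade
    # Inspect a vdev label directly; it includes the version field.
    zdb -l /dev/dsk/c0t0d0s0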
2006 Jul 29
1
zfs discussion forum bug
I started three new threads recently,
"Feature proposal: differential pools"
"Feature proposal: trashcan via auto-snapshot with every txg commit"
"Flushing synchronous writes to mirrors"
Matthew Ahrens and Henk Langeveld both replied to my first thread by sending their messages to both me and to zfs-discuss at opensolaris.org, and Matt did likewise for my second
2010 Oct 04
1
Metropolis: Implementation of Interlock Protocol using Linux Shell Programming, OpenSSH, and GPG
I have written a small Linux shell command implementing the Interlock Protocol,
a cryptographic protocol that is resistant to
man-in-the-middle attacks. Here are the steps of the interlock protocol:
*(1)* Alice send her public key to Bob
*(2)* Bob send his public key to Alice.
*(3)* Alice encrypts her message using Bob's public key. Then she sends half
of that encrypted message to
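A minimal sketch of the splitting in step (3), assuming GnuPG; the recipient ID and filenames are hypothetical:

    # Alice encrypts for Bob, then sends the two halves separately.
    gpg --encrypt --recipient bob@example.com --output msg.gpg message.txt
    size=$(wc -c < msg.gpg)
    half=$((size / 2))
    dd if=msg.gpg of=half1.bin bs=1 count="$half" 2>/dev/null
    dd if=msg.gpg of=half2.bin bs=1 skip="$half" 2>/dev/null
    # Bob can decrypt only after receiving BOTH halves, which is what
    # frustrates a man-in-the-middle relaying messages in real time.
    cat half1.bin half2.bin > whole.gpg
    gpg --decrypt whole.gpg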
2009 Nov 22
9
Resilver/scrub times?
Hi all!
I've decided to take the "big jump" and build a ZFS home filer (although it
might also do "other work" like caching DNS, mail, usenet, bittorrent and so
forth). YAY! I wonder if anyone can shed some light on how long a pool scrub
would take on a fairly decent rig. These are the specs as ordered:
Asus P5Q-EM mainboard
Core2 Quad 2.83 GHZ
8GB DDR2/80
OS:
2 x
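As a back-of-the-envelope answer (my own figures, not from the thread): a scrub reads only allocated data, so the time is roughly used bytes divided by the pool's aggregate scan rate. A sketch:

    # Both figures are assumptions for illustration.
    used_gb=2000      # allocated data in the pool
    scan_mb_s=250     # aggregate sequential scan rate
    echo "approx $(( used_gb * 1024 / scan_mb_s / 3600 )) hours"

In practice, fragmentation and concurrent load can push the real number well past this estimate.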
2014 Feb 20
0
[PATCH] nv50: enable txg where supported
Signed-off-by: Ilia Mirkin <imirkin at alum.mit.edu>
---
This applies on top of Dave Airlie's r600g-texture-gather branch. Ran piglit
with -t gather; passed all 1057 tests. Can't say I fully understand what all
the arguments to handleTEX in the Converter are, but... it seems to work. It will
probably require some care for nvc0 support, which should have SM5 caps.
2013 Oct 15
0
How to unstick ZFS resilver?
I have a large (88-drive) zpool in which a drive was recently
replaced. (The pool has a bunch of duff Toshiba MK2001TRKB drives --
never ever pay money for these! -- and I'm trying to replace them one
by one before they fail completely.) The resilver on the first drive
replacement has been taking much, much too long, and currently it's
stuck in this state:
pool: export
state: DEGRADED
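When a resilver sits at one point indefinitely, the usual first check is whether the scan is making any real progress at all; 'export' is the pool from the post:

    # Watch per-vdev throughput to see if the scan is moving.
    zpool iostat -v export 10

Much later OpenZFS releases (0.8+) also added an explicit 'zpool resilver <pool>' command to restart a stuck or deferred resilver, though that postdates this 2013 post.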
2010 Apr 14
1
Checksum errors on and after resilver
Hi all,
I recently experienced a disk failure on my home server and observed checksum errors while resilvering the pool, and again on the first scrub after the resilver had completed. Now everything seems fine, but I'm posting this to get help with calming my nerves and detecting any possible future faults.
Let's start with some specs.
OSOL 2009.06
Intel SASUC8i (w LSI 1.30IT FW)
Gigabyte
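A sanity check that usually settles the nerves after an event like this: run a full scrub and confirm the error counters stay at zero, then reset them; 'tank' is a placeholder pool name:

    zpool scrub tank
    zpool status -v tank   # READ/WRITE/CKSUM should stay at 0
    # Once satisfied the errors were transient, reset the counters.
    zpool clear tank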
2008 Sep 05
6
resilver speed.
Is there any way to control the resilver speed? Having attached a third disk to a mirror (so I can replace the other disks with larger ones), I find the resilver goes at a fraction of the speed of the same operation using Disk Suite, yet it still renders the system pretty much unusable for anything else.
So I would like to control the rate of the resilver: either slow it down a lot so that the
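Later OpenSolaris/illumos builds exposed kernel tunables that throttle the scan rate; whether they exist in a given 2008-era release must be verified first, so treat this strictly as a sketch:

    # Inspect the current resilver delay (ticks between scan I/Os).
    echo "zfs_resilver_delay/D" | mdb -k
    # Raise it to slow the resilver down, or write 0 to let it run
    # flat out; use with care on a live system.
    echo "zfs_resilver_delay/W 0t4" | mdb -kw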
2010 Jul 05
5
never ending resilver
Hi list,
Here's my case:
pool: mypool
state: DEGRADED
status: One or more devices is currently being resilvered. The pool will
continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
scrub: resilver in progress for 147h19m, 100.00% done, 0h0m to go
config:
NAME STATE READ WRITE CKSUM
filerbackup13
2009 Aug 21
0
possible resilver bugs
Hi, I don't have the means to replicate this issue or to file a bug about it, so I'd like your opinion on these issues, or perhaps a bug report can be made if necessary.
In a scenario with, say, three raidz2 groups each consisting of several disks, two disks fail in different raidz groups. You then have a degraded pool and two degraded raidz2 groups.
Now, one replaces the first disk and starts resilvering; it
2010 Sep 02
5
what is zfs doing during a log resilver?
So, when you add a log device to a pool, it initiates a resilver.
What is it actually doing, though? Isn't the slog a copy of the
in-memory intent log? Wouldn't it just replicate the data that's
in the other log, checked against what's in RAM? And presumably there
isn't that much data in the slog, so there isn't that much to check?
Or
2007 Jan 10
0
[Fwd: zfs discussion forum bug]
I believe concerns like this should go to you?
Bev.
-------- Original Message --------
Subject: [zfs-discuss] zfs discussion forum bug
Date: Sat, 29 Jul 2006 00:03:13 -0700 (PDT)
From: Andrew <andrewee2@yahoo.com>
To: zfs-discuss@opensolaris.org
I started three new threads recently,
"Feature proposal: differential pools"
"Feature proposal: trashcan via auto-snapshot with
2019 Jun 01
0
[PATCH AUTOSEL 5.1 035/186] drm/nouveau/kms/gv100-: fix spurious window immediate interlocks
From: Ben Skeggs <bskeggs at redhat.com>
[ Upstream commit d2434e4d942c32cadcbdbcd32c58f35098f3b604 ]
Cursor position updates were accidentally causing us to attempt to interlock
window with window immediate, and without a matching window immediate update,
NVDisplay could hang forever in some circumstances.
Fixes suspend/resume on (at least) Quadro RTX4000 (TU104).
Reported-by: Lyude
2019 Jun 01
0
[PATCH AUTOSEL 5.0 032/173] drm/nouveau/kms/gv100-: fix spurious window immediate interlocks
From: Ben Skeggs <bskeggs at redhat.com>
[ Upstream commit d2434e4d942c32cadcbdbcd32c58f35098f3b604 ]
Cursor position updates were accidentally causing us to attempt to interlock
window with window immediate, and without a matching window immediate update,
NVDisplay could hang forever in some circumstances.
Fixes suspend/resume on (at least) Quadro RTX4000 (TU104).
Reported-by: Lyude
2019 Jun 01
0
[PATCH AUTOSEL 4.19 027/141] drm/nouveau/kms/gv100-: fix spurious window immediate interlocks
From: Ben Skeggs <bskeggs at redhat.com>
[ Upstream commit d2434e4d942c32cadcbdbcd32c58f35098f3b604 ]
Cursor position updates were accidentally causing us to attempt to interlock
window with window immediate, and without a matching window immediate update,
NVDisplay could hang forever in some circumstances.
Fixes suspend/resume on (at least) Quadro RTX4000 (TU104).
Reported-by: Lyude