Displaying 20 results from an estimated 1997 matches for "degrading".
2013 Mar 23
0
Drives going offline in Zpool
Hi,
I have a Dell MD1200 connected to two heads (Dell R710). The heads have
Perc H800 cards, and the drives are configured as RAID0 (virtual disks) in the
RAID controller.
One of the drives crashed and was replaced by a spare. Resilvering was
triggered but fails to complete because drives keep going offline. I have to
reboot the head (R710) and the drives come back online. This has happened repeatedly
when
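A minimal sketch of the recovery loop being described, assuming an OpenSolaris-style zpool and invented pool/device names (none of these come from the post):
# check which devices ZFS currently considers unavailable
zpool status -x tank
# once the head is back up and the disks are visible again, bring them online
zpool online tank c2t3d0
# reset the error counters so the resilver can carry on cleanly
zpool clear tank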
2010 Jul 12
7
How do I clean up corrupted files from zpool status -v?
Hi Folks..
I have a system that was inadvertently left unmirrored for root. We were able
to add a mirror disk, resilver, and fix the corrupted files (nothing very
interesting was corrupt, whew), but zpool status -v still shows errors.
Will this self-correct when we replace the degraded disk and resilver? Or is
there something else that I'm not finding that I need to do to clean up?
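For what it's worth, the usual sequence for refreshing the persistent error list once the underlying data has been repaired is a clear followed by a scrub; this is a hedged sketch with a placeholder pool name, not a command from the thread, and on some builds it takes a second scrub pass before the list empties:
# drop the accumulated error counters and stale entries
zpool clear rpool
# rewalk the pool; the permanent-errors list is rebuilt when the scrub completes
zpool scrub rpool
zpool status -v rpool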
2010 Apr 24
3
ZFS RAID-Z2 degraded vs RAID-Z1
Had an idea, could someone please tell me why it's wrong? (I feel like it has to be).
A RaidZ-2 pool with one missing disk offers the same failure resilience as a healthy RaidZ1 pool (no data loss when one disk fails). I had initially wanted to do a single-parity raidz pool (5 disks), but after a recent scare decided raidz2 was the way to go. With the help of a sparse file
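The sparse-file trick mentioned at the end can be sketched roughly as follows; device names and sizes are invented, the point is only that the raidz2 is built with one throw-away member and then run without it:
# a sparse file sized like the real disks stands in for the fifth member
mkfile -n 1000g /var/tmp/fakedisk
zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 /var/tmp/fakedisk
# drop the stand-in; the vdev runs DEGRADED but still survives one more disk failure
zpool offline tank /var/tmp/fakedisk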
2007 Sep 08
1
zpool degraded status after resilver completed
I am curious why zpool status reports a pool to be in the DEGRADED state
after a drive in a raidz2 vdev has been successfully replaced. In this
particular case drive c0t6d0 was failing, so I ran:
zpool offline home c0t6d0
zpool replace home c0t6d0 c8t1d0
and after the resilvering finished the pool reports a degraded state.
Hopefully this is incorrect. At this point the vdev in question
now has
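One hedged guess, not confirmed in the thread: if c0t6d0 is still shown inside a 'replacing' element of the raidz2 vdev, the swap never finished and the pool stays DEGRADED until the old device is removed explicitly:
# check whether c0t6d0 still appears under a 'replacing' entry
zpool status -v home
# if it does, detaching it completes the replacement and should return the pool to ONLINE
zpool detach home c0t6d0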
2013 Aug 24
10
Help interpreting RAID1 space allocation
I've created a test volume and copied a bulk of data to it; however, the
results of the space allocation are confusing at best. I've tried to
capture the history of events leading up to the current state. This is
all on a Debian Wheezy system using a 3.10.5 kernel package
(linux-image-3.10-2-amd64) and btrfs tools v0.20-rc1 (Debian package
0.19+20130315-5). The host uses an
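For readers trying to reproduce the accounting, these are the two views usually compared on that vintage of btrfs-progs (the mount point is a placeholder): filesystem show reports raw bytes per device, so RAID1 data appears twice, while filesystem df reports chunk usage per profile:
# raw allocation per device (both RAID1 copies are counted)
btrfs filesystem show /mnt/test
# chunk allocation broken down by Data/Metadata/System profile
btrfs filesystem df /mnt/test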
2009 Jul 13
7
OpenSolaris 2008.11 - resilver still restarting
Just look at this. I thought all the restarting resilver bugs were fixed, but it looks like something odd is still happening at the start:
Status immediately after starting resilver:
# zpool status
pool: rc-pool
state: DEGRADED
status: One or more devices has experienced an unrecoverable error. An
attempt was made to correct the error. Applications are unaffected.
action: Determine
2010 Dec 05
4
Zfs ignoring spares?
Hi all
I have installed a new server with 77 2TB drives in 11 7-drive RAIDz2 VDEVs, all on WD Black drives. Now, it seems two of these drives were bad: one of them had a bunch of errors, the other was very slow. After zfs offlining these and then zfs replacing them with online spares, the resilver ended and I thought it'd be ok. Apparently not. Although the resilver succeeds, the pool status
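For context, the manual spare workflow being described is roughly the following, with invented pool and device names; detaching the failed disk after the resilver is what makes the spare a permanent member:
zpool offline tank c4t11d0
# c8t0d0 is one of the configured spares
zpool replace tank c4t11d0 c8t0d0
# after the resilver completes, keep the spare in place and drop the bad disk
zpool detach tank c4t11d0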
2010 Aug 15
2
Is the error threshold for a degraded device configurable?
I look after an x4500 for a client and we keep getting drives marked as
degraded with just over 20 checksum errors.
Most of these errors appear to be driver or hardware related and their
frequency increases during a resilver, which can lead to a death
spiral. The increase in errors within a vdev during a resilver (I
recently had three drives in an 8 drive raidz vdev "degraded")
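As far as I know the checksum threshold used by the ZFS diagnosis engine is not an exposed tunable on the x4500; a common stop-gap, sketched here with invented names, is to clear the counters, watch whether they climb back, and see what FMA has actually faulted:
# reset the error counters on the suspect drive
zpool clear tank c5t3d0
# list what the fault manager currently considers faulted or degraded
fmadm faulty
# detailed statistics for the ZFS diagnosis module
fmstat -m zfs-diagnosis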
2019 Jun 14
3
zfs
Hi, folks,
testing zfs. I'd created a raidz2 zpool and ran a large backup onto it. Then I
pulled one drive (an 11-drive pool with one hot spare), and it resilvered with
the hot spare. zpool status -x shows me
state: DEGRADED
status: One or more devices could not be used because the label is missing or
invalid. Sufficient replicas exist for the pool to continue
functioning in a degraded state.
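A hedged sketch of the usual follow-up once a replacement disk is physically in the slot (pool and device names invented): a single-argument replace rebuilds onto the new disk in the original position, after which the hot spare should return to AVAIL; alternatively, detaching the missing device keeps the spare as a permanent member:
# rebuild onto the freshly inserted disk in the original slot
zpool replace tank c2t4d0
# or: accept the spare permanently by dropping the missing device
zpool detach tank c2t4d0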
2012 Apr 17
3
Btrfs in degraded mode
Hello,
I have created a btrfs filesystem with a RAID1 setup across 2 disks. Everything
works fine, but when I unmount the device and remount it in degraded mode,
data still goes to both disks. Ideally, in degraded mode only one disk
should show activity, not the failed one.
System Config:
Base OS: Slackware
kernel: linux 3.3.2
"sar -pd 2 10" shows me that the data is
2018 Feb 14
1
[vhost:vhost 22/23] drivers/firmware/qemu_fw_cfg.c:130:36: sparse: incorrect type in initializer (different base types)
tree: https://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost.git vhost
head: 3d22d7c1190db3209b644b8a13a75a9802b4587f
commit: b3a8771f409b74c42deee28aee3092fc5d2c8dab [22/23] fw_cfg: write vmcoreinfo details
reproduce:
# apt-get install sparse
git checkout b3a8771f409b74c42deee28aee3092fc5d2c8dab
make ARCH=x86_64 allmodconfig
make C=1 CF=-D__CHECK_ENDIAN__
2000 May 15
1
Graceful degradation of signal
Hello all.
In the shower the other day (where most of this sort of musing gets
done, eh?) I was thinking about graceful degradation of audio signals.
Let me apologise in advance if these are elementary concepts or if I
demonstrate a complete lack of insight -- I don't rate even a dabbler
status in the area of audio codecs.
Anyway:
If we have a 128 kbps signal coming down a *UDP* channel with
2018 Jan 14
0
[PATCH v2 1/3] appliance: init: Avoid running degraded md devices
The issue:
- raid1 will be in a degraded state if one of its components is a logical volume (LV)
- raid0 will be inoperable at all (inaccessible from within the appliance) if one of its components is an LV
- raidN: you can expect the same issue for any RAID level, depending on how many components are inaccessible at the time mdadm is running and on the RAID redundancy.
It happens because mdadm is launched prior to lvm
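The ordering problem can be illustrated with a rough sketch; this is not the actual appliance init change, just the idea that LV-backed components only exist after LVM activation:
# activate volume groups first so LV-backed md components appear
lvm vgchange -ay
# only then assemble the arrays, so no component is missing at assembly time
mdadm --assemble --scan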
2011 Jul 03
4
I/O Currently Suspended Need Help Repairing
Hey guys,
I had a zfs system in raidz1 that was working until there was a power outage, and now I'm getting this:
pool: tank
state: DEGRADED
status: One or more devices are faulted in response to IO failures.
action: Make sure the affected devices are connected, then run 'zpool clear'.
see: http://www.sun.com/msg/ZFS-8000-HC
scrub: scrub in progress for 0h49m,
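Following the action line quoted above, recovery amounts to checking that the devices are back and then clearing the fault; a short sketch using the pool name from the post:
# confirm the devices are visible again after the outage
zpool status -v tank
# release the suspended I/O and let ZFS retry
zpool clear tank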
2009 Jan 20
2
hot spare not so hot ??
I have configured a test system with a mirrored rpool and one hot spare. I
powered the system off and pulled one of the disks from rpool to simulate a
hardware failure.
The hot spare is not activating automatically. Is there something more I
should have done to make this work?
pool: rpool
state: DEGRADED
status: One or more devices could not be opened. Sufficient replicas exist
for
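Automatic sparing depends on the fault manager noticing the missing device; a hedged manual fallback, with invented device names, is to attach the spare yourself:
# c1t1d0 is the pulled mirror half, c1t2d0 the configured spare (placeholders)
zpool replace rpool c1t1d0 c1t2d0
zpool status rpool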
2006 Jul 05
4
degrading gracefully - how to tell if JS is enabled?
Is there a RoR best practice wrt determining if a visitor's browser has JS disabled? Is there even a way to find out? I've got a couple of pages in my app that are not going to degrade gracefully at all. I really need to point a visitor who has JS disabled down a separate path. Any ideas?
Thanks,
Bill
2009 Jul 09
1
merge performance degradation in 2.9.1
I have noticed a significant performance degradation using merge in 2.9.1
relative to 2.8.1. Here is what I observed:
N <- 100000
X <- data.frame(group=rep(12:1, each=N), mon=rep(rev(month.abb), each=N))
X$mon <- as.character(X$mon)
Y <- data.frame(mon=month.abb, letter=letters[1:12])
Y$mon <- as.character(Y$mon)
Z <- cbind(Y, group=1:12)
system.time(Out
2007 Dec 12
0
Degraded zpool won't online disk device, instead resilvers spare
I've got a zpool that has 4 raidz2 vdevs, each with 4 disks (750GB), plus 4 spares. At one point 2 disks failed (in different vdevs). The message in /var/adm/messages for the disks was 'device busy too long'. Then SMF printed this message:
Nov 23 04:23:51 x.x.com EVENT-TIME: Fri Nov 23 04:23:51 EST 2007
Nov 23 04:23:51 x.x.com PLATFORM: Sun Fire X4200 M2, CSN:
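A hedged outline of the manual path the subject line hints at, with invented pool and device names: if the 'busy' disk is actually healthy, bringing it online and then detaching the in-use spare returns the spare to AVAIL; whether the extra resilver onto the spare can be avoided is exactly what the thread is asking:
# the disk that was reported 'device busy too long'
zpool online tank c4t2d0
# the spare currently shown as INUSE
zpool detach tank c4t7d0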
2009 Jul 06
1
Performance degradation on multi-processor system
Hi,
We are seeing performance degradation when running the same R script in
multiple instances of R on a multi-processor system. We are a bit surprised
by this because we figured that each instance of R is running on its own
processor, and therefore running a second, third or fourth instance should
not affect the performance of the first instance.
Here's a test script that exhibits this
2010 Apr 24
6
Extremely slow raidz resilvering
Hello everyone,
As one of the steps of improving my ZFS home fileserver (snv_134) I wanted
to replace a 1TB disk with a newer one of the same vendor/model/size because
this new one has a 64MB cache vs. 16MB in the previous one.
The removed disk will be used for backups, so I thought it's better to
have the 64MB-cache disk in the on-line pool than in the backup set sitting
off-line all