similar to: multiple disk failure

Displaying 20 results from an estimated 400 matches similar to: "multiple disk failure"

2010 Oct 02
3
out of HDD space - zfs degraded
Overnight I was running a zfs send | zfs receive (both within the same system / zpool). The system ran out of space, a drive went offline, and the system is degraded. This is a raidz2 array running on FreeBSD 8.1-STABLE #0: Sat Sep 18 23:43:48 EDT 2010. The following logs are also available at http://www.langille.org/tmp/zfs-space.txt <- no line wrapping This is what was running: #
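A minimal sketch of the kind of intra-pool copy that was running, with hypothetical dataset names; when source and destination share a pool, the receive side competes with the send stream for the same free space:

    # hypothetical dataset names; source and destination live in the same pool
    zfs snapshot tank/src@migrate
    zfs send tank/src@migrate | zfs receive tank/dst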
2008 Jun 05
6
slog / log recovery is here!
(From the README) # Jeb Campbell <jebc at c4solutions.net> NOTE: This is a last resort if you need your data now. This worked for me, and I hope it works for you. If you have any reservations, please wait for Sun to release something official, and don't blame me if your data is gone. PS -- This worked for me b/c I didn't try and replace the log on a running system. My
2010 May 16
9
can you recover a pool if you lose the zil (b134+)
I was messing around with a ramdisk on a pool and I forgot to remove it before I shut down the server. Now I am not able to mount the pool. I am not concerned with the data in this pool, but I would like to try to figure out how to recover it. I am running Nexenta 3.0 NCP (b134+). I have tried a couple of the commands (zpool import -f and zpool import -FX llift) root at
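A sketch of the attempts mentioned in the post, plus the -m option that later builds added for importing a pool whose separate log device is missing (llift is the pool name from the post):

    # force the import, then try rewinding past the damaged transactions
    zpool import -f llift
    zpool import -FX llift
    # on builds that support it: import despite a missing log device
    zpool import -m llift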
2009 Aug 12
4
zpool import -f rpool hangs
I had the rpool with two SATA disks in the mirror. Solaris 10 5.10 Generic_141415-08 i86pc i386 i86pc. Unfortunately the first disk, the one with the grub loader, failed with unrecoverable block write/read errors. Now I have the problem of importing rpool after the first disk has failed. So I decided to do "zpool import -f rpool" with only the second disk, but it hangs and the system is
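Read-only import arrived in later Solaris/illumos releases, but where available it is a common first attempt when a forced import hangs, since it skips log replay and writes nothing to the pool; a sketch:

    # list importable pools without actually importing anything
    zpool import
    # read-only import avoids replaying the intent log or modifying the pool
    zpool import -f -o readonly=on rpool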
2013 Jan 08
3
pool metadata has duplicate children
I seem to have managed to end up with a pool that is confused about its children disks. The pool is faulted with corrupt metadata: pool: d state: FAULTED status: The pool metadata is corrupted and the pool cannot be opened. action: Destroy and re-create the pool from a backup source. see: http://illumos.org/msg/ZFS-8000-72 scan: none requested config: NAME STATE
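zdb can dump the on-disk configuration of an unimported pool, which is one way to see the duplicated child entries; a sketch, assuming the pool name d from the post:

    # print the configuration of the exported/faulted pool
    zdb -e -C d
    # a rewind import discards the newest txgs and may get past bad metadata
    zpool import -F d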
2011 Nov 05
4
ZFS Recovery: What do I try next?
I would like to pick the brains of the ZFS experts on this list: what would you do next to try and recover this zfs pool? I have a ZFS RAIDZ1 pool named bank0 that I cannot import. It was composed of four 1.5 TiB disks. One disk is totally dead. Another had SMART errors, but using GNU ddrescue I was able to copy all the data off successfully. I have copied all 3 remaining disks as images using
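A sketch of the image-based approach described: copy each readable disk with GNU ddrescue, then point zpool import at the directory holding the images instead of /dev (paths hypothetical):

    # copy a failing disk, keeping a map file so the run can be resumed
    ddrescue -f /dev/ada1 /images/disk1.img /images/disk1.map
    # search the image directory for pool members and import from there
    zpool import -d /images bank0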
2011 Nov 08
1
Single-disk rpool with inconsistent checksums, import fails
Hello all, I have an oi_148a PC with a single root disk, and recently it started failing to boot - it hangs after the copyright message whenever I use any of my GRUB menu options. Booting with an oi_148a LiveUSB I had around since installation, I ran some zdb traversals over the rpool and some zpool import attempts. The imports fail by running the kernel out of RAM (as recently discussed on the list with
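The sort of zdb traversal mentioned can be run against an unimported pool with -e; a sketch of a block-level walk with checksum verification:

    # traverse all blocks of the exported rpool, verifying checksums
    zdb -e -bcsv rpool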
2013 Feb 23
1
Old ICH7 SATA-2 question
Hello there, I've got a question about SATA. I've got an ASUS P5GC-MX/1333 with ICH7 (SATA2 support) and a few HDDs with SATA2. system: uname -a FreeBSD diablo.miekoff.local 9.1-STABLE FreeBSD 9.1-STABLE #1 r246666: Tue Feb 12 00:19:07 MSK 2013 root at diablo.miekoff.local:/usr/obj/usr/src/sys/DIABLO64 amd64 camcontrol info camcontrol iden ada2 pass2: <ST3500320AS SD1A> ATA-8 SATA 2.x
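To check what link speed was actually negotiated, the attach messages and the drive's identify page are the usual places to look; a sketch:

    # the attach line in dmesg reports the negotiated transfer rate
    dmesg | grep ada2
    # identify shows what the drive itself claims to support
    camcontrol identify ada2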
2009 Jan 15
2
zfs drive keeps failing between export and import
I have a zpool that consists of a two-drive mirror. The two times I took the zpool offline, I had to resilver one of the drives (the same drive both times) when I imported it back. All drives in the pool show no read, write, or checksum errors and are new, so I suspect a software problem before hardware. Both drives are encrypted geli devices. I tried to reproduce the error with 1GB
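A sketch of the export/import cycle over geli providers that the post describes (device and keyfile names hypothetical):

    # export the pool before detaching the encrypted providers
    zpool export tank
    geli detach ada0.eli
    # reattach the providers, then import; a resilver on every import
    # suggests one side's labels are going stale between cycles
    geli attach -k /root/ada0.key /dev/ada0
    zpool import tank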
2007 Sep 19
3
ZFS panic when trying to import pool
I have a raid-z zfs filesystem with 3 disks. The disks were starting to have read and write errors, and got so bad that I started to see trans_err. The server locked up and was reset. Now, when trying to import the pool, the system panics. I installed the latest Recommended patch cluster on my Solaris U3 and also installed the latest kernel patch (120011-14). But still, when trying to do zpool import
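Solaris 10 U3 predates the pool-recovery options, but on later releases the usual way past a panic-on-import is a read-only or rewind import; a sketch with a hypothetical pool name:

    # read-only import avoids replaying the state that triggers the panic
    zpool import -f -o readonly=on tank
    # rewind import, discarding the most recent transactions
    zpool import -f -F tank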
2012 Feb 03
6
Spectacularly disappointing disk throughput
Greetings! I've got a FreeBSD-based (FreeNAS) appliance running as an HVM DomU. Dom0 is Debian Squeeze on an AMD990 chipset system with IOMMU enabled. The DomU sees six physical drives: one of them is a USB stick that I've passed through in its entirety as a block device. The other five are SATA drives attached to a controller that I've handed to the DomU with PCI
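A quick way to separate hypervisor overhead from filesystem overhead is to benchmark a raw passed-through disk and the filesystem independently; a sketch with hypothetical device and path names:

    # raw command overhead and sequential transfer rate of one disk
    diskinfo -ct /dev/ada1
    # sequential write through the filesystem
    dd if=/dev/zero of=/mnt/tank/bench bs=1m count=2048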
2007 Feb 07
4
NFS share problem with mac os x client
Hello, I am currently testing the beauty of zfs. I have installed OpenSolaris on a spare server to test NFS exports. After creating tank1 with zpool and a child filesystem tank1/nfsshare with zfs, I set the option sharenfs=on on tank1/nfsshare. With Mac OS X as the client I can mount the filesystem in Finder.app with nfs://server/tank1/nfsshare, but if I copy a file an error occurs. Finder says "The
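A sketch of the server-side setup described, using the names from the post; mounting from Terminal on the Mac often gives a clearer error message than Finder does:

    # server side: create the dataset and share it over NFS
    zfs create tank1/nfsshare
    zfs set sharenfs=on tank1/nfsshare
    # Mac OS X client side (hypothetical mount point)
    sudo mkdir -p /Volumes/nfsshare
    sudo mount -t nfs server:/tank1/nfsshare /Volumes/nfsshare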
2010 Jun 02
11
ZFS recovery tools
Hi, I have just recovered from a ZFS crash. During the agonizing time this took, I was surprised to learn how undocumented the tools and options for ZFS recovery were. I managed to recover thanks to some great forum posts from Victor Latushkin; without his posts I would still be crying at night... I think the worst example is the zdb man page; all it does is ask you
2012 Jan 08
0
Pool faulted in a bad way
Hello, I have been asked to take a look at a pool on an old OSOL 2009.06 host. It had been left unattended for a long time and was found in a FAULTED state. Two of the disks in the raidz2 pool seem to have failed; one has been replaced by a spare, the other is UNAVAIL. The machine was restarted and the damaged disks were removed to make it possible to access the pool without it hanging
2007 Dec 13
0
zpool version 3 & Uberblock version 9, zpool upgrade only half succeeded?
We are currently experiencing a huge performance drop on our zfs storage server. We have 2 pools: pool 1, stor, is a raidz out of 7 iscsi nodes; home is a local mirror pool. Recently we had some issues with one of the storage nodes, and because of that the pool was degraded. Since we did not succeed in bringing this storage node back online (on the zfs level) we upgraded our NAS head from opensolaris b57
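zpool upgrade reports each pool's on-disk version, which is the first thing to check when an upgrade appears to have only half applied; a sketch using the pool names from the post:

    # list supported versions and show pools still on older versions
    zpool upgrade -v
    zpool upgrade
    # upgrade specific pools to the newest version this system supports
    zpool upgrade stor home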
2007 Apr 18
2
zfs block allocation strategy
Hi, quoting from zfs docs "The SPA allocates blocks in a round-robin fashion from the top-level vdevs. A storage pool with multiple top-level vdevs allows the SPA to use dynamic striping to increase disk bandwidth. Since a new block may be allocated from any of the top-level vdevs, the SPA implements dynamic striping by spreading out writes across all available top-level vdevs" Now,
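A sketch of a pool with two top-level raidz vdevs (hypothetical disks); the SPA spreads new block allocations across both, which is the dynamic striping the quoted passage describes:

    # two top-level raidz vdevs in one pool
    zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 raidz c1t0d0 c1t1d0 c1t2d0
    # confirm the two-vdev layout
    zpool status tank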
2002 Aug 23
1
Legends and Fonts
Hello. Is it possible to specify the font used by legend()? I would like to specify a fixed-width font so that I can line up parts of vertically stacked curve labels. For example, it would be nice if I could align the names, ages, and weights in the following three curve labels: Bob age=7 weight=100 Alexander age=13 weight=150 Susan age=20 weight=130 Is there perhaps a clever
2007 Sep 18
5
ZFS panic in space_map.c line 125
One of our Solaris 10 update 3 servers panicked today with the following error: Sep 18 00:34:53 m2000ef savecore: [ID 570001 auth.error] reboot after panic: assertion failed: ss != NULL, file: ../../common/fs/zfs/space_map.c, line: 125 The server saved a core file, and the resulting backtrace is listed below: $ mdb unix.0 vmcore.0 > $c vpanic() 0xfffffffffb9b49f3() space_map_remove+0x239()
1999 Nov 11
6
Compilation of R under Mandrake Linux 6.1 (helios)
I've just installed Mandrake Linux, then compiled R-0.65.1. Whether because I omitted some necessary items when I selected software for installation, or because they needed to be loaded afterwards anyway, I found it necessary to install the following packages in order to compile R-0.65.1: pgcc-g77-1.1.3-3mdk.i586.rpm (Fortran g77 compiler) XFree86-devel-3.3.5-3mdk.i586.rpm (X.h etc
2013 Nov 03
1
FreeBSD 10 Beta 2: make installkernel failure with installer-provided ZFS configuration.
Hi, I was trying to rebuild world on a FreeBSD 10 test system that I had just installed. ZFS root was set up; I let the installation program do all the ZFS setup and configuration, and put root on a 5-disk encrypted raidz array. Besides the installer configuring 5 times the amount of swap space I asked for (I asked for 8 GB; the installer put 8 GB on each drive, for 40 GB in total), everything was working
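For reference, the standard rebuild sequence the post was partway through; the reported failure hit at the installkernel step:

    cd /usr/src
    make buildworld
    make buildkernel
    make installkernel
    # then reboot to single-user mode and run: make installworld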