similar to: disks in zpool gone at the same time

Displaying 20 results from an estimated 200 matches similar to: "disks in zpool gone at the same time"

2010 Jul 16
6
Lost zpool after reboot
Hello, I have a dual boot with Windows 7 64-bit Enterprise Edition and OpenSolaris build 134, on a Sun Ultra 40 M1 workstation. Three hard drives: 2 in a ZFS mirror, 1 shared with Windows. For the last 2 days I was working in Windows. I didn't touch the hard drives in any way, except that I once opened Disk Management to figure out why an external USB hard drive was not being listed.
2011 Jan 04
0
zpool import hangs system
Hello, I've been using NexentaStor Community Edition with no issues for a while now. However, last week I was going to rebuild a different system, so I started to copy all the data off it to a raidz2 volume on my CE system. This was going fine until I noticed that the copy had stalled and the entire system was non-responsive. I let it sit for several hours with no
2008 Dec 04
11
help diagnosing system hang
Hi all, First, I'll say my intent is not to spam a bunch of lists, but after posting to opensolaris-discuss I had someone communicate with me offline that these lists would possibly be a better place to start. So here we are. For those on all three lists, sorry for the repetition. Second, this message is meant to solicit help in diagnosing the issue described below. Any hints on
2009 Jul 29
0
LVM and ZFS
I'm curious whether there are any potential problems with using LVM metadevices as ZFS zpool targets. I have a couple of situations where using a device directly by ZFS causes errors on the console about "Bus" and lots of "stalled" I/O. But as soon as I wrap that device inside an LVM metadevice and then use it in the ZFS zpool, things work perfectly fine and smoothly (no
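A minimal sketch of the workaround being described, using Solaris Volume Manager (what Solaris calls LVM); the slice names are placeholders, not the poster's actual devices:

    # SVM needs state database replicas before any metadevice can be created
    metadb -a -f c1t2d0s7
    # d10: a one-way concat/stripe over the troublesome disk's slice
    metainit d10 1 1 c1t3d0s0
    # hand the metadevice, rather than the raw disk, to ZFS
    zpool create tank /dev/md/dsk/d10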
2008 Jan 17
9
ATA UDMA data parity error
Hey all, I'm not sure if this is a ZFS bug or a hardware issue I'm having - any pointers would be great! The following contents include:
- high-level info about my system
- my first thought to debugging this
- stack trace
- format output
- zpool status output
- dmesg output
High-Level Info About My System
---------------------------------------------
- fresh
2009 Jul 07
0
[perf-discuss] help diagnosing system hang
Interesting... I wonder what differs between your system and mine. With my dirt-simple stress test:
server1# zpool create X25E c1t15d0
server1# zfs set sharenfs=rw X25E
server1# chmod a+w /X25E
server2# cd /net/server1/X25E
server2# gtar zxf /var/tmp/emacs-22.3.tar.gz
and a fully patched X42420 running Solaris 10 U7, I still see these errors: Jul 7 22:35:04 merope Error for Command:
2011 Aug 09
7
Disk IDs and DD
Hiya, Is there any reason (and anything to worry about) if disk target IDs don't start at 0 (zero)? For some reason mine are like this (3 controllers - 1 onboard and 2 PCIe):
AVAILABLE DISK SELECTIONS:
       0. c8t0d0 <ATA-ST9160314AS-SDM1 cyl 19454 alt 2 hd 255 sec 63>
          /pci@0,0/pci10de,cb84@5/disk@0,0
       1. c8t1d0 <ATA-ST9160314AS-SDM1
2010 May 28
21
expand zfs for OpenSolaris running inside vm
Hello all, I have constrained disk space (only 8GB) while running the OS inside a VM, and now I want to add more. It is easy to add for the VM, but how can I update the fs in the OS? I cannot use autoexpand because it isn't implemented in my system:
$ uname -a
SunOS sopen 5.11 snv_111b i86pc i386 i86pc
If it was 171 it would be great, right? Doing the following: o added a new virtual HDD (it becomes
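On a build without autoexpand, one commonly suggested route is to attach a larger virtual disk as a mirror, let it resilver, and detach the small one; a sketch with placeholder pool and device names (a root pool would additionally need boot blocks installed on the new disk):

    # attach the new, larger virtual disk as a mirror of the small one
    zpool attach rpool c8t0d0s0 c8t1d0s0
    # wait for "zpool status rpool" to report the resilver complete, then
    zpool detach rpool c8t0d0s0
    # an export/import (or reboot) lets the pool grow to the new disk's size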
2010 Mar 09
0
snv_133 mpt_sas driver
Hi all, Today a new message appeared on my system and another freeze happened. The message is:
Mar 9 06:20:01 zfs01 failed to configure smp w50016360001e06bf
Mar 9 06:20:01 zfs01 mpt: [ID 201859 kern.warning] WARNING: smp_start do passthru error 16
Mar 9 06:20:01 zfs01 scsi: [ID 243001 kern.warning] WARNING: /pci@0,0/pci8086,3410@9/pci1000,3150@0 (mpt2):
Mar 9
2009 Dec 12
0
Messed up zpool (double device label)
Hi! I tried to add another FireWire drive to my existing four devices, but it turned out that the OpenSolaris IEEE 1394 support doesn't seem to be well-engineered. After it failed to recognize the new device, and after exporting and importing the existing zpool, I get this zpool status:
  pool: tank
 state: DEGRADED
status: One or more devices could not be used because the label is missing or
2009 Dec 16
27
zfs hanging during reads
Hi, I hope there's someone here who can possibly provide some assistance. I've had this read problem now for the past 2 months and just can't get to the bottom of it. I have a home snv_111b server with a ZFS raid pool (4 x Samsung 750GB SATA drives). The motherboard is an ASUS M2N68-CM (4 SATA ports) with an Athlon LE1620 single-core CPU and 4GB of RAM. I am using it
2010 Jan 10
5
Repeating scrub does random fixes
I've been using a 5-disk raidz for years on an SXCE machine, which I converted to OSOL. The only time I ever had ZFS problems in SXCE was with snv_120, which was fixed. So now I'm at OSOL snv_111b and I'm finding that scrub repairs errors on random disks. If I repeat the scrub, it will fix errors on other disks. Occasionally it runs cleanly. That it doesn't
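The repeat-scrub loop being described is just the following, with the per-device CKSUM column of zpool status as the thing to watch (the pool name is an example):

    zpool scrub tank
    # once "zpool status tank" reports the scrub complete, inspect the
    # per-disk READ/WRITE/CKSUM counters and any repaired-bytes figure
    zpool status -v tank
    # reset the counters so the next pass shows only new repairs
    zpool clear tank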
2013 Oct 26
2
[PATCH] 1. changes for vdiskadm on illumos based platform
2. update ZFS in libfsimage from illumos for pygrub
diff -r 7c12aaa128e3 -r c2e11847cac0 tools/libfsimage/Rules.mk
--- a/tools/libfsimage/Rules.mk	Thu Oct 24 22:46:20 2013 +0100
+++ b/tools/libfsimage/Rules.mk	Sat Oct 26 20:03:06 2013 +0400
@@ -2,11 +2,19 @@
 include $(XEN_ROOT)/tools/Rules.mk
 CFLAGS += -Wno-unknown-pragmas -I$(XEN_ROOT)/tools/libfsimage/common/
2009 Jan 21
8
cifs perfomance
Hello! I have set up a ZFS/CIFS home storage server, and now have low performance playing movies stored on this ZFS from a Windows client. The server hardware is not new, but under Windows its performance was normal. The CPU is an AMD Athlon Barton/Thunderbird 2500 running at 1.7GHz, with 1024MB RAM and this storage:
usb c4t0d0 ST332062-0A-3.AA-298.09GB /pci@0,0/pci1458,5004@2,2/cdrom@1/disk@
2007 Oct 14
1
odd behavior from zpool replace.
I've got a little zpool with a naughty raidz vdev that won't take a replacement which, as far as I can tell, should be adequate. A history: this could well be some bizarro edge case, as the pool doesn't have the cleanest lineage. Initial creation happened on NexentaCP inside VMware on Linux. I had given the virtual machine raw device access to 4 500GB drives and 1 ~200GB
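For reference, the operation in question takes the pool, the outgoing device, and the incoming one; all names here are placeholders:

    # swap a raidz member for a new disk; ZFS resilvers onto the newcomer
    zpool replace tank c1t4d0 c2t1d0
    # the new device must be at least as large as the one it replaces,
    # otherwise the command refuses with a "device is too small" style error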
2012 Feb 04
2
zpool fails with panic in zio_ddt_free()
Hello all, I am not sure my original mail got through to the list (I haven't received it back), so I attach it below. Anyhow, now I have a saved kernel crash dump of the system panicking when it tries to - I believe - deferred-release the corrupted deduped blocks which are no longer referenced by the userdata/blockpointer tree. As I previously wrote in my thread on unfixeable
2012 Jan 11
0
Clarifications wanted for ZFS spec
I'm reading the "ZFS On-disk Format" PDF (dated 2006 - are there newer releases?), and have some questions regarding whether it is outdated:
1) On page 16 it has the following phrase (which I think is in general invalid): The value stored in offset is the offset in terms of sectors (512 byte blocks). To find the physical block byte offset from the beginning of a slice,
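The spec's rule (the offset is in 512-byte sectors, and the physical byte offset adds 0x400000 for the two leading vdev labels plus the boot block) can be sanity-checked with shell arithmetic; the sample offset value is made up:

    # hypothetical DVA offset of 0x10 sectors
    $ printf '%#x\n' $(( (0x10 << 9) + 0x400000 ))
    0x402000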
2008 Sep 05
0
raidz pool metadata corrupted nexanta-core->freenas 0.7->nexanta-core
I made a bad judgment call and now my raidz pool is corrupted. I have a raidz pool running on OpenSolaris b85. I wanted to try out FreeNAS 0.7, and tried to add my pool to FreeNAS. After adding the ZFS disk, vdev, and pool, I decided to back out and went back to OpenSolaris. Now my raidz pool will not mount, and I got the following errors. I hope some expert can help me recover from this error.
2006 May 09
3
Possible corruption after disk hiccups...
I'm not sure exactly what happened with my box here, but something caused a hiccup on multiple SATA disks...
May 9 16:40:33 sol scsi: [ID 107833 kern.warning] WARNING: /pci@0,0/pci10de,5c@9/pci-ide@a/ide@0 (ata6):
May 9 16:47:43 sol scsi: [ID 107833 kern.warning] WARNING: /pci@0,0/pci-ide@7/ide@1 (ata3):
May 9 16:47:43 sol timeout: abort request, target=0
2007 Mar 30
0
On disk SMI & EFI label documentation
I'm not sure if this alias is only to discuss the Solaris ZFS implementation or others. I'm writing my own ZFS code from scratch in Java. I'll skip the reasons why I'm doing this in Java; let's just assume I have some. Is there any good documentation on the disk label structures? Right now my code is just reading the ZFS labels and the nvlist data, but when
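A from-scratch label reader can be cross-checked against zdb, which dumps all four vdev labels of a device as parsed nvlists; the device path is an example:

    # print the nvlist contents of labels 0-3 on this slice
    zdb -l /dev/rdsk/c0t0d0s0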