Displaying 20 results from an estimated 20000 matches similar to: "zpool command hangs, how to recover?"
2008 Aug 25
5
Unable to import zpool after a system hang during zfs destroy
Hi all,
I have a RAID-Z zpool made up of 4 x SATA drives running on Nexenta 1.0.1 (OpenSolaris b85 kernel). It holds some ZFS filesystems and a few volumes that are shared to various Windows boxes over iSCSI. On one particular iSCSI volume, I discovered that I had mistakenly deleted some files from the FAT32 partition that is on it. The files were still in a ZFS snapshot that was made earlier
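When deleted files still exist in an earlier snapshot of a zvol, one common recovery path is to clone that snapshot and expose the clone as a new block device. A minimal sketch; the pool, volume, and snapshot names here are hypothetical:

    # clone the pre-deletion snapshot into a new, writable volume
    zfs clone tank/iscsivol@before-delete tank/iscsivol-recovered
    # the clone shows up as a block device that can be shared over iSCSI
    ls /dev/zvol/dsk/tank/iscsivol-recovered

The FAT32 partition inside the clone can then be mounted read-only on a Windows box to copy the files back.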
2009 May 09
2
Recover files from overwritten zpool?
To make a long story short, after a Solaris reinstall I needed to access
a disk from the previous install. I realize now I should have done a
zpool import, but instead I recreated the pool, thinking this would
bring my file system back. I destroyed the new pool, but other than that
I have done nothing to overwrite data.
Is it possible to recover the data from the old pool?
--Stig
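For what it's worth, a destroyed pool remains importable as long as its labels have not been overwritten; a hedged first step, assuming the old pool was called tank:

    # list destroyed pools whose labels are still intact
    zpool import -D
    # if the old pool appears, try importing it (add -f if it looks active)
    zpool import -D tank

The caveat in this case is that recreating a pool on the same disk rewrites the labels, so -D may only show the destroyed new pool, not the old one.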
2007 Nov 27
0
zpool detach hangs, causing other zpool commands, format, df, etc. to hang
Customer has a Thumper running:
SunOS x4501 5.10 Generic_120012-14 i86pc i386 i86pc
where running "zpool detach disk1 c6t7d0" to detach a mirror causes the zpool
command to hang with the following kernel stack trace:
PC: _resume_from_idle+0xf8 CMD: zpool detach disk1 c6t7d0
stack pointer for thread fffffe84d34b4920: fffffe8001c30c10
[ fffffe8001c30c10 _resume_from_idle+0xf8() ]
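To capture the full kernel stack of a hung zpool process like this one, the usual mdb idiom is to walk the process's threads; a sketch:

    # print kernel stacks for every thread of any running zpool process
    echo "::pgrep zpool | ::walk thread | ::findstack -v" | mdb -k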
2009 Feb 11
0
failmode=continue prevents zpool processes from hanging and being unkillable?
> Dear ZFS experts,
> somehow one of my zpools got corrupted. Symptom is that I cannot
> import it any more. To me it is of lesser interest why that happened.
> What is really challenging is the following.
>
> Any effort to import the zpool hangs and is unkillable. E.g. if I
> issue a "zpool import test2-app" the process hangs and cannot be
> killed. As this
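The failmode property named in the subject can be supplied at import time; a sketch using the pool name from the post (whether it actually prevents the hang on an already-corrupted pool is exactly the open question here):

    # failmode=continue returns EIO on I/O failure instead of blocking forever
    zpool import -o failmode=continue test2-app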
2008 Feb 05
2
ZFS+ config for 8 drives, mostly reads
Hi,
I posted in the Solaris install forum as well about the fileserver I'm building for media files but wanted to ask more specific questions about zfs here. The setup is 8x500GB SATAII drives to start and down the road another 4x750 SATAII drives; the machine will mostly be doing reads and streaming data over GigaE.
-I'm under the impression that ZFS+(ZFS2) is similar to
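For a read-mostly media pool on eight equal drives, a single double-parity raidz2 vdev is a common layout; a minimal sketch with hypothetical device names:

    # one 8-disk raidz2 vdev: survives two disk failures, streams reads well
    zpool create mediapool raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 \
        c1t4d0 c1t5d0 c1t6d0 c1t7d0

The later 4x750GB drives could then join the pool as a second raidz vdev via zpool add.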
2009 Feb 18
4
Zpool scrub in cron hangs u3/u4 server, stumps tech support.
I've got a server that freezes when I run a zpool scrub from cron.
Zpool scrub runs fine from the command line, no errors.
The freeze happens within 30 seconds of the zpool scrub starting.
The one core dump I succeeded in taking showed the ARC cache eating up
all the RAM.
The server's running Solaris 10 u3, kernel patch 127727-11, but it's
been patched and seems to have
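If the ARC really is consuming all of RAM during the scrub, the standard Solaris 10 mitigation is to cap it in /etc/system; the 4 GB value below is an arbitrary example, not a recommendation for this server:

    * /etc/system -- cap the ZFS ARC at 4 GB (takes effect after a reboot)
    set zfs:zfs_arc_max = 0x100000000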
2010 Jan 03
2
"zpool import -f" not forceful enough?
I had to use the labelfix hack (and I had to recompile it at that) on 1/2 of an old zpool. I made this change:
/* zio_checksum(ZIO_CHECKSUM_LABEL, &zc, buf, size); */
zio_checksum_table[ZIO_CHECKSUM_LABEL].ci_func[0](buf, size, &zc);
and I'm assuming [0] is the correct endianness, since afterwards I saw it come up with "zpool import".
Unfortunately, I
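One way to verify a label repair like this is zdb, which dumps all four vdev labels from a device; a sketch with a hypothetical device path:

    # print the four ZFS labels; a fixed label should now decode cleanly
    zdb -l /dev/rdsk/c0t0d0s0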
2007 Apr 02
0
zpool scrub checksum error
Hello,
I've already read many posts about checksum errors on zpools, but I'd like some more information, please.
We use 2 Sun servers (AMD x64, SunOS 5.10 Generic_118855-36, hopefully all patches) with two hardware RAIDs (RAID 10) connected through Fibre Channel. Disk space is about 3 TB split into 4 pools, each including several (about 10-15) ZFS filesystems. After 22 days of uptime I got a first
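For context, the usual triage for pool checksum errors is to let a scrub touch every block and then list the affected files; a sketch with a hypothetical pool name:

    zpool scrub pool1        # read and verify every block in the pool
    zpool status -v pool1    # -v lists files affected by unrecoverable errors
    zpool clear pool1        # reset the error counters once the cause is fixed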
2013 Mar 23
0
Drives going offline in zpool
Hi,
I have a Dell MD1200 connected to two heads (Dell R710). The heads have a
Perc H800 card, and the drives are configured as RAID0 (Virtual Disk) in the
RAID controller.
One of the drives crashed and was replaced by a spare. Resilvering was
triggered but fails to complete due to drives going offline. I have to
reboot the head (R710) and the drives come online. This happened repeatedly
when
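When a device drops offline and only a reboot brings it back, the non-reboot path is usually to online it explicitly and clear the errors so resilvering can resume; a sketch with hypothetical names (this does not address the underlying H800/RAID0 problem):

    zpool online datapool c8t4d0    # ask ZFS to reopen the device
    zpool clear datapool            # clear error counts on the pool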
2009 Aug 31
0
zpool scrub results in pool deadlock
I just ran zpool scrub on an active pool on an x4170 running S10U7 with the latest patches, and iostat immediately dropped to 0 for all the pool devices; all processes associated with that device were hard locked, e.g., kill -9 on a zpool status process was ineffective. However, other zpools on the system, such as the root pool, continued to work.
Neither init 6 nor reboot was able to take
2010 Jun 16
0
files lost in the zpool - retrieval possible?
Greetings,
my OpenSolaris 06/2009 installation on a Thinkpad x60 notebook is a little unstable. From the symptoms during installation it seems there might be an issue with the ahci driver. No problem with the OpenSolaris LiveCD system.
Some weeks ago during copy of about 2 GB from a USB stick to the zfs filesystem, the system froze and afterwards refused to boot.
Now when investigating
2007 Dec 12
0
Degraded zpool won't online disk device, instead resilvers spare
I've got a zpool that has 4 raidz2 vdevs, each with 4 disks (750GB), plus 4 spares. At one point 2 disks failed (in different vdevs). The message in /var/adm/messages for the disks was 'device busy too long'. Then SMF printed this message:
Nov 23 04:23:51 x.x.com EVENT-TIME: Fri Nov 23 04:23:51 EST 2007
Nov 23 04:23:51 x.x.com PLATFORM: Sun Fire X4200 M2, CSN:
2008 Jul 06
14
confusion and frustration with zpool
I have a zpool which has grown "organically". I had a 60Gb disk, I added a 120, I added a 500, I got a 750 and sliced it and mirrored the other pieces.
The 60 and the 120 are internal PATA drives, the 500 and 750 are Maxtor OneTouch USB drives.
The original system I created the 60+120+500 pool on was Solaris 10 update 3, patched to use ZFS sometime last fall (November I believe). In
2011 Jul 26
2
recover zpool with a new installation
Hi all,
I lost access to my storage because rpool won't boot. I tried to recover, but
OpenSolaris says to "destroy and re-create".
My rpool is installed on a flash drive, and my pool (with my data) is on
other disks.
My question is: is it possible to reinstall OpenSolaris on a new flash drive,
without touching my pool of disks, and then recover that pool?
Thanks.
Regards,
--
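In principle yes: a data pool on separate disks survives reinstalling the OS to the flash drive, and can simply be imported afterwards; a hedged sketch, assuming the data pool is named datapool:

    # after installing to the new flash drive, scan the attached disks
    zpool import
    # -f is needed because the pool was last used by the old install
    zpool import -f datapool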
2008 Jan 22
0
zpool attach problem
On a V240 running s10u4 (no additional patches), I had a pool which looked like this:
> # zpool status
>   pool: pool01
>  state: ONLINE
>  scrub: none requested
> config:
>
>         NAME        STATE     READ WRITE CKSUM
>         pool01      ONLINE       0     0     0
>           mirror
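For reference, attaching to an existing mirror names the pool, one current member, and the new disk; a sketch with hypothetical device names:

    # add c1t2d0 as another side of the mirror that contains c1t0d0
    zpool attach pool01 c1t0d0 c1t2d0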
2007 Nov 13
0
In a zpool consisting of regular files, when I remove a file vdev, zpool status cannot detect it?
I make a file zpool like this:
bash-3.00# zpool status
  pool: filepool
 state: ONLINE
 scrub: none requested
config:

        NAME              STATE     READ WRITE CKSUM
        filepool          ONLINE       0     0     0
          /export/f1.dat  ONLINE       0     0     0
          /export/f2.dat  ONLINE       0     0     0
          /export/f3.dat  ONLINE       0     0     0
        spares
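A pool like this can be reproduced with file vdevs; a minimal sketch of the setup (the sizes and the spare are assumptions):

    # create backing files and build a pool from them, with one file as a spare
    mkfile 128m /export/f1.dat /export/f2.dat /export/f3.dat /export/f4.dat
    zpool create filepool /export/f1.dat /export/f2.dat /export/f3.dat \
        spare /export/f4.dat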
2010 Sep 24
3
Kernel panic on ZFS import - how do I recover?
I posted this on the www.nexentastor.org forums, but no answer so far, so I apologize if you are seeing this twice. I am also engaged with nexenta support, but was hoping to get some additional insights here.
I am running nexenta 3.0.3 community edition, based on 134. The box crashed yesterday, and goes into a reboot loop (kernel panic) when trying to import my data pool, screenshot attached.
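Build 134 includes the pool-recovery import flags, which are the usual next step for a panic-on-import; a hedged sketch, assuming the data pool is named data:

    # dry run: report whether discarding the last few transactions would
    # make the pool importable, without changing anything
    zpool import -F -n data
    # if the dry run looks sane, perform the actual rewinding import
    zpool import -F data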
2007 Nov 13
3
zpool status cannot detect the removed vdev?
I make a file zpool like this:
bash-3.00# zpool status
  pool: filepool
 state: ONLINE
 scrub: none requested
config:

        NAME              STATE     READ WRITE CKSUM
        filepool          ONLINE       0     0     0
          /export/f1.dat  ONLINE       0     0     0
          /export/f2.dat  ONLINE       0     0     0
          /export/f3.dat  ONLINE       0     0     0
        spares
2009 Dec 12
0
Messed up zpool (double device label)
Hi!
I tried to add another FireWire drive to my existing four devices, but it turned out that the OpenSolaris IEEE1394 support doesn't seem to be well-engineered.
After it failed to recognize the new device, and after exporting and importing the existing zpool, I get this zpool status:
  pool: tank
 state: DEGRADED
status: One or more devices could not be used because the label is missing or
2009 Aug 02
1
zpool status showing wrong device name (similar to: ZFS confused about disk controller)
Hi All,
over the last couple of weeks, I had to boot from my rpool from various physical
machines because some component on my laptop mainboard blew up (you know that
burned electronics smell?). I can't retrospectively document all I did, but I am
sure I recreated the boot-archive, ran devfsadm -C and deleted
/etc/zfs/zpool.cache several times.
Now zpool status is referring to a
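When zpool status shows stale device names after hardware moves like this, the usual refresh for a non-root pool is an export/import cycle, which re-records the current paths; rpool cannot be exported while the system is booted from it, so there it would have to be done from live media. A sketch with a hypothetical pool name:

    zpool export datapool
    zpool import datapool    # re-scans /dev/dsk and records current device paths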