Displaying 20 results from an estimated 600 matches similar to: "zfs export and import between different controllers"
2008 Sep 05
0
raidz pool metadata corrupted nexanta-core->freenas 0.7->nexanta-core
I made a bad judgment and now my raidz pool is corrupted. I have a
raidz pool running on Opensolaris b85. I wanted to try out freenas 0.7
and tried to add my pool to freenas. After adding the zfs disk,
vdev and pool, I decided to back out and went back to opensolaris. Now
my raidz pool will not mount and I get the following errors. I hope an
expert can help me recover from this.
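A first diagnostic step in a case like this, assuming the disks are still visible to the OS (the device name below is a placeholder), is to check whether the pool shows up for import at all and whether the vdev labels survived the round trip through freenas:

# zpool import                  (lists pools the system can see without importing them)
# zdb -l /dev/rdsk/c1t0d0s0     (dumps all four vdev labels on one member disk)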
2010 Jun 29
0
Processes hang in /dev/zvol/dsk/poolname
After multiple power outages caused by storms coming through, I can no
longer access /dev/zvol/dsk/poolname, which holds the l2arc and slog devices
for another pool. I don't think this is related, since the pools are offline
pending access to the volumes.
I tried running find /dev/zvol/dsk/poolname -type f and here is the stack;
hopefully this gives someone a hint at what the issue is. I have
2010 May 07
0
confused about zpool import -f and export
Hi, all,
I think I'm missing a concept with import and export. I'm working on installing a Nexenta b134 system under Xen, and I have to run the installer under hvm mode, then I'm trying to get it back up under pv mode. In that process the controller names change, and that's where I'm getting tripped up.
I do a successful install, then I boot OK,
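The usual sequence when the device names are about to change, sketched here with a placeholder pool name, is to export the pool before the reconfiguration and let import rescan the devices afterwards:

# zpool export tank                 (under hvm, before switching modes)
# zpool import -d /dev/dsk tank     (under pv; -d forces a scan of the device directory)

If the pool was not exported cleanly first, the import may need -f.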
2007 Sep 18
5
ZFS panic in space_map.c line 125
One of our Solaris 10 update 3 servers paniced today with the following error:
Sep 18 00:34:53 m2000ef savecore: [ID 570001 auth.error] reboot after
panic: assertion failed: ss != NULL, file:
../../common/fs/zfs/space_map.c, line: 125
The server saved a core file, and the resulting backtrace is listed below:
$ mdb unix.0 vmcore.0
> $c
vpanic()
0xfffffffffb9b49f3()
space_map_remove+0x239()
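For reference, the standard way to pull more context out of a crash dump saved by savecore (the dump file names below follow savecore's defaults) looks like this:

$ mdb unix.0 vmcore.0
> ::status     (summarizes the panic string and dump details)
> ::stack      (same backtrace as $c)
> ::msgbuf     (kernel messages leading up to the panic)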
2010 Aug 28
1
mirrored pool unimportable (FAULTED)
Hi,
more than a year ago I created a mirrored ZFS-Pool consisting of 2x1TB
HDDs using the OSX 10.5 ZFS Kernel Extension (Zpool Version 8, ZFS
Version 2). Everything went fine and I used the pool to store personal
stuff on it, like lots of photos and music. (So getting the data back is
not time critical, but still important to me.)
Later, since the development of the ZFS extension was
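Since the pool is only version 8, one thing worth trying on a newer system whose zpool import supports read-only imports (a sketch; "tank" stands in for the real pool name) is:

# zpool import                           (check whether the mirror shows up at all)
# zpool import -o readonly=on -f tank    (read-only, so nothing is written to a suspect pool)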
2010 May 01
5
Single-disk pool corrupted after controller failure
I had a single spare 500GB HDD and I decided to install a FreeBSD file
server on it for learning purposes, and I moved almost all of my data
to it. Yesterday, and naturally after no longer having backups of the
data in the server, I had a controller failure (SiS 180 (oh, the
quality)) and the HDD was considered unplugged. When I noticed a few
checksum failures on `zfs status` (including two on
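With checksum errors that appear after a controller drop-out, the usual first steps (pool name hypothetical) are to clear the error counters and scrub rather than do anything destructive:

# zpool status -v tank    (lists the files affected by the checksum errors)
# zpool clear tank
# zpool scrub tank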
2008 Oct 19
9
My 500-gig ZFS is gone: insufficient replicas, corrupted data
Hi,
I'm running FreeBSD 7.1-PRERELEASE with a 500-gig ZFS drive. Recently I've encountered a FreeBSD problem (PR kern/128083) and decided to update the motherboard BIOS. It looked like the update went fine, but after that I was shocked to see my ZFS destroyed! Rolling the BIOS back did not help.
Now it looks like that:
# zpool status
pool: tank
state: UNAVAIL
status:
2008 Dec 15
15
Need Help Invalidating Uberblock
I have a ZFS pool that has been corrupted. The pool contains a single device which was actually a file on UFS. The machine was accidentally halted and now the pool is corrupt. There are (of course) no backups and I've been asked to recover the pool. The system panics when trying to do anything with the pool.
root@:/$ zpool status
panic[cpu1]/thread=fffffe8000758c80: assertion failed:
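For background, the labels and uberblocks can be inspected with zdb before anything is overwritten; a rough sketch, with the file-backed vdev path and pool name being hypothetical:

# zdb -l /data/poolfile    (dumps the four vdev labels, each with its uberblock array)
# zdb -uuu -e tank         (dumps the active uberblock of a pool that is not imported)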
2011 Jan 04
0
zpool import hangs system
Hello,
I've been using Nexentastore Community Edition with no issues for a while
now. However, last week I was going to rebuild a different system, so I
started to copy all the data off it to a raidz2 volume on my CE system.
This was going fine until I noticed that the copy had stalled and the
entire system was non-responsive. I let it sit for several hours
with no
2012 Jan 08
0
Pool faulted in a bad way
Hello,
I have been asked to take a look at a pool on an old OSOL 2009.06 host. It had been left unattended for a long time and was found in a FAULTED state. Two of the disks in the raidz2 pool seem to have failed; one has been replaced by a spare, the other is UNAVAIL. The machine was restarted and the damaged disks were removed to make it possible to access the pool without it hanging
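Once the pool can be imported again, the usual cleanup (pool and device names hypothetical) is to replace the UNAVAIL disk and release the hot spare:

# zpool status -x tank
# zpool replace tank c2t3d0 c2t5d0    (replace the failed disk with a new one)
# zpool detach tank c2t4d0            (detach the hot spare once resilvering completes)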
2007 Dec 13
0
zpool version 3 & Uberblock version 9 , zpool upgrade only half succeeded?
We are currently experiencing a very large performance drop on our zfs storage server.
We have 2 pools: pool 1, stor, is a raidz built from 7 iscsi nodes; home is a local mirror pool. Recently we had some issues with one of the storage nodes, and because of that the pool was degraded. Since we did not succeed in bringing this storage node back online (on the zfs level), we upgraded our NAS head from opensolaris b57
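To check whether the on-disk version actually matches what the software expects (pool names taken from the post), the version can be read both from the pool properties and from the cached config:

# zpool upgrade                  (lists pools not running the latest on-disk version)
# zpool get version stor home
# zdb -C stor | grep version     (version recorded in the cached configuration)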
2009 Apr 08
2
ZFS data loss
Hi,
I have lost a ZFS volume and I am hoping to get some help to recover the
information ( a couple of months worth of work :( ).
I have been using ZFS for more than 6 months on this project. Yesterday
I ran a "zvol status" command, the system froze and rebooted. When it
came back the discs where not available.
See bellow the output of " zpool status", "format"
2010 Sep 10
3
zpool upgrade and zfs upgrade behavior on b145
Not sure what the best list to send this to is right now, so I have selected
a few, apologies in advance.
A couple questions. First I have a physical host (call him bob) that was
just installed with b134 a few days ago. I upgraded to b145 using the
instructions on the Illumos wiki yesterday. The pool has been upgraded (27)
and the zfs file systems have been upgraded (5).
chris at bob:~# zpool
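For anyone following along, the commands involved (with rpool standing in for the pool that was upgraded) are roughly:

# zpool upgrade rpool     (moves the pool to the newest on-disk version, 27 on b145)
# zfs upgrade -r rpool    (upgrades every file system in the pool, to version 5 here)
# zpool get version rpool
# zfs get version rpool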
2008 Jan 10
2
Assistance needed expanding RAIDZ with larger drives
Hi all,
Please can you help with my ZFS troubles:
I currently have 3 x 400 GB Seagate NL35's and a 500 GB Samsung Spinpoint in a RAIDZ array that I wish to expand by systematically replacing each drive with a 750 GB Western Digital Caviar.
After failing miserably, I'd like to start from scratch again if possible. When I last tried, the replace command hung for an age, network
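The standard approach, sketched here with placeholder device names, is to swap one drive at a time and let each resilver finish before touching the next:

# zpool replace tank c1t1d0 c1t5d0    (old 400 GB disk -> new 750 GB disk)
# zpool status tank                   (wait until the resilver completes, then repeat)

The extra capacity only shows up after the last member has been replaced and the pool has been exported and imported again.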
2011 Nov 05
4
ZFS Recovery: What do I try next?
I would like to pick the brains of the ZFS experts on this list: What
would you do next to try and recover this zfs pool?
I have a ZFS RAIDZ1 pool named bank0 that I cannot import. It was
composed of 4 1.5 TiB disks. One disk is totally dead. Another had
SMART errors, but using GNU ddrescue I was able to copy all the data
off successfully.
I have copied all 3 remaining disks as images using
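One approach that works with image files, assuming they all sit in one directory, is to point import at that directory instead of /dev (the directory path is hypothetical, the pool name is from the post; -o readonly=on needs a build that supports read-only import):

# zpool import -d /recovery/images                          (scans the directory for vdev labels)
# zpool import -d /recovery/images -f -o readonly=on bank0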
2013 Mar 05
2
make_dev_physpath_alias
Hello all.
I have a supermicro 16 bay box with a LSI 9211-8i card.
We use it for temp data storage, and we wanted to try the lz4 compression.
After updating the source tree to r247839 and doing a make buildworld
cycle, all works fine.
But at boot time we get some warnings.
make_dev_physpath_alias: WARNING - Unable to alias
gptid/281951f4-a996-11e1-83eb-00259061b51a to
enc at
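For what it's worth, enabling lz4 itself is just a pool feature plus a dataset property (the pool and dataset names here are made up):

# zpool set feature@lz4_compress=enabled tank
# zfs set compression=lz4 tank/tempdata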
2007 Sep 13
11
How do I get my pool back?
After having to replace an internal raid card in an X2200 (S10U3 in
this case), I can see the disks just fine - and can boot, so the data
isn't completely missing.
However, my zpool has gone.
# zpool status -x
pool: storage
state: FAULTED
status: One or more devices could not be opened. There are insufficient
replicas for the pool to continue functioning.
action: Attach the
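When a replaced controller changes the device paths on Solaris, rebuilding the /dev links and re-importing will usually bring the pool back; a sketch using the pool name from the post:

# devfsadm -Cv                       (clean up and recreate the device links)
# zpool export storage               (if the system still considers it imported)
# zpool import -d /dev/dsk storage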
2007 Dec 27
2
Failure of gvinum after panic
Hi all,
I have some problems with my gvinum setup after the system panicked.
Afterwards the system fails to find the plexes for the subdisks (or at
least that is what I can understand after searching the gvinum
source code for the error string in the dmesg log).
The machine is an IBM Netfinity 5000 and the internal HW self tests
do not find any errors in the hw.
Luckily my root is
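For completeness, the state of the plexes and subdisks can be inspected and a restart attempted with gvinum's own tools (the plex name below is a guess):

# gvinum list             (shows volumes, plexes and subdisks with their states)
# gvinum start data.p0    (tries to bring the plex back up and revive it)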
2007 Apr 04
1
sun x2100 gmirror problem
Hi,
We're using gmirror on our sun fire x2100 and FreeBSD 6.1-p10. Some days
ago I found this in the logs:
Apr 1 02:12:05 x2100 kernel: ad6: WARNING - WRITE_DMA48 UDMA ICRC error
(retrying request) LBA=612960533
Apr 1 02:12:05 x2100 kernel: ad6: FAILURE - WRITE_DMA48
status=51<READY,DSC,ERROR> error=10<NID_NOT_FOUND> LBA=612960533
Apr 1 02:12:05 x2100 kernel: GEOM_MIRROR:
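When one component of a gmirror starts throwing ICRC/DMA errors like this, checking the mirror state and swapping the disk out goes roughly like this (the mirror name is a guess, ad6 is from the log):

# gmirror status gm0
# gmirror remove gm0 ad6    (drop the failing component)
  ... replace the disk ...
# gmirror insert gm0 ad6    (re-add it and let the mirror rebuild)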