Displaying 20 results from an estimated 2000 matches similar to: "zpool import problem / missing label / corrupted data"
2010 Feb 24
9
Import zpool from FreeBSD in OpenSolaris
I want to import my zpools from FreeBSD 8.0 in OpenSolaris 2009.06.
After reading the few posts (links below) I was able to find on the subject, it seems there is a difference between FreeBSD and Solaris: FreeBSD operates directly on the disk, while Solaris creates a partition and uses that... is that right? Does that make it impossible for OpenSolaris to use zpools from FreeBSD?
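For what it's worth, the usual first steps on the OpenSolaris side look roughly like this (a sketch; the pool name "tank" is a stand-in):

zpool import                    # scan /dev/dsk for importable pools
zpool import -d /dev/dsk tank   # point the scan at an explicit device directory
zpool import -f tank            # force, if the pool was last in use on the other OS

ZFS locates pools by their on-disk labels rather than by partition table, so if the labels sit where Solaris can see them the import will at least be attempted.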
2008 Apr 02
1
delete old zpool config?
Hi experts
zpool import shows some weird config of an old zpool
bash-3.00# zpool import
  pool: data1
    id: 7539031628606861598
 state: FAULTED
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing
        devices and try again.
   see: http://www.sun.com/msg/ZFS-8000-3C
config:

        data1  UNAVAIL  insufficient replicas
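If the old pool is genuinely gone and only its stale labels remain, clearing those labels makes zpool import stop reporting it. A hedged sketch (the device name is a placeholder):

zpool labelclear -f /dev/dsk/c1t1d0s0            # on builds that have labelclear
# On older builds, zero the two 256 KB front labels by hand; ZFS also keeps
# two labels at the end of the device, at an offset that depends on its size.
dd if=/dev/zero of=/dev/rdsk/c1t1d0s0 bs=256k count=2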
2013 Mar 23
0
Drives going offline in Zpool
Hi,
I have a Dell MD1200 connected to two heads (Dell R710). The heads have a
PERC H800 card, and the drives are configured as RAID0 virtual disks in the
RAID controller.
One of the drives crashed and was replaced by a spare. Resilvering was
triggered but fails to complete because drives keep going offline. I have to
reboot the head (R710) and the drives come back online. This has happened repeatedly
when
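Before each reboot it may be worth seeing whether the devices can be brought back without one; a sketch (pool and device names are placeholders):

zpool status -x            # show only unhealthy pools and devices
zpool online tank c8t5d0   # try to re-enable an offlined device
zpool clear tank           # reset error counters so the resilver can resume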
2008 Mar 12
3
Mixing RAIDZ and RAIDZ2 vdevs in the same zpool
I have a customer who has implemented the following layout. As you can
see, he has mostly raidz vdevs but one raidz2 vdev in the same zpool.
What are the implications here? Is this a bad thing to do? Please
elaborate.
Thanks,
Scott Gaspard
Scott.J.Gaspard at Sun.COM
> NAME      STATE   READ WRITE CKSUM
> chipool1  ONLINE     0     0     0
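For reference, zpool normally refuses to mix replication levels within one pool; a layout like the one above usually means the add was forced. A sketch with placeholder device names:

zpool add chipool1 raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0
# refused with a "mismatched replication level" complaint
zpool add -f chipool1 raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0   # -f overrides

The pool works afterwards, but overall resilience is governed by the weakest vdev: lose two disks in any of the raidz1 vdevs and the whole pool is gone, raidz2 or not.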
2008 Jun 04
2
panic on `zfs export` with UNAVAIL disks
hi list,
initial situation:
SunOS alusol 5.11 snv_86 i86pc i386 i86pc
SunOS Release 5.11 Version snv_86 64-bit
3 USB HDDs on 1 USB hub:
zpool status:
 state: ONLINE

        NAME          STATE   READ WRITE CKSUM
        usbpool       ONLINE     0     0     0
          mirror      ONLINE     0     0     0
            c7t0d0p0  ONLINE     0     0     0
            c8t0d0p0
2010 Apr 21
2
HELP! zpool corrupted data
Hello,
After a power outage, our file server running FreeBSD 8.0-p2 will no longer come up due to zpool corruption. I get the following output when trying to import the ZFS pool using either a FreeBSD 8.0-p2 CD or the latest OpenSolaris snv_143 CD:
FreeBSD mfsbsd 8.0-RELEASE-p2.vx.sk:/usr/obj/usr/src/sys/GENERIC amd64
mfsbsd# zpool import
  pool: tank
    id: 1998957762692994918
 state: FAULTED
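On the snv_143 side, the txg-rollback recovery options may be worth trying; a sketch:

zpool import -f tank    # plain forced import
zpool import -nF tank   # dry run: report whether discarding the last few txgs would allow import
zpool import -fF tank   # recovery mode: roll back to the last consistent txg

The -F recovery mode appeared around build 128, so the OpenSolaris CD is the place to attempt it; the older ZFS version in FreeBSD 8.0 does not have it.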
2009 Dec 12
0
Messed up zpool (double device label)
Hi!
I tried to add another FireWire drive to my existing four devices, but it turns out that the OpenSolaris IEEE 1394 support doesn't seem to be well engineered.
The new device was not recognized, and after exporting and importing the existing zpool I get this zpool status:
  pool: tank
 state: DEGRADED
status: One or more devices could not be used because the label is missing or
2012 Dec 30
4
Expanding a raidz vdev in zpool
Hello All,
I have a zpool that consists of 2 raidz vdevs (raidz1-0 and raidz1-1). The
first vdev is 4 1.5TB drives. The second was 4 500GB drives. I replaced the
4 500GB drives with 4 3TB drives.
I replaced one at a time, and resilvered each. Now that the process is complete, I
expected to have an extra 10TB (4*2.5TB) of raw space, but it's still the
same amount of space.
I did an export and
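The usual culprit is that expansion was never triggered after the replacements. A sketch of the standard fixes (the pool name is a placeholder):

zpool set autoexpand=on pool1   # let vdevs grow automatically (snv_117 and later)
zpool online -e pool1 c0t1d0    # or expand each replaced disk explicitly

On older builds, an export followed by a re-import of the pool also picks up the new vdev size.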
2009 Jul 21
1
zpool import is trying to tell me something...
I recently had an X86 system (running Nexenta Elatte, if that matters -- b101 kernel, I think) suffer hardware failure and refuse to boot. I've migrated the disks into a SPARC system (b115) in an attempt to bring the data back online while I see about repairing the former system. However, I'm having some trouble with the import process:
hydra# zpool import
pool: tank
id:
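When the listing shows a numeric pool id, importing by that id rather than by name is a reasonable next step; a sketch (the id below is a placeholder):

zpool import -f 1234567890123456789   # -f because the old host never exported the pool

ZFS pools are endian-adaptive, so moving disks from an x86 box to SPARC is supported in itself.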
2009 Dec 04
2
USB sticks show on one set of devices in zpool, different devices in format
Hello,
I had snv_111b running for a while on an HP DL160 G5, with two 16GB USB sticks comprising the mirrored rpool for boot, and four 1TB drives comprising another pool, pool1, for data.
That's been working just fine for a few months. Yesterday I got it into my mind to upgrade the OS to the latest, which was then snv_127. That worked, and all was well. I also did an upgrade to the
2010 Mar 08
1
zpool will not import on a different controller
I want to move my pool (consisting of five 1.5TB SATA drives in raidz1) to a
different computer. I am encountering issues with controllers: the
motherboard (Asus P5BV-C/4L) has 8 SATA ports, 4 on a Marvell 88SE6145,
which seems not to be supported at all, and 4 on an Intel 82801G, which uses
the pci-ide driver.
I added a 4-port PCI SATA card, a Silicon Image 3114, which appeared to
work after
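After a controller change, rebuilding the device links and rescanning is often enough, since ZFS identifies pool members by on-disk label rather than by device path; a sketch:

devfsadm -Cv               # prune stale /dev links, create ones for the new controller
zpool import -d /dev/dsk   # rescan all visible devices for pool labels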
2009 Jul 25
1
OpenSolaris 2009.06 - ZFS Install Issue
I've installed OpenSolaris 2009.06 on a machine with 5 identical 1TB WD Green drives to create a ZFS NAS. The intended install is one drive dedicated to the OS and the remaining 4 drives in a raidz1 configuration. The install works fine, but creating the raidz1 pool and rebooting causes the machine to report "Cannot find active partition" upon reboot. Below is the command
2008 Jul 28
1
zpool status my_pool , shows a pulled disk c1t6d0 as ONLINE ???
New server build with Solaris 10 5/08,
on a SunFire T5220; this is our first rollout of ZFS and zpools.
We have 8 disks; the boot disk is hardware mirrored (c1t0d0 + c1t1d0).
Created zpool my_pool as raidz using 5 disks + 1 spare:
c1t2d0, c1t3d0, c1t4d0, c1t5d0, c1t6d0, and spare c1t7d0
I am working on alerting & recovery plans for disk failures in the zpool.
As a test, I have pulled disk
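A pulled disk can keep showing ONLINE until something actually touches it; forcing I/O and asking the fault manager is one way to surface the failure. A sketch:

zpool scrub my_pool     # touch every device so the missing disk is noticed
zpool status -v my_pool
fmadm faulty            # see what the fault manager has diagnosed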
2008 Jul 06
14
confusion and frustration with zpool
I have a zpool which has grown "organically". I had a 60Gb disk, I added a 120, I added a 500, I got a 750 and sliced it and mirrored the other pieces.
The 60 and the 120 are internal PATA drives, the 500 and 750 are Maxtor OneTouch USB drives.
The original system I created the 60+120+500 pool on was Solaris 10 update 3, patched to use ZFS sometime last fall (November I believe). In
2007 Oct 14
1
odd behavior from zpool replace.
I've got a little zpool with a naughty raidz vdev that won't take a
replacement that, as far as I can tell, should be adequate.
A history: this could well be some bizarro edge case, as the pool doesn't
have the cleanest lineage. Initial creation happened on NexentaCP inside
VMware on Linux. I had given the virtual machine raw device access to 4
500GB drives and 1 ~200GB
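For context, a plain replacement into a raidz vdev looks like this (names are placeholders); the most common refusal is a size complaint, since the newcomer must be at least as large as the device it replaces:

zpool replace naughtypool c1t0d0 c2t0d0   # swap the old device for the new one
zpool status naughtypool                  # watch the resilver

A disk that is nominally the same size can still come up a few sectors short depending on labeling, which trips the same check.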
2008 Apr 11
0
How to replace root drive if ZFS data is on it?
Hi, Experts:
A customer has an X4500 with the boot drives mirrored (c5t0d0s0 and
c5t4d0s0) by SVM.
ZFS uses two other partitions on these two drives (c5t0d0s3 and
c5t4d0s3).
If we need to replace disk drive c5t0d0, do we need to do anything
on the ZFS side
(c5t0d0s3 and c5t4d0s3) first, or just follow the regular boot drive
replacement procedure?
Below is the summary of their current ZFS
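One plausible sequence, sketched with the slice names from the question (the pool name is a placeholder): offline the ZFS slice before pulling the drive, then resilver it afterwards.

zpool offline datapool c5t0d0s3   # quiesce the ZFS slice before the swap
# ...physically replace c5t0d0 and let SVM resync the s0 boot mirror...
zpool replace datapool c5t0d0s3   # resilver the ZFS slice onto the new disk
zpool status datapool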
2013 Jan 21
0
Corrupted raid-z pool
I've been fighting with this raid-z pool for almost a year now on FreeBSD.
Earlier this week I managed to mount it with one corrupted/offline
disk. I tried to wipe the disk, and ended up settling for using gpart
to make a new partition on it.
It resilvered partially, but when I attempted to clear the errors (which
had been present since I last tried getting the pool to work) it
restarted
2006 Nov 01
0
RAID-Z1 pool became faulted when a disk was removed.
So I have attached to my system two 7-disk SCSI arrays, each of 18.2 GB
disks.
Each of them is a RAID-Z1 zpool.
I had a disk I thought was a dud, so I pulled the fifth disk in my array and
put the dud in. Sure enough, Solaris started spitting errors like there was
no tomorrow in dmesg, and wouldn't use the disk. Ah well. Remove it, put the
original back in - hey, Solaris still thinks
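With the original disk back in place, clearing the fault state is usually the next step; a sketch (the pool name is a placeholder):

zpool clear tank       # reset error counters now that the device is back
zpool status -v tank   # confirm the vdev rejoined and resilvered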
2011 Nov 09
3
Data distribution not even between vdevs
Hi list,
My ZFS write performance is poor and I need your help.
I created a zpool with 2 raidz1 vdevs. When the space was about to run out, I added 2
more raidz1 vdevs to extend the zpool.
After some days the zpool was almost full, so I removed some old data.
But now, as shown below, the first 2 raidz1 vdevs are about 78% used and the
last 2 raidz1 vdevs are about 93% used.
I have this line in /etc/system:
set
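The per-vdev imbalance can be confirmed directly; ZFS spreads new writes in proportion to each vdev's free space, so the fuller vdevs receive less new data but the skew never fully self-corrects. A sketch (the pool name is a placeholder):

zpool iostat -v tank 5   # per-vdev allocated/free space and I/O, sampled every 5 seconds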
2007 Oct 30
1
Different Sized Disks Recommendation
Hi,
I was first attracted to ZFS (and therefore OpenSolaris) because I thought that ZFS allowed the use of different sized disks in raidz pools without wasted disk space. Further research has confirmed that this isn't possible by default.
I have seen a little bit of documentation around using ZFS with slices. I think this might be the answer, but I would like to be sure what the
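The slice approach amounts to cutting every disk down to a common size so no raidz member wastes space; a sketch with placeholder names, assuming equal-size slices have already been laid out with format(1M):

zpool create tank raidz c0t0d0s0 c0t1d0s0 c0t2d0s0   # one equal-size slice per disk

The leftover space on the larger disks can then go into a separate slice and a separate pool.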