Displaying 20 results from an estimated 6000 matches similar to: "restore lost pool after vtoc re-label"
2007 Feb 03
4
Which label a ZFS/ZPOOL device has ? VTOC or EFI ?
Hi All,
The zpool/zfs commands write an EFI label on a device when a ZPOOL/ZFS file system is created on it. Is that true?
I formatted a device with a VTOC label and created a ZFS file system on it.
Which label does the device have now? Is it the old VTOC label or EFI?
After creating the ZFS file system on a VTOC labeled disk, I am seeing the following warning messages.
Feb 3 07:47:00 scoobyb
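For reference, the labeling behaviour the poster asks about can be sketched as follows (a hedged sketch, assuming a Solaris host; the pool name "tank" and device c1t0d0 are placeholders):

```shell
# Sketch, assuming a Solaris host; "tank" and c1t0d0 are placeholder names.
# When given a whole disk, zpool relabels it with an EFI label; when given a
# slice (cNtNdNsN), the existing VTOC (SMI) label is left in place.
zpool create tank c1t0d0      # whole disk: ZFS writes an EFI label
zpool create tank c1t0d0s0    # slice: the VTOC label is preserved

# prtvtoc prints the current label/partition table from the raw device:
prtvtoc /dev/rdsk/c1t0d0s2
```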
2006 Mar 30
39
Proposal: ZFS Hot Spare support
As mentioned last night, we've been reviewing a proposal for hot spare
support in ZFS. Below you can find a current draft of the proposed
interfaces. This has not yet been submitted for ARC review, but
comments are welcome. Note that this does not include any enhanced FMA
diagnosis to determine when a device is "faulted". This will come in a
follow-on project, of which some
2006 Mar 10
3
pool space reservation
What is a use case of setting a reservation on the base pool object?
Say I have a pool of 3 100GB drives dynamically striped (pool size of 300GB), and I set the reservation to 200GB. I don't see any commands that let me ever reduce a pool's size, so how is the 200GB reservation used?
Related question: is there a plan in the future to allow me to replace the 3 100GB drives with 2
2008 Jul 09
8
Using zfs boot with MPxIO on T2000
Here is what I have configured:
T2000 with OBP 4.28.6 2008/05/23 12:07 with 2 - 72 GB disks as the root disks
OpenSolaris Nevada Build 91
Solaris Express Community Edition snv_91 SPARC
Copyright 2008 Sun Microsystems, Inc. All Rights Reserved.
Use is subject to license terms.
Assembled 03 June 2008
2006 Oct 24
3
determining raidz pool configuration
Hi all,
Sorry for the newbie question, but I've looked at the docs and haven't
been able to find an answer for this.
I'm working with a system where the pool has already been configured and
want to determine what the configuration is. I had thought that'd be
with zpool status -v <poolname>, but it doesn't seem to agree with the
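For reference, the usual way to inspect an existing pool's layout is (a sketch; the pool name is a placeholder):

```shell
# Sketch: inspect an existing pool's vdev layout and capacity.
zpool status tank    # shows each vdev (raidz/mirror) and its member disks
zpool list tank      # shows total size, used and available space
```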
2007 May 05
3
Issue with adding existing EFI disks to a zpool
I spent all day yesterday moving my data off one of the Windows disks, so that I can add it to the pool. Using mount-ntfs, it's a pain due to its slowness. But once I finished, I thought "Cool, let's do it". So I added the disk using the zero slice notation (c0d0s0), as suggested for performance reasons. I checked the pool status and noticed however that the pool size
2011 Jan 28
2
ZFS root clone problem
(For some reason I cannot find my original thread, so I'm reposting it.)
I am trying to move my data off of a 40GB 3.5" drive to a 40GB 2.5" drive. This is in a Netra running Solaris 10.
Originally what I did was:
zpool attach -f rpool c0t0d0 c0t2d0
Then I did an installboot on c0t2d0s0.
Didn't work. I was not able to boot from my second drive (c0t2d0).
I cannot remember
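The documented Solaris 10 SPARC procedure for mirroring a ZFS root can be sketched as follows (device names are the poster's; note that a ZFS root pool must live on an SMI/VTOC-labeled slice, so the attach normally targets s0 rather than the whole disk):

```shell
# Sketch of the documented Solaris 10 SPARC root-mirror procedure.
# A root pool requires SMI (VTOC) labeled slices, so attach s0 to s0:
zpool attach -f rpool c0t0d0s0 c0t2d0s0

# Wait for the resilver to complete before testing boot:
zpool status rpool

# Install the ZFS boot block on the new mirror half (SPARC):
installboot -F zfs /usr/platform/$(uname -i)/lib/fs/zfs/bootblk /dev/rdsk/c0t2d0s0
```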
2007 Dec 03
2
Help replacing dual identity disk in ZFS raidz and SVM mirror
Hi,
We have a number of 4200s set up using a combination of an SVM 4-way mirror and a ZFS raidz stripe.
Each disk (of 4) is divided up like this
/ 6GB UFS s0
Swap 8GB s1
/var 6GB UFS s3
Metadb 50MB UFS s4
/data 48GB ZFS s5
For SVM we do a 4-way mirror on /, swap, and /var
So we have 3 SVM mirrors
d0=root (sub mirrors d10, d20, d30, d40)
d1=swap (sub mirrors d11, d21,d31,d41)
2009 Dec 23
14
Moving a pool from FreeBSD 8.0 to opensolaris
I was wondering what the best method of moving a pool from FreeBSD 8.0 to
OpenSolaris is.
When I originally built my system, it was using hardware which wouldn't work
in OpenSolaris, but I'm about to do an upgrade so I should be able to use
OpenSolaris when I'm done.
My current system uses a HighPoint RocketRAID 2340. It has 12 1TB hard
drives and an Intel Core 2 Quad
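The usual cross-OS path is a clean export on the old host followed by an import on the new one, which can be sketched as (pool name is a placeholder):

```shell
# Sketch: clean export on the source host, then import on the destination.
# On FreeBSD 8.0:
zpool export tank

# On OpenSolaris: scan attached disks for importable pools, then import.
zpool import          # lists importable pools found on attached devices
zpool import tank
# If the pool was never exported cleanly, "zpool import -f tank" forces it,
# at the cost of ignoring the hostid check. The pool's on-disk version must
# also be one the destination release supports.
```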
2008 Jan 18
7
how to relocate a disk
Hi,
I'd like to move a disk from one controller to another. This disk is
part of a mirror in a zfs pool. How can one do this without having to
export/import the pool or reboot the system?
I tried taking it offline and online again, but then zpool says the disk
is unavailable. Trying a zpool replace didn't work because it complains
that the "new" disk is part of a
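One commonly cited sequence for this can be sketched as follows (heavily hedged: the pool name "tank" and both device paths are placeholders, and the exact behaviour varies by release):

```shell
# Sketch of one commonly cited sequence; details vary by release.
zpool offline tank c1t0d0      # take the mirror half offline
# ...physically move the disk to the new controller, then rebuild device links:
devfsadm -Cv
zpool online tank c2t0d0       # hypothetical new path
# If the disk still shows as UNAVAIL, an export/import cycle is the reliable
# (if disruptive) way to make ZFS rescan device paths:
zpool export tank && zpool import tank
```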
2010 Sep 07
3
zpool create using whole disk - do I add "p0"? E.g. c4t2d0 or c4t2d0p0
I have seen conflicting examples on how to create zpools using full disks. The zpool(1M) page uses "c0t0d0" but OpenSolaris Bible and others show "c0t0d0p0". E.g.:
zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0
zpool create tank raidz c0t0d0p0 c0t1d0p0 c0t2d0p0 c0t3d0p0 c0t4d0p0 c0t5d0p0
I have not been able to find any discussion on whether (or when) to
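The distinction between the two forms can be sketched as follows (a sketch; pool and device names are placeholders):

```shell
# Sketch: the bare cNtNdN form hands ZFS the whole disk, letting it write an
# EFI label and manage the device itself (the form used in zpool(1M) examples):
zpool create tank raidz c0t0d0 c0t1d0 c0t2d0

# c0t0d0p0 is the x86 device node for the whole disk via the fdisk partition
# table; using pN (or sN) names bypasses ZFS's whole-disk handling, so the
# bare cNtNdN form is generally preferred for whole-disk pools.
```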
2006 Nov 07
6
Best Practices recommendation on x4200
Greetings all-
I have a new X4200 that I'm getting ready to deploy. It has four 146 GB SAS drives. I'd like to setup the box for maximum redundancy on the data stored on these drives. Unfortunately, it looks like ZFS boot/root aren't really options at this time. The LSI Logic controller in this box only supports either a RAID0 array with all four disks, or a RAID 1
2006 Jul 19
1
Q: T2000: raidctl vs. zpool status
Hi all,
IHACWHAC (I have a colleague who has a customer - hello, if you're
listening :-) who's trying to build and test a scenario where he can
salvage the data off the (internal?) disks of a T2000 in case the sysboard
and with it the on-board RAID controller dies.
If I understood correctly, he replaces the motherboard, does some magic to
get the RAID config back, but even
2009 Feb 12
2
Solaris and zfs versions
We've been experimenting with zfs on OpenSolaris 2008.11. We created a
pool in OpenSolaris and filled it with data. Then we wanted to move it
to a production Solaris 10 machine (Generic_137138-09), so I "zpool
exported" in OpenSolaris, moved the storage, and "zpool imported" in
Solaris 10. We got:
Cannot import 'deadpool': pool is formatted
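This failure mode can be sketched as a pool-version mismatch check (a sketch; the pool name is the poster's, and the version number below is illustrative):

```shell
# Sketch: a pool created on a newer release may use an on-disk version the
# older host does not support, so "zpool import" fails with a message like
# "pool is formatted using a newer ZFS version".
zpool upgrade -v              # on each host: list pool versions this release supports
zpool get version deadpool    # on the source host: show the pool's current version

# On releases that support the version property, creating the pool at the
# lowest common version avoids the mismatch, e.g.:
zpool create -o version=10 deadpool c0t0d0   # version number is illustrative
```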
2006 Dec 12
23
ZFS Storage Pool advice
This question is concerning ZFS. We have a Sun Fire V890 attached to an EMC disk array. Here's our plan to incorporate ZFS:
On our EMC storage array we will create 3 LUNs. Now how would ZFS be used for the best performance?
What I'm trying to ask is if you have 3 LUNs and you want to create a ZFS storage pool, would it be better to have a storage pool per LUN or combine the 3
2010 Mar 23
4
Moving drives around...
Kind of a newbie question here -- or I haven't been able to find great
search terms for this...
Does ZFS recognize zpool members based on drive serial number or some
other unique, drive-associated ID? Or is it based off the drive's
location (c0t0d0, etc).
I'm wondering because I have a zpool set up across a bunch of drives
and I am planning to move those drives to
2008 Jun 17
6
mirroring zfs slice
Hi All,
I had a slice with a zfs file system which I want to mirror. I
followed the procedure mentioned in the admin guide, but I am getting this
error. Can you tell me what I did wrong?
root # zpool list
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
export 254G 230K 254G 0% ONLINE -
root # echo |format
Searching for disks...done
2006 May 16
3
ZFS snv_b39 and S10U2
Hello zfs-discuss,
Just to be sure - if I create ZFS filesystems on snv_39 and then
later I would want just to import that pool on S10U2 - can I safely
assume it will just work? (I mean, nothing new to the on-disk format was
added or changed in the last few snv releases which is not going to be
in U2.)
I want to put right now (I have to do it now) some data on ZFS and
later I want to
2008 Jul 17
2
zfs sparc boot "Bad magic number in disk label"
Hello,
I recently installed SunOS 5.11 snv_91 onto an Ultra 60 UPA/PCI with OpenBoot 3.31 and two 300GB SCSI disks. The root file system is UFS on c0t0d0s0. Following the steps in the ZFS Admin guide I have attempted to convert root to ZFS utilizing c0t1d0s0. However, upon "init 6" I am always presented with:
Bad magic number in disk label
can't open disk label package
My Steps:
1)
2010 Feb 12
13
SSD and ZFS
Hi all,
just after sending a message to sunmanagers I realized that my question
should rather have gone here, so sunmanagers please excuse the double
post:
I have inherited a X4140 (8 SAS slots) and have just setup the system
with Solaris 10 09. I first setup the system on a mirrored pool over
the first two disks
pool: rpool
state: ONLINE
scrub: none requested
config:
NAME