Displaying 20 results from an estimated 10000 matches similar to: "Comments on a ZFS multiple use of a pool, RFE."
2006 Jan 19 (6 messages): How to remove disk
What is the procedure to remove a disk from a ZFS pool and remove the
EFI label?
klm
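A minimal sketch of the usual answer, assuming a mirror member c1t2d0 in a pool named tank (both names hypothetical). zpool remove at the time only handled hot spares and cache devices, so a mirror member is detached instead, and the EFI label is then rewritten from format's expert mode:

  # detach the disk (zpool remove historically covered only spares/cache devices)
  zpool detach tank c1t2d0
  # relabel: format -e offers the label menu, where an SMI label can replace EFI
  format -e c1t2d0
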
2007 Apr 26 (7 messages): device name changing
Hi.
If I create a zpool with the following command:
zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7
and after a reboot the device names for some reason are changed so da2
and da5 are swapped, either by altering the LUN setting on the storage
or by switching cables/swapping disks etc.?
How will zfs handle that? Will it simply acknowledge that all devices
are present and the pool is ...
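ZFS finds pool members by the GUID written in each vdev's on-disk label, not by device name, so swapped names are handled transparently; a hedged illustration, using the pool name from the post:

  # after the cabling/LUN change, a re-scan matches disks by label GUID
  zpool export tank
  zpool import tank
  zpool status tank    # device names may differ; the vdevs and data do not
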
2007 Sep 25 (23 messages): device alias
Hi. I'd like to request a feature be added to zfs. Currently, on
SAN attached disk, zpool shows up with a big WWN for the disk. If
ZFS (or the zpool command, in particular) had a text field for
arbitrary information, it would be possible to add something that
would indicate what LUN on what array the disk in question might be.
This would make troubleshooting and general ...
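For what it is worth, later ZFS releases did grow a pool-level free-text property; a sketch assuming a pool named tank (the comment property did not exist in 2007-era bits):

  # pool-level comment, added in later releases
  zpool set comment="array 7, LUNs 12-19" tank
  zpool get comment tank
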
2006 Apr 06 (15 messages): A few Newbie questions about RAIDZ
1. I have a 4x18GB drive setup as RAIDZ. Now when thinking about it
in terms of RAID5 I would expect to get (4-1)x18 worth of drive
space, but df -h shows 4x18. Is this a bug or do I not understand?
2. Once again thinking in RAID5 terms if I have 4X18GB and 12X9GB
drives and I want to make a RAIDZ of all of them I would expect the
18GB to be treated as 9GB so the RAIDZ would be 16x9GB. Is ...
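A worked version of the arithmetic, under the usual raidz1 rules (one disk's worth of parity; each member truncated to the smallest member):

  raidz1 usable   ~ (n - 1) x size of smallest member
  4 x 18GB        -> (4 - 1) x 18GB = 54GB usable
  16 drives @ 9GB -> (16 - 1) x 9GB = 135GB usable

The 4x18 figure is most likely raw capacity as reported by zpool list, which counts parity; zfs list reports the usable side.
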
2008 May 18 (2 messages): possible zfs bug? lost all pools
after trying to mount my zfs pools in single user mode I got the following
message for each:
May 18 09:09:36 gw kernel: ZFS: WARNING: pool 'cache1' could not be loaded as
it was last accessed by another system (host: gw.bb1.matik.com.br hostid:
0xbefb4a0f). See: http://www.sun.com/msg/ZFS-8000-EY
any zpool command returned nothing except that the pools did not exist; it seems
the zfs info on the disks ...
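The ZFS-8000-EY article describes exactly this case: the pool label carries the hostid of the last importer. If no other host really has the pool, the standard answer is a forced import:

  # only safe when the pool is certain not to be active on another system
  zpool import -f cache1
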
2008 Dec 18 (3 messages): automatic forced zpool import with unmatched hostid
Hi,
since the hostid is stored in the label, "zpool import" fails if the hostid doesn't match. Under certain circumstances (LDOM failover) this means you have to manually force the zpool import while booting. With more than 80 LDOMs on a single host it would be great if we could configure the machine back to the old behavior, where it didn't fail, maybe with an /etc/system ...
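As far as I know there was no supported /etc/system switch for this at the time; one hedged workaround is a boot-time script that force-imports a known pool list (pool names hypothetical, and only safe when no other host can hold the pools):

  #!/bin/sh
  # force-import pools whose label hostid no longer matches this LDOM
  for pool in ldg1data ldg2data; do
      zpool list "$pool" >/dev/null 2>&1 || zpool import -f "$pool"
  done
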
2008 Jan 04 (3 messages): Can't access my data
Hi Folks..
I have/had a zpool containing one filesystem.
I had to change my hostid and needed to import my pool (I've done this
OK in the past).
After the import the mount of my filesystem failed.
# zpool import homespool
cannot mount 'homespool/homes': mountpoint or dataset is busy
The data seems it might still exist, (correct amount of used space is
reported), ...
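"mountpoint or dataset is busy" usually means something is holding the mountpoint directory; a hedged checklist, assuming the mountpoint is /homes:

  zfs get mounted,mountpoint homespool/homes
  fuser -c /homes              # who is holding the directory?
  # once the directory is free (and empty), retry the mount
  zfs mount homespool/homes
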
2007 Oct 08 (6 messages): zfs boot issue, changing device id
Hi,
Given two disks c1t0d0 (DISK A) and c1t1d0 (DISK B)...
1/ Standard install on DISK A.
2/ zfs boot install on DISK B.
3/ I change the boot order and my zfs boot works fine.
4/ I install grub on the mbr of DISK B
5/ I disconnect and replace DISK A with DISK B
6/ Reboot, get the grub menu select Solaris ZFS and it panics that it
cannot mount root path @ device XXX...
This is not a ZFS ...
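The panic fits the device path recorded for the root pool going stale when the disk moved slots. A sketch of the usual recovery from failsafe media (the boot environment name below is hypothetical):

  # import under an altroot so the labels pick up the new device path
  zpool import -f -R /a rpool
  zpool set bootfs=rpool/ROOT/zfsboot rpool   # hypothetical BE name
  bootadm update-archive -R /a
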
2006 Mar 30 (39 messages): Proposal: ZFS Hot Spare support
As mentioned last night, we've been reviewing a proposal for hot spare
support in ZFS. Below you can find a current draft of the proposed
interfaces. This has not yet been submitted for ARC review, but
comments are welcome. Note that this does not include any enhanced FMA
diagnosis to determine when a device is "faulted". This will come in a
follow-on project, of which some ...
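The interface that eventually shipped looks roughly like the following (device names hypothetical):

  # attach a shared hot spare, then swap it in for a failing member
  zpool add tank spare c3t0d0
  zpool replace tank c1t0d0 c3t0d0
  zpool status tank      # the spare reads INUSE until the resilver completes
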
2007 Jan 10 (2 messages): using veritas dmp with ZFS (but not vxvm)
We have some HDS storage that isn't supported by mpxio, so we have to use veritas dmp to get multipathing.
What's the recommended way to use DMP storage with ZFS? I want to use DMP but get at the multipathed virtual luns at as low a level as possible, to avoid using vxvm as much as possible.
I figure there's no point in having overhead from 2 volume managers if we can avoid it.
Has anyone ...
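A hedged sketch: DMP publishes its multipathed metanodes under /dev/vx/dmp, so a pool can be built on those nodes directly, with no vxvm disk group in the picture (the enclosure-style names below are made up):

  # pool directly on DMP metanodes; vxvm volumes never enter the stack
  zpool create tank /dev/vx/dmp/hds9500-0_0 /dev/vx/dmp/hds9500-0_1
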
2007 Jul 02 (3 messages): ZFS and VXVM/VXFS
We are looking at alternatives to VXVM/VXFS. One of the features we liked in Veritas, apart from the obvious ones, is the ability to call the disks by name and group them into a disk group.
Especially in a SAN-based environment where the disks may be shared by multiple machines, it is much easier to manage them by disk group name rather than by cxtxdx numbers.
Does zfs offer such ...
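In ZFS the pool itself plays the disk-group role, and export/import moves the whole named group between hosts, much like vxdg deport/import; a sketch with hypothetical names:

  zpool create oradg c2t0d0 c2t1d0   # 'oradg' is the group-like name
  zfs create oradg/data
  zpool export oradg                 # deport from this host
  zpool import oradg                 # import on another host sharing the SAN
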
2007 Jan 10 (4 messages): [osol-discuss] Re: bare metal ZFS? How To?
this is off list on purpose?
> run zpool import, it will search all attached storage and give you a list
> of available pools. then run zpool import poolname, or add a -f if you
> didn't export before the install/upgrade.
assume worst case
someone walks up to you and drops an array on you.
They say "its ZFS an'' I need that der stuff ''k? " all
2009 Aug 12 (4 messages): zpool import -f rpool hangs
I had an rpool with two sata disks in a mirror. Solaris 10 5.10
Generic_141415-08 i86pc i386 i86pc
Unfortunately the first disk with grub loader has failed with unrecoverable
block write/read errors.
Now I have the problem to import rpool after the first disk has failed.
So I decided to do "zpool import -f rpool" with only the second disk, but it
hangs and the system is ...
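When an import hangs on a half-dead mirror, a couple of hedged things to try before anything destructive; note that read-only import only exists on newer bits than this 2009 system:

  zpool import                      # list what is importable, without importing
  zpool import -f -R /mnt rpool     # altroot import, leaves / alone
  # on later releases only: read-only import skips log replay entirely
  zpool import -o readonly=on -f rpool
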
2006 Jul 26 (9 messages): zfs questions from Sun customer
Please reply to david.curtis at sun.com
******** Background / configuration **************
zpool will not create a storage pool on fibre channel storage. I'm
attached to an IBM SVC using the IBMsdd driver. I have no problem using
SVM metadevices and UFS on these devices.
List steps to reproduce the problem(if applicable):
Build Solaris 10 Update 2 server
Attach to an external ...
2007 Sep 13 (11 messages): How do I get my pool back?
After having to replace an internal raid card in an X2200 (S10U3 in
this case), I can see the disks just fine, and can boot, so the data
isn't completely missing.
However, my zpool has gone.
# zpool status -x
pool: storage
state: FAULTED
status: One or more devices could not be opened. There are insufficient
replicas for the pool to continue functioning.
action: Attach the ...
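Replacing the controller changes the device paths, and a FAULTED pool with "could not be opened" members usually clears once ZFS re-scans the labels; a hedged first step:

  # make ZFS re-discover members at their new ctd paths
  zpool export storage
  zpool import storage
  # if the faulted state blocks export, go straight to a forced import
  zpool import -f storage
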
2009 Oct 14 (14 messages): ZFS disk failure question
So, my Areca controller has been complaining via email of read errors for a couple days on SATA channel 8. The disk finally gave up last night at 17:40. I got to say I really appreciate the Areca controller taking such good care of me.
For some reason, I wasn't able to log into the server last night or in the morning, probably because my home dir was on the zpool with the failed disk ...
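The usual follow-through once the controller has confirmed the dead disk (pool and device names hypothetical):

  zpool status -x            # confirm which vdev faulted
  zpool replace tank c8t1d0  # new disk in the same slot; resilver starts
  zpool status tank          # watch resilver progress
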
2006 Oct 18 (5 messages): ZFS and IBM sdd (vpath)
Hello, I am trying to configure ZFS with IBM sdd. IBM sdd is like powerpath, MPXIO or VxDMP.
Here is the error message when I try to create my pool:
bash-3.00# zpool create tank /dev/dsk/vpath1a
warning: device in use checking failed: No such device
internal error: unexpected error 22 at line 446 of ../common/libzfs_pool.c
bash-3.00# zpool create tank /dev/dsk/vpath1c
cannot open ...
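Error 22 is EINVAL coming back from libzfs's device checks, which suggests the vpath node is not presenting the label or geometry the checker expects. A hedged diagnostic pass before retrying:

  ls -lL /dev/dsk/vpath1c /dev/rdsk/vpath1c   # confirm the nodes exist
  prtvtoc /dev/rdsk/vpath1c                   # is there a usable label?
  format -e                                   # if not, label the vpath device
  zpool create tank /dev/dsk/vpath1c
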
2008 May 20 (4 messages): Ways to speed up 'zpool import'?
We're planning to build a ZFS-based Solaris NFS fileserver environment
with the backend storage being iSCSI-based, in part because of the
possibilities for failover. In exploring things in our test environment,
I have noticed that 'zpool import' takes a fairly long time: about
35 to 45 seconds per pool. A pool import time this slow obviously
has implications for how fast ...
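Most of that time goes to scanning and tasting every node under /dev/dsk; two hedged ways to shrink it (the directory name is hypothetical, and the cachefile pool property only exists on newer releases):

  # restrict the device scan to a directory holding just the iSCSI links
  zpool import -d /dev/iscsi tank
  # newer bits: a private cachefile lets import skip device scanning
  zpool create -o cachefile=/etc/zfs/failover.cache tank <devices>
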
2010 Oct 19 (4 messages): rename zpool
Hi,
I have two questions:
1) Is there any way of renaming a zpool without export/import?
2) If I take a hardware snapshot of the devices under a zpool (where the snapshot device will be an exact copy including metadata, i.e. the zpool and associated file systems), is there any way to rename the zpool name of the snapshotted devices without losing the data?
Thanks & Regards,
sridhar.
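For question 1 the answer is no: the rename is the export/import round trip itself, since import accepts an old and a new name:

  zpool export olddata
  zpool import olddata newdata   # pool comes back under the new name
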
2011 Jan 29 (19 messages): multiple disk failure
Hi,
I am using FreeBSD 8.2 and went to add 4 new disks today to expand my
offsite storage. All was working fine for about 20min and then the new
drive cage started to fail. Silly me for assuming new hardware would be
fine :(
The new drive cage started to fail, it hung the server and the box
rebooted. After it rebooted, the entire pool is gone and in the state
below. I had only written a few ...
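If the labels on the original disks survived the cage failure, a hedged recovery path; note that -F (transaction rewind) only exists on newer pool code than stock FreeBSD 8.2:

  zpool import            # see what the kernel can still assemble
  zpool import -f tank    # force it if the last-accessed check objects
  # newer code only: -F rolls back the last few transactions when the
  # most recent ones are the damaged part
  zpool import -fF tank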