Displaying 20 results from an estimated 6000 matches similar to: "link in zpool upgrade -v broken"
2009 Jun 29
5
zpool import issue
I'm having the following issue: I import the zpool and it shows the pool imported correctly, but after a few seconds, when I issue zpool list, it does not show any pool, and when I try to import it again it says a device is missing from the pool. What could be the reason for this? And yes, this all started after I upgraded PowerPath.
abcxxxx # zpool import
pool: emcpool1
id:
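A PowerPath upgrade typically moves the pool's LUNs to new pseudo-device paths, so one hedged first step is to point zpool import at the directory that holds the current device nodes (the pool name emcpool1 is from the snippet; the directory is an assumption and may differ per setup):
# scan an explicit device directory for importable pools, then import by name
zpool import -d /dev/dsk
zpool import -d /dev/dsk emcpool1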
2010 Aug 13
15
NFS issue with ZFS
I have Solaris 10 U7 that is exporting a ZFS filesystem.
The client is Solaris 9 U7.
I can mount the filesystem just fine, but I am unable to write to it.
showmount -e shows my mount is set for everyone.
The dfstab file has the rw option set.
So what gives?
Phillip
--
This message posted from opensolaris.org
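If the dataset is shared through dfstab while ZFS has its own sharenfs setting, the two mechanisms can disagree; a minimal sketch of sharing it natively through ZFS instead (the dataset name tank/export is hypothetical):
# share the dataset read-write via the ZFS property rather than dfstab
zfs set sharenfs=rw tank/export
# confirm the share options actually in effect on the server
share
Plain Unix permissions on the exported directory still apply: an unprivileged (or root-squashed) client user needs mode and ownership that allow the write.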
2010 Mar 19
3
zpool I/O error
Hi all,
I'm trying to delete a zpool and when I do, I get this error:
# zpool destroy oradata_fs1
cannot open 'oradata_fs1': I/O error
#
The pools I have on this box look like this:
#zpool list
NAME         SIZE   USED   AVAIL  CAP  HEALTH    ALTROOT
oradata_fs1  532G   119K   532G   0%   DEGRADED  -
rpool        136G   28.6G  107G   21%  ONLINE    -
#
Why
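For a pool that is DEGRADED and cannot even be opened, a hedged sequence is to see which vdev is failing and then force the destroy (whether -f succeeds still depends on the remaining devices being readable):
# identify the failing device(s) in the pool
zpool status -v oradata_fs1
# force-destroy the pool even if its datasets cannot be cleanly unmounted
zpool destroy -f oradata_fs1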
2010 Mar 04
8
Huge difference in reporting disk usage via du and zfs list. Fragmentation?
Do we have enormous fragmentation here on our X4500 with Solaris 10, ZFS Version 10?
What, other than zfs send/receive, can be done to free the fragmented space?
One ZFS filesystem was used for some months to store large disk images (each about 50 GByte), which are copied there with rsync. This filesystem now reports 6.39 TByte used with zfs list but only 2 TByte with du.
The other ZFS was used for similar
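Before assuming fragmentation, it is worth checking how much of the gap is simply space held by snapshots: rsync rewrites the large image files, and the old blocks stay referenced by earlier snapshots even after du no longer counts them. A minimal sketch (the dataset name pool/images is hypothetical, and zfs list -o space needs a reasonably recent zfs version):
# per-dataset breakdown of space used by snapshots vs. live data
zfs list -o space pool/images
# list the snapshots themselves and how much each one holds
zfs list -t snapshot -r pool/images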
2009 Oct 01
4
RAIDZ v. RAIDZ1
So, I took four 1.5TB drives and made RAIDZ, RAIDZ1 and RAIDZ2 pools. The sizes for the pools were 5.3TB, 4.0TB, and 2.67TB respectively. The man page for RAIDZ states that "The raidz vdev type is an alias for raidz1." So why was there a difference between the sizes for RAIDZ and RAIDZ1? Shouldn't the size be the same for "zpool create raidz ..." and "zpool
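Since raidz is just an alias for raidz1, identically built pools should report the same size; a likely source of a gap like 5.3TB vs 4.0TB is comparing zpool list (which reports raw capacity including the parity devices) against zfs list (which reports usable capacity after parity). A hedged way to see both numbers for the same pool (the pool name tank is hypothetical):
# raw pool capacity, parity space included
zpool list tank
# usable capacity as seen at the filesystem layer
zfs list tank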
2009 Aug 04
7
Sol10u7: can't "zpool remove" missing hot spare
I'm using Solaris 10u6 updated to u7 via patches, and I have a pool
with a mirrored pair and a (shared) hot spare. We reconfigured disks
a while ago and now the controller is c4 instead of c2. The hot spare
was originally on c2, and apparently on rebooting it didn't get found.
So, I looked up what the new name for the hot spare was, then added
it to the pool with "zpool
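When a spare's device node has disappeared, zpool status usually lists it by a long numeric GUID instead of a cXtYdZ name, and zpool remove should accept that GUID. A sketch, assuming the pool is called tank (the GUID below is a placeholder, not a real value):
# find the numeric GUID that zpool status prints for the missing spare
zpool status tank
# remove the spare by GUID, since its old c2... name no longer resolves
zpool remove tank 1234567890123456789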
2010 Jul 01
3
zpool on raw disk. Do I need to format?
Folks,
I am learning more about zfs storage. It appears a zfs pool can be created on a raw disk; there is no need to create any partitions, etc., on the disk. Does this mean there is no need to run "format" on a raw disk?
I have added a new disk to my system. It shows up as /dev/rdsk/c8t1d0s0. Do I need to format it before I convert it to zfs storage? Or, can I simply use it as:
# zfs
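No separate format step should be needed if ZFS is given the whole disk: zpool create on the bare cXtYdZ name (no slice suffix) writes an EFI label itself. A minimal sketch using the disk from the snippet (the pool name datapool is hypothetical):
# hand ZFS the whole disk, not the s0 slice; ZFS labels it itself
zpool create datapool c8t1d0
Using c8t1d0s0 instead would put the pool on one slice of whatever label already exists, which is mainly needed for special cases such as boot disks.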
2009 Jun 03
7
"no pool_props" for OpenSolaris 2009.06 with old SPARC hardware
Hi,
yesterday evening I tried to upgrade my Ultra 60 to 2009.06 from SXCE snv_98.
I can't use AI Installer because OpenPROM is version 3.27.
So I built IPS from source, then created a zpool on a spare drive and installed OpenSolaris 2009.06 on it.
To make the disk bootable I used:
installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t1d0s0
using the executable from my new
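For ZFS root on SPARC, installboot is only part of the setup; the pool normally also needs its bootfs property pointed at the root dataset before the OBP boot will find it. A hedged sketch with hypothetical pool and dataset names (the snippet does not show what the new pool is called):
# tell the boot code which dataset inside the pool holds the root filesystem
zpool set bootfs=rpool2/ROOT/opensolaris rpool2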
2010 Sep 07
3
zpool create using whole disk - do I add "p0"? E.g. c4t2d0 or c4t2d0p0
I have seen conflicting examples on how to create zpools using full disks. The zpool(1M) page uses "c0t0d0" but OpenSolaris Bible and others show "c0t0d0p0". E.g.:
zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0
zpool create tank raidz c0t0d0p0 c0t1d0p0 c0t2d0p0 c0t3d0p0 c0t4d0p0 c0t5d0p0
I have not been able to find any discussion on whether (or when) to
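For what it's worth, the suffix-less form is the whole-disk form documented in zpool(1M): ZFS takes over the disk and writes its own EFI label. cXtYdZp0 addresses the raw physical disk through the fdisk layer and is not the form the man page examples use. A sketch (pool name tank as in the quoted examples):
# create with whole-disk names, as in the zpool(1M) examples
zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0
# whole-disk vdevs then appear without any slice or pN suffix
zpool status tank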
2008 Dec 17
11
zpool detach on non-mirrored drive
I'm using zfs not to have a fail-safe, backed-up system, but to easily manage my file system. I would like to be able, as I buy new hard drives, to simply replace the old ones. I'm very environmentally conscious, so I don't want to leave old drives in there consuming power when they've already been replaced by larger ones. However, ZFS
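zpool detach only applies to one side of a mirror; for a non-redundant vdev, the usual route to retiring an old disk is zpool replace, which resilvers onto the new drive and then drops the old one from the config. A minimal sketch with hypothetical device names:
# copy the data from the old drive onto the new one, then release the old drive
zpool replace tank c1t2d0 c1t5d0
# watch the resilver; the old disk can be pulled once it completes
zpool status tank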
2011 Jul 26
2
recover zpool with a new installation
Hi all,
I lost my storage because rpool doesn't boot. I tried to recover, but
opensolaris says to "destroy and re-create".
My rpool is installed on a flash drive, and my pool (with my data) is on
other disks.
My question is: is it possible to reinstall opensolaris on a new flash drive,
without touching my pool of disks, and then recover that pool?
Thanks.
Regards,
--
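Reinstalling onto a fresh flash drive does not touch the data pool on the other disks; after the reinstall the pool can simply be imported again. A sketch, assuming the data pool is named datapool (the snippet never names it):
# list pools visible on the attached disks
zpool import
# import it; -f is needed because it was last used by the old installation
zpool import -f datapool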
2011 Jun 30
1
cross platform (freebsd) zfs pool replication
Hi,
I have two servers running: freebsd with a zpool v28 and a nexenta (opensolaris b134) box running zpool v26.
Replication (with zfs send/receive) from the nexenta box to the freebsd box works fine, but I have a problem accessing my replicated volume. When I type the command cd /remotepool/us (for /remotepool/users) and autocomplete with the Tab key, I get a panic.
check the panic @
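For reference, a sketch of the kind of send/receive pipeline described in the snippet (hostnames and dataset names are hypothetical); sending from the v26 pool into the v28 pool is the workable direction, since the receiving side generally needs to be at least as new as the stream it receives:
# snapshot the source dataset, then stream it over ssh into the FreeBSD pool
zfs snapshot tank/users@rep1
zfs send tank/users@rep1 | ssh freebsd-host zfs receive -F remotepool/users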
2010 Oct 19
4
rename zpool
Hi,
I have two questions:
1) Is there any way of renaming a zpool without export/import?
2) If I take a hardware snapshot of the devices under a zpool (where the snapshot devices are exact copies including metadata, i.e. the zpool and its associated file systems), is there any way to rename the zpool on the snapshotted devices without losing the data?
Thanks & Regards,
sridhar.
--
This message posted
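As far as I know the answer to 1) is no: export/import is the rename mechanism, since the new name is given at import time. For 2), when the hardware snapshot produces a second set of devices carrying a pool with the same name, the copy can be told apart by its numeric pool id and imported under a new name. A sketch with hypothetical names and a placeholder id:
# rename by re-importing under a new name (requires the export/import cycle)
zpool export olddata
zpool import olddata newdata
# for the snapshotted copy: list candidates, then import the duplicate by id
zpool import
zpool import 1234567890123456789 snapcopy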
2010 Apr 21
2
HELP! zpool corrupted data
Hello,
Due to a power outage our file server running FreeBSD 8.0p2 will no longer come up due to zpool corruption. I get the following output when trying to import the ZFS pool using either a FreeBSD 8.0p2 cd or the latest OpenSolaris snv_143 cd:
FreeBSD mfsbsd 8.0-RELEASE-p2.vx.sk:/usr/obj/usr/src/sys/GENERIC amd64
mfsbsd# zpool import
pool: tank
id: 1998957762692994918
state: FAULTED
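On recent OpenSolaris builds such as the snv_143 live CD, zpool import has a recovery mode that tries to roll the pool back to the last consistent transaction group; a hedged sketch (the -n dry run reports whether a rewind would work before committing to it):
# dry run: check whether discarding the last few transactions makes it importable
zpool import -nfF tank
# attempt the actual rewind and import
zpool import -fF tank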
2007 Jun 21
2
Bug in "zpool history"
Hi,
I was playing around with NexentaCP and its zfs boot facility. I tried
to figure out what commands to run, so I ran zpool history like
this:
# zpool history
2007-06-20.10:19:46 zfs snapshot syspool/rootfs@mysnapshot
2007-06-20.10:20:03 zfs clone syspool/rootfs@mysnapshot syspool/myrootfs
2007-06-20.10:23:21 zfs set bootfs=syspool/myrootfs syspool
As you can see it says I did a
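For digging further into what the boot facility actually did, zpool history has flags that expose more than the default listing; a short sketch (syspool is the pool name from the snippet):
# -l adds the user, hostname and zone to each record; -i also shows
# internally logged events, not just user-issued commands
zpool history -il syspool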
2009 Mar 27
7
is zpool export/import | faster than rsync or cp
I need to move data from one zpool to another, lock, stock and
barrel.
Being from a linux background my instinct was to use rsync. But then I
remembered seeing the export/import options in man zpool. And I've
seen mention of them here too, but didn't pay attention since I'd
noticed no need yet.
Now I'm wondering if the export/import sub commands might not be
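It may help to note that export/import does not copy anything: it just detaches a pool from one host and attaches it to another, so it cannot move data between two pools. The pool-to-pool equivalent of rsync is a recursive snapshot plus send/receive, sketched here with hypothetical pool names (zfs send -R needs a zfs version that supports replication streams):
# take a recursive snapshot of everything in the source pool
zfs snapshot -r oldpool@migrate
# replicate the whole hierarchy, properties included, into the new pool
zfs send -R oldpool@migrate | zfs receive -Fd newpool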
2007 Oct 29
9
zpool question
hello folks, I am running Solaris 10 U3 and I have a small problem that I don't
know how to fix...
I had a pool of two drives:
bash-3.00# zpool status
pool: mypool
state: ONLINE
scrub: none requested
config:
NAME          STATE   READ WRITE CKSUM
mypool        ONLINE     0     0     0
  emcpower0a  ONLINE     0     0     0
  emcpower1a  ONLINE
2009 Dec 04
2
USB sticks show on one set of devices in zpool, different devices in format
Hello,
I had snv_111b running for a while on an HP DL160 G5, with two 16GB USB sticks comprising the mirrored rpool for boot, and four 1TB drives comprising another pool, pool1, for data.
That had been working just fine for a few months. Yesterday I got it into my head to upgrade the OS to the latest, which at that point was snv_127. That worked, and all was well. I also did an upgrade to the
2010 Apr 16
1
cannot set property for 'rpool': property 'bootfs' not supported on EFI labeled devices
I am getting the following error; however, as you can see below, this is an SMI
label...
cannot set property for 'rpool': property 'bootfs' not supported on EFI
labeled devices
# zpool get bootfs rpool
NAME   PROPERTY  VALUE  SOURCE
rpool  bootfs    -      default
# zpool set bootfs=rpool/ROOT/s10s_u8wos_08a rpool
cannot set property for 'rpool': property
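This particular error keys off how the vdev was given to the pool rather than off the label that format reports: if rpool was created on the whole disk (a bare cXtYdZ name), ZFS treats it as a whole-disk, EFI-labeled vdev and refuses bootfs, even though a boot pool has to sit on an SMI-labeled s0 slice. A hedged first check:
# a boot pool vdev should appear with an s0 slice suffix; a bare cXtYdZ
# name means ZFS owns the whole disk and considers it EFI-labeled
zpool status rpool
If the vdev does show without the slice, the usual remedy is to relabel the disk SMI (format -e) and rebuild or reattach the pool on the s0 slice.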
2010 Aug 06
3
Reconfigure zpool
I have a zpool like this:
pool: tank
state: ONLINE
scrub: none requested
config:
NAME        STATE   READ WRITE CKSUM
tank        ONLINE     0     0     0
  raidz3-0  ONLINE     0     0     0
    c6t0d0  ONLINE     0     0     0
    c6t1d0  ONLINE     0     0     0
    c6t2d0  ONLINE     0     0     0
    c6t3d0  ONLINE     0     0     0
    c6t4d0  ONLINE