Displaying 20 results from an estimated 1100 matches similar to: "Space usage"
2011 Apr 01
15
Zpool resize
Hi,
A LUN is connected to Solaris 10u9 from a NetApp FAS2020a over iSCSI. I'm
changing the LUN size on the NetApp, and Solaris format sees the new value, but the zpool
still has the old value.
I tried zpool export and zpool import, but it didn't resolve my problem.
bash-3.00# format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c0d1 <DEFAULT cyl 6523 alt 2 hd 255 sec 63>
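On bits that support them, the expansion knobs are the autoexpand pool property and an explicit expand request; a sketch, where the pool name "tank" is hypothetical and c0d1 is the device from the format listing above:
# zpool set autoexpand=on tank
# zpool online -e tank c0d1
Export/import alone does not re-read the LUN size here, and the disk label may also need to be regrown in format first.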
2009 Jun 03
7
"no pool_props" for OpenSolaris 2009.06 with old SPARC hardware
Hi,
yesterday evening I tried to upgrade my Ultra 60 to 2009.06 from SXCE snv_98.
I can't use the AI Installer because OpenPROM is version 3.27.
So I built IPS from source, then created a zpool on a spare drive and installed OS 2009.06 on it
To make the disk bootable I used:
installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t1d0s0
using the executable from my new
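For SPARC ZFS boot, the pool's bootfs property also has to name the root dataset; a sketch, with a hypothetical boot environment name:
# zpool set bootfs=rpool/ROOT/opensolaris-2009.06 rpool
After that the drive should be bootable from the OBP prompt, e.g. boot disk1.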
2009 Feb 22
11
Confused about zfs recv -d, apparently
First, it fails because the destination directory doesn't exist. Then it
fails because it DOES exist. I really expected one of those to work. So,
what am I confused about now? (Running 2008.11)
# zpool import -R /backups/bup-ruin bup-ruin
# zfs send -R "zp1@bup-20090222-054457UTC" | zfs receive -dv "bup-ruin/fsfs/zp1"
cannot receive: specified fs (bup-ruin/fsfs/zp1)
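As the receive -d rules read, the destination must be an existing filesystem, and the sent snapshot's path minus its pool name is appended below it, so the pool root zp1 maps onto the destination itself; a sketch, hedged on the 2008.11 semantics:
# zfs create bup-ruin/fsfs
# zfs send -R "zp1@bup-20090222-054457UTC" | zfs receive -dvF bup-ruin/fsfs
Descendants such as zp1/home would then land at bup-ruin/fsfs/home, not bup-ruin/fsfs/zp1/home.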
2010 Mar 19
3
zpool I/O error
Hi all,
I'm trying to delete a zpool and when I do, I get this error:
# zpool destroy oradata_fs1
cannot open 'oradata_fs1': I/O error
#
The pools I have on this box look like this:
# zpool list
NAME         SIZE   USED  AVAIL  CAP  HEALTH    ALTROOT
oradata_fs1  532G   119K   532G   0%  DEGRADED  -
rpool        136G  28.6G   107G  21%  ONLINE    -
#
Why
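When destroy trips over I/O errors, clearing the error state and forcing the destroy is the usual next step; a sketch:
# zpool clear oradata_fs1
# zpool destroy -f oradata_fs1
If the pool's backing devices have vanished entirely, a bare zpool import will at least show whether the pool is still visible.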
2012 Jun 12
15
Recovery of RAIDZ with broken label(s)
Hi all,
I have a 5 drive RAIDZ volume with data that I'd like to recover.
The long story runs roughly:
1) The volume was running fine under FreeBSD on motherboard SATA controllers.
2) Two drives were moved to a HP P411 SAS/SATA controller
3) I *think* the HP controllers wrote some volume information to the end of
each disk (hence no more ZFS labels 2,3)
4) In its "auto
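A useful first diagnostic is to dump whatever labels remain on each drive; a sketch, with a hypothetical FreeBSD device name:
# zdb -l /dev/da0
ZFS keeps four label copies, two at the front and two at the end of each vdev, so if the controller only overwrote the end of the disks, labels 0 and 1 may still identify the pool.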
2009 Mar 03
8
zfs list extentions related to pNFS
Hi,
I am soliciting input from the ZFS engineers and/or ZFS users on an
extension to "zfs list". Thanks in advance for your feedback.
Quick Background:
The pNFS project (http://opensolaris.org/os/project/nfsv41/) is adding
a new DMU object set type which is used on the pNFS data server to
store pNFS stripe DMU objects. A pNFS dataset gets created with the
"zfs
2010 Jan 28
2
Need help with repairing zpool :(
...how this can happen is not the topic of this message.
Now there is a problem and I need to solve it, if that is possible.
I have one HDD (80 GB); the entire disk is used for rpool, with the system and home folders on it.
Reinstalling the system is no problem, but I need to save some files from the user dirs.
And, of course, there is no backup.
So, the problem is that the zpool is broken :( when I try to start
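A rescue attempt from a live environment typically combines a forced import, the rewind option where the build supports it, and an alternate root; a sketch:
# zpool import -f -F -R /mnt rpool
# cp -rp /mnt/export/home /path/to/safe/place
The -F rewind discards the last few transaction groups, which is often enough to get a damaged pool imported long enough to copy data off.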
2008 Aug 22
2
zpool autoexpand property - HowTo question
I noted this PSARC thread with interest:
Re: zpool autoexpand property [PSARC/2008/353 Self Review]
because it so happens that during a recent disk upgrade
on a laptop, I migrated a zpool off of one partition
onto a slightly larger one, and I'd like to somehow tell
ZFS to grow the zpool to fill the new partition. So,
what's the best way to do this? (and is it
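Assuming bits with PSARC/2008/353 integrated, the property makes the grow happen when the device is reopened, and online -e forces it immediately; a sketch with a hypothetical pool and slice:
# zpool set autoexpand=on tank
# zpool online -e tank c0t0d0s7
Before the property existed, pools grew on their own once the enlarged partition was reopened, e.g. after an export/import.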
2011 Nov 08
1
Single-disk rpool with inconsistent checksums, import fails
Hello all,
I have an oi_148a PC with a single root disk; recently it started
failing to boot, hanging after the copyright
message whenever I use any of my GRUB menu options.
Booting with an oi_148a LiveUSB I had around since
installation, I ran some zdb traversals over the rpool
and zpool import attempts. The imports fail by running
the kernel out of RAM (as recently discussed in the
list with
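One way to assess the damage without committing to it is a dry-run rewind import; a sketch:
# zpool import -nF rpool
With -n, the -F recovery only reports what a rewind would discard instead of performing it, which at least shows whether a rewind could succeed.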
2008 Aug 26
5
Problem w/ b95 + ZFS (version 11) - seeing fair number of errors on multiple machines
Hi,
After upgrading to b95 of OSOL/Indiana and doing a ZFS upgrade to the newer
revision, all arrays I have using ZFS mirroring are displaying errors. This
started happening immediately after the ZFS upgrade. Here is an example:
ormandj@neutron.corenode.com:~$ zpool status
  pool: rpool
 state: DEGRADED
status: One or more devices has experienced an unrecoverable error. An
        attempt was
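Once cabling and RAM are ruled out, the standard sequence is a scrub followed by clearing the counters; a sketch:
# zpool scrub rpool
# zpool status -v rpool
# zpool clear rpool
Identical errors appearing on multiple machines immediately after the same upgrade point at the software revision rather than the disks.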
2011 Apr 05
11
ZFS send/receive to Solaris/FBSD/OpenIndiana/Nexenta VM guest?
Hello,
I'm debating an OS change and also thinking about my options for data
migration to my next server, whether it is on new or the same hardware.
Migrating to a new machine, I understand, is a simple matter of ZFS
send/receive, but reformatting the existing drives to host my existing
data is an area I'd like to learn a little more about. In the past I've
asked about
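The send/receive half of such a migration is short; a sketch with hypothetical pool names "tank" and "newtank":
# zfs snapshot -r tank@migrate
# zfs send -R tank@migrate | zfs receive -Fdu newtank
-R carries descendant datasets, snapshots, and properties; -u keeps the received filesystems from mounting over the live ones.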
2010 Mar 05
2
ZFS replication send/receive errors out
My full backup script errored out the last two times I ran it. I've got
a full Bash trace of it, so I know exactly what was done.
There are a moderate number of snapshots on the zp1 pool, and I'm
intending to replicate the whole thing into the backup pool.
After housekeeping, I take a current snapshot on the data pool (zp1).
Since this is a new full backup, I then
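For a replication like this, follow-up runs can use the incremental form so only changes since the previous backup snapshot cross the wire; a sketch with hypothetical snapshot and pool names:
# zfs send -R -I zp1@bup-prev zp1@bup-new | zfs receive -dvF backuppool
-I includes every intermediate snapshot between the two named ones, so the backup side keeps the full snapshot history.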
2009 Jun 23
6
recursive snapshot
I thought I recalled reading somewhere that in the situation where you
have several zfs filesystems under one top level directory like this:
rpool
rpool/ROOT/osol-112
rpool/export
rpool/export/home
rpool/export/home/reader
you could do a snapshot encompassing everything below the pool instead of
having to do it at each level.
(Maybe it was in a dream...)
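That recollection is right: the -r flag snapshots a dataset and everything beneath it atomically; a sketch:
# zfs snapshot -r rpool@now
# zfs list -t snapshot | grep @now
Every filesystem in the list above would get an @now snapshot from the single command.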
2009 Dec 11
7
Doing ZFS rollback with preserving later created clones/snapshot?
Hi.
Is it possible, on Solaris 10 5/09, to roll back to a ZFS snapshot
WITHOUT destroying later-created clones or snapshots?
Example:
--($ ~)-- sudo zfs snapshot rpool/ROOT@01
--($ ~)-- sudo zfs snapshot rpool/ROOT@02
--($ ~)-- sudo zfs clone rpool/ROOT@02 rpool/ROOT-02
--($ ~)-- LC_ALL=C sudo zfs rollback rpool/ROOT@01
cannot rollback to 'rpool/ROOT@01': more
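zfs rollback refuses when later snapshots exist, and its -r/-R options delete them (and their clones); the non-destructive alternative is to clone the older snapshot and promote the clone; a sketch:
# zfs clone rpool/ROOT@01 rpool/ROOT-01
# zfs promote rpool/ROOT-01
Promotion reparents the snapshot history onto the new clone, leaving @02 and ROOT-02 intact.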
2007 Nov 13
3
this command can cause zpool coredump!
In Solaris 10 U4,
type:
-bash-3.00# zpool create -R filepool mirror /export/home/f1.dat /export/home/f2.dat
invalid alternate root 'Segmentation Fault (core dumped)
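The crash aside, the invocation is missing the alternate root itself: -R takes an absolute directory before the pool name; a sketch of the corrected command:
# zpool create -R /mnt filepool mirror /export/home/f1.dat /export/home/f2.dat
Segfaulting on the malformed argument instead of printing a usage message is still a bug in its own right.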
2009 Oct 15
8
sub-optimal ZFS performance
Hello,
ZFS is behaving strangely on an OSOL laptop; your thoughts are welcome.
I am running OSOL on my laptop, currently b124, and I found that the
performance of ZFS is not optimal in all situations. If I check
how much space the package cache for pkg(1) uses, it takes a bit
longer on this host than on a comparable machine to which I transferred
all the data.
user@host:/var/pkg$ time
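A like-for-like number needs the same metadata walk timed on both hosts; a sketch, using Solaris ptime for the measurement:
$ ptime du -sh /var/pkg
A large gap between machines with similar hardware would point at the pool's on-disk layout rather than at pkg(1) itself.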
2009 Nov 03
3
virsh troubling zfs!?
Hi and hello,
I have a problem that's confusing me; I hope someone can help me with it.
I followed a "best practice" - I think - of using dedicated ZFS filesystems for my virtual machines.
Commands (for completeness):
zfs create rpool/vms
zfs create rpool/vms/vm1
zfs create -V 10G rpool/vms/vm1/vm1-dsk
This command creates the file system /rpool/vms/vm1/vm1-dsk and the
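Worth noting: zfs create -V creates a volume (zvol), not a filesystem, so vm1-dsk appears as a block device rather than under /rpool/vms; a sketch of where to look:
# ls -l /dev/zvol/dsk/rpool/vms/vm1/vm1-dsk
That device path is what the virtual machine definition would reference as its disk.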
2011 Apr 08
11
How to rename rpool. Is that recommended?
Hello,
I have a situation where a host, which is booted off its 'rpool', needs
to temporarily import the 'rpool' of another host, edit some files in
it, and export the pool back, retaining its original name 'rpool'. Can
this be done?
Here is what I am trying to do:
# zpool import -R /a rpool temp-rpool
# zfs set mountpoint=/mnt
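Importing under a temporary name is how a rename works, since a pool takes whatever name it is imported with; a sketch of the round trip:
# zpool import -R /a rpool temp-rpool
# zpool export temp-rpool
Back on its own host, importing it as rpool restores the original name:
# zpool import temp-rpool rpool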
2010 Apr 29
39
Best practice for full system backup - equivalent of ufsdump/ufsrestore
I'm looking for a way to back up my entire system, the rpool ZFS pool, to an external HDD so that it can be recovered in full if the internal HDD fails. Previously, with Solaris 10 using UFS, I would use ufsdump and ufsrestore, which worked so well that I was very confident with it. ZFS doesn't have an exact replacement for this, so I need to find a best practice to replace it.
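The closest ZFS analogue to ufsdump is a recursive snapshot streamed with send -R, received either into a pool on the external disk or into a file; a sketch with a hypothetical external pool named "backup":
# zfs snapshot -r rpool@full
# zfs send -R rpool@full | zfs receive -Fd backup
Restoring is the same pipeline in reverse, plus reinstalling the boot blocks on the replacement disk.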
2009 Jan 05
3
ZFS import on pool with same name?
I have an OpenSolaris snv_101 box with ZFS on it. (Sun Ultra 20 M2)
zpool name is rpool.
I have a 2nd hard drive in the box that I am trying to recover the ZFS
data from (long story but that HD became unbootable after installing IPS
on the machine)
Both drives have a pool named "rpool", so I can't import the rpool from
the 2nd drive.
root@hyperion:~# zpool status
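zpool import can disambiguate by the numeric pool ID and rename on the way in; a sketch, where the ID (printed by a bare zpool import) is hypothetical:
# zpool import
  pool: rpool
    id: 1234567890123456789
# zpool import -R /mnt 1234567890123456789 altrpool
The second pool then runs as altrpool and no longer collides with the booted rpool.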