Displaying 20 results from an estimated 246 matches for "rpool".
2009 Nov 11
0
libzfs zfs_create() fails on sun4u daily bits (daily.1110)
...tpriv:
scheduling-class:
ip-type: shared
hostid: 900d833f
inherit-pkg-dir:
dir: /lib
inherit-pkg-dir:
dir: /platform
inherit-pkg-dir:
dir: /sbin
inherit-pkg-dir:
dir: /usr
root krb-v210-4 [20:06:07 0]# zfs list
NAME                 USED  AVAIL  REFER  MOUNTPOINT
rpool               17.2G  16.0G    66K  /rpool
rpool/ROOT          8.51G  16.0G    21K  legacy
rpool/ROOT/snv_126  8.51G  16.0G  8.51G  /
rpool/dump          4.00G  16.0G  4.00G  -
rpool/export         688M  16.0G   688M  /export
rpool/export/home     21K  16.0G    21K  /export/home
rpool/swap...
2010 Jun 16
0
files lost in the zpool - retrieval possible?
...the symptoms during installation it seems that there might be something with the ahci driver. No problem with the Opensolaris LiveCD system.
Some weeks ago during copy of about 2 GB from a USB stick to the zfs filesystem, the system froze and afterwards refused to boot.
Now when investigating the rpool from the LiveCD system, it can be seen that about 11.5 GB are still used on the rpool (total capacity: ~65 GB), but the space occupied by the files actually accessible after importing the rpool is only about 750 MB. The '/' filesystem cannot be accessed (the 10 GB is the...
2009 Mar 03
8
zfs list extentions related to pNFS
...nfiguration.
The following is output from the modified command and reflects the
current mode of operation (i.e. "zfs list" lists filesystems, volumes
and pnfs datasets by default):
(pnfs-17-21:/home/lisagab):6 % zfs list
NAME                    USED  AVAIL  REFER  MOUNTPOINT
rpool                  30.0G  37.0G  32.5K  /rpool
rpool/ROOT             18.2G  37.0G    18K  legacy
rpool/ROOT/snv_105     18.2G  37.0G  6.86G  /
rpool/ROOT/snv_105/var 11.4G  37.0G  11.4G  /var
rpool/dump             9.77G  37.0G  9.77G  -
rpool/export...
2009 Jun 23
6
recursive snapshot
I thought I recalled reading somewhere that in the situation where you
have several zfs filesystems under one top level directory like this:
rpool
rpool/ROOT/osol-112
rpool/export
rpool/export/home
rpool/export/home/reader
you could do a snapshot encompassing everything below rpool instead of
having to do it at each level.
(Maybe it was in a dream...)
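It was not a dream: zfs supports this directly via the -r flag to zfs snapshot, which snapshots a dataset and all of its descendants atomically. A minimal sketch, assuming a pool named rpool and a snapshot name of your choosing:

```shell
# Snapshot rpool and every dataset beneath it in one atomic step
zfs snapshot -r rpool@mybackup

# Verify: every descendant now has an @mybackup snapshot
zfs list -t snapshot -r rpool
```

Requires a live pool and appropriate privileges; the snapshot name "mybackup" is just an example.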
2009 Dec 11
7
Doing ZFS rollback with preserving later created clones/snapshot?
Hi.
Is it possible on Solaris 10 5/09, to rollback to a ZFS snapshot,
WITHOUT destroying later created clones or snapshots?
Example:
--($ ~)-- sudo zfs snapshot rpool/ROOT@01
--($ ~)-- sudo zfs snapshot rpool/ROOT@02
--($ ~)-- sudo zfs clone rpool/ROOT@02 rpool/ROOT-02
--($ ~)-- LC_ALL=C sudo zfs rollback rpool/ROOT@01
cannot rollback to 'rpool/ROOT@01': more recent snapshots exist
use ''-r'' to force deletion...
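As the error indicates, zfs rollback cannot preserve snapshots and clones newer than the target; -r destroys them. A workaround sketch, using the dataset names from the example: instead of rolling back, clone the older snapshot, which leaves the later snapshot and clone untouched:

```shell
# Create a writable clone of the @01 state instead of rolling back;
# @02 and the existing clone rpool/ROOT-02 survive intact
zfs clone rpool/ROOT@01 rpool/ROOT-01

# Confirm everything is still present
zfs list -t all -r rpool/ROOT
```

Whether the clone can then replace the original boot environment depends on the setup; this only shows that the @01 state is recoverable without destroying anything.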
2009 Oct 15
8
sub-optimal ZFS performance
Hello,
ZFS is behaving strangely on an OSOL laptop; your thoughts are welcome.
I am running OSOL on my laptop, currently b124, and I found that the
performance of ZFS is not optimal in all situations. If I check how
much space the package cache for pkg(1) uses, it takes a bit longer
on this host than on a comparable machine to which I transferred
all the data.
user@host:/var/pkg$ time
2011 Apr 08
11
How to rename rpool. Is that recommended ?
Hello,
I have a situation where a host, which is booted off its 'rpool', needs
to temporarily import the 'rpool' of another host, edit some files in
it, and export the pool back retaining its original name 'rpool'. Can
this be done?
Here is what I am trying to do:
# zpool import -R /a rpool temp-rpool
# zfs set mountpoint...
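Renaming on import is supported, but the new name persists after export, so getting back to the original name takes a second import/export round trip. A sketch of the full cycle, continuing the commands from the question (the -R /a altroot keeps the foreign pool's mountpoints from colliding with the running system's):

```shell
# Import the foreign rpool under a temporary name, rooted at /a
zpool import -R /a rpool temp-rpool

# ... edit the files under /a ...

# Export, then re-import under the original name and export again,
# so the pool goes back to its owner named 'rpool'
zpool export temp-rpool
zpool import -R /a temp-rpool rpool
zpool export rpool
```

This assumes only one importable pool matches each name at each step; with ambiguity, import by numeric pool ID instead.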
2009 Nov 03
3
virsh troubling zfs!?
Hi and hello,
I have a problem confusing me. I hope someone can help me with it.
I followed a "best practice" - I think - using dedicated zfs filesystems for my virtual machines.
Commands (for completion):
zfs create rpool/vms
zfs create rpool/vms/vm1
zfs create -V 10G rpool/vms/vm1/vm1-dsk
This command creates the file system /rpool/vms/vm1/vm1-dsk and the according /dev/zvol/dsk/rpool/vms/vm1/vm1-dsk.
If I delete a VM i set up using this filesystem via virsh undefine vm1 th...
2008 Jul 22
2
Problems mounting ZFS after install
...shman/solaris/x86.miniroot-66-0624-nd'
extra = '/platform/i86xpv/kernel/unix'
on_shutdown = "destroy"
on_reboot = "destroy"
on_crash = "preserve"
When it boots up I see:
Searching for installed OS instances...
ROOT/opensolaris was found on rpool.
Do you wish to have it mounted read-write on /a? [y,n,?] y
mounting rpool on /a
cannot mount 'rpool/ROOT/opensolaris': legacy mountpoint
use mount(1M) to mount this filesystem
Unable to mount rpool/ROOT/opensolaris as root
Starting shell.
Drops me to a shell where I see:
# zfs l...
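The error is self-describing: datasets whose mountpoint property is "legacy" are not auto-mounted by the installer's helper, and must be mounted by hand with mount(1M). A minimal sketch of what the recovery shell likely needs, using the dataset name from the error message:

```shell
# Legacy-mountpoint datasets have to be mounted explicitly
mount -F zfs rpool/ROOT/opensolaris /a
```

After that, the upgrade/recovery tooling can be pointed at /a as usual.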
2011 Nov 22
3
SUMMARY: mounting datasets from a read-only pool with aid of tmpfs
...his case zfs actually works as documented), but rather
as an inconvenience that might need some improvement, i.e.
to allow (forced?) use of "mount -F zfs" even for datasets
with the mountpoint property defined.
Here goes the detailed version:
I was evacuating data from a corrupted rpool which I could
only import read-only while booted from a LiveUSB. As I wrote
previously, I could not use "zfs send" (bug now submitted to
Illumos tracker), so I reverted to directly mounting datasets
and copying data off them into another location (into a similar
dataset hierarchy on my da...
2009 Jan 05
3
ZFS import on pool with same name?
I have an OpenSolaris snv_101 box with ZFS on it. (Sun Ultra 20 M2)
zpool name is rpool.
I have a 2nd hard drive in the box that I am trying to recover the ZFS
data from (long story, but that HD became unbootable after installing IPS
on the machine).
Both drives have a pool named "rpool", so I can't import the rpool from
the 2nd drive.
root@hyperion:~# z...
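When two pools share the name rpool, the second one can be imported by its numeric pool ID and given a new name in the same command. A sketch, with a placeholder ID:

```shell
# List importable pools together with their numeric pool IDs
zpool import

# Import the second rpool by its ID (the long number is a placeholder)
# and rename it so it no longer collides with the booted rpool
zpool import 1234567890123456789 rpool2
```

The real ID comes from the first command's output; the name rpool2 is arbitrary.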
2008 Aug 26
5
Problem w/ b95 + ZFS (version 11) - seeing fair number of errors on multiple machines
Hi,
After upgrading to b95 of OSOL/Indiana, and doing a ZFS upgrade to the newer
revision, all arrays I have using ZFS mirroring are displaying errors. This
started happening immediately after ZFS upgrades. Here is an example:
ormandj@neutron.corenode.com:~$ zpool status
pool: rpool
state: DEGRADED
status: One or more devices has experienced an unrecoverable error. An
attempt was made to correct the error. Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
using 'zpool clear' or replace t...
2009 Aug 14
16
What's eating my disk space? Missing snapshots?
Please can someone take a look at the attached file which shows the output on my machine of
zfs list -r -t filesystem,snapshot -o space rpool/export/home/matt
The USEDDS figure of ~2GB is what I would expect, and is the same figure reported by the Disk Usage Analyzer. Where is the remaining 13.8GB USEDSNAP figure coming from? If I total up the list of zfs-auto snapshots it adds up to about 4.8GB, which leaves about 11GB unaccounted for....
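One likely explanation: the per-snapshot USED column only counts space unique to each snapshot, while USEDSNAP counts everything that would be freed if all snapshots were destroyed, including blocks shared among several snapshots; summing per-snapshot USED therefore routinely undercounts. A sketch for digging in, using the dataset from the question:

```shell
# Per-snapshot USED = space unique to that one snapshot only;
# blocks shared by two or more snapshots show up in neither line,
# but they are included in the dataset's USEDSNAP total
zfs list -t snapshot -o name,used -s used -r rpool/export/home/matt
```

Destroying snapshots in adjacent pairs is the usual way to see how much space a range actually pins.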
2011 Aug 08
2
rpool recover not using zfs send/receive
Is it possible to recover the rpool with only a tar/star archive of the root filesystem? I have used the zfs send/receive methods and they work without a problem.
What I am trying to do is recreate the rpool and underlying zfs filesystems (rpool/ROOT, rpool/s10_uXXXXXX, rpool/dump, rpool/swap, rpool/export, and rpool/export/home)...
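In principle the pool and dataset layout can be recreated by hand and the data restored from the archive, though the bootfs property, legacy mountpoints, installboot, and the swap/dump sizes all have to be handled too (omitted below). A rough sketch, assuming the target disk is c0t0d0s0 and using the dataset names from the question:

```shell
# Recreate the pool and the dataset skeleton by hand
zpool create -f rpool c0t0d0s0
zfs create rpool/ROOT
zfs create rpool/ROOT/s10_uXXXXXX    # boot-environment name as in the question
zfs create -V 2G rpool/dump          # sizes are placeholders
zfs create -V 2G rpool/swap
zfs create rpool/export
zfs create rpool/export/home

# ... then extract the tar/star archive into the mounted root dataset ...
```

This is a sketch only; making the result bootable additionally requires installboot (SPARC) or installgrub (x86) and the correct bootfs/mountpoint properties.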
2010 Jul 28
4
zfs allow does not work for rpool
I am trying to give a general user permissions to create zfs filesystems in the rpool.
zpool set delegation=on rpool
zfs allow <user> create rpool
both run without any issues.
zfs allow rpool reports the user does have create permissions.
zfs create rpool/test
cannot create rpool/test : permission denied.
Can you not allow to the rpool?
--
This message posted from openso...
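A likely cause: creating a filesystem also mounts it, so delegating create alone is not enough; the user needs mount as well, and on Solaris an ordinary user may additionally lack the privilege to mount at all. A sketch, with <user> as a placeholder:

```shell
# Make sure delegation is enabled on the pool (it is on by default)
zpool set delegation=on rpool

# create alone fails because the new dataset must also be mounted;
# grant mount alongside create
zfs allow <user> create,mount rpool

# Verify the resulting permissions, then retry as the delegated user
zfs allow rpool
```

If it still fails, the remaining obstacle is usually the OS-level mount privilege rather than zfs delegation itself.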
2010 Apr 29
39
Best practice for full system backup - equivalent of ufsdump/ufsrestore
I'm looking for a way to back up my entire system, the rpool zfs pool, to an external HDD so that it can be recovered in full if the internal HDD fails. Previously with Solaris 10 using UFS I would use ufsdump and ufsrestore, which worked so well, I was very confident with it. Now ZFS doesn't have an exact replacement of this so I need to find a best...
2009 Jun 03
7
"no pool_props" for OpenSolaris 2009.06 with old SPARC hardware
...use AI Installer because OpenPROM is version 3.27.
So I built IPS from source, then created a zpool on a spare drive and installed OS 2009.06 on it
To make the disk bootable I used:
installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t1d0s0
using the executable from my new rpool.
But when I boot my new disk, I get the error "no pool_props" and the booting process returns to prompt with "Fast Device MMU miss".
I read OpenPROM 4.x was needed because of AI? Did I miss something?
Can you enlighten me ?
Thank you,
aurelien
2011 Feb 18
2
time-sliderd doesn't remove snapshots
...ng bug is fixed.)
The performance problems seem to be due to excessive I/O on the main
disk/pool.
The only things I've changed recently is that I've created and destroyed
a snapshot, and I used "zpool upgrade".
Here's what I'm seeing:
# zpool iostat rpool 5
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
rpool       13.3G   807M      7     85  15.9K   548K
rpool       13.3G   807M      3     89  1.60K   723K
rpool       13.3G   810M...
2010 Apr 26
2
How to delegate zfs snapshot destroy to users?
Hi,
I'm trying to let zfs users create and destroy snapshots in their zfs
filesystems.
So rpool/vm has the permissions:
osol137 19:07 ~: zfs allow rpool/vm
---- Permissions on rpool/vm -----------------------------------------
Permission sets:
@virtual clone,create,destroy,mount,promote,readonly,receive,rename,rollback,send,share,snapshot,userprop
Create time permissions:
@vi...
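One common gotcha with snapshot delegation: destroying a snapshot typically requires both destroy and mount on the dataset, and permissions need to reach the right scope (-l local, -d descendants; set names like @virtual apply wherever they are granted). A sketch, with a placeholder user name:

```shell
# Grant destroy and mount both locally and to descendants
zfs allow -ld someuser destroy,mount,snapshot rpool/vm

# Inspect the resulting permission table
zfs allow rpool/vm
```

If destroy still fails for the user, check whether the failing snapshot lives on a descendant where only create-time permissions apply.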
2009 Aug 12
4
zpool import -f rpool hangs
I had the rpool with two sata disks in the mirror. Solaris 10 5.10
Generic_141415-08 i86pc i386 i86pc
Unfortunately the first disk with grub loader has failed with unrecoverable
block write/read errors.
Now I have the problem to import rpool after the first disk has failed.
So I decided to do: "zpool import...