Displaying 20 results from an estimated 1000 matches similar to: "How to delegate zfs snapshot destroy to users?"
2010 Apr 29
39
Best practice for full system backup - equivalent of ufsdump/ufsrestore
I'm looking for a way to back up my entire system, the rpool zfs pool, to an external HDD so that it can be recovered in full if the internal HDD fails. Previously with Solaris 10 using UFS I would use ufsdump and ufsrestore, which worked so well that I was very confident with it. ZFS doesn't have an exact replacement for this, so I need to find a best practice to replace it.
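The usual ZFS-native replacement for the ufsdump workflow is a recursive snapshot streamed with zfs send/receive. A minimal sketch, assuming a second pool can be created on the external disk (the pool, device, and snapshot names below are invented for illustration):

```shell
# Snapshot every dataset in the root pool at once
zfs snapshot -r rpool@backup1

# Option 1: replicate the whole hierarchy into a pool on the external disk
zpool create extpool c2t0d0
zfs send -R rpool@backup1 | zfs receive -Fdu extpool/rpool

# Option 2: store the stream as a file. Note that a single corrupt bit
# can make the whole stream unreceivable, so option 1 is usually safer.
zfs send -R rpool@backup1 > /mnt/external/rpool.backup1.zfs
```

The -R flag preserves properties, descendant datasets, and snapshots; -u on the receive side keeps the restored datasets from being mounted over the live system.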
2009 Dec 27
7
How to destroy your system in a funny way with ZFS
Hi all,
I installed another OpenSolaris (snv_129) in VirtualBox 3.1.0 on Windows because snv_130 doesn't boot anymore after installation of the VirtualBox guest additions. Older builds before snv_129 were running fine too. I like some features of this OS, but now I end up with something funny.
I installed default snv_129, installed guest additions -> reboot, set
2009 Mar 03
8
zfs list extensions related to pNFS
Hi,
I am soliciting input from the ZFS engineers and/or ZFS users on an
extension to "zfs list". Thanks in advance for your feedback.
Quick Background:
The pNFS project (http://opensolaris.org/os/project/nfsv41/) is adding
a new DMU object set type which is used on the pNFS data server to
store pNFS stripe DMU objects. A pNFS dataset gets created with the
"zfs
2008 Jul 22
2
Problems mounting ZFS after install
Let me thank everyone in advance. I've read a number of posts here and they helped tremendously in getting the install done. I have a couple of remaining issues which I can't seem to overcome. Here are the basics:
dom0 - CentOS 5.2 32-bit
Xen 3.2.1 compiles from source
domU - os200805.iso
The install config:
[root@internetpowagroup oshman]# cat opensolaris.install
name =
2011 Nov 22
3
SUMMARY: mounting datasets from a read-only pool with aid of tmpfs
Hello all,
I'd like to report a tricky situation and a workaround
I've found useful - hope this helps someone in similar
situations.
To cut the long story short, I could not properly mount
some datasets from a readonly pool, which had a non-"legacy"
mountpoint attribute value set, but the mountpoint was not
available (directory absent or not empty). In this case
2009 Jan 05
3
ZFS import on pool with same name?
I have an OpenSolaris snv_101 box with ZFS on it. (Sun Ultra 20 M2)
zpool name is rpool.
I have a 2nd hard drive in the box that I am trying to recover the ZFS
data from (long story but that HD became unbootable after installing IPS
on the machine)
Both drives have a pool named "rpool", so I can't import the rpool from
the 2nd drive.
root@hyperion:~# zpool status
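When two pools share the name rpool, the standard way out is to import the second one by its numeric pool ID under a new name. A hedged sketch (the ID and new name below are invented):

```shell
# List importable pools; each entry shows a unique numeric pool ID
zpool import

# Import the second rpool by its ID under a different name, rooted at
# an alternate mountpoint so its datasets don't collide with the live ones
zpool import -R /mnt 6053305386196981380 oldrpool
```

Once imported as oldrpool, its datasets can be mounted and copied from without touching the boot pool.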
2009 Aug 14
16
What's eating my disk space? Missing snapshots?
Please can someone take a look at the attached file which shows the output on my machine of
zfs list -r -t filesystem,snapshot -o space rpool/export/home/matt
The USEDDS figure of ~2GB is what I would expect, and is the same figure reported by the Disk Usage Analyzer. Where is the remaining 13.8GB USEDSNAP figure coming from? If I total up the list of zfs-auto snapshots it adds up to about 4.8GB,
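Part of the gap is explained by how USEDSNAP is accounted: it is the space that would be freed by destroying all snapshots, and blocks shared between two or more snapshots are charged to USEDSNAP but to no individual snapshot. A sketch of how to see this, using the dataset from the post above:

```shell
# Space accounting breakdown per dataset
zfs list -r -t filesystem,snapshot -o space rpool/export/home/matt

# Per-snapshot "used" counts only blocks unique to that snapshot, so the
# per-snapshot figures can sum to far less than USEDSNAP when successive
# snapshots share the same deleted data.
zfs list -r -t snapshot -o name,used,referenced rpool/export/home/matt
```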
2011 Apr 08
11
How to rename rpool. Is that recommended ?
Hello,
I have a situation where a host, which is booted off its 'rpool', needs
to temporarily import the 'rpool' of another host, edit some files in
it, and export the pool back retaining its original name 'rpool'. Can
this be done?
Here is what I am trying to do:
# zpool import -R /a rpool temp-rpool
# zfs set mountpoint=/mnt
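Importing under a temporary name does rename the pool, but a second import can put the name back. A sketch of the round trip, under the assumption that the pool is exported again before the other host needs it (temp-rpool is the name from the post above):

```shell
# Import the foreign rpool under a temporary name and an alternate root
zpool import -R /a rpool temp-rpool

# ... edit files under /a ...

# Export, then re-import under the original name to restore it,
# and export again so the pool leaves carrying the name "rpool"
zpool export temp-rpool
zpool import -R /a temp-rpool rpool
zpool export rpool
```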
2009 Dec 11
7
Doing a ZFS rollback while preserving later-created clones/snapshots?
Hi.
Is it possible on Solaris 10 5/09, to rollback to a ZFS snapshot,
WITHOUT destroying later created clones or snapshots?
Example:
--($ ~)-- sudo zfs snapshot rpool/ROOT@01
--($ ~)-- sudo zfs snapshot rpool/ROOT@02
--($ ~)-- sudo zfs clone rpool/ROOT@02 rpool/ROOT-02
--($ ~)-- LC_ALL=C sudo zfs rollback rpool/ROOT@01
cannot rollback to 'rpool/ROOT@01': more
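A plain rollback refuses (and with -r destroys) anything newer than the target snapshot. One common workaround is to clone the old snapshot and promote the clone instead of rolling back, which leaves the later snapshots and clones intact. A hedged sketch continuing the example above (the clone name is invented):

```shell
# Instead of rolling back, clone the state you want to return to
zfs clone rpool/ROOT@01 rpool/ROOT-01

# Promote the clone so it takes ownership of the shared history;
# the original dataset, with @02 and its clone still intact,
# becomes a clone of the promoted dataset
zfs promote rpool/ROOT-01
```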
2008 Aug 26
5
Problem w/ b95 + ZFS (version 11) - seeing fair number of errors on multiple machines
Hi,
After upgrading to b95 of OSOL/Indiana, and doing a ZFS upgrade to the newer
revision, all arrays I have using ZFS mirroring are displaying errors. This
started happening immediately after ZFS upgrades. Here is an example:
ormandj@neutron.corenode.com:~$ zpool status
pool: rpool
state: DEGRADED
status: One or more devices has experienced an unrecoverable error. An
attempt was
2011 Aug 08
2
rpool recover not using zfs send/receive
Is it possible to recover the rpool with only a tar/star archive of the root filesystem? I have used the zfs send/receive methods and they work without a problem.
What I am trying to do is recreate the rpool and underlying zfs filesystems (rpool/ROOT, rpool/s10_uXXXXXX, rpool/dump, rpool/swap, rpool/export, and rpool/export/home). I then mount the pool at an alternate root and restore the tar
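A rough sketch of the recreate-and-unpack approach, heavily hedged: the device, dataset, and archive names below are invented, and the exact dataset properties should be copied from the original pool before disaster strikes (e.g. with zfs get -r all rpool):

```shell
# Recreate the pool at an alternate root, then the dataset skeleton
zpool create -f -o altroot=/a rpool c0t0d0s0
zfs create -o canmount=off rpool/ROOT
zfs create -o mountpoint=/ rpool/ROOT/s10root     # name invented
zpool set bootfs=rpool/ROOT/s10root rpool

# Unpack the archive into the mounted root dataset
cd /a && tar xpf /backup/root.tar                 # path invented

# Reinstall the boot blocks so the disk boots (SPARC form shown)
installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk \
    /dev/rdsk/c0t0d0s0
```

Dump and swap volumes (rpool/dump, rpool/swap) would be recreated separately with zfs create -V, since they are not part of the tar archive.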
2009 Nov 03
3
virsh troubling zfs!?
Hi and hello,
I have a problem confusing me. I hope someone can help me with it.
I followed a "best practice" - I think - using dedicated zfs filesystems for my virtual machines.
Commands (for completion):
[i]zfs create rpool/vms[/i]
[i]zfs create rpool/vms/vm1[/i]
[i] zfs create -V 10G rpool/vms/vm1/vm1-dsk[/i]
This command creates the volume [i]rpool/vms/vm1/vm1-dsk[/i] and the
2011 Feb 18
2
time-sliderd doesn't remove snapshots
In the last few days my performance has gone to hell. I'm running:
# uname -a
SunOS nissan 5.11 snv_150 i86pc i386 i86pc
(I'll upgrade as soon as the desktop hang bug is fixed.)
The performance problems seem to be due to excessive I/O on the main
disk/pool.
The only thing I've changed recently is that I've created and destroyed
a snapshot, and I used
2010 Jul 28
4
zfs allow does not work for rpool
I am trying to give a general user permissions to create zfs filesystems in the rpool.
zpool set delegation=on rpool
zfs allow <user> create rpool
both run without any issues.
zfs allow rpool reports the user does have create permissions.
zfs create rpool/test
cannot create rpool/test: permission denied.
Can you not allow to the rpool?
--
This message posted from opensolaris.org
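One likely culprit in the post above: delegated create is typically not enough on its own, because zfs create also mounts the new filesystem, so the user needs mount permission too. A hedged sketch of the usual incantation (the user name is invented):

```shell
# Delegation is a pool property, set with "zpool set" (no "=" after set)
zpool set delegation=on rpool

# create alone fails at mount time; delegate mount alongside it
zfs allow alice create,mount rpool

# Verify what is delegated on the pool's top-level dataset
zfs allow rpool
```

Even then, on some Solaris builds ordinary users lack the privilege to mount at all, in which case the create still fails despite correct delegation.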
2011 Jan 28
2
ZFS root clone problem
(for some reason I cannot find my original thread... so I'm reposting it)
I am trying to move my data off of a 40gb 3.5" drive to a 40gb 2.5" drive. This is in a Netra running Solaris 10.
Originally what I did was:
zpool attach -f rpool c0t0d0 c0t2d0.
Then I did an installboot on c0t2d0s0.
Didn't work. I was not able to boot from my second drive (c0t2d0).
I cannot remember
2009 Jun 23
6
recursive snapshot
I thought I recalled reading somewhere that in the situation where you
have several zfs filesystems under one top level directory like this:
rpool
rpool/ROOT/osol-112
rpool/export
rpool/export/home
rpool/export/home/reader
you could do a snapshot encompassing everything below rpool instead of
having to do it at each level.
(Maybe it was in a dream...)
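That memory is right: zfs snapshot -r takes an atomic snapshot of a dataset and every descendant in one command. A quick sketch (the snapshot name is invented):

```shell
# One command snapshots rpool and everything beneath it, atomically
zfs snapshot -r rpool@today

# Each dataset gets its own snapshot carrying the same name,
# e.g. rpool@today, rpool/export@today, rpool/export/home@today, ...
zfs list -t snapshot -o name
```

The -r flag works the same way for zfs destroy, so the whole set can be removed with one matching command.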
2010 Apr 16
1
cannot set property for 'rpool': property 'bootfs' not supported on EFI labeled devices
I am getting the following error, however as you can see below this is a SMI
label...
cannot set property for 'rpool': property 'bootfs' not supported on EFI
labeled devices
# zpool get bootfs rpool
NAME PROPERTY VALUE SOURCE
rpool bootfs - default
# zpool set bootfs=rpool/ROOT/s10s_u8wos_08a rpool
cannot set property for 'rpool': property
2008 Sep 13
3
Restore a ZFS Root Mirror
Hi all,
after installing OpenSolaris 2008.05 in VirtualBox I've created a ZFS root mirror with:
"zpool attach rpool Disk B"
and it works like a charm. Now I tried to restore the rpool from the worst-case
scenario: the disk the system was installed to (Disk A) fails.
I replaced Disk A with another virtual Disk C and tried to restore the rpool, but
my problem is that I
2010 Oct 23
2
No ACL inheritance with aclmode=passthrough in onnv-134
Hi list,
while preparing for the changed ACL/mode_t mapping semantics coming
with onnv-147 [1], I discovered that in onnv-134 on my system ACLs are
not inherited when aclmode is set to passthrough for the filesystem.
This very much puzzles me. Example:
$ uname -a
SunOS os 5.11 snv_134 i86pc i386 i86pc
$ pwd
/Volumes/ACLs/dir1
$ zfs list | grep /Volumes
rpool/Volumes 7,00G 39,7G 6,84G
2009 Mar 09
3
cannot mount '/export': directory is not empty
Hello,
I am desperate. Today I realized that my OS build 108 doesn't want to boot.
I have no idea what I screwed up. I upgraded to 108 last week without
any problems.
Here is where I'm stuck:
Reading ZFS config: done.
Mounting ZFS filesystems: (1/17) cannot mount '/export': directory is
not empty (17/17)
$ svcs -x
svc:/system/filesystem/local:default (local file
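The usual cause of this error is stale files left in the /export directory on the root filesystem, hiding underneath the dataset's mountpoint. A hedged recovery sketch (from single-user mode, dataset name assumed to be rpool/export):

```shell
# Move the stale contents aside, then mount the dataset normally ...
mv /export /export.old
mkdir /export
zfs mount rpool/export

# ... or, alternatively, overlay-mount on top of the non-empty
# directory, hiding (not deleting) whatever is underneath
zfs mount -O rpool/export
```

After a successful mount, clearing the service with svcadm clear filesystem/local lets the boot continue.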