similar to: zfs allow does not work for rpool

Displaying 19 results from an estimated 10000 matches similar to: "zfs allow does not work for rpool"

2011 Apr 08
11
How to rename rpool. Is that recommended?
Hello, I have a situation where a host, which is booted off its 'rpool', needs to temporarily import the 'rpool' of another host, edit some files in it, and export the pool back retaining its original name 'rpool'. Can this be done? Here is what I am trying to do: # zpool import -R /a rpool temp-rpool # zfs set mountpoint=/mnt
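A minimal sketch of the rename-at-import idea, using the names from the post: a pool can only be renamed at import time, so renaming it back to 'rpool' requires a second import, and that import cannot happen on a host whose own active root pool already holds the name. The bootability of a renamed root pool is presumably why the poster asks whether this is recommended.

  # zpool import -R /a rpool temp-rpool    # import the foreign rpool under a temporary name
  # ...edit files under the /a altroot...
  # zpool export temp-rpool
  # zpool import temp-rpool rpool          # rename back; only works where no 'rpool' is active
  # zpool export rpool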
2010 Sep 29
2
rpool spare
Using ZFS v22, is it possible to add a hot spare to rpool? Thanks
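For a data pool, the generic syntax is below; whether a root pool accepts a spare depends on the release, since root pools have historically been restricted to simple mirrored vdevs, so treat this as a sketch with an illustrative device name.

  # zpool add rpool spare c1t3d0
  # zpool status rpool    # an accepted spare shows up under a 'spares' section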
2010 Apr 26
2
How to delegate zfs snapshot destroy to users?
Hi, I'm trying to let ZFS users create and destroy snapshots in their ZFS filesystems. So rpool/vm has the permissions: osol137 19:07 ~: zfs allow rpool/vm ---- Permissions on rpool/vm ----------------------------------------- Permission sets: @virtual clone,create,destroy,mount,promote,readonly,receive,rename,rollback,send,share,snapshot,userprop Create time permissions:
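For reference, a hedged sketch of the usual delegation steps ('alice' is an illustrative user); note that destroying a snapshot generally requires both the 'destroy' and 'mount' permissions on the filesystem:

  # zfs allow -s @virtual create,destroy,mount,rollback,snapshot rpool/vm
  # zfs allow alice @virtual rpool/vm
  # zfs allow rpool/vm    # verify the resulting permissions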
2010 Apr 16
1
cannot set property for 'rpool': property 'bootfs' not supported on EFI labeled devices
I am getting the following error, however as you can see below this is an SMI label... cannot set property for 'rpool': property 'bootfs' not supported on EFI labeled devices # zpool get bootfs rpool NAME PROPERTY VALUE SOURCE rpool bootfs - default # zpool set bootfs=rpool/ROOT/s10s_u8wos_08a rpool cannot set property for 'rpool': property
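One way to double-check the label is to read the VTOC directly; the device name below is illustrative, and relabeling wipes the existing partition table:

  # prtvtoc /dev/rdsk/c0t0d0s0    # prints the slice table and shows which label is in use
  # format -e c0t0d0              # the 'label' submenu can rewrite the disk as SMI (destructive)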
2011 Jan 28
2
ZFS root clone problem
(for some reason I cannot find my original thread... so I'm reposting it) I am trying to move my data off of a 40gb 3.5" drive to a 40gb 2.5" drive. This is in a Netra running Solaris 10. Originally what I did was: zpool attach -f rpool c0t0d0 c0t2d0. Then I did an installboot on c0t2d0s0. Didn't work. I was not able to boot from my second drive (c0t2d0). I cannot remember
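For a SPARC machine like a Netra, the boot block has to be installed on the new disk explicitly, and root pools want SMI-labeled slices rather than whole disks; a hedged sketch with the post's device names:

  # zpool attach -f rpool c0t0d0s0 c0t2d0s0
  # installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t2d0s0
  # zpool status rpool    # let the resilver finish before booting from the new disk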
2011 Aug 08
2
rpool recover not using zfs send/receive
Is it possible to recover the rpool with only a tar/star archive of the root filesystem? I have used the zfs send/receive methods and that works without a problem. What I am trying to do is recreate the rpool and underlying zfs filesystems (rpool/ROOT, rpool/s10_uXXXXXX, rpool/dump, rpool/swap, rpool/export, and rpool/export/home). I then mount the pool at an alternate root and restore the tar
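A rough sketch of recreating the layout before unpacking the tar archive, following the shape of the documented ZFS-root recovery procedure; dataset names and volume sizes are illustrative, and boot blocks still have to be reinstalled afterwards:

  # zpool create -f -R /a rpool c0t0d0s0
  # zfs create -o mountpoint=legacy rpool/ROOT
  # zfs create -o mountpoint=/ rpool/ROOT/s10_root    # mounts under /a because of -R
  # zfs create -V 2g rpool/dump
  # zfs create -V 2g rpool/swap
  # zpool set bootfs=rpool/ROOT/s10_root rpool
  # (cd /a && tar xpf /backup/root.tar)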
2010 Nov 25
1
Strange behavior of b151a and .zfs directory
Hello, after upgrading to Sol11Express I've noticed a kind of strange behavior in the .zfs directory of any ZFS filesystem. Go into the .zfs directory and type `find . -type f` for the first time after you've mounted the filesystem. It'll show nothing. Type it a second time and you will get the expected list of files from all the snapshots. Is this expected or is it a bug? I
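A plausible explanation is that snapshots under .zfs/snapshot are mounted on demand, and the first traversal merely triggers the mounts; forcing the mounts first should make a single pass see everything (the path is illustrative):

  $ ls /tank/fs/.zfs/snapshot/*/ > /dev/null    # trigger the on-demand snapshot mounts
  $ find /tank/fs/.zfs -type f                  # now lists files on the first pass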
2012 Dec 21
4
zfs receive options (was S11 vs illumos zfs compatibility)
> From: zfs-discuss-bounces@opensolaris.org [mailto:zfs-discuss-bounces@opensolaris.org] On Behalf Of bob netherton > You can, with recv, override any property in the sending stream that can be set from the command line (i.e., a writable). > # zfs send repo/support@cpu-0412 | zfs recv -o version=4 repo/test > cannot receive: cannot override received
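Where 'recv -o' refuses to override a received property, a common fallback is to receive first and set afterwards; note that 'version' in particular can only be raised, never lowered, so a stream from a newer filesystem version cannot be forced down to version=4 this way. Names are from the quoted post:

  # zfs send repo/support@cpu-0412 | zfs recv repo/test
  # zfs set compression=on repo/test    # ordinary writable properties can be set after the receive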
2010 May 28
21
expand zfs for OpenSolaris running inside vm
hello, all I have constrained disk space (only 8 GB) while running the OS inside a VM. Now I want to add more. It is easy to add a disk to the VM, but how can I grow the filesystem in the OS? I cannot use autoexpand because it isn't implemented in my system: $ uname -a SunOS sopen 5.11 snv_111b i86pc i386 i86pc If it were 171 it would be great, right? Doing the following: o added a new virtual HDD (it becomes
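On builds that predate autoexpand, one common workaround is to attach a larger virtual disk as a mirror, resilver, detach the small one, and then reopen the pool so the new size is picked up; device names are illustrative:

  # zpool attach rpool c7d0s0 c8d0s0    # c8d0 is the new, larger virtual disk
  # zpool status rpool                  # wait for the resilver to finish
  # zpool detach rpool c7d0s0
  (reboot, or export/import a non-root pool, to pick up the expanded size)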
2011 Feb 18
2
time-sliderd doesn't remove snapshots
In the last few days my performance has gone to hell. I'm running: # uname -a SunOS nissan 5.11 snv_150 i86pc i386 i86pc (I'll upgrade as soon as the desktop hang bug is fixed.) The performance problems seem to be due to excessive I/O on the main disk/pool. The only things I've changed recently are that I've created and destroyed a snapshot, and I used
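A hedged way to check whether accumulated auto-snapshots are eating the pool, and to clear them; the zfs-auto-snap prefix is what the time-slider services use, but the exact snapshot names will differ:

  # zfs list -t snapshot -o name,used | grep zfs-auto-snap
  # zfs destroy rpool/export@zfs-auto-snap_frequent-2011-02-18-10h15    # illustrative name
  # svcs -a | grep auto-snapshot    # the snapshot services, should you want to disable one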
2010 Sep 10
3
zpool upgrade and zfs upgrade behavior on b145
Not sure what the best list to send this to is right now, so I have selected a few; apologies in advance. A couple of questions. First, I have a physical host (call him bob) that was just installed with b134 a few days ago. I upgraded to b145 using the instructions on the Illumos wiki yesterday. The pool has been upgraded (27) and the zfs file systems have been upgraded (5). chris@bob:~# zpool
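For reference, the checks and upgrades look roughly like this; one known caveat is that upgrading a root pool past the version the installed boot loader understands can leave the system unbootable, so rpool deserves extra care:

  # zpool get version rpool    # current pool version
  # zpool upgrade -a           # upgrade all pools to the highest supported version
  # zfs upgrade -a             # upgrade all filesystems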
2010 Oct 23
2
No ACL inheritance with aclmode=passthrough in onnv-134
Hi list, while preparing for the changed ACL/mode_t mapping semantics coming with onnv-147 [1], I discovered that in onnv-134 on my system ACLs are not inherited when aclmode is set to passthrough for the filesystem. This very much puzzles me. Example: $ uname -a SunOS os 5.11 snv_134 i86pc i386 i86pc $ pwd /Volumes/ACLs/dir1 $ zfs list | grep /Volumes rpool/Volumes 7,00G 39,7G 6,84G
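One thing worth checking: inheritance of ACEs is governed by the aclinherit property, while aclmode only controls what chmod(2) does to an existing ACL, so setting aclmode=passthrough by itself does not grant inheritance. A sketch with illustrative names:

  $ zfs set aclinherit=passthrough rpool/Volumes
  $ /usr/bin/chmod A+user:alice:read_data/write_data:file_inherit/dir_inherit:allow dir1
  $ touch dir1/newfile && /usr/bin/ls -v dir1/newfile    # inherited ACEs should now appear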
2010 Jun 30
1
zfs rpool corrupt?????
Hello, Has anyone encountered the following error message, running Solaris 10 u8 in an LDom? bash-3.00# devfsadm devfsadm: write failed for /dev/.devfsadm_dev.lock: Bad exchange descriptor bash-3.00# zpool status -v rpool pool: rpool state: DEGRADED status: One or more devices has experienced an error resulting in data corruption. Applications may be affected. action: Restore the file in
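The usual sequence when zpool status reports corrupted files is to identify them, restore them from backup, then clear the errors and scrub to confirm nothing else is damaged:

  # zpool status -v rpool    # lists the damaged files by path
  (restore the listed files from backup or install media)
  # zpool clear rpool
  # zpool scrub rpool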
2009 Jun 26
4
Backing up OS drive?
I have one drive that I'm running OpenSolaris on and a 6-drive RAIDZ. Unfortunately I don't have another drive to mirror the OS drive, so I was wondering what the best way to back up that drive is. Can I mirror it onto a file on the RAIDZ, or will this cause problems before the array is loaded when booting? What about zfs send and recv to the RAIDZ?
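Sending the root pool into a dataset on the RAIDZ pool is one common answer; receiving with -u keeps the copies from mounting over the live filesystems. 'tank' is an illustrative pool name:

  # zfs create tank/rpool-backup
  # zfs snapshot -r rpool@backup
  # zfs send -R rpool@backup | zfs receive -d -u tank/rpool-backup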
2011 Dec 15
31
Can I create a mirror for a root rpool?
On Solaris 10, if I install using ZFS root on only one drive, is there a way to add another drive as a mirror later? Sorry if this was discussed already. I searched the archives and couldn't find the answer. Thank you.
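The procedure documented for Solaris 10 ZFS root is to give the second disk a matching SMI label, attach it, and install boot blocks on it; device names below are illustrative:

  # prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t1d0s2    # copy the label
  # zpool attach rpool c0t0d0s0 c0t1d0s0
  # installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t1d0s0    # SPARC
  # installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0                   # x86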
2009 Dec 04
2
USB sticks show on one set of devices in zpool, different devices in format
Hello, I had snv_111b running for a while on a HP DL160G5. With two 16GB USB sticks comprising the mirrored rpool for boot. And four 1TB drives comprising another pool, pool1, for data. So that's been working just fine for a few months. Yesterday I got it into my mind to upgrade the OS to the latest, which was then snv_127. That worked, and all was well. Also did an upgrade to the
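When device names drift like this, a non-root pool can usually be re-pointed by exporting and importing with an explicit device directory, and stale /dev links can be cleaned up online; a sketch:

  # zpool export pool1 && zpool import -d /dev/dsk pool1    # rescan devices for the data pool
  # devfsadm -Cv                                            # remove stale device links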
2010 Jul 12
3
Need ZFS master!
Hello all. I am new... very new to OpenSolaris, and I am having an issue and have no idea what is going wrong. So I have 5 drives in my machine, all 500 GB. I installed OpenSolaris on the first drive and rebooted. Now what I want to do is add a second drive so they are mirrored. How does one do this? I am getting nowhere and need some help.
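On x86 OpenSolaris the short answer is an attach followed by installing GRUB on the second disk; device names are illustrative, and the new disk needs an SMI label with the space in slice 0:

  # zpool attach -f rpool c7d0s0 c8d0s0
  # installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c8d0s0
  # zpool status rpool    # wait for the resilver before relying on the mirror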
2010 May 20
2
reconstruct recovery of rpool zpool and zfs file system with bad sectors
Folks, I posted this question on (OpenSolaris - Help) without any replies http://opensolaris.org/jive/thread.jspa?threadID=129436&tstart=0 and am re-posting here in the hope someone can help... I have updated the wording a little too (in an attempt to clarify). I currently use OpenSolaris on a Toshiba M10 laptop. One morning the system wouldn't boot OpenSolaris 2009.06 (it was simply
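With failing media, one cautious approach is to image the disk onto healthy storage first and attempt the import against the copy; the paths are illustrative, and the read-only import option requires a newer live image than 2009.06:

  # dd if=/dev/rdsk/c0t0d0s0 of=/backup/rpool.img bs=512 conv=noerror,sync
  # zpool import -d /backup -o readonly=on -R /a rpool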
2009 Aug 23
23
incremental backup with zfs to file
FULL backup to a file: zfs snapshot -r rpool@0908 ; zfs send -Rv rpool@0908 > /net/remote/rpool/snaps/rpool.0908 INCREMENTAL backup to a file: zfs snapshot -r rpool@090822 ; zfs send -Rv -i rpool@0908 rpool@090822 > /net/remote/rpool/snaps/rpool.090822 As I understand it, the latter gives a file with changes between 0908 and 090822. Is this correct? How do I restore those files? I know
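To restore, feed the stream files back to zfs receive in the order they were taken, the full stream first and then each incremental; a hedged sketch against a scratch pool named rpool2:

  # zfs receive -Fdu rpool2 < /net/remote/rpool/snaps/rpool.0908      # full stream first
  # zfs receive -Fdu rpool2 < /net/remote/rpool/snaps/rpool.090822    # then the incremental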