Displaying 20 results from an estimated 3000 matches similar to: "Strange send failure"
2012 Dec 12
20
Solaris 11 System Reboots Continuously Because of a ZFS-Related Panic (7191375)
I've hit this bug on four of my Solaris 11 servers. Looking for anyone else
who has seen it, as well as comments/speculation on cause.
This bug is pretty bad. If you are lucky you can import the pool read-only
and migrate it elsewhere.
I've also tried setting zfs:zfs_recover=1,aok=1 with varying results.
http://docs.oracle.com/cd/E26502_01/html/E28978/gmkgj.html#scrolltoc
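For reference, the read-only import mentioned above would look something like
the following (pool name and target host illustrative; note that a read-only
pool cannot take new snapshots, so the send must use one that already exists):
# zpool import -o readonly=on -R /a tank
# zfs send -R tank@last-good | ssh otherhost zfs receive -d rescuepool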
2012 Dec 21
4
zfs receive options (was S11 vs illumos zfs compatibility)
> From: zfs-discuss-bounces at opensolaris.org [mailto:zfs-discuss-
> bounces at opensolaris.org] On Behalf Of bob netherton
>
> You can, with recv, override any property in the sending stream that can
> be set from the command line (i.e., a writable one).
>
> # zfs send repo/support@cpu-0412 | zfs recv -o version=4 repo/test
> cannot receive: cannot override received
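The quoted attempt appears to fail because version is not an ordinary writable
property at receive time; with a plain settable property the override works
along these lines (dataset names from the quote, property illustrative):
# zfs send repo/support@cpu-0412 | zfs recv -o compression=on repo/test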
2010 Jul 28
4
zfs allow does not work for rpool
I am trying to give a general user permissions to create zfs filesystems in the rpool.
zpool set delegation=on rpool
zfs allow <user> create rpool
both run without any issues.
zfs allow rpool reports the user does have create permissions.
zfs create rpool/test
cannot create rpool/test: permission denied.
Can you not allow to the rpool?
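If it helps anyone searching: creating a filesystem also mounts it, so the user
generally needs the mount permission in addition to create. A hedged sketch,
with an illustrative username:
# zpool set delegation=on rpool
# zfs allow jsmith create,mount rpool
# zfs allow rpool    (verify what was granted)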
2011 Apr 08
11
How to rename rpool. Is that recommended ?
Hello,
I have a situation where a host, which is booted off its 'rpool', needs
to temporarily import the 'rpool' of another host, edit some files in
it, and export the pool back retaining its original name 'rpool'. Can
this be done?
Here is what I am trying to do:
# zpool import -R /a rpool temp-rpool
# zfs set mountpoint=/mnt
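A sketch of the intended round trip; note that importing under a new name
renames the pool persistently, so it must be imported once more under its old
name afterwards, and that second import can only happen where no pool named
rpool is already active, which is the crux of the question:
# zpool import -R /a rpool temp-rpool    (import the foreign rpool as temp-rpool)
  ... edit files under /a ...
# zpool export temp-rpool
# zpool import -R /a temp-rpool rpool    (rename it back)
# zpool export rpool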
2010 Apr 26
2
How to delegate zfs snapshot destroy to users?
Hi,
I'm trying to let zfs users create and destroy snapshots in their zfs
filesystems.
So rpool/vm has the permissions:
osol137 19:07 ~: zfs allow rpool/vm
---- Permissions on rpool/vm -----------------------------------------
Permission sets:
@virtual clone,create,destroy,mount,promote,readonly,receive,rename,rollback,send,share,snapshot,userprop
Create time permissions:
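For comparison, a minimal direct grant that lets a user snapshot and clean up
after themselves (username illustrative); destroying a snapshot needs destroy
and, because it involves an unmount, mount as well:
# zfs allow alice snapshot,destroy,mount rpool/vm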
2011 Feb 18
2
time-sliderd doesn't remove snapshots
In the last few days my performance has gone to hell. I'm running:
# uname -a
SunOS nissan 5.11 snv_150 i86pc i386 i86pc
(I'll upgrade as soon as the desktop hang bug is fixed.)
The performance problems seem to be due to excessive I/O on the main
disk/pool.
The only thing I've changed recently is that I've created and destroyed
a snapshot, and I used
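A quick way to check whether leftover auto-snapshots are eating the pool, and
whether the service itself is healthy:
# zfs list -t snapshot -o name,used -s used | tail -20
# svcs -l time-slider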
2011 Jan 28
2
ZFS root clone problem
(for some reason I cannot find my original thread... so I'm reposting it)
I am trying to move my data off of a 40GB 3.5" drive to a 40GB 2.5" drive. This is in a Netra running Solaris 10.
Originally what I did was:
zpool attach -f rpool c0t0d0 c0t2d0.
Then I did an installboot on c0t2d0s0.
Didn't work. I was not able to boot from my second drive (c0t2d0).
I cannot remember
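For a SPARC Netra, the attach generally has to target a slice on an SMI-labeled
disk rather than the whole disk, and the boot block goes on afterwards; a
hedged Solaris 10 sketch using the poster's device names:
# zpool attach -f rpool c0t0d0s0 c0t2d0s0
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t2d0s0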
2010 Oct 23
2
No ACL inheritance with aclmode=passthrough in onnv-134
Hi list,
while preparing for the changed ACL/mode_t mapping semantics coming
with onnv-147 [1], I discovered that in onnv-134 on my system ACLs are
not inherited when aclmode is set to passthrough for the filesystem.
This very much puzzles me. Example:
$ uname -a
SunOS os 5.11 snv_134 i86pc i386 i86pc
$ pwd
/Volumes/ACLs/dir1
$ zfs list | grep /Volumes
rpool/Volumes 7,00G 39,7G 6,84G
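Worth checking in a case like this: aclinherit (not aclmode) is the property
that governs inheritance to newly created files and directories. Something like
the following, using the dataset from the listing above:
# zfs get aclmode,aclinherit rpool/Volumes
# zfs set aclinherit=passthrough rpool/Volumes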
2009 Dec 04
2
USB sticks show on one set of devices in zpool, different devices in format
Hello,
I had snv_111b running for a while on an HP DL160G5, with two 16GB USB sticks comprising the mirrored rpool for boot and four 1TB drives comprising another pool, pool1, for data.
So that's been working just fine for a few months. Yesterday I got it into my head to upgrade the OS to the latest, which then was snv_127. That worked, and all was well. Also did an upgrade to the
2010 Apr 16
1
cannot set property for 'rpool': property 'bootfs' not supported on EFI labeled devices
I am getting the following error; however, as you can see below, this is an SMI
label...
cannot set property for 'rpool': property 'bootfs' not supported on EFI
labeled devices
# zpool get bootfs rpool
NAME PROPERTY VALUE SOURCE
rpool bootfs - default
# zpool set bootfs=rpool/ROOT/s10s_u8wos_08a rpool
cannot set property for 'rpool': property
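When this error persists despite an apparently SMI-labeled slice, one of the
disks backing the pool often still carries an EFI label; relabeling with format
is the usual fix (destructive to existing partitioning, device illustrative):
# format -e    (select the disk, run the label subcommand, choose SMI)
# prtvtoc /dev/rdsk/c0t0d0s2    (sanity check: an SMI-labeled disk prints a VTOC)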
2009 Aug 14
16
What's eating my disk space? Missing snapshots?
Please can someone take a look at the attached file which shows the output on my machine of
zfs list -r -t filesystem,snapshot -o space rpool/export/home/matt
The USEDDS figure of ~2GB is what I would expect, and is the same figure reported by the Disk Usage Analyzer. Where is the remaining 13.8GB USEDSNAP figure coming from? If I total up the list of zfs-auto snapshots it adds up to about 4.8GB,
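Part of the answer, in case it saves someone a search: the USED value of an
individual snapshot counts only space unique to that snapshot, so blocks shared
by two or more snapshots appear in USEDSNAP but in no single snapshot's USED;
that space only becomes visible as snapshots are destroyed. To list them:
# zfs list -r -t snapshot -o name,used,referenced rpool/export/home/matt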
2011 Aug 08
2
rpool recover not using zfs send/receive
Is it possible to recover the rpool with only a tar/star archive of the root filesystem? I have used the zfs send/receive methods and they work without a problem.
What I am trying to do is recreate the rpool and underlying zfs filesystems (rpool/ROOT, rpool/s10_uXXXXXX, rpool/dump, rpool/swap, rpool/export, and rpool/export/home). I then mount the pool at an alternate root and restore the tar
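A rough sketch of that path on Solaris 10, booted from install media (device
name illustrative, archive location hypothetical; dump and swap are volumes,
and the BE's mountpoint must also be set to / before booting from it):
# zpool create -f -R /a rpool c0t0d0s0
# zfs create rpool/ROOT
# zfs create rpool/ROOT/s10_uXXXXXX
# zfs create -V 2G rpool/dump
# zfs create -V 2G rpool/swap
# (cd /a && tar xf /backup/root.tar)
# zpool set bootfs=rpool/ROOT/s10_uXXXXXX rpool
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t0d0s0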
2010 Sep 10
3
zpool upgrade and zfs upgrade behavior on b145
Not sure what the best list to send this to is right now, so I have selected
a few, apologies in advance.
A couple questions. First I have a physical host (call him bob) that was
just installed with b134 a few days ago. I upgraded to b145 using the
instructions on the Illumos wiki yesterday. The pool has been upgraded (27)
and the zfs file systems have been upgraded (5).
chris@bob:~# zpool
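For anyone comparing: the version pair reported (pool 27, filesystem 5) is what
b145 ships, and the relevant commands also print which versions the running
bits support:
# zpool upgrade -v
# zfs upgrade -v
# zpool upgrade -a    (one-way; older bits can no longer import the pool)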
2011 Dec 15
31
Can I create a mirror for a root rpool?
On Solaris 10, if I install using ZFS root on only one drive, is there a way
to add another drive as a mirror later? Sorry if this was discussed
already. I searched the archives and couldn't find the answer. Thank you.
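Yes, this works: attach the second disk to the existing device to form a
mirror, then put a boot block on it. A sketch for Solaris 10 (device names
illustrative; the new disk needs an SMI label with slice 0 covering it):
# zpool attach rpool c0t0d0s0 c0t1d0s0
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0    (x86)
On SPARC, installboot with the zfs bootblk is the equivalent second step.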
2011 Sep 22
4
Beginner Question: Limited conf: file-based storage pools vs. FSs directly on rpool
Hi, everyone!
I have a beginner's question:
I must configure a small file server. It only has two disk drives, and they
are necessarily destined to be used in a mirrored, hot-spare configuration.
The OS is installed and working, and rpool is mirrored on the two disks.
The question is: I want to create some ZFS file systems for sharing them via
CIFS. But given my limited configuration:
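Given the constraints above, plain filesystems under the mirrored rpool are
usually the simpler choice over file-backed pools; a minimal sketch of a shared
one (dataset name illustrative, SMB service assumed enabled):
# zfs create -o sharesmb=on rpool/export/public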
2011 Jul 26
2
recover zpool with a new installation
Hi all,
I lost my storage because rpool doesn't boot. I tried to recover, but
OpenSolaris says to "destroy and re-create".
My rpool is installed on a flash drive, and my pool (with my data) is on
other disks.
My question is: is it possible to reinstall OpenSolaris on a new flash drive,
without touching my pool of disks, and then recover that pool?
Thanks.
Regards,
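In general, yes: a data pool on separate disks survives reinstalling the OS
pool. After installing to the new flash drive, the data pool can be found and
brought in (pool name illustrative; -f because it was last used by the old
installation):
# zpool import         (scans attached disks and lists importable pools)
# zpool import -f tank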
2010 May 28
21
expand zfs for OpenSolaris running inside vm
Hello, all.
I have constrained disk space (only 8GB) while running the OS inside a VM. Now I
want to add more. It is easy to add on the VM side, but how can I grow the filesystem in the OS?
I cannot use autoexpand because it isn't implemented in my system:
$ uname -a
SunOS sopen 5.11 snv_111b i86pc i386 i86pc
If it were 171 it would be great, right?
Doing the following:
o added new virtual HDD (it becomes
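On a build without autoexpand, the usual workaround is to attach a larger
virtual disk as a mirror, let it resilver, and detach the small one; the extra
space typically appears after an export/import or a reboot (device names
illustrative):
# zpool attach rpool c0t0d0s0 c0t1d0s0
# zpool status rpool    (wait for the resilver to finish)
# zpool detach rpool c0t0d0s0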
2010 Sep 29
2
rpool spare
Using ZFS v22, is it possible to add a hot spare to rpool?
Thanks
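The generic syntax is below (device illustrative); whether the root pool will
accept it depends on the release, since root pools have historically carried
extra vdev restrictions:
# zpool add rpool spare c0t2d0s0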
2010 Mar 19
3
zpool I/O error
Hi all,
I'm trying to delete a zpool and when I do, I get this error:
# zpool destroy oradata_fs1
cannot open 'oradata_fs1': I/O error
#
The pools I have on this box look like this:
# zpool list
NAME         SIZE   USED   AVAIL  CAP  HEALTH    ALTROOT
oradata_fs1  532G   119K   532G   0%   DEGRADED  -
rpool        136G   28.6G  107G   21%  ONLINE    -
#
Why
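When destroy itself dies with an I/O error on a DEGRADED pool, the device state
is the first thing to look at, and a forced destroy can be attempted afterwards:
# zpool status -v oradata_fs1
# zpool destroy -f oradata_fs1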