similar to: rpool spare

Displaying 20 results from an estimated 2000 matches similar to: "rpool spare"

2010 Apr 16
1
cannot set property for 'rpool': property 'bootfs' not supported on EFI labeled devices
I am getting the following error; however, as you can see below, this is an SMI label... cannot set property for 'rpool': property 'bootfs' not supported on EFI labeled devices # zpool get bootfs rpool NAME PROPERTY VALUE SOURCE rpool bootfs - default # zpool set bootfs=rpool/ROOT/s10s_u8wos_08a rpool cannot set property for 'rpool': property
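For context, bootfs historically required the pool's disks to carry an SMI (VTOC) label rather than EFI. A minimal sketch for checking and relabeling, assuming a hypothetical disk c1t0d0 (relabeling destroys the existing label and any data on it):

  # prtvtoc /dev/rdsk/c1t0d0s2     (succeeds and prints a VTOC table on an SMI-labeled disk)
  # format -e c1t0d0               (expert mode; choose "label", then "SMI")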
2011 Dec 15
31
Can I create a mirror for a root rpool?
On Solaris 10, if I install using ZFS root on only one drive, is there a way to add another drive as a mirror later? Sorry if this was discussed already. I searched the archives and couldn't find the answer. Thank you.
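The usual answer is yes, via zpool attach. A hedged sketch for Solaris 10 x86, with c0t0d0s0 as the existing root disk and c0t1d0s0 as a hypothetical new one:

  # prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t1d0s2   (copy the slice table to the new disk)
  # zpool attach rpool c0t0d0s0 c0t1d0s0                           (attach as a mirror, not add)
  # installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0

On SPARC the last step uses installboot instead of installgrub.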
2011 Apr 08
11
How to rename rpool. Is that recommended?
Hello, I have a situation where a host, which is booted off its 'rpool', needs to temporarily import the 'rpool' of another host, edit some files in it, and export the pool back, retaining its original name 'rpool'. Can this be done? Here is what I am trying to do: # zpool import -R /a rpool temp-rpool # zfs set mountpoint=/mnt
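A sketch of the flow being attempted. Note that a pool name only has to be unique among pools imported on one host, so the rename back to 'rpool' happens at the next import, e.g. on the original host:

  # zpool import -R /a rpool temp-rpool    (import the other host's rpool under a temporary name)
  # zpool export temp-rpool                (after editing files under /a)
  # zpool import temp-rpool rpool          (back on the original host: rename at import time)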
2010 Jul 12
3
Need ZFS master!
Hello all. I am new... very new to OpenSolaris, and I am having an issue and have no idea what is going wrong. I have 5 drives in my machine, all 500GB. I installed OpenSolaris on the first drive and rebooted. Now what I want to do is add a second drive so they are mirrored. How does one do this? I am getting nowhere and need some help.
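The command being asked for is zpool attach (device names below are placeholders; a root pool's new disk also needs boot blocks, as in the mirrored-rpool entry above):

  # zpool attach rpool c0d0s0 c1d0s0
  # zpool status rpool                 (wait for the resilver to finish before trusting the mirror)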
2010 Feb 16
2
ZFS Mount Errors
Why would I get the following error: Reading ZFS config: done. Mounting ZFS filesystems: (1/6) cannot mount '/data/apache': directory is not empty (6/6) svc:/system/filesystem/local:default: WARNING: /usr/sbin/zfs mount -a failed: exit status 1 And yes, there is data in the /data/apache file system... This was created during the jumpstart process. Thanks
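A common recovery, assuming stray files landed in /data/apache before the dataset mounted:

  # mv /data/apache /data/apache.stray    (move the non-empty directory aside)
  # zfs mount data/apache                 (zfs recreates the mountpoint and mounts cleanly)

Alternatively, zfs mount -O overlay-mounts on top of the non-empty directory, hiding its contents.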
2012 Dec 21
4
zfs receive options (was S11 vs illumos zfs compatibility)
> From: zfs-discuss-bounces at opensolaris.org [mailto:zfs-discuss-bounces at opensolaris.org] On Behalf Of bob netherton > > You can, with recv, override any property in the sending stream that can be set from the command line (i.e., a writable). > > # zfs send repo/support@cpu-0412 | zfs recv -o version=4 repo/test > cannot receive: cannot override received
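The failure above is specific to version, which cannot be overridden in a receive. A sketch of the working case with an ordinary writable property, assuming a zfs recv that supports -o (as on Solaris 11):

  # zfs send repo/support@cpu-0412 | zfs recv -o compression=on repo/test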
2010 Apr 29
39
Best practice for full system backup - equivalent of ufsdump/ufsrestore
I'm looking for a way to back up my entire system, the rpool zfs pool, to an external HDD so that it can be recovered in full if the internal HDD fails. Previously with Solaris 10 using UFS I would use ufsdump and ufsrestore, which worked so well that I was very confident with it. ZFS doesn't have an exact replacement for this, so I need to find a best practice to replace it.
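A minimal sketch of the send/receive replacement workflow, assuming a second pool named backup on the external HDD (the pool name is a placeholder):

  # zfs snapshot -r rpool@backup                           (recursive snapshot of the whole pool)
  # zfs send -R rpool@backup | zfs recv -Fd backup/rpool   (-R preserves properties and descendant datasets)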
2010 Jul 28
4
zfs allow does not work for rpool
I am trying to give a general user permission to create zfs filesystems in the rpool. zpool set delegation=on rpool zfs allow <user> create rpool Both run without any issues. zfs allow rpool reports the user does have create permission. zfs create rpool/test cannot create rpool/test: permission denied. Can you not allow to the rpool?
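The usual culprit is that create alone is not enough: creating a filesystem also mounts it, so the user needs mount permission too. A sketch (someuser is a placeholder):

  # zpool set delegation=on rpool
  # zfs allow someuser create,mount rpool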
2010 Mar 19
3
zpool I/O error
Hi all, I'm trying to delete a zpool and when I do, I get this error: # zpool destroy oradata_fs1 cannot open 'oradata_fs1': I/O error # The pools I have on this box look like this: # zpool list NAME SIZE USED AVAIL CAP HEALTH ALTROOT oradata_fs1 532G 119K 532G 0% DEGRADED - rpool 136G 28.6G 107G 21% ONLINE - # Why
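A hedged sketch of the usual next steps for a pool that cannot be opened cleanly:

  # zpool status -v oradata_fs1     (identify which devices are failing; the pool shows DEGRADED)
  # zpool destroy -f oradata_fs1    (-f forces destruction even when the pool cannot be opened)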
2009 Aug 04
7
Sol10u7: can't "zpool remove" missing hot spare
I'm using Solaris 10u6 updated to u7 via patches, and I have a pool with a mirrored pair and a (shared) hot spare. We reconfigured disks a while ago and now the controller is c4 instead of c2. The hot spare was originally on c2, and apparently on rebooting it didn't get found. So, I looked up what the new name for the hot spare was, then added it to the pool with "zpool
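On many builds, vdev GUIDs are accepted wherever a device name is expected, which helps when the device path has changed. A sketch (tank and the GUID are placeholders; the GUID comes from zpool status output):

  # zpool status tank                      (a missing spare is listed by numeric GUID)
  # zpool remove tank 6124852681380529235  (remove the spare by GUID instead of by path)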
2010 Oct 23
2
No ACL inheritance with aclmode=passthrough in onnv-134
Hi list, while preparing for the changed ACL/mode_t mapping semantics coming with onnv-147 [1], I discovered that in onnv-134 on my system ACLs are not inherited when aclmode is set to passthrough for the filesystem. This very much puzzles me. Example: $ uname -a SunOS os 5.11 snv_134 i86pc i386 i86pc $ pwd /Volumes/ACLs/dir1 $ zfs list | grep /Volumes rpool/Volumes 7,00G 39,7G 6,84G
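For comparison, inheritance is carried by flags on the ACEs themselves; aclmode only governs how chmod interacts with the ACL. A sketch using Solaris ACL syntax (user alice is a placeholder):

  $ chmod A+user:alice:read_data/write_data:file_inherit/dir_inherit:allow dir1
  $ ls -V dir1      (verify the ACE carries the file_inherit/dir_inherit flags)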
2011 Jul 26
2
recover zpool with a new installation
Hi all, I lost my storage because rpool doesn't boot. I tried to recover, but OpenSolaris says to "destroy and re-create". My rpool is installed on a flash drive, and my pool (with my data) is on other disks. My question is: is it possible to reinstall OpenSolaris on a new flash drive, without touching my pool of disks, and then recover that pool? Thanks. Regards,
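The usual answer is yes: data pools survive a reinstall and can be imported afterwards. A sketch (tank is a placeholder pool name):

  # zpool import           (scan attached disks for importable pools)
  # zpool import -f tank   (-f is needed because the pool was never cleanly exported)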
2010 May 28
21
expand zfs for OpenSolaris running inside vm
hello, all I have constrained disk space (only 8GB) running the OS inside a VM. Now I want to add more. It is easy to add on the VM side, but how can I grow the filesystem in the OS? I cannot use autoexpand because it isn't implemented in my system: $ uname -a SunOS sopen 5.11 snv_111b i86pc i386 i86pc If it were snv_171 it would be great, right? Doing the following: o added a new virtual HDD (it becomes
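For reference, on builds that do have autoexpand (the poster's snv_111b predates it), growing into a larger virtual disk looks like this, with c7d0 as a placeholder device:

  # zpool set autoexpand=on rpool
  # zpool online -e rpool c7d0     (-e expands the vdev to use the grown device)

On older builds a common workaround was to attach a larger virtual disk as a mirror, let it resilver, and detach the small one.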
2010 Jun 30
1
zfs rpool corrupt?????
Hello, Has anyone encountered the following error message, running Solaris 10 u8 in an LDom. bash-3.00# devfsadm devfsadm: write failed for /dev/.devfsadm_dev.lock: Bad exchange descriptor bash-3.00# zpool status -v rpool pool: rpool state: DEGRADED status: One or more devices has experienced an error resulting in data corruption. Applications may be affected. action: Restore the file in
2009 Oct 13
14
How to resize ZFS partition or add a new one?
Hi, I have the following partitions on my laptop, an Inspiron 6000, from fdisk:
  Partition  Status  Type      Start  End   Length  %
  1                  Other OS      0    11      12   0
  2                  EXT LBA      12  2561    2550  26
  3          Active  Solaris2   2562  9728    7167  74
The first one is for Dell utilities. The second one is NTFS and the third is ZFS. I am currently using OpenSolaris 2009.06
2010 Mar 13
3
When to Scrub... ZFS That Is
When would it be necessary to scrub a ZFS filesystem? We have many "rpool" and "datapool" pools, and a NAS 7130. Would you schedule monthly scrubs at off-peak hours, or is it really necessary? Thanks
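A common pattern is a monthly scrub from root's crontab, staggered per pool (the times here are illustrative):

  0 3 1 * * /usr/sbin/zpool scrub rpool
  0 3 15 * * /usr/sbin/zpool scrub datapool

zpool status shows scrub progress and the result of the last scrub.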
2009 Dec 06
20
Accidentally added disk instead of attaching
Hi, I wanted to add a disk to the tank pool to create a mirror. I accidentally used 'zpool add' instead of 'zpool attach', and now the disk is added. Is there a way to remove the disk without losing data? Or maybe change it to a mirror? Thanks, Martijn
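For the record, the intended command makes a mirror of an existing device (names are placeholders):

  # zpool attach tank c0t0d0 c0t1d0    (mirror c0t0d0 onto c0t1d0)

Once a data disk has been added as a new top-level vdev, ZFS of that era could not remove it; the usual remedy was to back up, destroy, and recreate the pool.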
2009 Oct 08
2
convert raidz from OS X
I am converting a 4-disk raidz from OS X to OpenSolaris, and I want to keep the data intact. I want zfs to have access to the full disk instead of a slice, i.e. c8d0 instead of c8d0s1. I wanted to do this one disk at a time and let it resilver. What is the proper way to do this? I tried, I believe from memory: zpool replace -f rpool c8d1s1 c8d1 but it didn't let me do that. Then I
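The in-place swap fails because the whole disk overlaps the slice that is still active in the pool. A hedged sketch using a spare disk instead (pool and device names hypothetical), one member at a time:

  # zpool replace tank c8d1s1 c9d0    (resilver that member onto a whole-disk vdev on another disk)
  # zpool status tank                 (wait for the resilver before touching the next member)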
2011 Feb 18
2
time-sliderd doesn't remove snapshots
In the last few days my performance has gone to hell. I'm running: # uname -a SunOS nissan 5.11 snv_150 i86pc i386 i86pc (I'll upgrade as soon as the desktop hang bug is fixed.) The performance problems seem to be due to excessive I/O on the main disk/pool. The only things I've changed recently are that I've created and destroyed a snapshot, and I used
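To check whether accumulated time-slider snapshots are eating the pool, something like this (the snapshot name below is a placeholder; time-slider snapshots carry the zfs-auto-snap prefix):

  $ zfs list -t snapshot -o name,used | grep zfs-auto-snap
  # zfs destroy rpool/export/home@zfs-auto-snap_daily-2011-02-01-00h00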
2009 Dec 04
2
USB sticks show on one set of devices in zpool, different devices in format
Hello, I had snv_111b running for a while on an HP DL160G5, with two 16GB USB sticks comprising the mirrored rpool for boot, and four 1TB drives comprising another pool, pool1, for data. That had been working just fine for a few months. Yesterday I got it into my mind to upgrade the OS to the latest, which was then snv_127. That worked, and all was well. I also did an upgrade to the