similar to: Need ZFS master!

Displaying 20 results from an estimated 1000 matches similar to: "Need ZFS master!"

2010 May 28
21
expand zfs for OpenSolaris running inside vm
hello all, I have constrained disk space (only 8 GB) while running the OS inside a VM. Now I want to add more. It is easy to add for the VM, but how can I grow the filesystem in the OS? I cannot use autoexpand because it isn't implemented in my system:
$ uname -a
SunOS sopen 5.11 snv_111b i86pc i386 i86pc
If it were 171 it would be great, right? Doing the following: o added a new virtual HDD (it becomes
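A hedged sketch of the usual expansion path once the new virtual disk is visible. Pool and device names are hypothetical, and both the autoexpand property and zpool online -e assume a build newer than the poster's snv_111b:

    # let the pool grow automatically when its devices get bigger
    zpool set autoexpand=on rpool

    # or trigger expansion of a single device by hand
    zpool online -e rpool c0t0d0s0

    # verify the extra space arrived
    zpool list rpool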
2010 Feb 18
2
Killing an EFI label
Since this seems to be a ubiquitous problem for people running ZFS, even though it's really a general Solaris admin issue, I'm guessing the expertise is actually here, so I'm asking here. I found lots of online pages telling how to do it. None of them were correct or complete, I think. I seem to have accomplished it in a somewhat hackish fashion, possibly not
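One common way to do this, sketched under the assumption of a disk whose whole-disk EFI label needs replacing with an SMI one (disk selection is interactive; nothing here is specific to the poster's setup):

    # format's expert mode can write either label type
    format -e
    # select the disk, then at the format> prompt run "label"
    # and choose "0. SMI Label" instead of "1. EFI Label";
    # any slice layout on the disk will need recreating afterwards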
2009 Oct 13
14
How to resize ZFS partition or add a new one?
Hi, I have the following partitions on my laptop, an Inspiron 6000, from fdisk:
  1          Other OS   0     11    12    0
  2          EXT LBA    12    2561  2550  26
  3  Active  Solaris2   2562  9728  7167  74
The first one is for Dell utilities, the second is NTFS, and the third is ZFS. I am currently using OpenSolaris 2009.06
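ZFS pools cannot shrink, so the usual route is to grow the Solaris2 fdisk partition (or add a disk) and let the pool expand into it. A minimal sketch, assuming space was freed after the last cylinder and the slice under the partition was already enlarged in format (device names hypothetical):

    # after enlarging the fdisk partition and the slice inside it
    zpool online -e rpool c0t0d0s0
    zpool list rpool    # confirm the new capacity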
2009 Sep 02
16
Archiving and Restoring Snapshots
I just received a special offer from Sun (marketing...) promising that I will learn "How to use ZFS snapshots for backup and restore purposes." The relevant doc is at https://www.sun.com/offers/docs/zfs_snapshots.pdf It says:
=== Begin quote ===
Archiving and Restoring Snapshots
Another use of snapshots is to create archives for long-term storage elsewhere. In the following
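The technique the doc describes boils down to redirecting a send stream into a file. A minimal sketch with hypothetical dataset names; note that a stored stream has no internal redundancy, so a corrupted archive file generally cannot be received at all:

    # archive a snapshot to a file for long-term storage
    zfs snapshot tank/home@archive
    zfs send tank/home@archive > /backup/home-archive.zfs

    # restore it later into a new dataset
    zfs receive tank/home_restored < /backup/home-archive.zfs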
2010 Apr 16
1
cannot set property for ''rpool'': property ''bootfs'' not supported on EFI labeled devices
I am getting the following error, however as you can see below this is an SMI label...
cannot set property for 'rpool': property 'bootfs' not supported on EFI labeled devices
# zpool get bootfs rpool
NAME   PROPERTY  VALUE  SOURCE
rpool  bootfs    -      default
# zpool set bootfs=rpool/ROOT/s10s_u8wos_08a rpool
cannot set property for 'rpool': property
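A quick way to double-check which label a disk really carries (disk name hypothetical): EFI labels show a slice 8 and sector-based sizing, while SMI labels report cylinder geometry.

    # dump the label; "First Sector"/"Sector Count" and a slice 8
    # indicate EFI, cylinder counts indicate SMI
    prtvtoc /dev/rdsk/c0t0d0s2

    # ZFS writes an EFI label when handed a whole disk (c0t0d0);
    # a bootable rpool has to live on an SMI-labeled slice (c0t0d0s0)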
2009 Jun 08
4
[caiman-discuss] Can not delete swap on AI sparc
Hi Richard, Richard Robinson wrote:
> I should add that I also used truss and saw the same ENOMEM error. I am on a 4Gb system with swap -l reporting
>
> swapfile                  dev    swaplo  blocks   free
> /dev/zvol/dsk/rpool/swap  181,1  8       4194296  4194296
>
> and I was trying to follow the directions for increasing swap here:
>
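For reference, the usual resize sequence for a zvol-backed swap device; the size is hypothetical. swap -d needs enough free memory to absorb any in-use swap pages, which is one plausible source of the ENOMEM described here:

    # remove the swap device, grow the zvol, then add it back
    swap -d /dev/zvol/dsk/rpool/swap
    zfs set volsize=4G rpool/swap
    swap -a /dev/zvol/dsk/rpool/swap
    swap -l    # confirm the new size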
2009 Aug 05
2
?: SMI vs. EFI label and a disk's write cache
For Solaris 10 5/09... There are supposed to be performance improvements if you create a zpool on a full disk, such as one with an EFI label. Does the same apply if the full disk is used with an SMI label, which is required to boot? I am trying to determine the trade-off, if any, of having a single rpool on cXtYd0s2, if I can even do that, and improved performance compared to having two
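The full-disk performance advice comes from ZFS enabling the disk's write cache when it owns the whole device. Whether the cache actually got enabled can be inspected from format's expert mode; this is a sketch and the exact menu layout can vary by driver:

    # format -e exposes a cache submenu on most SATA/SCSI disks
    format -e
    # select the disk, then: cache -> write_cache -> display
    # ("enable" turns it on by hand, at your own risk on an SMI boot disk)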
2011 Dec 15
31
Can I create a mirror for a root rpool?
On Solaris 10, if I install using ZFS root on only one drive, is there a way to add another drive as a mirror later? Sorry if this was discussed already; I searched the archives and couldn't find the answer. Thank you.
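zpool attach is the standard mechanism for this. A minimal sketch with hypothetical slice names; the new slice needs an SMI label, must be at least as large as the original, and boot blocks have to be installed by hand:

    # turn the single-disk rpool into a two-way mirror
    zpool attach rpool c0t0d0s0 c0t1d0s0

    # once resilvering finishes, make the second disk bootable
    installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0   # x86
    installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk \
        /dev/rdsk/c0t1d0s0                                               # SPARC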
2009 Jun 26
4
Backing up OS drive?
I have one drive that I'm running OpenSolaris on and a 6-drive RAIDZ. Unfortunately I don't have another drive to mirror the OS drive, so I was wondering what the best way to back up that drive is. Can I mirror it onto a file on the RAIDZ, or will this cause problems before the array is loaded when booting? What about zfs send and recv to the RAIDZ?
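The send/recv route the poster mentions is straightforward to sketch; pool names are hypothetical, with tank standing in for the 6-drive RAIDZ:

    # keep a recursive copy of the root pool on the data pool
    zfs snapshot -r rpool@backup
    zfs send -R rpool@backup | zfs receive -d tank/rpool_backup

    # later runs can send only the changes since the last snapshot
    zfs snapshot -r rpool@backup2
    zfs send -R -i rpool@backup rpool@backup2 | zfs receive -d tank/rpool_backup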
2010 Apr 29
39
Best practice for full system backup - equivalent of ufsdump/ufsrestore
I'm looking for a way to back up my entire system, the rpool zfs pool, to an external HDD so that it can be recovered in full if the internal HDD fails. Previously, with Solaris 10 using UFS, I would use ufsdump and ufsrestore, which worked so well that I was very confident in it. ZFS doesn't have an exact replacement for this, so I need to find a best practice to replace it.
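A hedged sketch of the closest ZFS equivalent: a recursive replication stream written to a file on the external disk. Paths are hypothetical, and unlike ufsrestore the stream must be received into a pool, with boot blocks reinstalled, to get a bootable system back:

    # capture the whole root pool, properties included
    zfs snapshot -r rpool@full
    zfs send -R rpool@full > /media/external/rpool-full.zfs

    # recovery, from a rescue boot, onto a freshly created pool:
    # zpool create rpool c0t0d0s0
    # zfs receive -F -d rpool < /media/external/rpool-full.zfs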
2011 Oct 12
33
weird bug with Seagate 3TB USB3 drive
Banging my head against a Seagate 3TB USB3 drive. Its marketing name is: Seagate Expansion 3 TB USB 3.0 Desktop External Hard Drive STAY3000102
format(1M) shows it identifying itself as: Seagate-External-SG11-2.73TB
Under both Solaris 10 and Solaris 11x, I receive the evil message:
| I/O request is not aligned with 4096 disk sector size.
| It is handled through Read Modify Write but the performance
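That warning means I/O is being issued on 512-byte boundaries against a drive with 4096-byte sectors. A sketch of checking both sides of the mismatch; the pool name is hypothetical, and this assumes a build whose zdb dumps the vdev config (ashift=12 means the pool aligns writes to 4 KiB):

    # what the pool thinks its minimum block alignment is
    zdb -C tank | grep ashift

    # what the label on the drive reports
    prtvtoc /dev/rdsk/c5t0d0s0   # hypothetical device name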
2009 Feb 02
8
ZFS core contributor nominations
The time has come to review the current Contributor and Core contributor grants for ZFS. Since all of the ZFS core contributors grants are set to expire on 02-24-2009 we need to renew the members that are still contributing at core contributor levels. We should also add some new members to both Contributor and Core contributor levels. First the current list of Core contributors: Bill
2010 Oct 04
8
Can I "upgrade" a striped pool of vdevs to mirrored vdevs?
Hi, once I created a zpool of single vdevs, not using mirroring of any kind. Now I wonder if it's possible to add vdevs and mirror the currently existing ones. Thanks, budy
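Each existing single-disk vdev can be converted in place with zpool attach. A minimal sketch with hypothetical disk names, one attach per existing vdev:

    # attach a partner to each striped disk; each pair resilvers
    # into a two-way mirror while the pool stays online
    zpool attach tank c0t1d0 c0t4d0
    zpool attach tank c0t2d0 c0t5d0
    zpool attach tank c0t3d0 c0t6d0
    zpool status tank    # watch resilver progress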
2012 Jul 25
8
online increase of zfs after LUN increase ?
Hello, there is a feature of ZFS (autoexpand, or zpool online -e) that lets it consume an increased LUN immediately and grow the zpool. That would be a very useful (vital) feature in an enterprise environment. But when I tried to use it, it did not work: the LUN expanded and is visible in format, but the zpool did not increase. I found a bug SUNBUG:6430818 (Solaris Does Not Automatically
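A sketch of the sequence that has to succeed end-to-end for this to work; names are hypothetical. One frequent catch is that the EFI label on the LUN still records the old capacity, so the device must be relabeled before ZFS can see the extra space:

    # check the knob is actually on
    zpool get autoexpand tank

    # expand one device by hand after the LUN grew
    zpool online -e tank c4t0d0

    # if the size still does not change, rewrite the label first:
    # format -e -> select the LUN -> type -> 0. Auto configure -> label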
2012 Dec 12
20
Solaris 11 System Reboots Continuously Because of a ZFS-Related Panic (7191375)
I've hit this bug on four of my Solaris 11 servers. Looking for anyone else who has seen it, as well as comments/speculation on cause. This bug is pretty bad. If you are lucky you can import the pool read-only and migrate it elsewhere. I've also tried setting zfs:zfs_recover=1,aok=1 with varying results. http://docs.oracle.com/cd/E26502_01/html/E28978/gmkgj.html#scrolltoc
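The read-only escape hatch the poster describes, sketched with hypothetical names. A read-only pool cannot take new snapshots, so migrating the data this way assumes a snapshot from before the panic already exists:

    # import without ever writing to the damaged pool
    zpool import -o readonly=on tank

    # stream existing snapshots to a healthy pool
    zfs send -R tank@lastgood | zfs receive -d safepool/rescued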
2010 Jan 19
8
Panic running a scrub
This is probably unreproducible, but I just got a panic whilst scrubbing a simple mirrored pool on SXCE snv_124. Evidently one of the disks went offline for some reason and shortly thereafter the panic happened. I have the dump and the /var/adm/messages containing the trace. Is there any point in submitting a bug report? The panic starts with: Jan 19 13:27:13 host6
2010 Aug 13
15
NFS issue with ZFS
I have Solaris 10 U7 that is exporting a ZFS filesystem. The client is Solaris 9 U7. I can mount the filesystem just fine, but I am unable to write to it. showmount -e shows my mount is set for everyone. The dfstab file has the rw option set. So what gives? Phillip
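For comparison, the ZFS-native way to publish the share; names are hypothetical. When the mount works but writes fail despite rw, the usual suspects are the Unix permissions on the exported directory or root-squashing of the writing user, both worth ruling out before blaming the NFS options:

    # share through the dataset property instead of dfstab
    zfs set sharenfs='rw,root=sol9client' tank/export

    # confirm what the server is really offering
    share
    showmount -e nfsserver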
2009 Oct 17
3
zvol used apparently greater than volsize for sparse volume
What does it mean for the reported value of a zvol volsize to be less than the product of used and compressratio? For example,
# zfs get -p all home1/home1mm01
NAME             PROPERTY  VALUE        SOURCE
home1/home1mm01  type      volume       -
home1/home1mm01  creation  1254440045   -
home1/home1mm01  used      14902492672
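The properties needed to reason about this can be pulled in one shot; the dataset name is the one from the post. On a sparse volume (refreservation=none), snapshots and metadata are counted in used, which can push it past what volsize and the compression ratio alone would predict:

    # parseable values for the space-accounting properties
    zfs get -p volsize,used,referenced,refreservation,compressratio \
        home1/home1mm01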
2010 Mar 19
3
zpool I/O error
Hi all, I'm trying to delete a zpool and when I do, I get this error:
# zpool destroy oradata_fs1
cannot open 'oradata_fs1': I/O error
#
The pools I have on this box look like this:
# zpool list
NAME         SIZE  USED   AVAIL  CAP  HEALTH    ALTROOT
oradata_fs1  532G  119K   532G   0%   DEGRADED  -
rpool        136G  28.6G  107G   21%  ONLINE    -
#
Why
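Two hedged things to try against a DEGRADED pool that will not open; the pool name is the one from the post:

    # find out which device is failing before destroying anything
    zpool status -v oradata_fs1

    # -f forces the destroy past busy datasets; it may or may not
    # get past the underlying open failure
    zpool destroy -f oradata_fs1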
2009 Oct 14
14
ZFS disk failure question
So, my Areca controller has been complaining via email of read errors for a couple of days on SATA channel 8. The disk finally gave up last night at 17:40. I have to say, I really appreciate the Areca controller taking such good care of me. For some reason I wasn't able to log into the server last night or this morning, probably because my home dir was on the zpool with the failed disk
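The standard recovery once the dead disk is swapped, sketched with hypothetical names (c1t8d0 standing in for the drive on channel 8):

    # confirm which device ZFS considers faulted
    zpool status -x

    # after physically replacing the drive, rebuild onto it
    zpool replace tank c1t8d0
    zpool status tank    # watch the resilver run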