similar to: What's eating my disk space? Missing snapshots?

Displaying 20 results from an estimated 3000 matches similar to: "What's eating my disk space? Missing snapshots?"

2009 Dec 03
5
L2ARC in clusters
Hi, when deploying ZFS in a cluster environment it would be nice to be able to have some SSDs as local drives (not on SAN), and when the pool switches over to the other node, ZFS would pick up that node's local disk drives as L2ARC. To better clarify what I mean, let's assume there is a 2-node cluster with 1x 2540 disk array. Now let's put 4x SSDs in each node (as internal/local drives). Now
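For reference, the manual version of what the poster wants on failover would look roughly like this; pool and device names below are hypothetical:
   # on the node that has just imported the pool, add its local SSDs as L2ARC
   zpool add tank cache c2t0d0 c2t1d0
   # before switching the pool to the other node, drop the local cache devices
   zpool remove tank c2t0d0 c2t1d0
Each node would re-add its own local devices after import, which is exactly the step the poster hopes ZFS could do automatically.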
2010 Feb 24
3
How to know the recordsize of a file
I would like to know the blocksize of a particular file. I know the blocksize for a particular file is decided at creation time, as a function of the write sizes done and the recordsize property of the dataset. How can I access that information? Some zdb magic? -- Jesus Cea Avion jcea at
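One zdb-based sketch of an answer, assuming a dataset tank/fs and using the file's inode number as the object number (names and numbers here are placeholders):
   ls -i /tank/fs/somefile      # prints the object (inode) number, e.g. 12345
   zdb -ddddd tank/fs 12345     # dnode dump; the "dblk" field is the file's block size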
2008 Jul 02
14
is it possible to add a mirror device later?
Ciao, the root filesystem of my thumper is a ZFS with a single disk:
bash-3.2# zpool status rpool
  pool: rpool
 state: ONLINE
 scrub: none requested
config:
        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          c5t0d0s0  ONLINE       0     0     0
        spares
          c0t7d0    AVAIL
          c1t6d0    AVAIL
          c1t7d0
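The usual answer is zpool attach, which converts a single-disk vdev into a two-way mirror; the second disk name below is hypothetical, and on an x86 root pool the new half also needs boot blocks:
   zpool attach rpool c5t0d0s0 c5t1d0s0
   # x86 root pool: put GRUB on the new mirror half as well
   installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c5t1d0s0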
2009 Aug 04
2
flowadm -i 1 - shows only first flow
Hi, OSOL, b118
> milek@r600:~# flowadm show-flow
> FLOW         LINK    IPADDR   PROTO  PORT  DSFLD
> local_25     iwh0    --       tcp    25    --
> local_22     iwh0    --       tcp    22    --
> milek@r600:~# flowadm show-flow -s -i 1
> FLOW         IPACKETS  RBYTES  IERRORS
2014 Apr 22
2
Live snapshot merging (qemu 2.0)
Hello. The changelog of qemu-2.0.0 mentions "Live snapshot merging". Does someone have an idea what is meant by this? I'm asking because I'm still struggling to find a reliable backup solution for running KVM machines. Blockcopy is my current solution. Best regards, Thomas
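"Live snapshot merging" in that changelog is generally read as live commit of the active layer; a backup flow built on it might look roughly like this (domain, snapshot, and disk names are hypothetical):
   virsh snapshot-create-as VM1 backupsnap --disk-only --atomic
   # the old image is now a stable backing file; copy it off for backup, then
   # fold the overlay back in and pivot the domain onto the merged image
   virsh blockcommit VM1 vda --active --pivot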
2012 Sep 13
1
After a 'virsh blockpull', 'virsh snapshot-list --tree' output does not reflect reality
Hi (Eric?), a couple of questions while using 'virsh blockpull'. Summary:
1] Created snapshots this way: base <- snap1 <- snap2 <- snap3 (online, external snapshots, --disk-only)
2] I did a 'virsh blockpull' from snap2 into snap3
3] Next, did another 'virsh blockpull' from snap1 into snap3 - here, 'qemu-img info /path/to/snap3' shows its backing file
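For context, the steps described map onto commands like these (paths and names are hypothetical); --base controls how much of the chain gets pulled:
   virsh blockpull dom vda --base /path/to/snap1    # pull snap2's data into snap3
   virsh blockpull dom vda                          # no --base: flatten the whole chain
   qemu-img info --backing-chain /path/to/snap3     # inspect the resulting chain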
2009 Oct 15
8
sub-optimal ZFS performance
Hello, ZFS is behaving strangely on an OSOL laptop; your thoughts are welcome. I am running OSOL on my laptop, currently b124, and I found that the performance of ZFS is not optimal in all situations. If I check how much space the package cache for pkg(1) uses, it takes a bit longer on this host than on a comparable machine to which I transferred all the data. user@host:/var/pkg$ time
2015 Oct 19
1
Re: virsh can't support VM offline blockcommit
Hi Kashyap Chamarthy: thank you very much for answering my question. One: it leads to the VM filesystem becoming read-only. 1: Test case. It leads to the VM filesystem becoming read-only; the test case is as follows: we want to snapshot the VM to obtain the VM's incremental data, and we use virsh blockcommit, qemu-img commit, and qemu-img rebase to shorten the snapshot chain. Details are as follows (with the VM in the running state, we perform the
2015 Oct 13
2
virsh can't support VM offline blockcommit
Hi everyone! I use libvirt (version 1.2.2) and QEMU (version 2.2.1) to test qemu snapshot features. I tried virsh blockcommit while the VM was offline, and it failed with the error message below: error: Requested operation is not valid: domain is not running When I start the VM, virsh blockcommit works fine! My question is: we need
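Since virsh blockcommit only operates on a running domain, the usual offline equivalent is qemu-img commit, which merges an overlay into its backing file (path below is hypothetical):
   qemu-img commit /vms/snap3.qcow2
   # afterwards, edit the domain XML so the disk points at the backing file again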
2013 Jan 31
1
Managing Live Snapshots with Libvirt 1.0.1
Hello, I recently compiled libvirt 1.0.1 and qemu 1.3.0 on Ubuntu 12.04. I have performed live snapshots on VMs using "virsh snapshot-create-as" and then later re-merged the images using "virsh blockpull". I am wondering how I can do a couple of other operations on the images while the VM is running. For example, VM1 is running from the snap3 image, with the following
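As a starting point, the running chain can be inspected like this (domain name and path are hypothetical; qemu-img only reads the file, but it can race with a live guest):
   virsh domblklist VM1                            # which image each disk currently uses
   qemu-img info --backing-chain /vms/snap3.img    # the full backing chain behind it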
2009 Mar 03
8
zfs list extensions related to pNFS
Hi, I am soliciting input from the ZFS engineers and/or ZFS users on an extension to "zfs list". Thanks in advance for your feedback. Quick Background: The pNFS project (http://opensolaris.org/os/project/nfsv41/) is adding a new DMU object set type which is used on the pNFS data server to store pNFS stripe DMU objects. A pNFS dataset gets created with the "zfs
2008 Jul 22
2
Problems mounting ZFS after install
Let me thank everyone in advance. I've read a number of posts here and it helped tremendously in getting the install done. I have a couple of remaining issues which I can't seem to overcome. Here are the basics: dom0 - CentOS 5.2 32-bit, Xen 3.2.1 compiled from source; domU - os200805.iso. The install config: [root@internetpowagroup oshman]# cat opensolaris.install name =
2011 Nov 22
3
SUMMARY: mounting datasets from a read-only pool with the aid of tmpfs
Hello all, I'd like to report a tricky situation and a workaround I've found useful - hope this helps someone in similar situations. To cut the long story short, I could not properly mount some datasets from a readonly pool, which had a non-"legacy" mountpoint attribute value set, but the mountpoint was not available (directory absent or not empty). In this case
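The gist of the workaround, sketched with hypothetical names (Solaris tmpfs syntax): graft a writable tmpfs over the unusable mountpoint area so the directory can be created, then mount the dataset:
   mount -F tmpfs swap /ropool/parent    # writable layer over the read-only/cluttered path
   mkdir -p /ropool/parent/data
   zfs mount ropool/parent/data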
2009 Jan 05
3
ZFS import on pool with same name?
I have an OpenSolaris snv_101 box with ZFS on it (Sun Ultra 20 M2); the zpool name is rpool. Then I have a 2nd hard drive in the box that I am trying to recover the ZFS data from (long story, but that HD became unbootable after installing IPS on the machine). Both drives have a pool named "rpool", so I can't import the rpool from the 2nd drive. root@hyperion:~# zpool status
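The standard way around the name clash is to import the second pool by its numeric ID under a new name; the ID below is hypothetical:
   zpool import                              # lists importable pools with their numeric IDs
   zpool import 6569826971234567890 rpool2   # import the 2nd drive's rpool under a new name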
2011 Apr 08
11
How to rename rpool. Is that recommended?
Hello, I have a situation where a host, which is booted off its 'rpool', needs to temporarily import the 'rpool' of another host, edit some files in it, and export the pool back, retaining its original name 'rpool'. Can this be done? Here is what I am trying to do:
# zpool import -R /a rpool temp-rpool
# zfs set mountpoint=/mnt
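One caveat worth noting: a pool is only renamed at import time, so exporting it will not restore the old name; the owning host has to import it back under 'rpool' itself, roughly:
   zpool export temp-rpool
   # then, on the pool's original host:
   zpool import temp-rpool rpool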
2009 Dec 11
7
Doing a ZFS rollback while preserving later-created clones/snapshots?
Hi. Is it possible on Solaris 10 5/09 to roll back to a ZFS snapshot WITHOUT destroying later-created clones or snapshots? Example:
--($ ~)-- sudo zfs snapshot rpool/ROOT@01
--($ ~)-- sudo zfs snapshot rpool/ROOT@02
--($ ~)-- sudo zfs clone rpool/ROOT@02 rpool/ROOT-02
--($ ~)-- LC_ALL=C sudo zfs rollback rpool/ROOT@01
cannot rollback to 'rpool/ROOT@01': more
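The usual way to sidestep this (since rolling back past @02 would destroy it) is to clone the old snapshot and promote the clone instead of rolling back, e.g.:
   sudo zfs clone rpool/ROOT@01 rpool/ROOT-01
   sudo zfs promote rpool/ROOT-01   # the clone becomes the origin; @02 and ROOT-02 survive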
2007 May 02
16
ZFS Support for remote mirroring
Does ZFS support any type of remote mirroring? It seems at present my only two options to achieve this would be Sun Cluster or Availability Suite. I thought that this functionality was in the works, but I haven't heard anything lately. Thanks! Aaron Newcomb http://opennewsshow.org http://thesourceshow.org
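ZFS has no built-in remote mirroring as such; the common building block is asynchronous replication with zfs send/receive over ssh. Hosts and dataset names below are hypothetical:
   zfs snapshot tank/data@rep1
   zfs send tank/data@rep1 | ssh backuphost zfs receive -F backup/data
   # subsequent runs send only the delta between snapshots
   zfs send -i tank/data@rep1 tank/data@rep2 | ssh backuphost zfs receive backup/data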
2008 Aug 26
5
Problem w/ b95 + ZFS (version 11) - seeing a fair number of errors on multiple machines
Hi, after upgrading to b95 of OSOL/Indiana, and doing a ZFS upgrade to the newer revision, all arrays I have using ZFS mirroring are displaying errors. This started happening immediately after the ZFS upgrades. Here is an example:
ormandj@neutron.corenode.com:~$ zpool status
  pool: rpool
 state: DEGRADED
status: One or more devices has experienced an unrecoverable error. An attempt was
2010 Apr 26
2
How to delegate zfs snapshot destroy to users?
Hi, I'm trying to let zfs users create and destroy snapshots in their zfs filesystems. So rpool/vm has the permissions:
osol137 19:07 ~: zfs allow rpool/vm
---- Permissions on rpool/vm -----------------------------------------
Permission sets:
        @virtual clone,create,destroy,mount,promote,readonly,receive,rename,rollback,send,share,snapshot,userprop
Create time permissions:
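For snapshot destruction specifically, the delegation generally needs destroy plus mount on the dataset, not just snapshot; a minimal sketch with a hypothetical user:
   zfs allow -u alice snapshot,destroy,mount rpool/vm
   zfs allow rpool/vm    # verify the resulting permission set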
2011 Aug 08
2
rpool recover not using zfs send/receive
Is it possible to recover the rpool with only a tar/star archive of the root filesystem? I have used the zfs send/receive methods and they work without a problem. What I am trying to do is recreate the rpool and underlying zfs filesystems (rpool/ROOT, rpool/s10_uXXXXXX, rpool/dump, rpool/swap, rpool/export, and rpool/export/home). I then mount the pool at an alternate root and restore the tar
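A rough outline of that recovery path, with hypothetical device and BE names (dataset layout and boot-block commands vary by release and architecture):
   zpool create -f -R /a rpool c0t0d0s0
   zfs create rpool/ROOT
   zfs create rpool/ROOT/s10be           # hypothetical boot environment name
   (cd /a && tar xf /backup/root.tar)    # restore the archived root filesystem
   zpool set bootfs=rpool/ROOT/s10be rpool
   installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t0d0s0   # x86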