Displaying 20 results from an estimated 700 matches similar to: "Migrating ZFS/data pool to new pool on the same system"
2012 Sep 13
1
After a 'virsh blockpull', 'virsh snapshot-list --tree' output does not reflect reality
Hi (Eric?),
A couple of questions while using the 'virsh blockpull'
Summary:
1] Created snapshots this way: base<-snap1<-snap2<-snap3 (online, external snapshot
--disk-only)
2] I did a 'virsh blockpull' from snap2 into snap3
3] Next, did another 'virsh blockpull' from snap1 into snap3
- Here, 'qemu-img info /path/to/snap3' shows its backing file
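For readers landing here, a minimal sketch of the chain setup and pull steps described above (the domain name 'vm1' and the image paths are illustrative assumptions, not from the thread):

# create the external, disk-only snapshots: base <- snap1 <- snap2 <- snap3
virsh snapshot-create-as vm1 snap1 --disk-only --atomic
virsh snapshot-create-as vm1 snap2 --disk-only --atomic
virsh snapshot-create-as vm1 snap3 --disk-only --atomic

# pull snap2's contents into snap3, leaving snap1 as the backing file
virsh blockpull vm1 vda --base /path/to/snap1.qcow2 --wait --verbose

# then pull snap1's contents as well, leaving only base underneath
virsh blockpull vm1 vda --base /path/to/base.qcow2 --wait --verbose

# inspect the resulting backing chain
qemu-img info --backing-chain /path/to/snap3.qcow2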
2014 Apr 22
0
Re: Live snapshot merging (qemu 2.0)
On 04/22/2014 01:47 AM, Thomas Stein wrote:
> Hello.
>
> The Changelog of qemu-2.0.0 mentioned "Live snapshot merging". Does
> anyone have an idea what is meant by this? I'm asking because I'm still
> struggling with finding a reliable backup solution for running kvm
> machines. Blockcopy is my current solution.
"Live snapshot merging" means going
2013 Jan 31
1
Managing Live Snapshots with Libvirt 1.0.1
Hello,
I recently compiled libvirt 1.0.1 and qemu 1.3.0 on Ubuntu 12.04. I have performed live snapshots on VMs using "virsh snapshot-create-as" and then later re-merge the images together using "virsh blockpull". I am wondering how I can do a couple of other operations on the images while the VM is running. For example, VM1 is running from the snap3 image, with the following
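One of the follow-up operations that comes up here is cleaning up after a blockpull; a hedged sketch, reusing the VM1/snap1 naming from the post:

# after pulling, drop libvirt's record of a snapshot without touching
# the image files themselves, then re-check the snapshot tree
virsh snapshot-delete VM1 snap1 --metadata
virsh snapshot-list VM1 --tree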
2015 Oct 19
1
Re: virsh can't support VM offline blockcommit
Hi Kashyap Chamarthy:
thank you very much for answering my question:
One: it leads to the VM filesystem becoming read-only
1: test case
The test case in which it leads to the VM filesystem becoming read-only is as follows:
we want to take snapshots of the VM to obtain incremental data, and we use virsh blockcommit, qemu-img commit, and qemu-img rebase to shorten the snapshot chain.
Details are as follows (with the VM in the running state, we perform the
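For reference, the qemu-img half of the chain-shortening the poster mentions can be sketched as follows (file names assumed; these are offline operations on the image files):

# safe rebase: copy whatever snap2 still needs from snap1 into snap2,
# then point snap2 directly at base
qemu-img rebase -b base.qcow2 snap2.qcow2

# alternative direction: merge snap2's data down into its backing file
qemu-img commit snap2.qcow2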
2014 Aug 06
2
[help] Does "virsh blockpull" work on a live virtual machine
Hi all,
I have a kvm virtual machine running (qemu version 2.0), and I have taken several external snapshots of the disk (using "virsh snapshot-create-as"). The existing disk file relationship now looks like: base <- snap1 <- snap2 <- current disk file. Now I want to remove snap1 and snap2 and have the current disk use the base image file as its backing file directly. Unfortunately,
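A hedged sketch of the live pull being asked about (the domain name 'guest' and the paths are assumptions; blockpull does operate on a running domain):

# pull snap1 and snap2 into the current image, keeping base as backing
virsh blockpull guest vda --base /path/to/base.img --wait --verbose

# verify that the chain is now just base <- current
qemu-img info --backing-chain /path/to/current.qcow2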
2015 Dec 07
3
Efficient live disk backup with active blockcommit : Failed 'block-commit': Could not reopen file: Permission denied
Hi ,
Working on a simple POC: advanced snapshots using libvirt and qemu.
The following are the exact steps that were followed.
1. Created a base VM - Ubuntu 15.10 - with the following libvirt and qemu
versions
Using library: libvirt 1.2.16
Using API: QEMU 1.2.16
Running hypervisor: QEMU 2.3.0
QEMU emulator version 2.3.0 (Debian 1:2.3+dfsg-5ubuntu9.1), Copyright
(c) 2003-2008 Fabrice
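For context, the "active blockcommit" backup flow being tested generally looks like this hedged sketch (domain and snapshot names assumed):

# 1. take a disk-only external snapshot; guest writes go to the overlay
virsh snapshot-create-as --domain vm1 backup-snap --disk-only --atomic

# 2. back up the now-stable base image with any copy tool

# 3. merge the overlay back into base and pivot the guest onto it
virsh blockcommit vm1 vda --active --pivot --verbose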
2014 Aug 06
0
Re: [help] Does "virsh blockpull" work on a live virtual machine
On 08/06/2014 06:04 AM, chenyanqiu@keytonecloud.com wrote:
> Hi all,
[please configure your mailer to wrap long lines]
> ...Now I want to remove snap1 and snap2 and have the current disk use
> the base image file as its backing file directly. Unfortunately, for
> some reason, I cannot shut down the VM and execute the "virsh blockpull"
> command. Can I execute "virsh blockpull"
2013 Feb 10
3
Re: Diff using send-receive code
Hello,
We're a team of 4 final year computer science students and are
working on generating a diff between file system snapshots using the
send receive code.
The output of our utility looks like this-
(I've tested it on a small subvol with minimal changes just to give an idea)
root@nafisa-M-6319:/mnt/btrfs# btrfs sub diff -p /mnt/btrfs/snap1 /mnt/btrfs/snap2
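For comparison, recent stock btrfs-progs can render a send stream as a rough textual diff; a hedged sketch, assuming snap1 and snap2 are read-only snapshots:

# produce an incremental stream from snap1 to snap2 and print its
# operations (mkfile, write, unlink, ...) instead of applying them
btrfs send -p /mnt/btrfs/snap1 /mnt/btrfs/snap2 | btrfs receive --dump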
2017 Dec 19
0
kernel: blk_cloned_rq_check_limits: over max segments limit., Device Mapper Multipath, iBFT, iSCSI COMSTAR
Hi,
WARNING: Long post ahead
I have an issue when starting multipathd. The kernel complains about "blk_cloned_rq_check_limits:
over max segments limit".
The server in question is configured for KVM hosting. It boots via iBFT to an iSCSI volume. Target
is COMSTAR and underlying that is a ZFS volume (100GB). The server also has two infiniband cards
providing four (4) more paths over SRP
2005 Nov 17
2
zpool iostat question
Hello ZFSland,
Is there any significance to the fact that the bandwidth/read figures for a simple cpio into a ZFS filesystem are multiples of 21.3K (when non-zero), as follows? What could determine this figure? Do I need to read a manpage? ;-)
Thanks... Sean.
-----
[root at global:/36g2] # zpool iostat 3
       capacity     operations    bandwidth
pool   used  avail  read
2011 Aug 11
6
unable to mount zfs file system - please help
# uname -a
Linux testbox 2.6.18-194.el5 #1 SMP Tue Mar 16 21:52:39 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux
# rpm -qa|grep zfs
zfs-test-0.5.2-1
zfs-modules-0.5.2-1_2.6.18_194.el5
zfs-0.5.2-1
zfs-modules-devel-0.5.2-1_2.6.18_194.el5
zfs-devel-0.5.2-1
# zfs list
NAME        USED  AVAIL  REFER  MOUNTPOINT
pool1       120K   228G    21K  /pool1
pool1/fs1    21K   228G    21K  /vik
[root at
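A few standard diagnostics for a dataset that will not mount; a hedged sketch, reusing the names above:

# does the dataset believe it is mounted, and where?
zfs get mounted,mountpoint pool1/fs1

# try mounting explicitly and watch for the error
zfs mount pool1/fs1
zfs mount -a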
2014 Apr 22
2
Live snapshot merging (qemu 2.0)
Hello.
The Changelog of qemu-2.0.0 mentioned "Live snapshot merging". Does
anyone have an idea what is meant by this? I'm asking because I'm still
struggling with finding a reliable backup solution for running kvm
machines. Blockcopy is my current solution.
best regards
Thomas
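Since blockcopy is mentioned as the working approach, a hedged sketch of that pattern (domain name and destination path assumed; note that older libvirt only allowed blockcopy on transient domains):

# mirror vda into a new file, then --finish to break off the copy
virsh blockcopy guest vda /backup/vda-copy.qcow2 --wait --verbose --finish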
2004 Feb 25
1
authenticating from another samba server
Hi, I have a server, snap1 10.8.5.10, that runs samba, and I have users
created using useradd (but I have not added them to snap1's smbpasswd). I'd
like users on our primary samba server, archives1 10.8.5.2, to be
able to type \\snap1\username in windows and have the snap1 server
take them to their home directory on the snap1 server, but authenticate
the users against
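In the Samba 3 era this was typically done with pass-through authentication; a hedged sketch of the smb.conf on snap1 (parameter values assumed for illustration):

# /etc/samba/smb.conf on snap1
[global]
   security = server
   password server = 10.8.5.2
   encrypt passwords = yes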
2008 Oct 16
1
attaching 2nd vol unsupported?
Hi,
I'm trying to attach another volume (i.e. disk) to a win HVM, however it
doesn't seem to work:
+ xm block-attach win2008ss phy:/dev/zvol/dsk/pool1/win2008ss.dsk2 \
hdd:disk w 0
results in:
elkner.sol ~ > + xm block-list win2008ss --long
(0
(vbd
(uuid 7cb8fadf-619d-dde6-bda9-dcc18023c7d5)
(bootable 1)
(devid 768)
(driver paravirtualised)
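For reference, a hedged sketch of creating and attaching a second zvol-backed disk (the volume size is an assumption; HVM guests can be picky about which virtual device names they accept):

# create the backing zvol
zfs create -V 12G pool1/win2008ss.dsk2

# attach it to the running domain
xm block-attach win2008ss phy:/dev/zvol/dsk/pool1/win2008ss.dsk2 hdd:disk w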
2006 Jun 12
2
?: zfs mv within pool seems slow
I have just upgraded my jumpstart server to S10 u2 b9a.
It is an Ultra 10 with two 120GB EIDE drives. The second drive (disk1) is new, and has u2b9a
installed on a slice, with most of the space in slice 7 for the ZFS pool
I created pool1 on disk1, and created the filesystem pool1/ro (for legacy reasons). I then moved my
data from the original disk0 UFS file system to pool1/ro. Initially I
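Worth noting for this thread: mv between two ZFS datasets crosses a filesystem boundary, so it degrades to copy-plus-unlink even inside one pool; only a mv within a single dataset is a cheap rename. A quick way to see the boundaries:

# each row is a separate filesystem; mv across rows copies the data
zfs list -o name,mountpoint -r pool1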
2014 Jul 03
0
Re: virsh blockcopy: doesn't seem to flatten the chain by default
On 07/02/2014 01:12 PM, Kashyap Chamarthy wrote:
> We have this simple chain:
>
> base <- snap1
>
> Let's quickly examine the contents of 'base' and 'snap1' images:
>
> Now, let's do a live blockcopy (with a '--finish' to gracefully finish
> the mirroring):
>
> $ virsh blockcopy --domain testvm2 vda \
>
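Relevant to the subject line: blockcopy flattens the chain into the destination by default, while --shallow preserves the backing file; a hedged sketch (destination paths assumed):

# default: full, flattened copy of the whole chain
virsh blockcopy --domain testvm2 vda /var/tmp/copy-full.qcow2 --wait --verbose --finish

# copy only the top layer, keeping 'base' as the backing file
virsh blockcopy --domain testvm2 vda /var/tmp/copy-top.qcow2 --shallow --wait --verbose --finish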
2010 May 15
7
Unable to Destroy One Particular Snapshot
Howdy All,
I've a bit of a strange problem here. I have a filesystem with one snapshot that simply refuses to be destroyed. The snapshots just prior to it and just after it were destroyed without problem. While running the zfs destroy command on this particular snapshot, the server becomes more-or-less hung. It's pingable but will not open a new shell (local or via ssh) however
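Two common reasons a single snapshot refuses to die are user holds and dependent clones; a hedged diagnostic sketch (dataset names assumed):

# does anything hold the snapshot?
zfs holds tank/fs@stuck

# was it cloned? clones report the snapshot as their origin
zfs get -r origin tank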
2009 Jan 12
1
ZFS size is different ?
Hi all,
I have 2 questions about ZFS.
1. I have created a snapshot of pool1/data1 and used zfs send/recv to copy it to pool2/data2, but I found that the USED values in zfs list differ:
NAME          USED  AVAIL  REFER  MOUNTPOINT
pool2/data2   160G  1.44T   159G  /pool2/data2
pool1/data    176G   638G   175G  /pool1/data1
It holds about 30,000,000 files.
The content of p_pool/p1 and backup/p_backup
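When send/recv copies differ in USED like this, comparing the space-related properties on both sides usually explains it; a hedged sketch:

# compression, copies and recordsize all change USED for the same data
zfs get used,referenced,compressratio,compression,copies,recordsize pool1/data pool2/data2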
2014 Apr 10
0
Re: Help with understanding and solving snapshot problem
On 04/10/2014 12:00 AM, rolf wrote:
> Hello
>
> Fairly new to libvirt. I'm hoping both to solve a problem with this question and to learn more about how libvirt works.
[Can you convince your mailer to wrap long lines? It makes it easier
for other readers]
>
> Using RHEL 6.4; the libvirt version is 0.10.2 and the qemu-img version is 0.12.1.2
Have you considered raising
2009 Aug 28
0
Comstar and ESXi
Hello all,
I am running an OpenSolaris server running the 06/09 release. I installed COMSTAR and enabled it. I have an ESXi 4.0 server connecting to COMSTAR via iSCSI on its own switch. (There are two ESXi servers, both of which do this regardless of whether one is on or off.) The error I see on ESXi is "Lost connectivity to storage device
naa.600144f030bc450000004a9806980003. Path vmhba33:C0:T0:L0 is
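On the COMSTAR side, the usual first checks are whether the logical unit and target are still online; a hedged sketch:

# list logical units and their state
stmfadm list-lu -v

# list iSCSI targets and their active sessions
itadm list-target -v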