Displaying 20 results from an estimated 600 matches similar to: "libvirtd doesn't attach Sheepdog storage VDI disk correctly"
2015 Nov 30
1
Re: libvirtd doesn't attach Sheepdog storage VDI disk correctly
Hi,
I tried two different approaches.
1.) Convert an existing image with qemu-img
================================================
qemu-img convert -t directsync lubuntu-14.04.3-desktop-i386.iso
sheepdog:lubuntu1404.iso
=================================================
results in
====================================================
root@orion2:/var/lib/libvirt/xml# virsh vol-dumpxml --pool
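For reference, a minimal sketch of the full sequence being described; the pool
name "sheepdog-pool" below is a placeholder for the one truncated above:
====================================================
# convert a local ISO into a sheepdog VDI, bypassing the host page cache
qemu-img convert -t directsync lubuntu-14.04.3-desktop-i386.iso sheepdog:lubuntu1404.iso

# check how libvirt sees the resulting volume
virsh vol-list sheepdog-pool
virsh vol-dumpxml --pool sheepdog-pool lubuntu1404.iso
====================================================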
2015 Nov 30
0
Re: libvirtd doesn't attach Sheepdog storage VDI disk correctly
2015-11-24 15:33 GMT+03:00 Adolf Augustin <adolf.augustin@zettamail.de>:
> It should have been solved in libvirt 1.2.17
>
> See here: https://libvirt.org/news.html
>
> =====================================================
> ....
> update sheepdog client path (Vasiliy Tolstov),
> .....
> =====================================================
2015 Dec 09
0
virt-manager 1.3.1 - broken?? (Ubuntu 14.04)
Hi,
I just upgraded to virt-manager 1.3.1 (Kubuntu 14.04 with getdeb).
I can connect to a remote hypervisor (libvirtd/KVM), but when I try to
"open" a VM, virt-manager gives me the following error:
==============================================================================
summary=Fehler beim Starten der Details ("Error when starting the details"):
Namespace Vte not available for version 2.91
details=Fehler beim
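The error indicates that the GObject-introspection typelib for VTE 2.91 (the
console widget virt-manager uses) cannot be found. A possible check, assuming
an Ubuntu-family system where the gir1.2-vte-2.91 package exists for that
release (on stock 14.04 it may not, which would explain the breakage):
==============================================================================
# is the VTE 2.91 typelib present?
ls /usr/lib/*/girepository-1.0/ | grep -i vte

# is the package available from the configured repositories?
apt-cache policy gir1.2-vte-2.91

# if it is, install it and restart virt-manager
sudo apt-get install gir1.2-vte-2.91
==============================================================================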
2010 Sep 19
1
libvirt support for sheepdog file system
Hi Everyone,
I am planning to extend libvirt to support sheepdog for qemu/kvm and
found that the storage code is placed in src/storage. Storing files in
sheepdog is independent of the file system; we just need to specify the
vm-name on the qemu command line. Here are the commands used for
creating and running a VM with qemu:
qemu-img create sheepdog:MyImage001 25G
qemu-system-x86_64
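The second command is cut off in the excerpt; a minimal sketch of the usual
pairing, with the machine options below being illustrative only:

# create a 25G image stored in sheepdog
qemu-img create sheepdog:MyImage001 25G

# boot a guest from that image
qemu-system-x86_64 -m 1024 -enable-kvm -drive file=sheepdog:MyImage001,if=virtio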
2013 Sep 18
1
problem with sheepdog backend when creating a pool
Hello all,
I am getting an error when creating a sheepdog pool via libvirt. libvirt is compiled as:
./configure --prefix=/opt/libvirt --without-xen --with-yajl --with-storage-sheepdog=/opt/sheepdog
Sheepdog itself is functional: creating a vdi manually via "qemu-img" and then using it as a disk in libvirt works.
The error looks like this:
internal error missing backend for pool type 9
The
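"Pool type 9" is the sheepdog pool type, so the libvirtd that is actually
answering appears to have been built without the sheepdog storage backend (or
is a different build than the one configured above). For reference, a pool
definition of roughly the documented shape, with placeholder names and the
default sheepdog port:
====================================================
<pool type='sheepdog'>
  <name>sheeppool</name>
  <source>
    <name>sheeppool</name>
    <host name='localhost' port='7000'/>
  </source>
</pool>
====================================================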
2013 Feb 18
0
Sheepdog support in libguestfs (was: Re: About features of libguestfs)
On Mon, Feb 18, 2013 at 11:28:47AM +0800, Edwin Cen wrote:
> I wonder whether libguestfs supports sheepdog, i.e. whether I can inject
> some config info into a virtual machine built on sheepdog with
> libguestfs or not? If the current version cannot support it, can I
> change something to make it happen?
It's likely that sheepdog could be made to work, and adding support
would probably
2013 Sep 24
0
How to create snapshots for sheepdog with libvirt API
Hello!
I am trying to create snapshots for sheepdog disks using libvirt API or virsh. The disk is defined in domain as follows:
<disk type='network' device='disk'>
<driver name='qemu' cache='none'/>
<source protocol='sheepdog' name='sheepvol1'/>
<target dev='vdb' bus='virtio'/>
</disk>
2013 Sep 24
0
creating snapshots for sheepdog with libvirt API
Hello!
I am trying to create snapshots for sheepdog disks using libvirt API or virsh. The disk is defined in domain as follows:
<disk type='network' device='disk'>
<driver name='qemu' cache='none'/>
<source protocol='sheepdog' name='sheepvol1'/>
<target dev='vdb' bus='virtio'/>
</disk>
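Whether virsh can snapshot such a network-backed disk depends on the libvirt
version; the attempt would look like the first command below. At the qemu
level a sheepdog VDI can also be snapshotted directly, which is a possible
workaround (domain and snapshot names are placeholders, and the qemu-img route
is safest with the guest shut off):

# ask libvirt for a snapshot of the domain
virsh snapshot-create-as mydomain snap1

# sheepdog-level snapshot of the VDI itself
qemu-img snapshot -c snap1 sheepdog:sheepvol1
qemu-img snapshot -l sheepdog:sheepvol1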
2011 Aug 25
1
(no subject)
Hi,
I've used libvirt and sheepdog, and they are great!
But I have one question about libvirt and sheepdog.
As we know, I can use this command to create a sheepdog image:
#qemu-img create -f raw sheepdog:fedora15.img 40G
but how can I create a pool and a volume for sheepdog with libvirt XML?
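For reference, a rough sketch of the XML route: define a sheepdog pool (the
same shape as the pool XML sketched under the 2013 Sep 18 entry above), then
create a volume in it; names, size and file names below are placeholders:
====================================================
<volume>
  <name>fedora15.img</name>
  <capacity unit='G'>40</capacity>
</volume>
====================================================
virsh pool-define sheeppool.xml
virsh pool-start sheeppool
virsh vol-create sheeppool fedora15-vol.xml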
2014 Aug 06
2
python-guestfs rbd
How can I use python-guestfs to access an rbd device? The function I found is g.add_drive_opts, but I don't know how it receives ceph's configuration.
I found this link
http://rwmj.wordpress.com/2013/03/12/accessing-ceph-rbd-sheepdog-etc-using-libguestfs/
Is that the only way to access ceph rbd? Can we use python-guestfs to get the same effect?
Thanks
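For reference, a sketch of how add_drive_opts can be given the ceph details
directly on a reasonably recent libguestfs (roughly 1.22 or newer); the
pool/image name, monitor address and credentials below are placeholders:
====================================================
import guestfs

g = guestfs.GuestFS(python_return_dict=True)
# "pool/volume" is the rbd image; server lists the ceph monitors
g.add_drive_opts("pool/volume", format="raw", protocol="rbd",
                 server=["ceph-mon1:6789"],
                 username="admin", secret="AQD...placeholder...")
g.launch()
print(g.list_filesystems())
g.shutdown()
g.close()
====================================================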
2010 Jun 17
2
Question regarding print
Hi,
Does anybody know how to get output from print without the leading [1]?
(Or must I use cat/write?)
>out="r15"
>print(out,quote=FALSE)
[1] r15
And I definitely do not want the leading [1] as I want to construct a table
from this.
Ciao, Adolf
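For what it's worth, cat() or writeLines() print the bare string without the
[1] vector index that print() adds:
>out="r15"
>writeLines(out)
r15
>cat(out, "\n")
r15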
2015 Jan 10
2
missing backend for pool type 5 (iscsi)
Hi,
I try to define an iscsi pool with virsh but I always get the following
error:
error: internal error: missing backend for pool type 5 (iscsi)
And yet libvirt was compiled with iscsi support:
configure: Storage Drivers
configure:
configure: Dir: yes
configure: FS: yes
configure: NetFS: yes
configure: LVM: yes
configure: iSCSI: yes
configure: SCSI: yes
configure:
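For reference, an iscsi pool definition of roughly the documented shape (the
target IQN and portal address below are placeholders). If the backend really
is compiled in, one common cause of this error is that the libvirtd actually
answering virsh is a different build than the one whose configure output is
shown, so it is worth checking which libvirtd binary is running (e.g.
ps -ef | grep libvirtd):

<pool type='iscsi'>
  <name>iscsipool</name>
  <source>
    <host name='192.168.0.10'/>
    <device path='iqn.2015-01.com.example:storage'/>
  </source>
  <target>
    <path>/dev/disk/by-path</path>
  </target>
</pool>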
2014 Mar 13
2
--rbd volume access--
http://rwmj.wordpress.com/2013/03/12/accessing-ceph-rbd-sheepdog-etc-using-libguestfs/#comment-8806
I came across this link and I was able to retrieve the rbd image.
$ guestfish
><fs> set-attach-method appliance
><fs> add-drive /dev/null
><fs> config -set drive.hd0.file=rbd:pool/volume
><fs> run
I was able to retrieve a file from the rbd image using the above
2018 Apr 09
2
New 3.12.7 possible split-brain on replica 3
Hello,
Last Friday I upgraded my GlusterFS 3.10.7 3-way replica (with arbiter) cluster to 3.12.7 and this morning I got a warning that 9 files on one of my volumes are not synced. Indeed, checking that volume with a "volume heal info" shows that the third node (the arbiter node) has 9 files to be healed, but they are not being healed automatically.
All nodes were always online and there
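For reference, the commands being referred to, using the volume name that
appears later in this thread:

gluster volume heal myvol-private info
gluster volume heal myvol-private info split-brain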
2018 Apr 09
2
New 3.12.7 possible split-brain on replica 3
Thanks Ravi for your answer.
Stupid question but how do I delete the trusted.afr xattrs on this brick?
And when you say "this brick", do you mean the brick on the arbiter node (node 3 in my case)?
------- Original Message -------
On April 9, 2018 1:24 PM, Ravishankar N <ravishankar at redhat.com> wrote:
>
> On 04/09/2018 04:36 PM, mabi wrote:
>
> >
2018 Apr 09
2
New 3.12.7 possible split-brain on replica 3
As suggested in the past on this mailing list, I now ran a stat and getfattr on one of the problematic files on all nodes, and at the end a stat on the fuse mount directly. The output is below:
NODE1:
STAT:
File: '/data/myvol-private/brick/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/dir11/dir12_Archiv/azipfiledir.zip/OC_DEFAULT_MODULE/problematicfile'
Size: 0 Blocks: 38
2018 Apr 09
0
New 3.12.7 possible split-brain on replica 3
Here would be also the corresponding log entries on a gluster node brick log file:
[2018-04-09 06:58:47.363536] W [MSGID: 113093] [posix-gfid-path.c:84:posix_remove_gfid2path_xattr] 0-myvol-private-posix: removing gfid2path xattr failed on /data/myvol-private/brick/.glusterfs/12/67/126759f6-8364-453c-9a9c-d9ed39198b7a: key = trusted.gfid2path.2529bb66b56be110 [No data available]
[2018-04-09
2018 Apr 09
0
New 3.12.7 possible split-brain on replica 3
On 04/09/2018 05:09 PM, mabi wrote:
> Thanks Ravi for your answer.
>
> Stupid question but how do I delete the trusted.afr xattrs on this brick?
>
> And when you say "this brick", do you mean the brick on the arbiter node (node 3 in my case)?
Sorry, I should have been clearer. Yes, the brick on the 3rd node.
`setfattr -x trusted.afr.myvol-private-client-0
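Putting the advice together, a rough sketch of the procedure; the file path is
a placeholder, the xattr name is the one from the truncated command above, and
the commands are run against the brick on the arbiter (3rd) node:

# inspect the afr xattrs as stored on the brick
getfattr -d -m . -e hex /data/myvol-private/brick/path/to/problematicfile

# remove the stale afr xattr
setfattr -x trusted.afr.myvol-private-client-0 /data/myvol-private/brick/path/to/problematicfile

# nudge the self-heal daemon to pick the file up again
gluster volume heal myvol-private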
2018 Apr 09
2
New 3.12.7 possible split-brain on replica 3
Again, thanks, that worked and I now have no more unsynced files.
You mentioned that this bug has been fixed in 3.13; would it be possible to backport it to 3.12? I am asking because 3.13 is not a long-term release, and as such I would rather not have to upgrade to it.
------- Original Message -------
On April 9, 2018 1:46 PM, Ravishankar N <ravishankar at redhat.com> wrote:
2018 Apr 09
0
New 3.12.7 possible split-brain on replica 3
On 04/09/2018 04:36 PM, mabi wrote:
> As suggested in the past on this mailing list, I now ran a stat and getfattr on one of the problematic files on all nodes, and at the end a stat on the fuse mount directly. The output is below:
>
> NODE1:
>
> STAT:
> File: