Displaying 20 results from an estimated 40000 matches similar to: "ceph vs gluster as libvirt backend"
2023 Dec 14
2
Gluster -> Ceph
Hi all,
I am looking into Ceph and CephFS, and in my
head I am comparing them with Gluster.
The way I have been running Gluster over the years
is as either a replicated or a distributed-replicated cluster.
The small setup we have had has been a replicated cluster
with one arbiter and two fileservers.
These fileservers have been configured with RAID6, and
that RAID array has been used as the brick.
If disaster
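For reference, a minimal sketch of the kind of volume described above (two fileservers, each contributing its RAID6 mount as a single brick, plus one arbiter); the hostnames, paths and volume name are made-up assumptions, not from the original post:

# one brick per fileserver (the RAID6 mount) plus a small arbiter brick
gluster volume create gv0 replica 3 arbiter 1 \
    fs1:/data/raid6/brick fs2:/data/raid6/brick arb1:/data/arbiter/brick
gluster volume start gv0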
2015 Jan 08
0
Libvirt guest can't boot up when using ceph as storage backend with SELinux enabled
Hi there,
I ran into a problem where a guest fails to boot when SELinux is enabled and the guest storage
is based on ceph. However, I can boot the guest with qemu directly, and I can also boot it
with SELinux disabled. I am not sure whether this is a libvirt bug or a wrong use case.
1. Enable SELinux
# getenforce && iptables -L
Enforcing
Chain INPUT (policy ACCEPT)
target prot opt source destination
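A common first step when debugging this kind of failure (a generic sketch, not from the original thread, assuming the audit tools are installed on the host) is to check whether SELinux is actually denying the qemu process access to the Ceph resources:

# show recent AVC denials, typically logged against the svirt/qemu process
ausearch -m avc -ts recent
# optionally turn the denials into a local policy module for inspection;
# the module name local-qemu-rbd is just a placeholder
ausearch -m avc -ts recent --raw | audit2allow -M local-qemu-rbd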
2023 Dec 17
1
Gluster -> Ceph
On 14/12/2023 16:08, Joe Julian wrote:
> With ceph, if the placement database is corrupted, all your data is lost
> (happened to my employer, once, losing 5PB of customer data).
From what I've been told (by experts), it's really hard to make that
happen, even more so if proper redundancy of the MON and MDS daemons is
implemented on quality hardware.
> With Gluster, it's just files on
2023 Dec 17
1
Gluster -> Ceph
On December 17, 2023 5:40:52 AM PST, Diego Zuccato <diego.zuccato at unibo.it> wrote:
>On 14/12/2023 16:08, Joe Julian wrote:
>
>> With ceph, if the placement database is corrupted, all your data is lost (happened to my employer, once, losing 5PB of customer data).
>
>From what I've been told (by experts), it's really hard to make that happen, even more so if proper
2023 Dec 14
2
Gluster -> Ceph
A big RAID array isn't great as a brick. If the array does fail, the larger brick means much longer heal times.
My main question I ask when evaluating storage solutions is, "what happens when it fails?"
With ceph, if the placement database is corrupted, all your data is lost (happened to my employer, once, losing 5PB of customer data). With Gluster, it's just files on disks, easily
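As an aside, the heal backlog that makes large bricks painful can be watched directly; a minimal sketch, with gv0 standing in for a hypothetical volume name:

# files still pending self-heal, listed per brick
gluster volume heal gv0 info
# just the counts, which is what grows painfully with very large bricks
gluster volume heal gv0 statistics heal-count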
2014 Jan 16
0
Re: Ceph RBD locking for libvirt-managed LXC (someday) live migrations
On Wed, Jan 15, 2014 at 05:47:35PM -0500, Joshua Dotson wrote:
> Hi,
>
> I'm trying to build an active/active virtualization cluster using a Ceph
> RBD as backing for each libvirt-managed LXC. I know live migration for LXC
> isn't yet possible, but I'd like to build my infrastructure as if it were.
> That is, I would like to be sure proper locking is in place for
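For the libvirt side of that locking, one option (a sketch only; whether it applies to the LXC driver as well as the QEMU driver is an assumption to verify, and the paths are the usual defaults) is libvirt's lockd plugin backed by virtlockd:

# /etc/libvirt/qemu.conf -- have libvirt acquire a lease on each disk
lock_manager = "lockd"

systemctl enable --now virtlockd
systemctl restart libvirtd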
2013 Jun 08
0
Re: [ceph-users] Setting RBD cache parameters for libvirt+qemu
On 06/07/2013 04:18 PM, John Nielsen wrote:
> On Jun 7, 2013, at 5:01 PM, Josh Durgin <josh.durgin@inktank.com> wrote:
>
>> On 06/07/2013 02:41 PM, John Nielsen wrote:
>>> I am running some qemu-kvm virtual machines via libvirt using Ceph RBD as the back-end storage. Today I was testing an update to libvirt-1.0.6 on one of my hosts and discovered that it includes this
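For context, RBD caching on a libvirt guest is normally steered by the cache attribute of the disk's driver element, optionally combined with client-side settings in ceph.conf on the hypervisor; a generic sketch, not necessarily what this thread settled on (the 64 MiB cache size is an arbitrary example):

<driver name='qemu' type='raw' cache='writeback'/>

[client]
rbd cache = true
rbd cache size = 67108864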
2018 Aug 06
0
Re: ceph rbd pool and libvirt manageability (virt-install)
Hello everybody,
virt-install --version
1.4.0
How do I create a ceph network disk with virt-install without having to
edit it?
<disk type='network' device='disk'>
<driver name='qemu' type='raw'/>
<auth username='libvirt'>
<secret type='ceph' uuid='ec9be0c4-a60f-490e-af83-f0f27aaf48c9'/>
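One way to avoid hand-editing the XML (a sketch under the assumption that a libvirt rbd storage pool named myrbdpool already exists and contains a volume named testvm.img; the other parameters are placeholders) is to point virt-install at a volume in that pool:

virt-install \
    --name testvm \
    --memory 2048 --vcpus 2 \
    --disk vol=myrbdpool/testvm.img,format=raw,bus=virtio \
    --network default \
    --import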
2018 May 16
0
[ceph-users] dovecot + cephfs - sdbox vs mdbox
Hi,
some time back we had similar discussions when we, as an email provider,
discussed to move away from traditional NAS/NFS storage to Ceph.
The problem with POSIX file systems and dovecot is that e.g. with mdbox
only around ~20% of the IO operations are READ/WRITE, the rest are
metadata IOs. You will not change this by using CephFS, since it will
basically behave the same way as e.g. NFS.
We
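For reference, the two mailbox formats being compared are selected purely via mail_location in the Dovecot configuration; a minimal sketch with arbitrary paths:

# sdbox: one file per message, more files but simpler recovery
mail_location = sdbox:~/sdbox
# mdbox: several messages per file, fewer files but purging/rebuild needed
mail_location = mdbox:~/mdbox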
2018 May 16
0
[ceph-users] dovecot + cephfs - sdbox vs mdbox
Hello Jack,
yes, I imagine I'll have to do some work on tuning the block size on
cephfs. Thanks for the advice.
I knew that with mdbox messages are not removed, but I thought that was
true for sdbox too. Thanks again.
We'll soon do benchmarks of sdbox vs mdbox over cephfs with the bluestore
backend.
We'll have to do some work on how to simulate user traffic, for writes
and reads.
2018 May 27
1
Using libvirt to access Ceph RBDs with Xen
Hi everybody,
my background: I have been doing Xen for 10+ years, many of them with DRBD for
high availability; for some time now I have preferred GlusterFS with
FUSE as replicated storage, where I place the image files for the VMs.
In my current project we started (successfully) with Xen/GlusterFS too,
but because the provider where we placed the servers makes wide use of Ceph,
we decided to
2013 Jun 07
0
Re: [ceph-users] Setting RBD cache parameters for libvirt+qemu
On 06/07/2013 02:41 PM, John Nielsen wrote:
> I am running some qemu-kvm virtual machines via libvirt using Ceph RBD as the back-end storage. Today I was testing an update to libvirt-1.0.6 on one of my hosts and discovered that it includes this change:
> [libvirt] [PATCH] Forbid use of ':' in RBD pool names
> ...People are known to be abusing the lack of escaping in current
2013 Jun 07
1
Re: [ceph-users] Setting RBD cache parameters for libvirt+qemu
On Jun 7, 2013, at 5:01 PM, Josh Durgin <josh.durgin@inktank.com> wrote:
> On 06/07/2013 02:41 PM, John Nielsen wrote:
>> I am running some qemu-kvm virtual machines via libvirt using Ceph RBD as the back-end storage. Today I was testing an update to libvirt-1.0.6 on one of my hosts and discovered that it includes this change:
>> [libvirt] [PATCH] Forbid use of ':'
2014 Jan 15
2
Ceph RBD locking for libvirt-managed LXC (someday) live migrations
Hi,
I'm trying to build an active/active virtualization cluster using a Ceph
RBD as backing for each libvirt-managed LXC. I know live migration for LXC
isn't yet possible, but I'd like to build my infrastructure as if it were.
That is, I would like to be sure proper locking is in place for live
migrations to someday take place. In other words, I'm building things as
if I were
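On the Ceph side, RBD images support advisory locks that such an orchestration layer could check before starting a container elsewhere; a sketch with made-up pool, image and lock names (note that advisory locks do not block IO by themselves, they only record intent):

# take, inspect and release an advisory lock on the image
rbd lock add libvirt-pool/lxc-rootfs host-a
rbd lock list libvirt-pool/lxc-rootfs
rbd lock remove libvirt-pool/lxc-rootfs host-a <locker-id-from-lock-list>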
2016 Jan 14
0
libvirt + ceph rbd will hang
hi all:
I use OpenStack Icehouse
and libvirt version 0.10,
with qemu+ceph to store the VM disks.
When I do several operations, for example migrating or snapshotting a VM,
libvirtd hangs.
I suspect the ceph rbd may be causing this error.
Can anyone help me?
Wang Liming
2015 Oct 13
0
[ovirt-users] CEPH rbd support in EL7 libvirt
hi
is oVirt usable with Xen? Is there any doc/howto on how to use it?
On 2015-10-12 15:04, Sven Kieske wrote:
> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA1
>
>
>
> On 12/10/15 10:13, Nux! wrote:
>> Hi Nir,
>>
>> I have not tried to use oVirt with Ceph; my question was about
>> libvirt, and I was directed to ask the question here, sorry for the
2018 May 16
1
[ceph-users] dovecot + cephfs - sdbox vs mdbox
Thanks Jack.
That's good to know. It is definitely something to consider.
In a distributed storage scenario we might build a dedicated pool for that
and tune the pool as more capacity or performance is needed.
Regards,
Webert Lima
DevOps Engineer at MAV Tecnologia
*Belo Horizonte - Brasil*
*IRC NICK - WebertRLZ*
On Wed, May 16, 2018 at 4:45 PM Jack <ceph at jack.fr.eu.org> wrote:
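A sketch of what such a dedicated pool can look like on CephFS (the pool name, PG count, filesystem name and mount path are illustrative assumptions):

# create a pool and make it available to the filesystem
ceph osd pool create mail 128
ceph fs add_data_pool cephfs mail
# pin the mail directory tree to the new data pool via its file layout
setfattr -n ceph.dir.layout.pool -v mail /mnt/cephfs/mail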
2018 Aug 07
1
Re: ceph rbd pool and libvirt manageability (virt-install)
On Mon, Aug 06, 2018 at 09:19:59PM +0200, Jelle de Jong wrote:
> Hello everybody,
>
> virt-install --version
> 1.4.0
>
> How do I create a ceph network disk with virt-install without having to
> edit it?
>
> <disk type='network' device='disk'>
> <driver name='qemu' type='raw'/>
> <auth
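For comparison, a complete network-disk definition of the kind being asked about could look like the following; the monitor host, pool and image names are placeholders, and the secret UUID is simply the one quoted above:

<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <auth username='libvirt'>
    <secret type='ceph' uuid='ec9be0c4-a60f-490e-af83-f0f27aaf48c9'/>
  </auth>
  <source protocol='rbd' name='libvirt-pool/vm1.img'>
    <host name='ceph-mon1.example.com' port='6789'/>
  </source>
  <target dev='vda' bus='virtio'/>
</disk>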
2015 Jun 08
2
ceph rbd pool and libvirt manageability (virt-install)
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
Hello everybody,
I created an rbd pool and activated it, but I can't seem to create
volumes in it with virsh or virt-install.
# virsh pool-dumpxml myrbdpool
<pool type='rbd'>
<name>myrbdpool</name>
<uuid>2d786f7a-2df3-4d79-ae60-1535bcf1c6b5</uuid>
<capacity
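Assuming the pool definition itself is fine, volumes can also be created directly with virsh (the volume name and size below are arbitrary examples):

virsh vol-create-as myrbdpool vm1.img 10G --format raw
virsh vol-list myrbdpool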
2013 Sep 24
1
VM with ceph backend snapshot-revert error
Hey guys:
I have a running vm with ceph backend:
root@apc20-005:~# virsh list
> Id Name State
> ----------------------------------
> 56 one-240 running
And *snapshot-create-as* works well:
root@apc20-005:~# virsh snapshot-create-as one-240
> Domain snapshot 1380009353 created
But when executing *snapshot-revert*, an error occurs:
root@apc20-005:~# virsh