Displaying 20 results from an estimated 475 matches for "cephes".
2020 Sep 21
2
ceph vfs can't find specific path
Hello
Using two file servers with samba 4.12.6 running as a CTDB cluster and trying to share a specific path on a cephfs. After loading the config, the ctdb log shows the following error:
ctdb-eventd[248]: 50.samba: ERROR: samba directory "/plm" not available
Here is my samba configuration:
[global]
clustering = Yes
netbios name = FSCLUSTER
realm = INT.EXAMPLE.COM
registry
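The error comes from CTDB's 50.samba event script, which checks that every configured share path exists on the node's local filesystem; a path that only exists inside cephfs (served through the ceph VFS module) fails that check. Below is a minimal sketch of how such a share is commonly defined, together with the usual workaround of skipping the CTDB share check; the share name, ceph user and file locations are assumptions, not taken from the thread:

[plm]
path = /plm
vfs objects = ceph
ceph:config_file = /etc/ceph/ceph.conf
ceph:user_id = samba
kernel share modes = no
read only = no

# depending on the ctdb version, in /etc/ctdb/script.options (or the legacy ctdbd.conf):
CTDB_SAMBA_SKIP_SHARE_CHECK=yes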
2015 Nov 04
1
Libvirt enhancement requests
On 11/04/2015 04:31 AM, Jean-Marc LIGER wrote:
>
>
> On 03/11/2015 00:49, Jean-Marc LIGER wrote:
>>
>> On 02/11/2015 18:28, Johnny Hughes wrote:
>>> On 10/31/2015 04:34 PM, Jean-Marc LIGER wrote:
>>>> Hi Lucian,
>>>>
>>>> It seems to be upstream libvirt-1.2.15-2 with options with_xen and
>>>> with_libxl enabled.
2014 Jan 15
2
Ceph RBD locking for libvirt-managed LXC (someday) live migrations
Hi,
I'm trying to build an active/active virtualization cluster using a Ceph
RBD as backing for each libvirt-managed LXC. I know live migration for LXC
isn't yet possible, but I'd like to build my infrastructure as if it were.
That is, I would like to be sure proper locking is in place for live
migrations to someday take place. In other words, I'm building things as
if I were
2015 Nov 02
2
Libvirt enhancement requests
On 02/11/2015 18:28, Johnny Hughes wrote:
> On 10/31/2015 04:34 PM, Jean-Marc LIGER wrote:
>> Hi Lucian,
>>
>> It seems to be upstream libvirt-1.2.15-2 with options with_xen and
>> with_libxl enabled.
>> http://cbs.centos.org/koji/buildinfo?buildID=1348
>>
> Right, and we can use that version, or a newer one and enable rbd as well.
You might use this
2018 May 16
1
[ceph-users] dovecot + cephfs - sdbox vs mdbox
Thanks Jack.
That's good to know. It is definitely something to consider.
In a distributed storage scenario we might build a dedicated pool for that
and tune the pool as more capacity or performance is needed.
Regards,
Webert Lima
DevOps Engineer at MAV Tecnologia
*Belo Horizonte - Brasil*
*IRC NICK - WebertRLZ*
On Wed, May 16, 2018 at 4:45 PM Jack <ceph at jack.fr.eu.org> wrote:
2024 Jul 31
1
ceph is disabled even if explicitly asked to be enabled
31.07.2024 07:55, Anoop C S via samba wrote:
> On Tue, 2024-07-30 at 21:12 +0300, Michael Tokarev via samba wrote:
>> Hi!
>>
>> Building current samba on debian bullseye with
>>
>>    ./configure --enable-cephfs
>>
>> results in the following output:
>>
>> Checking for header cephfs/libcephfs.h              : yes
>> Checking for
2014 Feb 26
1
Samba and CEPH
Greetings all!
I am in the process of deploying a POC around SAMBA and CEPH. I'm
having some trouble locating concise instructions on how to get them to
work together (without having to mount CEPH on the server first and
then export that mount via SAMBA).
Right now, my blocker is locating ceph.so for x64 CentOS 6.5.
[2014/02/26 15:05:23.923617, 0]
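The ceph VFS module is only built when samba itself is configured with cephfs support, so a quick first check is whether the installed packages ship it at all; a hedged sketch (the path assumes a 64-bit RPM layout):

# does the installed smbd ship the ceph VFS module?
ls /usr/lib64/samba/vfs/ceph.so
# if the file is missing, the packages were built without --enable-cephfs
# (i.e. without libcephfs headers at build time), so a rebuild or a different
# package source is needed before "vfs objects = ceph" can work.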
2015 Jan 08
0
Libvirt guest can't boot up when using ceph as storage backend with SELinux enabled
Hi there,
I ran into a problem where the guest fails to boot when SELinux is enabled and the guest storage is
based on ceph. However, I can boot the guest with qemu directly, and I can also boot it
with SELinux disabled. I am not sure whether this is a libvirt bug or a wrong use case.
1. Enable SELinux
# getenforce && iptables -L
Enforcing
Chain INPUT (policy ACCEPT)
target prot opt source destination
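Since the guest boots with qemu directly and with SELinux disabled, the next step is usually to look for AVC denials against the qemu/svirt process while the failing boot is attempted; a hedged diagnostic sketch (the policy module name is only an example):

# list recent SELinux denials involving qemu
ausearch -m avc -ts recent | grep -i qemu
# if denials show up, generate and load a local policy module for testing
ausearch -m avc -ts recent | audit2allow -M svirt_rbd_local
semodule -i svirt_rbd_local.pp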
2016 May 27
2
migrate local storage to ceph | exchanging the storage system
TLDR: Why is virsh migrate --persistent --live domain
qemu+ssh://root@host/system --xml domain.ceph.xml
not persistent, and what can I do about it?
Hi,
after years of being pleased with local storage and migrating the
complete storage from one host to another, it was time for ceph.
After setting up a cluster and testing it, it's now time to move a lot
of VMs onto that type of storage, without
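One commonly suggested approach (a sketch only, not verified against this libvirt version) is to hand libvirt a separate XML for the persistent definition, or simply to redefine the domain with the ceph-backed XML on the destination once the migration has finished; "host" and domain.ceph.xml follow the naming used in the question:

# newer virsh builds accept a dedicated XML for the persistent definition
virsh migrate --verbose --p2p --copy-storage-all --live --persistent \
  --xml domain.ceph.xml --persistent-xml domain.ceph.xml \
  domain qemu+ssh://root@host/system

# otherwise, make the ceph-backed XML the persistent config on the target afterwards
virsh -c qemu+ssh://root@host/system define domain.ceph.xml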
2018 May 16
0
[ceph-users] dovecot + cephfs - sdbox vs mdbox
Hello Jack,
yes, I imagine I'll have to do some work on tuning the block size on
cephfs. Thanks for the advice.
I knew that with mdbox messages are not removed, but I thought that was
true of sdbox too. Thanks again.
We'll soon do benchmarks of sdbox vs mdbox over cephfs with the bluestore
backend.
We'll have to do some work on how to simulate user traffic, for writes
and reads.
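"Tuning the block size" on CephFS usually means adjusting the file layout of the mail directories; a hedged sketch using the standard layout xattrs on a cephfs mount point (path and sizes are only illustrative):

# inspect the current layout (may report "No such attribute" until one is set explicitly)
getfattr -n ceph.dir.layout /mnt/cephfs/mail
# new files created below the directory inherit its layout; object_size must stay
# a multiple of stripe_unit, so set stripe_unit first (1 MiB here)
setfattr -n ceph.dir.layout.stripe_unit -v 1048576 /mnt/cephfs/mail
setfattr -n ceph.dir.layout.object_size -v 1048576 /mnt/cephfs/mail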
2024 Jul 31
1
ceph is disabled even if explicitly asked to be enabled
On Tue, 2024-07-30 at 21:12 +0300, Michael Tokarev via samba wrote:
> Hi!
>
> Building current samba on debian bullseye with
>
>    ./configure --enable-cephfs
>
> results in the following output:
>
> Checking for header cephfs/libcephfs.h              : yes
> Checking for library cephfs                         : yes
> Checking for ceph_statx in
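When the header and library checks pass but cephfs support still ends up disabled, the failing step is usually the symbol check that the truncated line above refers to. A hedged way to reproduce such a link test by hand (a sketch, not the actual waf check; waf's own reasoning is recorded in config.log under the build directory):

cat > conftest.c <<'EOF'
#include <cephfs/libcephfs.h>
/* link test only: take the address of ceph_statx without calling it */
int main(void) { return ceph_statx ? 0 : 1; }
EOF
cc conftest.c -lcephfs -o conftest && echo "ceph_statx: found"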
2012 Apr 20
44
Ceph on btrfs 3.4rc
After running ceph on XFS for some time, I decided to try btrfs again.
Performance with the current "for-linux-min" branch and big metadata
is much better. The only problem (?) I'm still seeing is a warning
that seems to occur from time to time:
[87703.784552] ------------[ cut here ]------------
[87703.789759] WARNING: at fs/btrfs/inode.c:2103
2016 May 30
1
Re: migrate local storage to ceph | exchanging the storage system
On 05/30/2016 09:07 AM, Dominique Ramaekers wrote:
>> root@host_a:~# virsh migrate --verbose --p2p --copy-storage-all --persistent --
>> change-protection --abort-on-error --undefinesource --live domain
>> qemu+ssh://root@host_b/system --xml domain.ceph.xml
>
> Weird: The domain should be persistent
Well, the domain is persistent. But the changes I made to domain.ceph.xml
2013 Oct 17
2
Create RBD Format 2 disk images with qemu-image
Hello,
I would like to use RBD Format 2 images so I can take advantage of layering.
However, when I use "qemu-img create -f rbd rbd:data/foo 10G", I get format
1 RBD images. (Actually, when I use the "-f rbd" flag, qemu-img core dumps,
but it looks like that feature may have been deprecated [1].)
Is there any way to have qemu-img create RBD Format 2 images, or am I better
off
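A common workaround (a sketch, assuming the pool name "data" from the message) is to create the format 2 image with the rbd tool first and then let qemu attach the existing image through the rbd: protocol:

# recent releases use --image-format; older ones called the option --format
rbd create --image-format 2 --size 10240 data/foo   # 10 GiB, layering-capable
rbd info data/foo                                    # should report "format: 2"
qemu-img info rbd:data/foo                           # qemu then uses the existing image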
2024 Jul 31
1
ceph is disabled even if explicitly asked to be enabled
On Wed, 2024-07-31 at 08:36 +0300, Michael Tokarev via samba wrote:
> 31.07.2024 07:55, Anoop C S via samba wrote:
> > On Tue, 2024-07-30 at 21:12 +0300, Michael Tokarev via samba wrote:
> > > Hi!
> > >
> > > Building current samba on debian bullseye with
> > >
> > >     ./configure --enable-cephfs
> > >
> > > results in
2018 May 16
0
[ceph-users] dovecot + cephfs - sdbox vs mdbox
Hi,
some time back we had similar discussions when we, as an email provider,
discussed moving away from traditional NAS/NFS storage to Ceph.
The problem with POSIX file systems and dovecot is that, e.g. with mdbox,
only around ~20% of the IO operations are READ/WRITE; the rest are
metadata IOs. You will not change this by using CephFS, since it will
basically behave the same way as e.g. NFS.
We
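For reference, the choice being discussed is simply dovecot's mailbox format as set in mail_location; a hedged sketch of the two variants (paths are placeholders):

# sdbox: one message per file
mail_location = sdbox:~/sdbox
# mdbox: many messages per file; space is reclaimed later with "doveadm purge"
mail_location = mdbox:~/mdbox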
2023 Dec 14
2
Gluster -> Ceph
Hi all,
I am looking into ceph and cephfs, and in my
head I am comparing them with gluster.
The way I have been running gluster over the years
is as either a replicated or a replicated-distributed cluster.
The small setup we have had has been a replicated cluster
with one arbiter and two fileservers.
These fileservers have been configured with RAID6, and
that RAID has been used as the brick.
If disaster
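For comparison, the gluster layout described above (two data servers plus one arbiter) is typically created along these lines; volume name, hosts and brick paths are placeholders:

gluster volume create filevol replica 3 arbiter 1 \
  fs1:/data/brick1 fs2:/data/brick1 arb1:/data/brick1
gluster volume start filevol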
2016 Jan 28
2
wiki editing rights request
Hi,
I'd like wiki editing rights to create/update the StorageSIG Ceph pages.
My username is FrançoisCami
and the subject of my future Wiki contributions is going to be Ceph
(what else?).
Proposed locations:
https://wiki.centos.org/Fran%C3%A7oisCami
https://wiki.centos.org/SpecialInterestGroup/Storage/
https://wiki.centos.org/SpecialInterestGroup/Storage/Ceph
2014 Jan 16
0
Re: Ceph RBD locking for libvirt-managed LXC (someday) live migrations
On Wed, Jan 15, 2014 at 05:47:35PM -0500, Joshua Dotson wrote:
> Hi,
>
> I'm trying to build an active/active virtualization cluster using a Ceph
> RBD as backing for each libvirt-managed LXC. I know live migration for LXC
> isn't yet possible, but I'd like to build my infrastructure as if it were.
> That is, I would like to be sure proper locking is in place for
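For QEMU-managed guests, libvirt's own lock manager (virtlockd) is the usual answer to this kind of locking question; whether the same mechanism covers LXC domains is not confirmed here, so treat this as a hedged sketch for the QEMU case only:

# /etc/libvirt/qemu.conf
lock_manager = "lockd"

# then make sure the lock daemon is enabled and running
systemctl enable virtlockd
systemctl start virtlockd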
2023 Dec 14
2
Gluster -> Ceph
A big RAID array isn't great as a brick: if the array does fail, the larger brick means much longer heal times.
The main question I ask when evaluating storage solutions is, "What happens when it fails?"
With ceph, if the placement database is corrupted, all your data is lost (this happened to my employer once, losing 5PB of customer data). With Gluster, it's just files on disks, easily