similar to: virsh not connecting to libvertd ?

Displaying 20 results from an estimated 1000 matches similar to: "virsh not connecting to libvertd ?"

2023 Jun 12
2
virsh not connecting to libvertd ?
Just found my issue. After I removed the cephfs mounts it worked! I will debug ceph. I assumed because I could touch files on mounted cephfs it was working. Now virsh list works! thanks jerry Lars Kellogg-Stedman > On Tue, Jun 06, 2023 at 04:56:38PM -0400, Jerry Buburuz wrote: >> Recently both virsh stopped talking to the libvirtd. Both stopped within >> a >> few days of
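A quick way to confirm the diagnosis above is to probe the mount with a timeout, since a hung CephFS mount will block any process that touches it, including libvirtd if a storage pool lives there. A minimal sketch; the mount point is an assumption:

    # a stat that never returns points at a stale CephFS mount
    timeout 5 stat /mnt/cephfs || echo "cephfs mount appears hung"
    # lazily detach the stale mount so blocked tools can recover
    umount -l /mnt/cephfs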
2024 Jul 31
1
ceph is disabled even if explicitly asked to be enabled
On Wed, 2024-07-31 at 08:36 +0300, Michael Tokarev via samba wrote: > 31.07.2024 07:55, Anoop C S via samba wrote: > > On Tue, 2024-07-30 at 21:12 +0300, Michael Tokarev via samba wrote: > > > Hi! > > > > > > Building current samba on debian bullseye with > > > > > > ./configure --enable-cephfs > > > > > > results in
2018 May 16
0
[ceph-users] dovecot + cephfs - sdbox vs mdbox
Hello Jack, yes, I imagine I'll have to do some work on tuning the block size on cephfs. Thanks for the advice. I knew that using mdbox, messages are not removed, but I thought that was true in sdbox too. Thanks again. We'll soon do benchmarks of sdbox vs mdbox over cephfs with bluestore backend. We'll have to do some work on how to simulate user traffic, for writes and reads.
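For readers following the removal behaviour discussed here: with mdbox, expunged messages remain in the container files until a purge reclaims the space. A minimal sketch; the username is a placeholder:

    # reclaim space from expunged messages in mdbox storage
    doveadm purge -u user@example.com
    # or run it for all users
    doveadm purge -A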
2018 May 16
0
[ceph-users] dovecot + cephfs - sdbox vs mdbox
Hi, some time back we had similar discussions when we, as an email provider, considered moving away from traditional NAS/NFS storage to Ceph. The problem with POSIX file systems and dovecot is that e.g. with mdbox only around ~20% of the IO operations are READ/WRITE; the rest are metadata IOs. You will not change this by using CephFS since it will basically behave the same way as e.g. NFS. We
2024 Jul 31
1
ceph is disabled even if explicitly asked to be enabled
31.07.2024 07:55, Anoop C S via samba wrote: > On Tue, 2024-07-30 at 21:12 +0300, Michael Tokarev via samba wrote: >> Hi! >> >> Building current samba on debian bullseye with >> >> ./configure --enable-cephfs >> >> results in the following output: >> >> Checking for header cephfs/libcephfs.h : yes >> Checking for
2024 Jul 31
1
ceph is disabled even if explicitly asked to be enabled
On Tue, 2024-07-30 at 21:12 +0300, Michael Tokarev via samba wrote: > Hi! > > Building current samba on debian bullseye with > > ./configure --enable-cephfs > > results in the following output: > > Checking for header cephfs/libcephfs.h : yes > Checking for library cephfs : yes > Checking for ceph_statx in
2018 May 16
1
[ceph-users] dovecot + cephfs - sdbox vs mdbox
Thanks Jack. That's good to know. It is definitely something to consider. In a distributed storage scenario we might build a dedicated pool for that and tune the pool as more capacity or performance is needed. Regards, Webert Lima DevOps Engineer at MAV Tecnologia *Belo Horizonte - Brasil* *IRC NICK - WebertRLZ* On Wed, May 16, 2018 at 4:45 PM Jack <ceph at jack.fr.eu.org> wrote:
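A sketch of what such a dedicated, tunable CephFS data pool could look like; pool name, PG count, filesystem name, and path are assumptions:

    # create a data pool sized for the mail workload and attach it to the fs
    ceph osd pool create mail_data 64
    ceph fs add_data_pool cephfs mail_data
    # pin the mail directory to the new pool via a file layout
    setfattr -n ceph.dir.layout.pool -v mail_data /mnt/cephfs/mail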
2024 Jul 30
1
ceph is disabled even if explicitly asked to be enabled
Hi! Building current samba on debian bullseye with ./configure --enable-cephfs results in the following output: Checking for header cephfs/libcephfs.h : yes Checking for library cephfs : yes Checking for ceph_statx in cephfs : ok Checking for ceph_openat in cephfs : not found Ceph support disabled due to
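The "not found" result can be cross-checked against the installed libcephfs headers; this is only a rough check assuming the usual header location, not the actual configure probe:

    # ceph_openat appears in newer libcephfs releases; if the installed
    # header lacks it, Samba's configure disables vfs_ceph
    grep -n 'ceph_openat' /usr/include/cephfs/libcephfs.h \
        || echo 'ceph_openat missing: libcephfs is likely too old'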
2023 Jun 06
2
virsh not connecting to libvertd ?
I have two identical hypervisors with the same operating system: Ubuntu 22.04.2 LTS. Recently virsh stopped talking to libvirtd on both; they stopped within a few days of each other. Currently if I run: virsh uri virsh version virsh list # virsh list ...nothing, it just hangs. When I ran strace on these broken machines it gets stuck at the same spot: strace virsh list ...
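One way to capture where the hang occurs without wedging the shell; the trace path and timeout are arbitrary:

    # follow children, log syscalls to a file, and give up after 15s
    timeout 15 strace -f -o /tmp/virsh.trace virsh list
    # the final syscalls show where virsh blocks, e.g. a connect() to the
    # libvirt socket or I/O against a stale mount
    tail -n 20 /tmp/virsh.trace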
2018 May 16
2
dovecot + cephfs - sdbox vs mdbox
I'm sending this message to both dovecot and ceph-users ML so please don't mind if something seems too obvious for you. Hi, I have a question for both dovecot and ceph lists and below I'll explain what's going on. Regarding dbox format (https://wiki2.dovecot.org/MailboxFormat/dbox), when using sdbox, a new file is stored for each email message. When using mdbox, multiple
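For readers comparing the two formats, the choice is made via mail_location in dovecot's configuration; a minimal sketch with assumed paths:

    # sdbox: one file per message
    mail_location = sdbox:~/mail
    # mdbox: multiple messages per file (needs periodic doveadm purge)
    mail_location = mdbox:~/mail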
2016 Jan 08
1
Samba & Ceph
On 2016-01-08 at 09:31 -0800, Jeremy Allison wrote: > On Fri, Jan 08, 2016 at 04:26:24PM +0100, Dirk Laurenz wrote: > > Hello List, > > > > has anyone tried to install samba with/on top of a ceph cluster? > > Try compiling and setting up with vfs_ceph. Correct, that's basically it. > Needs some more work, but should work. Some posix features are not quite there
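A minimal vfs_ceph share definition along the lines suggested above; the share name, path, and cephx user are assumptions:

    [cephshare]
        vfs objects = ceph
        path = /
        ceph:config_file = /etc/ceph/ceph.conf
        ceph:user_id = samba
        # the share path is not a kernel filesystem, so:
        kernel share modes = no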
2020 Sep 21
2
ceph vfs can't find specific path
Hello Using two file servers with samba 4.12.6 running as a CTDB cluster and trying to share a specific path on a cephfs. After loading the config the ctdb log shows the following error: ctdb-eventd[248]: 50.samba: ERROR: samba directory "/plm" not available Here is my samba configuration: [global] clustering = Yes netbios name = FSCLUSTER realm = INT.EXAMPLE.COM registry
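If the cephfs path only becomes available after the cluster filesystem is mounted, the 50.samba health check can be relaxed with a ctdb option; the file location may vary by distribution:

    # /etc/ctdb/script.options
    CTDB_SAMBA_SKIP_SHARE_CHECK=yes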
2018 Jan 19
0
Error: Corrupted dbox file
Hello Florent, How did you proceed with the upgrade? Did you follow the recommended steps to upgrade ceph? (mons first, then OSDs, then MDS) Did you stop dovecot before upgrading the MDS in particular? Did you remount the filesystem? Did you upgrade the ceph client too? Give people the complete picture and someone might be able to help you. Ask on the ceph-users list too. Regards, Webert
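After such an upgrade, the daemon versions can be cross-checked from any node with admin access; a minimal sketch:

    # confirm mons, osds and mds all report the expected release
    ceph versions
    # confirm the MDS cluster is up and active
    ceph mds stat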
2024 Aug 04
1
ceph is disabled even if explicitly asked to be enabled
31.07.2024 09:38, Anoop C S via samba wrote: > On Wed, 2024-07-31 at 08:36 +0300, Michael Tokarev via samba wrote: >> The problem is that ceph is disabled by configure even if it is >> explicitly enabled by the command-line switch. Configure should fail >> here instead of continuing, - *that* is the problem. > > This is/was always the situation because building
2018 May 23
0
ceph_vms performance
Hi Tom, On Wed, 23 May 2018 09:15:15 +0200, Thomas Bennett via samba wrote: > Hi, > > I'm testing out ceph_vms vs a cephfs mount with a cifs export. I take it you mean the Ceph VFS module (vfs_ceph)? > I currently have 3 active ceph mds servers to maximise throughput and > when I have configured a cephfs mount with a cifs export, I'm getting > a reasonable
2018 May 23
1
ceph_vms performance
On Wed, May 23, 2018 at 02:13:30PM +0200, David Disseldorp via samba wrote: > Hi Tom, > > On Wed, 23 May 2018 09:15:15 +0200, Thomas Bennett via samba wrote: > > > Hi, > > > > I'm testing out ceph_vms vs a cephfs mount with a cifs export. > > I take it you mean the Ceph VFS module (vfs_ceph)? > > > I currently have 3 active ceph mds servers to
2023 May 09
2
MacOS clients - best options
Hi list, we have migrated a single-node Samba server from Ubuntu Trusty to a 3-node CTDB cluster on Debian Bullseye with Sernet packages. Storage is CephFS. We are running Samba in standalone mode with an LDAP backend. Samba version: sernet-samba 99:4.18.2-2debian11 I don't know if it is relevant, but here's how we have mounted CephFS on the samba nodes: (fstab):/samba /srv/samba ceph
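For comparison, a generic kernel-client CephFS fstab entry might look like the following; monitor host, cephx user, and secret file are assumptions, not the poster's truncated line:

    cephmon:/samba  /srv/samba  ceph  name=samba,secretfile=/etc/ceph/samba.secret,_netdev  0  0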
2018 May 23
3
ceph_vms performance
Hi, I'm testing out ceph_vms vs a cephfs mount with a cifs export. I currently have 3 active ceph mds servers to maximise throughput and when I have configured a cephfs mount with a cifs export, I'm getting reasonable benchmark results. However, when I tried some benchmarking with the ceph_vms module, I only got a third of the comparable write throughput. I'm just wondering if
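A rough way to compare write throughput of the two exports from the same client; the mount point is an assumption, and this is a sanity check rather than a rigorous benchmark:

    # write 1 GiB through each mounted export in turn and compare the rates
    dd if=/dev/zero of=/mnt/share/testfile bs=1M count=1024 conv=fsync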
2019 Jul 12
1
Samba 4.10.6 for rhel7/centos7 rpms
Hi Konstantin, On Fri, 12 Jul 2019, Konstantin Shalygin wrote: > On 7/11/19 8:59 PM, vincent at cojot.name wrote: >> Thank you for the diff. I will review it and merge it today. >> About the missing directories, I think it may be doable to add them to the >> 'ctdb' rpm. As I'm not using ctdb, what should the ownership/permissions >> be for those
2023 May 11
1
MacOS clients - best options
On 5/11/23 15:35, Thomas Hukkelberg via samba wrote: > We have the exact same problem but never really found the underlying > issue and whether it's a cephfs bug, a samba bug or only related to > vfs_fruit. it's a known subtle Ceph issue I'm afraid: https://tracker.ceph.com/issues/50719 If the issue still exists with a newer Ceph version than mentioned in the bug report, please