similar to: Puppet modules for Ceph

Displaying 20 results from an estimated 1000 matches similar to: "Puppet modules for Ceph"

2012 Apr 20
44
Ceph on btrfs 3.4rc
After running ceph on XFS for some time, I decided to try btrfs again. Performance with the current "for-linux-min" branch and big metadata is much better. The only problem (?) I'm still seeing is a warning that seems to occur from time to time: [87703.784552] ------------[ cut here ]------------ [87703.789759] WARNING: at fs/btrfs/inode.c:2103
2011 Dec 02
3
[PATCH] Btrfs: protect orphan block rsv with spin_lock
We've been seeing warnings coming out of the orphan commit stuff forever from ceph. Turns out it's because we're racing with checking if the orphan block reserve is set, because we clear it outside of the spin_lock. So leave the normal fastpath checks where they are, but take the spin_lock and _recheck_ to make sure we haven't had an orphan block rsv added in
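Not the kernel code, just a toy Python illustration (all names invented) of the pattern the patch describes: keep the cheap lock-free fast-path check, but only trust the result after re-checking under the lock.

    import threading

    class OrphanState:
        """Toy stand-in for the inode's orphan bookkeeping (names invented)."""

        def __init__(self):
            self.lock = threading.Lock()
            self.orphan_rsv = None

        def set_orphan_rsv(self, rsv):
            with self.lock:
                if self.orphan_rsv is None:
                    self.orphan_rsv = rsv

        def clear_orphan_rsv(self):
            # Fast path: a lock-free peek, cheap but only advisory.
            if self.orphan_rsv is None:
                return None
            with self.lock:
                # Re-check under the lock: another thread may have added or
                # cleared the reservation between the peek and taking the lock.
                rsv, self.orphan_rsv = self.orphan_rsv, None
                return rsv

The point of the patch is exactly this split: the unlocked check keeps the common case cheap, while the locked re-check removes the race the warnings were pointing at.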
2012 Jun 08
2
Best practices to switch from BIND to NSD
Hi, I'm a sysadmin currently working for a French hosting company. We provide DNS services to our customers, and at the moment we are using BIND on Debian servers. BIND is good software, but we don't need a recursive resolver for our public DNS, and we needed better security than what BIND provides. So I suggested replacing BIND with another DNS server. NSD appears to be the
2023 Dec 14
2
Gluster -> Ceph
Hi all, I am looking into Ceph and CephFS, and in my head I am comparing them with Gluster. The way I have been running Gluster over the years is either replicated or replicated-distributed clusters. The small setup we have had is a replicated cluster with one arbiter and two fileservers. These fileservers have been configured with RAID6, and that RAID has been used as the brick. If disaster
2023 Dec 14
2
Gluster -> Ceph
A big RAID isn't great as a brick. If the array does fail, the larger brick means much longer heal times. The main question I ask when evaluating storage solutions is, "what happens when it fails?" With Ceph, if the placement database is corrupted, all your data is lost (it happened to my employer once, losing 5PB of customer data). With Gluster, it's just files on disks, easily
2006 Feb 22
1
Gram-Charlier series
Good day everyone, I want to use the Gram-Charlier series expansion to model some data. To do that, I need functions to: 1) Calculate 'n' moments from given data 2) Transform 'n' moments to 'n' central moments, or 3) Transform 'n' moments to 'n' cumulants 4) Calculate a number of Hermite polynomials Are there R-functions to do any of the above?
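Nothing below is from the thread; it is a rough sketch of the four steps in Python/NumPy (R has equivalents in packages such as 'moments' and 'orthopolynom', but the helper names here are illustrative only).

    import numpy as np
    from numpy.polynomial.hermite_e import hermeval   # probabilists' Hermite polynomials He_n

    def raw_moments(x, n):
        """Raw moments m_1..m_n of the sample x."""
        x = np.asarray(x)
        return [np.mean(x ** k) for k in range(1, n + 1)]

    def central_moments(x, n):
        """Central moments mu_1..mu_n of the sample x."""
        x = np.asarray(x)
        return [np.mean((x - x.mean()) ** k) for k in range(1, n + 1)]

    def first_four_cumulants(x):
        """k1..k4 expressed through the central moments."""
        mu = central_moments(x, 4)
        return [np.mean(x), mu[1], mu[2], mu[3] - 3 * mu[1] ** 2]

    # He_3 evaluated on a grid: the coefficient vector [0, 0, 0, 1] selects the 3rd polynomial
    he3 = hermeval(np.linspace(-2, 2, 5), [0, 0, 0, 1])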
2020 Sep 21
2
ceph vfs can't find specific path
Hello, Using two file servers with Samba 4.12.6 running as a CTDB cluster and trying to share a specific path on a CephFS. After loading the config, the ctdb log shows the following error: ctdb-eventd[248]: 50.samba: ERROR: samba directory "/plm" not available Here is my Samba configuration: [global] clustering = Yes netbios name = FSCLUSTER realm = INT.EXAMPLE.COM registry
2014 Feb 26
1
Samba and CEPH
Greetings all! I am in the process of deploying a POC around Samba and Ceph. I'm having some trouble locating concise instructions on how to get them to work together (without having to mount Ceph on the machine first and then exporting that mount via Samba). Right now, my blocker is locating ceph.so for x64 CentOS 6.5. [2014/02/26 15:05:23.923617, 0]
2016 May 30
1
Re: migrate local storage to ceph | exchanging the storage system
On 05/30/2016 09:07 AM, Dominique Ramaekers wrote: >> root@host_a:~# virsh migrate --verbose --p2p --copy-storage-all --persistent --change-protection --abort-on-error --undefinesource --live domain qemu+ssh://root@host_b/system --xml domain.ceph.xml > Weird: The domain should be persistent Well, the domain is persistent. But the changes I did to domain.ceph.xml
2006 Jun 30
1
Empirical CDF
Good day everyone, I want to assess the error when fitting a Gram-Charlier CDF to some data 'ws', that is, I want to calculate: Err = |ecdf(ws) - GCh_ser(ws)| The problem is, I cannot get the F(x) values from the ecdf. 'summary(ecdf(ws))' returns some of the x-axis values, but how do you get the F(x) values? Thank you for any help you can provide. Regards, Augusto
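In R, ecdf(ws) returns a function, so Fn <- ecdf(ws); Fn(ws) gives the F(x) values directly. Below is a small Python sketch of the same computation, not from the thread; the Gram-Charlier CDF is left as a placeholder with an invented name.

    import numpy as np

    def ecdf_values(ws, x):
        """Empirical CDF of the sample ws, evaluated at the points x."""
        ws = np.sort(np.asarray(ws))
        # fraction of sample points <= each query point
        return np.searchsorted(ws, x, side="right") / ws.size

    ws = np.random.normal(size=1000)
    F = ecdf_values(ws, ws)
    # err = np.abs(F - gram_charlier_cdf(ws))   # gram_charlier_cdf(): the poster's fitted CDF, name invented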
2016 May 27
2
migrate local storage to ceph | exchanging the storage system
TL;DR: Why is virsh migrate --persistent --live domain qemu+ssh://root@host/system --xml domain.ceph.xml not persistent, and what can I do about it? Hi, after years of being pleased with local storage and migrating the complete storage from one host to another, it was time for Ceph. After setting up a cluster and testing it, it's time now to move a lot of VMs onto that type of storage, without
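One possible angle, sketched with the libvirt Python bindings rather than virsh and not a confirmed fix: newer libvirt releases expose a separate persistent-XML migration parameter, so the definition saved on the destination can also be the Ceph-backed one, not just the live configuration that --xml alters. Host and domain names below are taken from the thread or invented.

    import libvirt

    src = libvirt.open("qemu:///system")
    dst = libvirt.open("qemu+ssh://root@host_b/system")

    with open("domain.ceph.xml") as f:
        ceph_xml = f.read()

    params = {
        libvirt.VIR_MIGRATE_PARAM_DEST_XML: ceph_xml,     # config for the running guest
        libvirt.VIR_MIGRATE_PARAM_PERSIST_XML: ceph_xml,  # config written to the destination's disk
    }
    flags = (libvirt.VIR_MIGRATE_LIVE
             | libvirt.VIR_MIGRATE_PERSIST_DEST
             | libvirt.VIR_MIGRATE_UNDEFINE_SOURCE
             | libvirt.VIR_MIGRATE_NON_SHARED_DISK)       # the --copy-storage-all part

    src.lookupByName("domain").migrate3(dst, params, flags)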
2014 Jan 15
2
Ceph RBD locking for libvirt-managed LXC (someday) live migrations
Hi, I'm trying to build an active/active virtualization cluster using a Ceph RBD as backing for each libvirt-managed LXC. I know live migration for LXC isn't yet possible, but I'd like to build my infrastructure as if it were. That is, I would like to be sure proper locking is in place for live migrations to someday take place. In other words, I'm building things as if I were
2016 Feb 01
2
virsh, virt-filesystems, guestmount, virt-install not working well with ceph rbd yet?
Hello everybody, This is a cross-post to libvirt-users, libguestfs and ceph-users. I came back from FOSDEM 2016, my 7th year or so, and saw the awesome development around virtualization going on, and want to thank everybody for their contributions. I saw presentations from oVirt, OpenStack and quite a few great Red Hat people, just like in previous years. I personally have been
2007 Jun 15
13
API of scriptaculous
Hi all, Is there an API anywhere describing the different methods of the various Scriptaculous JS files? Thanks for your good work on Prototype and Scriptaculous!! -- Cyril
2013 Jun 07
1
Re: [ceph-users] Setting RBD cache parameters for libvirt+qemu
On Jun 7, 2013, at 5:01 PM, Josh Durgin <josh.durgin@inktank.com> wrote: > On 06/07/2013 02:41 PM, John Nielsen wrote: >> I am running some qemu-kvm virtual machines via libvirt using Ceph RBD as the back-end storage. Today I was testing an update to libvirt-1.0.6 on one of my hosts and discovered that it includes this change: >> [libvirt] [PATCH] Forbid use of ':'
2015 Oct 12
3
[ovirt-users] CEPH rbd support in EL7 libvirt
On 12/10/15 10:13, Nux! wrote: > Hi Nir, > > I have not tried to use oVirt with Ceph; my question was about > libvirt and I was directed to ask the question here, sorry for the > noise; I understand libvirt is not really the oVirt people's concern. > > The thing is, qemu can do Ceph RBD in EL7 but libvirt does not, > although
2018 May 27
1
Using libvirt to access Ceph RBDs with Xen
Hi everybody, my background: I've been doing Xen for 10+ years, many of them with DRBD for high availability; for some time now I have preferred GlusterFS with FUSE as replicated storage, where I place the image files for the VMs. In my current project we started (successfully) with Xen/GlusterFS too, but because the provider where we placed the servers uses Ceph widely, we decided to
2015 Jun 08
2
ceph rbd pool and libvirt manageability (virt-install)
Hello everybody, I created an rbd pool and activated it, but I can't seem to create volumes in it with virsh or virt-install. # virsh pool-dumpxml myrbdpool <pool type='rbd'> <name>myrbdpool</name> <uuid>2d786f7a-2df3-4d79-ae60-1535bcf1c6b5</uuid> <capacity
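For comparison, a minimal sketch (not from the thread, volume name invented) of creating a volume in that pool through the libvirt Python bindings, which is one way to sidestep virt-install; whether virt-install can target the rbd pool directly is exactly the open question here.

    import libvirt

    vol_xml = """
    <volume>
      <name>vm01-disk0</name>
      <capacity unit='GiB'>20</capacity>
    </volume>
    """

    conn = libvirt.open("qemu:///system")
    pool = conn.storagePoolLookupByName("myrbdpool")
    vol = pool.createXML(vol_xml, 0)
    print(vol.path())   # the rbd path a <disk> element could then reference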
2012 Oct 24
2
Ceph samba size reporting troubles
Dear development team, I want to share a massive storage pool built with Ceph via Samba with Windows workstations. All works well. My problem, though, is that in Windows the Ceph storage size statistics are wrong: instead of seeing a 44TB hard drive I see a 176GB hard drive. Under Linux that issue doesn't show; the sizes are reported properly. I investigated around and it seems that the problem