Displaying 20 results from an estimated 1000 matches similar to: "An idea: rsyncfs, an rsync-based real-time replicated filesystem"
2005 Apr 13
5
An idea: rsyncfs, an rsync-based real-time replicated filesystem
This is only my second email to the rsync mailing list. My first was sent
under the title "Re: TODO hardlink performance optimizations" on Jan 3,
2004. The response of the rsync developers to that email was remarkable
(in my opinion). I felt that the rsync performance enhancements that
resulted from the ensuing discussion and code improvements were so
consequential that I dared not
2007 Jul 12
1
rsyncd.conf missing option akin to --one-file-system
It seems to me that rsyncd.conf does not provide an option akin to rsync's
--one-file-system command line argument. If that is true, it seems like a
bug of omission, as I now face a use case where I need it.
Is there perhaps some technical reason for the omission?
Thanks,
--
Lester Hightower
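For anyone hitting the same gap: the flag can still be applied from the
client side, and an rsyncd.conf module can approximate it with excludes.
A minimal sketch, with illustrative paths and a hypothetical module name
"backup":

    # client side: -x / --one-file-system keeps the copy on one filesystem
    rsync -ax /srv/data/ server::backup/data/

    # daemon side: rsyncd.conf has no one-file-system option, but known
    # mount points can be excluded per module as a partial substitute
    [backup]
        path = /srv/data
        exclude = /proc /sys /mnt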
2003 Oct 01
0
[releng_4 tinderbox] failure on i386/i386
TB --- 2003-10-02 04:44:07 - starting RELENG_4 tinderbox run for i386/i386
TB --- 2003-10-02 04:44:07 - checking out the source tree
TB --- cd /home/des/tinderbox/RELENG_4/i386/i386
TB --- /usr/bin/cvs -f -R -q -d/home/ncvs update -Pd -rRELENG_4 src
TB --- 2003-10-02 04:49:12 - building world
TB --- cd /home/des/tinderbox/RELENG_4/i386/i386/src
TB --- /usr/bin/make -B buildworld
>>>
2003 Jul 20
0
[-STABLE tinderbox] failure on i386/pc98
TB --- 2003-07-21 05:27:28 - starting RELENG_4 tinderbox run for i386/pc98
TB --- 2003-07-21 05:27:28 - checking out the source tree
TB --- cd /home/des/tinderbox/RELENG_4/i386/pc98
TB --- /usr/bin/cvs -f -R -q -d/home/ncvs update -Pd -rRELENG_4 src
TB --- 2003-07-21 05:33:30 - building world
TB --- cd /home/des/tinderbox/RELENG_4/i386/pc98/src
TB --- /usr/bin/make -B buildworld
>>>
2003 Oct 01
0
[releng_4 tinderbox] failure on alpha/alpha
TB --- 2003-10-02 04:00:01 - starting RELENG_4 tinderbox run for alpha/alpha
TB --- 2003-10-02 04:00:01 - checking out the source tree
TB --- cd /home/des/tinderbox/RELENG_4/alpha/alpha
TB --- /usr/bin/cvs -f -R -q -d/home/ncvs update -Pd -rRELENG_4 src
TB --- 2003-10-02 04:07:38 - building world
TB --- cd /home/des/tinderbox/RELENG_4/alpha/alpha/src
TB --- /usr/bin/make -B buildworld
>>>
2014 Jun 27
1
geo-replication status faulty
Venky Shankar, can you follow up on these questions? I too have this issue and cannot resolve the reference to '/nonexistent/gsyncd'.
As Steve mentions, the nonexistent reference in the logs looks like the culprit, especially since the ssh command being attempted is printed on an earlier line with the incorrect remote path.
I have followed the configuration steps as documented in
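In similar reports, the '/nonexistent/gsyncd' path came from the session's
remote-gsyncd setting, and pointing it at the real gsyncd binary on the
slave cleared the fault. A hedged sketch with illustrative volume and host
names (the exact option spelling may vary by release):

    gluster volume geo-replication mastervol slavehost::slavevol \
        config remote-gsyncd /usr/libexec/glusterfs/gsyncd
    gluster volume geo-replication mastervol slavehost::slavevol start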
2012 Apr 12
1
CentOS 6.2 anaconda bug?
I have a kickstart file with the following partitioning directives:
part /boot --fstype ext3 --onpart=sda1
part pv.100000 --onpart=sda2 --noformat
volgroup vol0 pv.100000 --noformat
logvol / --vgname=vol0 --name=lvol1 --useexisting --fstype=ext4
logvol /tmp --vgname=vol0 --name=lvol2 --useexisting --fstype=ext4
logvol swap --vgname=vol0 --name=lvol3 --useexisting
logvol /data --vgname=vol0
2006 May 30
1
Cannot remove Maildir folder
Hi,
I have been using dovecot 1.0 beta 8 on Debian Sarge for a couple of
days and have run into some problems when removing folders.
As an IMAP client I use Thunderbird 1.5 and IMAP folders are on NFS.
Here are the steps to reproduce the problem:
1. Run Thunderbird as usual
2. Create folder
3. Remove folder - In my case folder is not removed but moved to Trash
(Thunderbird setting)
4. I go to
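For context, with Dovecot's default Maildir++ layout the Trash step is an
IMAP folder rename, which maps to renaming a dot-directory on the NFS
share; roughly (folder name illustrative):

    Maildir/.foo/          # IMAP folder "foo" (cur/ new/ tmp/ inside)
    Maildir/.Trash.foo/    # the same folder after a move into Trash

A failed rename or unlink of these directories over NFS (stale handles,
silly-renamed .nfs* files) is a common reason the final delete fails.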
2017 Dec 18
2
Upgrading from Gluster 3.8 to 3.12
Hi,
I have a cluster of 10 servers all running Fedora 24 along with
Gluster 3.8. I'm planning on doing rolling upgrades to Fedora 27 with
Gluster 3.12. I saw the documentation and did some testing but I
would like to run my plan through some (more?) educated minds.
The current setup is:
Volume Name: vol0
Distributed-Replicate
Number of Bricks: 2 x (2 + 1) = 6
Bricks:
Brick1:
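For the rolling part itself, the usual per-server loop looks roughly like
the sketch below (package manager and volume name are illustrative; this
simplifies the documented procedure, which also covers quorum checks):

    # repeat on one server at a time
    systemctl stop glusterd
    pkill glusterfsd                 # stop any remaining brick processes
    dnf upgrade 'glusterfs*'         # pull in the 3.12 packages
    systemctl start glusterd
    gluster volume heal vol0 info    # wait until no unhealed entries remain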
2017 Dec 19
0
Upgrading from Gluster 3.8 to 3.12
On Tue, Dec 19, 2017 at 1:10 AM, Ziemowit Pierzycki <ziemowit at pierzycki.com>
wrote:
> Hi,
>
> I have a cluster of 10 servers all running Fedora 24 along with
> Gluster 3.8. I'm planning on doing rolling upgrades to Fedora 27 with
> Gluster 3.12. I saw the documentation and did some testing but I
> would like to run my plan through some (more?) educated minds.
>
2013 Jul 10
1
Re: Libvirt and Glusterfs
On 07/09/2013 08:18 PM, Olivier Mauras wrote:
> On 2013-07-09 09:40, Vijay Bellur wrote:
>
>>> Hi, I'm trying to use qemu native glusterfs integration with libvirt.
>>> It's all working well from the qemu side, but libvirt fails to start
>>> a domain with a gluster drive or attach a drive. I have exactly the
>>> same error as this person:
2013 Jul 07
0
Libvirt and Glusterfs
Hi,
I'm trying to use qemu native glusterfs integration with libvirt. It's
all working well from the qemu side, but libvirt fails to start a domain
with a gluster drive or attach a drive. I have exactly the same error as
this person:
https://www.redhat.com/archives/libvirt-users/2013-April/msg00204.html
[1]
I use qemu 1.5.1 with glusterfs 3.4 beta 4 and libvirt 1.0.6.
[root@bbox ~]#
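For comparison, a domain disk definition that works with the native
gluster driver looks roughly like this (volume, image, host, and domain
names are illustrative):

    cat > gluster-disk.xml <<'EOF'
    <disk type='network' device='disk'>
      <driver name='qemu' type='raw'/>
      <source protocol='gluster' name='vol0/guest.img'>
        <host name='bbox' port='24007'/>
      </source>
      <target dev='vdb' bus='virtio'/>
    </disk>
    EOF
    virsh attach-device mydomain gluster-disk.xml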
2013 Jul 09
0
Re: Libvirt and Glusterfs
On 2013-07-09 09:40, Vijay Bellur wrote:
>> Hi, I'm trying to use qemu native glusterfs integration with libvirt.
>> It's all working well from the qemu side, but libvirt fails to start a
>> domain with a gluster drive or attach a drive. I have exactly the same
>> error as this person:
>> https://www.redhat.com/archives/libvirt-users/2013-April/msg00204.html
>> [1] I use qemu 1.5.1 with glusterfs
2013 Jul 09
2
Re: Libvirt and Glusterfs
> Hi,
>
> I'm trying to use qemu native glusterfs integration with libvirt. It's
> all working well from the qemu side, but libvirt fails to start a domain
> with a gluster drive or attach a drive.
> I have exactly the same error as this person:
> https://www.redhat.com/archives/libvirt-users/2013-April/msg00204.html
>
> I use qemu 1.5.1 with glusterfs 3.4 beta 4
2008 May 05
2
I want to help translating articles
Hello to everyone on the list; let me introduce myself:
My name is Lester Espinosa Martínez. I live in Cienfuegos, a city in
Cuba. I am interested in helping the CentOS Wiki by translating articles
from English to Spanish.
I learned about the CentOS Wiki thanks to a great friend of mine, Alain
Reguera Delgado, and I also collaborated on translating articles for the
previous CentOS Wiki into Spanish.
My
2018 Mar 01
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
I'm sorry for my last incomplete message.
Below the output of both volumes:
[root at stor1t ~]# gluster volume rebalance volumedisk1 status
    Node    Rebalanced-files    size    scanned    failures    skipped    status    run time in h:m:s
    ---------    -----------    -----------    -----------    -----------
2018 Mar 01
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Jose,
On 28 February 2018 at 22:31, Jose V. Carrión <jocarbur at gmail.com> wrote:
> Hi Nithya,
>
> My initial setup was composed of 2 similar nodes: stor1data and stor2data.
> A month ago I expanded both volumes with a new node: stor3data (2 bricks
> per volume).
> Of course, after adding the new peer with its bricks, I ran the 'balance
> force' operation.
2010 Apr 30
1
Possible bug in POSIX classes for R 2.11.0?
To the R development team;
I found an unusual behavior in zoo when I upgraded to R 2.11.0: it abruptly terminated when I performed certain operations on large zoo objects. I sent an e-mail to Achim Zeileis, and he said this was a potential bug that I should report to the R development team. The details are given in the thread below. Basically, I can crash R with this code:
library(zoo)
2018 Feb 28
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Jose,
On 28 February 2018 at 18:28, Jose V. Carrión <jocarbur at gmail.com> wrote:
> Hi Nithya,
>
> I applied the workaround for this bug and now df shows the right size:
>
That is good to hear.
> [root at stor1 ~]# df -h
> Filesystem Size Used Avail Use% Mounted on
> /dev/sdb1 26T 1,1T 25T 4% /mnt/glusterfs/vol0
> /dev/sdc1
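For the record, the workaround referenced here (for the 3.12 df capacity
bug) centered on the shared-brick-count value that glusterd writes into
the brick volfiles; a hedged outline, with illustrative paths:

    # inspect the generated value on every node
    grep -r shared-brick-count /var/lib/glusterd/vols/vol0/
    # bricks on distinct filesystems should report shared-brick-count 1;
    # after correcting it, restart glusterd on the affected nodes
    systemctl restart glusterd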
2012 May 22
0
Announce: Hiera 1.0.0rc3 Available
Hiera 1.0.0rc3 is a feature release candidate designed to accompany Puppet 3.0.
Changes to Hiera since 1.0.0rc2 were mainly to ease packaging and
improve testing.
Downloads are available:
* Source http://downloads.puppetlabs.com/hiera/hiera-1.0.0rc3.tar.gz
It includes contributions from the following people:
Kelsey Hightower and Matthaus Litteken
See the Verifying Puppet Download section at:
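The verification step usually amounts to checking the tarball against the
published digest or signature; a minimal sketch (the digest comparison is
an assumption about what that section describes):

    wget http://downloads.puppetlabs.com/hiera/hiera-1.0.0rc3.tar.gz
    sha256sum hiera-1.0.0rc3.tar.gz   # compare with the published checksum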