Displaying 20 results from an estimated 1000 matches similar to: "geo-replication and rsync"
2013 Nov 25
1
Geo Replication Hooks
Hi,
I have created a proposal for the implementation of Geo Replication Hooks.
See here:
http://www.gluster.org/community/documentation/index.php/Features/Geo_Replication_Hooks
Any comments, thoughts, etc would be great.
Fred
2013 Aug 28
1
GlusterFS extended attributes, "system" namespace
Hi,
I'm running GlusterFS 3.3.2 and I'm having trouble getting geo-replication to work. I think it is a problem with extended attributes. I'm using ssh with a normal user to perform the replication.
In the server log in /var/log/glusterfs/geo-replication/VOLNAME/ssh?.log I'm getting an error "RepceClient: call ?:?:? (xtime) failed on peer with OSError". On the
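A quick sanity check for this kind of failure (brick path is hypothetical; try it both as root and as the replication user) is to dump the extended attributes directly and see whether the xtime markers are readable:

  # dump all xattrs, hex-encoded, from a brick directory
  getfattr -d -m . -e hex /export/brick1
  # a non-root user cannot read trusted.* xattrs; geo-replication then
  # falls back to the system.* namespace, which many kernels refuse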
2012 Dec 03
1
configure: error: python does not have ctypes support
Hi,
I am trying to install glusterfs 3.3.1 from source code. At the time of
configuration i am getting the following error
configure: error: python does not have ctypes support
On my system python version is: 2.4.3
Kindly advice on fixing the error.
Thanks and Regards
Neetu Sharma
Bangalore
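For context: ctypes only entered the standard library in Python 2.5, so a stock 2.4.3 will always fail this check unless ctypes is installed separately (some distributions offer a python-ctypes package). A quick way to verify:

  # fails with ImportError on a Python without ctypes
  python -c "import ctypes; print ctypes"

Otherwise, building against a Python >= 2.5 resolves the error.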
2012 Sep 24
2
Recommended settings for geo-replication for folders with lots of files
Hi,
I'm currently using GlusterFS in an installation and it works quite well.
The only problem I'm currently facing is syncing files to the slave for folders
with lots of files (e.g. images). Is there some recommended way to speed
this up? Synchronisation takes quite some time if there are changes in
one of these folders. I tried searching the Gluster.org website and via
Google for a
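Depending on the release, a few geo-replication config options can help with many-small-file trees; a sketch (volume and slave names are placeholders, and the availability of each option varies by version, so check "gluster volume geo-replication ... config" on your release first):

  # run more sync workers in parallel
  gluster volume geo-replication myvol slavehost::slavevol config sync-jobs 4
  # pass extra flags straight through to rsync
  gluster volume geo-replication myvol slavehost::slavevol config rsync-options "--whole-file"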
2012 Mar 20
1
issues with geo-replication
Hi all. I'm looking to see if anyone can tell me this is already
working for them or if they wouldn't mind performing a quick test.
I'm trying to set up a geo-replication instance on 3.2.5 from a local
volume to a remote directory. This is the command I am using:
gluster volume geo-replication myvol ssh://root@remoteip:/data/path start
I am able to perform a geo-replication
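For reference, checking the session afterwards on 3.2.x uses the same master/slave pair with the status subcommand:

  gluster volume geo-replication myvol ssh://root@remoteip:/data/path status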
2013 Mar 20
2
Geo-replication broken in 3.4 alpha2?
Dear all,
I'm running GlusterFS 3.4 alpha2 together with oVirt 3.2. This is solely a test system and it doesn't have much data or anything important in it. Currently it has only 2 VMs running and disk usage is around 15 GB. I have been trying to set up geo-replication for disaster recovery testing. For geo-replication I did the following:
All machines are running CentOS 6.4 and using
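The usual prerequisites are roughly the following (hostnames and volume names are placeholders; the exact procedure differs between releases):

  # passwordless ssh from the master node to the slave
  ssh-keygen -t rsa
  ssh-copy-id root@slavehost
  # then start the session from the master side
  gluster volume geo-replication mastervol slavehost::slavevol start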
2017 Jun 29
4
How to shutdown a node properly ?
The init.d/systemd script doesn't kill gluster automatically on
reboot/shutdown?
On 29 Jun 2017 5:16 PM, "Ravishankar N" <ravishankar at redhat.com> wrote:
> On 06/29/2017 08:31 PM, Renaud Fortier wrote:
>
> Hi,
>
> Every time I shut down a node, I lose access (from clients) to the volumes
> for 42 seconds (network.ping-timeout). Is there a special way to
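The 42 seconds correspond to the default network.ping-timeout. It can be lowered per volume, at the cost of more spurious disconnects under load (the value below is only illustrative):

  gluster volume set myvol network.ping-timeout 10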
2017 Sep 08
3
GlusterFS as virtual machine storage
2017-09-08 13:44 GMT+02:00 Pavel Szalbot <pavel.szalbot at gmail.com>:
> I did not test SIGKILL because I suppose if graceful exit is bad, SIGKILL
> will be as well. This assumption might be wrong. So I will test it. It would
> be interesting to see client to work in case of crash (SIGKILL) and not in
> case of graceful exit of glusterfsd.
Exactly. If this happens, probably there
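For what it's worth, the test being discussed is roughly this, run on one brick server while a client keeps writing (the SIGKILL variant appears verbatim elsewhere in this thread):

  # hard kill, as in a real crash
  killall -9 glusterfsd
  # graceful stop, for comparison
  killall glusterfsd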
2017 Oct 04
2
data corruption - any update?
On Wed, Oct 4, 2017 at 10:51 AM, Nithya Balachandran <nbalacha at redhat.com>
wrote:
>
>
> On 3 October 2017 at 13:27, Gandalf Corvotempesta <
> gandalf.corvotempesta at gmail.com> wrote:
>
>> Any update about multiple bugs regarding data corruption with
>> sharding enabled?
>>
>> Is 3.12.1 ready to be used in production?
>>
>
>
2017 Sep 23
3
EC 1+2
Is it possible to create a dispersed volume 1+2? (Almost the same as replica
3, the same as RAID-6)
If yes, how many servers do I have to add in the future to expand the storage?
1 or 3?
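For reference, gluster requires the total brick count to be greater than twice the redundancy, so a 1+2 layout (1 data, 2 redundancy) is not accepted; the smallest dispersed layout is 2+1. A sketch with placeholder hosts and brick paths:

  # 2 data bricks + 1 redundancy brick across 3 servers
  gluster volume create dispvol disperse 3 redundancy 1 \
      srv1:/bricks/b1 srv2:/bricks/b1 srv3:/bricks/b1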
2017 Jun 29
0
How to shutdown a node properly ?
On Thu, Jun 29, 2017 at 12:41 PM, Gandalf Corvotempesta <
gandalf.corvotempesta at gmail.com> wrote:
> The init.d/systemd script doesn't kill gluster automatically on
> reboot/shutdown?
>
Sounds less like an issue with how it's shut down and more like an issue with
how it's mounted, perhaps. My gluster fuse mounts seem to handle any one node
being shut down just fine as long as
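A fuse mount that knows about more than one server is typically what makes this work; a sketch (host and volume names are placeholders, and the option spelling varies between releases, e.g. backupvolfile-server in older ones):

  mount -t glusterfs -o backup-volfile-servers=server2:server3 server1:/myvol /mnt/gluster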
2017 Sep 08
4
GlusterFS as virtual machine storage
Gandalf, SIGKILL (killall -9 glusterfsd) did not stop I/O after a few
minutes. SIGTERM, on the other hand, causes a crash, but this time it is
not a read-only remount, but around 10 IOPS tops and 2 IOPS on average.
-ps
On Fri, Sep 8, 2017 at 1:56 PM, Diego Remolina <dijuremo at gmail.com> wrote:
> I currently only have a Windows 2012 R2 server VM in testing on top of
> the gluster storage,
2017 Sep 08
0
GlusterFS as virtual machine storage
On Fri, Sep 8, 2017 at 12:48 PM, Gandalf Corvotempesta
<gandalf.corvotempesta at gmail.com> wrote:
> I think this should be considered a bug.
> If you have a server crash, the glusterfsd process obviously doesn't exit
> properly, and thus this could lead to an I/O stop?
I agree with you completely in this.
2017 Sep 23
1
EC 1+2
Already read that.
It seems that I have to use a multiple of 512, so 512*(3-2) is 512.
Seems fine.
On 23 Sep 2017 5:00 PM, "Dmitri Chebotarov" <4dimach at gmail.com> wrote:
> Hi
>
> Take a look at this link (under 'Optimal volumes') for Erasure Coded
> volume optimal configuration
>
> http://docs.gluster.org/Administrator%20Guide/Setting%20Up%20Volumes/
>
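For what it's worth, the arithmetic behind that rule as I read the docs: each data brick stores fragments in 512-byte units, so the optimal write size is 512 * (#bricks - redundancy) bytes. For a 3-brick volume with redundancy 2 that is 512 * (3 - 2) = 512, as above; a 4+2 volume would want multiples of 512 * 4 = 2048.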
2017 Oct 13
1
small files performance
Where did you read 2K IOPS?
Each disk is able to do about 75 IOPS as I'm using SATA disks; getting even
close to 2000 is impossible.
On 13 Oct 2017 9:42 AM, "Szymon Miotk" <szymon.miotk at gmail.com> wrote:
> Depends on what you need.
> 2K IOPS for small file writes is not a bad result.
> In my case I had a system that was just poorly written and it was
>
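As a back-of-the-envelope illustration (disk counts assumed for the example, not taken from the thread): with replica 3 over 12 such SATA disks, aggregate write capacity is roughly (12 / 3) * 75 = 300 IOPS, so reaching 2K would indeed take far more spindles, SSDs, or heavy write caching.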
2018 May 15
4
end-to-end encryption
Hi to all
I was looking at protonmail.com
Is it possible to implement end-to-end encryption with Dovecot, where
server-side there is no private key to decrypt messages?
If I understood properly, on ProtonMail the private key is encrypted with the
user's password, so that only the user is able to decrypt the mailbox.
Anything similar?
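Dovecot's mail-crypt plugin gets close to this model: per-user key pairs whose private key is itself encrypted with the user's password. A minimal configuration sketch (setting names as documented for recent Dovecot versions; verify against your release):

  mail_plugins = $mail_plugins mail_crypt
  plugin {
    mail_crypt_curve = secp521r1
    mail_crypt_save_version = 2
    # refuse to store the user's private key unencrypted
    mail_crypt_require_encrypted_user_key = yes
  }

Note the server still sees the password (and thus the key) at login time, so this is weaker than true end-to-end encryption.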
2016 Oct 27
4
Server migration
On 27 Oct 2016, at 15:29, Tanstaafl <tanstaafl at libertytrek.org> wrote:
>
> On 10/26/2016 2:38 AM, Gandalf Corvotempesta
> <gandalf.corvotempesta at gmail.com> wrote:
>> This is much easier than dovecot replication as I can start immediately with
>> no need to upgrade the old server
>>
>> my only question is: how to manage the email received on the
2011 Jul 25
1
Problem with Gluster Geo Replication, status faulty
Hi,
I've setup Gluster Geo Replication according the manual,
# sudo gluster volume geo-replication flvol
ssh://root@ec2-67-202-22-159.compute-1.amazonaws.com:file:///mnt/slave
config log-level DEBUG
# sudo gluster volume geo-replication flvol
ssh://root@ec2-67-202-22-159.compute-1.amazonaws.com:file:///mnt/slave start
# sudo gluster volume geo-replication flvol
ssh://root@
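When a session sits in the faulty state, the gsyncd log on the master is usually the first place to look; for example (log directory as referenced elsewhere in these threads, with flvol as the volume):

  gluster volume geo-replication flvol \
      ssh://root@ec2-67-202-22-159.compute-1.amazonaws.com:file:///mnt/slave status
  tail -n 50 /var/log/glusterfs/geo-replication/flvol/*.log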
2017 Jun 30
2
How to shutdown a node properly ?
On 06/30/2017 12:40 AM, Renaud Fortier wrote:
>
> On my nodes, when I use the systemd script to kill gluster (service
> glusterfs-server stop), only glusterd is killed. Then I guess the
> shutdown doesn't kill everything!
>
Killing glusterd does not kill the other gluster processes.
When you shut down a node, everything obviously gets killed, but the
client does not get notified
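Some packaged builds ship a helper script for exactly this (its path varies; /usr/share/glusterfs/scripts/stop-all-gluster-processes.sh in some packages). Where it is absent, the same effect can be approximated by hand before a reboot (a sketch, not an official procedure):

  # stop the management daemon first, then the remaining processes
  systemctl stop glusterd      # or: service glusterfs-server stop
  pkill glusterfs              # fuse clients / self-heal daemons
  pkill glusterfsd             # brick processes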
2016 Oct 26
4
Server migration
On 26 Oct 2016 8:30 AM, "Aki Tuomi" <aki.tuomi at dovecot.fi> wrote:
> I would recommend using same major release with replication.
>
> If you are using maildir++ format, it should be enough to copy all the
> maildir files over and start dovecot on new server.
>
This is much easier than dovecot replication as I can start immediately with
no need to upgrade the
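The copy step described above might look like this with rsync (user, host, and mail location are placeholders; run once for the bulk, then again during the cutover to catch up):

  rsync -avH root@oldserver:/var/vmail/ /var/vmail/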