similar to: seeding my georeplication

Displaying 20 results from an estimated 1000 matches similar to: "seeding my georeplication"

2017 Dec 21
1
seeding my georeplication
Thanks for your response (6 months ago!) but I have only just got around to following up on this. Unfortunately, I had already copied and shipped the data to the second datacenter before copying the GFIDs, so I stumbled at the first hurdle! I have been using the scripts in extras/geo-rep that were provided for an earlier version upgrade. With a bit of tinkering, these have given me a file
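For anyone retracing this, the session setup once the data and GFIDs are in place on the slave is the standard geo-replication create/start sequence sketched below; the GFID copying itself is done with the extras/geo-rep helper scripts mentioned above, whose names and arguments vary by release. The volume and host names here are placeholders, not taken from the thread:

    # Assumes master volume "mastervol", slave host "slavehost", slave volume "slavevol".
    # The GFIDs of the pre-copied data must already match on the slave
    # (done with the extras/geo-rep scripts referenced in the message above).
    gluster volume geo-replication mastervol slavehost::slavevol create push-pem force
    gluster volume geo-replication mastervol slavehost::slavevol start
    gluster volume geo-replication mastervol slavehost::slavevol status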
2017 Oct 17
1
Distribute rebalance issues
Nithya, Is there any way to increase the logging level of the brick? There is nothing obvious (to me) in the log (see below for the same time period as the latest rebalance failure). This is the only brick on that server that has disconnects like this. Steve [2017-10-17 02:22:13.453575] I [MSGID: 115029] [server-handshake.c:692:server_setvolume] 0-video-server: accepted client from
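For reference, brick (and client) log verbosity can be raised per volume with the standard diagnostics options; a minimal sketch, assuming the volume is called "video" as the brick log above suggests, and remembering to reset afterwards since DEBUG is very chatty:

    # Raise brick-side logging to DEBUG for the volume
    gluster volume set video diagnostics.brick-log-level DEBUG
    # The client side can be raised the same way
    gluster volume set video diagnostics.client-log-level DEBUG
    # Revert to the defaults once the failure has been captured
    gluster volume reset video diagnostics.brick-log-level
    gluster volume reset video diagnostics.client-log-level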
2017 Oct 17
0
Distribute rebalance issues
On 17 October 2017 at 14:48, Stephen Remde <stephen.remde at gaist.co.uk> wrote: > Hi, > > > I have a rebalance that has failed on one peer twice now. Rebalance logs below (directories anonymised and some irrelevant log lines cut). It looks like it loses connection to the brick, but immediately stops the rebalance on that peer instead of waiting for reconnection - which happens
2017 Oct 17
2
Distribute rebalance issues
Hi, I have a rebalance that has failed on one peer twice now. Rebalance logs below (directories anonymised and some irrelevant log lines cut). It looks like it loses connection to the brick, but immediately stops the rebalance on that peer instead of waiting for reconnection - which happens a second or so later. Is this normal behaviour? So far it has been the same server and the same (remote)
2017 Oct 24
2
active-active georeplication?
Hi everybody, Has glusterfs released a feature named active-active georeplication? If yes, in which version was it released? If not, is this feature planned?
2017 Sep 17
2
georeplication sync daemon
Hi all, I want to know some more detail about glusterfs georeplication, specifically the sync daemon: if 'file A' was mirrored in the slave volume and a change happens to 'file A', how does the sync daemon act? 1. transfer the whole 'file A' to the slave 2. transfer only the changes of 'file A' to the slave Thanks a lot
2017 Sep 19
3
"Input/output error" on mkdir for PPC64 based client
I recently compiled the 3.10-5 client from source on a few PPC64 systems running RHEL 7.3. They are mounting a Gluster volume which is hosted on more traditional x86 servers. Everything seems to be working properly except for creating new directories from the PPC64 clients. The mkdir command gives an "Input/output error" and for the first few minutes the new directory is
2017 Sep 20
0
"Input/output error" on mkdir for PPC64 based client
Looks like it is an issue with architecture compatibility in the RPC layer (i.e., with XDR and how it is used). Just glance through the logs of the client process where you saw the errors; they could give some hints. If you don't understand the logs, share them, so we will try to look into it. -Amar On Wed, Sep 20, 2017 at 2:40 AM, Walter Deignan <WDeignan at uline.com> wrote: > I recently
2017 Sep 20
1
"Input/output error" on mkdir for PPC64 based client
I put the share into debug mode and then repeated the process from a ppc64 client and an x86 client. Weirdly the client logs were almost identical. Here's the ppc64 gluster client log of attempting to create a folder... ------------- [2017-09-20 13:34:23.344321] D [rpc-clnt-ping.c:93:rpc_clnt_remove_ping_timer_locked] (-->
2017 Oct 24
2
active-active georeplication?
Halo replication [1] could be of interest here. This functionality is available since 3.11 and the current plan is to have it fully supported in a 4.x release. Note that Halo replication is built on existing synchronous replication in Gluster and differs from the current geo-replication implementation. Kotresh's response is spot on for the current geo-replication implementation. Regards,
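For the curious, Halo is driven by AFR volume options rather than a geo-replication session; a minimal sketch, with option names as introduced by the 3.11 halo feature (verify them with 'gluster volume set help' on your version; the volume name and values are illustrative):

    # Enable halo on an existing replicated volume ("myvol" is a placeholder)
    gluster volume set myvol cluster.halo-enabled yes
    # Treat only bricks within 10 ms as part of the halo for synchronous writes
    gluster volume set myvol cluster.halo-max-latency 10
    # Keep at least two replicas in the halo even if they exceed the latency cap
    gluster volume set myvol cluster.halo-min-replicas 2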
2017 Oct 24
0
active-active georeplication?
Hi, No, gluster doesn't support active-active geo-replication. It's not planned in the near future. We will let you know when it's planned. Thanks, Kotresh HR On Tue, Oct 24, 2017 at 11:19 AM, atris adam <atris.adam at gmail.com> wrote: > hi everybody, > > Have glusterfs released a feature named active-active georeplication? If > yes, in which version it is released?
2017 Oct 24
0
active-active georeplication?
Thanks for the reply, that was very interesting to me. How can I keep up with news about new glusterfs features? On Tue, Oct 24, 2017 at 5:54 PM, Vijay Bellur <vbellur at redhat.com> wrote: > > Halo replication [1] could be of interest here. This functionality is > available since 3.11 and the current plan is to have it fully supported in > a 4.x release. > > Note that Halo
2018 Jan 15
1
"linkfile not having link" occurrs sometimes after renaming
There are two users u1 & u2 in the cluster. Some files are created by u1, and they are read only for u2. Of course u2 can read these files. Later these files are renamed by u1. Then I switch to the user u2. I find that u2 can't list or access the renamed files. I see these errors in log: [2018-01-15 17:35:05.133711] I [MSGID: 109045] [dht-common.c:2393:dht_lookup_cbk] 25-data-dht:
2018 Jan 25
2
parallel-readdir is not recognized in GlusterFS 3.12.4
By the way, on a slightly related note, I'm pretty sure either parallel-readdir or readdir-ahead has a regression in GlusterFS 3.12.x. We are running CentOS 7 with kernel-3.10.0-693.11.6.el7.x86_64. I updated my servers and clients to 3.12.4 and enabled these two options after reading about them in the 3.10.0 and 3.11.0 release notes. In the days after enabling these two options, all of my
2018 Jan 26
0
parallel-readdir is not recognized in GlusterFS 3.12.4
Can you please test whether it is parallel-readdir or readdir-ahead that gives the disconnects, so we know which one to disable? parallel-readdir does some magic; the performance numbers are in last year's slides: https://events.static.linuxfound.org/sites/events/files/slides/Gluster_DirPerf_Vault2017_0.pdf -v On Thu, Jan 25, 2018 at 8:20 AM, Alan Orth <alan.orth at gmail.com> wrote: > By the way, on a slightly related note, I'm pretty
2018 Jan 24
0
parallel-readdir is not recognized in GlusterFS 3.12.4
Adding Poornima to take a look at it and comment. On Tue, Jan 23, 2018 at 10:39 PM, Alan Orth <alan.orth at gmail.com> wrote: > Hello, > > I saw that parallel-readdir was an experimental feature in GlusterFS > version 3.10.0, became stable in version 3.11.0, and is now recommended for > small file workloads in the Red Hat Gluster Storage Server > documentation[2].
2018 Jan 26
1
parallel-readdir is not recognized in GlusterFS 3.12.4
Dear Vlad, I'm sorry, I don't want to test this again on my system just yet! It caused too much instability for my users and I don't have enough resources for a development environment. The only other variable that changed before the crashes was the group metadata-cache[0], which I enabled the same day as the parallel-readdir and readdir-ahead options: $ gluster volume set homes
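For reference, the three changes discussed in this thread correspond to the volume-set commands below (the volume name "homes" is taken from the message; "group metadata-cache" applies a bundle of md-cache related options). A sketch of enabling them, and of backing one out at a time to isolate the regression Vlad asks about above:

    # The options enabled shortly before the crashes, per the thread
    gluster volume set homes performance.parallel-readdir on
    gluster volume set homes performance.readdir-ahead on
    gluster volume set homes group metadata-cache
    # To find the culprit, disable one option, retest, then the next
    gluster volume set homes performance.parallel-readdir off
    gluster volume set homes performance.readdir-ahead off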
2013 Jul 08
1
Possible to preload data on a georeplication target? First sync taking forever...
I have about 4 TB of data in a Gluster mirror configuration on top of ZFS, mostly consisting of 20KB files. I've added a georeplication target and the sync started ok. The target is using an SSH destination. It ran pretty quickly for a while, but it's taken over 2 weeks to sync just under 1 TB of data to the target server and it appears to be getting slower. The two servers are connected
2017 Oct 19
3
gluster tiering errors
All, I am new to gluster and have some questions/concerns about some tiering errors that I see in the log files. OS: CentOS 7.3.1611 Gluster version: 3.10.5 Samba version: 4.6.2 I see the following (scrubbed): Node 1 /var/log/glusterfs/tier/<vol>/tierd.log: [2017-10-19 17:52:07.519614] I [MSGID: 109038] [tier.c:1169:tier_migrate_using_query_file] 0-<vol>-tier-dht: Promotion failed
2017 Oct 24
2
gluster tiering errors
Milind - Thank you for the response. >> What are the high and low watermarks for the tier set at ?
# gluster volume get <vol> cluster.watermark-hi
Option                        Value
------                        -----
cluster.watermark-hi          90
# gluster volume get <vol> cluster.watermark-low
Option
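For completeness, the watermarks queried above are ordinary volume options on the tiered volume and can be read or adjusted like any other; the values below are illustrative percentages of hot-tier capacity, not recommendations from the thread:

    # Query the current thresholds (as in the message above)
    gluster volume get myvol cluster.watermark-hi
    gluster volume get myvol cluster.watermark-low
    # Lower the high watermark so promotions stop earlier, before the hot tier fills
    gluster volume set myvol cluster.watermark-hi 85
    gluster volume set myvol cluster.watermark-low 60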