search for: remd

Displaying 5 results from an estimated 5 matches for "remd".

2017 Dec 21
1
seeding my georeplication
...this the solution, or is something else wrong? On 27 June 2017 at 06:31, Aravinda <avishwan at redhat.com> wrote: > Answers inline, > > @Kotresh, please add if I missed anything. > > regards > Aravinda VK, http://aravindavk.in > > On 06/23/2017 06:29 PM, Stephen Remde wrote: > > I have a ~600tb distributed gluster volume that I want to start using geo > replication on. > > The current volume is on 6 100tb bricks on 2 servers > > My plan is: > > 1) copy each of the bricks to new arrays on the servers locally > > Before start co...
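For step 1 of the plan quoted above, the brick data would normally be copied in a way that preserves GlusterFS extended attributes and hard links. A minimal sketch, with source and destination brick paths as placeholders (not taken from the thread):

    # Copy one brick onto a new local array, preserving xattrs (e.g. trusted.gfid),
    # ACLs, hard links and sparse files. Run as root so trusted.* xattrs are kept.
    rsync -aHAXS --numeric-ids /data/brick1/ /mnt/new-array/brick1/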
2017 Jun 23
2
seeding my georeplication
...e the new arrays to the new servers 3) create the volume on the new servers using the arrays 4) fix the layout on the new volume 5) start georeplication (which should be relatively small as most of the data should already be there?) Is this likely to succeed? Any advice welcomed. -- Dr Stephen Remde Director, Innovation and Research T: 01535 280066 M: 07764 740920 E: stephen.remde at gaist.co.uk W: www.gaist.co.uk
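Steps 4 and 5 of this plan map onto standard gluster CLI calls. A hedged sketch, with VOLNAME, SLAVEHOST and SLAVEVOL as placeholders rather than names from the thread:

    # 4) fix the layout so new directories hash across all bricks
    gluster volume rebalance VOLNAME fix-layout start

    # 5) set up and start the geo-replication session
    gluster system:: execute gsec_create
    gluster volume geo-replication VOLNAME SLAVEHOST::SLAVEVOL create push-pem
    gluster volume geo-replication VOLNAME SLAVEHOST::SLAVEVOL start
    gluster volume geo-replication VOLNAME SLAVEHOST::SLAVEVOL status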
2017 Oct 17
1
Distribute rebalance issues
...er-handshake.c:692:server_setvolume] 0-video-server: accepted client from node-dc4-02-29040-2017/08/04-09:31:22:842268-video-client-4-7-406 (version: 3.8.13) On 17 October 2017 at 10:26, Nithya Balachandran <nbalacha at redhat.com> wrote: > > > On 17 October 2017 at 14:48, Stephen Remde <stephen.remde at gaist.co.uk> > wrote: > >> Hi, >> >> >> I have a rebalance that has failed on one peer twice now. Rebalance logs below (directories anonymised and some irrelevant log lines cut). It looks like it loses connection to the brick, but immediately s...
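The quoted log line shows the brick accepting the rebalance client again after the disconnect. One way to confirm the disconnect/reconnect sequence is to scan the rebalance and brick logs; the file names below are typical defaults and may differ per installation:

    # Illustrative only: look for disconnect and reconnect events around the failure
    grep -E "disconnect|Connected to|accepted client" /var/log/glusterfs/VOLNAME-rebalance.log
    grep -E "disconnect|accepted client" /var/log/glusterfs/bricks/*.log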
2017 Oct 17
0
Distribute rebalance issues
On 17 October 2017 at 14:48, Stephen Remde <stephen.remde at gaist.co.uk> wrote: > Hi, > > > I have a rebalance that has failed on one peer twice now. Rebalance logs below (directories anonymised and some irrelevant log lines cut). It looks like it loses connection to the brick, but immediately stops the rebalance on tha...
2017 Oct 17
2
Distribute rebalance issues
Hi, I have a rebalance that has failed on one peer twice now. Rebalance logs below (directories anonymised and some irrelevant log lines cut). It looks like it loses connection to the brick, but immediately stops the rebalance on that peer instead of waiting for reconnection - which happens a second or so later. Is this normal behaviour? So far it has been the same server and the same (remote)
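To check whether the rebalance really stopped on that peer and to retry it after the bricks reconnect, the standard status and start commands apply; VOLNAME is a placeholder:

    gluster volume status VOLNAME           # confirm every brick process is online
    gluster volume rebalance VOLNAME status
    gluster volume rebalance VOLNAME start  # re-run the rebalance after the failure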