
Displaying 20 results from an estimated 30000 matches similar to: "Single node distributed volume"

2017 Oct 06
0
Gluster geo replication volume is faulty
On 09/29/2017 09:30 PM, rick sanchez wrote: > I am trying to set up geo replication between two gluster volumes > > I have set up two replica 2 arbiter 1 volumes with 9 bricks > > [root at gfs1 ~]# gluster volume info > Volume Name: gfsvol > Type: Distributed-Replicate > Volume ID: c2fb4365-480b-4d37-8c7d-c3046bca7306 > Status: Started > Snapshot Count: 0 > Number
2018 Feb 07
0
add geo-replication "passive" node after node replacement
Hi, When S3 is added to the master volume from the new node, the following commands should be run to generate and distribute the ssh keys: 1. Generate ssh keys from the new node: #gluster system:: execute gsec_create 2. Push those ssh keys of the new node to the slave: #gluster vol geo-rep <mastervol> <slavehost>::<slavevol> create push-pem force 3. Stop and start geo-rep. But note that
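Put together, the three steps above look like this (session names are placeholders; run step 1 on the new node and steps 2 and 3 from a master node):

    # 1. generate the geo-rep ssh keys on the new node
    gluster system:: execute gsec_create

    # 2. push the new node's keys to the slave
    gluster volume geo-replication <mastervol> <slavehost>::<slavevol> create push-pem force

    # 3. restart the session so the new node is picked up
    gluster volume geo-replication <mastervol> <slavehost>::<slavevol> stop
    gluster volume geo-replication <mastervol> <slavehost>::<slavevol> start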
2018 Mar 13
0
trashcan on dist. repl. volume with geo-replication
Hi Dietmar, I am trying to understand the problem and have a few questions. 1. Is trashcan enabled only on the master volume? 2. Is the 'rm -rf' done on the master volume synced to the slave? 3. If trashcan is disabled, does the issue go away? The geo-rep error just says that it failed to create the directory "Oracle_VM_VirtualBox_Extension" on the slave. Usually this would be because of gfid
2018 Apr 06
0
Can't stop volume using gluster volume stop
Hello, I can't stop one of my GlusterFS 3.12.7 3-way replica volumes using the standard gluster volume stop command, as you can see below: $ sudo gluster volume stop myvolume Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y volume stop: myvolume: failed: geo-replication Unable to get the status of active geo-replication session for the volume
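One sequence that usually resolves this, assuming the blocker really is a geo-replication session on the volume (slave host and slave volume are placeholders):

    # check which geo-rep session(s) exist for the volume
    sudo gluster volume geo-replication myvolume status

    # stop (or, if it is defunct, delete) the session first
    sudo gluster volume geo-replication myvolume <slavehost>::<slavevol> stop

    # then the volume itself can be stopped
    sudo gluster volume stop myvolume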
2018 Feb 07
2
add geo-replication "passive" node after node replacement
Hi all, I had a replica 2 GlusterFS 3.12 volume between S1 and S2 (1 brick per node), geo-replicated to S5, where both S1 and S2 were visible in the geo-replication status, with S2 "active" and S1 "passive". I had to replace S1 with S3, so I did an "add-brick replica 3 S3" and then "remove-brick replica 2 S1". Now I again have a replica 2 volume between S3 and S2
2017 Sep 29
1
Gluster geo replication volume is faulty
I am trying to set up geo replication between two gluster volumes. I have set up two replica 2 arbiter 1 volumes with 9 bricks. [root at gfs1 ~]# gluster volume info Volume Name: gfsvol Type: Distributed-Replicate Volume ID: c2fb4365-480b-4d37-8c7d-c3046bca7306 Status: Started Snapshot Count: 0 Number of Bricks: 3 x (2 + 1) = 9 Transport-type: tcp Bricks: Brick1: gfs2:/gfs/brick1/gv0 Brick2:
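For reference, a 3 x (2 + 1) arbiter layout like the one above is typically created along these lines (host names and brick paths are illustrative; recent releases use the "replica 3 arbiter 1" form, where every third brick in each set is the arbiter):

    gluster volume create gfsvol replica 3 arbiter 1 \
        gfs1:/gfs/brick1/gv0 gfs2:/gfs/brick1/gv0 gfs3:/gfs/arbiter1/gv0 \
        gfs1:/gfs/brick2/gv0 gfs2:/gfs/brick2/gv0 gfs3:/gfs/arbiter2/gv0 \
        gfs1:/gfs/brick3/gv0 gfs2:/gfs/brick3/gv0 gfs3:/gfs/arbiter3/gv0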
2018 Mar 12
2
trashcan on dist. repl. volume with geo-replication
Hello, in regard to https://bugzilla.redhat.com/show_bug.cgi?id=1434066 I have run into another issue when using the trashcan feature on a dist. repl. volume running geo-replication (gfs 3.12.6 on Ubuntu 16.04.4), e.g. when removing an entire directory with subfolders: tron at gl-node1:/myvol-1/test1/b1$ rm -rf * Afterwards, listing the files in the trashcan: tron at gl-node1:/myvol-1/test1$
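For context, the trashcan feature discussed here is a per-volume option; a minimal sketch, assuming the volume is actually named myvol-1:

    gluster volume set myvol-1 features.trash on
    gluster volume set myvol-1 features.trash-internal-op on    # also keep files touched by self-heal/rebalance
    gluster volume set myvol-1 features.trash-max-filesize 500MB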
2018 Mar 13
1
trashcan on dist. repl. volume with geo-replication
Hi Kotresh, thanks for your response... answers inline... best regards Dietmar On 13.03.2018 at 06:38, Kotresh Hiremath Ravishankar wrote: > Hi Dietmar, > > I am trying to understand the problem and have a few questions. > > 1. Is trashcan enabled only on the master volume? No, trashcan is also enabled on the slave. Settings are the same as on the master, but the trashcan on the slave is complete
2017 Jun 22
1
Volume options appear twice
Hi, This is the list of volume options that appear twice when I run: gluster volume get my_volume all features.grace-timeout features.lock-heal geo-replication.ignore-pid-check geo-replication.indexing network.ping-timeout network.tcp-window-size performance.cache-size Is that normal? Thanks. Gluster version: 3.8.11 on Debian 8
2018 Feb 27
0
Quorum in distributed-replicate volume
On Tue, Feb 27, 2018 at 1:40 PM, Dave Sherohman <dave at sherohman.org> wrote: > On Tue, Feb 27, 2018 at 12:00:29PM +0530, Karthik Subrahmanya wrote: > > I will try to explain how you can end up in split-brain even with cluster > > wide quorum: > > Yep, the explanation made sense. I hadn't considered the possibility of > alternating outages. Thanks! > >
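For readers following the quorum discussion, the knobs involved are ordinary volume options; a minimal sketch, with <volname> standing in for the actual volume:

    # client-side quorum: a replica set must have a majority up to accept writes
    gluster volume set <volname> cluster.quorum-type auto

    # server-side quorum: glusterd takes bricks down if the trusted pool loses quorum
    gluster volume set <volname> cluster.server-quorum-type server
    gluster volume set all cluster.server-quorum-ratio 51%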
2018 Mar 02
0
geo-replication
Hi Kotresh, I am expecting my hardware to show up next week. My plan is to run Gluster version 3.12 on CentOS 7. Has the issue been fixed in version 3.12? Thanks a lot for your help! /Marcus On Fri, Mar 02, 2018 at 05:12:13PM +0530, Kotresh Hiremath Ravishankar wrote: > Hi Marcus, > > There are no issues with geo-rep and disperse volumes. It works with > disperse volume > being
2018 Feb 06
0
geo-replication
Hi again, I ran some more tests and the behavior I see is that if any of the slaves is down, geo-replication stops working. Is this the way distributed volumes work: if one server goes down, the entire system stops working? Do the servers that are still online not continue to work? Sorry for asking stupid questions. Best regards Marcus On Tue, Feb 06, 2018 at 12:09:40PM +0100, Marcus Pedersén
2018 Feb 07
1
geo-replication
Thank you for your help! Just to make things clear to me (and get a better understanding of gluster): if I make the slave cluster just distributed and node 1 goes down, data (say file.txt) that belongs to node 1 will not be synced. When node 1 comes back up, does the master not realize that file.txt has not been synced and make sure that it is synced once it has contact with node 1 again? So
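A quick way to see, once node 1 is back, whether the session resumed syncing and which slave nodes are in use is the geo-rep status output (session names are placeholders):

    gluster volume geo-replication <mastervol> <slavehost>::<slavevol> status detail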
2018 Feb 07
0
geo-replication
We are happy to help you out. Please find the answers inline. On Tue, Feb 6, 2018 at 4:39 PM, Marcus Pedersén <marcus.pedersen at slu.se> wrote: > Hi all, > > I am planning my new gluster system and tested things out in > a bunch of virtual machines. > I need a bit of help to understand how geo-replication behaves. > > I have a master gluster cluster replica 2 > (in
2018 Mar 02
1
geo-replication
Hi again, I have been testing and reading up on other solutions and just wanted to check whether my ideas are OK. I have been looking at dispersed volumes and wonder if there are any problems running a distributed-replicated cluster on the master side and a distributed-dispersed cluster on the slave side of a geo-replication. Second thought: is running dispersed on both sides a problem (Master:
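If it helps with the planning, a dispersed volume of the kind being considered for the slave side is created roughly like this (host names, brick paths and counts are only an example):

    # 3 bricks, 2 data + 1 redundancy: survives the loss of any one brick
    gluster volume create slavevol disperse 3 redundancy 1 \
        s1:/bricks/b1 s2:/bricks/b1 s3:/bricks/b1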
2017 Sep 13
0
one brick one volume process dies?
Please send me the logs as well, i.e. glusterd.log and cmd_history.log. On Wed, Sep 13, 2017 at 1:45 PM, lejeczek <peljasz at yahoo.co.uk> wrote: > > > On 13/09/17 06:21, Gaurav Yadav wrote: > >> Please provide the output of gluster volume info, gluster volume status >> and gluster peer status. >> >> Apart from above info, please provide glusterd logs,
2017 Sep 13
0
one brick one volume process dies?
These symptoms appear to be the same as I've recorded in this post: http://lists.gluster.org/pipermail/gluster-users/2017-September/032435.html On Wed, Sep 13, 2017 at 7:01 AM, Atin Mukherjee <atin.mukherjee83 at gmail.com> wrote: > Additionally the brick log file of the same brick would be required. > Please check whether the brick process went down or crashed. Doing a volume start
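For anyone hitting the same symptom, the checks being asked for in this thread map to roughly these commands (log paths assume the default /var/log/glusterfs location in recent releases):

    # is the brick process listed as online, and with which PID/port?
    gluster volume status <volname>

    # brick, glusterd and command-history logs requested above
    less /var/log/glusterfs/bricks/<brick-path>.log
    less /var/log/glusterfs/glusterd.log
    less /var/log/glusterfs/cmd_history.log

    # restart only the dead brick process, without disturbing the others
    gluster volume start <volname> force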
2017 Sep 28
0
Upgrading (online) GlusterFS-3.7.11 to 3.10 with Distributed-Disperse volume
I'm working on upgrading a set of our gluster machines from 3.7 to 3.10. At first I was going to follow the guide here: https://gluster.readthedocs.io/en/latest/Upgrade-Guide/upgrade_to_3.10/ but it mentions: > * Online upgrade is only possible with replicated and distributed > replicate volumes > * Online upgrade is not supported for dispersed or distributed >
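Since the guide rules out online upgrade for dispersed volumes, the offline path boils down to roughly the following (a sketch only; package commands depend on the distro and should be checked against the upgrade guide):

    # once, from any node: stop the volume(s)
    gluster volume stop <volname>

    # on every node: stop glusterd, upgrade the packages, restart
    systemctl stop glusterd
    yum update 'glusterfs*'        # to the 3.10 packages, per your repo setup
    systemctl start glusterd

    # once, from any node: bring the volume(s) back
    gluster volume start <volname>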
2017 Sep 13
3
one brick one volume process dies?
On 13/09/17 06:21, Gaurav Yadav wrote: > Please provide the output of gluster volume info, gluster > volume status and gluster peer status. > > Apart from above info, please provide glusterd logs, > cmd_history.log. > > Thanks > Gaurav > > On Tue, Sep 12, 2017 at 2:22 PM, lejeczek > <peljasz at yahoo.co.uk> wrote:
2013 Dec 23
0
How to ensure the new data write to other bricks, if one brick offline of gluster distributed volume
Hi all, how can I ensure that new data is written to the other bricks if one brick of a Gluster distributed volume goes offline? Can the client write data that would originally land on the offline brick to the other online bricks instead? The distributed volume breaks even if only one brick is offline, which makes it unreliable. When the failed brick comes back online, how does it rejoin the original distributed volume? I don't want the new write data can't
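One common way to address the reliability concern raised here is to convert the plain distribute volume into a distributed-replicate one by adding a matching brick for every existing brick; a rough sketch with hypothetical hosts and paths, assuming two existing bricks:

    # add one replica brick per existing brick, then let self-heal copy the data
    gluster volume add-brick <volname> replica 2 newhost1:/bricks/b1 newhost2:/bricks/b2
    gluster volume heal <volname> full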