search for: gluster01

Displaying 20 results from an estimated 27 matches for "gluster01".

2018 Apr 04
0
Invisible files and directories
...; Brick4: gluster02:/srv/glusterfs/bricks/DATA204/data
> Brick5: gluster02:/srv/glusterfs/bricks/DATA205/data
> Brick6: gluster02:/srv/glusterfs/bricks/DATA206/data
> Brick7: gluster02:/srv/glusterfs/bricks/DATA207/data
> Brick8: gluster02:/srv/glusterfs/bricks/DATA208/data
> Brick9: gluster01:/srv/glusterfs/bricks/DATA110/data
> Brick10: gluster01:/srv/glusterfs/bricks/DATA111/data
> Brick11: gluster01:/srv/glusterfs/bricks/DATA112/data
> Brick12: gluster01:/srv/glusterfs/bricks/DATA113/data
> Brick13: gluster01:/srv/glusterfs/bricks/DATA114/data
> Brick14: gluster02:/srv...
2018 Apr 04
2
Invisible files and directories
Right now the volume is running with readdir-optimize off and parallel-readdir off. On Wed, Apr 4, 2018 at 1:29 AM, Nithya Balachandran <nbalacha at redhat.com> wrote: > Hi Serg, do you mean that turning off readdir-optimize did not work? Or did you mean turning off parallel-readdir did not work? > On 4 April 2018 at 10:48, Serg Gulko <s.gulko at
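
Both of those are ordinary volume options, so they can be toggled and verified from any server in the pool; a minimal sketch, assuming a hypothetical volume name DATA:

# gluster volume set DATA cluster.readdir-optimize off
# gluster volume set DATA performance.parallel-readdir off
# gluster volume get DATA cluster.readdir-optimize
# gluster volume get DATA performance.parallel-readdir
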
2018 Apr 23
0
Problems since 3.12.7: invisible files, strange rebalance size, setxattr failed during rebalance and broken unix rights
Hi, What is the output of 'gluster volume info' for this volume? Regards, Nithya On 23 April 2018 at 18:52, Frank Ruehlemann <ruehlemann at itsc.uni-luebeck.de> wrote: > Hi, after 2 years of running GlusterFS without major problems, we're facing some strange errors lately. After updating to 3.12.7, some users reported at least 4 broken directories
2018 Jan 10
2
Creating cluster replica on 2 nodes 2 bricks each.
...ya This is what I have so far: I have peered both cluster nodes together as a replica, from node 1A and 1B. Now when I tried to add it, I get the error that it is already part of a volume. When I run the gluster volume info, I see that it has switched to distributed-replica. Thanks, Jose

[root at gluster01 ~]# gluster volume status
Status of volume: scratch
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gluster01ib:/gdata/brick1/scratch     49152     49153      Y       3140
Brick gluste...
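
For context, adding a second brick pair to a 2-brick replica volume is exactly what turns its type into Distributed-Replicate; the expansion attempted in this thread, with the brick paths quoted above, would look like this sketch:

# gluster volume add-brick scratch replica 2 \
    gluster01ib:/gdata/brick2/scratch gluster02ib:/gdata/brick2/scratch
# gluster volume info scratch

After a successful add-brick, Type is expected to read Distributed-Replicate with Number of Bricks: 2 x 2 = 4.
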
2018 Mar 06
0
Multiple Volumes over different Interfaces
Hi, I'm trying to create two gluster volumes over two nodes with two separate networks. The names are in the hosts file of each node:

root at gluster01:~# cat /etc/hosts
127.0.0.1 localhost
127.0.1.1 gluster01.peiker-cee.de gluster01
10.0.2.54 gluster02g1.peiker-cee.de gluster02g1
10.0.7.54 gluster02g2.peiker-cee.de gluster02g2
10.0.2.53 gluster01g1.peiker-cee.de gluster01g1
10.0.7.53 gluste...
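
One common way to split traffic like this is to create each volume using the hostnames of the network it should ride on; a sketch under that assumption, with hypothetical brick paths (every peer must first be probed by a name the other nodes can resolve):

# gluster peer probe gluster02g1
# gluster volume create vol1 replica 2 gluster01g1:/srv/bricks/vol1 gluster02g1:/srv/bricks/vol1
# gluster volume create vol2 replica 2 gluster01g2:/srv/bricks/vol2 gluster02g2:/srv/bricks/vol2
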
2018 Jan 11
0
Creating cluster replica on 2 nodes 2 bricks each.
...nodes together as a replica, from node 1A and 1B. Now when I tried to add it, I get the error that it is already part of a volume. When I run the gluster volume info, I see that it has switched to distributed-replica. Thanks, Jose

[root at gluster01 ~]# gluster volume status
Status of volume: scratch
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gluster01ib:/gdata/brick1/scratch     49152     49153...
2013 Mar 20
1
About adding bricks ...
...buted-Replicated Volume consisting of 4 bricks on 2 servers.

# gluster volume create glusterfs replica 2 transport tcp \
    gluster0{0..1}:/srv/gluster/exp0 gluster0{0..1}:/srv/gluster/exp1

Now I have the following very nice replication schema:

+-------------+     +-------------+
|  gluster00  |     |  gluster01  |
+-------------+     +-------------+
| exp0 | exp1 |     | exp0 | exp1 |
+------+------+     +------+------+
   |      |            |      |
   +------|------------+      |
          +-------------------+

If one HD goes down, I'm covered! I'm even covered if one serve...
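
To grow that schema later, bricks have to be added in replica-set multiples so each new pair still spans both servers; a sketch in the same brace-expansion style, where exp2 is a hypothetical new brick:

# gluster volume add-brick glusterfs gluster0{0..1}:/srv/gluster/exp2
# gluster volume rebalance glusterfs start
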
2018 Jan 10
0
Creating cluster replica on 2 nodes 2 bricks each.
Hi, Please let us know what commands you ran so far and the output of the *gluster volume info* command. Thanks, Nithya On 9 January 2018 at 23:06, Jose Sanchez <josesanc at carc.unm.edu> wrote: > Hello, we are trying to set up Gluster for our project/scratch storage HPC machine using a replicated mode with 2 nodes, 2 bricks each (14TB each). Our goal is to be
2018 Jan 09
2
Creating cluster replica on 2 nodes 2 bricks each.
Hello We are trying to set up Gluster for our project/scratch storage HPC machine using a replicated mode with 2 nodes, 2 bricks each (14TB each). Our goal is to be able to have a replicated system between nodes 1 and 2 (A bricks) and add an additional 2 bricks (B bricks) from the 2 nodes, so we can have a total of 28TB in replicated mode. Node 1 [ (Brick A) (Brick B) ] Node 2 [ (Brick A) (Brick
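
With replica 2, adjacent bricks on the create line become replica pairs, so the 28TB goal above needs the nodes interleaved; a hedged sketch, with node1/node2 and the brick paths as placeholders:

# gluster volume create scratch replica 2 \
    node1:/gdata/brick1/scratch node2:/gdata/brick1/scratch \
    node1:/gdata/brick2/scratch node2:/gdata/brick2/scratch

This yields Distributed-Replicate (2 x 2 = 4): every file lives on both nodes, and usable capacity is the sum of the two pairs.
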
2018 Jan 11
3
Creating cluster replica on 2 nodes 2 bricks each.
Hi Nithya Thanks for helping me with this. I understand now, but I have a few questions. When I had it set up in replica (just 2 nodes with 2 bricks) and tried to add to it, it failed.

> [root at gluster01 ~]# gluster volume add-brick scratch replica 2 gluster01ib:/gdata/brick2/scratch gluster02ib:/gdata/brick2/scratch
> volume add-brick: failed: /gdata/brick2/scratch is already part of a volume

and after that, I ran the status and info on it, and on the status I get just the two bricks

> Bri...
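
The "already part of a volume" refusal is driven by leftover gluster metadata on the brick directory. The commonly cited cleanup, safe only when the brick's old contents are meant to be discarded, is run on each node holding such a brick:

# setfattr -x trusted.glusterfs.volume-id /gdata/brick2/scratch
# setfattr -x trusted.gfid /gdata/brick2/scratch
# rm -rf /gdata/brick2/scratch/.glusterfs
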
2018 Jan 15
1
Creating cluster replica on 2 nodes 2 bricks each.
...Cc: gluster-users <gluster-users at gluster.org>
> Hi Nithya, thanks for helping me with this. I understand now, but I have a few questions. When I had it set up in replica (just 2 nodes with 2 bricks) and tried to add to it, it failed.
>> [root at gluster01 ~]# gluster volume add-brick scratch replica 2 gluster01ib:/gdata/brick2/scratch gluster02ib:/gdata/brick2/scratch
>> volume add-brick: failed: /gdata/brick2/scratch is already part of a volume
Did you try the add brick operation several times with the sam...
2011 Oct 25
1
problems with gluster 3.2.4
Hi, we have 4 test machines (gluster01 to gluster04). I've created a replicated volume with the 4 machines. Then on the client machine I've executed: mount -t glusterfs gluster01:/volume01 /mnt/gluster and everything works OK. The main problem occurs on every client machine where I do: umount /mnt/gluster and the mount -t...
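
Any server in the trusted pool can hand out the volume file at mount time, so a remount does not have to depend on gluster01; a sketch using the fuse mount's fallback option (spelled backup-volfile-servers on current releases; 3.2-era clients used the single-valued backupvolfile-server instead):

# umount /mnt/gluster
# mount -t glusterfs -o backup-volfile-servers=gluster02:gluster03 gluster01:/volume01 /mnt/gluster
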
2018 Apr 23
4
Problems since 3.12.7: invisible files, strange rebalance size, setxattr failed during rebalance and broken unix rights
Hi, after 2 years of running GlusterFS without major problems, we're facing some strange errors lately. After updating to 3.12.7, some users reported at least 4 broken directories with some invisible files. The files are on the bricks and don't start with a dot, but aren't visible in "ls". Clients can still interact with them by using the explicit path. More information:
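
One way to narrow this down is to compare the client view with a brick and dump the directory's extended attributes, where the DHT layout lives; a sketch with hypothetical mount and brick paths:

# ls /mnt/volume/brokendir                  (file missing from the listing)
# stat /mnt/volume/brokendir/somefile       (but reachable via the explicit path)
# getfattr -m . -d -e hex /bricks/brick1/brokendir
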
2018 Jan 12
0
Creating cluster replica on 2 nodes 2 bricks each.
...ithya Balachandran <nbalacha at redhat.com> Cc: gluster-users <gluster-users at gluster.org>
Hi Nithya, thanks for helping me with this. I understand now, but I have a few questions. When I had it set up in replica (just 2 nodes with 2 bricks) and tried to add to it, it failed.
> [root at gluster01 ~]# gluster volume add-brick scratch replica 2 gluster01ib:/gdata/brick2/scratch gluster02ib:/gdata/brick2/scratch
> volume add-brick: failed: /gdata/brick2/scratch is already part of a volume
Did you try the add brick operation several times with the same bricks? If yes, that could b...
2017 Sep 14
0
GlusterFS don't expose iSCSI target for Window server
Hi all Question 1: I followed this instruction: https://github.com/gluster/gluster-block. I use 2 nodes, gluster01 (192.168.101.110) and gluster02 (192.168.101.111), created one gluster volume (block-storage), and used gluster-block to create a block storage (block-store/win):

[root at gluster01 ~]# gluster-block create block-store/win ha 2 192.168.101.110,192.168.101.111 40GiB
IQN: iqn.2016-12.org.gluster-block:5b...
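
gluster-block can report back what it actually exposed, which is worth confirming before debugging the Windows initiator side; a minimal sketch against the names used above:

# gluster-block list block-store
# gluster-block info block-store/win
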
2018 Apr 25
0
Turn off replication
Hello Karthik I'm having trouble adding the two bricks back online. Any help is appreciated, thanks. When I try the add-brick command, this is what I get:

[root at gluster01 ~]# gluster volume add-brick scratch gluster02ib:/gdata/brick2/scratch/
volume add-brick: failed: Pre Validation failed on gluster02ib. Brick: gluster02ib:/gdata/brick2/scratch not available. Brick may be containing or be contained by an existing brick

I have run the following commands and remove...
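
Before retrying, it is worth checking from gluster02ib whether the path still carries gluster metadata, which is what this pre-validation trips over; a hedged check:

# ssh gluster02ib 'getfattr -m . -d -e hex /gdata/brick2/scratch'
# ssh gluster02ib 'ls -a /gdata/brick2/scratch'

If trusted.glusterfs.volume-id or a .glusterfs directory still shows up, add-brick will keep failing until they are removed (see the cleanup sketch earlier in these results).
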
2018 Apr 12
2
Turn off replication
...t before removing it. I believe the same copy of data is on all 4 bricks; we would like to keep one of them and add the other bricks as extra space. Thanks for your help on this. Jose

>> [root at gluster01 ~]# gluster volume info scratch
>> Volume Name: scratch
>> Type: Distributed-Replicate
>> Volume ID: 23f1e4b1-b8e0-46c3-874a-58b4728ea106
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 2 x 2 = 4
>> Transport-type: tcp,rdma
>> Br...
2018 Apr 25
2
Turn off replication
Looking at the logs, it seems that it is trying to add using the same port that was assigned to gluster01ib. Any ideas? Jose

[2018-04-25 22:08:55.169302] I [MSGID: 106482] [glusterd-brick-ops.c:447:__glusterd_handle_add_brick] 0-management: Received add brick req
[2018-04-25 22:08:55.186037] I [run.c:191:runner_log] (-->/usr/lib64/glusterfs/3.8.15/xlator/mgmt/glusterd.so(+0x33045) [0x7f5464b9...
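
To confirm a port clash, compare glusterd's brick table with the sockets actually held on the node; a short sketch:

# gluster volume status scratch
# ss -tlnp | grep glusterfsd
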
2018 Apr 27
0
Turn off replication
...ssages. Also, I need to know which bricks were actually removed, the command used, and its output. On Thu, Apr 26, 2018 at 3:47 AM, Jose Sanchez <josesanc at carc.unm.edu> wrote:
> Looking at the logs, it seems that it is trying to add using the same port that was assigned to gluster01ib. Any ideas? Jose
> [2018-04-25 22:08:55.169302] I [MSGID: 106482] [glusterd-brick-ops.c:447:__glusterd_handle_add_brick] 0-management: Received add brick req
> [2018-04-25 22:08:55.186037] I [run.c:191:runner_log] (-->/usr/lib64/glu...
2018 Apr 30
2
Turn off replication
...know which bricks were actually removed, the command used, and its output. On Thu, Apr 26, 2018 at 3:47 AM, Jose Sanchez <josesanc at carc.unm.edu> wrote:
>> Looking at the logs, it seems that it is trying to add using the same port that was assigned to gluster01ib. Any ideas? Jose
>> [2018-04-25 22:08:55.169302] I [MSGID: 106482] [glusterd-brick-ops.c:447:__glusterd_handle_add_brick] 0-management: Received add brick req
>> [2018-04-25 22:08:55.18603...