Displaying 14 results from an estimated 14 matches for "gluster01ib".
2018 Jan 10 · 2 · Creating cluster replica on 2 nodes 2 bricks each.
...I see that it has switched to distributed-replicate.
Thanks
Jose
[root@gluster01 ~]# gluster volume status
Status of volume: scratch
Gluster process                              TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gluster01ib:/gdata/brick1/scratch      49152     49153      Y       3140
Brick gluster02ib:/gdata/brick1/scratch      49153     49154      Y       2634
Self-heal Daemon on localhost                N/A       N/A        Y       3132
Self-heal Daemon on gluster02ib              N/A       N/A        Y       2626...
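(Aside, not part of the original message: a quick way to confirm what layout the volume ended up with is to check the Type and brick-count lines of volume info. A minimal sketch:)

# "Replicate" vs. "Distributed-Replicate" shows up in Type; "2 x 2 = 4" in Number of Bricks
gluster volume info scratch | grep -E 'Type|Number of Bricks'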
2018 Jan 11 · 3 · Creating cluster replica on 2 nodes 2 bricks each.
Hi Nithya
Thanks for helping me with this. I understand now, but I have a few questions.
When I had it set up in replica (just 2 nodes with 2 bricks) and tried to add to it, it failed.
> [root@gluster01 ~]# gluster volume add-brick scratch replica 2 gluster01ib:/gdata/brick2/scratch gluster02ib:/gdata/brick2/scratch
> volume add-brick: failed: /gdata/brick2/scratch is already part of a volume
and after that, I ran status and info on it, and in the status I get just the two bricks
> Brick gluster01ib:/gdata/brick1/scratch 49152 49153...
2018 Jan 11 · 0 · Creating cluster replica on 2 nodes 2 bricks each.
...; Jose
>
> [root@gluster01 ~]# gluster volume status
> Status of volume: scratch
> Gluster process                              TCP Port  RDMA Port  Online  Pid
> ------------------------------------------------------------------------------
> Brick gluster01ib:/gdata/brick1/scratch      49152     49153      Y       3140
> Brick gluster02ib:/gdata/brick1/scratch      49153     49154      Y       2634
> Self-heal Daemon on localhost                N/A       N/A        Y       3132
> Self-heal Daemon on gluster02ib              N/A       N/A        Y
>...
2018 Jan 12 · 0 · Creating cluster replica on 2 nodes 2 bricks each.
...users <gluster-users@gluster.org>
Hi Nithya
Thanks for helping me with this. I understand now, but I have a few
questions.
When I had it set up in replica (just 2 nodes with 2 bricks) and tried to
add to it, it failed.
[root@gluster01 ~]# gluster volume add-brick scratch replica 2
> gluster01ib:/gdata/brick2/scratch gluster02ib:/gdata/brick2/scratch
> volume add-brick: failed: /gdata/brick2/scratch is already part of a volume
>
Did you try the add brick operation several times with the same bricks? If
yes, that could be the cause as Gluster sets xattrs on the brick root
directory....
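(Aside: the excerpt names the cause but not the fix. The usual way to make a once-used brick directory acceptable to add-brick again is to clear the markers Gluster left on it. A minimal sketch, assuming the directory no longer belongs to any volume and holds no data worth keeping:)

# inspect the xattrs Gluster set on the brick root
getfattr -d -m . -e hex /gdata/brick2/scratch
# remove the volume markers and the internal metadata directory
setfattr -x trusted.glusterfs.volume-id /gdata/brick2/scratch
setfattr -x trusted.gfid /gdata/brick2/scratch
rm -rf /gdata/brick2/scratch/.glusterfs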
2018 Jan 15 · 1 · Creating cluster replica on 2 nodes 2 bricks each.
...> Hi Nithya
>
> Thanks for helping me with this. I understand now, but I have a few
> questions.
>
> When I had it set up in replica (just 2 nodes with 2 bricks) and tried to
> add to it, it failed.
>
> [root@gluster01 ~]# gluster volume add-brick scratch replica 2
>> gluster01ib:/gdata/brick2/scratch gluster02ib:/gdata/brick2/scratch
>> volume add-brick: failed: /gdata/brick2/scratch is already part of a
>> volume
>>
>
> Did you try the add brick operation several times with the same bricks? If
> yes, that could be the cause as Gluster sets xattr...
2018 Apr 25 · 2 · Turn off replication
Looking at the logs, it seems that it is trying to add using the same port that was assigned for gluster01ib:
Any Ideas??
Jose
[2018-04-25 22:08:55.169302] I [MSGID: 106482] [glusterd-brick-ops.c:447:__glusterd_handle_add_brick] 0-management: Received add brick req
[2018-04-25 22:08:55.186037] I [run.c:191:runner_log] (-->/usr/lib64/glusterfs/3.8.15/xlator/mgmt/glusterd.so(+0x33045) [0x7f5464b9b0...
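(Aside, not from the thread: the brick-to-port mapping can be checked with volume status, and restarting glusterd on the affected node is the usual way to rebuild a stale port map. A sketch, assuming a systemd-based host:)

# see which TCP/RDMA ports each brick is currently bound to
gluster volume status scratch
# rebuild glusterd's port map on the node reporting the clash (assumption: systemd host)
systemctl restart glusterd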
2018 Apr 27 · 0 · Turn off replication
...ssages.
Also, I need to know which bricks were actually removed,
the command used, and its output.
On Thu, Apr 26, 2018 at 3:47 AM, Jose Sanchez <josesanc@carc.unm.edu> wrote:
> Looking at the logs, it seems that it is trying to add using the same port
> that was assigned for gluster01ib:
>
> Any Ideas??
>
> Jose
>
> [2018-04-25 22:08:55.169302] I [MSGID: 106482]
> [glusterd-brick-ops.c:447:__glusterd_handle_add_brick] 0-management:
> Received add brick req
> [2018-04-25 22:08:55.186037] I [run.c:191:runner_log]
> (-->/usr/lib64/glust...
2018 Apr 30 · 2 · Turn off replication
...know which bricks were actually removed,
> the command used, and its output.
>
> On Thu, Apr 26, 2018 at 3:47 AM, Jose Sanchez <josesanc@carc.unm.edu> wrote:
>> Looking at the logs, it seems that it is trying to add using the same port
>> that was assigned for gluster01ib:
>>
>> Any Ideas??
>>
>> Jose
>>
>> [2018-04-25 22:08:55.169302] I [MSGID: 106482]
>> [glusterd-brick-ops.c:447:__glusterd_handle_add_brick] 0-management:
>> Received add brick req
>> [2018-04-25 22:08:55.186037]...
2018 Apr 25 · 0 · Turn off replication
...gluster02 ~]#
This is what I get when I run status and info:
[root@gluster01 ~]# gluster volume info scratch
Volume Name: scratch
Type: Distribute
Volume ID: 23f1e4b1-b8e0-46c3-874a-58b4728ea106
Status: Started
Snapshot Count: 0
Number of Bricks: 4
Transport-type: tcp,rdma
Bricks:
Brick1: gluster01ib:/gdata/brick1/scratch
Brick2: gluster01ib:/gdata/brick2/scratch
Brick3: gluster02ib:/gdata/brick1/scratch
Brick4: gluster02ib:/gdata/brick2/scratch
Options Reconfigured:
nfs.disable: on
performance.readdir-ahead: on
[root at gluster01 ~]#
[root@gluster02 ~]# gluster volume status scratch
Stat...
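(Aside: the excerpt shows the end state but not the commands that produced it. One plausible reconstruction, assuming one brick of each replica pair was removed, wiped of its markers, and re-added as a plain distribute brick:)

# reduce replica 2 -> 1 by dropping one brick from each mirrored pair
gluster volume remove-brick scratch replica 1 gluster02ib:/gdata/brick1/scratch gluster02ib:/gdata/brick2/scratch force
# on gluster02ib: clear the old volume markers on each removed brick
setfattr -x trusted.glusterfs.volume-id /gdata/brick1/scratch
setfattr -x trusted.gfid /gdata/brick1/scratch
rm -rf /gdata/brick1/scratch/.glusterfs
# (repeat for /gdata/brick2/scratch)
# re-add both bricks without a replica count, then spread existing data
gluster volume add-brick scratch gluster02ib:/gdata/brick1/scratch gluster02ib:/gdata/brick2/scratch
gluster volume rebalance scratch start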
2018 May 02 · 0 · Turn off replication
...d to know which bricks were actually removed,
> the command used, and its output.
>
> On Thu, Apr 26, 2018 at 3:47 AM, Jose Sanchez <josesanc@carc.unm.edu> wrote:
>
> Looking at the logs, it seems that it is trying to add using the same port
> that was assigned for gluster01ib:
>
> Any Ideas??
>
> Jose
>
> [2018-04-25 22:08:55.169302] I [MSGID: 106482]
> [glusterd-brick-ops.c:447:__glusterd_handle_add_brick] 0-management:
> Received add brick req
> [2018-04-25 22:08:55.186037] I [run.c:191:runner_log]
> (-->/usr/lib64/glust...
2018 Jan 10 · 0 · Creating cluster replica on 2 nodes 2 bricks each.
Hi,
Please let us know what commands you ran so far and the output of the
*gluster volume info* command.
Thanks,
Nithya
On 9 January 2018 at 23:06, Jose Sanchez <josesanc@carc.unm.edu> wrote:
> Hello
>
> We are trying to set up Gluster for our project/scratch storage HPC machine
> using a replicated mode with 2 nodes, 2 bricks each (14 TB each).
>
> Our goal is to be
2018 Apr 12 · 2 · Turn off replication
...o scratch
>>
>> Volume Name: scratch
>> Type: Distributed-Replicate
>> Volume ID: 23f1e4b1-b8e0-46c3-874a-58b4728ea106
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 2 x 2 = 4
>> Transport-type: tcp,rdma
>> Bricks:
>> Brick1: gluster01ib:/gdata/brick1/scratch
>> Brick2: gluster02ib:/gdata/brick1/scratch
>> Brick3: gluster01ib:/gdata/brick2/scratch
>> Brick4: gluster02ib:/gdata/brick2/scratch
>> Options Reconfigured:
>> performance.readdir-ahead: on
>> nfs.disable: on
>>
>> [root@gl...
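(Reading note, not in the excerpt: gluster volume info lists bricks in replica-set order, so in this 2 x 2 layout consecutive bricks form the mirrored pairs, and each pair spans both nodes:)

# replica set 1: gluster01ib:/gdata/brick1/scratch <-> gluster02ib:/gdata/brick1/scratch
# replica set 2: gluster01ib:/gdata/brick2/scratch <-> gluster02ib:/gdata/brick2/scratch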
2018 Jan 09 · 2 · Creating cluster replica on 2 nodes 2 bricks each.
Hello
We are trying to set up Gluster for our project/scratch storage HPC machine using a replicated mode with 2 nodes, 2 bricks each (14 TB each).
Our goal is to be able to have a replicated system between node 1 and node 2 (the A bricks) and add an additional 2 bricks (the B bricks) from the 2 nodes, so we can have a total of 28 TB in replicated mode.
Node 1 [ (Brick A) (Brick B) ]
Node 2 [ (Brick A) (Brick
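(Aside: the create command itself never appears in the thread. A minimal sketch matching the names used here, with the transport type taken as an assumption from the volume info shown in other messages:)

# step 1: replica-2 volume across the A bricks
gluster volume create scratch replica 2 transport tcp,rdma gluster01ib:/gdata/brick1/scratch gluster02ib:/gdata/brick1/scratch
gluster volume start scratch
# step 2: grow capacity while keeping replication by adding the B bricks as a second replica pair
gluster volume add-brick scratch replica 2 gluster01ib:/gdata/brick2/scratch gluster02ib:/gdata/brick2/scratch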
2018 Apr 07 · 0 · Turn off replication
...@gluster01 ~]# gluster volume info scratch
>
>
> Volume Name: scratch
> Type: Distributed-Replicate
> Volume ID: 23f1e4b1-b8e0-46c3-874a-58b4728ea106
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 2 x 2 = 4
> Transport-type: tcp,rdma
> Bricks:
> Brick1: gluster01ib:/gdata/brick1/scratch
> Brick2: gluster02ib:/gdata/brick1/scratch
> Brick3: gluster01ib:/gdata/brick2/scratch
> Brick4: gluster02ib:/gdata/brick2/scratch
> Options Reconfigured:
> performance.readdir-ahead: on
> nfs.disable: on
>
> [root@gluster01 ~]# gluster volume statu...