Displaying 16 results from an estimated 16 matches for "gluster02".
2018 Apr 04
0
Invisible files and directories
...fuse client. Our settings are:
> gluster volume info $VOLUMENAME
>
> Volume Name: $VOLUMENAME
> Type: Distribute
> Volume ID: 0d210c70-e44f-46f1-862c-ef260514c9f1
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 23
> Transport-type: tcp
> Bricks:
> Brick1: gluster02:/srv/glusterfs/bricks/DATA201/data
> Brick2: gluster02:/srv/glusterfs/bricks/DATA202/data
> Brick3: gluster02:/srv/glusterfs/bricks/DATA203/data
> Brick4: gluster02:/srv/glusterfs/bricks/DATA204/data
> Brick5: gluster02:/srv/glusterfs/bricks/DATA205/data
> Brick6: gluster02:/srv/glus...
2018 Apr 04
2
Invisible files and directories
Right now the volume is running with
readdir-optimize off
parallel-readdir off
On Wed, Apr 4, 2018 at 1:29 AM, Nithya Balachandran <nbalacha at redhat.com>
wrote:
> Hi Serg,
>
> Do you mean that turning off readdir-optimize did not work? Or did you
> mean turning off parallel-readdir did not work?
>
>
>
> On 4 April 2018 at 10:48, Serg Gulko <s.gulko at
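For reference, the two options discussed in this thread can be inspected and toggled per volume from the gluster CLI. A minimal sketch, reusing the placeholder volume name $VOLUMENAME from the post above ('volume get' requires a reasonably recent 3.x release):
gluster volume get $VOLUMENAME cluster.readdir-optimize         # show the current value
gluster volume get $VOLUMENAME performance.parallel-readdir
gluster volume set $VOLUMENAME cluster.readdir-optimize off
gluster volume set $VOLUMENAME performance.parallel-readdir off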
2013 Mar 20
1
About adding bricks ...
...[truncated ASCII diagram of the current two-server brick layout]
If one HD goes down, I'm covered! I'm even covered if one whole server goes down!
Great! But what will the layout look like if I just add another server with two
bricks and rebalance?
# gluster volume add-brick glusterfs gluster02:/srv/gluster/exp{0..1}
# gluster volume rebalance glusterfs start
The replication schema I obviously want is this:
+-------------+ +-------------+ +-------------+
| gluster00 | | gluster01 | | gluster02 |
+-------------+ +-------------+ +-------------+
| exp0 | exp1 | | exp0 |...
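As a rough sketch of how that expansion is usually carried out and verified (the brick paths come from the post; treating the volume as replica 2 is an assumption): on a replicated volume, bricks are added in multiples of the replica count, and consecutive bricks on the command line form a replica set, so listing both new bricks from gluster02 together would pair them with each other rather than across servers.
gluster volume add-brick glusterfs gluster02:/srv/gluster/exp0 gluster02:/srv/gluster/exp1
gluster volume rebalance glusterfs start
gluster volume rebalance glusterfs status      # wait until every node reports completed
gluster volume info glusterfs                  # check how the new bricks were grouped into replica sets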
2018 Apr 23
0
Problems since 3.12.7: invisible files, strange rebalance size, setxattr failed during rebalance and broken unix rights
Hi,
What is the output of 'gluster volume info' for this volume?
Regards,
Nithya
On 23 April 2018 at 18:52, Frank Ruehlemann <ruehlemann at itsc.uni-luebeck.de>
wrote:
> Hi,
>
> after 2 years running GlusterFS without major problems we're facing
> some strange errors lately.
>
> After updating to 3.12.7 some users reported at least 4 broken
> directories
2018 Apr 25
2
Turn off replication
...scratch on port 49152
[2018-04-25 22:08:55.309659] I [MSGID: 106143] [glusterd-pmap.c:250:pmap_registry_bind] 0-pmap: adding brick /gdata/brick1/scratch.rdma on port 49153
[2018-04-25 22:08:55.310231] E [MSGID: 106005] [glusterd-utils.c:4877:glusterd_brick_start] 0-management: Unable to start brick gluster02ib:/gdata/brick1/scratch
[2018-04-25 22:08:55.310275] E [MSGID: 106074] [glusterd-brick-ops.c:2493:glusterd_op_add_brick] 0-glusterd: Unable to add bricks
[2018-04-25 22:08:55.310304] E [MSGID: 106123] [glusterd-mgmt.c:294:gd_mgmt_v3_commit_fn] 0-management: Add-brick commit failed.
[2018-04-25 22:0...
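When glusterd logs "Unable to start brick" during an add-brick like this, the usual first step is to check the brick status and the brick's own log on the affected node. A minimal sketch, assuming default log locations (the brick log file name is the brick path with slashes replaced by dashes):
gluster volume status scratch                                # which bricks are online and on which ports
less /var/log/glusterfs/bricks/gdata-brick1-scratch.log      # brick log on gluster02ib
less /var/log/glusterfs/glusterd.log                         # management daemon log (file name varies by release)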
2018 Apr 27
0
Turn off replication
...18-04-25 22:08:55.309659] I [MSGID: 106143]
> [glusterd-pmap.c:250:pmap_registry_bind] 0-pmap: adding brick
> /gdata/brick1/scratch.rdma on port 49153
> [2018-04-25 22:08:55.310231] E [MSGID: 106005]
> [glusterd-utils.c:4877:glusterd_brick_start] 0-management: Unable to start
> brick gluster02ib:/gdata/brick1/scratch
> [2018-04-25 22:08:55.310275] E [MSGID: 106074]
> [glusterd-brick-ops.c:2493:glusterd_op_add_brick] 0-glusterd: Unable to add
> bricks
> [2018-04-25 22:08:55.310304] E [MSGID: 106123]
> [glusterd-mgmt.c:294:gd_mgmt_v3_commit_fn] 0-management: Add-brick commit...
2018 Apr 30
2
Turn off replication
...9659] I [MSGID: 106143]
>> [glusterd-pmap.c:250:pmap_registry_bind] 0-pmap: adding brick
>> /gdata/brick1/scratch.rdma on port 49153
>> [2018-04-25 22:08:55.310231] E [MSGID: 106005]
>> [glusterd-utils.c:4877:glusterd_brick_start] 0-management: Unable to start
>> brick gluster02ib:/gdata/brick1/scratch
>> [2018-04-25 22:08:55.310275] E [MSGID: 106074]
>> [glusterd-brick-ops.c:2493:glusterd_op_add_brick] 0-glusterd: Unable to add
>> bricks
>> [2018-04-25 22:08:55.310304] E [MSGID: 106123]
>> [glusterd-mgmt.c:294:gd_mgmt_v3_commit_fn] 0-manageme...
2018 Apr 25
0
Turn off replication
Hello Karthik
I'm having trouble adding the two bricks back online. Any help is appreciated.
thanks
when I try the add-brick command this is what I get
[root@gluster01 ~]# gluster volume add-brick scratch gluster02ib:/gdata/brick2/scratch/
volume add-brick: failed: Pre Validation failed on gluster02ib. Brick: gluster02ib:/gdata/brick2/scratch not available. Brick may be containing or be contained by an existing brick
I have run the following commands and removed the .glusterfs hidden directories
[root@gl...
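That pre-validation error usually means the brick directory still carries the trusted.glusterfs.volume-id xattr and/or a leftover .glusterfs directory from its previous membership. A hedged sketch of the usual cleanup, run on gluster02ib against the brick path from the post (double-check the path before deleting anything):
setfattr -x trusted.glusterfs.volume-id /gdata/brick2/scratch   # drop the old volume binding
setfattr -x trusted.gfid /gdata/brick2/scratch                  # drop the root gfid
rm -rf /gdata/brick2/scratch/.glusterfs                         # remove internal metadata
getfattr -m . -d -e hex /gdata/brick2/scratch                   # verify no trusted.* xattrs remain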
2018 May 02
0
Turn off replication
...18-04-25 22:08:55.309659] I [MSGID: 106143]
> [glusterd-pmap.c:250:pmap_registry_bind] 0-pmap: adding brick
> /gdata/brick1/scratch.rdma on port 49153
> [2018-04-25 22:08:55.310231] E [MSGID: 106005]
> [glusterd-utils.c:4877:glusterd_brick_start] 0-management: Unable to start
> brick gluster02ib:/gdata/brick1/scratch
> [2018-04-25 22:08:55.310275] E [MSGID: 106074]
> [glusterd-brick-ops.c:2493:glusterd_op_add_brick] 0-glusterd: Unable to add
> bricks
> [2018-04-25 22:08:55.310304] E [MSGID: 106123]
> [glusterd-mgmt.c:294:gd_mgmt_v3_commit_fn] 0-management: Add-brick commit...
2018 Apr 23
4
Problems since 3.12.7: invisible files, strange rebalance size, setxattr failed during rebalance and broken unix rights
Hi,
after 2 years running GlusterFS without major problems we're facing
some strange errors lately.
After updating to 3.12.7 some users reported at least 4 broken
directories with some invisible files. The files are on the bricks and
don't start with a dot, but aren't visible in "ls". Clients can still
interact with them by using the explicit path.
More information:
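A common way to narrow down problems like this is to compare the client view with a brick and inspect the parent directory's xattrs. A sketch with made-up paths (/mnt/volume and /bricks/brick1/data are placeholders, not taken from this thread):
ls /mnt/volume/brokendir                                  # affected file is missing from the listing
stat /mnt/volume/brokendir/invisible_file                 # but an explicit path still works
getfattr -m . -d -e hex /bricks/brick1/data/brokendir     # gfid and dht layout xattrs on one brick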
2018 Mar 06
0
Multiple Volumes over different Interfaces
Hi,
I'm trying to create two gluster volumes over two nodes with two
separate networks:
The names are in the hosts file of each node:
root@gluster01:~# cat /etc/hosts
127.0.0.1 localhost
127.0.1.1 gluster01.peiker-cee.de gluster01
10.0.2.54 gluster02g1.peiker-cee.de gluster02g1
10.0.7.54 gluster02g2.peiker-cee.de gluster02g2
10.0.2.53 gluster01g1.peiker-cee.de gluster01g1
10.0.7.53 gluster01g2.peiker-cee.de gluster01g2
Then I peer:
root@gluster01:~# gluster peer probe gluster02g1
peer probe: succ...
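A sketch of where this setup usually goes next, assuming each volume should stick to one hostname pair per network (the brick paths and the replica count below are made up for illustration). glusterd identifies peers by UUID, so probing the second hostname of an already-probed node records it as an additional address rather than a new peer:
gluster peer probe gluster02g2                    # same peer as gluster02g1, second hostname
gluster peer status                               # both names should appear under one UUID
gluster volume create vol1 replica 2 gluster01g1:/bricks/vol1 gluster02g1:/bricks/vol1
gluster volume create vol2 replica 2 gluster01g2:/bricks/vol2 gluster02g2:/bricks/vol2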
2018 Apr 12
2
Turn off replication
...ation you have provided me, I would like to make sure
> that I'm running the right commands.
>
> 1. gluster volume heal scratch info
>
If the count is non zero, trigger the heal and wait for heal info count to
become zero.
> 2. gluster volume remove-brick scratch replica 1
> gluster02ib:/gdata/brick1/scratch gluster02ib:/gdata/brick2/scratch force
>
> 3. gluster volume add-brick scratch gluster02ib:/gdata/brick1/scratch
> gluster02ib:/gdata/brick2/scratch
>
>
> Based on the configuration I have, Brick 1 from Node A and B are tied
> together and Brick...
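Pieced together and cleaned of the formatting noise, the sequence being discussed looks roughly like this; the volume and brick names come from the thread, the final info check is an addition:
gluster volume heal scratch info                  # proceed only once all counts are zero
gluster volume remove-brick scratch replica 1 \
    gluster02ib:/gdata/brick1/scratch gluster02ib:/gdata/brick2/scratch force
gluster volume add-brick scratch \
    gluster02ib:/gdata/brick1/scratch gluster02ib:/gdata/brick2/scratch
gluster volume info scratch                       # should now report a plain distribute volume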
2013 May 10
2
Self-heal and high load
...ating to each other
We then have multiple other servers that store and retrieve files on
Gluster using a local glusterfs mount point.
Only 1 data centre is active at any one time
The Gluster servers are VMs on a Xen hypervisor.
All our systems are CentOS 5
Gluster 3.3.1 (I've also tried 3.3.2)
gluster02 ~ gluster volume info rmfs
Volume Name: volume1
Type: Replicate
Volume ID: 3fef44e1-e840-452e-b16b-a9fc698e7dfd
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: gluster01:/mnt/store1
Brick2: gluster02:/mnt/store1
Options Reconfigured:
nfs.disable: off
auth.allow: 172...
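When self-heal load is the concern on a replica pair like this, the heal backlog is usually the first thing to check. A minimal sketch using the volume name from the output above:
gluster volume heal volume1 info                  # entries still pending self-heal, per brick
gluster volume heal volume1 info split-brain      # entries the self-heal daemon cannot resolve on its own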
2017 Sep 14
0
GlusterFS doesn't expose iSCSI target for Windows server
Hi all
Question 1:
I followed these instructions: https://github.com/gluster/gluster-block. I use two
nodes, gluster01 (192.168.101.110) and gluster02 (192.168.101.111), and created one
gluster volume (block-storage). Then I used gluster-block to create the block
storage (block-store/win):
[root@gluster01 ~]# gluster-block create block-store/win ha 2
192.168.101.110,192.168.101.111 40GiB
IQN: iqn.2016-12.org.gluster-block:5b1b5077-50d0-49ce-bfca-6db4656...
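Before troubleshooting the Windows initiator side, it is worth confirming what gluster-block actually exported on the Gluster nodes. A sketch, assuming the volume and block names used above (targetcli shows the underlying tcmu-runner target):
gluster-block list block-storage                  # block devices hosted on this volume
gluster-block info block-storage/win              # IQN, size, HA count and portal addresses
targetcli ls                                      # confirm the iSCSI target exists in the local target config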
2011 Oct 25
1
problems with gluster 3.2.4
Hi, we have 4 test machines (gluster01 to gluster04).
I've created a replicated volume with the 4 machines.
Then on the client machine i've executed:
mount -t glusterfs gluster01:/volume01 /mnt/gluster
And everything works ok.
The main problem occurs on every client machine where I do:
umount /mnt/gluster
and then
mount -t glusterfs gluster01:/volume01 /mnt/gluster
The client
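When a remount misbehaves like this, the client log for that mount point is the usual place to look, and a backup volfile server can rule out a single-server volfile fetch problem. A sketch; the option is called backupvolfile-server in older mount helpers and backup-volfile-servers in newer ones:
mount -t glusterfs -o backupvolfile-server=gluster02 gluster01:/volume01 /mnt/gluster
tail -f /var/log/glusterfs/mnt-gluster.log        # client log, named after the mount point with slashes turned into dashes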
2018 Jan 18
0
issues after botched update
...Node: gluster01
Number of Scrubbed files: 150198
Number of Skipped files: 190
Last completed scrub time: Scrubber pending to complete.
Duration of last scrub (D:M:H:M:S): 0:0:0:0
Error count: 0
=========================================================
Node: gluster02
Number of Scrubbed files: 0
Number of Skipped files: 153939
Last completed scrub time: Scrubber pending to complete.
Duration of last scrub (D:M:H:M:S): 0:0:0:0
Error count: 0
=========================================================
Gluster volume heal has one failed...
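That output comes from the bitrot scrubber. For completeness, the commands usually used to re-check it (the volume name is truncated above, so <VOLNAME> is a placeholder; scrub ondemand needs a recent release):
gluster volume bitrot <VOLNAME> scrub status      # per-node scrubbed/skipped/error counts, as shown above
gluster volume bitrot <VOLNAME> scrub ondemand    # trigger a scrub run now instead of waiting for the schedule
gluster volume heal <VOLNAME> info                # cross-check the failed heal mentioned at the end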