Displaying 20 results from an estimated 25 matches for "gluster0".
2011 Oct 17
1
brick out of space, unmounted brick
...e source code to change these behaviors. My experiences are with glusterfs 3.2.4 on CentOS 6 64-bit.
Suppose I have a Gluster volume made up of four 1 MB bricks, like this
Volume Name: test
Type: Distributed-Replicate
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: gluster0-node0:/brick0
Brick2: gluster0-node1:/brick1
Brick3: gluster0-node0:/brick2
Brick4: gluster0-node1:/brick3
The mounted Gluster volume will report that the size of the volume is 2 MB, which creates a false impression that it can hold a 2 MB file. This isn't too bad, since people are used to a f...
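A minimal sketch of reproducing that layout, assuming the four brick directories already exist as separate small filesystems on gluster0-node0 and gluster0-node1 (the mount point /mnt/test is a placeholder):
  # create the 2 x 2 distributed-replicate volume described above
  gluster volume create test replica 2 transport tcp \
      gluster0-node0:/brick0 gluster0-node1:/brick1 \
      gluster0-node0:/brick2 gluster0-node1:/brick3
  gluster volume start test
  # mount it and check the size the client reports
  mkdir -p /mnt/test
  mount -t glusterfs gluster0-node0:/test /mnt/test
  df -h /mnt/test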
2018 Jan 05
0
Different results in setting atime
...4052 Links: 1
Access: (0664/-rw-rw-r--)  Uid: ( 1000/ davids)   Gid: ( 1000/ davids)
Access: 1970-01-01 00:01:00.000000000 +0100
Modify: 2018-01-05 09:06:34.567406414 +0100
Change: 2018-01-05 09:10:22.656813779 +0100
 Birth: -
2. Setting atime from FUSE Mount:
[davids at gluster-test1 gluster0]# touch test
[davids at gluster-test1 gluster0]# sudo touch -a -t 197001010001 test
[davids at gluster-test1 gluster0]# stat test
  File: 'test'
  Size: 0          Blocks: 0          IO Block: 131072 regular empty file
Device: 28h/40d    Inode: 11420445633475641741   Links: 1
Access: (0644/-rw-r--r--)...
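The same check can be repeated on any FUSE mount of the volume; a sketch, with /mnt/gluster0 as a hypothetical mount point:
  # create a file, then try to set its atime explicitly and inspect the result
  cd /mnt/gluster0
  touch test
  touch -a -t 197001010001 test    # request atime 1970-01-01 00:01 local time
  stat test                        # compare the Access: line with the value requested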
2017 Sep 17
2
Volume Heal issue
Hi all,
I have a replica 3 with 1 arbiter.
For the last few days I have seen that one file on a volume is always showing as needing
healing:
gluster volume heal vms info
Brick gluster0:/gluster/vms/brick
Status: Connected
Number of entries: 0
Brick gluster1:/gluster/vms/brick
Status: Connected
Number of entries: 0
Brick gluster2:/gluster/vms/brick
*<gfid:66d3468e-00cf-44dc-a835-7624da0c5370>*
Status: Connected
Number of entries: 1
While no split brain is reported:
[root...
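One way to find out which file a bare gfid entry refers to is to follow its link under the brick's .glusterfs directory; a sketch run on gluster2, assuming the entry is a regular file (the find invocation is illustrative, not taken from the thread):
  BRICK=/gluster/vms/brick
  GFID=66d3468e-00cf-44dc-a835-7624da0c5370
  # heal info entries live under .glusterfs/<first-2-hex>/<next-2-hex>/<gfid>
  ls -l $BRICK/.glusterfs/66/d3/$GFID
  # a regular file shares its inode with the real path on the brick
  find $BRICK -samefile $BRICK/.glusterfs/66/d3/$GFID -not -path '*/.glusterfs/*'
  # inspect its AFR xattrs on each brick, then retrigger the index self-heal
  getfattr -d -m . -e hex <file-path-on-brick>
  gluster volume heal vms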
2018 Feb 05
2
Dir split brain resolution
Hi all,
I have a split brain issue and have the following situation:
gluster volume heal engine info split-brain
Brick gluster0:/gluster/engine/brick
/ad1f38d7-36df-4cee-a092-ab0ce1f98ce9/ha_agent
Status: Connected
Number of entries in split-brain: 1
Brick gluster1:/gluster/engine/brick
Status: Connected
Number of entries in split-brain: 0
cd ha_agent/
[root at v0 ha_agent]# ls -al
ls: cannot access hosted-engine.metadata...
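The usual first step here, and what the reply excerpted further down asks for, is to compare the directory's state on both data bricks; a sketch using the brick path from the heal output above:
  # run on gluster0 and on gluster1, against the brick path, not the FUSE mount
  stat /gluster/engine/brick/ad1f38d7-36df-4cee-a092-ab0ce1f98ce9/ha_agent
  getfattr -d -m . -e hex /gluster/engine/brick/ad1f38d7-36df-4cee-a092-ab0ce1f98ce9/ha_agent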
2017 Sep 17
0
Volume Heal issue
...moment)
On Sun, Sep 17, 2017 at 11:30 AM, Alex K <rightkicktech at gmail.com> wrote:
> Hi all,
>
> I have a replica 3 with 1 arbiter.
>
> For the last few days I have seen that one file on a volume is always showing as needing
> healing:
>
> gluster volume heal vms info
> Brick gluster0:/gluster/vms/brick
> Status: Connected
> Number of entries: 0
>
> Brick gluster1:/gluster/vms/brick
> Status: Connected
> Number of entries: 0
>
> Brick gluster2:/gluster/vms/brick
> *<gfid:66d3468e-00cf-44dc-a835-7624da0c5370>*
> Status: Connected
> Number of...
2018 Feb 05
2
Dir split brain resolution
...f stat & getfattr -d -m . -e hex
<file-path-on-brick> from both the bricks.
Regards,
Karthik
On Mon, Feb 5, 2018 at 5:03 PM, Alex K <rightkicktech at gmail.com> wrote:
> After stopping/starting the volume I have:
>
> gluster volume heal engine info split-brain
> Brick gluster0:/gluster/engine/brick
> <gfid:bb675ea6-0622-4852-9e59-27a4c93ac0f8>
> Status: Connected
> Number of entries in split-brain: 1
>
> Brick gluster1:/gluster/engine/brick
> Status: Connected
> Number of entries in split-brain: 0
>
> gluster volume heal engine split-brai...
2018 Feb 05
0
Dir split brain resolution
After stopping/starting the volume I have:
gluster volume heal engine info split-brain
Brick gluster0:/gluster/engine/brick
<gfid:bb675ea6-0622-4852-9e59-27a4c93ac0f8>
Status: Connected
Number of entries in split-brain: 1
Brick gluster1:/gluster/engine/brick
Status: Connected
Number of entries in split-brain: 0
gluster volume heal engine split-brain latest-mtime
gfid:bb675ea6-0622-4852-9e59...
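For reference, the heal command offers a few resolution policies; a sketch with <GFID> standing in for the gfid, which is truncated in the excerpt above:
  # keep the copy with the newest modification time
  gluster volume heal engine split-brain latest-mtime gfid:<GFID>
  # or explicitly choose one brick as the source
  gluster volume heal engine split-brain source-brick gluster0:/gluster/engine/brick gfid:<GFID>
  # confirm nothing is left afterwards
  gluster volume heal engine info split-brain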
2018 Feb 05
0
Dir split brain resolution
...le-path-on-brick> from both the bricks.
>
> Regards,
> Karthik
>
> On Mon, Feb 5, 2018 at 5:03 PM, Alex K <rightkicktech at gmail.com> wrote:
>
>> After stopping/starting the volume I have:
>>
>> gluster volume heal engine info split-brain
>> Brick gluster0:/gluster/engine/brick
>> <gfid:bb675ea6-0622-4852-9e59-27a4c93ac0f8>
>> Status: Connected
>> Number of entries in split-brain: 1
>>
>> Brick gluster1:/gluster/engine/brick
>> Status: Connected
>> Number of entries in split-brain: 0
>>
>> g...
2017 Oct 24
2
brick is down but gluster volume status says it's fine
...-----------------------
> Brick gluster-2:/export/brick7/digitalcorpo
> ra                                          49156     0          Y       125708
> Brick gluster1.vsnet.gmu.edu:/export/brick7
> /digitalcorpora                             49152     0          Y       12345
> Brick gluster0:/export/brick7/digitalcorpor
> a                                           49152     0          Y       16098
> Self-heal Daemon on localhost               N/A       N/A        Y       126625
> Self-heal Daemon on gluster1                N/A       N/A        Y       15405
> Self-heal Daemo...
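When the status table and the actual brick state disagree, it can help to cross-check the brick process directly on the affected node; a sketch, with the volume name digitalcorpora inferred from the brick paths above:
  # is the PID reported for the gluster0 brick really a running glusterfsd?
  ps -p 16098 -o pid,cmd
  # per-brick detail, including online state and free space
  gluster volume status digitalcorpora detail
  # restart only bricks whose processes have died, leaving running ones alone
  gluster volume start digitalcorpora force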
2017 Oct 24
0
brick is down but gluster volume status says it's fine
...> Brick gluster-2:/export/brick7/digitalcorpo
>> ra                                          49156     0          Y       125708
>> Brick gluster1.vsnet.gmu.edu:/export/brick7
>> /digitalcorpora                             49152     0          Y       12345
>> Brick gluster0:/export/brick7/digitalcorpor
>> a                                           49152     0          Y       16098
>> Self-heal Daemon on localhost               N/A       N/A        Y       126625
>> Self-heal Daemon on gluster1                N/A       N/A        Y       15...
2013 Mar 20
1
About adding bricks ...
Hi @all,
I've created a Distributed-Replicated Volume consisting of 4 bricks on
2 servers.
# gluster volume create glusterfs replica 2 transport tcp \
gluster0{0..1}:/srv/gluster/exp0 gluster0{0..1}:/srv/gluster/exp1
Now I have the following very nice replication schema:
+-------------+     +-------------+
|  gluster00  |     |  gluster01  |
+-------------+     +-------------+
| exp0 | exp1 |     | exp0 | exp1 |
+------+------+     +------+------+
    |      |...
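To grow this volume while keeping the same cross-server replication schema, bricks would be added in replica pairs; a sketch with a hypothetical new brick directory exp2 on both servers:
  # each consecutive pair in the list becomes one replica set, as with exp0/exp1
  gluster volume add-brick glusterfs \
      gluster00:/srv/gluster/exp2 gluster01:/srv/gluster/exp2
  # spread existing data onto the new bricks
  gluster volume rebalance glusterfs start
  gluster volume rebalance glusterfs status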
2017 Sep 04
2
Slow performance of gluster volume
...ations).
The full details of the volume are below. Any advice on what can be tweaked
will be highly appreciated.
Volume Name: vms
Type: Replicate
Volume ID: 4513340d-7919-498b-bfe0-d836b5cea40b
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: gluster0:/gluster/vms/brick
Brick2: gluster1:/gluster/vms/brick
Brick3: gluster2:/gluster/vms/brick (arbiter)
Options Reconfigured:
cluster.granular-entry-heal: enable
performance.strict-o-direct: on
network.ping-timeout: 30
storage.owner-gid: 36
storage.owner-uid: 36
user.cifs: off
features.shard: on
clust...
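Before tuning further, it is usually worth confirming the effective option values and capturing per-brick latency numbers; a sketch (the set command at the end is illustrative, not a recommendation from the thread):
  # show every option as the volume currently resolves it
  gluster volume get vms all
  # collect per-brick FOP latency statistics while the workload runs
  gluster volume profile vms start
  gluster volume profile vms info
  gluster volume profile vms stop
  # options are then changed one at a time, for example:
  gluster volume set vms client.event-threads 4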
2017 Sep 05
3
Slow performance of gluster volume
...setup is a set of 3 CentOS 7.3 servers and oVirt 4.1, using gluster as
storage.
Below are the current settings:
Volume Name: vms
Type: Replicate
Volume ID: 4513340d-7919-498b-bfe0-d836b5cea40b
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: gluster0:/gluster/vms/brick
Brick2: gluster1:/gluster/vms/brick
Brick3: gluster2:/gluster/vms/brick (arbiter)
Options Reconfigured:
server.event-threads: 4
client.event-threads: 4
performance.client-io-threads: on
features.shard-block-size: 512MB
cluster.granular-entry-heal: enable
performance.strict-o-dire...
2017 Sep 06
2
Slow performance of gluster volume
...rrent settings:
>>
>>
>> Volume Name: vms
>> Type: Replicate
>> Volume ID: 4513340d-7919-498b-bfe0-d836b5cea40b
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 1 x (2 + 1) = 3
>> Transport-type: tcp
>> Bricks:
>> Brick1: gluster0:/gluster/vms/brick
>> Brick2: gluster1:/gluster/vms/brick
>> Brick3: gluster2:/gluster/vms/brick (arbiter)
>> Options Reconfigured:
>> server.event-threads: 4
>> client.event-threads: 4
>> performance.client-io-threads: on
>> features.shard-block-size: 512M...
2017 Sep 05
0
Slow performance of gluster volume
...y advice on what can be
> tweaked will be highly appreciated.
>
> Volume Name: vms
> Type: Replicate
> Volume ID: 4513340d-7919-498b-bfe0-d836b5cea40b
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x (2 + 1) = 3
> Transport-type: tcp
> Bricks:
> Brick1: gluster0:/gluster/vms/brick
> Brick2: gluster1:/gluster/vms/brick
> Brick3: gluster2:/gluster/vms/brick (arbiter)
> Options Reconfigured:
> cluster.granular-entry-heal: enable
> performance.strict-o-direct: on
> network.ping-timeout: 30
> storage.owner-gid: 36
> storage.owner-uid: 36...
2017 Sep 06
0
Slow performance of gluster volume
...t;>> Volume Name: vms
>>> Type: Replicate
>>> Volume ID: 4513340d-7919-498b-bfe0-d836b5cea40b
>>> Status: Started
>>> Snapshot Count: 0
>>> Number of Bricks: 1 x (2 + 1) = 3
>>> Transport-type: tcp
>>> Bricks:
>>> Brick1: gluster0:/gluster/vms/brick
>>> Brick2: gluster1:/gluster/vms/brick
>>> Brick3: gluster2:/gluster/vms/brick (arbiter)
>>> Options Reconfigured:
>>> server.event-threads: 4
>>> client.event-threads: 4
>>> performance.client-io-threads: on
>>> fea...
2017 Sep 06
2
Slow performance of gluster volume
...>>> Type: Replicate
>>>> Volume ID: 4513340d-7919-498b-bfe0-d836b5cea40b
>>>> Status: Started
>>>> Snapshot Count: 0
>>>> Number of Bricks: 1 x (2 + 1) = 3
>>>> Transport-type: tcp
>>>> Bricks:
>>>> Brick1: gluster0:/gluster/vms/brick
>>>> Brick2: gluster1:/gluster/vms/brick
>>>> Brick3: gluster2:/gluster/vms/brick (arbiter)
>>>> Options Reconfigured:
>>>> server.event-threads: 4
>>>> client.event-threads: 4
>>>> performance.client-io-thre...
2017 Sep 05
0
Slow performance of gluster volume
...as
> storage.
>
> Below are the current settings:
>
>
> Volume Name: vms
> Type: Replicate
> Volume ID: 4513340d-7919-498b-bfe0-d836b5cea40b
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x (2 + 1) = 3
> Transport-type: tcp
> Bricks:
> Brick1: gluster0:/gluster/vms/brick
> Brick2: gluster1:/gluster/vms/brick
> Brick3: gluster2:/gluster/vms/brick (arbiter)
> Options Reconfigured:
> server.event-threads: 4
> client.event-threads: 4
> performance.client-io-threads: on
> features.shard-block-size: 512MB
> cluster.granular-entr...
2017 Sep 10
2
Slow performance of gluster volume
...> storage.
>
> Below are the current settings:
>
>
> Volume Name: vms
> Type: Replicate
> Volume ID: 4513340d-7919-498b-bfe0-d836b5cea40b
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x (2 + 1) = 3
> Transport-type: tcp
> Bricks:
> Brick1: gluster0:/gluster/vms/brick
> Brick2: gluster1:/gluster/vms/brick
> Brick3: gluster2:/gluster/vms/brick (arbiter)
> Options Reconfigured:
> server.event-threads: 4
> client.event-threads: 4
> performance.client-io-threads: on
> features.shard-block-size: 512MB
> cluster.granular-entr...
2017 Sep 08
0
Slow performance of gluster volume
...>>> Type: Replicate
>>>> Volume ID: 4513340d-7919-498b-bfe0-d836b5cea40b
>>>> Status: Started
>>>> Snapshot Count: 0
>>>> Number of Bricks: 1 x (2 + 1) = 3
>>>> Transport-type: tcp
>>>> Bricks:
>>>> Brick1: gluster0:/gluster/vms/brick
>>>> Brick2: gluster1:/gluster/vms/brick
>>>> Brick3: gluster2:/gluster/vms/brick (arbiter)
>>>> Options Reconfigured:
>>>> server.event-threads: 4
>>>> client.event-threads: 4
>>>> performance.client-io-thre...