Displaying 20 results from an estimated 573 matches for "brick3".
2017 Jul 06
2
op-version for reset-brick (Was: Re: [ovirt-users] Upgrading HC from 4.0 to 4.1)
...; volume in fact I have this
[root at ovirt01 ~]# gluster volume info export
Volume Name: export
Type: Replicate
Volume ID: b00e5839-becb-47e7-844f-6ce6ce1b7153
Status: Started
Snapshot Count: 0
Number of Bricks: 0 x (2 + 1) = 1
Transport-type: tcp
Bricks:
Brick1: gl01.localdomain.local:/gluster/brick3/export
Options Reconfigured:
...
While on the other two nodes
[root at ovirt02 ~]# gluster volume info export
Volume Name: export
Type: Replicate
Volume ID: b00e5839-becb-47e7-844f-6ce6ce1b7153
Status: Started
Snapshot Count: 0
Number of Bricks: 0 x (2 + 1) = 2
Transport-type: tcp
Bricks:
Brick...
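For context, the brick-count line in gluster volume info reads distribute-count x (replica-count + arbiter-count) = total bricks, so a healthy single-subvolume arbiter volume would be expected to report something like:
Number of Bricks: 1 x (2 + 1) = 3
The "0 x (2 + 1) = 1" and "0 x (2 + 1) = 2" values quoted above are the inconsistency being discussed in this thread.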
2017 Jul 05
1
op-version for reset-brick (Was: Re: [ovirt-users] Upgrading HC from 4.0 to 4.1)
...2017 at 7:42 AM, Sahina Bose <sabose at redhat.com> wrote:
>>
>>>
>>>
>>>> ...
>>>>
>>>> then the commands I need to run would be:
>>>>
>>>> gluster volume reset-brick export ovirt01.localdomain.local:/gluster/brick3/export
>>>> start
>>>> gluster volume reset-brick export ovirt01.localdomain.local:/gluster/brick3/export
>>>> gl01.localdomain.local:/gluster/brick3/export commit force
>>>>
>>>> Correct?
>>>>
>>>
>>> Yes, co...
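Stripped of the poster's hostnames and brick paths, the quoted commands follow the general reset-brick shape (a sketch of the documented sequence, not output from this thread):
gluster volume reset-brick <VOLNAME> <HOST:BRICKPATH> start
gluster volume reset-brick <VOLNAME> <HOST:BRICKPATH> <NEWHOST:NEWBRICKPATH> commit force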
2017 Jul 05
2
op-version for reset-brick (Was: Re: [ovirt-users] Upgrading HC from 4.0 to 4.1)
...>
>
> On Wed, Jul 5, 2017 at 7:42 AM, Sahina Bose <sabose at redhat.com> wrote:
>
>>
>>
>>> ...
>>>
>>> then the commands I need to run would be:
>>>
>>> gluster volume reset-brick export ovirt01.localdomain.local:/gluster/brick3/export
>>> start
>>> gluster volume reset-brick export ovirt01.localdomain.local:/gluster/brick3/export
>>> gl01.localdomain.local:/gluster/brick3/export commit force
>>>
>>> Correct?
>>>
>>
>> Yes, correct. gl01.localdomain.local sh...
2017 Jul 05
1
op-version for reset-brick (Was: Re: [ovirt-users] Upgrading HC from 4.0 to 4.1)
...<sabose at redhat.com> wrote:
>>>
>>>>
>>>>
>>>>> ...
>>>>>
>>>>> then the commands I need to run would be:
>>>>>
>>>>> gluster volume reset-brick export ovirt01.localdomain.local:/gluster/brick3/export
>>>>> start
>>>>> gluster volume reset-brick export ovirt01.localdomain.local:/gluster/brick3/export
>>>>> gl01.localdomain.local:/gluster/brick3/export commit force
>>>>>
>>>>> Correct?
>>>>>
>>&...
2017 Jul 06
1
op-version for reset-brick (Was: Re: [ovirt-users] Upgrading HC from 4.0 to 4.1)
...I have
> to set debug for the nodes too?)
>
You have to set the log level to debug for glusterd instance where the
commit fails and share the glusterd log of that particular node.
>
>
> [root at ovirt01 ~]# gluster volume reset-brick export
> gl01.localdomain.local:/gluster/brick3/export start
> volume reset-brick: success: reset-brick start operation successful
>
> [root at ovirt01 ~]# gluster volume reset-brick export
> gl01.localdomain.local:/gluster/brick3/export ovirt01.localdomain.local:/gluster/brick3/export
> commit force
> volume reset-brick: faile...
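A minimal sketch of the debugging step suggested above, assuming an RPM-based install where glusterd reads LOG_LEVEL from /etc/sysconfig/glusterd (file location and log name may differ by distribution and version):
# on the node where the commit force fails:
# set LOG_LEVEL=DEBUG in /etc/sysconfig/glusterd, then
systemctl restart glusterd
# reproduce the failing reset-brick commit and share
# /var/log/glusterfs/glusterd.log (etc-glusterfs-glusterd.vol.log on older builds)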
2023 Jul 05
1
remove_me files building up
.../data/glusterfs/gv1/brick1/brick/mytute
18M /data/glusterfs/gv1/brick1/brick/.shard
0 /data/glusterfs/gv1/brick1/brick/.glusterfs-anonymous-inode-d3d1fdec-7df9-4f71-b9fc-660d12c2a046
2.3G /data/glusterfs/gv1/brick1/brick
root at uk3-prod-gfs-arb-01:~# du -h -x -d 1 /data/glusterfs/gv1/brick3/brick
11G /data/glusterfs/gv1/brick3/brick/.glusterfs
15M /data/glusterfs/gv1/brick3/brick/scalelite-recordings
460K /data/glusterfs/gv1/brick3/brick/mytute
151M /data/glusterfs/gv1/brick3/brick/.shard
0 /data/glusterfs/gv1/brick3/brick/.glusterfs-anonymous-inode-d3d1fdec-7df9-4...
2023 Jun 30
1
remove_me files building up
...cleared, and also increased the server spec (CPU/Mem) as this seemed to be the potential cause.
Since then however, we've seen some strange behaviour, whereby a lot of 'remove_me' files are building up under `/data/glusterfs/gv1/brick2/brick/.shard/.remove_me/` and `/data/glusterfs/gv1/brick3/brick/.shard/.remove_me/`. This is causing the arbiter to run out of space on brick2 and brick3, as the remove_me files are constantly increasing.
brick1 appears to be fine, the disk usage increases throughout the day and drops down in line with the trend of the brick on the data nodes. We see the...
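As a quick illustrative check (paths as reported above), counting the marker files on each affected brick shows how fast they are accumulating:
ls /data/glusterfs/gv1/brick2/brick/.shard/.remove_me/ | wc -l
ls /data/glusterfs/gv1/brick3/brick/.shard/.remove_me/ | wc -l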
2023 Jul 04
1
remove_me files building up
Thanks for the clarification.
That behaviour is quite weird as arbiter bricks should hold only metadata.
What does the following show on host uk3-prod-gfs-arb-01:
du -h -x -d 1 /data/glusterfs/gv1/brick1/brick
du -h -x -d 1 /data/glusterfs/gv1/brick3/brick
du -h -x -d 1 /data/glusterfs/gv1/brick2/brick
If indeed the shards are taking space - that is a really strange situation. From which version did you upgrade and which one is now? I assume all gluster TSP members (the servers) have the same version, but it's nice to double check.
Does the arc...
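One way to answer the version questions on each server is a sketch along these lines (gluster volume get all cluster.op-version is available on reasonably recent releases; adjust if the installed version predates it):
gluster --version
gluster volume get all cluster.op-version
gluster volume get all cluster.max-op-version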
2023 Jul 03
1
remove_me files building up
...cleared, and also increased the server spec (CPU/Mem) as this seemed to be the potential cause.
Since then however, we've seen some strange behaviour, whereby a lot of 'remove_me' files are building up under `/data/glusterfs/gv1/brick2/brick/.shard/.remove_me/` and `/data/glusterfs/gv1/brick3/brick/.shard/.remove_me/`. This is causing the arbiter to run out of space on brick2 and brick3, as the remove_me files are constantly increasing.
brick1 appears to be fine, the disk usage increases throughout the day and drops down in line with the trend of the brick on the data nodes. We see the...
2017 Dec 21
3
Wrong volume size with df
...cted
Number of entries: 0
Brick pod-sjc1-gluster2:/data/brick1/gv0
Status: Connected
Number of entries: 0
Brick pod-sjc1-gluster1:/data/brick2/gv0
Status: Connected
Number of entries: 0
Brick pod-sjc1-gluster2:/data/brick2/gv0
Status: Connected
Number of entries: 0
Brick pod-sjc1-gluster1:/data/brick3/gv0
Status: Connected
Number of entries: 0
Brick pod-sjc1-gluster2:/data/brick3/gv0
Status: Connected
Number of entries: 0
> 2 - /var/log/glusterfs - provide log file with mountpoint-volumename.log
Attached
> 3 - output of gluster volume <volname> info
[root at pod-sjc1-gluster2 ~]...
2018 Jan 02
0
Wrong volume size with df
...brick1/gv0
> Status: Connected
> Number of entries: 0
>
> Brick pod-sjc1-gluster1:/data/brick2/gv0
> Status: Connected
> Number of entries: 0
>
> Brick pod-sjc1-gluster2:/data/brick2/gv0
> Status: Connected
> Number of entries: 0
>
> Brick pod-sjc1-gluster1:/data/brick3/gv0
> Status: Connected
> Number of entries: 0
>
> Brick pod-sjc1-gluster2:/data/brick3/gv0
> Status: Connected
> Number of entries: 0
>
> > 2 - /var/log/glusterfs - provide log file with mountpoint-volumename.log
>
> Attached
>
> > 3 - output of gluster vo...
2023 Jul 04
1
remove_me files building up
...ze=4096 ascii-ci=0, ftype=1
log =internal log bsize=4096 blocks=2560, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
root at uk3-prod-gfs-arb-01:~# xfs_info /data/glusterfs/gv1/brick3
meta-data=/dev/sdd1 isize=512 agcount=13, agsize=327616 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=1, sparse=1, rmapbt=0
= reflink=1
data = bsi...
2023 Jul 04
1
remove_me files building up
...they're then archived off overnight.
The issue we're seeing isn't with the inodes running out of space, but the actual disk space on the arb server running low.
This is the df -h output for the bricks on the arb server:
/dev/sdd1 15G 12G 3.3G 79% /data/glusterfs/gv1/brick3
/dev/sdc1 15G 2.8G 13G 19% /data/glusterfs/gv1/brick1
/dev/sde1 15G 14G 1.6G 90% /data/glusterfs/gv1/brick2
And this is the df -hi output for the bricks on the arb server:
/dev/sdd1 7.5M 2.7M 4.9M 35% /data/glusterfs/gv1/brick3
/dev/sdc1...
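Given that brick3 and brick2 are the ones filling up, a quick look at how much of that space sits under the shard delete-marker directories (paths taken from earlier in the thread) could be:
du -sh /data/glusterfs/gv1/brick3/brick/.shard/.remove_me/
du -sh /data/glusterfs/gv1/brick2/brick/.shard/.remove_me/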
2023 Jul 04
1
remove_me files building up
...size=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
root at uk3-prod-gfs-arb-01:~# xfs_info /data/glusterfs/gv1/brick3
meta-data=/dev/sdd1              isize=512    agcount=13, agsize=327616 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1
data     =                       bsize=40...
2018 Feb 09
1
Tiering Volumns
...Glus1 ~]# gluster volume info
Volume Name: ColdTier
Type: Replicate
Volume ID: 1647487b-c05a-4cf7-81a7-08102ae348b6
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: Glus1:/data/glusterfs/ColdTier/brick1
Brick2: Glus2:/data/glusterfs/ColdTier/brick2
Brick3: Glus3:/data/glusterfs/ColdTier/brick3
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
Volume Name: HotTier
Type: Replicate
Volume ID: 6294035d-a199-4574-be11-d48ab7c4b33c
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transpor...
2018 Jan 24
1
Split brain directory
...615fa8a5c374/raw/52ff8dd6a9cc8ba09b7f258aa85743d2854f9acc/splitinfo.txt
I discovered the split directory by the extended attributes (lines
172,173, 291,292,
trusted.afr.dirty=0x000000000000000000000000
trusted.afr.vol-video-client-13=0x000000000000000000000000
Seen on the bricks
* /bricks/video/brick3/safe/video.mysite.it/htdocs/ on glusterserver05
(lines 278 to 294)
* /bricks/video/brick3/safe/video.mysite.it/htdocs/ on glusterserver03
(lines 159 to 175)
Reading the documentation about afr extended attributes, this situation
seems unclear (Docs from [1] and [2])
as own changelog is 0, same as...
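The AFR changelog attributes quoted above are typically collected by running getfattr directly against each brick's copy of the directory, e.g. (path from the report; attribute names vary per volume):
getfattr -d -m . -e hex /bricks/video/brick3/safe/video.mysite.it/htdocs/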
2018 Feb 04
2
halo not work as desired!!!
...on, each DC has 3 servers, I
have created a glusterfs volume with 4 replicas, this is the glusterfs volume info
output:
Volume Name: test-halo
Type: Replicate
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 4 = 4
Transport-type: tcp
Bricks:
Brick1: 10.0.0.1:/mnt/test1
Brick2: 10.0.0.3:/mnt/test2
Brick3: 10.0.0.5:/mnt/test3
Brick4: 10.0.0.6:/mnt/test4
Options Reconfigured:
cluster.halo-shd-max-latency: 5
cluster.halo-max-latency: 10
cluster.quorum-count: 2
cluster.quorum-type: fixed
cluster.halo-enabled: yes
transport.address-family: inet
nfs.disable: on
bricks with ip 10.0.0.1 & 10.0.0.3 are...
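When halo placement does not behave as expected, confirming the values actually in effect on the volume is a reasonable first check, e.g. (volume name taken from the post):
gluster volume get test-halo all | grep -E 'halo|quorum'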
2018 Jan 12
1
Reading over than the file size on dispersed volume
...elow.
------------------------------------------------------
Volume Name: TEST_VOL
Type: Disperse
Volume ID: be52b68d-ae83-46e3-9527-0e536b867bcc
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (6 + 3) = 9
Transport-type: tcp
Bricks:
Brick1: server1:/data/brick1
Brick2: server2:/data/brick1
Brick3: server3:/data/brick1
Brick4: server1:/data/brick2
Brick5: server2:/data/brick2
Brick6: server3:/data/brick2
Brick7: server1:/data/brick3
Brick8: server2:/data/brick3
Brick9: server3:/data/brick3
Options Reconfigured:
network.ping-timeout: 10
performance.write-behind: on
features.quota-deem-statfs:...
2017 Dec 20
2
Upgrading from Gluster 3.8 to 3.12
...em.
>
>
> Current: 3.8.4
>
> Volume Name: shchst01
> Type: Distributed-Replicate
> Volume ID: bcd53e52-cde6-4e58-85f9-71d230b7b0d3
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 4 x 3 = 12
> Transport-type: tcp
> Bricks:
> Brick1: shchhv01-sto:/data/brick3/shchst01
> Brick2: shchhv02-sto:/data/brick3/shchst01
> Brick3: shchhv03-sto:/data/brick3/shchst01
> Brick4: shchhv01-sto:/data/brick1/shchst01
> Brick5: shchhv02-sto:/data/brick1/shchst01
> Brick6: shchhv03-sto:/data/brick1/shchst01
> Brick7: shchhv02-sto:/data/brick2/shchst01
&g...
2017 Dec 19
2
Upgrading from Gluster 3.8 to 3.12
...like to run my plan through some (more?) educated minds.
>>
>> The current setup is:
>>
>> Volume Name: vol0
>> Distributed-Replicate
>> Number of Bricks: 2 x (2 + 1) = 6
>> Bricks:
>> Brick1: glt01:/vol/vol0
>> Brick2: glt02:/vol/vol0
>> Brick3: glt05:/vol/vol0 (arbiter)
>> Brick4: glt03:/vol/vol0
>> Brick5: glt04:/vol/vol0
>> Brick6: glt06:/vol/vol0 (arbiter)
>>
>> Volume Name: vol1
>> Distributed-Replicate
>> Number of Bricks: 2 x (2 + 1) = 6
>> Bricks:
>> Brick1: glt07:/vol/vol1
>...