Displaying 20 results from an estimated 2367 matches for "brick".
2018 Mar 12
2
Can't heal a volume: "Please check if all brick processes are running."
Hello,
We have a very fresh gluster 3.10.10 installation.
Our volume is created as distributed volume, 9 bricks 96TB in total
(87TB after 10% of gluster disk space reservation)
For some reason I can't "heal" the volume:
# gluster volume heal gv0
Launching heal operation to perform index self heal on volume gv0 has
been unsuccessful on bricks that are down. Please check if all brick
processes are runnin...
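When heal refuses to start like this, the usual first step is `gluster volume status`, whose PID/Online columns show which brick processes (glusterfsd) are down. A minimal sketch of spotting offline bricks from saved status output; the hostnames and sample text below are hypothetical, not from this thread:

```shell
# Canned sample of `gluster volume status gv0` brick lines (illustrative only);
# in practice you would capture the real command's output.
status_output='Brick host1:/data/brick1    49152    0    Y    1234
Brick host2:/data/brick1    N/A      N/A  N    N/A'

# A brick whose PID column reads N/A has no running glusterfsd process; heal
# cannot proceed until it is restarted (e.g. `gluster volume start gv0 force`).
echo "$status_output" | awk '/^Brick/ && $NF == "N/A" {print "offline:", $2}'
```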
2017 Aug 19
2
Add brick to a disperse volume
Hello,
I've been using Gluster for 2 years, but only with distributed volumes.
I'm now trying to set up dispersed volumes to get some redundancy.
I had no problem creating a functional test volume with 4 bricks and 1 redundancy ( Number of Bricks: 1 x (3 + 1) = 4 ).
I also had no problem replacing a supposedly faulty brick with another one.
My problem is that I cannot add a brick to increase the size of the volume as I do with distributed ones. I would like a volume of 5 bricks ( Number of Bricks: 1 x...
2018 Apr 10
0
glusterfs disperse volume input output error
...s/vmfs/slake-test-bck-m1-d1.qcow2
md5sum: /mnt/glfs/vmfs/slake-test-bck-m1-d1.qcow2: Input/output error
Configuration and status of volume is:
# gluster volume info vol1
Volume Name: vol1
Type: Disperse
Volume ID: a7d52933-fccc-4b07-9c3b-5b92f398aa79
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (13 + 2) = 15
Transport-type: tcp
Bricks:
Brick1: glfs-node11.local:/data1/bricks/brick1
Brick2: glfs-node12.local:/data1/bricks/brick1
Brick3: glfs-node13.local:/data1/bricks/brick1
Brick4: glfs-node14.local:/data1/bricks/brick1
Brick5: glfs-node15.local:/data1/bricks/brick1
Brick6: glfs-nod...
2017 Aug 20
0
Add brick to a disperse volume
Hi,
Adding bricks to a disperse volume is very easy, and works the same as for a replica volume.
You just need to add bricks in a multiple of the number of bricks you already have.
So if you have a disperse volume with an n+k configuration, you need to add n+k more bricks.
Example :
If your disperse volume is 4+2, where 2 i...
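The arithmetic above can be sketched as a quick pre-check; the counts are illustrative, and the actual expansion would then be a `gluster volume add-brick` with one full n+k group of bricks followed by a rebalance:

```shell
# Disperse geometry: a volume is built from (n+k)-brick subvolumes, so bricks
# can only be added in whole multiples of n+k. Figures here are illustrative.
n=4; k=2
group=$((n + k))                 # one disperse subvolume = 6 bricks
add=6                            # bricks we plan to add
if [ $((add % group)) -eq 0 ]; then
  echo "ok: adds $((add / group)) new subvolume(s)"
  # e.g.: gluster volume add-brick myvol node7:/b node8:/b ... node12:/b
  #       gluster volume rebalance myvol start
else
  echo "refused: add bricks in multiples of $group"
fi
```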
2017 Nov 09
2
GlusterFS healing questions
Hi,
We ran a test on GlusterFS 3.12.1 with erasure-coded volumes 8+2 with 10
bricks (default config, tested with 100 GB, 200 GB, 400 GB brick sizes, 10 Gbit
NICs)
1.
Tests show that healing takes about double the time for 200 GB vs
100 GB, and a bit under double for 400 GB vs 200 GB brick sizes. Is this
expected behaviour? In light of this, 6.4 TB brick sizes would take ~377
hours...
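If heal time really scales about linearly with brick size, the extrapolation in the question is simple proportionality. A sketch; the measured 400 GB heal time below is deliberately hypothetical, since the thread excerpt does not give the raw numbers:

```shell
# Hypothetical: suppose a 400 GB brick healed in 24 hours; linear scaling then
# predicts the time for a 6.4 TB (6400 GB) brick by simple proportion.
hours_400gb=24
est=$(awk -v h="$hours_400gb" 'BEGIN { printf "%d", h * 6400 / 400 }')
echo "estimated heal time for 6.4 TB: $est hours"
```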
2017 Sep 28
1
upgrade to 3.12.1 from 3.10: df returns wrong numbers
...umbers for my
legacy volumes.
Newly created volumes after the upgrade, df works just fine.
I have been researching since Monday and have not found any reference to
this symptom.
"vm-images" is the old legacy volume, "test" is the new one.
[root at st-srv-03 ~]# (df -h|grep bricks;ssh st-srv-02 'df -h|grep
bricks')|sort
/dev/sda1                          7.3T  991G  6.4T  14% /bricks/sda1
/dev/sda1                          7.3T  991G  6.4T  14% /bricks/sda1
/dev/sdb1                          7.3T  557G  6.8T   8% /bricks/sdb1
/dev/sdb1                          7.3T...
2018 Mar 13
4
Can't heal a volume: "Please check if all brick processes are running."
...ster options are not described there, or there is no explanation
> of what they do...
>
>
>
> On 2018-03-12 15:58, Anatoliy Dmytriyev wrote:
>
>> Hello,
>>
>> We have a very fresh gluster 3.10.10 installation.
>> Our volume is created as distributed volume, 9 bricks 96TB in total
>> (87TB after 10% of gluster disk space reservation)
>>
>> For some reason I can't "heal" the volume:
>> # gluster volume heal gv0
>> Launching heal operation to perform index self heal on volume gv0 has
>> been unsuccessful on bricks that are do...
2017 Sep 27
2
after hard reboot, split-brain happened, but nothing showed in gluster volume heal info command!
...be shown in the command "gluster volume heal <volume-name> info"; also, there is no entry in the .glusterfs/indices/xattrop directory. Can you help shed some light on this issue? Thanks!
Following is some info from our env,
Checking from the sn-0 client, nothing is shown in split-brain!
[root at sn-0:/mnt/bricks/services/brick/netserv/ethip]
# gluster v heal services info
Brick sn-0:/mnt/bricks/services/brick/
Number of entries: 0
Brick sn-1:/mnt/bricks/services/brick/
Number of entries: 0
[root at sn-0:/mnt/bricks/services/brick/netserv/ethip]
[root at sn-0:/mnt/bricks/services/brick/netserv/ethip]
# g...
2017 Dec 19
3
How to make sure self-heal backlog is empty ?
...performing a
rolling upgrade before going to the next node), how can I tell, while
reading this, if it's okay to reboot / upgrade my next node in the pool?
Here is what I do for checking :
for i in `gluster volume list`; do gluster volume heal $i info; done
And here is what I get :
Brick ngluster-1.network.hoggins.fr:/export/brick/clem
Status: Connected
Number of entries: 0
Brick ngluster-2.network.hoggins.fr:/export/brick/clem
Status: Connected
Number of entries: 0
Brick ngluster-3.network.hoggins.fr:/export/brick/clem
Status: Connected
Number of...
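The loop above can be turned into a yes/no answer by summing the entry counts; a sketch parsing canned `heal info` output (the sample mirrors the format shown, with hypothetical hostnames):

```shell
# Canned `gluster volume heal VOL info` output; in practice you would capture
# the real command's output instead.
heal_info='Brick ngluster-1.example:/export/brick/clem
Status: Connected
Number of entries: 0
Brick ngluster-2.example:/export/brick/clem
Status: Connected
Number of entries: 0'

# Sum all "Number of entries" lines; zero means the backlog is empty and the
# next node in the pool can be rebooted/upgraded.
pending=$(echo "$heal_info" | awk -F': ' '/^Number of entries:/ {s += $2} END {print s + 0}')
if [ "$pending" -eq 0 ]; then
  echo "backlog empty: safe to proceed"
else
  echo "$pending entries still pending"
fi
```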
2013 Nov 28
1
how to recover an accidentally deleted brick directory?
hi all,
I accidentally removed the brick directory of a volume on one node; the
replica count for this volume is 2.
Now the situation is: there is no corresponding glusterfsd process on
this node, and 'gluster volume status' shows that the brick is offline,
like this:
Brick 192.168.64.11:/opt/gluster_data/eccp_glance N/A Y...
2017 Sep 28
0
after hard reboot, split-brain happened, but nothing showed in gluster volume heal info command!
Hi,
To resolve the gfid split-brain you can follow the steps at [1].
Since we don't have the pending markers set on the files, it is not showing
in the heal info.
To debug this issue, we need some more data from you. Could you provide these
things?
1. volume info
2. mount log
3. brick logs
4. shd log
May I also know which version of gluster you are running? From the info you
have provided, it looks like an old version.
If so, it would be great if you could upgrade to one of the latest
supported releases.
[1]
http://docs.gluster.org/en/latest/Troubleshooting/split-brain/#fi...
2017 Nov 09
0
GlusterFS healing questions
Hi Rolf,
answers follow inline...
On Thu, Nov 9, 2017 at 3:20 PM, Rolf Larsen <rolf at jotta.no> wrote:
> Hi,
>
> We ran a test on GlusterFS 3.12.1 with erasure-coded volumes 8+2 with 10
> bricks (default config, tested with 100 GB, 200 GB, 400 GB brick sizes, 10 Gbit
> NICs)
>
> 1.
> Tests show that healing takes about double the time for 200 GB vs
> 100 GB, and a bit under double for 400 GB vs 200 GB brick sizes. Is this
> expected behaviour? In light of this, 6.4...
2018 Mar 13
0
Can't heal a volume: "Please check if all brick processes are running."
...except doc.gluster.org? As I see it,
many gluster options are not described there, or there is no explanation
of what they do...
On 2018-03-12 15:58, Anatoliy Dmytriyev wrote:
> Hello,
>
> We have a very fresh gluster 3.10.10 installation.
> Our volume is created as distributed volume, 9 bricks 96TB in total
> (87TB after 10% of gluster disk space reservation)
>
> For some reason I can't "heal" the volume:
> # gluster volume heal gv0
> Launching heal operation to perform index self heal on volume gv0 has
> been unsuccessful on bricks that are down. Please check if all...
2017 Nov 09
2
GlusterFS healing questions
...andez <jahernan at redhat.com> wrote:
> Hi Rolf,
>
> answers follow inline...
>
> On Thu, Nov 9, 2017 at 3:20 PM, Rolf Larsen <rolf at jotta.no> wrote:
>>
>> Hi,
>>
>> We ran a test on GlusterFS 3.12.1 with erasure-coded volumes 8+2 with 10
>> bricks (default config, tested with 100 GB, 200 GB, 400 GB brick sizes, 10 Gbit
>> NICs)
>>
>> 1.
>> Tests show that healing takes about double the time for 200 GB vs
>> 100 GB, and a bit under double for 400 GB vs 200 GB brick sizes. Is this
>> expected behaviour? In light...
2018 Apr 25
2
Turn off replication
Looking at the logs, it seems that it is trying to add the brick using the same port that was assigned to gluster01ib:
Any ideas??
Jose
[2018-04-25 22:08:55.169302] I [MSGID: 106482] [glusterd-brick-ops.c:447:__glusterd_handle_add_brick] 0-management: Received add brick req
[2018-04-25 22:08:55.186037] I [run.c:191:runner_log] (-->/usr/lib64/glusterfs/3.8.15/xlator/mgmt/glusterd.so(+0x33045) [0x7f5464b9b045] -->/usr/lib64/glusterfs/3.8.15/xlator/mgmt/glusterd.so(+0xcbd85) [0x7f5464c33d85...
2018 Feb 25
2
Re-adding an existing brick to a volume
Hi!
I am running a replica 3 volume. On server2 I wanted to move the brick
to a new disk.
I removed the brick from the volume:
gluster volume remove-brick VOLUME rep 2
server2:/gluster/VOLUME/brick0/brick force
I unmounted the old brick and mounted the new disk to the same location.
I added the empty new brick to the volume:
gluster volume add-brick VOLUME rep 3 server...
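One pitfall when reusing a mount point like this is stale brick metadata: if the new directory still contains the old brick's `.glusterfs` tree or `trusted.*` xattrs, add-brick can misbehave. A minimal pre-check sketch (the path is a temporary stand-in for the real brick directory); the alternative one-step route is `gluster volume replace-brick`:

```shell
# Stand-in for the new brick directory; in the thread it would be
# server2:/gluster/VOLUME/brick0/brick on the freshly mounted disk.
brick=$(mktemp -d)

# Before add-brick, the directory should be empty: no leftover data and no
# stale .glusterfs directory from the brick's previous life.
if [ -z "$(ls -A "$brick")" ]; then
  echo "brick dir empty: safe to add-brick"
else
  echo "refusing: $brick is not empty"
fi
```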
2017 Sep 28
2
after hard reboot, split-brain happened, but nothing showed in gluster volume heal info command!
...al info command !
Hi,
To resolve the gfid split-brain you can follow the steps at [1].
Since we don't have the pending markers set on the files, it is not showing in the heal info.
To debug this issue, we need some more data from you. Could you provide these things?
1. volume info
2. mount log
3. brick logs
4. shd log
May I also know which version of gluster you are running? From the info you have provided, it looks like an old version.
If so, it would be great if you could upgrade to one of the latest supported releases.
[1] http://docs.gluster.org/en/latest/Troubleshooting/split-brain/#fi...
2018 Apr 30
2
Turn off replication
Hi All
We were able to get all 4 bricks distributed, and we can see the right amount of space. But we have been rebalancing for 4 days now on 16 TB, and it is still only at 8 TB. Is there a way to speed this up? There is also data we can remove to speed it up, but what is the best procedure for removing data: is it from the Gluster main expor...
2018 Apr 27
0
Turn off replication
Hi Jose,
Why are all the bricks visible in volume info if the pre-validation
for add-brick failed? I suspect that the remove-brick wasn't done
properly.
You can provide the cmd_history.log to verify this. It would be better to get
the other log messages too.
Also, I need to know which bricks were actually removed,
the command u...
2023 May 02
1
'error=No space left on device' but, there is plenty of space all nodes
...nodes.
#example log msg from /var/log/glusterfs/home-volbackups.log
[2023-05-01 21:43:15.450502 +0000] W [MSGID: 114031]
[client-rpc-fops_v2.c:670:client4_0_writev_cbk] 0-volbackups-client-18:
remote operation failed. [{errno=28}, {error=No space left on device}]
Each glusterfs node has a single brick, mounts the single distributed
volume locally as a glusterfs client, and receives backup files onto the volume
each weekend.
We distribute the ftp upload load between the servers through a combination
of /etc/hosts entries and AWS weighted dns.
We have 91 TB available on the volume though and ea...
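On a pure distribute volume, ENOSPC with free space elsewhere usually means DHT hashed the new file onto a brick that is itself full, since each file lives wholly on one brick. A sketch of the per-brick check (usage figures are made up); the `cluster.min-free-disk` volume option can also steer new files away from nearly full bricks:

```shell
# Per-brick usage as hostname/percent pairs; in practice, collect these with
# `df` on each node's brick mount.
brick_use='node1 61
node2 58
node3 97'

# Any brick at or above the threshold rejects new files hashed to it with
# ENOSPC, even though the volume-wide total still shows free space.
echo "$brick_use" | awk '$2 >= 95 {print $1, "is", $2"% full: writes hashed here fail"}'
```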