search for: brickstatus

Displaying 7 results from an estimated 7 matches for "brickstatus".

2023 Jun 30
1
remove_me files building up
Hi, We're running a cluster with two data nodes and one arbiter, and have sharding enabled. We had an issue a while back where one of the servers crashed; we got the server back up and running and ensured that all healing entries cleared, and also increased the server spec (CPU/Mem) as this seemed to be the potential cause. Since then, however, we've seen some strange behaviour,
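A minimal sketch of the checks being described here, assuming the volume name gv1 and the node naming that appear in the later replies in this thread:

    gluster peer status            # each node should report "Peer in Cluster (Connected)"
    gluster volume heal gv1 info   # pending heal entries per brick; these should drain to 0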
2023 Jul 03
1
remove_me files building up
...-80ded0903d88 State: Peer in Cluster (Connected) root at uk3-prod-gfs-arb-01:~# gluster volume heal gv1 info Brick uk1-prod-gfs-01:/data/glusterfs/gv1/brick1/brick <gfid:5b57e1f6-3e3d-4334-a0db-b2560adae6d1> Status: Connected Number of entries: 1 Brick uk2-prod-gfs-01:/data/glusterfs/gv1/brick1/brick Status: Connected Number of entries: 0 Brick uk3-prod-gfs-arb-01:/data/glusterfs/gv1/brick1/brick Status: Connected Number of entries: 0 Brick uk1-prod-gfs-01:/data/glusterfs/gv1/brick3/brick Status: Connected Number of entries: 0 Brick uk2-prod-gfs-01:/data/glusterfs/gv1/brick3/brick Status: Connected Number of...
2023 Jul 04
1
remove_me files building up
Hi, Thanks for your response, please find the xfs_info for each brick on the arbiter below: root at uk3-prod-gfs-arb-01:~# xfs_info /data/glusterfs/gv1/brick1 meta-data=/dev/sdc1 isize=512 agcount=31, agsize=131007 blks = sectsz=512 attr=2, projid32bit=1 = crc=1 finobt=1, sparse=1, rmapbt=0 =
2023 Jul 04
1
remove_me files building up
...-80ded0903d88 State: Peer in Cluster (Connected) root at uk3-prod-gfs-arb-01:~# gluster volume heal gv1 info Brick uk1-prod-gfs-01:/data/glusterfs/gv1/brick1/brick <gfid:5b57e1f6-3e3d-4334-a0db-b2560adae6d1> Status: Connected Number of entries: 1 Brick uk2-prod-gfs-01:/data/glusterfs/gv1/brick1/brick Status: Connected Number of entries: 0 Brick uk3-prod-gfs-arb-01:/data/glusterfs/gv1/brick1/brick Status: Connected Number of entries: 0 Brick uk1-prod-gfs-01:/data/glusterfs/gv1/brick3/brick Status: Connected Number of entries: 0 Brick uk2-prod-gfs-01:/data/glusterfs/gv1/brick3/brick Status: Connected Number of...
2023 Jul 04
1
remove_me files building up
Hi Strahil, We're using gluster to act as a share for an application to temporarily process and store files, before they're then archived off overnight. The issue we're seeing isn't with the inodes running out of space, but with the actual disk space on the arb server running low. This is the df -h output for the bricks on the arb server: /dev/sdd1 15G 12G 3.3G 79%
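A quick way to separate the two causes, as a sketch reusing the arbiter brick mount shown in this thread's df/du output (paths assumed from those messages):

    df -h  /data/glusterfs/gv1/brick1   # block (disk space) usage on the brick filesystem
    df -hi /data/glusterfs/gv1/brick1   # inode usage on the same filesystem, for comparison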
2023 Jul 04
1
remove_me files building up
...-80ded0903d88 State: Peer in Cluster (Connected) root at uk3-prod-gfs-arb-01:~# gluster volume heal gv1 info Brick uk1-prod-gfs-01:/data/glusterfs/gv1/brick1/brick <gfid:5b57e1f6-3e3d-4334-a0db-b2560adae6d1> Status: Connected Number of entries: 1 Brick uk2-prod-gfs-01:/data/glusterfs/gv1/brick1/brick Status: Connected Number of entries: 0 Brick uk3-prod-gfs-arb-01:/data/glusterfs/gv1/brick1/brick Status: Connected Number of entries: 0 Brick uk1-prod-gfs-01:/data/glusterfs/gv1/brick3/brick Status: Connected Number of entries: 0 Brick uk2-prod-gfs-01:/data/glusterfs/gv1/brick3/brick Status: Connected Number of...
2023 Jul 05
1
remove_me files building up
Hi Strahil, This is the output from the commands: root at uk3-prod-gfs-arb-01:~# du -h -x -d 1 /data/glusterfs/gv1/brick1/brick 2.2G /data/glusterfs/gv1/brick1/brick/.glusterfs 24M /data/glusterfs/gv1/brick1/brick/scalelite-recordings 16K /data/glusterfs/gv1/brick1/brick/mytute 18M /data/glusterfs/gv1/brick1/brick/.shard 0
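To see where that space under the brick actually sits, the same du invocation can be pointed one level deeper; a sketch reusing the paths above, on the assumption (suggested by the thread subject) that the entries of interest live under the shard translator's .shard directory:

    du -h -x -d 1 /data/glusterfs/gv1/brick1/brick/.glusterfs   # internal gfid/handle tree (2.2G in the output above)
    du -h -x -d 1 /data/glusterfs/gv1/brick1/brick/.shard       # shard data, including any deferred-delete entries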