Displaying 20 results from an estimated 741 matches for "brick1".
2013 Mar 06
0
where is the free space?
...tually LXC VM) with no lock.
After unmounting, the space freed up correctly, though it took a long time.
If I write directly to the fs, this behaviour is different.
This is the zfs volume:
NAME                    PROPERTY   VALUE                  SOURCE
tank/lxc/tipper/brick1  type       filesystem             -
tank/lxc/tipper/brick1  creation   Fri Feb 15 14:26 2013  -
tank/lxc/tipper/brick1  used       16.4G                  -
tank/lxc/tipper/brick1  available  1.98T...
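When deleted space only comes back after an unmount like this, the blocks are usually still referenced, either by a snapshot or by deleted files that a process (here, the LXC container) still holds open. A quick check, assuming the dataset above is mounted at its default path /tank/lxc/tipper/brick1:
# zfs list -o name,used,avail,refer,usedbysnapshots tank/lxc/tipper/brick1
# lsof +L1 /tank/lxc/tipper/brick1    # deleted-but-still-open files holding space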
2018 Mar 12
2
Can't heal a volume: "Please check if all brick processes are running."
...nning.
Which processes should be running on every brick for the heal operation?
# gluster volume status
Status of volume: gv0
Gluster process                       TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick cn01-ib:/gfs/gv0/brick1/brick   0         49152      Y       70850
Brick cn02-ib:/gfs/gv0/brick1/brick   0         49152      Y       102951
Brick cn03-ib:/gfs/gv0/brick1/brick   0         49152      Y       57535
Brick cn04-ib:/gfs/gv0/brick1/brick   0         49152      Y       56676
Brick cn...
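The error in the thread subject normally means at least one brick process or the self-heal daemon is down. A minimal sketch of the usual recovery steps, using the volume name from the post (gv0):
# gluster volume status gv0          # look for Online = N or a missing Self-heal Daemon entry
# gluster volume start gv0 force     # starts only the brick/shd processes that are not running
# gluster volume heal gv0            # retry the heal once everything shows online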
2018 Mar 13
4
Can't heal a volume: "Please check if all brick processes are running."
...n?
>>
>> # gluster volume status
>> Status of volume: gv0
>> Gluster process                       TCP Port  RDMA Port  Online  Pid
>> ------------------------------------------------------------------------------
>> Brick cn01-ib:/gfs/gv0/brick1/brick   0         49152      Y       70850
>> Brick cn02-ib:/gfs/gv0/brick1/brick   0         49152      Y       102951
>> Brick cn03-ib:/gfs/gv0/brick1/brick   0         49152      Y       57535
>> Brick cn04-ib:/gfs/gv0/brick1/brick   0...
2018 May 22
1
split brain? but where?
I tried looking for a file of the same size and the gfid doesn't show up,
8><---
[root at glusterp2 fb]# pwd
/bricks/brick1/gv0/.glusterfs/ea/fb
[root at glusterp2 fb]# ls -al
total 3130892
drwx------. 2 root root 64 May 22 13:01 .
drwx------. 4 root root 24 May 8 14:27 ..
-rw-------. 1 root root 3294887936 May 4 11:07 eafb8799-4e7a-4264-9213-26997c5a4693
-rw-r--r--. 1 root root 1396 May 22 13:01...
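The entry under .glusterfs on a brick is a hard link to the real file, so a gfid reported in split-brain can be mapped back to a path and its replication metadata compared across replicas. A sketch using the brick path and gfid shown above:
# find /bricks/brick1/gv0 -samefile /bricks/brick1/gv0/.glusterfs/ea/fb/eafb8799-4e7a-4264-9213-26997c5a4693
# getfattr -d -m . -e hex /bricks/brick1/gv0/.glusterfs/ea/fb/eafb8799-4e7a-4264-9213-26997c5a4693    # afr changelog xattrs; they disagree between bricks in a split-brain
If find returns only the .glusterfs entry itself, the named path no longer exists on that brick and only the gfid link remains.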
2018 May 22
2
split brain? but where?
.../centos-data1 112G 33M 112G 1% /data1
> > /dev/mapper/centos-var_lib 9.4G 178M 9.2G 2% /var/lib
> > /dev/mapper/vg--gluster--prod--1--2-gluster--prod--1--2 932G 264G 668G 29% /bricks/brick1
> > /dev/sda1 950M 235M 715M 25% /boot
> > tmpfs 771M 12K 771M 1% /run/user/42
> > glusterp2:gv0/glusterp2/images 932G 273G...
2018 Feb 27
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
...shows a bad total size.
My configuration for one volume: volumedisk1
[root at stor1 ~]# gluster volume status volumedisk1 detail
Status of volume: volumedisk1
------------------------------------------------------------------------------
Brick : Brick stor1data:/mnt/glusterfs/vol1/brick1
TCP Port : 49153
RDMA Port : 0
Online : Y
Pid : 13579
File System : xfs
Device : /dev/sdc1
Mount Options : rw,noatime
Inode Size : 512
Disk Space Free : 35.0TB
Total Disk Space : 49.1TB
Inode Coun...
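For a distributed volume, the size each brick reports is divided by the shared-brick-count value written into its volfile (meant to compensate for several bricks sharing one filesystem), so a wrong count skews the total that df shows on clients. A way to cross-check, using the paths and volume name from the post:
# df -h /mnt/glusterfs/vol1          # real size of the brick filesystem on each server
# gluster volume status volumedisk1 detail | grep -E 'Brick|Total Disk Space'
# grep -n shared-brick-count /var/lib/glusterd/vols/volumedisk1/*.vol    # expect 1 when every brick has its own filesystem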
2018 Mar 13
0
Can't heal a volume: "Please check if all brick processes are running."
...un on every brick for heal operation?
>
> # gluster volume status
> Status of volume: gv0
> Gluster process                       TCP Port  RDMA Port  Online  Pid
> ------------------------------------------------------------------------------
> Brick cn01-ib:/gfs/gv0/brick1/brick   0         49152      Y       70850
> Brick cn02-ib:/gfs/gv0/brick1/brick   0         49152      Y       102951
> Brick cn03-ib:/gfs/gv0/brick1/brick   0         49152      Y       57535
> Brick cn04-ib:/gfs/gv0/brick1/brick   0         4915...
2018 May 21
2
split brain? but where?
...47G 38M 47G 1% /home
/dev/mapper/centos-data1 112G 33M 112G 1% /data1
/dev/mapper/centos-var_lib 9.4G 178M 9.2G 2% /var/lib
/dev/mapper/vg--gluster--prod--1--2-gluster--prod--1--2 932G 264G 668G 29% /bricks/brick1
/dev/sda1 950M 235M 715M 25% /boot
tmpfs 771M 12K 771M 1% /run/user/42
glusterp2:gv0/glusterp2/images 932G 273G 659G 30% /var/lib/libvirt/images
gluste...
2018 Feb 28
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
.../sdc1 50T 15T 35T 30% /mnt/disk_c/glusterfs/vol1
/dev/sdd1 50T 15T 35T 30% /mnt/disk_d/glusterfs/vol1
[root at stor1 ~]# grep -n "share" /var/lib/glusterd/vols/volumedisk1/*
/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor1data.mnt-glusterfs-vol1-brick1.vol:3: option shared-brick-count 1
/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor1data.mnt-glusterfs-vol1-brick1.vol.rpmsave:3: option shared-brick-count 1
/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor2data.mnt-glusterfs-vol1-brick1.vol:3: option shared-brick-count 1
/var/lib/gl...
2018 May 22
0
split brain? but where?
I tried this already.
8><---
[root at glusterp2 fb]# find /bricks/brick1/gv0 -samefile /bricks/brick1/gv0/.glusterfs/ea/fb/eafb8799-4e7a-4264-9213-26997c5a4693
/bricks/brick1/gv0/.glusterfs/ea/fb/eafb8799-4e7a-4264-9213-26997c5a4693
[root at glusterp2 fb]#
8><---
Gluster 4
CentOS 7.4
8><---
df -h
[root at glusterp2 fb]# df -h
Filesystem...
2018 Mar 13
1
trashcan on dist. repl. volume with geo-replication
...OL   MASTER BRICK    SLAVE USER   SLAVE                 SLAVE NODE     STATUS    CRAWL STATUS   LAST_SYNCED
----------------------------------------------------------------------------------------------------------------------------------------------------
gl-node1-int   mvol1   /brick1/mvol1   root   gl-node5-int::mvol1   N/A            Faulty    N/A   N/A
gl-node3-int   mvol1   /brick1/mvol1   root   gl-node5-int::mvol1   gl-node7-int   Passive   N/A   N/A
gl-node2-int   mvol1   /brick1/mvol1   root   gl-node5-int::mvol1...
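A Faulty geo-replication session like the one above is usually chased through the session's status detail and logs; if the trashcan feature is the suspect (the thread subject), its state can be checked and toggled per volume. A sketch with the master and slave names from the post:
# gluster volume geo-replication mvol1 gl-node5-int::mvol1 status detail
# gluster volume get mvol1 features.trash            # likewise features.trash-internal-op
# gluster volume set mvol1 features.trash off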
2018 Feb 28
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
...nfiguration for one volume: volumedisk1
> [root at stor1 ~]# gluster volume status volumedisk1 detail
>
> Status of volume: volumedisk1
> ------------------------------------------------------------------------------
> Brick : Brick stor1data:/mnt/glusterfs/vol1/brick1
> TCP Port : 49153
> RDMA Port : 0
> Online : Y
> Pid : 13579
> File System : xfs
> Device : /dev/sdc1
> Mount Options : rw,noatime
> Inode Size : 512
> Disk Space Free : 35...
2018 Mar 14
2
Can't heal a volume: "Please check if all brick processes are running."
...luster volume status
>>> Status of volume: gv0
>>> Gluster process                       TCP Port  RDMA Port  Online  Pid
>>> ------------------------------------------------------------------------------
>>> Brick cn01-ib:/gfs/gv0/brick1/brick   0         49152      Y       70850
>>> Brick cn02-ib:/gfs/gv0/brick1/brick   0         49152      Y       102951
>>> Brick cn03-ib:/gfs/gv0/brick1/brick   0         49152      Y       57535
>>> Brick cn04-ib:/gfs/gv0/brick...
2018 Feb 28
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
...35T 30% /mnt/disk_c/glusterfs/vol1
> /dev/sdd1 50T 15T 35T 30% /mnt/disk_d/glusterfs/vol1
>
>
> [root at stor1 ~]# grep -n "share" /var/lib/glusterd/vols/volumedisk1/*
> /var/lib/glusterd/vols/volumedisk1/volumedisk1.stor1data.mnt-glusterfs-vol1-brick1.vol:3: option shared-brick-count 1
> /var/lib/glusterd/vols/volumedisk1/volumedisk1.stor1data.mnt-glusterfs-vol1-brick1.vol.rpmsave:3: option shared-brick-count 1
> /var/lib/glusterd/vols/volumedisk1/volumedisk1.stor2data.mnt-glusterfs-vol1-brick1.vol:3: option share...
2018 May 21
0
split brain? but where?
.../home
> /dev/mapper/centos-data1 112G 33M 112G 1% /data1
> /dev/mapper/centos-var_lib 9.4G 178M 9.2G 2% /var/lib
> /dev/mapper/vg--gluster--prod--1--2-gluster--prod--1--2 932G 264G 668G 29% /bricks/brick1
> /dev/sda1 950M 235M 715M 25% /boot
> tmpfs 771M 12K 771M 1% /run/user/42
> glusterp2:gv0/glusterp2/images 932G 273G 659G 30% /var/lib/l...
2018 Mar 13
0
Can't heal a volume: "Please check if all brick processes are running."
...luster volume status
>>> Status of volume: gv0
>>> Gluster process                       TCP Port  RDMA Port  Online  Pid
>>> ------------------------------------------------------------------------------
>>> Brick cn01-ib:/gfs/gv0/brick1/brick   0         49152      Y       70850
>>> Brick cn02-ib:/gfs/gv0/brick1/brick   0         49152      Y       102951
>>> Brick cn03-ib:/gfs/gv0/brick1/brick   0         49152      Y       57535
>>> Brick cn04-ib:/gfs/gv0/brick...
2018 Feb 28
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
...----------------------------
Brick stor1data:/mnt/glusterfs/vol0/brick1            49152  0    Y  13533
Brick stor2data:/mnt/glusterfs/vol0/brick1            49152  0    Y  13302
Brick stor3data:/mnt/disk_b1/glusterfs/vol0/brick1    49152  0    Y  17371
Brick stor3data:/mnt/disk_b2/glusterfs/vol0/brick1    49153  0    Y  17391
NFS Server on localhost                               N/A    N/A  N  N/A
NFS Server on stor3data                               N/A    N/A...
2018 Mar 14
0
Can't heal a volume: "Please check if all brick processes are running."
...d be run on every brick for heal operation?
>
> # gluster volume status
> Status of volume: gv0
> Gluster process TCP Port RDMA Port Online Pid
> ------------------------------------------------------------------------------
> Brick cn01-ib:/gfs/gv0/brick1/brick 0 49152 Y 70850
> Brick cn02-ib:/gfs/gv0/brick1/brick 0 49152 Y 102951
> Brick cn03-ib:/gfs/gv0/brick1/brick 0 49152 Y 57535
> Brick cn04-ib:/gfs/gv0/brick1/brick 0 49152 Y 566...
2018 Mar 01
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
...stor1data:/mnt/glusterfs/vol0/brick1               49152  0    Y  13533
> Brick stor2data:/mnt/glusterfs/vol0/brick1          49152  0    Y  13302
> Brick stor3data:/mnt/disk_b1/glusterfs/vol0/brick1  49152  0    Y  17371
> Brick stor3data:/mnt/disk_b2/glusterfs/vol0/brick1  49153  0    Y  17391
> NFS Server on localhost                             N/A    N/A  N  N/A
> NFS Server on stor3...
2018 May 10
0
broken gluster config
Trying to read this, I can't understand what is wrong.
[root at glusterp1 gv0]# gluster volume heal gv0 info
Brick glusterp1:/bricks/brick1/gv0
<gfid:eafb8799-4e7a-4264-9213-26997c5a4693> - Is in split-brain
Status: Connected
Number of entries: 1
Brick glusterp2:/bricks/brick1/gv0
<gfid:eafb8799-4e7a-4264-9213-26997c5a4693> - Is in split-brain
Status: Connected
Number of entries: 1
Brick glusterp3:/bricks/brick1/gv0
<...
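Once the offending gfid is known, a split-brain like this can often be resolved from the CLI by choosing a policy or a source brick; a sketch with the volume, brick, and gfid reported above (pick one, and only after confirming which copy is good):
# gluster volume heal gv0 split-brain latest-mtime gfid:eafb8799-4e7a-4264-9213-26997c5a4693
# gluster volume heal gv0 split-brain source-brick glusterp1:/bricks/brick1/gv0 gfid:eafb8799-4e7a-4264-9213-26997c5a4693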