Jose V. Carrión
2018-Feb-27 21:33 UTC
[Gluster-users] df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi,
A few days ago my glusterfs configuration was working fine. Today I
noticed that the total size reported by the df command has changed and is
smaller than the aggregated capacity of all the bricks in the volume.
I checked that all the volumes are in a healthy state, all the glusterd
daemons are running and there are no errors in the logs; however, df
reports a wrong total size.
My configuration for one volume: volumedisk1
[root@stor1 ~]# gluster volume status volumedisk1 detail
Status of volume: volumedisk1
------------------------------------------------------------------------------
Brick : Brick stor1data:/mnt/glusterfs/vol1/brick1
TCP Port : 49153
RDMA Port : 0
Online : Y
Pid : 13579
File System : xfs
Device : /dev/sdc1
Mount Options : rw,noatime
Inode Size : 512
Disk Space Free : 35.0TB
Total Disk Space : 49.1TB
Inode Count : 5273970048
Free Inodes : 5273123069
------------------------------------------------------------------------------
Brick : Brick stor2data:/mnt/glusterfs/vol1/brick1
TCP Port : 49153
RDMA Port : 0
Online : Y
Pid : 13344
File System : xfs
Device : /dev/sdc1
Mount Options : rw,noatime
Inode Size : 512
Disk Space Free : 35.0TB
Total Disk Space : 49.1TB
Inode Count : 5273970048
Free Inodes : 5273124718
------------------------------------------------------------------------------
Brick : Brick stor3data:/mnt/disk_c/glusterfs/vol1/brick1
TCP Port : 49154
RDMA Port : 0
Online : Y
Pid : 17439
File System : xfs
Device : /dev/sdc1
Mount Options : rw,noatime
Inode Size : 512
Disk Space Free : 35.7TB
Total Disk Space : 49.1TB
Inode Count : 5273970048
Free Inodes : 5273125437
------------------------------------------------------------------------------
Brick : Brick stor3data:/mnt/disk_d/glusterfs/vol1/brick1
TCP Port : 49155
RDMA Port : 0
Online : Y
Pid : 17459
File System : xfs
Device : /dev/sdd1
Mount Options : rw,noatime
Inode Size : 512
Disk Space Free : 35.6TB
Total Disk Space : 49.1TB
Inode Count : 5273970048
Free Inodes : 5273127036
So the full size for volumedisk1 should be 49.1TB + 49.1TB + 49.1TB +
49.1TB = 196.4 TB, but df shows:
[root@stor1 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 48G 21G 25G 46% /
tmpfs 32G 80K 32G 1% /dev/shm
/dev/sda1 190M 62M 119M 35% /boot
/dev/sda4 395G 251G 124G 68% /data
/dev/sdb1 26T 601G 25T 3% /mnt/glusterfs/vol0
/dev/sdc1 50T 15T 36T 29% /mnt/glusterfs/vol1
stor1data:/volumedisk0
76T 1,6T 74T 3% /volumedisk0
stor1data:/volumedisk1
148T 42T 106T 29% /volumedisk1
That is exactly one brick short: 196.4 TB - 49.1 TB = 147.3 TB, roughly the 148T that df reports.
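Spelled out, just to make the rounding explicit (illustrative):
echo '49.1 * 4' | bc    # 196.4 TB -> expected aggregate of the four bricks
echo '49.1 * 3' | bc    # 147.3 TB -> the sum with one brick dropped, which df shows as roughly 148T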
It's a production system so I hope you can help me.
Thanks in advance.
Jose V.
Below is some more data about my configuration:
[root@stor1 ~]# gluster volume info
Volume Name: volumedisk0
Type: Distribute
Volume ID: 0ee52d94-1131-4061-bcef-bd8cf898da10
Status: Started
Snapshot Count: 0
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: stor1data:/mnt/glusterfs/vol0/brick1
Brick2: stor2data:/mnt/glusterfs/vol0/brick1
Brick3: stor3data:/mnt/disk_b1/glusterfs/vol0/brick1
Brick4: stor3data:/mnt/disk_b2/glusterfs/vol0/brick1
Options Reconfigured:
performance.cache-size: 4GB
cluster.min-free-disk: 1%
performance.io-thread-count: 16
performance.readdir-ahead: on
Volume Name: volumedisk1
Type: Distribute
Volume ID: 591b7098-800e-4954-82a9-6b6d81c9e0a2
Status: Started
Snapshot Count: 0
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: stor1data:/mnt/glusterfs/vol1/brick1
Brick2: stor2data:/mnt/glusterfs/vol1/brick1
Brick3: stor3data:/mnt/disk_c/glusterfs/vol1/brick1
Brick4: stor3data:/mnt/disk_d/glusterfs/vol1/brick1
Options Reconfigured:
cluster.min-free-inodes: 6%
performance.cache-size: 4GB
cluster.min-free-disk: 1%
performance.io-thread-count: 16
performance.readdir-ahead: on
[root@stor1 ~]# grep -n "share" /var/lib/glusterd/vols/volumedisk1/*
/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor1data.mnt-glusterfs-vol1-brick1.vol:3:    option shared-brick-count 1
/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor1data.mnt-glusterfs-vol1-brick1.vol.rpmsave:3:    option shared-brick-count 1
/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor2data.mnt-glusterfs-vol1-brick1.vol:3:    option shared-brick-count 0
/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor2data.mnt-glusterfs-vol1-brick1.vol.rpmsave:3:    option shared-brick-count 0
/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_c-glusterfs-vol1-brick1.vol:3:    option shared-brick-count 0
/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_c-glusterfs-vol1-brick1.vol.rpmsave:3:    option shared-brick-count 0
/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_d-glusterfs-vol1-brick1.vol:3:    option shared-brick-count 0
/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_d-glusterfs-vol1-brick1.vol.rpmsave:3:    option shared-brick-count 0
Nithya Balachandran
2018-Feb-28 04:07 UTC
[Gluster-users] df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Jose,
There is a known issue with gluster 3.12.x builds (see [1]) so you may be
running into this.
The "shared-brick-count" values seem fine on stor1. Please send us
"grep -n "share" /var/lib/glusterd/vols/volumedisk1/*" results for the
other nodes so we can check if they are the cause.
Regards,
Nithya
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1517260
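For reference, one way to gather that output from all three nodes in one
go could look like this (an illustrative sketch; it assumes passwordless
root ssh to the stor*data hosts):
# run from any machine that can reach the storage nodes over ssh as root
for host in stor1data stor2data stor3data; do
    echo "=== $host ==="
    ssh "$host" 'grep -n "share" /var/lib/glusterd/vols/volumedisk1/*'
done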
Jose V. Carrión
2018-Feb-28 12:58 UTC
[Gluster-users] df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Nithya,
I applied the workaround for this bug and now df shows the right size:
[root@stor1 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sdb1 26T 1,1T 25T 4% /mnt/glusterfs/vol0
/dev/sdc1 50T 16T 34T 33% /mnt/glusterfs/vol1
stor1data:/volumedisk0
101T 3,3T 97T 4% /volumedisk0
stor1data:/volumedisk1
197T 61T 136T 31% /volumedisk1
[root@stor2 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sdb1 26T 1,1T 25T 4% /mnt/glusterfs/vol0
/dev/sdc1 50T 16T 34T 33% /mnt/glusterfs/vol1
stor2data:/volumedisk0
101T 3,3T 97T 4% /volumedisk0
stor2data:/volumedisk1
197T 61T 136T 31% /volumedisk1
[root@stor3 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sdb1 25T 638G 24T 3% /mnt/disk_b1/glusterfs/vol0
/dev/sdb2 25T 654G 24T 3% /mnt/disk_b2/glusterfs/vol0
/dev/sdc1 50T 15T 35T 30% /mnt/disk_c/glusterfs/vol1
/dev/sdd1 50T 15T 35T 30% /mnt/disk_d/glusterfs/vol1
stor3data:/volumedisk0
101T 3,3T 97T 4% /volumedisk0
stor3data:/volumedisk1
197T 61T 136T 31% /volumedisk1
However, I'm concerned because, as you can see, volumedisk0 on stor3data
is composed of 2 bricks on the same disk but on different partitions
(/dev/sdb1 and /dev/sdb2).
After applying the workaround, the shared-brick-count parameter was set
to 1 for all the bricks on all the servers (see below). Could this be an
issue?
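One way to sanity-check that (my reading is that shared-brick-count should
reflect bricks sharing the same filesystem, not the same physical disk) is
to confirm on stor3data that the two vol0 bricks really sit on separate
filesystems, for example:
# on stor3data: a different fsid per brick path would be consistent with shared-brick-count 1 each
stat -f -c '%n  fsid=%i  total-blocks=%b' /mnt/disk_b1/glusterfs/vol0/brick1 /mnt/disk_b2/glusterfs/vol0/brick1
df -h /mnt/disk_b1/glusterfs/vol0/brick1 /mnt/disk_b2/glusterfs/vol0/brick1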
Also, I can see that stor3data is now unbalanced with respect to stor1data
and stor2data. The three nodes have bricks of the same size, but the
stor3data bricks have used about 1TB less than those on stor1data and
stor2data:
stor1data bricks:
/dev/sdb1 26T 1,1T 25T 4% /mnt/glusterfs/vol0
/dev/sdc1 50T 16T 34T 33% /mnt/glusterfs/vol1
stor2data bricks:
/dev/sdb1 26T 1,1T 25T 4% /mnt/glusterfs/vol0
/dev/sdc1 50T 16T 34T 33% /mnt/glusterfs/vol1
stor3data bricks:
/dev/sdb1 25T 638G 24T 3% /mnt/disk_b1/glusterfs/vol0
/dev/sdb2 25T 654G 24T 3% /mnt/disk_b2/glusterfs/vol0
/dev/sdc1 50T 15T 35T 30% /mnt/disk_c/glusterfs/vol1
/dev/sdd1 50T 15T 35T 30% /mnt/disk_d/glusterfs/vol1
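A quick way to compare how full each brick is as gluster itself sees it
(rather than running df on every node) could be something like:
# per-brick totals and free space straight from gluster, for both volumes
for v in volumedisk0 volumedisk1; do
    gluster volume status "$v" detail | grep -E 'Brick |Total Disk Space|Disk Space Free'
done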
[root@stor1 ~]# grep -n "share" /var/lib/glusterd/vols/volumedisk1/*
/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor1data.mnt-glusterfs-vol1-brick1.vol:3:    option shared-brick-count 1
/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor1data.mnt-glusterfs-vol1-brick1.vol.rpmsave:3:    option shared-brick-count 1
/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor2data.mnt-glusterfs-vol1-brick1.vol:3:    option shared-brick-count 1
/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor2data.mnt-glusterfs-vol1-brick1.vol.rpmsave:3:    option shared-brick-count 0
/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_c-glusterfs-vol1-brick1.vol:3:    option shared-brick-count 1
/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_c-glusterfs-vol1-brick1.vol.rpmsave:3:    option shared-brick-count 0
/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_d-glusterfs-vol1-brick1.vol:3:    option shared-brick-count 1
/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_d-glusterfs-vol1-brick1.vol.rpmsave:3:    option shared-brick-count 0
[root@stor2 ~]# grep -n "share" /var/lib/glusterd/vols/volumedisk1/*
/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor1data.mnt-glusterfs-vol1-brick1.vol:3:    option shared-brick-count 1
/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor1data.mnt-glusterfs-vol1-brick1.vol.rpmsave:3:    option shared-brick-count 0
/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor2data.mnt-glusterfs-vol1-brick1.vol:3:    option shared-brick-count 1
/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor2data.mnt-glusterfs-vol1-brick1.vol.rpmsave:3:    option shared-brick-count 1
/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_c-glusterfs-vol1-brick1.vol:3:    option shared-brick-count 1
/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_c-glusterfs-vol1-brick1.vol.rpmsave:3:    option shared-brick-count 0
/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_d-glusterfs-vol1-brick1.vol:3:    option shared-brick-count 1
/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_d-glusterfs-vol1-brick1.vol.rpmsave:3:    option shared-brick-count 0
[root@stor3 ~]# grep -n "share" /var/lib/glusterd/vols/volumedisk1/*
/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor1data.mnt-glusterfs-vol1-brick1.vol:3:    option shared-brick-count 1
/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor1data.mnt-glusterfs-vol1-brick1.vol.rpmsave:3:    option shared-brick-count 1
/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor2data.mnt-glusterfs-vol1-brick1.vol:3:    option shared-brick-count 1
/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor2data.mnt-glusterfs-vol1-brick1.vol.rpmsave:3:    option shared-brick-count 0
/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_c-glusterfs-vol1-brick1.vol:3:    option shared-brick-count 1
/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_c-glusterfs-vol1-brick1.vol.rpmsave:3:    option shared-brick-count 0
/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_d-glusterfs-vol1-brick1.vol:3:    option shared-brick-count 1
/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_d-glusterfs-vol1-brick1.vol.rpmsave:3:    option shared-brick-count 0
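About the *.vol.rpmsave entries that still show 0: as far as I know those
are just backup copies left behind by an rpm upgrade and glusterd does not
read them, so when checking the live configuration it may be clearer to
restrict the grep to the active volfiles, e.g.:
# only the volfiles glusterd actually uses (the .rpmsave files are package-upgrade backups)
grep -n "shared-brick-count" /var/lib/glusterd/vols/volumedisk1/*.vol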
Thanks for your help,
Greetings.
Jose V.
2018-02-28 5:07 GMT+01:00 Nithya Balachandran <nbalacha at redhat.com>:
> Hi Jose,
>
> There is a known issue with gluster 3.12.x builds (see [1]) so you may be
> running into this.
>
> The "shared-brick-count" values seem fine on stor1. Please send us
> "grep -n "share" /var/lib/glusterd/vols/volumedisk1/*" results for the
> other nodes so we can check if they are the cause.
>
> Regards,
> Nithya
>
> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1517260