Pranith Kumar Karampuri
2016-Jan-12 03:42 UTC
[Gluster-users] different free disk space size on distributed replicated
On 01/08/2016 06:30 PM, Patrick Kaiser wrote:
> Hi,
>
> I am running a distributed replicated GlusterFS setup with 4 nodes.
> Currently I have no problems, but I was wondering why, when I run
> gluster volume status, I see different free disk space on every node.
> I would expect the same free and used size on gluster00 and gluster01,
> and also on gluster02 and gluster03 (as they are the replica pairs).

It doesn't look right to me either. Do you have any self-heals that need to
happen on the first replica subvolume? "gluster volume heal <volname> info"

Pranith

> root at gluster0:~# gluster volume status GV01 detail
> Status of volume: GV01
> ------------------------------------------------------------------------------
> Brick : Brick gluster00.storage.domain:/brick/gv01
> Port : 49163
> Online : Y
> Pid : 3631
> File System : xfs
> Device : /dev/mapper/vg--gluster0-DATA
> Mount Options : rw,relatime,attr2,delaylog,noquota
> Inode Size : 256
> Disk Space Free : 5.7TB
> Total Disk Space : 13.6TB
> Inode Count : 2923388928
> Free Inodes : 2922850330
> ------------------------------------------------------------------------------
> Brick : Brick gluster01.storage.domain:/brick/gv01
> Port : 49163
> Online : Y
> Pid : 2976
> File System : xfs
> Device : /dev/mapper/vg--gluster1-DATA
> Mount Options : rw,relatime,attr2,delaylog,noquota
> Inode Size : 256
> Disk Space Free : 4.4TB
> Total Disk Space : 13.6TB
> Inode Count : 2923388928
> Free Inodes : 2922826116
> ------------------------------------------------------------------------------
> Brick : Brick gluster02.storage.domain:/brick/gv01
> Port : 49163
> Online : Y
> Pid : 3051
> File System : xfs
> Device : /dev/mapper/vg--gluster2-DATA
> Mount Options : rw,relatime,attr2,delaylog,noquota
> Inode Size : 256
> Disk Space Free : 6.4TB
> Total Disk Space : 13.6TB
> Inode Count : 2923388928
> Free Inodes : 2922851020
> ------------------------------------------------------------------------------
> Brick : Brick gluster03.storage.domain:/brick/gv01
> Port : N/A
> Online : N
> Pid : 29822
> File System : xfs
> Device : /dev/mapper/vg--gluster3-DATA
> Mount Options : rw,relatime,attr2,delaylog,noquota
> Inode Size : 256
> Disk Space Free : 6.2TB
> Total Disk Space : 13.6TB
> Inode Count : 2923388928
> Free Inodes : 2922847631
>
> Friendly regards,
> Patrick
Patrick Kaiser
2016-Jan-12 08:18 UTC
[Gluster-users] different free disk space size on distributed replicated
Hi,
thanks for your feedback. I've figured out that a brick was no longer
working. Only after restarting the whole server with the failed brick did
the volume report identical sizes again.
Thanks
Kind regards
Patrick Kaiser
VNC - Virtual Network Consult GmbH
From: "Pranith Kumar Karampuri" <pkarampu at redhat.com>
To: "Patrick Kaiser" <patrick.kaiser at vnc.biz>, gluster-users
at gluster.org
Sent: Tuesday, January 12, 2016 4:42:22 AM
Subject: Re: [Gluster-users] different free disk space size on distributed
replicated
On 01/08/2016 06:30 PM, Patrick Kaiser wrote:
Hi,
I am running a distributed replicated GlusterFS setup with 4 nodes.
Currently I have no problems, but I was wondering why, when I run
gluster volume status, I see different free disk space on every node.
I would expect the same free and used size on gluster00 and gluster01,
and also on gluster02 and gluster03 (as they are the replica pairs).
It doesn't look right to me either. Do you have any self-heals that need to
happen on the first replica subvolume? "gluster volume heal <volname> info"
Pranith
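A minimal version of the check Pranith suggests, assuming the volume name
GV01 from the output below:

  # list files/entries that still need healing, per brick
  gluster volume heal GV01 info

  # if entries are listed, trigger healing of the pending files
  gluster volume heal GV01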
root at gluster0:~# gluster volume status GV01 detail
Status of volume: GV01
------------------------------------------------------------------------------
Brick : Brick gluster00.storage.domain:/brick/gv01
Port : 49163
Online : Y
Pid : 3631
File System : xfs
Device : /dev/mapper/vg--gluster0-DATA
Mount Options : rw,relatime,attr2,delaylog,noquota
Inode Size : 256
Disk Space Free : 5.7TB
Total Disk Space : 13.6TB
Inode Count : 2923388928
Free Inodes : 2922850330
------------------------------------------------------------------------------
Brick : Brick gluster01.storage.domain:/brick/gv01
Port : 49163
Online : Y
Pid : 2976
File System : xfs
Device : /dev/mapper/vg--gluster1-DATA
Mount Options : rw,relatime,attr2,delaylog,noquota
Inode Size : 256
Disk Space Free : 4.4TB
Total Disk Space : 13.6TB
Inode Count : 2923388928
Free Inodes : 2922826116
------------------------------------------------------------------------------
Brick : Brick gluster02.storage.domain:/brick/gv01
Port : 49163
Online : Y
Pid : 3051
File System : xfs
Device : /dev/mapper/vg--gluster2-DATA
Mount Options : rw,relatime,attr2,delaylog,noquota
Inode Size : 256
Disk Space Free : 6.4TB
Total Disk Space : 13.6TB
Inode Count : 2923388928
Free Inodes : 2922851020
------------------------------------------------------------------------------
Brick : Brick gluster03.storage.domain:/brick/gv01
Port : N/A
Online : N
Pid : 29822
File System : xfs
Device : /dev/mapper/vg--gluster3-DATA
Mount Options : rw,relatime,attr2,delaylog,noquota
Inode Size : 256
Disk Space Free : 6.2TB
Total Disk Space : 13.6TB
Inode Count : 2923388928
Free Inodes : 2922847631
Friendly regards,
Patrick
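Worth noting in the status output above: the gluster03 brick reports Online: N
and Port: N/A, i.e. its brick process was down at the time, which matches the
failed brick Patrick later found. Free space can also be cross-checked directly
on each node against the brick mount (path taken from the brick lines above):

  # run on every node; compare the results between replica pairs
  df -h /brick/gv01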