Serkan Çoban
2016-May-02 06:11 UTC
[Gluster-users] [Gluster-devel] Fwd: dht_is_subvol_filled messages on client
>> 1. What is the output of du -hs <back-end-export>? Please get this
>> information for each of the bricks that are part of the disperse set.

There are 20 bricks in disperse-56 and the du -hs output looks like:

80K   /bricks/20
80K   /bricks/20
80K   /bricks/20
80K   /bricks/20
80K   /bricks/20
80K   /bricks/20
80K   /bricks/20
80K   /bricks/20
1.8M  /bricks/20
80K   /bricks/20
80K   /bricks/20
80K   /bricks/20
80K   /bricks/20
80K   /bricks/20
80K   /bricks/20
80K   /bricks/20
80K   /bricks/20
80K   /bricks/20
80K   /bricks/20
80K   /bricks/20

I see that gluster is not writing to this disperse set. All the other
disperse sets have 13GB in them, but this one is empty. I see the
directory structure created, but no files in the directories. How can I
fix the issue? I will try a rebalance, but I don't think it will write
to this disperse set...

On Sat, Apr 30, 2016 at 9:22 AM, Raghavendra G <raghavendra at gluster.com> wrote:
>
> On Fri, Apr 29, 2016 at 12:32 AM, Serkan Çoban <cobanserkan at gmail.com> wrote:
>>
>> Hi, I cannot get an answer from the user list, so I am asking the devel list.
>>
>> I am getting the message
>>
>>   [dht-diskusage.c:277:dht_is_subvol_filled] 0-v0-dht: inodes on
>>   subvolume 'v0-disperse-56' are at (100.00 %), consider adding more
>>   bricks.
>>
>> in the client logs. My cluster is empty; there are only a couple of GB
>> of files for testing. Why does this message appear in syslog?
>
> dht uses disk usage information from the backend export.
>
> 1. What is the output of du -hs <back-end-export>? Please get this
>    information for each of the bricks that are part of the disperse set.
> 2. Once you have the du information from each brick, the value seen by
>    dht will be based on how cluster/disperse aggregates the du info
>    (basically the statfs fop).
>
> The reason for 100% disk usage may be:
> In case of 1, the backend fs might be shared by data other than the brick.
> In case of 2, some issue with the aggregation.
>
>> Is it safe to ignore it?
>
> dht will try not to place data files on the subvolume in question
> (v0-disperse-56). Hence the lookup cost will be two hops for files
> hashing to disperse-56 (note that other fops like read/write/open still
> have the cost of a single hop and don't suffer from this penalty). Other
> than that, there is no significant harm unless disperse-56 is really
> running out of space.
>
> regards,
> Raghavendra
>
>> _______________________________________________
>> Gluster-devel mailing list
>> Gluster-devel at gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-devel
>
> --
> Raghavendra G
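The per-brick numbers above can be gathered in one pass. A minimal sketch
(not from the thread): since the warning fired on inodes, it pairs `du`
with `df -i` so IUse% can be compared across bricks. The brick path is the
one from the thread; run it on every server holding a piece of disperse-56
(e.g. via ssh in a loop):

```shell
# Pull the IUse% column out of `df -i` output (data line 2, field 5).
iuse_pct() { awk 'NR==2 { sub(/%/, "", $5); print $5 }'; }

# `du` shows space consumed by the brick's contents, while `df -i` shows
# inode usage of the backing filesystem -- the dht warning is about inodes,
# so IUse% is the number to compare across bricks.
for brick in /bricks/20; do
    [ -d "$brick" ] || continue      # skip hosts without this brick
    du -sh "$brick"
    printf 'inode use%%: %s\n' "$(df -i "$brick" | iuse_pct)"
done
```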
Serkan Çoban
2016-May-02 09:00 UTC
[Gluster-users] [Gluster-devel] Fwd: dht_is_subvol_filled messages on client
I started a rebalance and it did not fix the issue...

On Mon, May 2, 2016 at 9:11 AM, Serkan Çoban <cobanserkan at gmail.com> wrote:
>>> 1. What is the output of du -hs <back-end-export>? Please get this
>>> information for each of the bricks that are part of the disperse set.
>
> There are 20 bricks in disperse-56 and the du -hs output looks like:
> 80K   /bricks/20
> 80K   /bricks/20
> 80K   /bricks/20
> 80K   /bricks/20
> 80K   /bricks/20
> 80K   /bricks/20
> 80K   /bricks/20
> 80K   /bricks/20
> 1.8M  /bricks/20
> 80K   /bricks/20
> 80K   /bricks/20
> 80K   /bricks/20
> 80K   /bricks/20
> 80K   /bricks/20
> 80K   /bricks/20
> 80K   /bricks/20
> 80K   /bricks/20
> 80K   /bricks/20
> 80K   /bricks/20
> 80K   /bricks/20
>
> I see that gluster is not writing to this disperse set. All the other
> disperse sets have 13GB in them, but this one is empty. I see the
> directory structure created, but no files in the directories. How can I
> fix the issue? I will try a rebalance, but I don't think it will write
> to this disperse set...
>
> On Sat, Apr 30, 2016 at 9:22 AM, Raghavendra G <raghavendra at gluster.com> wrote:
>>
>> On Fri, Apr 29, 2016 at 12:32 AM, Serkan Çoban <cobanserkan at gmail.com> wrote:
>>>
>>> Hi, I cannot get an answer from the user list, so I am asking the devel list.
>>>
>>> I am getting the message
>>>
>>>   [dht-diskusage.c:277:dht_is_subvol_filled] 0-v0-dht: inodes on
>>>   subvolume 'v0-disperse-56' are at (100.00 %), consider adding more
>>>   bricks.
>>>
>>> in the client logs. My cluster is empty; there are only a couple of GB
>>> of files for testing. Why does this message appear in syslog?
>>
>> dht uses disk usage information from the backend export.
>>
>> 1. What is the output of du -hs <back-end-export>? Please get this
>>    information for each of the bricks that are part of the disperse set.
>> 2. Once you have the du information from each brick, the value seen by
>>    dht will be based on how cluster/disperse aggregates the du info
>>    (basically the statfs fop).
>>
>> The reason for 100% disk usage may be:
>> In case of 1, the backend fs might be shared by data other than the brick.
>> In case of 2, some issue with the aggregation.
>>
>>> Is it safe to ignore it?
>>
>> dht will try not to place data files on the subvolume in question
>> (v0-disperse-56). Hence the lookup cost will be two hops for files
>> hashing to disperse-56 (note that other fops like read/write/open still
>> have the cost of a single hop and don't suffer from this penalty). Other
>> than that, there is no significant harm unless disperse-56 is really
>> running out of space.
>>
>> regards,
>> Raghavendra
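The value dht checks against its threshold comes from the statfs fop,
aggregated by cluster/disperse across the bricks, rather than from du. A
sketch (not from the thread) of the per-brick statfs inode view, assuming
GNU stat from coreutils is available:

```shell
# Report the inode usage statfs would return for a path. This is the
# per-brick figure that cluster/disperse aggregates and dht compares
# against its free-inode threshold. Requires GNU stat (coreutils):
# %c = total inodes in the filesystem, %d = free inodes.
inode_usage() {
    set -- $(stat -f -c '%c %d' "$1")
    total=$1 free=$2
    if [ "$total" -gt 0 ]; then
        echo $(( 100 * (total - free) / total ))
    else
        echo 0   # some filesystems (e.g. btrfs) report 0 total inodes
    fi
}

inode_usage .    # replace . with a brick path such as /bricks/20
```

If one brick's backing filesystem reports 100% inode usage here, the
aggregated disperse subvolume will too, which matches the logged warning.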
Raghavendra G
2016-May-03 10:40 UTC
[Gluster-users] [Gluster-devel] Fwd: dht_is_subvol_filled messages on client
On Mon, May 2, 2016 at 11:41 AM, Serkan Çoban <cobanserkan at gmail.com> wrote:

>> 1. What is the output of du -hs <back-end-export>? Please get this
>> information for each of the bricks that are part of the disperse set.

Sorry, I needed the df output of the filesystem containing the brick, not
du. Sorry about that.

> There are 20 bricks in disperse-56 and the du -hs output looks like:
> 80K   /bricks/20
> 80K   /bricks/20
> 80K   /bricks/20
> 80K   /bricks/20
> 80K   /bricks/20
> 80K   /bricks/20
> 80K   /bricks/20
> 80K   /bricks/20
> 1.8M  /bricks/20
> 80K   /bricks/20
> 80K   /bricks/20
> 80K   /bricks/20
> 80K   /bricks/20
> 80K   /bricks/20
> 80K   /bricks/20
> 80K   /bricks/20
> 80K   /bricks/20
> 80K   /bricks/20
> 80K   /bricks/20
> 80K   /bricks/20
>
> I see that gluster is not writing to this disperse set. All the other
> disperse sets have 13GB in them, but this one is empty. I see the
> directory structure created, but no files in the directories. How can I
> fix the issue? I will try a rebalance, but I don't think it will write
> to this disperse set...
>
> On Sat, Apr 30, 2016 at 9:22 AM, Raghavendra G <raghavendra at gluster.com> wrote:
>>
>> On Fri, Apr 29, 2016 at 12:32 AM, Serkan Çoban <cobanserkan at gmail.com> wrote:
>>>
>>> Hi, I cannot get an answer from the user list, so I am asking the devel list.
>>>
>>> I am getting the message
>>>
>>>   [dht-diskusage.c:277:dht_is_subvol_filled] 0-v0-dht: inodes on
>>>   subvolume 'v0-disperse-56' are at (100.00 %), consider adding more
>>>   bricks.
>>>
>>> in the client logs. My cluster is empty; there are only a couple of GB
>>> of files for testing. Why does this message appear in syslog?
>>
>> dht uses disk usage information from the backend export.
>>
>> 1. What is the output of du -hs <back-end-export>? Please get this
>>    information for each of the bricks that are part of the disperse set.
>> 2. Once you get the du information from each brick, the value seen by
>>    dht will be based on how cluster/disperse aggregates the du info
>>    (basically the statfs fop).
>>
>> The reason for 100% disk usage may be:
>> In case of 1, the backend fs might be shared by data other than the brick.
>> In case of 2, some issue with the aggregation.
>>
>>> Is it safe to ignore it?
>>
>> dht will try not to place data files on the subvolume in question
>> (v0-disperse-56). Hence the lookup cost will be two hops for files
>> hashing to disperse-56 (note that other fops like read/write/open still
>> have the cost of a single hop and don't suffer from this penalty). Other
>> than that, there is no significant harm unless disperse-56 is really
>> running out of space.
>>
>> regards,
>> Raghavendra

--
Raghavendra G
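The df output requested above, for the filesystem containing each brick,
can be collected like this. A minimal sketch (not from the thread): df
reports usage of the whole backing filesystem (what statfs sees), unlike
du, which only sums the brick's own contents; the inode view is the one
the warning was about. The brick path is the one from the thread:

```shell
# Collect both the block view and the inode view of the filesystem under
# the brick. Run on each server holding a brick of disperse-56.
brick=/bricks/20                 # brick path from the thread; adjust
if [ -d "$brick" ]; then
    df -h "$brick"               # block usage of the backing filesystem
    df -i "$brick"               # inode usage -- the metric reported at 100%
fi
```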