Pranith Kumar Karampuri
2018-Jul-27 07:22 UTC
[Gluster-users] Gluster 3.12.12: performance during heal and in general
On Fri, Jul 27, 2018 at 12:36 PM, Hu Bert <revirii at googlemail.com> wrote:

> 2018-07-27 8:52 GMT+02:00 Pranith Kumar Karampuri <pkarampu at redhat.com>:
> >
> > On Fri, Jul 27, 2018 at 11:53 AM, Hu Bert <revirii at googlemail.com> wrote:
> >>
> >> > Do you already have all the 190000 directories created? If not,
> >> > could you find out which of the paths need it and do a stat
> >> > directly instead of find?
> >>
> >> Quite probably not all of them have been created (but counting how
> >> many would take very long...). Hm, maybe running stat in a double loop
> >> (thanks to our directory structure) would help. Something like this
> >> (may not be 100% correct):
> >>
> >> for a in {100..999}; do
> >>   for b in {100..999}; do
> >>     stat /$a/$b/
> >>   done
> >> done
> >>
> >> That should run stat on all directories. I think I'll give this a try.
> >
> > Just to prevent these being served from a cache, it is probably better
> > to do this from a fresh mount?
> >
> > --
> > Pranith
>
> Good idea. I'll install the glusterfs client on a little-used machine, so
> there should be no caching. Thx! Have a good weekend when the time
> comes :-)

If this proves effective, what you also need to do is unmount and mount
again, something like:

mount
for a in {100..999}; do
  for b in {100..999}; do
    stat /$a/$b/
  done
done
umount

--
Pranith
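For reference, a minimal self-contained sketch of the procedure discussed above (fresh mount, stat every directory in the two-level layout). The mount point /mnt/glustertest and the LO/HI bounds are assumptions, not from the thread; note that the thread's ${100..999} is not valid brace expansion, so seq is used here, which also allows variable bounds:

```shell
#!/bin/sh
# Assumptions: the Gluster volume is freshly mounted at $MNT (hypothetical
# path), and directories follow the /NNN/NNN layout from the thread.
MNT="${MNT:-/mnt/glustertest}"
LO="${LO:-100}"
HI="${HI:-999}"

# Emit every directory path in the two-level layout.
list_dirs() {
    for a in $(seq "$LO" "$HI"); do
        for b in $(seq "$LO" "$HI"); do
            echo "$MNT/$a/$b/"
        done
    done
}

# Only walk the tree when the mount point actually exists; each stat
# forces a lookup on the volume, which is what triggers creation/heal
# of a missing directory entry. Unmounting afterwards, as suggested
# above, is left to the operator.
if [ -d "$MNT" ]; then
    list_dirs | while read -r d; do
        stat "$d" >/dev/null 2>&1 || true
    done
fi
```

Running this from a little-used client, as suggested above, keeps local caches from masking the lookups.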
Hu Bert
2018-Jul-27 08:02 UTC
[Gluster-users] Gluster 3.12.12: performance during heal and in general
2018-07-27 9:22 GMT+02:00 Pranith Kumar Karampuri <pkarampu at redhat.com>:

> On Fri, Jul 27, 2018 at 12:36 PM, Hu Bert <revirii at googlemail.com> wrote:
>>
>> 2018-07-27 8:52 GMT+02:00 Pranith Kumar Karampuri <pkarampu at redhat.com>:
>> >
>> > On Fri, Jul 27, 2018 at 11:53 AM, Hu Bert <revirii at googlemail.com> wrote:
>> >>
>> >> > Do you already have all the 190000 directories created? If not,
>> >> > could you find out which of the paths need it and do a stat
>> >> > directly instead of find?
>> >>
>> >> Quite probably not all of them have been created (but counting how
>> >> many would take very long...). Hm, maybe running stat in a double loop
>> >> (thanks to our directory structure) would help. Something like this
>> >> (may not be 100% correct):
>> >>
>> >> for a in {100..999}; do
>> >>   for b in {100..999}; do
>> >>     stat /$a/$b/
>> >>   done
>> >> done
>> >>
>> >> That should run stat on all directories. I think I'll give this a try.
>> >
>> > Just to prevent these being served from a cache, it is probably better
>> > to do this from a fresh mount?
>> >
>> > --
>> > Pranith
>>
>> Good idea. I'll install the glusterfs client on a little-used machine, so
>> there should be no caching. Thx! Have a good weekend when the time
>> comes :-)
>
> If this proves effective, what you also need to do is unmount and mount
> again, something like:
>
> mount
> for a in {100..999}; do
>   for b in {100..999}; do
>     stat /$a/$b/
>   done
> done
> umount

I'll see what is possible over the weekend. Btw: I've seen in the Munin
stats that the disk utilization for bricksdd1 on the healthy gluster
servers is between 70% (night) and almost 99% (daytime). So it looks like
the basic problem is the disk itself, which simply can't work any faster.
If so, (heal) performance won't improve with this setup, I assume. Maybe
switching to RAID10 (conventional hard disks) or SSDs, or even adding 3
additional gluster servers (distributed replicated), could help?
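The disk-saturation figure from Munin can be double-checked directly on a brick server. A sketch, assuming a Linux host and a hypothetical device name sdd (substitute the disk backing bricksdd1): field 13 of /proc/diskstats is the cumulative time in milliseconds the device spent doing I/O, so the delta between two samples divided by the wall-clock interval gives the utilization percentage.

```shell
#!/bin/sh
# Hypothetical brick device name -- adjust to the disk backing bricksdd1.
DEV="${DEV:-sdd}"
INTERVAL=5

busy_ms() {
    # Print field 13 (cumulative ms busy) for $DEV; the optional file
    # argument makes the parser easy to test against sample data.
    awk -v dev="$DEV" '$3 == dev { print $13 }' \
        "${1:-/proc/diskstats}" 2>/dev/null
}

t0=$(busy_ms)
if [ -n "$t0" ]; then
    sleep "$INTERVAL"
    t1=$(busy_ms)
    echo "$DEV: $(( (t1 - t0) * 100 / (INTERVAL * 1000) ))% utilized"
fi
```

A device pinned near 100% here, matching the 99% daytime figure from Munin, would confirm the heal is I/O-bound on that disk, in which case faster media or more bricks is the realistic fix rather than tuning.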