Marco Lorenzo Crociani
2016-Jul-20 09:46 UTC
[Gluster-users] Really long df-h and high swap usage after rsync 3.7.11
Hi,
I have a CentOS7 machine with rsnapshot that mounts glusterfs volumes and does backups every day.

# yum list installed | grep glusterfs
glusterfs.x86_64                  3.7.11-1.el7
glusterfs-api.x86_64              3.7.11-1.el7
glusterfs-client-xlators.x86_64   3.7.11-1.el7
glusterfs-fuse.x86_64             3.7.11-1.el7
glusterfs-libs.x86_64             3.7.11-1.el7

# free -m
              total        used        free      shared  buff/cache   available
Mem:           7958        3401         213           4        4343        4146
Swap:          8063        2467        5596

After running the backups, df -h is really slow:

# time df -h
Filesystem            Size  Used Avail Use% Mounted on
[.....]
s25gfs.ovirt:VOL_***  100G   40G   61G  40% /mnt/VOL_***
s25gfs.ovirt:VOL_***  100G   50G   51G  50% /mnt/VOL_***
s25gfs.ovirt:VOL_***  200G  138G   63G  69% /mnt/VOL_***
s25gfs.ovirt:VOL_***  500G  412G   89G  83% /mnt/VOL_***
s25gfs.ovirt:VOL_***  500G  246G  255G  50% /mnt/VOL_***
s25gfs.ovirt:VOL_***  200G   98G  103G  49% /mnt/VOL_***
s25gfs.ovirt:VOL_***  200G   90G  111G  45% /mnt/VOL_***
s25gfs.ovirt:VOL_***  100G   43G   58G  43% /mnt/VOL_***
s25gfs.ovirt:VOL_***  500G  385G  116G  77% /mnt/VOL_***
s25gfs.ovirt:VOL_***  100G   52G   49G  52% /mnt/VOL_***
s25gfs.ovirt:VOL_***  100G   15G   86G  15% /mnt/VOL_***
s25gfs.ovirt:VOL_***  400G  348G   53G  87% /mnt/VOL_***

real    0m24.068s
user    0m0.003s
sys     0m0.002s

while on another machine it took:

real    0m0.057s
user    0m0.000s
sys     0m0.006s

After unmounting all gluster volumes and mounting them back:

# free -m
              total        used        free      shared  buff/cache   available
Mem:           7958         652        3014           4        4291        6915
Swap:          8063          55        8008

# time df -h
real    0m0.037s
user    0m0.001s
sys     0m0.002s

I mount the volumes via fstab with:

s25gfs.ovirt:VOL_***  /mnt/VOL_***  glusterfs  defaults,acl  0 0

Is there a memory leak or something else nasty going on?
Regards,

-- 
Marco Crociani
Prisma Telecom Testing S.r.l.
via Petrocchi, 4  20127 MILANO ITALY
Phone:  +39 02 26113507
Fax:    +39 02 26113597
e-mail: marcoc at prismatelecomtesting.com
web:    http://www.prismatelecomtesting.com
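[Editor's note] Each FUSE-mounted gluster volume is served by its own glusterfs client process, so a first diagnostic step is to see whether one of those processes has an outsized footprint and how much of it has already been pushed to swap. A minimal sketch, assuming a Linux /proc filesystem (the specific volume names in any output would of course match the mounts above):

```shell
#!/bin/sh
# List resident memory of every glusterfs client process, largest first.
# A leaking client shows up as one process with an outsized RSS.
ps -C glusterfs -o pid=,rss=,args= |
    awk '{ mb = $2 / 1024; $1 = ""; $2 = ""; printf "%6.0f MB%s\n", mb, $0 }' |
    sort -rn

# Memory already swapped out per client (Linux-specific VmSwap field).
# A large VmSwap would explain slow df: each statfs() has to touch
# client state that must first be paged back in from swap.
for pid in $(pgrep -x glusterfs); do
    printf '%s: ' "$pid"
    grep VmSwap "/proc/$pid/status"
done
```

If one client dominates both lists, that points at a per-volume client-side problem (cache growth or a leak) rather than kernel page-cache pressure.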
Marco Lorenzo Crociani
2016-Jul-21 08:42 UTC
[Gluster-users] Really long df-h and high swap usage after rsync 3.7.11
Hello,
I have done a swapoff, and now df -h runs fast again.

# free -m
              total        used        free      shared  buff/cache   available
Mem:           7958        5720         280          16        1956        1825
Swap:             0           0           0

# time df -h
real    0m0.054s
user    0m0.000s
sys     0m0.003s

Should I reduce swappiness? It is currently 60. Is all that RAM really needed just to keep twelve glusterfs volumes (~3764 GB) mounted?
Regards,

-- 
Marco Crociani
Prisma Telecom Testing S.r.l.
via Petrocchi, 4  20127 MILANO ITALY
Phone:  +39 02 26113507
Fax:    +39 02 26113597
e-mail: marcoc at prismatelecomtesting.com
web:    http://www.prismatelecomtesting.com

On 20/07/2016 11:46, Marco Lorenzo Crociani wrote:
> Hi,
> I have a CentOS7 machine with rsnapshot that mounts glusterfs volumes
> and does backups every day.
> [...]
> Is there a memory leak or something else nasty going on?
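[Editor's note] On the swappiness question: lowering vm.swappiness makes the kernel less eager to push the glusterfs clients' anonymous memory out to swap in favour of page cache, which is what appears to be making df slow here. A minimal sketch of checking and changing it; the value 10 and the sysctl.d file name are illustrative choices, not a recommendation from this thread:

```shell
#!/bin/sh
# Read the current value (60 is the CentOS 7 default).
cat /proc/sys/vm/swappiness

# Lower it for the running system only (root required).
sysctl -w vm.swappiness=10

# Persist the setting across reboots (file name is an example).
echo 'vm.swappiness = 10' > /etc/sysctl.d/99-swappiness.conf
```

Note this only changes how eagerly client memory is swapped; the thread's own workaround of unmounting and remounting the volumes is what actually releases the memory the clients have accumulated.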