Displaying 12 results from an estimated 12 matches for "lru_siz".
2023 Jul 21 · 2 · log file spewing on one node, but not the others
...nu/glusterfs/6.10/xlator/features/shard.so(+0x21b47) [0x7fb261c13b47]
-->/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(inode_unref+0x36) [0x7fb26947f416]
-->/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(+0x3337a) [0x7fb26947f37a] )
0-GLB1image-shard: Empty inode lru list found but with (-2) lru_size
[2023-07-21 18:51:38.261231] W [inode.c:1638:inode_table_prune]
(-->/usr/lib/x86_64-linux-gnu/glusterfs/6.10/xlator/mount/fuse.so(+0xba51) [0x7fb266cdca51]
-->/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(inode_unref+0x36) [0x7fb26947f416]
-->/usr/lib/x86_64-linux-gnu/libglusterfs.so.0...
2023 Jul 25 · 1 · log file spewing on one node, but not the others
...nu/glusterfs/6.10/xlator/features/shard.so(+0x21b47) [0x7fb261c13b47]
-->/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(inode_unref+0x36) [0x7fb26947f416]
-->/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(+0x3337a) [0x7fb26947f37a] )
0-GLB1image-shard: Empty inode lru list found but with (-2) lru_size
[2023-07-21 18:51:38.261231] W [inode.c:1638:inode_table_prune]
(-->/usr/lib/x86_64-linux-gnu/glusterfs/6.10/xlator/mount/fuse.so(+0xba51) [0x7fb266cdca51]
-->/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(inode_unref+0x36) [0x7fb26947f416]
-->/usr/lib/x86_64-linux-gnu/libglusterfs.so.0...
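The warning excerpted in both threads comes from inode_table_prune() in inode.c: the client's in-memory inode lru list is empty while its lru_size counter has gone negative (-2), so the table's bookkeeping has apparently underflowed. A minimal sketch of how one might inspect those counters on the affected client, assuming a FUSE mount and the default statedump directory of /var/run/gluster (the mount point and pid below are placeholders):

# Find the glusterfs FUSE client process for the affected volume (placeholder mount point).
pgrep -af 'glusterfs.*/mnt/glb1image'

# SIGUSR1 asks a gluster process to write a statedump, by default under /var/run/gluster.
kill -USR1 <pid>

# Compare the number of lru entries against the recorded lru_size counter.
grep itable /var/run/gluster/glusterdump.<pid>.dump.* | grep lru | wc -l
grep itable /var/run/gluster/glusterdump.<pid>.dump.* | grep lru_size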
2018 Feb 02 · 3 · Run away memory with gluster mount
...results:
>>>> # grep itable <client-statedump> | grep active | wc -l
>>>> # grep itable <client-statedump> | grep active_size
>>>> # grep itable <client-statedump> | grep lru | wc -l
>>>> # grep itable <client-statedump> | grep lru_size
>>>> # grep itable <client-statedump> | grep purge | wc -l
>>>> # grep itable <client-statedump> | grep purge_size
>>>>
>>>
>>> Had to restart the test and have been running for 36 hours now. RSS is
>>> currently up to 23g....
2018 Feb 01 · 0 · Run away memory with gluster mount
...he
>>> results:
>>> # grep itable <client-statedump> | grep active | wc -l
>>> # grep itable <client-statedump> | grep active_size
>>> # grep itable <client-statedump> | grep lru | wc -l
>>> # grep itable <client-statedump> | grep lru_size
>>> # grep itable <client-statedump> | grep purge | wc -l
>>> # grep itable <client-statedump> | grep purge_size
>>
>> Had to restart the test and have been running for 36 hours now. RSS is
>> currently up to 23g.
>>
>> Working on getting...
2018 Jan 29 · 2 · Run away memory with gluster mount
...wing query on statedump files and report us the results:
> # grep itable <client-statedump> | grep active | wc -l
> # grep itable <client-statedump> | grep active_size
> # grep itable <client-statedump> | grep lru | wc -l
> # grep itable <client-statedump> | grep lru_size
> # grep itable <client-statedump> | grep purge | wc -l
> # grep itable <client-statedump> | grep purge_size
Had to restart the test and have been running for 36 hours now. RSS is
currently up to 23g.
Working on getting a bug report with link to the dumps. In the mean
time, I...
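The RSS figure quoted in these threads (23g after 36 hours) is presumably the resident set size of the glusterfs FUSE client. A simple way to record its growth while reproducing the leak is a loop like the following (a sketch; the mount point, sampling interval, and log file name are placeholders):

# Sample the client's RSS (KiB) once a minute with a timestamp.
PID=$(pgrep -f 'glusterfs.*/mnt/gluster' | head -n1)
while sleep 60; do
    printf '%s %s\n' "$(date -Is)" "$(ps -o rss= -p "$PID")"
done >> glusterfs-rss.log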
2018 Feb 03 · 0 · Run away memory with gluster mount
...# grep itable <client-statedump> | grep active | wc -l
> # grep itable <client-statedump> | grep active_size
> # grep itable <client-statedump> | grep lru | wc -l
> # grep itable <client-statedump> | grep lru_size
> # grep itable <client-statedump> | grep purge | wc -l
> # grep itable <client-statedump> | grep purge_size
>
>
> Had to restart the test and have been running for 36 hours
> now. RSS is
> curre...
2018 Jan 30 · 1 · Run away memory with gluster mount
...and report us the
> > results:
> > # grep itable <client-statedump> | grep active | wc -l
> > # grep itable <client-statedump> | grep active_size
> > # grep itable <client-statedump> | grep lru | wc -l
> > # grep itable <client-statedump> | grep lru_size
> > # grep itable <client-statedump> | grep purge | wc -l
> > # grep itable <client-statedump> | grep purge_size
>
> Had to restart the test and have been running for 36 hours now. RSS is
> currently up to 23g.
>
> Working on getting a bug report with link...
2018 Feb 21 · 1 · Run away memory with gluster mount
... # grep itable <client-statedump> | grep active | wc -l
>> # grep itable <client-statedump> | grep active_size
>> # grep itable <client-statedump> | grep lru | wc -l
>> # grep itable <client-statedump> | grep lru_size
>> # grep itable <client-statedump> | grep purge | wc -l
>> # grep itable <client-statedump> | grep purge_size
>>
>>
>> Had to restart the test and have been running for 36 hours
>> now. RSS is...
2018 Feb 05 · 1 · Run away memory with gluster mount
...grep itable <client-statedump> | grep active | wc -l
> > # grep itable <client-statedump> | grep active_size
> > # grep itable <client-statedump> | grep lru | wc -l
> > # grep itable <client-statedump> | grep lru_size
> > # grep itable <client-statedump> | grep purge | wc -l
> > # grep itable <client-statedump> | grep purge_size
> >
> >
> > Had to restart the test and have been running for 36 hours
> > now....
2018 Jan 29 · 0 · Run away memory with gluster mount
...Please run the following query on statedump files and report us the results:
# grep itable <client-statedump> | grep active | wc -l
# grep itable <client-statedump> | grep active_size
# grep itable <client-statedump> | grep lru | wc -l
# grep itable <client-statedump> | grep lru_size
# grep itable <client-statedump> | grep purge | wc -l
# grep itable <client-statedump> | grep purge_size
>
> I've CC'd the fuse/ dht devs to see if these data types have potential
> leaks. Could you raise a bug with the volume info and a (dropbox?) link
> from whi...
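The queries requested above count the entries on each of the inode table's three lists (active, lru, purge) in a client statedump and read the matching *_size counters. A sketch that runs all six checks in one pass (the dump path is a placeholder):

# Path to the client statedump being inspected (placeholder).
DUMP=/var/run/gluster/glusterdump.<pid>.dump.<timestamp>
for list in active lru purge; do
    # Number of itable lines for this list (same as the "grep ... | wc -l" queries above).
    printf '%s entries: ' "$list"; grep itable "$DUMP" | grep -c "$list"
    # The size counter the table itself reports for this list.
    grep itable "$DUMP" | grep "${list}_size"
done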
2018 Jan 27 · 6 · Run away memory with gluster mount
On 01/27/2018 02:29 AM, Dan Ragle wrote:
>
> On 1/25/2018 8:21 PM, Ravishankar N wrote:
>>
>>
>> On 01/25/2018 11:04 PM, Dan Ragle wrote:
>>> *sigh* trying again to correct formatting ... apologize for the
>>> earlier mess.
>>>
>>> Having a memory issue with Gluster 3.12.4 and not sure how to
>>> troubleshoot. I don't
2010 Sep 10 · 11 · Large directory performance
We have been struggling with our Lustre performance for some time now especially with large directories. I recently did some informal benchmarking (on a live system so I know results are not scientifically valid) and noticed a huge drop in performance of reads (stat operations) past 20k files in a single directory. I'm using bonnie++, disabling IO testing (-s 0) and just creating, reading,
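For a metadata-only test like the one described, bonnie++ can skip the throughput phase and exercise only file creation, stat, and deletion. One way to reproduce the ~20k-files-per-directory case might be (a sketch; the target directory, file count, and user are placeholders):

# -s 0 disables the block I/O tests; -n 20:0:0:1 creates 20*1024 zero-length
# files in a single directory for the create/stat/delete passes.
bonnie++ -d /mnt/lustre/bigdir -s 0 -n 20:0:0:1 -u nobody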