Displaying 9 results from an estimated 9 matches for "4063228".
2017 Dec 18
3
High named cpu
...op - 17:27:48 up 2:29, 1 user, load average: 1.08, 1.13, 1.09
Tasks: 139 total, 1 running, 138 sleeping, 0 stopped, 0 zombie
%Cpu(s): 49.7 us, 0.8 sy, 0.0 ni, 49.1 id, 0.0 wa, 0.0 hi, 0.3 si,
0.0 st
KiB Mem : 735668 total, 203860 free, 299444 used, 232364 buff/cache
KiB Swap: 4063228 total, 4033608 free, 29620 used. 177296 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
2011 named 20 0 646880 64712 11924 S 100.3 8.8 128:53.76 named
9 root 20 0 0 0 0 S 0.3 0.0 0:37.72 rcu_sched
1188 root...
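When `named` pins a core like this, a first step is to capture `top -b -n 1` snapshots over time and pull out the worst offender. A minimal sketch of parsing such a snapshot, assuming the default `top` column order (%CPU is the 9th field, COMMAND the 12th); the sample data is taken from the output above:

```shell
# Parse a saved batch-mode top snapshot (e.g. top -b -n 1 > snap.txt)
# and print the command with the highest %CPU.
# Assumption: default top field order (%CPU = field 9, COMMAND = field 12).
snap='  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
 2011 named     20   0  646880  64712  11924 S 100.3  8.8 128:53.76 named
    9 root      20   0       0      0      0 S   0.3  0.0   0:37.72 rcu_sched'
printf '%s\n' "$snap" | awk 'NR > 1 && $9 + 0 > max { max = $9; cmd = $12 } END { print cmd, max }'
```

On a live system the same pipeline can be fed from `top -b -n 1` directly; for BIND specifically, `rndc status` shows whether recursive-client counts explain the load.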
2018 Mar 19
0
Gluster very poor performance when copying small files (1x (2+1) = 3, SSD)
...0.0 ni, 38.6 id, 30.0 wa, 0.0 hi, 3.8 si,
0.0 st
%Cpu2 : 8.7 us, 6.9 sy, 0.0 ni, 48.7 id, 34.9 wa, 0.0 hi, 0.7 si,
0.0 st
%Cpu3 : 10.6 us, 7.8 sy, 0.0 ni, 57.1 id, 24.1 wa, 0.0 hi, 0.4 si,
0.0 st
KiB Mem : 3881708 total, 3543280 free, 224008 used, 114420 buff/cache
KiB Swap: 4063228 total, 3836612 free, 226616 used. 3457708 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
14115 root 20 0 2504832 27640 2612 S 43.5 0.7 432:10.35 glusterfsd
1319 root 20 0 1269620 23780 2636 S 38.9 0.6 752:44.78 glusterfs
133...
2019 Dec 26
2
nfs causes Centos 7.7 system to hang
...10:09:40 up 10:16, 1 user, load average: 53.66, 54.13, 52.98
Tasks: 475 total, 2 running, 436 sleeping, 0 stopped, 37 zombie
%Cpu(s): 0.1 us, 0.6 sy, 0.0 ni, 99.2 id, 0.0 wa, 0.0 hi, 0.0 si,
0.1 st
KiB Mem : 3879928 total, 813504 free, 1733216 used, 1333208 buff/cache
KiB Swap: 4063228 total, 4062708 free, 520 used. 1797264 avail Mem
and finally hangs, showing messages on the console login screen (which I have
not recorded precisely) such as "System out of memory". Then I have to reboot.
I tried to downgrade nfs-utils and rpcbind to earlier versions (in case
there is...
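With 37 zombies and a load average above 50 on a nearly idle CPU, it helps to log the `Tasks:` and memory lines periodically (e.g. `top -b -n 1 | head -5` appended to a file from cron) so the state just before the hang survives the reboot. A minimal sketch of pulling the zombie count out of a saved `top` header line, assuming the default `Tasks:` layout where the count immediately precedes the word `zombie`:

```shell
# Extract the zombie count from a saved top "Tasks:" header line.
# Assumption: default layout, count directly before the word "zombie".
tasks='Tasks: 475 total, 2 running, 436 sleeping, 0 stopped, 37 zombie'
zombies=$(printf '%s\n' "$tasks" | awk '{ for (i = 1; i < NF; i++) if ($(i + 1) == "zombie") print $i }')
echo "zombies=$zombies"
```

Plotting that count over time would show whether zombie growth tracks the memory pressure leading up to the "out of memory" messages.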
2018 Mar 19
0
Gluster very poor performance when copying small files (1x (2+1) = 3, SSD)
...0.0 ni, 47.2 id, 0.0 wa, 0.0 hi, 1.5 si,
0.0 st
%Cpu2 : 20.2 us, 13.5 sy, 0.0 ni, 64.1 id, 0.0 wa, 0.0 hi, 2.3 si,
0.0 st
%Cpu3 : 30.0 us, 16.2 sy, 0.0 ni, 47.5 id, 0.0 wa, 0.0 hi, 6.3 si,
0.0 st
KiB Mem : 3881708 total, 3207488 free, 346680 used, 327540 buff/cache
KiB Swap: 4063228 total, 4062828 free, 400 used. 3232208 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1319 root 20 0 819036 12928 4036 S 32.3 0.3 1:19.64 glusterfs
1310 root 20 0 1232428 25636 4364 S 12.1 0.7 0:41.25 glusterfsd
Ne...
2017 Dec 18
0
High named cpu
...1 user, load average: 1.08, 1.13, 1.09
> Tasks: 139 total, 1 running, 138 sleeping, 0 stopped, 0 zombie
> %Cpu(s): 49.7 us, 0.8 sy, 0.0 ni, 49.1 id, 0.0 wa, 0.0 hi, 0.3 si, 0.0 st
> KiB Mem : 735668 total, 203860 free, 299444 used, 232364 buff/cache
> KiB Swap: 4063228 total, 4033608 free, 29620 used. 177296 avail Mem
>
> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
> 2011 named 20 0 646880 64712 11924 S 100.3 8.8 128:53.76 named
> 9 root 20 0 0 0 0 S 0.3 0.0 0:37.72 rc...
2017 Dec 20
1
High named cpu
...08, 1.13, 1.09
> > Tasks: 139 total, 1 running, 138 sleeping, 0 stopped, 0 zombie
> > %Cpu(s): 49.7 us, 0.8 sy, 0.0 ni, 49.1 id, 0.0 wa, 0.0 hi, 0.3 si, 0.0 st
> > KiB Mem : 735668 total, 203860 free, 299444 used, 232364 buff/cache
> > KiB Swap: 4063228 total, 4033608 free, 29620 used. 177296 avail Mem
> >
> > PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
> > 2011 named 20 0 646880 64712 11924 S 100.3 8.8 128:53.76 named
> > 9 root 20 0 0 0 0 S 0...
2018 Mar 19
2
Gluster very poor performance when copying small files (1x (2+1) = 3, SSD)
Hi,
As I posted in my previous emails - glusterfs can never match NFS (especially an async one) on small-file performance and latency. That is a consequence of the design.
Nothing you can do about it.
Ondrej
-----Original Message-----
From: gluster-users-bounces at gluster.org [mailto:gluster-users-bounces at gluster.org] On Behalf Of Rik Theys
Sent: Monday, March 19, 2018 10:38 AM
To: gluster-users at
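The design claim above is easy to sanity-check with a crude small-file benchmark run against both mounts. A minimal sketch (the mount paths are hypothetical, and GNU `date` with `%N` is assumed; `smallfile` or `fio` give far more rigorous numbers):

```shell
# Time the creation of 1000 tiny files under a given directory.
# Assumption: GNU date with nanosecond (%N) support is available.
bench() {
  dir="$1/smallfile-test.$$"
  mkdir -p "$dir"
  start=$(date +%s%N)
  for i in $(seq 1 1000); do
    echo x > "$dir/f$i"
  done
  end=$(date +%s%N)
  rm -rf "$dir"
  echo "$1: $(( (end - start) / 1000000 )) ms"
}
# Compare the two mounts (paths hypothetical):
#   bench /mnt/glusterfs
#   bench /mnt/nfs
bench /tmp
```

Per-file metadata round trips dominate such a workload, which is why a replicated gluster volume falls far behind an async NFS export on the same hardware.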
2018 Mar 20
2
Gluster very poor performance when copying small files (1x (2+1) = 3, SSD)
...hi, 3.8 si,
> 0.0 st
> %Cpu2 : 8.7 us, 6.9 sy, 0.0 ni, 48.7 id, 34.9 wa, 0.0 hi, 0.7 si,
> 0.0 st
> %Cpu3 : 10.6 us, 7.8 sy, 0.0 ni, 57.1 id, 24.1 wa, 0.0 hi, 0.4 si,
> 0.0 st
> KiB Mem : 3881708 total, 3543280 free, 224008 used, 114420 buff/cache
> KiB Swap: 4063228 total, 3836612 free, 226616 used. 3457708 avail Mem
>
> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
> 14115 root 20 0 2504832 27640 2612 S 43.5 0.7 432:10.35 glusterfsd
> 1319 root 20 0 1269620 23780 2636 S 38.9 0.6 752:4...
2018 Mar 19
3
Gluster very poor performance when copying small files (1x (2+1) = 3, SSD)
Hi,
On 03/19/2018 03:42 PM, TomK wrote:
> On 3/19/2018 5:42 AM, Ondrej Valousek wrote:
> Removing NFS or NFS Ganesha from the equation, not very impressed on my
> own setup either. For the writes it's doing, that's a lot of CPU usage
> in top. It seems bottlenecked on a single execution core somewhere, trying
> to facilitate reads/writes to the other bricks.
>
>
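The single-core suspicion above can be checked by looking at per-thread rather than per-process CPU, e.g. `top -H -p <pid>`. A minimal sketch reading the per-thread counters straight from /proc (using the current shell's PID for illustration; substitute the glusterfsd PID from the earlier output):

```shell
# Print per-thread CPU tick counters for a process from /proc.
# Fields 14/15 of /proc/<tid>/stat are utime/stime in clock ticks.
# Assumption: the comm field contains no spaces (true for glusterfsd);
# spaces in comm would shift the field numbers.
pid=$$   # demonstrated on the current shell; use the daemon's PID instead
for t in /proc/"$pid"/task/*/stat; do
  awk '{ print "tid=" $1, "utime=" $14, "stime=" $15 }' "$t"
done
```

If one thread accounts for nearly all the ticks while the rest sit idle, the daemon is serialized on that thread, which would match the single-core ceiling visible in top.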