Displaying 14 results from an estimated 14 matches for "44m".
2018 Mar 09
0
Memory Leak with PHP
...85 apache 20 0 1454m 1.2g 9824 S 1.3 1.9 167:25.43 php-fpm
13850 apache 20 0 464m 45m 5872 S 1.0 0.1 0:43.55 httpd
20044 root 20 0 15276 1496 952 R 1.0 0.0 15:05.99 top
1936 apache 20 0 1684m 1.4g 10m S 0.7 2.2 192:37.37 php-fpm
19109 apache 20 0 463m 44m 5808 S 0.7 0.1 0:18.66 httpd
24782 apache 20 0 465m 46m 5720 S 0.7 0.1 0:12.01 httpd
1848 apache 20 0 1579m 1.3g 10m S 0.3 2.1 183:02.13 php-fpm
2718 root 20 0 1074m 12m 2536 S 0.3 0.0 7:32.85 fail2ban-server
15221 apache 20 0 475m 49m 9508 S 0.3 0.1...
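The top listing above mixes plain-KiB values with m/g-suffixed ones in the VIRT/RES columns. A minimal awk sketch for normalizing them back to KiB so they can be compared numerically (the `to_kib` helper is my own, not from the thread):

```shell
# Convert a top(1) VIRT/RES/SHR field to KiB: top prints plain numbers
# in KiB and appends m/g for MiB/GiB.
to_kib() {
  echo "$1" | awk '/m$/ { printf "%.0f\n", $0 * 1024; next }
                   /g$/ { printf "%.0f\n", $0 * 1024 * 1024; next }
                   { print $0 + 0 }'
}
to_kib 44m    # 44 MiB in KiB
to_kib 1.2g   # 1.2 GiB in KiB
to_kib 9824   # already KiB, passed through
```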
2003 Apr 02
12
segmentation fault
...6 0:00 asterisk
15987 root 8 0 6440 6436 2144 S 0.0 0.6 0:00 asterisk
Top after several hours:
15986 root 9 0 9192 9188 2148 S 0.0 0.9 0:00 asterisk
15987 root 9 0 9192 9188 2148 S 0.0 0.9 0:00 asterisk
Top after a day:
27441 root 9 0 45980 44M 2156 S 0.0 4.5 0:00 asterisk
27442 root 8 0 45980 44M 2156 S 0.0 4.5 0:16 asterisk
Actually, I have seen it go over 50.
There were some warning messages on the way. For example:
Apr 1 23:22:33 WARNING[10251]: File chan_zap.c, Line 5248 (zt_pri_error):
PRI:
Read on 86 failed: Unk...
2009 May 18
4
unable to read partition table in log
...13 104391 83 Linux
/dev/sda2 14 13275 106527015 8e Linux LVM
[root at mail srvadmin]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
99G 42G 52G 45% /
/dev/sda1 99M 44M 51M 47% /boot
none 3.9G 0 3.9G 0% /dev/shm
[root at mail srvadmin]#
I looked at the hardware itself, and all disks in this RAID system are fine.
My question is: is this something I need to worry about, and what is causing
this issue?
Although everything looking fine,...
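As a sanity check on the df output above, the Use% column can be recomputed by hand; df derives it as used/(used+avail), rounded up. A quick sketch with the numbers from the /boot line (44M used, 51M available):

```shell
# Ceiling division in shell integer arithmetic: (a + b - 1) / b.
# 44 / (44 + 51) = 46.3%, rounded up to 47% — matching df's 47%.
echo $(( (44 * 100 + 44 + 51 - 1) / (44 + 51) ))
```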
2010 Sep 24
1
switching
...want 5 virtual machines with unshared storage
[root@virtualintranet /]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 4.0G 1.7G 2.1G 46% /
none 376M 0 376M 0% /dev/shm
/opt/xensource/packages/iso/XenCenter.iso
44M 44M 0 100% /var/xen/xc-install
[root@virtualintranet /]# fdisk -l
Disk /dev/sda: 1998.2 GB, 1998233534464 bytes
255 heads, 63 sectors/track, 242938 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 *...
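The fdisk geometry in that excerpt can be cross-checked with shell arithmetic, using the values copied straight from the output:

```shell
# 255 heads * 63 sectors/track * 512 bytes/sector = bytes per cylinder,
# which should match fdisk's "Units = cylinders of 16065 * 512 = 8225280".
echo $((255 * 63 * 512))
# Whole cylinders on a 1998233534464-byte disk -> fdisk's 242938.
echo $((1998233534464 / 8225280))
```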
2007 Jun 21
0
Network issue in RHCS/GFS environment
...478B 260k>
1 33 40 25 1 0| 0 52M| 35B 35B: 38M 41M: 874B 576k>
1 41 19 39 1 0| 0 59M|1293B 936B: 60M 54M: 462B 552k>
0 25 61 13 0 0| 0 42M| 35B 0 : 62M 62M: 575B 453k>
1 40 56 2 1 0| 0 56M| 0 35B: 41M 44M: 484B 400k>
1 39 52 7 1 0| 0 60M| 0 0 : 63M 59M: 442B 636k>
1 39 58 2 1 0| 0 57M| 35B 35B: 63M 63M: 638B 607k>
1 25 74 0 1 0| 0 38M| 0 0 : 56M 56M: 847B 221k>
1 37 60 2 1 0| 0 55M| 35B 0 : 44M...
2009 Jun 04
6
CPU usage over estimated?
I have a quad-core CPU running CentOS 5.
When I use top, I see that running processes use 245% instead of 100%.
If I use gkrellm, I just see one core being used at 100%.
top:
PID USER PR NI VIRT RES SWAP SHR S %CPU %MEM TIME+ COMMAND
18037 thba 31 15 304m 242m 62m 44m R 245.3 4.1 148:58.72 ic
Also in the log of some programs I see this strange factor:
CPU Seconds = 2632 Wall Clock Seconds = 1090
These are all single-threaded programs, so it's not that more cores are
being used.
[thba at fazant]$ uname -a
Linux fazant 2.6.18-128.1.6.el5 #1 SMP Wed Apr...
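For what it's worth, the "strange factor" in that log is consistent with top's reading; CPU seconds over wall-clock seconds, as a percentage (integer shell arithmetic, numbers from the log):

```shell
# 2632 CPU seconds in 1090 wall-clock seconds -> ~241%,
# in the same ballpark as the 245.3% that top reported.
echo $((2632 * 100 / 1090))
```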
2006 Nov 14
4
Samba 3.0.14 (Debian Sarge) Memory Leakage
...er out of the box (debian sarge) on kernel 2.6.16.31.
After a while, the smbd processes of users who have shares and files open
acquire more and more memory until smbd dies.
Here is a small excerpt from top:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
13843 gxxxxx 16 0 51792 44m 2552 S 0.3 17.8 0:22.97 smbd
13840 bxxxxx 17 0 31400 25m 2764 S 0.7 10.0 2:42.78 smbd
smbd is allocating more and more memory. We are using user security and
smbpasswd as the password backend.
The output from 'smbcontrol 13843 pool-usage' shows:
global talloc allocations in pid: 13843
n...
2008 Sep 17
0
Compiz Fusion & CPU Usage During Idle Times.
...0%wa, 0.0%hi, 0.0%si,
0.0%st
Mem: 1035332k total, 631816k used, 403516k free, 44472k buffers
Swap: 2361512k total, 0k used, 2361512k free, 375876k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
4483 ant 24 4 64088 44m 7032 S 6.0 4.4 7:24.40 compiz.real
4322 ant 24 4 38436 12m 8428 S 3.0 1.2 2:41.47 gkrellm
4214 root 20 0 79232 40m 8344 S 2.0 4.0 3:24.26 Xorg
2907 messageb 20 0 2616 952 696 S 0.7 0.1 0:00.78...
2012 Dec 17
5
Feedback on RAID1 feature of Btrfs
...0 CET
2012 x86_64 GNU/Linux
The filesystem was created with:
# mkfs.btrfs -L test -d raid1 -m raid1 /dev/vd[bcd]
I downloaded a lot of Linux kernel tarballs and untarred them into this
filesystem until it told me enough:
drwxr-xr-x 1 root root 330 2007-10-09 20:31 linux-2.6.23
-rw-r--r-- 1 root root 44M 2007-10-09 20:48 linux-2.6.23.tar.bz2
drwxr-xr-x 1 root root 344 2008-01-24 22:58 linux-2.6.24
-rw-r--r-- 1 root root 45M 2008-01-24 23:16 linux-2.6.24.tar.bz2
drwxr-xr-x 1 root root 352 2008-04-17 02:49 linux-2.6.25
Some output of btrfs tools
# btrfs fi sh
Label: ''test'' uui...
2003 Jun 13
10
state of ide raid
Hello,
While shopping for another 3Ware card, I found that this market has gotten
much larger since I bought my 6500. So I started looking at the various
cards while browsing the hardware notes for FreeBSD 4.8 and 5.1.
So I'm wondering if anyone can provide some insight here.
-3Ware has a whole new series of cards. The twe(4) manpage states that
only 5xxx and 6xxx series cards are
2011 Dec 02
12
puppet master under passenger locks up completely
I came in this morning to find all the servers locked up solid:
# passenger-status
----------- General information -----------
max = 20
count = 20
active = 20
inactive = 0
Waiting on global queue: 236
----------- Domains -----------
/etc/puppet/rack:
PID: 2720 Sessions: 1 Processed: 939 Uptime: 9h 22m 18s
PID: 1615 Sessions: 1 Processed: 947 Uptime: 9h 23m
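The passenger-status figures above describe a fully saturated pool; a small sketch of what they imply, with the numbers copied from the output:

```shell
# active == max means every worker is busy; the global queue then
# holds roughly 11-12 waiting requests per worker.
max=20; active=20; queued=236
[ "$active" -eq "$max" ] && echo "pool saturated"
echo $((queued / max))   # whole queued requests per worker
```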
2019 Mar 22
4
Problems with Samba 4.5.16 - configuring a second failover AD DC and joining this to an existing domain SAMDOM
...UYel='\e[4;33m'; IYel='\e[0;93m'; BIYel='\e[1;93m'; On_Yel='\e[43m'; On_IYel='\e[0;103m';
Blu='\e[0;34m'; BBlu='\e[1;34m'; UBlu='\e[4;34m'; IBlu='\e[0;94m'; BIBlu='\e[1;94m'; On_Blu='\e[44m'; On_IBlu='\e[0;104m';
Pur='\e[0;35m'; BPur='\e[1;35m'; UPur='\e[4;35m'; IPur='\e[0;95m'; BIPur='\e[1;95m'; On_Pur='\e[45m'; On_IPur='\e[0;105m';
Cya='\e[0;36m'; BCya='\e[1;36m'; UCya=...
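Those variables are ordinary ANSI SGR escape sequences; a minimal bash sketch using the `On_Blu` code from the snippet (assumes bash's printf and a terminal that honors SGR):

```shell
# '\e[44m' (SGR 44) sets a blue background; '\e[0m' resets attributes.
On_Blu='\e[44m'
Reset='\e[0m'
printf "${On_Blu} blue background ${Reset}\n"
```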
2015 Sep 05
5
RFC: Reducing Instr PGO size overhead
On Fri, Sep 4, 2015 at 9:11 PM, Sean Silva <chisophugis at gmail.com> wrote:
>
>
> On Fri, Sep 4, 2015 at 5:42 PM, Xinliang David Li <davidxl at google.com>
> wrote:
>>
>> On Fri, Sep 4, 2015 at 5:21 PM, Sean Silva <chisophugis at gmail.com> wrote:
>> >
>> >
>> > On Fri, Sep 4, 2015 at 3:57 PM, Xinliang David Li <davidxl at
2019 Apr 30
6
Disk space and RAM requirements in docs
...b/Frontend/Rewrite
47M build/lib/LTO/CMakeFiles/LLVMLTO.dir
47M build/lib/LTO/CMakeFiles
47M build/lib/LTO
46M build/tools/polly/lib/CMakeFiles/PollyCore.dir/Support
45M build/tools/clang/unittests/Analysis/CMakeFiles/ClangAnalysisTests.dir
45M build/tools/clang/unittests/Analysis/CMakeFiles
44M build/lib/Object/CMakeFiles/LLVMObject.dir
44M build/lib/Object/CMakeFiles
44M build/lib/Object
43M build/lib/DebugInfo/PDB/CMakeFiles/LLVMDebugInfoPDB.dir/Native
42M build/tools/clang/unittests/CodeGen/CMakeFiles/ClangCodeGenTests.dir
42M build/tools/clang/unittests/CodeGen/CMakeFiles
42M bu...
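A per-directory size listing of that shape is typically produced with du piped through sort; a hedged sketch (the demo paths are illustrative, not the actual build tree):

```shell
# Build a tiny stand-in tree, then list directory sizes in MiB,
# largest first — the same shape as the listing above.
mkdir -p demo/build/lib/Object
dd if=/dev/zero of=demo/build/lib/Object/LLVMObject.o bs=1024 count=64 2>/dev/null
du -m demo/build | sort -rn | head
```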