Displaying 10 results from an estimated 10 matches for "86m".
2010 Oct 20
0
Increased memory usage between 4.8 and 5.5
...d
31734 nobody 15 0 340m 91m 41m S 0.0 2.3 1:13.52 httpd
8904 nobody 15 0 341m 89m 39m S 0.0 2.3 0:35.28 httpd
7353 nobody 15 0 336m 87m 42m S 0.0 2.2 1:21.17 httpd
26097 nobody 15 0 333m 87m 43m S 0.0 2.2 1:28.84 httpd
20765 nobody 15 0 335m 86m 42m S 0.0 2.2 0:48.50 httpd
23299 nobody 15 0 334m 86m 42m S 0.0 2.2 1:13.35 httpd
--- centos 4.8 ---
$ uname -a
Linux ws89 2.6.9-89.0.28.ELsmp #1 SMP Fri Aug 20 16:11:39 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND...
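For a single comparable number per host, the RES column of the quoted top lines can be reduced with awk. A sketch over the CentOS 5.5 figures above; on a live box the same lines could be fed in from `top -b -n 1` instead of a here-document:

```shell
# Average the RES column (field 6) of the top output quoted above.
# Sample data: the CentOS 5.5 httpd processes from the post.
avg=$(awk '{ gsub(/m/, "", $6); sum += $6; n++ }
           END { printf "%d httpd procs, avg RES %.1fm", n, sum / n }' <<'EOF'
31734 nobody 15 0 340m 91m 41m S 0.0 2.3 1:13.52 httpd
8904 nobody 15 0 341m 89m 39m S 0.0 2.3 0:35.28 httpd
7353 nobody 15 0 336m 87m 42m S 0.0 2.2 1:21.17 httpd
26097 nobody 15 0 333m 87m 43m S 0.0 2.2 1:28.84 httpd
20765 nobody 15 0 335m 86m 42m S 0.0 2.2 0:48.50 httpd
23299 nobody 15 0 334m 86m 42m S 0.0 2.2 1:13.35 httpd
EOF
)
echo "$avg"   # -> 6 httpd procs, avg RES 87.7m
```

Running the same one-liner on the 4.8 host gives a like-for-like average, which is easier to compare than eyeballing two process tables.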
2018 Apr 27
3
Size of produced binaries when compiling llvm & clang sources
...g-offload-bundler
122M llvm-mca
121M verify-uselistorder
121M llvm-cat
120M llvm-as
117M llvm-special-case-list-fuzzer
117M llvm-demangle-fuzzer
116M llvm-modextract
114M obj2yaml
112M llvm-xray
105M sanstats
105M llvm-symbolizer
96M llvm-readobj
93M llvm-cov
90M lli-child-target
86M llvm-cxxdump
85M llvm-objcopy
83M llvm-cvtres
82M llvm-size
76M clang-apply-replacements
75M clang-format
64M llvm-diff
59M llvm-dis
53M llvm-stress
50M llvm-tblgen
48M llvm-profdata
31M yaml2obj
17M clang-tblgen
- regards,
Manuel
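Listings like the one above are straightforward to regenerate with du. A sketch; the demo uses a temp dir with two dummy files standing in for real build outputs, and the `build/bin/*` path in the comment is an assumption about the build layout:

```shell
# Sort binaries by on-disk size, largest first (sketch with dummy files).
bindir=$(mktemp -d)
dd if=/dev/zero of="$bindir/llvm-as" bs=1024 count=300 2>/dev/null
dd if=/dev/zero of="$bindir/clang-tblgen" bs=1024 count=100 2>/dev/null
listing=$(du -k "$bindir"/* | sort -rn)   # on a real tree: du -k build/bin/* | sort -rn
echo "$listing"
rm -rf "$bindir"
```

`du -k` keeps the sizes numeric so `sort -rn` orders them correctly; `du -h | sort -rh` gives the human-readable form shown in the post.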
2018 Apr 27
0
Size of produced binaries when compiling llvm & clang sources
...121M llvm-cat
> 120M llvm-as
> 117M llvm-special-case-list-fuzzer
> 117M llvm-demangle-fuzzer
> 116M llvm-modextract
> 114M obj2yaml
> 112M llvm-xray
> 105M sanstats
> 105M llvm-symbolizer
> 96M llvm-readobj
> 93M llvm-cov
> 90M lli-child-target
> 86M llvm-cxxdump
> 85M llvm-objcopy
> 83M llvm-cvtres
> 82M llvm-size
> 76M clang-apply-replacements
> 75M clang-format
> 64M llvm-diff
> 59M llvm-dis
> 53M llvm-stress
> 50M llvm-tblgen
> 48M llvm-profdata
> 31M yaml2obj
> 17M clang-tblgen
&g...
2003 Aug 20
0
my file transfers are incredibly slow
...ome info from top as I was transferring a large file:
last pid: 39875; load averages: 0.00, 0.00, 0.00 up 10+10:22:57 01:38:14
22 processes: 2 running, 20 sleeping
CPU states: 0.0% user, 0.0% nice, 0.4% system, 1.2% interrupt, 98.4% idle
Mem: 17M Active, 546M Inact, 150M Wired, 36M Cache, 86M Buf, 1328K Free
Swap: 384M Total, 384M Free
$ ping 172.16.16.1
PING 172.16.16.1 (172.16.16.1): 56 data bytes
64 bytes from 172.16.16.1: icmp_seq=0 ttl=64 time=0.760 ms
64 bytes from 172.16.16.1: icmp_seq=1 ttl=64 time=0.047 ms
64 bytes from 172.16.16.1: icmp_seq=2 ttl=64 time=0.070 ms
64 bytes f...
2006 Oct 03
1
Samba 3.0.23c memory usage increased ten fold to over 70Mb / smbd process
...00:10 0.1% smbd/1
13551 root 77M 73M sleep 59 0 0:00:18 0.1% smbd/1
19888 root 77M 73M sleep 59 0 0:00:10 0.1% smbd/1
29251 root 77M 73M sleep 59 0 0:00:13 0.1% smbd/1
20490 root 78M 73M sleep 59 0 0:00:19 0.1% smbd/1
1311 root 86M 81M sleep 59 0 0:05:58 0.1% smbd/1
7095 root 77M 70M sleep 59 0 0:00:00 0.0% smbd/1
1969 root 77M 73M sleep 59 0 0:00:02 0.0% smbd/1
10797 root 84M 79M sleep 59 0 0:06:06 0.0% smbd/1
7638 root 74M 49M sleep 59 0 0:00:00 0.0...
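The aggregate footprint of the quoted prstat lines can be tallied the same way (a sketch over the rows quoted in full above, RSS in field 4):

```shell
# Sum the RSS column (field 4) of the prstat lines quoted above.
rss=$(awk '{ gsub(/M/, "", $4); sum += $4 }
           END { print sum "M resident over " NR " smbd processes" }' <<'EOF'
13551 root 77M 73M sleep 59 0 0:00:18 0.1% smbd/1
19888 root 77M 73M sleep 59 0 0:00:10 0.1% smbd/1
29251 root 77M 73M sleep 59 0 0:00:13 0.1% smbd/1
20490 root 78M 73M sleep 59 0 0:00:19 0.1% smbd/1
1311 root 86M 81M sleep 59 0 0:05:58 0.1% smbd/1
7095 root 77M 70M sleep 59 0 0:00:00 0.0% smbd/1
1969 root 77M 73M sleep 59 0 0:00:02 0.0% smbd/1
10797 root 84M 79M sleep 59 0 0:06:06 0.0% smbd/1
7638 root 74M 49M sleep 59 0 0:00:00 0.0% smbd/1
EOF
)
echo "$rss"
```

Note that RSS double-counts pages shared between the smbd processes, so the true combined footprint is lower than this sum.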
2007 Apr 01
8
zfs destroy <snapshot> takes hours
Hello,
I am having a problem destroying zfs snapshots. The machine has been almost unresponsive for more than 4 hours since I started the command, and I can't run anything else during that time -
I get (bash): fork: Resource temporarily unavailable - errors.
The machine is still responding somewhat, but very, very slow.
It is: P4, 2.4 GHz with 512 MB RAM, 8 x 750 GB disks as raidZ,
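"fork: Resource temporarily unavailable" usually means the per-user process limit, memory, or swap is exhausted; on a 512 MB box running a heavy zfs destroy, memory pressure is the likely culprit. A quick sketch for checking the shell's process limit (the exact limit name varies by shell and OS):

```shell
# Report the per-user process limit as seen by this shell (sketch).
procs=$(ulimit -u 2>/dev/null || echo unknown)
echo "max user processes: $procs"
```

If the limit is generous, the next place to look is free memory and swap while the destroy is running.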
2005 Apr 02
4
mkbootdisk!
...ror: No space left on device
cat: write error: No space left on device
20+0 records in
20+0 records out
I have enough space on the disk:
[root@server:~]$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/md5 17G 1.8G 14G 12% /
/dev/md1 99M 8.4M 86M 9% /boot
none 126M 0 126M 0% /dev/shm
/dev/md3 9.7G 2.7G 6.5G 30% /home
/dev/md0 981M 26M 905M 3% /usr/local/salva
/dev/md4 4.9G 262M 4.4G 6% /var/log
/dev/md2 981M 24M 908M 3% /var/spool
regards,
Isra...
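The quoted df output can be checked programmatically for anything near capacity (a sketch over the quoted data; on the machine itself, `df -P` could be piped in directly). An empty result supports the poster's point that none of these mounts is out of space, which suggests the error comes from mkbootdisk's write target (an assumption, since the exact command isn't shown: typically the floppy device) rather than from these filesystems:

```shell
# Filter the df output quoted above for filesystems at or above 90% use (sketch).
full=$(awk 'NR > 1 { sub(/%/, "", $5); if ($5 + 0 >= 90) print $6 " is " $5 "% full" }' <<'EOF'
Filesystem Size Used Avail Use% Mounted on
/dev/md5 17G 1.8G 14G 12% /
/dev/md1 99M 8.4M 86M 9% /boot
none 126M 0 126M 0% /dev/shm
/dev/md3 9.7G 2.7G 6.5G 30% /home
/dev/md0 981M 26M 905M 3% /usr/local/salva
/dev/md4 4.9G 262M 4.4G 6% /var/log
/dev/md2 981M 24M 908M 3% /var/spool
EOF
)
echo "full filesystems: ${full:-none}"
```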
2015 Jun 24
6
LVM hatred, was Re: /boot on a separate partition?
On 06/23/2015 08:10 PM, Marko Vojinovic wrote:
> Ok, you made me curious. Just how dramatic can it be? From where I'm
> sitting, a read/write to a disk takes the amount of time it takes, the
> hardware has a certain physical speed, regardless of the presence of
> LVM. What am I missing?
Well, there are best- and worst-case scenarios. Best case for file-backed
VMs is
2005 Jan 11
2
dnat problem
...0 DNAT tcp -- * * 0.0.0.0/0 193.205.140.6 tcp dpt:443 to:10.2.15.23
0 0 DNAT tcp -- * * 0.0.0.0/0 193.205.140.6 multiport dports 3389,4330 to:10.2.15.25
Mangle Table
Chain PREROUTING (policy ACCEPT 221K packets, 86M bytes)
pkts bytes target prot opt in out source destination
837 164K pretos all -- * * 0.0.0.0/0 0.0.0.0/0
Chain INPUT (policy ACCEPT 173K packets, 67M bytes)
pkts bytes target prot opt in out source...
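The DNAT rules quoted above correspond to commands of roughly this shape. A sketch reconstructed from the counters listing (addresses and ports are copied from the post; match options beyond what the listing shows are not assumed):

```
# nat-table rules matching the listing above (sketch; needs root to apply)
iptables -t nat -A PREROUTING -p tcp -d 193.205.140.6 --dport 443 \
         -j DNAT --to-destination 10.2.15.23
iptables -t nat -A PREROUTING -p tcp -d 193.205.140.6 \
         -m multiport --dports 3389,4330 -j DNAT --to-destination 10.2.15.25
```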
2019 Apr 30
6
Disk space and RAM requirements in docs
...VMLanaiCodeGen.dir
93M build/lib/Target/Lanai/CMakeFiles
93M build/lib/MC
91M build/lib/Target/XCore/CMakeFiles/LLVMXCoreCodeGen.dir
91M build/lib/Target/XCore/CMakeFiles
89M build/tools/llvm-exegesis
87M build/lib/Target/BPF/CMakeFiles/LLVMBPFCodeGen.dir
87M build/lib/Target/BPF/CMakeFiles
86M build/tools/clang/lib/Index/CMakeFiles/clangIndex.dir
86M build/tools/clang/lib/Index/CMakeFiles
86M build/tools/clang/lib/Index
85M build/lib/Target/MSP430
84M build/lib/Target/Sparc/CMakeFiles/LLVMSparcCodeGen.dir
84M build/lib/Target/Sparc/CMakeFiles
81M build/tools/llvm-exegesis/lib
81M...