search for: 131m

Displaying 19 results from an estimated 19 matches for "131m".

2006 Aug 13
2
can ext3 directory entries be overwritten? -- Re: extremely slow "ls" on a cleared fatty ext3 directory on FC4/5
...list the > cleaned /tmp directory > > again -- even now the directory holds only 8 > files total. > > > > so I try to 'ls' the directory itself (not any > files and > > subdirectories on it) and find that its size is > stupidly large (it is > > 131M even after deletion) compared with 4K for > normal directories. > > > > -bash-3.00# ls -alFdh /tmp* drwxrwxrwt 4 root > staff 4.0K Aug 12 > > 23:17 new_tmp/ drwxrwxrwt 4 root staff 131M Aug > 12 20:30 tmp/ > > > > Anyone know why the former fatty direct...
2018 Apr 27
3
Size of produced binaries when compiling llvm & clang sources
...lude-fixer 444M modularize 443M clang-func-mapping 442M clang-diff 441M libToolingExample00 438M pp-trace 434M diagtool 184M llvm-cfi-verify 170M llvm-objdump 168M sancov 158M llvm-rtdyld 149M llvm-ar 148M llvm-nm 145M llvm-extract 145M llvm-link 142M llvm-dwarfdump 141M llvm-split 131M llvm-mc 127M llvm-pdbutil 126M clang-offload-bundler 122M llvm-mca 121M verify-uselistorder 121M llvm-cat 120M llvm-as 117M llvm-special-case-list-fuzzer 117M llvm-demangle-fuzzer 116M llvm-modextract 114M obj2yaml 112M llvm-xray 105M sanstats 105M llvm-symbolizer 96M llvm-readobj...
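Binaries of this size are typical of a default Debug build with every backend enabled. A minimal sketch of a leaner configuration, assuming an out-of-tree build directory; the source path, target list, and ninja targets are illustrative, not taken from the thread:

  # Release build, shared LLVM libraries, only the X86 backend
  cmake -G Ninja /path/to/llvm \
    -DCMAKE_BUILD_TYPE=Release \
    -DBUILD_SHARED_LIBS=ON \
    -DLLVM_TARGETS_TO_BUILD=X86 \
    -DLLVM_ENABLE_ASSERTIONS=OFF
  ninja clang llvm-mc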
2006 Aug 13
2
extremely slow "ls" on a cleared fatty ext3 directory on FC4/5
...ut 10 hours. but after a file system sync, it still take me 20 minutes to list the cleaned /tmp directory again -- even now the directory holds only 8 files total. so I try to 'ls' the directory itself (not any files and subdirectories on it) and find that its size is stupidly large (it is 131M even after deletion) compared with 4K for normal directories. -bash-3.00# ls -alFdh /tmp* drwxrwxrwt 4 root staff 4.0K Aug 12 23:17 new_tmp/ drwxrwxrwt 4 root staff 131M Aug 12 20:30 tmp/ Anyone know why the former fatty directory still looks unchanged and takes hours to traverse even after 9...
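ext3 never shrinks a directory file once it has grown, so the 131M directory inode keeps its size, and its scan cost, even after almost all entries are unlinked. A hedged sketch of the two usual workarounds; the paths and device name are placeholders, and renaming a live /tmp needs care:

  # 1. Rebuild the directory so it gets a fresh, small inode
  mkdir /tmp.new && chmod 1777 /tmp.new
  mv /tmp/* /tmp.new/             # the few surviving entries (dotfiles omitted for brevity)
  mv /tmp /tmp.old && mv /tmp.new /tmp
  rm -rf /tmp.old                 # removing the old directory inode frees the 131M
  # 2. Or, with the filesystem unmounted, re-pack directories offline
  e2fsck -fD /dev/sdXN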
2018 Apr 27
0
Size of produced binaries when compiling llvm & clang sources
...> 441M libToolingExample00 > 438M pp-trace > 434M diagtool > 184M llvm-cfi-verify > 170M llvm-objdump > 168M sancov > 158M llvm-rtdyld > 149M llvm-ar > 148M llvm-nm > 145M llvm-extract > 145M llvm-link > 142M llvm-dwarfdump > 141M llvm-split > 131M llvm-mc > 127M llvm-pdbutil > 126M clang-offload-bundler > 122M llvm-mca > 121M verify-uselistorder > 121M llvm-cat > 120M llvm-as > 117M llvm-special-case-list-fuzzer > 117M llvm-demangle-fuzzer > 116M llvm-modextract > 114M obj2yaml > 112M llvm-xray >...
2020 Jul 22
3
samba-tool domain backup offline stalls
...run 'samba-tool domain backup offline targetdir=/tmp' I see this: running backup on dirs: /var/db/samba4/private /var/db/samba4 /usr/local/etc Starting transaction on /var/db/samba4/private/secrets At which point samba-tool enters a permanent wait state. 86064 root 1 52 0 131M 78M wait 3 0:01 0.00% python3.7 Trace shows this: . . . --- modulename: subprocess, funcname: __enter__ subprocess.py(845): return self subprocess.py(340): try: subprocess.py(341): return p.wait(timeout=timeout) --- modulename: subprocess, funcname: wait s...
2008 Jul 05
2
Question on number of processes engendered by BDRb
...107m 3440 S 0 10.5 0:50.61 mongrel_rails 11013 raghus 15 0 179m 103m 3348 S 0 10.1 0:45.18 mongrel_rails * 7084 raghus 15 0 152m 73m 2036 S 11 7.2 116:31.68 packet_worker_r* 11129 raghus 15 0 134m 58m 3336 S 0 5.7 0:05.20 mongrel_rails * 7085 raghus 15 0 131m 53m 2020 S 0 5.2 2:23.61 packet_worker_r* 5094 mysql 15 0 215m 38m 3272 S 0 3.7 44:13.99 mysqld * 7083 raghus 15 0 97.9m 36m 1192 S 0 3.5 2:28.98 packet_worker_r* 7081 raghus 15 0 98.3m 34m 1036 S 0 3.4 3:21.40 ruby 10996 raghus 15 0 55820 12m 134...
2012 Jun 23
3
How to upgrade from 5.8 to 6.2
Good day, I am new to CentOS; could you help me with upgrading from 5.8 to 6.2? Thanks a lot -- Yours truly, Eric Kom, System Administrator - Metropolitan College (fortune signature: "You are scrupulously honest, frank, and straightforward. Therefore you have few friends.")
2008 Jan 30
3
newfs locks entire machine for 20seconds
...1 -8 0 4752K 1256K physrd 1 0:01 19.64% newfs 4 root 1 -8 - 0K 16K - 0 0:00 0.10% g_down 1048 root 1 96 0 7656K 2544K CPU0 0 0:01 0.00% top 1054 root 1 96 0 7656K 2348K CPU1 1 0:01 0.00% top 863 root 1 96 0 131M 15768K select 0 0:00 0.00% httpd 1055 root 1 96 0 32928K 4656K select 0 0:00 0.00% sshd last pid: 1102; load averages: 0.02, 0.08, 0.07 up 0+00:09:37 21:39:13 162 processes: 4 running, 145 sleeping, 13 waiting CPU states: 0.0% us...
2017 May 02
2
samba process use 100% cpu
Hi! I need some help. We use samba4 as AD, and now when clients connect to server, samba process stuck at 100% cpu. samba Version: 4.3.4 Release: 13.el6 top: 3777 root 20 0 131m 46m 28m R 99.7 0.3 219:20.53 /usr/local/samba4//sbin/samba -D 24541 csertam 20 0 49260 11m 9048 S 25.1 0.1 0:01.56 smbd -D 7080 squid 20 0 926m 908m 6428 S 9.9 6.2 11:43.50 (squid-1) -f /etc/squid/squid.conf 25503 squid 20 0 40236 36m 1040 S 3.3 0.2 0:00.10 (squ...
2017 May 09
2
samba process use 100% cpu
...at 10:48, Papp Bence via samba wrote: > Hi! > > > I need some help. > > We use samba4 as AD, and now when clients connect to server, samba > process stuck at 100% cpu. > > samba Version: 4.3.4 Release: 13.el6 > > > top: > > 3777 root 20 0 131m 46m 28m R 99.7 0.3 219:20.53 > /usr/local/samba4//sbin/samba -D > 24541 csertam 20 0 49260 11m 9048 S 25.1 0.1 0:01.56 smbd -D > 7080 squid 20 0 926m 908m 6428 S 9.9 6.2 11:43.50 (squid-1) > -f /etc/squid/squid.conf > 25503 squid 20 0 40236 36m 1040 S...
2017 May 09
0
samba process use 100% cpu
...at 10:48, Papp Bence via samba wrote: > Hi! > > > I need some help. > > We use samba4 as AD, and now when clients connect to server, samba > process stuck at 100% cpu. > > samba Version: 4.3.4 Release: 13.el6 > > > top: > > 3777 root 20 0 131m 46m 28m R 99.7 0.3 219:20.53 > /usr/local/samba4//sbin/samba -D > 24541 csertam 20 0 49260 11m 9048 S 25.1 0.1 0:01.56 smbd -D > 7080 squid 20 0 926m 908m 6428 S 9.9 6.2 11:43.50 (squid-1) > -f /etc/squid/squid.conf > 25503 squid 20 0 40236 36m 1040 S...
2017 May 09
0
samba process use 100% cpu
...Hi! >> >> >> I need some help. >> >> We use samba4 as AD, and now when clients connect to server, samba >> process stuck at 100% cpu. >> >> samba Version: 4.3.4 Release: 13.el6 >> >> >> top: >> >> 3777 root 20 0 131m 46m 28m R 99.7 0.3 219:20.53 >> /usr/local/samba4//sbin/samba -D >> 24541 csertam 20 0 49260 11m 9048 S 25.1 0.1 0:01.56 smbd -D >> 7080 squid 20 0 926m 908m 6428 S 9.9 6.2 11:43.50 (squid-1) >> -f /etc/squid/squid.conf >> 25503 squid 20 0...
2013 Jan 26
4
Write failure on distributed volume with free space available
...2 s, 182 MB/s Filesystem Size Used Avail Use% Mounted on 192.168.192.5:/test 291M 145M 147M 50% /mnt/gluster1 1+0 records in 1+0 records out 16777216 bytes (17 MB) copied, 0.0861475 s, 195 MB/s Filesystem Size Used Avail Use% Mounted on 192.168.192.5:/test 291M 161M 131M 56% /mnt/gluster1 dd: writing `16_10': No space left on device dd: closing output file `16_10': No space left on device Filesystem Size Used Avail Use% Mounted on 192.168.192.5:/test 291M 170M 121M 59% /mnt/gluster1 dd: writing `16_11': No space left on device dd: clos...
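In a pure distribute volume each file lands on a single brick chosen by hash, so a write can fail with "No space left on device" while df on the mount still shows aggregate free space. A small check worth running; the brick paths below are examples, the volume name "test" is taken from the post:

  gluster volume info test                 # lists the bricks behind the volume
  df -h /export/brick1 /export/brick2      # free space per brick, not per mount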
2008 May 14
1
Approaching the limit on PV entries, consider increasing either the vm.pmap.shpgperproc or the vm.pmap.pv_entry_max sysctl.
....4 Any ideas about what to check/do next? I only could find a post which suggests using: kern.ipc.shm_use_phys: 1 But I already set it and it has no effect...box has 4gb of memory: CPU: 0.1% user, 0.0% nice, 0.0% system, 0.0% interrupt, 99.9% idle Mem: 180M Active, 1584M Inact, 467M Wired, 131M Cache, 214M Buf, 1578M Free Swap: 8192M Total, 8548K Used, 8184M Free I have the following in make.conf CPUTYPE?=core2 CFLAGS= -O2 -fno-strict-aliasing -pipe CXXFLAGS+= -fconserve-space COPTFLAGS= -O -pipe NO_GAMES=true NO_PROFILE=true WITHOUT_X11=true below is the kernel config file: cpu...
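If the console message is the PV-entry warning from the subject line, the limits can be inspected and raised as loader tunables. A hedged sketch; the values are examples, not recommendations:

  sysctl vm.pmap.shpgperproc vm.pmap.pv_entry_max    # current limits
  # raise them at boot (takes effect after a reboot)
  printf 'vm.pmap.shpgperproc=400\nvm.pmap.pv_entry_max=50331648\n' >> /boot/loader.conf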
2008 Nov 18
3
High system in %system load .
...-------------------------------------- #top last pid: 47964; load averages: 1.26, 1.62, 1.75 up 0+19:17:13 17:11:06 287 processes: 10 running, 277 sleeping CPU states: 2.2% user, 0.0% nice, 28.3% system, 0.2% interrupt, 69.3% idle Mem: 1286M Active, 1729M Inact, 478M Wired, 131M Cache, 214M Buf, 302M Free Swap: 8192M Total, 8192M Free ------------------------------------------------------------------- #vmstat 5 procs memory page disks faults cpu r b w avm fre flt re pi po fr sr ad4 ad6 in sy cs us sy id 1 24 0...
2005 Jan 11
1
Squid and DMZ (ProxyARP)
...0.0.0.0/0 0.0.0.0/0 state NEW 396 79941 pretos all -- * * 0.0.0.0/0 0.0.0.0/0 6 729 MARK tcp -- eth1 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 MARK set 0xca Chain OUTPUT (policy ACCEPT 872K packets, 131M bytes) pkts bytes target prot opt in out source destination 24 1916 outtos all -- * * 0.0.0.0/0 0.0.0.0/0 Chain man1918 (1 references) pkts bytes target prot opt in out source destination...
2012 Sep 29
2
Doubled up RAM to 32 GB - now how to speed up a LAPP server?
...#5 [0002013000 - 000201334e] BRK ==> [0002013000 - 000201334e] #6 [0000010000 - 0000012000] PGTABLE ==> [0000010000 - 0000012000] #7 [0000012000 - 000002f000] PGTABLE ==> [0000012000 - 000002f000] found SMP MP-table at [ffff8800000fcd90] fcd90 Reserving 131MB of memory at 48MB for crashkernel (System RAM: 33790MB) [ffffea0000000000-ffffea00147fffff] PMD -> [ffff88002c600000-ffff88003fffffff] on node 0 [ffffea0014800000-ffffea001cdfffff] PMD -> [ffff880040200000-ffff8800487fffff] on node 0 Zone PFN ranges: DMA 0x00000010 -> 0x00001000...
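The "Reserving 131MB of memory at 48MB for crashkernel" line means roughly 131M of the new RAM is set aside for kdump via the crashkernel= boot parameter. A sketch of the relevant grub.conf kernel line, assuming an EL6-style setup; the kernel path and root device are placeholders:

  kernel /vmlinuz-2.6.32-xxx ro root=/dev/sda1 crashkernel=131M@48M
  # crashkernel=auto lets kdump size the reservation; omit it to reclaim the memory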
2012 Nov 13
1
thread taskq / unp_gc() using 100% cpu and stalling unix socket IPC
...t, 21.5% idle CPU 22: 1.9% user, 0.0% nice, 75.5% system, 0.0% interrupt, 22.6% idle CPU 23: 1.1% user, 0.0% nice, 75.5% system, 0.0% interrupt, 23.4% idle Mem: 688M Active, 1431M Inact, 3064M Wired, 8K Cache, 7488K Buf, 88G Free ARC: 1212M Total, 107M MRU, 965M MFU, 1040K Anon, 8010K Header, 131M Other Swap: 8192M Total, 8192M Free PID USERNAME PRI NICE SIZE RES STATE C TIME WCPU COMMAND 0 root 8 0 0K 3968K CPU12 12 9:49 100.00% kernel{thread taskq} 2873 root 84 0 22584K 4180K CPU1 1 2:53 44.58% relayd 2879 _relayd 84...
2019 Apr 30
6
Disk space and RAM requirements in docs
...ld/lib/Transforms/IPO 149M build/tools/lld/ELF/CMakeFiles/lldELF.dir 149M build/tools/lld/ELF/CMakeFiles 149M build/tools/lld/ELF 147M build/tools/clang/unittests/Format 143M build/lib/DebugInfo 140M build/lib/Target/SystemZ/CMakeFiles/LLVMSystemZCodeGen.dir 140M build/lib/Target/SystemZ/CMakeFiles 131M build/lib/Target/NVPTX 129M build/tools/clang/lib/ASTMatchers 128M build/tools/clang/lib/Tooling/Refactoring/CMakeFiles/clangToolingRefactor.dir 128M build/tools/clang/lib/Tooling/Refactoring/CMakeFiles 128M build/tools/clang/lib/Tooling/Refactoring 126M build/lib/ExecutionEngine 122M build/tools/c...
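A per-directory listing like the one above can be produced directly from the build tree; "build" is assumed to be the CMake build directory:

  du -h build | sort -rh | head -40    # GNU sort -h orders the M/G suffixes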