Displaying 13 results from an estimated 13 matches for "117m".
2005 Nov 17 (2 replies): zpool iostat question
...t could determine this figure? Do I need to read a manpage? ;-)
Thanks... Sean.
-----
[root@global:/36g2] # zpool iostat 3
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
pool1        117M  8.26G      0      1    696  74.6K
pool1        117M  8.26G      0      0      0      0
pool1        117M  8.26G      0      0      0      0
pool1        117M  8.26G      0      0      0      0
pool1        117M  8.26G      0      0      0      0
pool1        117M  8.26G      0      0      0      0...
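
The question is truncated above, but if the figure being asked about is the pool's used/available space, it can be read without the endless loop; a minimal sketch using the pool name from the output above:

[root@global:/36g2] # zpool list pool1
[root@global:/36g2] # zpool iostat pool1 3 5

zpool list prints a one-shot size/used/avail/capacity summary, and giving iostat a count (here five samples at 3-second intervals) makes it exit on its own.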
2018 Apr 27 (3 replies): Size of produced binaries when compiling llvm & clang sources
...llvm-cfi-verify
170M llvm-objdump
168M sancov
158M llvm-rtdyld
149M llvm-ar
148M llvm-nm
145M llvm-extract
145M llvm-link
142M llvm-dwarfdump
141M llvm-split
131M llvm-mc
127M llvm-pdbutil
126M clang-offload-bundler
122M llvm-mca
121M verify-uselistorder
121M llvm-cat
120M llvm-as
117M llvm-special-case-list-fuzzer
117M llvm-demangle-fuzzer
116M llvm-modextract
114M obj2yaml
112M llvm-xray
105M sanstats
105M llvm-symbolizer
96M llvm-readobj
93M llvm-cov
90M lli-child-target
86M llvm-cxxdump
85M llvm-objcopy
83M llvm-cvtres
82M llvm-size
76M clang-apply-repla...
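
Sizes like these usually indicate a static build that still carries debug info; a hedged sketch of CMake options that typically shrink the tools dramatically (an assumption about the poster's configuration, not a conclusion drawn from this thread):

cmake -G Ninja ../llvm \
    -DCMAKE_BUILD_TYPE=Release \
    -DLLVM_BUILD_LLVM_DYLIB=ON \
    -DLLVM_LINK_LLVM_DYLIB=ON

Release drops the debug info, and linking the tools against a single libLLVM shared library stops every binary from statically duplicating the LLVM libraries.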
2018 Apr 27 (0 replies): Size of produced binaries when compiling llvm & clang sources
...> 149M llvm-ar
> 148M llvm-nm
> 145M llvm-extract
> 145M llvm-link
> 142M llvm-dwarfdump
> 141M llvm-split
> 131M llvm-mc
> 127M llvm-pdbutil
> 126M clang-offload-bundler
> 122M llvm-mca
> 121M verify-uselistorder
> 121M llvm-cat
> 120M llvm-as
> 117M llvm-special-case-list-fuzzer
> 117M llvm-demangle-fuzzer
> 116M llvm-modextract
> 114M obj2yaml
> 112M llvm-xray
> 105M sanstats
> 105M llvm-symbolizer
> 96M llvm-readobj
> 93M llvm-cov
> 90M lli-child-target
> 86M llvm-cxxdump
> 85M llvm-objcopy
> ...
2016 Aug 11 (5 replies): Software RAID and GRUB on CentOS 7
Hi,
When I perform a software RAID 1 or RAID 5 installation on a LAN server
with several hard disks, I wonder if GRUB already gets installed on each
individual MBR, or if I have to do that manually. On CentOS 5.x and 6.x,
this had to be done like this:
# grub
grub> device (hd0) /dev/sda
grub> device (hd1) /dev/sdb
grub> root (hd0,0)
grub> setup (hd0)
grub> root (hd1,0)
grub> setup (hd1)
grub> quit
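
On CentOS 7 the bootloader is GRUB2 and the interactive grub shell shown above no longer exists; a sketch of the usual equivalent, assuming BIOS/MBR booting and two member disks:

# grub2-install /dev/sda
# grub2-install /dev/sdb

Each invocation writes the GRUB2 boot code into that disk's MBR, so the machine can still boot if either drive fails.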
2016 Aug 11 (0 replies): Software RAID and GRUB on CentOS 7
...remember
where, sorry. $0.02, no more, no less ....
[root@Q6600:/etc, Thu Aug 11, 08:25 AM] 1018 # df -h
Filesystem  Type    Size  Used  Avail  Use%  Mounted on
/dev/md1    ext4    917G  8.0G   863G    1%  /
tmpfs       tmpfs   4.0G     0   4.0G    0%  /dev/shm
/dev/md0    ext4    186M   60M   117M   34%  /boot
/dev/md3    ext4    1.8T  1.4T   333G   81%  /home
[root@Q6600:/etc, Thu Aug 11, 08:26 AM] 1019 # uname -a
Linux Q6600 2.6.35.14-106.fc14.x86_64 #1 SMP Wed Nov 23 13:07:52 UTC
2011 x86_64 x86_64 x86_64 GNU/Linux
[root@Q6600:/etc, Thu Aug 11, 08:26 AM] 1020 #
--
William A. Maha...
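
As an aside, the health of md mirrors like those above can be confirmed with the standard mdadm tooling (generic commands, not taken from the original message):

# cat /proc/mdstat
# mdadm --detail /dev/md0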
2013 Jun 13 (4 replies): puppet: 3.1.1 -> 3.2.1 load increase
Hi,
I recently updated from puppet 3.1.1 to 3.2.1 and noticed quite a bit of
increased load on the puppetmaster machine. I'm using
the Apache/passenger/rack way of puppetmastering.
Main symptom is: higher load on puppetmaster machine (8 cores):
- 3.1.1: around 4
- 3.2.1: around 9-10
Any idea why there's more load on the machine with 3.2.1?
2008 Oct 18 (2 replies): pre-built images
Tru has already been building vmware images
(http://people.centos.org/tru/vmware/) for various roles.
I was wondering if there was any interest in prebuilt Xen images as well?
And if so, what would be the roles people would want images done for,
and what might be suitable package sets to include for those roles.
--
Karanbir Singh : http://www.karan.org/ : 2522219 at icq
2017 May 02 (2 replies): samba process use 100% cpu
...                                            3.0  0.2   0:00.09
(squidGuard) -c /etc/squid/squidguard.conf
25508 squid     20   0  39840  35m 1036 S     3.0  0.2   0:00.09
(squidGuard) -c /etc/squid/squidguard.conf
25511 squid     20   0  39836  35m 1036 S     2.6  0.2   0:00.08
(squidGuard) -c /etc/squid/squidguard.conf
 7123 root      20   0   117m  26m 4004 S     1.7  0.2  11:48.54
/usr/bin/python /usr/sbin/presenced -d
24798 3000039   20   0   143m  23m 6204 S     1.0  0.2   0:01.03
/usr/local/samba4/sbin/smbd -D --option=server role check:inhibit=yes
--foreground
25408 root      20   0   9760 1704 1072 R     1.0  0.0   0:00.23  top -c
 2565 root...
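
When a single smbd pins a core like this, two common first diagnostics (a generic sketch, not advice given in this thread; PID 24798 is the smbd from the listing above) are to watch its system calls and to raise its log level:

# strace -f -p 24798
# smbcontrol 24798 debug 10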
2009 Jun 04 (6 replies): CPU usage over estimated?
I have a quad-core CPU running CentOS 5.
When I use top, I see that running processes use 245% instead of 100%.
If I use gkrellm, I just see one core being used 100%.
top:
  PID USER  PR  NI  VIRT  RES  SWAP  SHR  S   %CPU  %MEM     TIME+  COMMAND
18037 thba  31  15  304m  242m  62m   44m  R  245.3   4.1  148:58.72  ic
Also in the log of some programs I see this strange factor:
CPU Seconds = 2632
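
A likely explanation, for context: in top's default Irix mode a process's %CPU is summed across all cores, so one multi-threaded process on a quad core can legitimately show up to 400%, while gkrellm reports each core separately. Pressing Shift+I inside top toggles Solaris mode, which divides by the core count:

245.3% summed over 4 cores = 245.3 / 4 ≈ 61.3% of the whole machine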
2017 May 09 (2 replies): samba process use 100% cpu
...quidGuard) -c /etc/squid/squidguard.conf
> 25508 squid     20   0  39840  35m 1036 S    3.0  0.2   0:00.09
> (squidGuard) -c /etc/squid/squidguard.conf
> 25511 squid     20   0  39836  35m 1036 S    2.6  0.2   0:00.08
> (squidGuard) -c /etc/squid/squidguard.conf
>  7123 root      20   0   117m  26m 4004 S    1.7  0.2  11:48.54
> /usr/bin/python /usr/sbin/presenced -d
> 24798 3000039   20   0   143m  23m 6204 S    1.0  0.2   0:01.03
> /usr/local/samba4/sbin/smbd -D --option=server role check:inhibit=yes
> --foreground
> 25408 root      20   0   9760 1704 1072 R    1.0  0.0   0:0...
2017 May 09 (0 replies): samba process use 100% cpu
...quidGuard) -c /etc/squid/squidguard.conf
> 25508 squid     20   0  39840  35m 1036 S    3.0  0.2   0:00.09
> (squidGuard) -c /etc/squid/squidguard.conf
> 25511 squid     20   0  39836  35m 1036 S    2.6  0.2   0:00.08
> (squidGuard) -c /etc/squid/squidguard.conf
>  7123 root      20   0   117m  26m 4004 S    1.7  0.2  11:48.54
> /usr/bin/python /usr/sbin/presenced -d
> 24798 3000039   20   0   143m  23m 6204 S    1.0  0.2   0:01.03
> /usr/local/samba4/sbin/smbd -D --option=server role check:inhibit=yes
> --foreground
> 25408 root      20   0   9760 1704 1072 R    1.0  0.0   0:0...
2017 May 09 (0 replies): samba process use 100% cpu
...squid/squidguard.conf
>> 25508 squid     20   0  39840  35m 1036 S    3.0  0.2   0:00.09
>> (squidGuard) -c /etc/squid/squidguard.conf
>> 25511 squid     20   0  39836  35m 1036 S    2.6  0.2   0:00.08
>> (squidGuard) -c /etc/squid/squidguard.conf
>>  7123 root      20   0   117m  26m 4004 S    1.7  0.2  11:48.54
>> /usr/bin/python /usr/sbin/presenced -d
>> 24798 3000039   20   0   143m  23m 6204 S    1.0  0.2   0:01.03
>> /usr/local/samba4/sbin/smbd -D --option=server role check:inhibit=yes
>> --foreground
>> 25408 root      20   0   9760 1704 1072...
2004 Dec 30 (0 replies): Multiple IPs in one Zone
...=48 TOS=0x00 PREC=0x00 TTL=117 ID=52760 DF PROTO=TCP
SPT=4043 DPT=1433 WINDOW=65535 RES=0x00 SYN URGP=0
NAT Table

Chain PREROUTING (policy ACCEPT 2221K packets, 115M bytes)
 pkts bytes target     prot opt in  out   source     destination

Chain POSTROUTING (policy ACCEPT 2530K packets, 117M bytes)
 pkts bytes target     prot opt in  out   source     destination
 7200  433K eth1_masq  all  --  *   eth1  0.0.0.0/0  0.0.0.0/0

Chain OUTPUT (policy ACCEPT 630K packets, 51M bytes)
 pkts bytes target     prot opt in  out   source     destination

Chain eth1...
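
The chains above are the kernel's NAT table as Shorewall programmed it; the listing itself comes from plain iptables (the standard command, implied but not shown in the excerpt):

# iptables -t nat -L -n -v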