Displaying 20 results from an estimated 27 matches for "224k".
2009 Feb 06
1
Deciphering top's data
...ys, 5:53, 2 users, load average: 3.00, 2.95, 2.52
Tasks: 121 total, 1 running, 120 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.0% us, 0.3% sy, 0.0% ni, 99.7% id, 0.0% wa, 0.0% hi, 0.0% si
Mem: 1928300k total, 1911640k used, 16660k free, 10760k buffers
Swap: 2031608k total, 224k used, 2031384k free, 1561196k cached
top - 01:20:22 up 146 days, 5:54, 2 users, load average: 3.06, 2.97, 2.54
Tasks: 121 total, 1 running, 120 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.0% us, 0.2% sy, 0.0% ni, 99.8% id, 0.0% wa, 0.0% hi, 0.0% si
Mem: 1928300k total, 1911192k u...
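In the Mem/Swap lines above, the "free" figure understates usable memory: buffers and cached pages are reclaimable, so they effectively count as available. A quick sketch of the arithmetic, plugging in the values from the first sample (variable names are just for illustration):

```shell
# Values (in kB) taken from the first Mem/Swap sample above.
total=1928300
free=16660
buffers=10760
cached=1561196

# Memory the kernel can hand back to applications on demand:
usable=$((free + buffers + cached))
echo "usable: ${usable}k of ${total}k"   # → usable: 1588616k of 1928300k
```

With only 224k of swap in use and ~1.5GB of page cache, the box is not actually short of memory.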
2016 Jul 08
3
llvm 3.8.1 Release
...m "Tom Stellard <tom at stellard.net>" [unknown]
$ xz -t distfiles/llvm-3.8.1.src.tar.xz
xz: distfiles/llvm-3.8.1.src.tar.xz: Unexpected end of input
For the "bad" files, sizes are much smaller than their v3.8.0 counterparts:
$ du -sh llvm-3.8.*
16M llvm-3.8.0.src.tar.xz
224K llvm-3.8.1.src.tar.xz
$ du -sh lldb-3.8.*
11M lldb-3.8.0.src.tar.xz
12K lldb-3.8.1.src.tar.xz
Perhaps the files didn't finish uploading?
Cheers,
> -Tom
>
> On Wed, Jul 06, 2016 at 03:49:39PM +0000, Del Myers via llvm-dev wrote:
> > We just wanted to check to see if they are...
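`xz -t` is the right check here: it decompresses the stream without writing output and exits non-zero on a truncated file, matching the "Unexpected end of input" error above. A self-contained sketch using scratch files (not the real release tarballs):

```shell
# Demonstrate how `xz -t` detects a truncated archive, as seen with
# the 224K llvm-3.8.1.src.tar.xz above.
printf 'sample data\n' | xz > good.tar.xz
head -c 16 good.tar.xz > truncated.tar.xz   # simulate an incomplete upload

for f in good.tar.xz truncated.tar.xz; do
    if xz -t "$f" 2>/dev/null; then
        echo "$f: OK"
    else
        echo "$f: corrupt or truncated"
    fi
done
# → good.tar.xz: OK
# → truncated.tar.xz: corrupt or truncated
```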
2011 Nov 09
3
Data distribution not even between vdevs
...90008528890000041C490FAFA0d0          -      -     3    10   215K   119K
c3t6002219000854867000003C0490FB27Dd0    -      -     3    10   214K   119K
raidz1                                4.64T  1.67T    8    32  24.6K   581K
c3t6002219000854867000003C2490FB2BFd0    -      -     3    10   224K  98.2K
c3t60022190008528890000041F490FAFD0d0    -      -     3    10   222K  98.2K
c3t600221900085288900000428490FB0D8d0    -      -     3    10   222K  98.2K
c3t600221900085288900000422490FB02Cd0    -      -     3    10   223K  98.3K
c3t600221900085288900000425490FB07Cd0...
2011 Sep 01
1
No buffer space available - loses network connectivity
...32 10 128K ip_dst_cache
308 227 73% 0.50K 44 7 176K skbuff_fclone_cache
258 247 95% 0.62K 43 6 172K sock_inode_cache
254 254 100% 1.84K 127 2 508K task_struct
252 225 89% 0.81K 28 9 224K signal_cache
240 203 84% 0.73K 48 5 192K shmem_inode_cache
204 204 100% 2.06K 68 3 544K sighand_cache
202 4 1% 0.02K 1 202 4K revoke_table
195 194 99% 0.75K 39 5 156K UDP
159 77 48...
2011 May 24
0
kvm guest ram
Hi,
On a 4gb 64 bit centos kvm guest per libvirt
<memory>4194304</memory>
<currentMemory>4194304</currentMemory>
but in dmesg of the guest I find
Memory: 4039712k/5242880k available (2592k kernel code, 154076k
reserved, 1653k data, 224k init)
Why is dmesg reporting 5g of RAM? Or am I reading it wrong?
How much memory is necessary on the KVM host for this "4g" machine in
the worst case (I try not to overcommit for now)? I want to avoid the
situation where this "4g" machine requires "5g".
A...
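For what it's worth, the second number in that dmesg line is likely the highest physical address the kernel mapped, not the configured RAM size: with a PCI memory hole below 4GB, part of a 4GB guest's RAM can be remapped above it, so the mapped range reaches 5GB. The guest's actually usable total is easier to read from /proc/meminfo (a minimal check, run inside the guest):

```shell
# Inside the guest: MemTotal is the RAM actually available to the
# system after the kernel reserves its own code and data.
grep MemTotal /proc/meminfo
```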
2011 May 13
0
Xapian Index 253 million documents = 704G
...n 84G 2011-05-13 02:26 postlist.DB
-rw-r--r-- 1 kevin kevin 14 2011-05-13 02:26 record.baseA
-rw-r--r-- 1 kevin kevin 301K 2011-05-13 03:02 record.baseB
-rw-r--r-- 1 kevin kevin 151G 2011-05-13 03:02 record.DB
-rw-r--r-- 1 kevin kevin 14 2011-05-13 03:02 termlist.baseA
-rw-r--r-- 1 kevin kevin 224K 2011-05-13 03:28 termlist.baseB
-rw-r--r-- 1 kevin kevin 112G 2011-05-13 03:28 termlist.DB
Thanks,
Kevin Duraj
http://myhealthcare.com
2001 Jun 12
1
Mounting / as ext3 when ext3 is modularized in 2.4
...: 300k freed
VFS: Mounted root (ext2 filesystem).
Loading jbd module
Journalled Block Device driver loaded
Loading ext3 module
VFS: Mounted root (ext2 filesystem) readonly. <--- ?? not ext3 ?
change_root: old root has d_count=2
Trying to unmount old root ... okay
Freeing unused kernel memory: 224k freed
Adding Swap: 385520k swap-space (priority -1)
--
Fab
2012 Jul 18
1
About GlusterFS
...orking. But when I check "df -h",
the error is:
Filesystem Size Used Avail Use% Mounted on
/dev/sda6 290G 83G 193G 30% /
none 984M 248K 983M 1% /dev
none 988M 180K 988M 1% /dev/shm
none 988M 224K 988M 1% /var/run
none 988M 0 988M 0% /var/lock
none 988M 0 988M 0% /lib/init/rw
none 290G 83G 193G 30% /var/lib/ureadahead/debugfs
/dev/sda1 184M 22M 154M 13% /boot
*df: `/mnt/glusterfs': Transport endpoint...
2011 Sep 01
0
No buffer space available - loses network connectivity
...32 10 128K ip_dst_cache
308 227 73% 0.50K 44 7 176K skbuff_fclone_cache
258 247 95% 0.62K 43 6 172K sock_inode_cache
254 254 100% 1.84K 127 2 508K task_struct
252 225 89% 0.81K 28 9 224K signal_cache
240 203 84% 0.73K 48 5 192K shmem_inode_cache
204 204 100% 2.06K 68 3 544K sighand_cache
202 4 1% 0.02K 1 202 4K revoke_table
195 194 99% 0.75K 39 5 156K UDP
159 77 48%...
2010 Oct 16
0
RHEL 6 /etc/inittab misconfigured
...tf_cmos: registered platform RTC device (no PNP device found)
Event-channel device installed
i8042.c: No controller found.
blkfront: xvda: barriers enabled
xvda: xvda1
*XENBUS: Device with no driver: device/console/0
*VFS: Mounted root (ext2 filesystem) on device 202:1
Freeing unused kernel memory: 224k freed
Warning: unable to open an initial console.
Here is my .config for domU:
CONFIG_64BIT=y
CONFIG_X86_64=y
CONFIG_X86=y
CONFIG_OUTPUT_FORMAT="elf64-x86-64"
CONFIG_ARCH_DEFCONFIG="arch/x86/configs/x86_64_defconfig"
CONFIG_GENERIC_TIME=y
CONFIG_GENERIC_CMOS_UPDATE=y
CONFIG_CL...
2011 May 13
27
Extremely slow zpool scrub performance
Running a zpool scrub on our production pool is showing a scrub rate
of about 400K/s. (When this pool was first set up we saw rates in the
MB/s range during a scrub).
Both zpool iostat and an iostat -Xn show lots of idle disk times, no
above average service times, no abnormally high busy percentages.
Load on the box is 0.59.
8 x 3GHz CPUs, 32GB RAM, 96 spindles arranged into raidz vdevs on OI 147.
2010 Mar 02
3
Very unresponsive, sometimes stalling domU (5.4, x86_64)
...0 0| 336k 392k| 126B 178B| 0 0 | 105 115
0 0 39 61 0 0| 152k 3504k| 126B 178B| 0 0 | 199 63
0 0 49 51 0 0| 40k 992k| 186B 178B| 0 0 | 122 40
0 0 56 44 0 0| 0 216k| 186B 178B| 0 0 | 73 39
0 0 42 58 0 0| 0 224k| 66B 178B| 0 0 | 69 30
- ----total-cpu-usage---- -dsk/total- -net/total- ---paging-- ---system--
usr sys idl wai hiq siq| read writ| recv send| in out | int csw
0 0 50 50 0 0| 0 216k| 66B 178B| 0 0 | 89 36
0 0 51 50 0 0| 0 272k| 126B 322B|...
2007 Sep 14
5
ZFS Space Map optimalization
I have a huge problem with space maps on a Thumper. Space maps take up over 3GB,
and write operations generate massive read operations.
Before every spa sync phase, ZFS reads the space maps from disk.
I decided to turn on compression for the pool (only for the pool, not the filesystems), and it helps.
Now space maps, the intent log, and spa history are compressed.
Now I'm thinking about disabling checksums. All
2014 Sep 28
2
Re: Why libguestfs guest exist exceptionally?
...: 11, 16384 bytes)
Console: colour *CGA 80x25
Dentry cache hash table entries: 65536 (order: 7, 524288 bytes)
Inode-cache hash table entries: 32768 (order: 6, 262144 bytes)
Checking aperture...
ACPI: DMAR not present
Memory: 494496k/511992k available (2603k kernel code, 17044k reserved,
1660k data, 224k init)
Calibrating delay loop (skipped), value calculated using timer
frequency.. 4805.60 BogoMIPS (lpj=2402800)
Security Framework v1.0.0 initialized
SELinux: Disabled at boot.
Capability LSM initialized
Mount-cache hash table entries: 256
CPU: L1 I Cache: 64K (64 bytes/line), D cache 64K (64 byte...
2014 Sep 28
2
Why libguestfs guest exist exceptionally?
...: 11, 16384 bytes)
Console: colour *CGA 80x25
Dentry cache hash table entries: 65536 (order: 7, 524288 bytes)
Inode-cache hash table entries: 32768 (order: 6, 262144 bytes)
Checking aperture...
ACPI: DMAR not present
Memory: 494496k/511992k available (2603k kernel code, 17044k reserved,
1660k data, 224k init)
Calibrating delay loop (skipped), value calculated using timer
frequency.. 4806.63 BogoMIPS (lpj=2403317)
Security Framework v1.0.0 initialized
SELinux: Disabled at boot.
Capability LSM initialized
Mount-cache hash table entries: 256
CPU: L1 I Cache: 64K (64 bytes/line), D cache 64K (64 byte...
2014 Sep 28
0
Re: Why libguestfs guest exist exceptionally?
...lour *CGA 80x25
> Dentry cache hash table entries: 65536 (order: 7, 524288 bytes)
> Inode-cache hash table entries: 32768 (order: 6, 262144 bytes)
> Checking aperture...
> ACPI: DMAR not present
> Memory: 494496k/511992k available (2603k kernel code, 17044k reserved,
> 1660k data, 224k init)
> Calibrating delay loop (skipped), value calculated using timer
> frequency.. 4805.60 BogoMIPS (lpj=2402800)
> Security Framework v1.0.0 initialized
> SELinux: Disabled at boot.
> Capability LSM initialized
> Mount-cache hash table entries: 256
> CPU: L1 I Cache: 64K (6...
2014 Sep 28
2
Re: Why libguestfs guest exist exceptionally?
...Dentry cache hash table entries: 65536 (order: 7, 524288 bytes)
>> Inode-cache hash table entries: 32768 (order: 6, 262144 bytes)
>> Checking aperture...
>> ACPI: DMAR not present
>> Memory: 494496k/511992k available (2603k kernel code, 17044k reserved,
>> 1660k data, 224k init)
>> Calibrating delay loop (skipped), value calculated using timer
>> frequency.. 4805.60 BogoMIPS (lpj=2402800)
>> Security Framework v1.0.0 initialized
>> SELinux: Disabled at boot.
>> Capability LSM initialized
>> Mount-cache hash table entries: 256
>...
2014 Sep 28
2
Re: Why libguestfs guest exist exceptionally?
On Sun, Sep 28, 2014 at 6:26 PM, Richard W.M. Jones <rjones@redhat.com> wrote:
> On Sun, Sep 28, 2014 at 04:30:37PM +0800, Zhi Yong Wu wrote:
>> HI,
>>
>> On a RHEL5 box, i tried to directly run guest which was issued by
>> libguestfs virt-xxx commands as below. But after some minutes, it
>> exited exceptionally.
>>
>> Does anyone also hit the
2016 Jul 06
2
llvm 3.8.1 Release
We just wanted to check whether they are real bugs first. We will report them if they are not already fixed. Thanks.
Del
From: Renato Golin [mailto:renato.golin at linaro.org]
Sent: Wednesday, July 6, 2016 12:07 AM
To: Del Myers <delmyers at microsoft.com>
Cc: LLVM Dev <llvm-dev at lists.llvm.org>
Subject: RE: [llvm-dev] llvm 3.8.1 Release
Did you report the bugs? It would be a