search for: 136k

Displaying 20 results from an estimated 34 matches for "136k".

2013 Dec 17
2
Setting up a lustre zfs dual mgs/mdt over tcp - help requested
...10.0.0.23@tcp   lustre:failover.node=10.0.0.22@tcp   lustre:sys.timeout=5000   lustre:mgsnode=10.0.0.22@tcp   lustre:mgsnode=10.0.0.23@tcp a few basic sanity checks: # zfs list NAME               USED  AVAIL  REFER  MOUNTPOINT lustre-mdt0        824K  3.57T   136K  /lustre-mdt0 lustre-mdt0/mdt0   136K  3.57T   136K  /lustre-mdt0/mdt0 lustre-mdt1        716K  3.57T   136K  /lustre-mdt1 lustre-mdt1/mdt1   136K  3.57T   136K  /lustre-mdt1/mdt1 lustre-mgs        4.78M  3.57T   136K  /lustre-mgs lustre-mgs/mgs    4.18M  3.57T  4.18M  /lus...
2012 Oct 10
2
[LLVMdev] [PATCH / PROPOSAL] bitcode encoding that is ~15% smaller for large bitcode files...
Yes, I had about 133K hits for INST_PHI with a negative value, out of 136K hits of any "INST_.*" with a negative valued operand. Overall there were 474K INST_PHI and 12 million "INST_.*" in my tests. - Jan On Wed, Oct 10, 2012 at 11:23 AM, Rafael Espíndola < rafael.espindola at gmail.com> wrote: > This looks good to me. > > Just one...
2005 Dec 13
3
is my initramdisk right? also getting ''Error opening /dev/console'' error
...92 bytes) Xen reported: 1295.792 MHz processor. Dentry cache hash table entries: 16384 (order: 4, 65536 bytes) Inode-cache hash table entries: 8192 (order: 3, 32768 bytes) vmalloc area: c5000000-fb7fe000, maxmem 34000000 Memory: 59460k/73728k available (1817k kernel code, 5920k reserved, 478k data, 136k init, 0k highmem) Checking if this processor honours the WP bit even in supervisor mode... Ok. Mount-cache hash table entries: 512 CPU: L1 I cache: 32K, L1 D cache: 32K CPU: L2 cache: 1024K Enabling fast FPU save and restore... done. Enabling unmasked SIMD FPU exception support... done. Checking ...
2012 Oct 10
0
[LLVMdev] [PATCH / PROPOSAL] bitcode encoding that is ~15% smaller for large bitcode files...
On 10 October 2012 15:15, Jan Voung <jvoung at chromium.org> wrote: > Yes, I had about 133K hits for INST_PHI with a negative value, out of 136K > hits of any "INST_.*" with a negative valued operand. > > Overall there were 474K INST_PHI and 12 million "INST_.*" in my tests. Cool! Thanks again for working on this! > - Jan Cheers, Rafael
2018 Nov 02
2
[PATCH 0/1] vhost: add vhost_blk driver
...> # G: vhost-blk kiocb over file > # > # A B C D E F G > > 1 171k 151k 148k 151k 195k 187k 175k > 2 328k 302k 249k 241k 349k 334k 296k > 3 479k 437k 179k 174k 501k 464k 404k > 4 622k 568k 143k 183k 620k 580k 492k > 5 755k 697k 136k 128k 737k 693k 579k > 6 887k 808k 131k 120k 830k 782k 640k > 7 1004k 926k 126k 131k 926k 863k 693k > 8 1099k 1015k 117k 115k 1001k 931k 712k > 9 1194k 1119k 115k 111k 1055k 991k 711k > 10 1278k 1207k 109k 114k 1130k 1046k 695k > 11 1345k 1280k 110k 108k 1119k 1091k...
2018 Jan 20
2
PDFs getting mangled
...or receive of the filter. Can you give me any advice? 1) $ jot 200000 1 > numbers.txt $ du -a . | grep numbers 1260 ./numbers-sent.txt 1248 ./numbers-received.txt 2) root at imap:~# ll test-* 125 -rw------- 1 root wheel 123K Jan 20 09:35 test-afterbogo.msg 149 -rw------- 1 root wheel 136K Jan 20 09:35 test-beforebogo.msg root at imap:~# tail -20 test-beforebogo.msg IAowMDAwMTAxMTUzIDAwMDAwIG4gCjAwMDAxMDEyMDYgMDAwMDAgbiAKMDAwMDEwMTIzMiAwMDAw MCBuIAowMDAwMTAxMjc0IDAwMDAwIG4gCjAwMDAxMDEyOTMgMDAwMDAgbiAKdHJhaWxlcgo8PCAv U2l6ZSAxOSAvUm9vdCAxMiAwIFIgL0luZm8gMSAwIFIgL0lEIFsgPGM2YTE5OTc3MW...
2018 Nov 05
2
[PATCH 0/1] vhost: add vhost_blk driver
...> # G: vhost-blk kiocb over file > # > # A B C D E F G > > 1 171k 151k 148k 151k 195k 187k 175k > 2 328k 302k 249k 241k 349k 334k 296k > 3 479k 437k 179k 174k 501k 464k 404k > 4 622k 568k 143k 183k 620k 580k 492k > 5 755k 697k 136k 128k 737k 693k 579k > 6 887k 808k 131k 120k 830k 782k 640k > 7 1004k 926k 126k 131k 926k 863k 693k > 8 1099k 1015k 117k 115k 1001k 931k 712k > 9 1194k 1119k 115k 111k 1055k 991k 711k > 10 1278k 1207k 109k 114k 1130k 1046k 695k > 11 1345k 1280k 110k 108k 1119k 1091k...
2018 Nov 05
2
[PATCH 0/1] vhost: add vhost_blk driver
...> # G: vhost-blk kiocb over file > # > # A B C D E F G > > 1 171k 151k 148k 151k 195k 187k 175k > 2 328k 302k 249k 241k 349k 334k 296k > 3 479k 437k 179k 174k 501k 464k 404k > 4 622k 568k 143k 183k 620k 580k 492k > 5 755k 697k 136k 128k 737k 693k 579k > 6 887k 808k 131k 120k 830k 782k 640k > 7 1004k 926k 126k 131k 926k 863k 693k > 8 1099k 1015k 117k 115k 1001k 931k 712k > 9 1194k 1119k 115k 111k 1055k 991k 711k > 10 1278k 1207k 109k 114k 1130k 1046k 695k > 11 1345k 1280k 110k 108k 1119k 1091k...
2018 Jan 21
2
PDFs getting mangled
...> numbers.txt >> $ du -a . | grep numbers >> 1260 ./numbers-sent.txt >> 1248 ./numbers-received.txt >> >> 2) >> root at imap:~# ll test-* >> 125 -rw------- 1 root wheel 123K Jan 20 09:35 test-afterbogo.msg >> 149 -rw------- 1 root wheel 136K Jan 20 09:35 test-beforebogo.msg > > The more I look into it, the more it looks to me like pigeonhole is > somehow losing the last 4-6K of messages over 100K. > > When my filter script is: > cat /dev/stdin | tee /tmp/input | bogofilter[...] | tee /tmp/output > Then /tmp/output...
2006 May 04
1
Debian DomU not properly mounting swap
...''hda2''], [''mode'', ''w'']]], [''device'', [''vif'']]]) However, during the boot up sequence, the following error messages occur on the console: VFS: Mounted root (ext3 filesystem) readonly. Freeing unused kernel memory: 136k freed INIT: version 2.86 booting Will now activate swap. swapon on /dev/hda2 swapon: cannot stat /dev/hda2: No such file or directory * Swap activation failed with error code 255. ... Will now mount local filesystems. * Mounting proc filesystems failed with error code 32. mount: none already mounte...
2005 Oct 18
0
RE: Fix for SMP xen dom0/domU for x86_64
...24 MHz processor. >> > Dentry cache hash table entries: 131072 (order: 8, 1048576 bytes) >> > Inode-cache hash table entries: 65536 (order: 7, 524288 bytes) >> > Memory: 509440k/524288k available (1921k kernel code, 14336k reserved, >> > 639k data >> > , 136k init) >> > Mount-cache hash table entries: 256 >> > CPU: Trace cache: 12K uops, L1 D cache: 16K >> > CPU: L2 cache: 1024K >> > CPU: Physical Processor ID: 3 >> > Booting processor 1/1 rip ffffffff80100008 rsp ffff880000655f58 >> > Initializing C...
2010 Nov 16
5
ssh prompting for password
...1.6G 0% /dev/shm nas.summitnjhome.com:/mnt/nas 903G 265G 566G 32% /mnt/nas nas2.summitnjhome.com:/mnt/store 1.4T 187G 1.1T 15% /mnt/store nas2.summitnjhome.com:/mnt/home 903G 47G 784G 6% /home none 1.6G 136K 1.6G 1% /var/lib/xenstored So therefore my RSA key should already be in my authorized_keys on any host. However logging into the virtual network, I always get prompted for a password. just for the heck of it, I scp'd the key over again to one of the virtual hosts: [bluethundr at LCENT03:~...
2005 Oct 16
8
cannot boot domU
...able entries: 4096 (order: 12, 131072 bytes) Xen reported: 3200.108 MHz processor. Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes) Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes) Memory: 2050304k/2097152k available (1682k kernel code, 46344k reserved, 561k data, 136k init) Mount-cache hash table entries: 256 CPU: Trace cache: 12K uops, L1 D cache: 16K CPU: L2 cache: 1024K CPU: Hyper-Threading is disabled Brought up 1 CPUs NET: Registered protocol family 16 xen_mem: Initialising balloon driver. Grant table initialized IA32 emulation $Id: sys_ia32.c,v 1.32 2002/0...
2012 Oct 08
2
[LLVMdev] [PATCH / PROPOSAL] bitcode encoding that is ~15% smaller for large bitcode files...
On Sat, Oct 6, 2012 at 8:32 AM, Rafael Espíndola <rafael.espindola at gmail.com > wrote: > > +static void EmitSignedInt64(SmallVectorImpl<uint64_t> &Vals, uint64_t V) { > > Please start function names with a lower case letter. > > Done -- changed this function and most of the "pushValue" functions. I left PushValueAndType alone since that is an
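For context on the numbers in this thread: the negative operand values arise because operands are encoded relative to the position of the current instruction, so a phi that refers forward to a value not yet emitted produces a negative delta. A minimal C++ sketch of the sign-rotation idea under discussion (the helper names and the plain std::vector below are illustrative stand-ins, not the patch's actual LLVM code):

    #include <cstdint>
    #include <vector>

    // Sign-rotation: non-negative n encodes as 2n, negative -n as 2n + 1, so small
    // relative operands of either sign stay small when emitted as a VBR field.
    // (The INT64_MIN corner case is ignored in this sketch.)
    static void emitSignedInt64(std::vector<uint64_t> &Vals, int64_t V) {
      uint64_t U = static_cast<uint64_t>(V);
      if (V >= 0)
        Vals.push_back(U << 1);               //  0, 1, 2, ... -> 0, 2, 4, ...
      else
        Vals.push_back(((~U + 1) << 1) | 1);  // -1, -2, ... -> 3, 5, ... (unsigned negate, no UB)
    }

    // Decoding reverses the rotation: even values are non-negative, odd are negative.
    static int64_t decodeSignedInt64(uint64_t V) {
      return (V & 1) ? -static_cast<int64_t>(V >> 1) : static_cast<int64_t>(V >> 1);
    }

As the statistics quoted elsewhere in the thread suggest, forward references are overwhelmingly a phi phenomenon, which is why the signed form is only applied to INST_PHI operands rather than to every instruction record.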
2012 Oct 10
0
[LLVMdev] [PATCH / PROPOSAL] bitcode encoding that is ~15% smaller for large bitcode files...
This looks good to me. Just one question, you found that forward references are only common with phi operands, so it is not profitable to use a signed representation for other operands, right? Cheers, Rafael
2018 Nov 05
0
[PATCH 0/1] vhost: parallel virtqueue handling
...at least we need a module parameter to other stuffs to control the number of threads I believe. Thanks > > # num-queues > # bare metal > # virtio-blk > # vhost-blk > > 1 171k 148k 195k > 2 328k 249k 349k > 3 479k 179k 501k > 4 622k 143k 620k > 5 755k 136k 737k > 6 887k 131k 830k > 7 1004k 126k 926k > 8 1099k 117k 1001k > 9 1194k 115k 1055k > 10 1278k 109k 1130k > 11 1345k 110k 1119k > 12 1411k 104k 1201k > 13 1466k 106k 1260k > 14 1517k 103k 1296k > 15 1552k 102k 1322k > 16 1480k 101k 1346k > > Vitaly Maya...
2002 Apr 15
1
Feature freeze on recommended packages coming up
As of 19:00 GMT we'll declare a feature freeze on the set of recommended packages. We will then have one week to ferret out as many problems as we can before final code freeze and platform testing. At the same time R-base is code-frozen, i.e. only critical bugs will be fixed. If you want to help making R 1.5.0 as bug-free as possible, it could be a good idea to install the set of
2018 Jan 20
0
PDFs getting mangled
...e? > > 1) > $ jot 200000 1 > numbers.txt > $ du -a . | grep numbers > 1260 ./numbers-sent.txt > 1248 ./numbers-received.txt > > 2) > root at imap:~# ll test-* > 125 -rw------- 1 root wheel 123K Jan 20 09:35 test-afterbogo.msg > 149 -rw------- 1 root wheel 136K Jan 20 09:35 test-beforebogo.msg The more I look into it, the more it looks to me like pigeonhole is somehow losing the last 4-6K of messages over 100K. When my filter script is: cat /dev/stdin | tee /tmp/input | bogofilter[...] | tee /tmp/output Then /tmp/output is the full message, but what p...
2012 Oct 11
2
[LLVMdev] [PATCH / PROPOSAL] bitcode encoding that is ~15% smaller for large bitcode files...
...ok at the patch too? Thanks, - Jan On Wed, Oct 10, 2012 at 12:39 PM, Rafael Espíndola < rafael.espindola at gmail.com> wrote: > On 10 October 2012 15:15, Jan Voung <jvoung at chromium.org> wrote: > > Yes, I had about 133K hits for INST_PHI with a negative value, out of > 136K > > hits of any "INST_.*" with a negative valued operand. > > > > Overall there were 474K INST_PHI and 12 million "INST_.*" in my tests. > > Cool! > > Thanks again for working on this! > > > - Jan > > Cheers, > Rafael > --------...
2018 Jan 22
0
PDFs getting mangled
...-a . | grep numbers >>> 1260 ./numbers-sent.txt >>> 1248 ./numbers-received.txt >>> >>> 2) >>> root at imap:~# ll test-* >>> 125 -rw------- 1 root wheel 123K Jan 20 09:35 test-afterbogo.msg >>> 149 -rw------- 1 root wheel 136K Jan 20 09:35 test-beforebogo.msg >> The more I look into it, the more it looks to me like pigeonhole is >> somehow losing the last 4-6K of messages over 100K. >> >> When my filter script is: >> cat /dev/stdin | tee /tmp/input | bogofilter[...] | tee /tmp/output >>...