search for: 256k

Displaying 20 results from an estimated 352 matches for "256k".

2007 Mar 06
1
blocks 256k chunks on RAID 1
...ve raid1 sda5[0] sdb5[1] 4192832 blocks [2/2] [UU] [-->> it's OK]
md3 : active raid1 sdb6[1] sda6[0] 4192832 blocks [2/2] [UU] [-->> it's OK]
md4 : active raid1 sdb7[1] sda7[0] 4192832 blocks [2/2] [UU] [-->> it's OK]
md5 : active raid0 sdb8[1] sda8[0] 8385536 blocks 256k chunks [-->>>>>> What's the meaning of this? Is it an error on the hdds? What can I do to fix this?]
Thanks in advance. regards, Israel
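The "256k chunks" line in this result is not an error: md5 is a RAID0 array, and the chunk size is its stripe unit, chosen when the array was created (e.g. with mdadm's --chunk option). A minimal sanity check, assuming the old 0.90 metadata format reserves 64K at the end of each member:

```shell
# md5 is RAID0 over two 4192832-block members; with ~64K of 0.90-format
# metadata reserved per member, the striped capacity works out to:
echo $(( 2 * (4192832 - 64) ))   # prints 8385536, matching the mdstat output
```

So the array size is exactly what two healthy members should yield; "256k chunks" simply reports the stripe unit.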
2012 Jan 15
0
[CENTOS6] mtrr_cleanup: can not find optimal value - during server startup
..., range: 2MB, type UC
reg 7, base: 17152MB, range: 256MB, type UC
total RAM covered: 16310M
gran_size: 64K chunk_size: 64K num_reg: 10 lose cover RAM: 126M
gran_size: 64K chunk_size: 128K num_reg: 10 lose cover RAM: 126M
gran_size: 64K chunk_size: 256K num_reg: 10 lose cover RAM: 126M
gran_size: 64K chunk_size: 512K num_reg: 10 lose cover RAM: 126M
gran_size: 64K chunk_size: 1M num_reg: 10 lose cover RAM: 126M
gran_size: 64K chunk_size: 2M num_reg: 10 lose cover RAM: 126M
*BAD*gran_size...
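The rows printed here are candidate MTRR layouts the kernel evaluated; one of them can be forced at boot via kernel parameters. A hedged sketch (the parameter names are from the kernel's documented boot options; the chosen row is just an example):

```
# Append to the kernel command line in the bootloader config, picking one of
# the candidate rows the kernel printed, e.g. gran_size 64K / chunk_size 256K:
#   ... ro root=/dev/sda1 mtrr_gran_size=64K mtrr_chunk_size=256K
```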
2012 Nov 03
0
mtrr_gran_size and mtrr_chunk_size
...UC
reg 6, base: 2736MB, range: 16MB, type UC
reg 7, base: 17656MB, range: 8MB, type UC
total RAM covered: 16296M
gran_size: 64K chunk_size: 64K num_reg: 10 lose cover RAM: 56M
gran_size: 64K chunk_size: 128K num_reg: 10 lose cover RAM: 56M
gran_size: 64K chunk_size: 256K num_reg: 10 lose cover RAM: 56M
gran_size: 64K chunk_size: 512K num_reg: 10 lose cover RAM: 56M
gran_size: 64K chunk_size: 1M num_reg: 10 lose cover RAM: 56M
gran_size: 64K chunk_size: 2M num_reg: 10 lose cover RAM: 56M
gran_size: 64K chunk_si...
2015 Aug 12
1
[PATCH 1/2] disk-create: Allow preallocation off/metadata/full.
...rn -1;
diff --git a/tests/create/test-disk-create.sh b/tests/create/test-disk-create.sh
index 93dc706..e18d6da 100755
--- a/tests/create/test-disk-create.sh
+++ b/tests/create/test-disk-create.sh
@@ -27,11 +27,14 @@ rm -f disk*.img file:*.img
 guestfish <<EOF
   disk-create disk1.img raw 256K
+  disk-create disk2.img raw 256K preallocation:off
   disk-create disk2.img raw 256K preallocation:sparse
   disk-create disk3.img raw 256K preallocation:full
   disk-create disk4.img qcow2 256K
   disk-create disk5.img qcow2 256K preallocation:off
+  disk-create disk5.img qcow2 256K p...
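The test script touched by this patch creates fixed-size images. As a rough stand-in for what `disk-create ... raw 256K` produces (a raw image is simply a file of the requested size; the file names here are made up):

```shell
# Sparse vs. fully-written 256K raw images, approximated with coreutils:
truncate -s 256K sparse.img                               # like preallocation:off/sparse
dd if=/dev/zero of=full.img bs=1024 count=256 status=none # like preallocation:full
stat -c %s sparse.img full.img                            # both print 262144 (256K)
```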
2009 Dec 15
7
ZFS Dedupe reporting incorrect savings
...-----  -----  -----  -----  ------  -----  -----  -----
     1   272K  17.0G  17.0G  17.0G    272K  17.0G  17.0G  17.0G
     2  32.7K  2.05G  2.05G  2.05G   65.6K  4.10G  4.10G  4.10G
     4     15   960K   960K   960K      71  4.44M  4.44M  4.44M
     8      4   256K   256K   256K      53  3.31M  3.31M  3.31M
    16      1    64K    64K    64K      16     1M     1M     1M
   512      1    64K    64K    64K     854  53.4M  53.4M  53.4M
    1K      1    64K    64K    64K   1.08K  69.1M  69.1M  69.1M
    4K      1    64K    64K...
2004 Sep 30
2
Masquerade with multiple internet interfaces
-----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Hi! Here is my question: I have 2 ISPs connected to the firewall, and I have already set up the routing tables. I have a 256k and a 1.5m connection, and I want a couple of PCs from the internal network to masquerade through the 256k connection and the rest through the 1.5m connection. How do I set this up in Shorewall? I am not subscribed to the list, so please reply to me directly! Thanks!! Alberto Sierra -----BEGIN PGP SIGNATURE--...
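In current Shorewall this is usually handled with two "providers" and routing rules pinning specific internal hosts to one of them. A sketch only; the table layout follows the Shorewall documentation, and all names, interfaces and addresses below are assumptions:

```
# /etc/shorewall/providers  (two uplinks; balance new connections)
#NAME  NUMBER  MARK  DUPLICATE  INTERFACE  GATEWAY   OPTIONS
slow   1       1     main       eth1       10.0.1.1  track
fast   2       2     main       eth2       10.0.2.1  track,balance

# /etc/shorewall/rtrules  (pin a couple of internal PCs to the 256k line)
#SOURCE        DEST  PROVIDER  PRIORITY
192.168.1.10   -     slow      1000
192.168.1.11   -     slow      1000
```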
2012 Apr 12
1
6.2 x86_64 "mtrr_cleanup: can not find optimal value"
...5:36 kernel: total RAM covered: 8183M
Apr 11 17:25:36 kernel: gran_size: 64K chunk_size: 64K num_reg: 8 lose cover RAM: 4865M
Apr 11 17:25:36 kernel: gran_size: 64K chunk_size: 128K num_reg: 8 lose cover RAM: 4865M
Apr 11 17:25:36 kernel: gran_size: 64K chunk_size: 256K num_reg: 8 lose cover RAM: 4865M
Apr 11 17:25:36 kernel: gran_size: 64K chunk_size: 512K num_reg: 8 lose cover RAM: 4865M
Apr 11 17:25:36 kernel: gran_size: 64K chunk_size: 1M num_reg: 8 lose cover RAM: 4865M
Apr 11 17:25:36 kernel: gran_size: 64K chunk_siz...
2014 Aug 19
1
Bug#737905: Seabios 128k dropped xen support, use 256k instead and update build-dep
@Ian Campbell: did you not receive my email? I have sent several mails to you and pkg-xen-devel at lists.alioth.debian.org reporting this problem, from a long time ago up to a few days ago. As Michael Tokarev wrote above, the build-dep should also be changed to seabios >= 1.7.4-2~. I saw in git that this is missing from your commit:
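The versioned build-dependency mentioned here would land in the xen packaging roughly as follows (a sketch of a debian/control fragment; the surrounding entries are placeholders):

```
# debian/control of the xen source package
Build-Depends: debhelper (>= 9),
               seabios (>= 1.7.4-2~),
               ...
```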
2014 Aug 27
0
Bug#737905: Seabios 128k dropped xen support, use 256k instead and update build-dep
Control: reassign -1 src:xen
On Sat, 23 Aug 14, 18:18:51, Thomas Jepp wrote:
> After doing a xen upgrade this morning to 4.4 I hit this same issue.
>
> I've compiled a set of test packages from this commit: http://anonscm.debian.org/cgit/pkg-xen/xen.git/commit/?h=feature/seabios&id=44daa679dc80f2df734e5471476df159bc0ad38d
>
> My HVM domUs now start as expected so this seems to
2010 Jan 26
1
Bug#567025: xen-hypervisor-3.4-amd64: unhandled page fault while initializing dom0
...I/O APICs
(XEN) ACPI: HPET id: 0x8086a301 base: 0xfed00000
(XEN) Using ACPI (MADT) for SMP configuration information
(XEN) Using scheduler: SMP Credit Scheduler (credit)
(XEN) Initializing CPU#0
(XEN) Detected 2266.808 MHz processor.
(XEN) CPU: L1 I cache: 32K, L1 D cache: 32K
(XEN) CPU: L2 cache: 256K
(XEN) CPU: L3 cache: 8192K
(XEN) CPU: Physical Processor ID: 0
(XEN) CPU: Processor Core ID: 0
(XEN) VMX: Supported advanced features:
(XEN) - APIC MMIO access virtualisation
(XEN) - APIC TPR shadow
(XEN) - Extended Page Tables (EPT)
(XEN) - Virtual-Processor Identifiers (VPID)
(XEN) - Virtual...
2010 Jan 26
1
Bug#567026: xen-hypervisor-3.4-amd64: unhandled page fault while initializing dom0
...I/O APICs
(XEN) ACPI: HPET id: 0x8086a301 base: 0xfed00000
(XEN) Using ACPI (MADT) for SMP configuration information
(XEN) Using scheduler: SMP Credit Scheduler (credit)
(XEN) Initializing CPU#0
(XEN) Detected 2266.808 MHz processor.
(XEN) CPU: L1 I cache: 32K, L1 D cache: 32K
(XEN) CPU: L2 cache: 256K
(XEN) CPU: L3 cache: 8192K
(XEN) CPU: Physical Processor ID: 0
(XEN) CPU: Processor Core ID: 0
(XEN) VMX: Supported advanced features:
(XEN) - APIC MMIO access virtualisation
(XEN) - APIC TPR shadow
(XEN) - Extended Page Tables (EPT)
(XEN) - Virtual-Processor Identifiers (VPID)
(XEN) - Virtual...
2019 Jul 30
1
[PATCH net-next v5 0/5] vsock/virtio: optimizations to increase the throughput
...6.044 6.195
> > 8K      5.637   5.676  10.141  11.287
> > 16K     8.250   8.402  15.976  16.736
> > 32K    13.327  13.204  19.013  20.515
> > 64K    21.241  21.341  20.973  21.879
> > 128K   21.851  22.354  21.816  23.203
> > 256K   21.408  21.693  21.846  24.088
> > 512K   21.600  21.899  21.921  24.106
> >
> > guest -> host [Gbps]
> > pkt_size  before opt  p 1    p 2+3  p 4+5
> >
> > 32      0.045   0.046   0.057   0.057
> > 64...
2015 Dec 28
2
How to make opus work on a low end device ?
hi, I am porting the opus encoder to a low-end device with 32K RAM, 256K flash and a 32MHz ARM M3 MCU, but opus seems to consume too much. To make it work, what I can think of:
1. Only fixed point supported
2. Only mono voice application supported
3. Set complexity to zero
4. Support only one sample rate, like 16KHz
5. Silk mode only or Celt mode only
M...
2020 Aug 13
2
[PATCH v3] appliance: extract UUID from QCOW2 disk image
For an appliance in QCOW2 format, the function get_root_uuid() fails to get the UUID of the disk image. In this case, let us read the first 256k bytes of the disk image with the 'qemu-img dd' command, then pass the block read to the 'file' command. Suggested-by: Denis V. Lunev <den@openvz.org> Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com> --- v3: 01: The code refactoring was made based on...
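The approach this patch describes can be sketched with ordinary tools; the real code uses 'qemu-img dd' so the same works for QCOW2 images (file names here are assumptions):

```shell
# For QCOW2 the patch does roughly:
#   qemu-img dd -f qcow2 -O raw bs=256k count=1 if=appliance.qcow2 of=head.raw
# Demonstrated on a raw file with coreutils dd:
truncate -s 1M appliance.raw
dd if=appliance.raw of=head.raw bs=256k count=1 status=none
stat -c %s head.raw   # prints 262144: only the first 256k were copied
file head.raw         # 'file' then guesses the content of that block
```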
2003 Jun 20
7
RE: HOW TO COMBINE 2 DSL LINES IN THE SAME COMPUTER
...Hello, I have 2 DSL lines on the same computer that give me 2 different IP addresses on different subnets.
2M/256K: PPPoA
1M/256K: Classical IP over ATM
How can I combine them (1M line and 2M line) together so that I would be able to use them at the same time to pull in at 3 mbits?
...
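One common answer to this kind of question: a multipath default route spreads new connections over both uplinks, but a single TCP stream still tops out at one line's speed, so the ~3 Mbit/s is only seen in aggregate. A sketch (device and gateway names are assumptions; requires root):

```
# Balance outbound connections across the two DSL uplinks, weighted by speed:
#   ip route add default scope global \
#       nexthop via 10.0.1.1 dev dsl0 weight 1 \
#       nexthop via 10.0.2.1 dev dsl1 weight 2
```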
2020 Aug 12
2
[PATCH v2] appliance: extract UUID from QCOW2 disk image
For the appliance of the QCOW2 format, get the UUID of the disk by reading the first 256k bytes with 'qemu-img dd' command. Then pass the read block to the 'file' command. In case of failure, run the 'file' command again directly. Suggested-by: Denis V. Lunev <den@openvz.org> Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com> --- v2:...
2016 Mar 07
1
Compiling qemu
...get these errors:
error: internal error: early end of file from monitor, possible problem:
warning: host doesn't support requested feature: CPUID.80000001H:ECX.svm [bit 2]
warning: host doesn't support requested feature: CPUID.80000001H:ECX.svm [bit 2]
qemu: could not load PC BIOS 'bios-256k.bin'
First: what can I do about the 'host doesn't support requested feature'? Second: I think the compiled binary doesn't look for bios-256k.bin in the right place? Help will be much appreciated... Greetings, Dominique.
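Both messages in this result point at configuration rather than a broken build. A hedged sketch of the usual fixes (paths and options below are assumptions):

```
# 1) The CPUID.80000001H:ECX.svm warnings mean a CPU model with AMD's SVM
#    flag was requested on a host that lacks it -- pick a model the host
#    supports, e.g. "-cpu host", or drop svm from the requested model.
# 2) "could not load PC BIOS 'bios-256k.bin'": an uninstalled qemu binary
#    looks in its configured data dir; point it at the build tree's firmware
#    with -L, or run "make install":
#   qemu-system-x86_64 -L ~/src/qemu/pc-bios -m 1024 disk.img
```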
2019 Jul 29
0
[PATCH v4 0/5] vsock/virtio: optimizations to increase the throughput
...4K 3.378 3.326 6.044 6.195
> 8K      5.637   5.676  10.141  11.287
> 16K     8.250   8.402  15.976  16.736
> 32K    13.327  13.204  19.013  20.515
> 64K    21.241  21.341  20.973  21.879
> 128K   21.851  22.354  21.816  23.203
> 256K   21.408  21.693  21.846  24.088
> 512K   21.600  21.899  21.921  24.106
>
> guest -> host [Gbps]
> pkt_size  before opt  p 1    p 2+3  p 4+5
>
> 32      0.045   0.046   0.057   0.057
> 64      0.089   0.091   0.103   0.10...
2019 Jul 30
0
[PATCH net-next v5 0/5] vsock/virtio: optimizations to increase the throughput
...4K 3.378 3.326 6.044 6.195
> 8K      5.637   5.676  10.141  11.287
> 16K     8.250   8.402  15.976  16.736
> 32K    13.327  13.204  19.013  20.515
> 64K    21.241  21.341  20.973  21.879
> 128K   21.851  22.354  21.816  23.203
> 256K   21.408  21.693  21.846  24.088
> 512K   21.600  21.899  21.921  24.106
>
> guest -> host [Gbps]
> pkt_size  before opt  p 1    p 2+3  p 4+5
>
> 32      0.045   0.046   0.057   0.057
> 64      0.089   0.091   0.103   0.10...
2019 Jul 30
7
[PATCH net-next v5 0/5] vsock/virtio: optimizations to increase the throughput
...4 1.813 3.262 3.269
4K      3.378   3.326   6.044   6.195
8K      5.637   5.676  10.141  11.287
16K     8.250   8.402  15.976  16.736
32K    13.327  13.204  19.013  20.515
64K    21.241  21.341  20.973  21.879
128K   21.851  22.354  21.816  23.203
256K   21.408  21.693  21.846  24.088
512K   21.600  21.899  21.921  24.106

guest -> host [Gbps]
pkt_size  before opt  p 1    p 2+3  p 4+5

32      0.045   0.046   0.057   0.057
64      0.089   0.091   0.103   0.104
128     0.170   0.179   0.1...