Results from an estimated 5000 matches similar to: "[Lguest] [PATCH 4/5] lguest: use KVM hypercalls"
2009 Apr 19
0
[PULL] lguest & virtio fixes
The following changes since commit ff54250a0ebab7f90a5f848a0ba63f999830c872:
Linus Torvalds (1):
Remove 'recurse into child resources' logic from 'reserve_region_with_split()'
are available in the git repository at:
ssh://master.kernel.org/pub/scm/linux/kernel/git/rusty/linux-2.6-lguest-and-virtio.git master
Marcelo Tosatti (1):
virtio: fix suspend when using
2007 Jun 28
1
Lguest
Hello Rusty,
I have just started to read the code (and I'm not an expert kernel
programmer), but is this condition OK?
(function setup_pagetables(), in file Documentation/lguest.c)
/* Ideally we map all physical memory starting at page_offset.
* However, if page_offset is 0xC0000000 we can only map 1G of physical
* (0xC0000000 + 1G overflows). */
if (mem >
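
The excerpt cuts off before the condition itself. As a rough, hedged illustration of the overflow the comment is worried about (plain userspace C, not the actual Documentation/lguest.c code): with a 32-bit address space, mapping physical memory starting at page_offset can cover at most 4 GB - page_offset bytes, which is 1 GB when page_offset is 0xC0000000.

#include <stdint.h>
#include <stdio.h>

/* Hedged sketch, not the actual lguest code: with a 32-bit virtual
 * address space, mapping guest physical memory starting at page_offset
 * can cover at most (4 GB - page_offset) bytes before
 * page_offset + mem wraps past the top of the address space. */
int main(void)
{
	uint64_t page_offset = 0xC0000000;		/* typical PAGE_OFFSET */
	uint64_t mem = 2ULL * 1024 * 1024 * 1024;	/* 2 GB of guest memory */
	uint64_t mappable = (1ULL << 32) - page_offset;	/* 1 GB here */

	if (mem > mappable)
		printf("can only map %llu MB of %llu MB\n",
		       (unsigned long long)(mappable >> 20),
		       (unsigned long long)(mem >> 20));
	return 0;
}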
2007 Jul 11
1
lguest over qemu
Hi,
I'm setting up my lguest playground with qemu, but didn't have a good
start... maybe because my modest laptop only has 512 MB of RAM.
This is my qemu command:
qemu -s -no-kqemu -m 400 -hda linux26.img -net nic,model=rtl8139 -net tap
(linux26.img includes a 2.6.21.5 kernel with the lguest-2.6.21-307 patch)
This is my lguest command
2009 Jun 05
1
[PATCH] lguest: PAE support
Hi, this version requires that host and guest have the same PAE status.
NX cap is not offered to the guest, yet.
Thanks,
Matias
Lguest PAE support
Signed-off-by: Matias Zabaljauregui <zabaljauregui at gmail.com>
---
Documentation/lguest/lguest.txt | 1 -
arch/x86/include/asm/lguest.h | 7 +-
arch/x86/include/asm/lguest_hcall.h | 3 +-
arch/x86/lguest/Kconfig
2009 Sep 21
1
[PATCH 2/5] lguest: use set_pte/set_pmd uniformly for real page table entries
If we're building a pte, we can use simple assignment; only use set_pte
etc. when we're actually going to use that destination as a PTE. I
don't know that we'll ever run under Xen, but it's neater.
And use set_pte/set_pmd rather than assuming native_ versions, even
though that's probably true for most people.
(Includes compile fix by Kamalesh Babulal <kamalesh at
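
A minimal sketch of the rule the patch describes, using the kernel's real set_pte()/pfn_pte() helpers; the surrounding function and variable names are hypothetical, not lguest's actual code.

#include <asm/pgtable.h>

/*
 * Hedged sketch of the rule above; install_mapping() and its arguments
 * are hypothetical.
 */
static void install_mapping(pte_t *ptep, unsigned long pfn)
{
	/* Building a pte value locally: a plain assignment is enough,
	 * nothing else can see this temporary yet. */
	pte_t pte = pfn_pte(pfn, PAGE_KERNEL);

	/* Writing into a live page table that the MMU (or a hypervisor
	 * such as Xen) may be watching: go through set_pte() so any
	 * paravirt hook sees the update, instead of assuming
	 * native_set_pte(). */
	set_pte(ptep, pte);
}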
2016 Jun 15
7
[PATCH net-next V2] tun: introduce tx skb ring
We used to queue tx packets in sk_receive_queue; this is less
efficient since it requires spinlocks to synchronize between producer
and consumer.
This patch tries to address this by:
- introduce a new mode, enabled only when IFF_TX_ARRAY is set, which
switches from sk_receive_queue to a fixed-size skb array with 256
entries.
- introduce a new proto_ops peek_len which was
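
The excerpt is truncated, but the core idea, replacing a lock-protected queue with a fixed-size array indexed separately by producer and consumer, can be illustrated with a toy single-producer/single-consumer ring in plain C11. This is an illustration of the concept only, not the kernel's skb_array/ptr_ring implementation.

#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

#define RING_SIZE 256	/* matches the 256-entry array in the patch */

/* Toy single-producer/single-consumer ring: each side owns one index,
 * so no spinlock is shared between them. */
struct spsc_ring {
	void *slots[RING_SIZE];
	_Atomic size_t head;	/* written only by the producer */
	_Atomic size_t tail;	/* written only by the consumer */
};

static bool ring_produce(struct spsc_ring *r, void *pkt)
{
	size_t head = atomic_load_explicit(&r->head, memory_order_relaxed);
	size_t tail = atomic_load_explicit(&r->tail, memory_order_acquire);

	if (head - tail == RING_SIZE)
		return false;			/* ring full: drop or back off */
	r->slots[head % RING_SIZE] = pkt;
	atomic_store_explicit(&r->head, head + 1, memory_order_release);
	return true;
}

static void *ring_consume(struct spsc_ring *r)
{
	size_t tail = atomic_load_explicit(&r->tail, memory_order_relaxed);
	size_t head = atomic_load_explicit(&r->head, memory_order_acquire);

	if (tail == head)
		return NULL;			/* ring empty */
	void *pkt = r->slots[tail % RING_SIZE];
	atomic_store_explicit(&r->tail, tail + 1, memory_order_release);
	return pkt;
}

With one producer and one consumer each owning its own index, the two sides only need acquire/release ordering rather than a shared spinlock, which is the contention the patch is trying to avoid.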
2009 Apr 16
1
[1/2] tun: Only free a netdev when all tun descriptors are closed
On Thu, Apr 16, 2009 at 01:08:18AM -0000, Herbert Xu wrote:
> On Wed, Apr 15, 2009 at 10:38:34PM +0800, Herbert Xu wrote:
> >
> > So how about this? We replace the dev destructor with our own that
> > doesn't immediately call free_netdev. We only call free_netdev once
> > all tun fd's attached to the device have been closed.
>
> Here's the patch.
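
A hedged sketch of the general pattern being discussed, deferring the real free until the last reference is dropped. This is plain C with an atomic counter, not the actual tun/free_netdev patch.

#include <stdatomic.h>
#include <stdlib.h>

/* Hedged sketch: the object carries a reference count, every attached
 * file descriptor takes a reference, and the real free only runs once
 * the last holder drops its reference. */
struct shared_dev {
	atomic_int refcnt;
	/* ... device state ... */
};

static struct shared_dev *dev_get(struct shared_dev *dev)
{
	atomic_fetch_add(&dev->refcnt, 1);
	return dev;
}

static void dev_put(struct shared_dev *dev)
{
	/* The caller that drops the count to zero does the freeing, so
	 * close() on one fd never frees a device another fd still uses. */
	if (atomic_fetch_sub(&dev->refcnt, 1) == 1)
		free(dev);
}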
2016 Jun 30
9
[PATCH net-next V3 0/6] switch to use tx skb array in tun
Hi all:
This series tries to switch to use skb array in tun. This is used to
eliminate the spinlock contention between producer and consumer. The
conversion was straightforward: just introduce a tx skb array and use
it instead of sk_receive_queue.
A minor issue is to keep the tx_queue_len behaviour, since tun used to
use it for the length of sk_receive_queue. This is done through:
- add the
2016 Jun 30
10
[PATCH net-next V4 0/6] switch to use tx skb array in tun
Hi all:
This series tries to switch to use skb array in tun. This is used to
eliminate the spinlock contention between producer and consumer. The
conversion was straightforward: just introduce a tx skb array and use
it instead of sk_receive_queue.
A minor issue is to keep the tx_queue_len behaviour, since tun used to
use it for the length of sk_receive_queue. This is done through:
- add the
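
The list is truncated before the details, so the following is only a hedged sketch of the general idea of honouring a configurable queue length with an array-backed queue (plain C with a mutex, not tun's actual code): when the length changes, pending entries are moved into a new array of the new size and anything that no longer fits is dropped, as a shorter queue would have done.

#include <pthread.h>
#include <stdlib.h>
#include <string.h>

/* Hedged sketch of the resize idea only. In this simplified model the
 * queued entries are kept packed at the start of slots[]. */
struct pkt_queue {
	pthread_mutex_t lock;
	void **slots;
	size_t size;	/* capacity, i.e. the configured queue length */
	size_t count;	/* entries currently queued */
};

static int queue_resize(struct pkt_queue *q, size_t new_size)
{
	void **fresh = calloc(new_size, sizeof(*fresh));
	if (!fresh)
		return -1;

	pthread_mutex_lock(&q->lock);
	/* Keep as many pending entries as the new length allows. */
	size_t keep = q->count < new_size ? q->count : new_size;
	memcpy(fresh, q->slots, keep * sizeof(*fresh));
	free(q->slots);
	q->slots = fresh;
	q->size = new_size;
	q->count = keep;
	pthread_mutex_unlock(&q->lock);
	return 0;
}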
2017 Mar 21
12
[PATCH net-next 0/8] vhost-net rx batching
Hi all:
This series tries to implement rx batching for vhost-net. This is done
by batching the dequeuing from skb_array which was exported by
the underlying socket and passing the skb back through msg_control to
finish userspace copying.
Tests show at most a 19% improvement in rx pps.
Please review.
Thanks
Jason Wang (8):
ptr_ring: introduce batch dequeuing
skb_array: introduce batch dequeuing
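
The cover letter only names the batch-dequeue patches, so here is a hedged sketch of what batch dequeuing means in general (plain C with a mutex-protected ring, not the ptr_ring/skb_array API): the consumer takes the lock once and pulls up to a whole batch of pointers out, instead of paying the locking cost per packet.

#include <pthread.h>
#include <stddef.h>

/* Hedged sketch of batch dequeuing: one lock round-trip per batch
 * rather than one per packet. */
struct locked_ring {
	pthread_mutex_t lock;
	void **slots;
	size_t capacity;
	size_t head;	/* index of the oldest queued entry */
	size_t count;	/* entries currently queued */
};

static size_t ring_dequeue_batch(struct locked_ring *r, void **out,
				 size_t batch)
{
	pthread_mutex_lock(&r->lock);
	size_t n = r->count < batch ? r->count : batch;
	for (size_t i = 0; i < n; i++)
		out[i] = r->slots[(r->head + i) % r->capacity];
	r->head = (r->head + n) % r->capacity;
	r->count -= n;
	pthread_mutex_unlock(&r->lock);
	return n;
}

Amortising the per-packet locking and cache-line traffic over a batch is the kind of saving the quoted rx pps improvement points at.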