Displaying 20 results from an estimated 500 matches similar to: "[Xen-ia64-devel] [PATCH 0/3][IA64] Accelerate IDE PIO on HVM/IA64"

2008 Dec 19
4
[PATCH] vmx: Fix single step on debugger
The HVM domain being debugged sometimes crashes with the following message: (XEN) Failed vm entry (exit reason 0x80000021) caused by invalid guest state (0). (XEN) ************* VMCS Area ************** (XEN) *** Guest State *** (XEN) CR0: actual=0x000000008005003b, shadow=0x000000008005003b, gh_mask=ffffffffffffffff ...[snip]... (XEN) DebugCtl=0000000000000000
2013 Feb 05
21
[PATCH] x86/hvm: fix corrupt ACPI PM-Timer during live migration
The value of the ACPI PM-Timer may be broken on save unless the timer mode is delay_for_missed_ticks. With other timer modes, vcpu->arch.hvm_vcpu.guest_time is always zero and the adjustment from its value is wrong. This patch fixes the saved value of the ACPI PM-Timer: - don't adjust the PM-Timer if vcpu->arch.hvm_vcpu.guest_time is zero. - consolidate calculations of PM-Timer to one
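A minimal sketch of the save-time logic described above, using simplified, illustrative names (the real code lives in Xen's PM-Timer emulation; ACPI_PM_FREQUENCY is the standard 3.579545 MHz rate):

    #include <stdint.h>

    #define ACPI_PM_FREQUENCY 3579545ULL  /* PM-Timer ticks per second */

    /* Compute the PM-Timer value to save. Only adjust by the gap between
     * "now" and guest_time when guest_time is meaningful, i.e. the timer
     * mode is delay_for_missed_ticks; in other modes guest_time is 0 and
     * adjusting from it would produce a bogus delta. */
    static uint32_t pmt_saved_value(uint64_t now_ns, uint64_t guest_time_ns,
                                    uint32_t tmr_val)
    {
        if (guest_time_ns == 0)
            return tmr_val;  /* don't adjust: guest_time is unused here */

        uint64_t delta_ns = now_ns - guest_time_ns;
        return tmr_val + (uint32_t)(delta_ns * ACPI_PM_FREQUENCY / 1000000000ULL);
    }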
2012 Nov 29
4
[PATCH] x86/hap: fix race condition between ENABLE_LOGDIRTY and track_dirty_vram hypercall
There is a race condition between XEN_DOMCTL_SHADOW_OP_ENABLE_LOGDIRTY and HVMOP_track_dirty_vram hypercall. Although HVMOP_track_dirty_vram is called many times from qemu-dm which is connected via VNC, XEN_DOMCTL_SHADOW_OP_ENABLE_LOGDIRTY is called only once from a migration process (e.g. xc_save, libxl-save-helper). So the race seldom happens, but the following cases are possible.
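A toy model of the race, with hypothetical names (Xen's actual fix serializes these paths; this only shows why two concurrent writers to the log-dirty state are a problem):

    #include <pthread.h>
    #include <stdbool.h>

    struct domain_model {
        pthread_mutex_t paging_lock;   /* = PTHREAD_MUTEX_INITIALIZER */
        bool log_dirty_enabled;
    };

    /* XEN_DOMCTL_SHADOW_OP_ENABLE_LOGDIRTY path: called once, by the
     * migration process (xc_save, libxl-save-helper). */
    static void enable_logdirty(struct domain_model *d)
    {
        pthread_mutex_lock(&d->paging_lock);
        d->log_dirty_enabled = true;
        /* ...allocate dirty bitmaps, flush mappings, etc... */
        pthread_mutex_unlock(&d->paging_lock);
    }

    /* HVMOP_track_dirty_vram path: called repeatedly by qemu-dm while a
     * VNC client is attached. Without taking the same lock it can run
     * mid-transition and clobber full log-dirty mode. */
    static void track_dirty_vram(struct domain_model *d)
    {
        pthread_mutex_lock(&d->paging_lock);
        if (!d->log_dirty_enabled) {
            /* VRAM-only tracking is safe only while log-dirty is off. */
        }
        pthread_mutex_unlock(&d->paging_lock);
    }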
2009 Feb 24
4
[PATCH]xend: fix a typo in pci.py
The PCI_EXP_TYPE_PCI_BRIDGE should be PCI_EXP_FLAGS_TYPE here. Also a tiny fix to the Python comment. Signed-off-by: Dexuan Cui <dexuan.cui@intel.com>
2006 Aug 01
18
[Patch] Enable "sysrq c" handler for domU coredump
Hi, On native Linux, "sysrq c" triggers crash_kexec(). On a DomainU under Xen, "sysrq c" currently invokes the Help handler instead, so there is no way to dump a DomainU's memory manually. I fix this issue as follows: 1. "sysrq c" triggers a panic on both Domain0 and DomainU. 2. On DomainU, a coredump is generated in /var/xen/dump (on Domain0).
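A sketch of the handler side of such a change, modeled on the Linux sysrq API (field names and signatures here follow later kernels and vary across versions; the 2006 patch itself predates this exact layout):

    #include <linux/kernel.h>
    #include <linux/module.h>
    #include <linux/sysrq.h>

    /* Panic on "sysrq c" so the toolstack in Domain0 can take a coredump
     * of this DomainU into /var/xen/dump. */
    static void sysrq_handle_domu_crash(int key)
    {
        panic("sysrq: crash requested (domU coredump)");
    }

    static struct sysrq_key_op sysrq_domu_crash_op = {
        .handler     = sysrq_handle_domu_crash,
        .help_msg    = "crash(c)",
        .action_msg  = "Trigger a crash",
        .enable_mask = SYSRQ_ENABLE_DUMP,
    };

    static int __init domu_crash_init(void)
    {
        return register_sysrq_key('c', &sysrq_domu_crash_op);
    }
    module_init(domu_crash_init);
    MODULE_LICENSE("GPL");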
2020 May 06
6
[PATCH net-next 1/2] virtio-net: don't reserve space for vnet header for XDP
We tried to reserve space for the vnet header before xdp.data_hard_start, but this is useless: the packet could be modified by XDP, which may invalidate the information stored in the header, and there's currently no way for XDP to know the vnet header exists. So let's just not reserve space for the vnet header in this case. Cc: Jesper Dangaard Brouer <brouer at redhat.com>
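An illustrative, self-contained sketch of the layout change (not the actual virtio-net code; the types are stand-ins): with XDP attached, the driver stops carving a vnet-header slot out of the space in front of data_hard_start.

    #include <stddef.h>

    struct vnet_hdr_model { unsigned char bytes[12]; };  /* stand-in */

    struct xdp_buff_model {
        void *data_hard_start;
        void *data;
    };

    /* Before: headroom plus a reserved slot for the vnet header. */
    static void fill_xdp_old(struct xdp_buff_model *xdp, void *buf,
                             size_t headroom)
    {
        xdp->data_hard_start = buf;
        xdp->data = (char *)buf + headroom + sizeof(struct vnet_hdr_model);
    }

    /* After: no reserved slot; an XDP program may rewrite the packet,
     * so whatever the header claimed could be stale anyway. */
    static void fill_xdp_new(struct xdp_buff_model *xdp, void *buf,
                             size_t headroom)
    {
        xdp->data_hard_start = buf;
        xdp->data = (char *)buf + headroom;
    }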
2018 Sep 06
2
[PATCH net-next 06/11] tuntap: split out XDP logic
On Thu, Sep 06, 2018 at 12:05:21PM +0800, Jason Wang wrote: > This patch splits out the XDP logic into a single function so it can > be reused by the XDP batching path in the following patch. > > Signed-off-by: Jason Wang <jasowang at redhat.com> > --- > drivers/net/tun.c | 84 ++++++++++++++++++++++++++++------------------- > 1 file changed, 51 insertions(+), 33
2007 Aug 16
0
[PATCH] avoid wasteful storing to xenstore
Looking at xend.log, I found wasteful stores to xenstore. The type of rtc/timeoffset is not 'int' but 'str'. Thanks, Kouya Signed-off-by: Kouya Shimura <kouya@jp.fujitsu.com>
2020 May 06
2
[PATCH net-next 1/2] virtio-net: don't reserve space for vnet header for XDP
On 2020/5/6 3:53 PM, Michael S. Tsirkin wrote: > On Wed, May 06, 2020 at 02:16:32PM +0800, Jason Wang wrote: >> We tried to reserve space for vnet header before >> xdp.data_hard_start. But this is useless since the packet could be >> modified by XDP which may invalidate the information stored in the >> header and there's no way for XDP to know the existence of the
2017 Dec 31
1
[bpf-next V3 PATCH 11/14] virtio_net: setup xdp_rxq_info
The virtio_net driver doesn't dynamically change the RX-ring queue layout and backing pages, but instead rejects XDP setup if all the conditions for XDP are not met. Thus, the xdp_rxq_info also remains fairly static. This allows us to simply add the reg/unreg to the net_device open/close functions. Driver hook points for xdp_rxq_info: * reg : virtnet_open * unreg: virtnet_close V3: - bugfix,
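A sketch of the hook points listed above, assuming the xdp_rxq_info API as it existed around this series (xdp_rxq_info_reg() later grew a napi_id argument):

    #include <linux/netdevice.h>
    #include <net/xdp.h>

    /* reg : virtnet_open */
    static int rxq_info_open(struct net_device *dev,
                             struct xdp_rxq_info *rxq, u32 queue_index)
    {
        return xdp_rxq_info_reg(rxq, dev, queue_index);
    }

    /* unreg: virtnet_close */
    static void rxq_info_close(struct xdp_rxq_info *rxq)
    {
        xdp_rxq_info_unreg(rxq);
    }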
2008 Oct 16
0
[PATCH] vmx: set DR7 via DOMCTL_setvcpucontext
This patch is needed for a guest domain debugger to support hardware watchpoints. Signed-off-by: Kouya Shimura <kouya@jp.fujitsu.com>
2008 Dec 25
0
[Patch] fix rombios to get the BDF.
Hi, We found a bug where rombios cannot get the correct BDF during rom_scan. This patch fixes rombios so that it gets the BDF for calling the INIT function. I'm sorry for the inconvenience. m(__)m Signed-off-by: Akio Takebe <takebe_akio@jp.fujitsu.com> Signed-off-by: Kouya Shimura <kouya@jp.fujitsu.com> Best Regards, Akio Takebe
2020 May 06
2
[PATCH net-next 1/2] virtio-net: don't reserve space for vnet header for XDP
On 2020/5/6 4:21 PM, Jesper Dangaard Brouer wrote: > On Wed, 6 May 2020 14:16:32 +0800 > Jason Wang <jasowang at redhat.com> wrote: > >> We tried to reserve space for vnet header before >> xdp.data_hard_start. But this is useless since the packet could be >> modified by XDP which may invalidate the information stored in the >> header and > IMHO above
2018 Sep 06
22
[PATCH net-next 00/11] Vhost_net TX batching
Hi all: This series tries to batch submitting packets to the underlying socket through msg_control during sendmsg(). This is done by: 1) doing the userspace copy inside vhost_net 2) building an XDP buff 3) batching at most 64 (VHOST_NET_BATCH) XDP buffs and submitting them at once through msg_control during sendmsg() 4) letting underlying sockets use the XDP buffs directly when XDP is enabled, or build skb based on XDP
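A standalone model of the batching flow in steps 1-4, using hypothetical types (the real logic is in drivers/vhost/net.c):

    #include <stddef.h>

    #define VHOST_NET_BATCH 64

    struct xdp_buff_model { void *data; size_t len; };

    struct tx_batch {
        struct xdp_buff_model bufs[VHOST_NET_BATCH];
        int count;
    };

    /* Queue one packet that was already copied from userspace (step 1)
     * and wrapped as an XDP buff (step 2); once VHOST_NET_BATCH buffs
     * have accumulated, flush them in a single sendmsg() whose
     * msg_control carries the whole batch (steps 3-4). */
    static void batch_queue(struct tx_batch *b, struct xdp_buff_model xdp,
                            void (*flush_sendmsg)(struct tx_batch *))
    {
        b->bufs[b->count++] = xdp;
        if (b->count == VHOST_NET_BATCH) {
            flush_sendmsg(b);
            b->count = 0;
        }
    }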
2007 Sep 20
12
ANNOUNCE: Xen 3.1.1 First Release Candidate
Folks, The patch queue for 3.1.1 has been pushed into http://xenbits.xensource.com/xen-3.1-testing.hg, and tagged as -rc1. Please try it out and let us know of any problems (patches gladly accepted!). -- Keir PS. The patch queue (xen-3.1-testing.pq.hg) is no longer being used.
2018 Sep 12
14
[PATCH net-next V2 00/11] vhost_net TX batching
Hi all: This series tries to batch submitting packets to the underlying socket through msg_control during sendmsg(). This is done by: 1) doing the userspace copy inside vhost_net 2) building an XDP buff 3) batching at most 64 (VHOST_NET_BATCH) XDP buffs and submitting them at once through msg_control during sendmsg() 4) letting underlying sockets use the XDP buffs directly when XDP is enabled, or build skb based on XDP