Displaying 8 results from an estimated 8 matches for "enalbed".
2013 Nov 25
14
[PATCH] VMX: wbinvd when vmentry under UC
...ry back to UC guest, to prevent
the cache being polluted by hypervisor accesses to guest memory during UC mode.
However, wbinvd is a _very_ time-consuming operation, so
1. wbinvd ... the timer has a good chance of expiring while
IRQs are disabled; its delivery is then delayed until
2. ... vmentry back to the guest (with IRQs enabled), at which point the timer
interrupt fires and drops the guest at once;
3. drop to the hypervisor ... then vmentry and wbinvd again.
This loop runs over and over until a wbinvd is lucky enough not to
overlap a timer expiry and the loop breaks; it typically repeats
10K~60K times, blocking the guest for 10s~60s.
reprogram...
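The loop the poster describes is easier to see in a toy model. Below is a minimal,
compilable C sketch of that interaction; it is not Xen code, and every name and
timing in it (flush_cache_ms, timer_expires_during, the millisecond figures) is
invented for illustration. The point it models is that a whole-cache flush done
with IRQs disabled usually outlasts the timer period, so the pending timer
interrupt fires as soon as VM entry re-enables IRQs and immediately drops the
guest again.

/* Toy model of the wbinvd-vs-timer loop described above.  Not Xen code;
 * all names and timings are illustrative stand-ins. */
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

/* Stand-in for wbinvd: assume the flush takes 1..4 ms. */
static int flush_cache_ms(void)
{
    return 1 + rand() % 4;
}

/* Assume the host timer fires roughly every 2 ms, so a slow flush done with
 * IRQs disabled usually overlaps an expiry. */
static bool timer_expires_during(int ms)
{
    return ms >= 2;
}

int main(void)
{
    int entries = 0;
    bool pending;

    do {
        /* IRQs off: flush the whole cache before re-entering the UC guest. */
        int cost = flush_cache_ms();

        /* If the timer expired during the flush, its interrupt is delivered
         * right after VM entry re-enables IRQs, the guest drops back to the
         * hypervisor at once, and the flush runs again. */
        pending = timer_expires_during(cost);
        entries++;
    } while (pending);

    printf("guest made progress after %d wasted VM entries\n", entries);
    return 0;
}

In the pathological case described above, the real loop repeats tens of thousands
of times before a flush happens to fit between two timer expiries.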
2006 Jul 11
0
A multi-ISP gateway with priority routing and GRE tunneling network problem.
Hey guys,
I have a problem building a multi-ISP gateway using a GNU/Linux
box with priority routing enabled.
Any ideas what I should do? Maybe a step-by-step intro?
Thanks in advance.
Deslay
2008 Feb 28
0
SELinux: Allow postfix to connect to MySQL
Hello,
I just configured postfix so that it uses a MySQL database to store
virtual domain and user info. When SELinux is enabled, postfix fails to
connect to MySQL. Below is an example error from the maillog file.
postfix/trivial-rewrite[4753]: fatal:
mysql:/etc/postfix/mysql_virtual_alias_maps.cf(0,lock|fold_fix): table
lookup problem
When I turn off SELinux, the problem does not appear.
How can I configure SELinux to...
2011 Feb 17
1
What makes live migration so slow?
Hello,
I have now shifted to CentOS 5.5, and I am testing live migration between 2
physical hosts with Xen 3.1.2. Xen 3.1.2 (virtualization) is included in
CentOS 5.5 during the installation phase, so everything is handled by
default. A third host with Ubuntu OS serves as the network file system. The
three hosts are connected by one D-link gigabit switch (DGS-2205).
The downtime of live
2013 Oct 30
3
[PATCH 4/4] XSA-60 security hole: flush cache when vmentry back to UC guest
From 159251a04afcdcd8ca08e9f2bdfae279b2aa5471 Mon Sep 17 00:00:00 2001
From: Liu Jinsong <jinsong.liu@intel.com>
Date: Thu, 31 Oct 2013 06:38:15 +0800
Subject: [PATCH 4/4] XSA-60 security hole: flush cache when vmentry back to UC guest
This patch flushes the cache when vmentry returns to a UC guest, to prevent
the cache being polluted by hypervisor accesses to guest memory during UC mode.
The elegant way to do this
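As a rough illustration of the fix this excerpt describes, here is a minimal C
sketch of flushing the cache before re-entering a guest that runs with caching
disabled. It is not the Xen implementation: the vcpu_state type, the
hv_touched_guest_memory flag and prepare_vmentry are hypothetical; only CR0.CD
(bit 30 of CR0) and the wbinvd instruction are real architectural details.

/* Hypothetical sketch, not the Xen code: flush hypervisor-dirtied cache
 * lines before vmentry when the guest runs uncached (CR0.CD set). */
#include <stdbool.h>

#define X86_CR0_CD (1u << 30)            /* CR0.CD: cache disable */

struct vcpu_state {                      /* illustrative stand-in type */
    unsigned long guest_cr0;
    bool hv_touched_guest_memory;        /* set wherever the hypervisor writes guest pages */
};

static void flush_all_caches(void)
{
    /* Stand-in for the privileged wbinvd instruction. */
}

static void prepare_vmentry(struct vcpu_state *v)
{
    /* The guest sees memory as uncached, but the hypervisor's writes to its
     * pages may still sit in the cached host view: flush them so the guest's
     * UC view stays coherent. */
    if ((v->guest_cr0 & X86_CR0_CD) && v->hv_touched_guest_memory) {
        flush_all_caches();
        v->hv_touched_guest_memory = false;
    }
}

int main(void)
{
    struct vcpu_state v = {
        .guest_cr0 = X86_CR0_CD,
        .hv_touched_guest_memory = true,
    };
    prepare_vmentry(&v);
    return 0;
}

The first result in this list describes why an unconditional wbinvd on this path
is costly; the "elegant way" the author refers to is cut off by the excerpt.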
2018 Sep 12
14
[PATCH net-next V2 00/11] vhost_net TX batching
...rlayer socket
through msg_control during sendmsg(). This is done by:
1) doing the userspace copy inside vhost_net,
2) building an XDP buff,
3) batching at most 64 (VHOST_NET_BATCH) XDP buffs and submitting them at once
through msg_control during sendmsg(),
4) letting underlayer sockets use the XDP buffs directly when XDP is enabled,
or build an skb based on the XDP buff.
For packets that cannot be built easily with XDP, or for cases where
batch submission is hard (e.g. sndbuf is limited), we go for
the previous slow path, passing an iov iterator to the underlayer socket
through sendmsg() once per packet.
This can help to impr...
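A toy, compilable C sketch of the batching idea in this cover letter follows; the
64-buffer limit mirrors the VHOST_NET_BATCH value mentioned above, but the
xdp_buf type, queue_packet and submit_batch are illustrative stand-ins, not the
vhost_net or tun API.

/* Illustrative sketch of batch-and-submit, not the kernel implementation. */
#include <stddef.h>
#include <stdio.h>

#define BATCH_MAX 64                      /* mirrors VHOST_NET_BATCH above */

struct xdp_buf { size_t len; };           /* stand-in for a built XDP buff */

static struct xdp_buf batch[BATCH_MAX];
static size_t batch_len;

/* Stand-in for handing the whole batch to the underlayer socket in a single
 * sendmsg() call via msg_control. */
static void submit_batch(void)
{
    if (batch_len == 0)
        return;
    printf("submitting %zu buffs in one sendmsg()\n", batch_len);
    batch_len = 0;
}

/* Copy from userspace, build an XDP buff, queue it, and flush when full. */
static void queue_packet(size_t len)
{
    batch[batch_len++] = (struct xdp_buf){ .len = len };
    if (batch_len == BATCH_MAX)
        submit_batch();
}

int main(void)
{
    for (size_t i = 0; i < 150; i++)
        queue_packet(64);
    submit_batch();                       /* flush the final partial batch */
    return 0;
}

Batching amortizes the per-call overhead of sendmsg() across up to 64 packets
instead of paying it once per packet.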
2018 Sep 06
22
[PATCH net-next 00/11] Vhost_net TX batching
...rlayer socket
through msg_control during sendmsg(). This is done by:
1) doing the userspace copy inside vhost_net,
2) building an XDP buff,
3) batching at most 64 (VHOST_NET_BATCH) XDP buffs and submitting them at once
through msg_control during sendmsg(),
4) letting underlayer sockets use the XDP buffs directly when XDP is enabled,
or build an skb based on the XDP buff.
For packets that cannot be built easily with XDP, or for cases where
batch submission is hard (e.g. sndbuf is limited), we go for
the previous slow path, passing an iov iterator to the underlayer socket
through sendmsg() once per packet.
This can help to impr...
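This V1 cover letter also implies a decision between the batched fast path and
the old per-packet path; a hedged sketch of that split follows, with every name
(pkt, tx_packet, send_one, add_to_batch) invented for illustration rather than
taken from the kernel.

/* Hypothetical sketch of the fast-path/slow-path split described above. */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct pkt {
    size_t len;
    bool   fits_xdp;                       /* can this packet be built as an XDP buff? */
};

/* Slow path: one sendmsg()-style call per packet. */
static void send_one(const struct pkt *p)
{
    printf("slow path: %zu bytes, one call per packet\n", p->len);
}

/* Fast path: queue the packet for a single batched submission. */
static void add_to_batch(const struct pkt *p)
{
    printf("fast path: %zu bytes batched\n", p->len);
}

static void tx_packet(const struct pkt *p, bool sndbuf_limited)
{
    /* Packets that cannot be expressed as XDP buffs, or situations where
     * batch submission is hard (e.g. a limited sndbuf), take the old path. */
    if (!p->fits_xdp || sndbuf_limited)
        send_one(p);
    else
        add_to_batch(p);
}

int main(void)
{
    struct pkt small = { .len = 64,    .fits_xdp = true  };
    struct pkt big   = { .len = 65536, .fits_xdp = false };

    tx_packet(&small, false);              /* batched */
    tx_packet(&big,   false);              /* falls back to per-packet send */
    return 0;
}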