Displaying 20 results from an estimated 1000 matches similar to: "shadow page code"
2008 Feb 03
5
[PATCH] Simplify paging_invlpg when flush is not required.
Simplify paging_invlpg when flush is not required.
A new 'flush' parameter is added to paging_invlpg, allowing the
caller to indicate whether the flush check is required. It is
wasteful to always validate the shadow linear mapping if the caller
doesn't check the return value at all.
Signed-off-by: Kevin Tian <kevin.tian@intel.com>
Thanks,
Kevin
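[A minimal sketch of the pattern this summary describes, with illustrative names rather than the actual Xen code: the caller tells paging_invlpg() whether it will act on the "needs flush" result, so the costly shadow-linear-mapping check is only paid for when it matters.]

#include <stdbool.h>

struct vcpu { int id; };                        /* stand-in for the real type */

static bool shadow_check_linear_mapping(struct vcpu *v, unsigned long va)
{
    (void)v; (void)va;
    return true;                                /* pretend a flush is needed */
}

static bool paging_invlpg(struct vcpu *v, unsigned long va, bool flush)
{
    /* ... invalidate the shadow entry for 'va' here (omitted) ... */
    if (!flush)
        return false;                           /* caller ignores the result */
    /* only do the expensive check when the caller consumes it */
    return shadow_check_linear_mapping(v, va);
}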
2009 Apr 22
7
Consult some concepts about shadow paging mechanism
Dear All:
I am pretty new to xen-devel, so please correct me in the following.
Assume we have the following terms:
GPT: guest page table
SPT: shadow page table
(Question a) When the guest OS is running, is it always using the SPT
for address translation? If so, how does the guest OS read and modify
its own GPT content? It seems that there is a page table entry in the
SPT for the GPT page.
(Question
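[A purely conceptual sketch of the mechanism the question is about; this is not Xen's actual shadow code and every name below is made up. The idea: the CPU always walks the SPT, and the guest frames that hold the GPT itself are also mapped in the SPT, typically write-protected, so guest writes to its own page tables can be caught and propagated into the shadow.]

#include <stdbool.h>
#include <stddef.h>

#define MAX_TRACKED 64

struct shadow_map {
    unsigned long gpt_frame[MAX_TRACKED];  /* guest frames holding GPT pages */
    unsigned long spt_entry[MAX_TRACKED];  /* matching shadow entries        */
    size_t count;
};

/* Does this guest frame back one of the tracked guest page tables? */
static bool is_gpt_frame(const struct shadow_map *m, unsigned long gfn)
{
    for (size_t i = 0; i < m->count; i++)
        if (m->gpt_frame[i] == gfn)
            return true;
    return false;
}

/* Write fault on a write-protected GPT frame: let the guest's update happen
 * (emulated), then refresh the matching SPT entry so shadow and guest page
 * tables stay in sync. The real resync logic is omitted here. */
static void on_gpt_write_fault(struct shadow_map *m, unsigned long gfn)
{
    if (!is_gpt_frame(m, gfn))
        return;                        /* ordinary write-protection fault */
    /* ... emulate the guest write and update m->spt_entry[...] ... */
}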
2007 Apr 27
2
dovecot + ldap + quota
hi....
I am using dovecot 1.0rc26 and I started to configure the quota plugin,
but I think it's not working correctly.
I configured it as suggested in http://wiki.dovecot.org/Quota
in dovecot.conf:
protocol imap {
mail_plugins = quota imap_quota
}
plugin {
# 10 MB quota limit
quota = maildir:storage=10240
}
in dovecot-ldap.conf:
user_attrs =
2007 Oct 24
13
Ryan Bates' Multi-object Forms and the date_select
I think I've found a bug with Edge.
I'm trying out Ryan Bates' multi-object form technique shown in one of
his Rails-casts (railscasts.com/episodes/75). If you use a fields_for
similar to that shown (here: http://pastie.caboo.se/110480), you get a
Server Error 500:
------
Status: 500 Internal Server Error
Conflicting types for parameter containers. Expected an
2011 May 02
32
[PATCH] blkback: Fix block I/O latency issue
In the blkback driver, after I/O requests are submitted to the Dom-0 block I/O subsystem, blkback effectively goes to 'sleep' without letting blkfront know about it (req_event isn't set appropriately). Hence blkfront doesn't notify blkback when it submits a new I/O, thus delaying the 'dispatch' of the new I/O to the Dom-0 block I/O subsystem. The new I/O is
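[A simplified sketch of the notification handshake this report is about, using stand-in types rather than the actual blkback structures: before going to sleep the backend must move req_event past the last request it consumed and then re-check for work, which is what the Xen ring protocol's RING_FINAL_CHECK_FOR_REQUESTS() step does; skipping it leaves blkfront with no reason to send an event for the next I/O.]

struct demo_ring {
    unsigned int req_prod;   /* advanced by the frontend for each new request */
    unsigned int req_cons;   /* advanced by the backend as it consumes them   */
    unsigned int req_event;  /* frontend sends an event once req_prod passes this */
};

/* Returns non-zero if the backend should keep processing instead of sleeping. */
static int final_check_for_requests(struct demo_ring *r)
{
    if (r->req_cons != r->req_prod)
        return 1;                       /* more work already queued */

    /* Arm the event *before* the final emptiness check; a request that slips
     * in between is then still signalled, avoiding the stall described above. */
    r->req_event = r->req_cons + 1;
    /* (a memory barrier belongs here in real code) */
    return r->req_cons != r->req_prod;
}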
2013 Jun 21
5
[PATCH 3/4] xen-blkback: check the number of iovecs before allocating a bios
With the introduction of indirect segments we can receive requests
with a number of segments bigger than the maximum number of allowed
iovecs in a bio, so make sure that blkback doesn't try to allocate a
bio with more iovecs than BIO_MAX_PAGES.
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
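[A rough sketch, in kernel style, of the check being described. BIO_MAX_PAGES, bio_alloc(), min_t() and GFP_KERNEL are real kernel names; the surrounding function is illustrative, not taken from the patch: never request more iovecs than one bio can hold, and emit several bios when the request carries more segments.]

/* kernel context: <linux/bio.h> provides bio_alloc() and BIO_MAX_PAGES */
static int submit_segments(unsigned int nseg)
{
    while (nseg > 0) {
        unsigned int nr = min_t(unsigned int, nseg, BIO_MAX_PAGES);
        struct bio *bio = bio_alloc(GFP_KERNEL, nr);

        if (!bio)
            return -ENOMEM;
        /* ... map up to 'nr' segments into this bio and submit it ... */
        nseg -= nr;
    }
    return 0;
}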
2012 Nov 02
2
[PATCH] xen-blk: persistent-grants fixes
This patch contains fixes for persistent grants implementation v2:
* handle == 0 is a valid handle, so initialize grants in blkback by
setting the handle to BLKBACK_INVALID_HANDLE instead of 0. Reported
by Konrad Rzeszutek Wilk.
* new_map is a boolean, use "true" or "false" instead of 1 and 0.
Reported by Konrad Rzeszutek Wilk.
* blkfront announces the
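[A small sketch of the first fix listed above; the sentinel value and struct below are illustrative, not the driver's actual definitions. Since a grant handle of 0 is valid, "not mapped yet" has to be encoded with a dedicated sentinel instead of zero-initialisation.]

#include <stdbool.h>

#define BLKBACK_INVALID_HANDLE (~0U)   /* illustrative value for the sentinel */

struct persistent_gnt_demo {
    unsigned int handle;               /* 0 is a perfectly valid handle  */
    bool new_map;                      /* a boolean: true/false, not 1/0 */
};

static void init_persistent_gnt(struct persistent_gnt_demo *gnt)
{
    gnt->handle = BLKBACK_INVALID_HANDLE;  /* not 0 */
    gnt->new_map = true;
}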
2011 Oct 08
9
xentop reporting zero written sectors
Just moving a chunk of files from one filesystem on xvba to another
on xvdb, and was monitoring with xentop as it was taking longer than
expected.
The VBD_RD and VBD_WR counters were both clocking up as expected, as
was the VBD_RSECT counter, but the VBD_WSECT counter was stuck on
zero. I toggled on the individual VBD device counters and these showed
the same (with the RD and WR counters
2011 Jun 21
13
VM disk I/O limit patch
Hi all,
I have added a blkback QoS patch.
With it you can configure (dynamically or statically) a different I/O
speed for each VM disk.
----------------------------------------------------------------------------
diff -urNp blkback/blkback.c blkback-qos/blkback.c
--- blkback/blkback.c 2011-06-22 07:54:19.000000000 +0800
+++ blkback-qos/blkback.c 2011-06-22 07:53:18.000000000 +0800
@@ -44,6 +44,11 @@
2013 Apr 19
14
[GIT PULL] (xen) stable/for-jens-3.10
Hey Jens,
Please in your spare time (if there is such a thing at a conference)
pull this branch:
git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git stable/for-jens-3.10
for your v3.10 branch. Sorry for being so late with this.
<blurb>
It has the 'feature-max-indirect-segments' implemented in both backend
and frontend. The current problem with the backend and
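[A hedged sketch of how a feature key like this is typically negotiated over xenbus. xenbus_printf() and xenbus_scanf() are real Xen helpers; the function bodies and the value 256 are only illustrative, not taken from this pull request.]

/* backend: advertise how many indirect segments it can handle */
static void backend_advertise(struct xenbus_transaction xbt,
                              struct xenbus_device *dev)
{
    /* error handling omitted for brevity */
    xenbus_printf(xbt, dev->nodename,
                  "feature-max-indirect-segments", "%u", 256u);
}

/* frontend: read the key, treating its absence as "feature not supported" */
static unsigned int frontend_max_indirect(struct xenbus_device *dev)
{
    unsigned int segs;

    if (xenbus_scanf(XBT_NIL, dev->otherend,
                     "feature-max-indirect-segments", "%u", &segs) != 1)
        segs = 0;
    return segs;
}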
2013 May 13
22
[PATCH] xen-blk(front|back): Handle large physical sector disks
I accidentally realized today that any domUs using the paravirt disk driver
potentially suffer from poor performance when they are handed a physical
volume and partitioning is done inside the guest. The physical volume passed in
has to be one that has the compat 512 logical sector size but hints its real
sector size (e.g. 4096) as the physical sector size.
In dom0, handling is correct and
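[A hedged sketch of the kind of plumbing the subject line suggests. blk_queue_logical_block_size(), blk_queue_physical_block_size() and xenbus_scanf() are real kernel/Xen helpers; the xenstore key name and the function itself are assumptions, not the patch. The frontend learns the backend's physical sector size and passes it on to the block layer so partitioning inside the guest can align correctly.]

static void set_sector_sizes(struct xenbus_device *dev, struct request_queue *rq)
{
    unsigned int phys_sec;

    if (xenbus_scanf(XBT_NIL, dev->otherend,
                     "physical-sector-size", "%u", &phys_sec) != 1)
        phys_sec = 512;                        /* compat default */

    blk_queue_logical_block_size(rq, 512);     /* keep the compat 512 view */
    blk_queue_physical_block_size(rq, phys_sec);
}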
2011 May 25
2
[PATCH linux-2.6.18-xen] blkback: don't call vbd_size() if bd_disk is NULL
...because vbd_size() dereferences bd_disk if bd_part is NULL.
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
---
drivers/xen/blkback/vbd.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff -r 415a9b435fef drivers/xen/blkback/vbd.c
--- a/drivers/xen/blkback/vbd.c Mon May 23 18:36:33 2011 +0100
+++ b/drivers/xen/blkback/vbd.c Wed May 25 12:15:26 2011 +0200
@@ -73,7 +73,6 @@
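[A minimal sketch of the guard the subject describes; field and helper names follow the description, the exact hunk is not reproduced here. Only query the size once a gendisk is attached, because the size helper falls back to bd_disk when there is no partition.]

if (bdev->bd_disk == NULL)      /* no gendisk yet: vbd_size() would oops */
    vbd->size = 0;
else
    vbd->size = vbd_size(vbd);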
2012 Sep 19
27
[PATCH] Persistent grant maps for xen blk drivers
This patch implements persistent grants for the xen-blk{front,back}
mechanism. The effect of this change is to reduce the number of unmap
operations performed, since they cause a (costly) TLB shootdown. This
allows the I/O performance to scale better when a large number of VMs
are performing I/O.
Previously, the blkfront driver was supplied a bvec[] from the request
queue. This was granted to
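[A conceptual sketch of the idea described in this excerpt, not the actual driver code; the struct and helper are made up, while grant_ref_t and the list helpers are real kernel/Xen types. Pages stay granted for the life of the ring and are reused for many requests, with data copied through them, so the costly unmap/TLB-shootdown path is not taken on every I/O.]

struct pgrant {
    grant_ref_t ref;            /* granted once, reused across requests */
    struct page *page;
    struct list_head node;
};

/* Take an already-granted page from the pool; the caller copies the request
 * data through g->page instead of granting the request's own page. */
static struct pgrant *get_persistent_grant(struct list_head *pool)
{
    struct pgrant *g;

    if (list_empty(pool))
        return NULL;            /* pool exhausted: fall back to a fresh grant */
    g = list_first_entry(pool, struct pgrant, node);
    list_del(&g->node);
    return g;
}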
2012 Dec 03
1
xen-blkback: move free persistent grants code
Hello Roger Pau Monne,
The patch 4d4f270f1880: "xen-blkback: move free persistent grants
code" from Nov 16, 2012, leads to the following warning:
drivers/block/xen-blkback/blkback.c:238 free_persistent_gnts()
warn: 'persistent_gnt' was already freed.
drivers/block/xen-blkback/blkback.c
232 pages[segs_to_unmap] = persistent_gnt->page;
233
2012 Oct 24
2
Find VM ID
Hi Folks,
On one of the CentOS Xen servers I am seeing high load, and I found
two of the processes causing high CPU usage, which are [blkback.69.sda1]
and blkback.40.sda1.
So how can I find the VM ID associated with this blkback ID?
2008 Oct 22
1
DomU networking problem in opensuse 11
Hi,
Creating a new domain is a lot easier in openSUSE 11. I followed the instructions on the website and built a virtual machine which uses openSUSE 11 as well (I installed it from an ISO image). Everything works fine except the network: I cannot access the Internet from the DomU. Here is some information.
P.S. the DomU id is 1.
"brctl show" in Dom 0:
bridge name bridge id STP
2012 Mar 26
13
blkback global resources
All the resources allocated based on xen_blkif_reqs are global in
blkback. While (without having measured anything) I think that this
is bad from a QoS perspective (not least implied by a warning
issued by Citrix's multi-page-ring patches:
    if (blkif_reqs < BLK_RING_SIZE(order))
        printk(KERN_WARNING "WARNING: "
               "I/O request space (%d reqs) < ring