Hello,

A friend of mine was having weird occasional crashes on migration, and I took a look at the problem. The VM is a very stripped-down Ubuntu 12.04 environment (3.2.0 kernel) with a total of 96MB of RAM, but this appears to be a generic driver problem still present upstream.

The symptoms were that on about 5% of migrations, one or more block devices would fail to come back on resume. The relevant snippets of dmesg are:

[6673983.756117] xenwatch: page allocation failure: order:5, mode:0x4430
[6673983.756123] Pid: 12, comm: xenwatch Not tainted 3.2.0-29-virtual #46-Ubuntu
...
[6673983.756155] [<c01fdaac>] __get_free_pages+0x1c/0x30
[6673983.756161] [<c0232dd7>] kmalloc_order_trace+0x27/0xa0
[6673983.756165] [<c04998b1>] blkif_recover+0x71/0x550
[6673983.756168] [<c0499de5>] blkfront_resume+0x55/0x60
[6673983.756172] [<c044502a>] xenbus_dev_resume+0x4a/0x100
[6673983.756176] [<c048a2ad>] pm_op+0x17d/0x1a0
...
[6673983.756737] xenbus: resume vbd-51712 failed: -12
[6673983.756743] pm_op(): xenbus_dev_resume+0x0/0x100 returns -12
[6673983.756759] PM: Device vbd-51712 failed to restore: error -12
[6673983.867532] PM: restore of devices complete after 182.808 msecs

Looking at the code in blkif_recover()
(http://lxr.linux.no/#linux+v3.6.7/drivers/block/xen-blkfront.c#L1054):

In "Stage 1", as commented, we make a copy of the shadow map. We then
reset the contents of the real shadow map and selectively copy the
in-use entries back from the copy to the real map.

Looking at the code, it appears possible to do this rearranging in place
in the real shadow map, without requiring any memory allocation.

Is this a sensible suggestion, or have I overlooked something? This
order-5 allocation is a disaster lying in wait for VMs with high memory
pressure.

-- 
Andrew Cooper - Dom0 Kernel Engineer, Citrix XenServer
T: +44 (0)1223 225 900, http://www.citrix.com
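As an illustration of the suggestion, here is a minimal user-space sketch of the in-place rearrangement. The shadow_entry type, RING_SIZE value and in_use flag are simplified stand-ins, not the driver's real blk_shadow or its free-list handling; the point is only that a forward compaction pass never overwrites an unprocessed in-use entry, so no auxiliary copy is needed.

#include <stdio.h>
#include <string.h>

#define RING_SIZE 32

struct shadow_entry {
	int in_use;            /* stands in for "copy[i].request != NULL" */
	unsigned long data;    /* stands in for the rest of blk_shadow    */
};

static struct shadow_entry shadow[RING_SIZE];

/*
 * Compact the in-use entries towards the front of the array without an
 * auxiliary copy.  Because dst never overtakes src, no unprocessed
 * in-use entry is ever overwritten, so no temporary buffer is needed.
 */
static unsigned int recover_in_place(void)
{
	unsigned int dst = 0, src;

	for (src = 0; src < RING_SIZE; src++) {
		if (!shadow[src].in_use)
			continue;
		if (dst != src) {
			shadow[dst] = shadow[src];
			memset(&shadow[src], 0, sizeof(shadow[src]));
		}
		dst++;
	}

	/*
	 * Entries dst..RING_SIZE-1 are now unused; stage 2 of
	 * blkif_recover() would re-link them into the free list here.
	 */
	return dst;	/* number of in-flight requests to requeue */
}

int main(void)
{
	shadow[3].in_use = 1;
	shadow[3].data = 0xabcUL;
	shadow[7].in_use = 1;
	shadow[7].data = 0xdefUL;

	printf("%u in-flight entries compacted\n", recover_in_place());
	return 0;
}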
>>> On 22.11.12 at 13:57, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> In "Stage 1", as commented, we make a copy of the shadow map. We then
> reset the contents of the real shadow map and selectively copy the
> in-use entries back from the copy to the real map.
>
> Looking at the code, it appears possible to do this rearranging in place
> in the real shadow map, without requiring any memory allocation.
>
> Is this a sensible suggestion, or have I overlooked something? This
> order-5 allocation is a disaster lying in wait for VMs with high memory
> pressure.

While merging the multi-page ring patches, I think I tried to make
this an in-place copy operation, and it didn't work (don't recall
details though). This and/or the need to deal with shrinking ring
size across migration (maybe that was what really didn't work)
made me move stage 3 to kick_pending_request_queues(), and
allocate entries that actually need copying one by one, sticking
them on a list.

Jan
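A minimal user-space sketch of the approach Jan describes, assuming a simplified shadow_entry stand-in and a plain singly linked list; the forward-ported kernel's actual code, including the move of stage 3 into kick_pending_request_queues(), differs. The idea shown is one small allocation per in-flight entry instead of a single large copy.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define RING_SIZE 32

struct shadow_entry {
	int in_use;
	unsigned long data;
};

/* One small node per in-flight request, instead of one big copy. */
struct saved_entry {
	struct saved_entry *next;
	struct shadow_entry shadow;
};

static struct shadow_entry shadow[RING_SIZE];

/*
 * Save only the in-flight entries, one small allocation each, then reset
 * the real shadow map.  In the kernel each node would be a small
 * allocation rather than a single order-5 one.
 */
static struct saved_entry *save_in_flight(void)
{
	struct saved_entry *head = NULL, *node;
	unsigned int i;

	for (i = 0; i < RING_SIZE; i++) {
		if (!shadow[i].in_use)
			continue;
		node = malloc(sizeof(*node));
		if (!node)
			continue;	/* real code must handle allocation failure */
		node->shadow = shadow[i];
		node->next = head;
		head = node;
	}

	memset(shadow, 0, sizeof(shadow));	/* "stage 2": reset the real map */
	return head;				/* consumed later, entry by entry */
}

int main(void)
{
	struct saved_entry *list, *node;

	shadow[5].in_use = 1;
	shadow[5].data = 0x123UL;

	for (list = save_in_flight(); list; list = node) {
		node = list->next;
		printf("requeue entry with data 0x%lx\n", list->shadow.data);
		free(list);
	}
	return 0;
}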
On 22/11/12 13:46, Jan Beulich wrote:
>>>> On 22.11.12 at 13:57, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>> In "Stage 1", as commented, we make a copy of the shadow map. We then
>> reset the contents of the real shadow map and selectively copy the
>> in-use entries back from the copy to the real map.
>>
>> Looking at the code, it appears possible to do this rearranging in place
>> in the real shadow map, without requiring any memory allocation.
>>
>> Is this a sensible suggestion, or have I overlooked something? This
>> order-5 allocation is a disaster lying in wait for VMs with high memory
>> pressure.
> While merging the multi-page ring patches, I think I tried to make
> this an in-place copy operation, and it didn't work (don't recall
> details though). This and/or the need to deal with shrinking ring
> size across migration (maybe that was what really didn't work)
> made me move stage 3 to kick_pending_request_queues(), and
> allocate entries that actually need copying one by one, sticking
> them on a list.
>
> Jan

Where are your multi-page ring patches? Are you saying this code is
going to change very shortly?

If the copy and copy-back really can't be avoided, then making
"sizeof(info->shadow)/PAGE_SIZE" allocations of order 0 would be
substantially more friendly to environments with high memory pressure,
at the cost of slightly more complicated indexing in the loop.

-- 
Andrew Cooper - Dom0 Kernel Engineer, Citrix XenServer
T: +44 (0)1223 225 900, http://www.citrix.com
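A minimal user-space sketch of this alternative, with an illustrative entry size and helper names; in the kernel the chunks would be order-0 allocations and the indexing helper would replace direct copy[i] accesses.

#include <stdio.h>
#include <stdlib.h>

#define PAGE_SIZE	4096
#define RING_SIZE	32

struct shadow_entry {
	unsigned char payload[512];	/* size is illustrative only */
};

#define PER_PAGE	(PAGE_SIZE / sizeof(struct shadow_entry))
#define NR_CHUNKS	((RING_SIZE + PER_PAGE - 1) / PER_PAGE)

static struct shadow_entry *chunks[NR_CHUNKS];

/* The "slightly more complicated indexing": chunk number plus offset. */
static struct shadow_entry *copy_slot(unsigned int i)
{
	return &chunks[i / PER_PAGE][i % PER_PAGE];
}

/* Allocate the copy as NR_CHUNKS page-sized pieces (order 0 in the kernel). */
static int alloc_copy(void)
{
	unsigned int c;

	for (c = 0; c < NR_CHUNKS; c++) {
		chunks[c] = malloc(PAGE_SIZE);
		if (!chunks[c])
			goto fail;
	}
	return 0;

fail:
	while (c--)
		free(chunks[c]);
	return -1;
}

int main(void)
{
	if (alloc_copy())
		return 1;
	printf("%zu entries per page, %zu chunks, slot 17 at %p\n",
	       (size_t)PER_PAGE, (size_t)NR_CHUNKS, (void *)copy_slot(17));
	return 0;
}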
>>> On 22.11.12 at 16:07, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> On 22/11/12 13:46, Jan Beulich wrote:
>>>>> On 22.11.12 at 13:57, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>>> In "Stage 1", as commented, we make a copy of the shadow map. We then
>>> reset the contents of the real shadow map and selectively copy the
>>> in-use entries back from the copy to the real map.
>>>
>>> Looking at the code, it appears possible to do this rearranging in place
>>> in the real shadow map, without requiring any memory allocation.
>>>
>>> Is this a sensible suggestion, or have I overlooked something? This
>>> order-5 allocation is a disaster lying in wait for VMs with high memory
>>> pressure.
>> While merging the multi-page ring patches, I think I tried to make
>> this an in-place copy operation, and it didn't work (don't recall
>> details though). This and/or the need to deal with shrinking ring
>> size across migration (maybe that was what really didn't work)
>> made me move stage 3 to kick_pending_request_queues(), and
>> allocate entries that actually need copying one by one, sticking
>> them on a list.
>
> Where are your multi-page ring patches? Are you saying this code is
> going to change very shortly?

Oh, I implied that this was for our (forward-ported) kernel, which can be
found at http://kernel.opensuse.org/git; the master and SLE11-SP3 branches
should have those patches.

> If the copy and copy-back really can't be avoided, then making
> "sizeof(info->shadow)/PAGE_SIZE" allocations of order 0 would be
> substantially more friendly to environments with high memory pressure,
> at the cost of slightly more complicated indexing in the loop.

As I said, I converted the one big allocation into per-entry ones (at the
same time avoiding allocating space for entries that aren't in use).

Jan