Jean Guyader
2011-Nov-23 16:07 UTC
[PATCH] hvmloader: Intel GPU passthrough, reserve OpRegion
The Intel GPU uses a two-page NVS region called the OpRegion.
In order to get full support for the driver in the guest
we need to map this region.

This patch reserves 2 pages at the top of RAM and
marks this region as NVS in the e820. Then we write the
address to the config space (offset 0xfc) so the device
model can map the OpRegion at this address in the guest.

Signed-off-by: Jean Guyader <jean.guyader@eu.citrix.com>
---
 tools/firmware/hvmloader/config.h   |    1 +
 tools/firmware/hvmloader/e820.c     |    8 ++++++++
 tools/firmware/hvmloader/pci.c      |   28 ++++++++++++++++++++++++++++
 tools/firmware/hvmloader/pci_regs.h |    2 ++
 4 files changed, 39 insertions(+), 0 deletions(-)
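The mechanism being described is small: hvmloader sets two guest pages aside, marks them NVS in the e820, and publishes their guest-physical address to the device model through config register 0xfc. A minimal sketch of the config-space half of that, assuming pci_writel() is hvmloader's 32-bit config-space write helper; the PCI_INTEL_OPREGION name and the wrapper function are illustrative, not the patch's actual identifiers:

    #define PCI_INTEL_OPREGION 0xfc   /* config register discussed in this thread */

    /* Sketch only: tell the device model where the reserved OpRegion
     * pages live in guest-physical space.  vga_devfn is the
     * passed-through Intel GPU; opregion_addr is the address of the
     * two pages marked NVS in the e820. */
    static void igd_opregion_publish(uint32_t vga_devfn, uint32_t opregion_addr)
    {
        pci_writel(vga_devfn, PCI_INTEL_OPREGION, opregion_addr);
    }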
Keir Fraser
2011-Nov-23 17:20 UTC
Re: [PATCH] hvmloader: Intel GPU passthrough, reserve OpRegion
On 23/11/2011 16:07, "Jean Guyader" <jean.guyader@eu.citrix.com> wrote:
>
> The Intel GPU uses a two-page NVS region called the OpRegion.
> In order to get full support for the driver in the guest
> we need to map this region.
>
> This patch reserves 2 pages at the top of RAM and
> marks this region as NVS in the e820. Then we write the
> address to the config space (offset 0xfc) so the device
> model can map the OpRegion at this address in the guest.

Please use mem_hole_alloc() rather than adjusting {low,high}_mem_pgend.

Is it correct to do this for *all* gfx devices with Intel vendor id?

 -- Keir

> Signed-off-by: Jean Guyader <jean.guyader@eu.citrix.com>
> ---
>  tools/firmware/hvmloader/config.h   |    1 +
>  tools/firmware/hvmloader/e820.c     |    8 ++++++++
>  tools/firmware/hvmloader/pci.c      |   28 ++++++++++++++++++++++++++++
>  tools/firmware/hvmloader/pci_regs.h |    2 ++
>  4 files changed, 39 insertions(+), 0 deletions(-)
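For context, mem_hole_alloc() is hvmloader's existing allocator for guest-physical space in the hole below 4GB, so the two OpRegion pages can come from there instead of hand-adjusting the memory counters. Roughly, under the assumption that it takes a page count and returns the first allocated page frame number (check util.c before relying on this):

    /* Instead of decrementing hvm_info->low_mem_pgend / high_mem_pgend
     * by hand, take two page frames from the memory hole.  PAGE_SHIFT
     * (12) is already defined by hvmloader. */
    uint32_t igd_opregion_pgbase = mem_hole_alloc(2);
    uint32_t igd_opregion_addr   = igd_opregion_pgbase << PAGE_SHIFT;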
Jean Guyader
2011-Nov-23 17:28 UTC
Re: [PATCH] hvmloader: Intel GPU passthrough, reserve OpRegion
On 23 November 2011 17:20, Keir Fraser <keir.xen@gmail.com> wrote:
> On 23/11/2011 16:07, "Jean Guyader" <jean.guyader@eu.citrix.com> wrote:
>
>> The Intel GPU uses a two-page NVS region called the OpRegion.
>> In order to get full support for the driver in the guest
>> we need to map this region.
>>
>> This patch reserves 2 pages at the top of RAM and
>> marks this region as NVS in the e820. Then we write the
>> address to the config space (offset 0xfc) so the device
>> model can map the OpRegion at this address in the guest.
>
> Please use mem_hole_alloc() rather than adjusting {low,high}_mem_pgend.

Ok, that is handy.

> Is it correct to do this for *all* gfx devices with Intel vendor id?

The OpRegion is an Intel GPU-specific thing.

Jean
Jean Guyader
2011-Nov-23 17:37 UTC
Re: [PATCH] hvmloader: Intel GPU passthrough, reserve OpRegion
On 23 November 2011 17:28, Jean Guyader <jean.guyader@gmail.com> wrote:
> On 23 November 2011 17:20, Keir Fraser <keir.xen@gmail.com> wrote:
>> On 23/11/2011 16:07, "Jean Guyader" <jean.guyader@eu.citrix.com> wrote:
>>
>>> The Intel GPU uses a two-page NVS region called the OpRegion.
>>> In order to get full support for the driver in the guest
>>> we need to map this region.
>>>
>>> This patch reserves 2 pages at the top of RAM and
>>> marks this region as NVS in the e820. Then we write the
>>> address to the config space (offset 0xfc) so the device
>>> model can map the OpRegion at this address in the guest.
>>
>> Please use mem_hole_alloc() rather than adjusting {low,high}_mem_pgend.
>
> Ok, that is handy.
>
>> Is it correct to do this for *all* gfx devices with Intel vendor id?
>
> The OpRegion is an Intel GPU-specific thing.

Sorry, I didn't read carefully the first time; yes, I think it's correct
to do that for all Intel GPUs. Maybe I can do a read on 0xfc first to
check that I don't get something dodgy like 0xffffffff or 0. I could also
double-check in Qemu that it's an NVS region on the host, but that won't
work for a stub domain.

Jean
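A sketch of the check being proposed here, assuming pci_readw()/pci_readl()/pci_writel() are hvmloader's config-space helpers and that the host's 0xfc value is visible through the passed-through device; the variable names are illustrative:

    uint16_t vendor_id     = pci_readw(vga_devfn, PCI_VENDOR_ID);
    uint32_t host_opregion = pci_readl(vga_devfn, 0xfc);

    /* Only Intel GPUs (vendor id 0x8086) carry an OpRegion, and a value
     * of 0 or all-ones at 0xfc means there is nothing sensible to map. */
    if ( (vendor_id == 0x8086) &&
         (host_opregion != 0) && (host_opregion != 0xffffffff) )
    {
        uint32_t pgbase = mem_hole_alloc(2);        /* two NVS pages */
        pci_writel(vga_devfn, 0xfc, pgbase << PAGE_SHIFT);
    }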
Jean Guyader
2011-Nov-24 09:35 UTC
Re: [PATCH] hvmloader: Intel GPU passthrough, reserve OpRegion
On 23/11 05:20, Keir Fraser wrote:
> On 23/11/2011 16:07, "Jean Guyader" <jean.guyader@eu.citrix.com> wrote:
>
>> The Intel GPU uses a two-page NVS region called the OpRegion.
>> In order to get full support for the driver in the guest
>> we need to map this region.
>>
>> This patch reserves 2 pages at the top of RAM and
>> marks this region as NVS in the e820. Then we write the
>> address to the config space (offset 0xfc) so the device
>> model can map the OpRegion at this address in the guest.
>
> Please use mem_hole_alloc() rather than adjusting {low,high}_mem_pgend.

I'm calling mem_hole_alloc() in pci_setup (see patch attached),
but that causes an overlap in the e820. Is that expected?

(XEN) HVM5: E820 table:
(XEN) HVM5: [00]: 00000000:00000000 - 00000000:0009e000: RAM
(XEN) HVM5: [01]: 00000000:0009e000 - 00000000:000a0000: RESERVED
(XEN) HVM5: HOLE: 00000000:000a0000 - 00000000:000e0000
(XEN) HVM5: [02]: 00000000:000e0000 - 00000000:00100000: RESERVED
(XEN) HVM5: [03]: 00000000:00100000 - 00000000:3f800000: RAM
(XEN) HVM5: HOLE: 00000000:3f800000 - 00000000:feff8000
(XEN) HVM5: [04]: 00000000:feff8000 - 00000000:feffa000: NVS
(XEN) HVM5: OVERLAP!!
(XEN) HVM5: [05]: 00000000:fc000000 - 00000001:00000000: RESERVED

Jean
Keir Fraser
2011-Nov-24 09:56 UTC
Re: [PATCH] hvmloader: Intel GPU passthrough, reserve OpRegion
On 24/11/2011 09:35, "Jean Guyader" <jean.guyader@eu.citrix.com> wrote:

> On 23/11 05:20, Keir Fraser wrote:
>> On 23/11/2011 16:07, "Jean Guyader" <jean.guyader@eu.citrix.com> wrote:
>>
>>> The Intel GPU uses a two-page NVS region called the OpRegion.
>>> In order to get full support for the driver in the guest
>>> we need to map this region.
>>>
>>> This patch reserves 2 pages at the top of RAM and
>>> marks this region as NVS in the e820. Then we write the
>>> address to the config space (offset 0xfc) so the device
>>> model can map the OpRegion at this address in the guest.
>>
>> Please use mem_hole_alloc() rather than adjusting {low,high}_mem_pgend.
>
> I'm calling mem_hole_alloc() in pci_setup (see patch attached),
> but that causes an overlap in the e820. Is that expected?

You'll have to adjust your changes to build_e820_table() to split the range
RESERVED_MEMBASE-0x100000000 into two pieces partitioned by your NVS region.
The region starting at RESERVED_MEMBASE comes *before* your NVS region. Then
you add another reserved region up to 0x100000000 if your NVS region exists.

 -- Keir

> (XEN) HVM5: E820 table:
> (XEN) HVM5: [00]: 00000000:00000000 - 00000000:0009e000: RAM
> (XEN) HVM5: [01]: 00000000:0009e000 - 00000000:000a0000: RESERVED
> (XEN) HVM5: HOLE: 00000000:000a0000 - 00000000:000e0000
> (XEN) HVM5: [02]: 00000000:000e0000 - 00000000:00100000: RESERVED
> (XEN) HVM5: [03]: 00000000:00100000 - 00000000:3f800000: RAM
> (XEN) HVM5: HOLE: 00000000:3f800000 - 00000000:feff8000
> (XEN) HVM5: [04]: 00000000:feff8000 - 00000000:feffa000: NVS
> (XEN) HVM5: OVERLAP!!
> (XEN) HVM5: [05]: 00000000:fc000000 - 00000001:00000000: RESERVED
>
> Jean
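In build_e820_table() terms, the split described above amounts to something like the following; igd_opregion_pgbase and the entry bookkeeping are simplified illustrations, while E820_RESERVED/E820_NVS and RESERVED_MEMBASE are hvmloader's usual constants:

    uint64_t opregion_start = (uint64_t)igd_opregion_pgbase << PAGE_SHIFT;

    /* Reserved area from RESERVED_MEMBASE up to the OpRegion pages. */
    e820[nr].addr = RESERVED_MEMBASE;
    e820[nr].size = opregion_start - RESERVED_MEMBASE;
    e820[nr].type = E820_RESERVED;
    nr++;

    /* The two OpRegion pages, advertised as NVS. */
    e820[nr].addr = opregion_start;
    e820[nr].size = 2ull << PAGE_SHIFT;
    e820[nr].type = E820_NVS;
    nr++;

    /* Remaining reserved space up to the 4GB boundary (0x100000000). */
    e820[nr].addr = opregion_start + (2ull << PAGE_SHIFT);
    e820[nr].size = (1ull << 32) - e820[nr].addr;
    e820[nr].type = E820_RESERVED;
    nr++;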
Stefano Stabellini
2011-Nov-24 11:18 UTC
Re: [PATCH] hvmloader: Intel GPU passthrough, reserve OpRegion
On Wed, 23 Nov 2011, Jean Guyader wrote:
> On 23 November 2011 17:28, Jean Guyader <jean.guyader@gmail.com> wrote:
> > On 23 November 2011 17:20, Keir Fraser <keir.xen@gmail.com> wrote:
> >> On 23/11/2011 16:07, "Jean Guyader" <jean.guyader@eu.citrix.com> wrote:
> >>
> >>> The Intel GPU uses a two-page NVS region called the OpRegion.
> >>> In order to get full support for the driver in the guest
> >>> we need to map this region.
> >>>
> >>> This patch reserves 2 pages at the top of RAM and
> >>> marks this region as NVS in the e820. Then we write the
> >>> address to the config space (offset 0xfc) so the device
> >>> model can map the OpRegion at this address in the guest.
> >>
> >> Please use mem_hole_alloc() rather than adjusting {low,high}_mem_pgend.
> >
> > Ok, that is handy.
> >
> >> Is it correct to do this for *all* gfx devices with Intel vendor id?
> >
> > The OpRegion is an Intel GPU-specific thing.
>
> Sorry, I didn't read carefully the first time; yes, I think it's correct
> to do that for all Intel GPUs. Maybe I can do a read on 0xfc first to
> check that I don't get something dodgy like 0xffffffff or 0. I could also
> double-check in Qemu that it's an NVS region on the host, but that won't
> work for a stub domain.

Access to physical memory through /dev/mem should work from the stubdom.
Jean Guyader
2011-Nov-24 11:19 UTC
Re: [PATCH] hvmloader: Intel GPU passthrough, reserve OpRegion
On 24/11 11:18, Stefano Stabellini wrote:
> On Wed, 23 Nov 2011, Jean Guyader wrote:
> > On 23 November 2011 17:28, Jean Guyader <jean.guyader@gmail.com> wrote:
> > > On 23 November 2011 17:20, Keir Fraser <keir.xen@gmail.com> wrote:
> > >> On 23/11/2011 16:07, "Jean Guyader" <jean.guyader@eu.citrix.com> wrote:
> > >>
> > >>> The Intel GPU uses a two-page NVS region called the OpRegion.
> > >>> In order to get full support for the driver in the guest
> > >>> we need to map this region.
> > >>>
> > >>> This patch reserves 2 pages at the top of RAM and
> > >>> marks this region as NVS in the e820. Then we write the
> > >>> address to the config space (offset 0xfc) so the device
> > >>> model can map the OpRegion at this address in the guest.
> > >>
> > >> Please use mem_hole_alloc() rather than adjusting {low,high}_mem_pgend.
> > >
> > > Ok, that is handy.
> > >
> > >> Is it correct to do this for *all* gfx devices with Intel vendor id?
> > >
> > > The OpRegion is an Intel GPU-specific thing.
> >
> > Sorry, I didn't read carefully the first time; yes, I think it's correct
> > to do that for all Intel GPUs. Maybe I can do a read on 0xfc first to
> > check that I don't get something dodgy like 0xffffffff or 0. I could also
> > double-check in Qemu that it's an NVS region on the host, but that won't
> > work for a stub domain.
>
> Access to physical memory through /dev/mem should work from the stubdom.

I would think that /dev/mem in a stubdom will expose the memory of the guest,
but maybe I'm wrong.

Jean
Stefano Stabellini
2011-Nov-24 11:25 UTC
Re: [PATCH] hvmloader: Intel GPU passthrough, reserve OpRegion
On Thu, 24 Nov 2011, Jean Guyader wrote:
> On 24/11 11:18, Stefano Stabellini wrote:
> > On Wed, 23 Nov 2011, Jean Guyader wrote:
> > > On 23 November 2011 17:28, Jean Guyader <jean.guyader@gmail.com> wrote:
> > > > On 23 November 2011 17:20, Keir Fraser <keir.xen@gmail.com> wrote:
> > > >> On 23/11/2011 16:07, "Jean Guyader" <jean.guyader@eu.citrix.com> wrote:
> > > >>
> > > >>> The Intel GPU uses a two-page NVS region called the OpRegion.
> > > >>> In order to get full support for the driver in the guest
> > > >>> we need to map this region.
> > > >>>
> > > >>> This patch reserves 2 pages at the top of RAM and
> > > >>> marks this region as NVS in the e820. Then we write the
> > > >>> address to the config space (offset 0xfc) so the device
> > > >>> model can map the OpRegion at this address in the guest.
> > > >>
> > > >> Please use mem_hole_alloc() rather than adjusting {low,high}_mem_pgend.
> > > >
> > > > Ok, that is handy.
> > > >
> > > >> Is it correct to do this for *all* gfx devices with Intel vendor id?
> > > >
> > > > The OpRegion is an Intel GPU-specific thing.
> > >
> > > Sorry, I didn't read carefully the first time; yes, I think it's correct
> > > to do that for all Intel GPUs. Maybe I can do a read on 0xfc first to
> > > check that I don't get something dodgy like 0xffffffff or 0. I could also
> > > double-check in Qemu that it's an NVS region on the host, but that won't
> > > work for a stub domain.
> >
> > Access to physical memory through /dev/mem should work from the stubdom.
>
> I would think that /dev/mem in a stubdom will expose the memory of the guest,
> but maybe I'm wrong.

Nope, it maps host memory. Of course you need to make sure you have given
enough privileges to the stubdom so that it can actually map the memory area.
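On the device-model side, mapping the host OpRegion (from dom0 qemu or from a stubdom that has been granted the necessary privilege, as noted above) is an ordinary mapping of physical memory through /dev/mem. A self-contained sketch under that assumption; this is not qemu's actual code path, and host_opregion_addr stands for the physical address read from the host GPU's config register 0xfc:

    #include <fcntl.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define OPREGION_SIZE (2 * 4096)   /* two 4k pages */

    /* Map the host OpRegion so the device model can expose it to the
     * guest at the address hvmloader wrote into config register 0xfc.
     * Returns a pointer to the mapping, or NULL on failure. */
    static void *map_host_opregion(uint64_t host_opregion_addr)
    {
        void *p;
        int fd = open("/dev/mem", O_RDWR | O_SYNC);

        if ( fd < 0 )
            return NULL;

        p = mmap(NULL, OPREGION_SIZE, PROT_READ | PROT_WRITE,
                 MAP_SHARED, fd, (off_t)host_opregion_addr);
        close(fd);

        return (p == MAP_FAILED) ? NULL : p;
    }

From a stubdom this only works if the toolstack has granted it access to that host memory range, which is the privilege point made above.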
Jean Guyader
2011-Nov-24 14:53 UTC
[PATCH] hvmloader: Intel GPU passthrough, reserve OpRegion
The Intel GPU uses a two-page NVS region called the OpRegion.
In order to get full support for the driver in the guest
we need to map this region.

This patch reserves 2 pages at the top of RAM and
marks this region as NVS in the e820. Then we write the
address to the config space (offset 0xfc) so the device
model can map the OpRegion at this address in the guest.

Signed-off-by: Jean Guyader <jean.guyader@eu.citrix.com>
---
 tools/firmware/hvmloader/config.h   |    1 +
 tools/firmware/hvmloader/e820.c     |   34 ++++++++++++++++++++++++++++++----
 tools/firmware/hvmloader/pci.c      |   14 ++++++++++++++
 tools/firmware/hvmloader/pci_regs.h |    2 ++
 4 files changed, 47 insertions(+), 4 deletions(-)
Jean Guyader
2011-Nov-24 15:00 UTC
Re: [PATCH] hvmloader: Intel GPU passthrough, reserve OpRegion
The description was slightly wrong; here is a new one:

The Intel GPU uses a two-page NVS region called the OpRegion.
In order to get full support for the driver in the guest
we need to map this region.

This patch reserves 2 pages at the top of the memory in the
reserved area and marks this region as NVS in the e820. Then we
write the address to the config space (offset 0xfc) so the device
model can map the OpRegion at this address in the guest.

Signed-off-by: Jean Guyader <jean.guyader@eu.citrix.com>

On 24 November 2011 14:53, Jean Guyader <jean.guyader@eu.citrix.com> wrote:
>
> The Intel GPU uses a two-page NVS region called the OpRegion.
> In order to get full support for the driver in the guest
> we need to map this region.
>
> This patch reserves 2 pages at the top of RAM and
> marks this region as NVS in the e820. Then we write the
> address to the config space (offset 0xfc) so the device
> model can map the OpRegion at this address in the guest.
>
> Signed-off-by: Jean Guyader <jean.guyader@eu.citrix.com>
> ---
>  tools/firmware/hvmloader/config.h   |    1 +
>  tools/firmware/hvmloader/e820.c     |   34 ++++++++++++++++++++++++++++++----
>  tools/firmware/hvmloader/pci.c      |   14 ++++++++++++++
>  tools/firmware/hvmloader/pci_regs.h |    2 ++
>  4 files changed, 47 insertions(+), 4 deletions(-)