search for: map_idx

Displaying 16 results from an estimated 16 matches for "map_idx".

2007 Jan 18
13
[PATCH 0/5] dump-core take 2:
The following dump-core patches change its format to ELF, add a PFN-GMFN table, add HVM support, and add experimental IA64 support. - ELF format: a program header and note section are adopted. - HVM domain support: to know the memory area to dump, XENMEM_set_memory_map is added; the XENMEM_memory_map hypercall is for the current domain, so a new one is created, and the HVM domain builder tells Xen its
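The ELF layout described in that excerpt (program headers plus a note section carrying the dump metadata) can be inspected with generic ELF tooling. Below is a minimal sketch, not taken from the xen tools, assuming a 64-bit little-endian core file at the placeholder path "domain.core"; it only checks the ELF core header and lists the program headers.

/* Minimal sketch (not xen tools code): list the program headers of an
 * ELF core file such as the one written by "xm/xl dump-core".
 * Assumes a 64-bit ELF on a little-endian host; "domain.core" is a
 * placeholder path. */
#include <elf.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    FILE *f = fopen("domain.core", "rb");
    Elf64_Ehdr eh;
    Elf64_Phdr ph;
    int i;

    if (!f || fread(&eh, sizeof(eh), 1, f) != 1) {
        perror("domain.core");
        return 1;
    }
    if (memcmp(eh.e_ident, ELFMAG, SELFMAG) != 0 || eh.e_type != ET_CORE) {
        fprintf(stderr, "not an ELF core file\n");
        return 1;
    }
    for (i = 0; i < eh.e_phnum; i++) {
        if (fseek(f, (long)(eh.e_phoff + (Elf64_Off)i * eh.e_phentsize),
                  SEEK_SET) != 0 ||
            fread(&ph, sizeof(ph), 1, f) != 1)
            break;
        /* PT_NOTE headers point at the note data; the other headers
         * describe the rest of the dump contents. */
        printf("phdr %d: type=%u offset=0x%llx filesz=0x%llx\n", i,
               (unsigned)ph.p_type, (unsigned long long)ph.p_offset,
               (unsigned long long)ph.p_filesz);
    }
    fclose(f);
    return 0;
}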
2014 Dec 19
2
[PATCH RFC 2/5] s390: add pci_iomap_range
...int bar, > + unsigned long offset, > + unsigned long max) > { > struct zpci_dev *zdev = get_zdev(pdev); > u64 addr; > @@ -270,14 +273,27 @@ void __iomem *pci_iomap(struct pci_dev *pdev, int bar, unsigned long max) > > idx = zdev->bars[bar].map_idx; > spin_lock(&zpci_iomap_lock); > - zpci_iomap_start[idx].fh = zdev->fh; > - zpci_iomap_start[idx].bar = bar; > + if (zpci_iomap_start[idx].count++) { > + BUG_ON(zpci_iomap_start[idx].fh != zdev->fh || > + zpci_iomap_start[idx].bar != bar); > + } else { ...
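This hit and the other pci_iomap_range results below all quote the same hunk: the s390 zpci_iomap_start[] entry for a BAR gains a use count, so repeated mappings of one BAR share the slot, and the slot's identity is written only by the first user. A standalone sketch of that pattern follows; the field names mirror the excerpt, but the types are simplified placeholders, assert() stands in for BUG_ON, and the zpci_iomap_lock serialization from the kernel patch is omitted. This is not the kernel code itself.

/* Standalone sketch of the reference-counted iomap-slot pattern shown
 * in the excerpt above. Simplified placeholder types; not kernel code. */
#include <assert.h>

struct iomap_entry {
    unsigned int fh;     /* function handle the slot is bound to */
    int          bar;    /* BAR number the slot is bound to */
    unsigned int count;  /* number of live mappings using this slot */
};

#define NUM_SLOTS 16
static struct iomap_entry iomap_table[NUM_SLOTS];

/* Take a reference on slot idx for (fh, bar); the first caller claims it. */
static void iomap_get(int idx, unsigned int fh, int bar)
{
    struct iomap_entry *e = &iomap_table[idx];

    if (e->count++) {
        /* Slot already in use: it must describe the same device/BAR. */
        assert(e->fh == fh && e->bar == bar);
    } else {
        e->fh = fh;
        e->bar = bar;
    }
}

/* Drop a reference; the last caller releases the slot. */
static void iomap_put(int idx)
{
    struct iomap_entry *e = &iomap_table[idx];

    assert(e->count > 0);
    if (--e->count == 0) {
        e->fh = 0;
        e->bar = -1;
    }
}

int main(void)
{
    iomap_get(3, 0x1234, 0);   /* first mapping of BAR 0 claims slot 3  */
    iomap_get(3, 0x1234, 0);   /* a second mapping just bumps the count */
    iomap_put(3);
    iomap_put(3);              /* last unmap clears the slot            */
    return 0;
}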
2014 Dec 15
0
[PATCH RFC 2/5] s390: add pci_iomap_range
...pci_iomap_range(struct pci_dev *pdev, + int bar, + unsigned long offset, + unsigned long max) { struct zpci_dev *zdev = get_zdev(pdev); u64 addr; @@ -270,14 +273,27 @@ void __iomem *pci_iomap(struct pci_dev *pdev, int bar, unsigned long max) idx = zdev->bars[bar].map_idx; spin_lock(&zpci_iomap_lock); - zpci_iomap_start[idx].fh = zdev->fh; - zpci_iomap_start[idx].bar = bar; + if (zpci_iomap_start[idx].count++) { + BUG_ON(zpci_iomap_start[idx].fh != zdev->fh || + zpci_iomap_start[idx].bar != bar); + } else { + zpci_iomap_start[idx].fh = zdev->...
2015 Jan 14
0
[PATCH v3 10/16] s390: add pci_iomap_range
...pci_iomap_range(struct pci_dev *pdev, + int bar, + unsigned long offset, + unsigned long max) { struct zpci_dev *zdev = get_zdev(pdev); u64 addr; @@ -270,14 +273,27 @@ void __iomem *pci_iomap(struct pci_dev *pdev, int bar, unsigned long max) idx = zdev->bars[bar].map_idx; spin_lock(&zpci_iomap_lock); - zpci_iomap_start[idx].fh = zdev->fh; - zpci_iomap_start[idx].bar = bar; + if (zpci_iomap_start[idx].count++) { + BUG_ON(zpci_iomap_start[idx].fh != zdev->fh || + zpci_iomap_start[idx].bar != bar); + } else { + zpci_iomap_start[idx].fh = zdev->...
2014 Dec 19
0
[PATCH RFC 2/5] s390: add pci_iomap_range
...signed long offset, > > + unsigned long max) > > { > > struct zpci_dev *zdev = get_zdev(pdev); > > u64 addr; > > @@ -270,14 +273,27 @@ void __iomem *pci_iomap(struct pci_dev *pdev, int bar, unsigned long max) > > > > idx = zdev->bars[bar].map_idx; > > spin_lock(&zpci_iomap_lock); > > - zpci_iomap_start[idx].fh = zdev->fh; > > - zpci_iomap_start[idx].bar = bar; > > + if (zpci_iomap_start[idx].count++) { > > + BUG_ON(zpci_iomap_start[idx].fh != zdev->fh || > > + zpci_iomap_start[idx].ba...
2015 Jan 16
1
[PATCH v3 10/16] s390: add pci_iomap_range
...int bar, > + unsigned long offset, > + unsigned long max) > { > struct zpci_dev *zdev = get_zdev(pdev); > u64 addr; > @@ -270,14 +273,27 @@ void __iomem *pci_iomap(struct pci_dev *pdev, int bar, unsigned long max) > > idx = zdev->bars[bar].map_idx; > spin_lock(&zpci_iomap_lock); > - zpci_iomap_start[idx].fh = zdev->fh; > - zpci_iomap_start[idx].bar = bar; > + if (zpci_iomap_start[idx].count++) { > + BUG_ON(zpci_iomap_start[idx].fh != zdev->fh || > + zpci_iomap_start[idx].bar != bar); > + } else { ...
2014 Dec 15
6
[PATCH RFC 0/5] virtio pci: virtio 1.0 support
This is on top of 3.19 master + my bugfix patches, and adds virtio 1.0 support to virtio pci. This is 3.20 material I think. Would like to get feedback on s390 change as it's untested. Michael S Tsirkin (2): pci: add pci_iomap_range s390: add pci_iomap_range Michael S. Tsirkin (2): virtio_pci: modern driver virtio_pci: macros for PCI layout offsets. Rusty Russell (1): virtio-pci:
2015 Jan 14
22
[PATCH v3 00/16] virtio-pci: towards virtio 1.0 guest support
Changes since v2: handling for devices without config space (e.g. rng); reduce # of mappings for VQs. These patches seem to work fine on my virtio-1.0 qemu branch. There haven't been any bugs since v2: just minor cleanups and enhancements. QEMU side is still undergoing polishing, but is already testable. Rusty, what do you think? Let's merge these for 3.20? Also - will you be doing that
2013 Nov 04
17
Fwd: NetBSD xl core-dump not working... Memory fault (core dumped)
On 31.10.13 04:34, Miguel Clara wrote: > I was trying to get a core-dump for a domU with xl and got this error: > > # xl dump-core 20 test.core > Memory fault > > GDB shows this: > > a# gdb xl xl.core > GNU gdb (GDB) 7.3.1 > Copyright (C) 2011 Free Software Foundation, Inc. > License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html> >