search for: mmios

Displaying 15 results from an estimated 1974 matches for "mmios".

2013 Jul 24
11
[Bug 67255] New: black screen after resuming from Hibernate
https://bugs.freedesktop.org/show_bug.cgi?id=67255 Priority: medium Bug ID: 67255 Assignee: nouveau at lists.freedesktop.org Summary: black screen after resuming from Hibernate QA Contact: xorg-team at lists.x.org Severity: normal Classification: Unclassified OS: Linux (All) Reporter: michele.cane at
2016 Oct 18
4
NVAC "No Signal"
Fixes "No Signal" via HDMI from NVIDIA Corporation ION VGA (rev b1). Ref. "drm/nouveau/disp/g94: implement workaround for dvi issue on fx380" https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=2a4bd8a The last working Fedora kernel: 4.8.0-0.rc0.git3.1.fc25. Patched and tested with: $ modinfo -n nouveau
2009 Feb 14
5
Expect Deadlock!
Running some company-specific software, which is an exam/test designed for Windows machines. On Vista it requires the addition of msvbvm50.dll. When installed in WINE, it has required the addition of jet40 and vb5run in winetricks to make the program run. The program runs fine until it works out whether the user passed or failed. If the user fails, it reports with no error. When the user passes
2011 Dec 01
2
How does vmm get all mmio areas of pci devices?
Hi, can anyone help? I am puzzled by the question of a device's MMIO areas. As I understand it, an MMIO operation from the guest OS is handled by the VMM in the following steps: 1: Qemu-dm does its initialization and presents virtual devices to the guest OS. 2: The virtual BIOS executes PCI_setup; it scans the PCI bus and reads the configuration space of all devices, then the virtual BIOS allocates system resources (like port I/O ranges, MMIO
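
The BAR-sizing step in (2) is the part that actually discovers a device's MMIO ranges. A minimal sketch of how a (virtual) BIOS does it, assuming the legacy 0xCF8/0xCFC config mechanism on x86; the helper names are illustrative, not from Xen or qemu-dm:

  #include <stdint.h>

  static inline void outl(uint32_t val, uint16_t port)
  {
      __asm__ volatile ("outl %0, %1" :: "a"(val), "Nd"(port));
  }

  static inline uint32_t inl(uint16_t port)
  {
      uint32_t val;
      __asm__ volatile ("inl %1, %0" : "=a"(val) : "Nd"(port));
      return val;
  }

  static uint32_t pci_cfg_read(int bus, int dev, int fn, int off)
  {
      outl(0x80000000u | (bus << 16) | (dev << 11) | (fn << 8) | (off & 0xFC), 0xCF8);
      return inl(0xCFC);
  }

  static void pci_cfg_write(int bus, int dev, int fn, int off, uint32_t val)
  {
      outl(0x80000000u | (bus << 16) | (dev << 11) | (fn << 8) | (off & 0xFC), 0xCF8);
      outl(val, 0xCFC);
  }

  /* Size BAR0 (config offset 0x10): save it, write all-ones, read back,
   * restore.  The device hardwires the low size bits to zero, so
   * ~probe + 1 is the region length; bit 0 distinguishes I/O (1) from
   * memory/MMIO (0). */
  static uint32_t pci_bar0_size(int bus, int dev, int fn)
  {
      uint32_t saved = pci_cfg_read(bus, dev, fn, 0x10);
      pci_cfg_write(bus, dev, fn, 0x10, 0xFFFFFFFFu);
      uint32_t probe = pci_cfg_read(bus, dev, fn, 0x10);
      pci_cfg_write(bus, dev, fn, 0x10, saved);

      if (probe == 0)
          return 0;                        /* BAR not implemented */
      if (saved & 1)
          return ~(probe & ~0x3u) + 1;     /* I/O BAR */
      return ~(probe & ~0xFu) + 1;         /* memory (MMIO) BAR */
  }

The VMM observes these config-space accesses, so once the BIOS programs a base address into a BAR, the VMM knows which guest-physical range to treat as that device's MMIO window.
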
2013 Apr 04
0
[PATCH v2 0/6] kvm: pci PORT IO MMIO and PV MMIO speed tests
These patches add a test device, useful for measuring the speed of MMIO versus PIO in different configurations. As I didn't want to reserve a hardcoded range of memory, I added a PCI device for this instead. Used together with the kvm unittest patches I posted on the kvm mailing list. To use, simply add the device on the PCI bus. Example test output: vmcall 1519 .... outl_to_kernel 1745
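
The quoted figures (vmcall 1519, outl_to_kernel 1745) are cycle counts per operation. The measurement idea can be sketched from userspace with rdtsc, assuming an x86 guest and root privileges for ioperm(); this is a rough illustration, not the posted unittest:

  #include <stdio.h>
  #include <stdint.h>
  #include <sys/io.h>
  #include <x86intrin.h>

  int main(void)
  {
      if (ioperm(0x80, 1, 1)) {        /* enable access to port 0x80; needs root */
          perror("ioperm");
          return 1;
      }

      const int iters = 100000;
      uint64_t start = __rdtsc();
      for (int i = 0; i < iters; i++)
          outb(0, 0x80);               /* PIO write: vmexits on a KVM guest */
      uint64_t cycles = __rdtsc() - start;

      printf("outb: %llu cycles/op\n",
             (unsigned long long)(cycles / iters));
      return 0;
  }

Timing an MMIO write the same way just needs the loop body swapped for a store to an mmap'd BAR, which is what the test device in these patches provides a safe target for.
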
2020 Apr 08
5
[PATCH] x86: mmiotrace: Use cpumask_available for cpumask_var_t variables
When building with Clang + -Wtautological-compare and CONFIG_CPUMASK_OFFSTACK unset: arch/x86/mm/mmio-mod.c:375:6: warning: comparison of array 'downed_cpus' equal to a null pointer is always false [-Wtautological-pointer-compare] if (downed_cpus == NULL && ^~~~~~~~~~~ ~~~~ arch/x86/mm/mmio-mod.c:405:6: warning: comparison of array 'downed_cpus'
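
The warning is sound: with CONFIG_CPUMASK_OFFSTACK unset, cpumask_var_t is a one-element array rather than a pointer, so the NULL comparison can never be true. cpumask_available() compiles to the right test in both configurations. A sketch of the shape of the fix (not the literal patch):

  #include <linux/cpumask.h>
  #include <linux/gfp.h>
  #include <linux/printk.h>

  static cpumask_var_t downed_cpus;

  static void enter_uniprocessor(void)
  {
      /* cpumask_available() tests whether the mask storage exists
       * regardless of CONFIG_CPUMASK_OFFSTACK; "downed_cpus == NULL"
       * only made sense in the offstack (pointer) configuration. */
      if (!cpumask_available(downed_cpus) &&
          !alloc_cpumask_var(&downed_cpus, GFP_KERNEL)) {
          pr_notice("Failed to allocate mask\n");
          return;
      }
      /* ... */
  }
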
2009 Jul 25
2
[RFC] patch 0/4: DRM MMIO accessor cleanup
Hi, this is a continuation of the MMIO accessor rewrite and cleanup. I am currently running nv28 with these patches applied, but I cannot test on PPC. Please review and comment. If the direction is good, I'll do the same to INSTANCE_{RD,WR} as I did for nv_{rd,wr}32, and change PRAMIN from drm_local_map to a simple ioremap. Can the same be done for channel-specific mappings, that is
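
For context, the nv_{rd,wr}32 accessors under discussion boil down to 32-bit reads and writes against an ioremap'd BAR. A minimal sketch of that shape (names mirror the style mentioned above, not the final nouveau code):

  #include <linux/io.h>

  struct nv_device {
      void __iomem *mmio;   /* from ioremap(bar_base, bar_len) */
  };

  static inline u32 nv_rd32(struct nv_device *dev, u32 reg)
  {
      return ioread32(dev->mmio + reg);
  }

  static inline void nv_wr32(struct nv_device *dev, u32 reg, u32 val)
  {
      iowrite32(val, dev->mmio + reg);
  }
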
2016 Jan 02
13
[Bug 93557] New: Kernel Panic on Linux Kernel 4.4 when loading KDE/KDM on Nvidia GeForce 7025 / nForce 630a
https://bugs.freedesktop.org/show_bug.cgi?id=93557 Bug ID: 93557 Summary: Kernel Panic on Linux Kernel 4.4 when loading KDE/KDM on Nvidia GeForce 7025 / nForce 630a Product: xorg Version: unspecified Hardware: x86-64 (AMD64) OS: Linux (All) Status: NEW Severity: blocker
2014 Aug 20
26
[Bug 82835] New: GeForce 8800 GS VDPAU h264 decoding hang
https://bugs.freedesktop.org/show_bug.cgi?id=82835 Priority: medium Bug ID: 82835 Assignee: nouveau at lists.freedesktop.org Summary: GeForce 8800 GS VDPAU h264 decoding hang QA Contact: xorg-team at lists.x.org Severity: normal Classification: Unclassified OS: Linux (All) Reporter: randrik at mail.ru
2015 Jul 30
3
[Qemu-devel] [PATCH v2] arm: change vendor ID for virtio-mmio
On 30 July 2015 at 09:04, Michael S. Tsirkin <mst at redhat.com> wrote: > On Thu, Jul 30, 2015 at 09:23:20AM +0800, Shannon Zhao wrote: >> >> Why do we drop the previous way using "QEMUXXXX"? Something I missed? > > So that guests that bind to this interface will work fine with non QEMU > implementations of virtio-mmio. I don't understand this sentence.
2012 Mar 19
2
[PATCH RFC] virtio-pci: add MMIO property
Currently virtio-pci is specified so that configuration of the device is done through a PCI IO space (via BAR 0 of the virtual PCI device). However, Linux guests happen to use ioread/iowrite/iomap primitives for access, and these work uniformly across memory/io BARs. While PCI IO accesses are faster than MMIO on x86 kvm, MMIO might be helpful on other systems which don't implement PIO or
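
"Work uniformly across memory/io BARs" means that with pci_iomap() the same ioread/iowrite calls hit either BAR type, so a driver need not care whether BAR 0 is PIO or MMIO. A minimal sketch, assuming a bound pci_dev (the function name is made up):

  #include <linux/pci.h>

  static int virtio_pci_map_example(struct pci_dev *pdev)
  {
      void __iomem *base = pci_iomap(pdev, 0, 0);  /* BAR 0, whole length */
      if (!base)
          return -ENOMEM;

      u32 val = ioread32(base + 0x00);   /* same call for an I/O or memory BAR */
      (void)val;

      pci_iounmap(pdev, base);
      return 0;
  }
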
2018 Oct 12
0
[PATCH v3 1/7] dt-bindings: virtio-mmio: Add IOMMU description
The nature of a virtio-mmio node is discovered by the virtio driver at probe time. However the DMA relation between devices must be described statically. When a virtio-mmio node is a virtio-iommu device, it needs an "#iommu-cells" property as specified by bindings/iommu/iommu.txt. Otherwise, the virtio-mmio device may perform DMA through an IOMMU, which requires an "iommus"
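
The two cases read more clearly as device-tree fragments. A hedged sketch (node names, addresses, and interrupt numbers are invented; the property names and the "virtio,mmio" compatible come from the binding):

  /* virtio-mmio node that IS the virtio-iommu: */
  viommu: virtio@10000 {
      compatible = "virtio,mmio";
      reg = <0x10000 0x200>;
      interrupts = <8>;
      #iommu-cells = <1>;
  };

  /* virtio-mmio node whose DMA goes THROUGH that IOMMU: */
  virtio@20000 {
      compatible = "virtio,mmio";
      reg = <0x20000 0x200>;
      interrupts = <9>;
      iommus = <&viommu 0x1>;
  };
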
2014 Nov 05
2
[Qemu-devel] [RFC PATCH 0/2] virtio-mmio: add irqfd support for vhost-net based on virtio-mmio
On 10/27/2014 12:23 PM, Li Liu wrote: > > > On 2014/10/27 17:37, Peter Maydell wrote: >> On 25 October 2014 09:24, john.liuli <john.liuli at huawei.com> wrote: >>> To get the interrupt reason to support such VIRTIO_NET_F_STATUS >>> features I add a new register offset VIRTIO_MMIO_ISRMEM which >>> will help to establish a shared memory region
2019 Aug 13
4
[Bug 111392] New: [NV110] bus: MMIO read of 00000000 FAULT at 619444 [ IBUS ]
https://bugs.freedesktop.org/show_bug.cgi?id=111392 Bug ID: 111392 Summary: [NV110] bus: MMIO read of 00000000 FAULT at 619444 [ IBUS ] Product: xorg Version: unspecified Hardware: x86-64 (AMD64) OS: Linux (All) Status: NEW Severity: normal Priority: medium
2014 Oct 27
2
[Qemu-devel] [RFC PATCH 0/2] virtio-mmio: add irqfd support for vhost-net based on virtio-mmio
On 25 October 2014 09:24, john.liuli <john.liuli at huawei.com> wrote: > To get the interrupt reason to support such VIRTIO_NET_F_STATUS > features I add a new register offset VIRTIO_MMIO_ISRMEM which > will help to establish a shared memory region between qemu and > virtio-mmio device. Then the interrupt reason can be accessed by > guest driver through this region. At the
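
The point of the proposed region: with irqfd the interrupt is injected without QEMU's involvement, so there is nothing to service a trapping MMIO read of the ISR register; parking the interrupt reason in shared memory lets the guest read it with a plain load. A loose guest-side sketch, assuming the VIRTIO_MMIO_ISRMEM mapping from the RFC is already set up (the struct and function names here are made up):

  #include <stdint.h>

  /* page shared between QEMU (writer) and the guest driver (reader) */
  struct virtio_mmio_isr_mem {
      volatile uint32_t isr_status;    /* interrupt reason bits */
  };

  static uint32_t read_isr(struct virtio_mmio_isr_mem *shared)
  {
      /* plain memory load: no MMIO trap, no vmexit, so it composes
       * with irqfd-injected interrupts */
      return shared->isr_status;
  }
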