Displaying 20 results from an estimated 29 matches for "ioread".
2006 Nov 09
2
[LLVMdev] LLVM and newlib progress
...emory locations.
Yes. In more detail, instruction words directly control the
data transports inside the processor, and I/O is handled
by transporting data into a special function unit.
> In that case, you would not
> use a "system call" intrinsic;
Correct.
> you would use an ioread/iowrite intrinsic
> (these are similar to load/store and are briefly documented in the
> LLVA-OS paper).
Which I should probably read, it seems.
> If you're doing memory mapped I/O, you could probably
> use LLVM volatile load/store instructions and not have to add any
> intrinsic...
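A minimal sketch of that suggestion, assuming a bare-metal target: the 0xFFFF0000 base address and the register layout are invented for illustration. A dereference of a volatile-qualified pointer in C is exactly what LLVM models as a volatile load/store instruction, so this style of memory-mapped I/O needs no custom intrinsic.

#include <stdint.h>

/* Hypothetical device: one data register and one status register. */
#define UART_BASE   ((volatile uint8_t *)0xFFFF0000u)
#define UART_DATA   (UART_BASE + 0x0)   /* write: byte to transmit   */
#define UART_STATUS (UART_BASE + 0x4)   /* read: bit 0 = ready to TX */

static void uart_putc(uint8_t c)
{
    /* Volatile reads: the compiler may not elide or reorder them. */
    while ((*UART_STATUS & 0x1) == 0)
        ;
    /* Volatile write: kept as an explicit store by the backend. */
    *UART_DATA = c;
}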
2006 Nov 09
0
[LLVMdev] LLVM and newlib progress
...in the
microarchitecture a particular piece of data should go. Directing it to
a specific functional unit makes it do I/O. Right?
>
>> In that case, you would not
>> use a "system call" intrinsic;
>>
>
> Correct.
>
>
>> you would use an ioread/iowrite intrinsic
>> (these are similar to load/store and are briefly documented in the
>> LLVA-OS paper).
>>
>
> Which I should probably read, it seems.
>
The LLVA-OS paper contains descriptions of intrinsics that we would add
to LLVM to support an operating sys...
2011 Nov 26
0
No subject
to read address before selecting the correct vq.
At that point, I've added simple prints to the driver. Initially it
looked as follows:
iowrite16(index, &vp_dev->common->queue_select);
switch (ioread64(&vp_dev->common->queue_address)) {
[...]
};
So I added prints before the iowrite16() and after the ioread64(), and
saw that while the driver prints were ordered, the device ones weren't:
[ 1.264052] before iowrite index=1
kvmtool: net returning pfn (vq=0): 310706176
kvmto...
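A sketch of the instrumentation described above. The struct definitions are assumptions shaped after the quoted fragment (only queue_select and queue_address appear in the snippet), and pr_info() stands in for the original debug prints.

#include <linux/io.h>
#include <linux/printk.h>
#include <linux/types.h>

struct my_common_cfg {                  /* assumed register layout */
    __le16 queue_select;
    __le64 queue_address;
};

struct my_vp_dev {                      /* assumed driver state    */
    struct my_common_cfg __iomem *common;
};

static u64 debug_queue_address(struct my_vp_dev *vp_dev, u16 index)
{
    u64 addr;

    pr_info("before iowrite index=%u\n", index);
    iowrite16(index, &vp_dev->common->queue_select);

    /* The device must observe queue_select before queue_address is
     * read back, otherwise the address of the wrong vq is returned. */
    addr = ioread64(&vp_dev->common->queue_address);
    pr_info("after ioread address=%llu\n", (unsigned long long)addr);

    return addr;
}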
2012 Mar 19
2
[PATCH RFC] virtio-pci: add MMIO property
Currently virtio-pci is specified so that configuration of the device is
done through a PCI IO space (via BAR 0 of the virtual PCI device).
However, Linux guests happen to use ioread/iowrite/iomap primitives
for access, and these work uniformly across memory/io BARs.
While PCI IO accesses are faster than MMIO on x86 kvm,
MMIO might be helpful on other systems which don't
implement PIO or where PIO is slower than MMIO.
Add a property to make it possible to tweak the BAR ty...
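A sketch of why the guest side can stay BAR-type agnostic, as the description says: pci_iomap() hands back an __iomem cookie for BAR 0 whether it is an I/O-port or a memory BAR, and ioread32() dispatches to port or MMIO accesses behind it. Error handling is trimmed, and the legacy VIRTIO_PCI_HOST_FEATURES offset is used only as an example register.

#include <linux/pci.h>
#include <linux/printk.h>
#include <linux/virtio_pci.h>

static int peek_host_features(struct pci_dev *pci_dev)
{
    /* Works for both a PIO BAR and an MMIO BAR. */
    void __iomem *ioaddr = pci_iomap(pci_dev, 0, 0);
    u32 features;

    if (!ioaddr)
        return -ENOMEM;

    /* Same accessor either way: inl() for PIO, readl() for MMIO. */
    features = ioread32(ioaddr + VIRTIO_PCI_HOST_FEATURES);
    pr_info("host features: %08x\n", features);

    pci_iounmap(pci_dev, ioaddr);
    return 0;
}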
2015 Jan 14
0
[PATCH v3 09/16] pci: add pci_iomap_range
...@dev: PCI device that owns the BAR
* @bar: BAR number
- * @maxlen: length of the memory to map
+ * @offset: map memory at the given offset in BAR
+ * @maxlen: max length of the memory to map
*
* Using this function you will get a __iomem address to your device BAR.
* You can access it using ioread*() and iowrite*(). These functions hide
@@ -21,16 +22,21 @@
* you expect from them in the correct way.
*
* @maxlen specifies the maximum length to map. If you want to get access to
- * the complete BAR without checking for its length first, pass %0 here.
+ * the complete BAR from offset to th...
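A usage sketch for the helper documented above, assuming the signature shown in this version (@offset and @maxlen); the BAR number and the 0x1000 offset are arbitrary example values.

#include <linux/pci.h>

static void __iomem *map_device_window(struct pci_dev *dev)
{
    /* Map 256 bytes starting at offset 0x1000 of BAR 4 instead of
     * mapping the whole BAR. */
    void __iomem *base = pci_iomap_range(dev, 4, 0x1000, 256);

    if (!base)
        return NULL;

    /* The returned cookie is then used with ioread*() and iowrite*()
     * exactly as with pci_iomap(). */
    return base;
}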
2014 Dec 11
0
[PATCH RFC 3/5] pci: add pci_iomap_range
...BAR number
- * @maxlen: length of the memory to map
+ * @offset: map memory at the given offset in BAR
+ * @minlen: min length of the memory to map
+ * @maxlen: max length of the memory to map
*
* Using this function you will get a __iomem address to your device BAR.
* You can access it using ioread*() and iowrite*(). These functions hide
* the details if this is a MMIO or PIO address space and will just do what
* you expect from them in the correct way.
*
+ * @minlen specifies the minimum length to map. We check that BAR is
+ * large enough.
* @maxlen specifies the maximum length to m...
2006 Nov 09
0
[LLVMdev] LLVM and newlib progress
...understand this right:
First, it sounds like you're programming on the bare processor, so your
I/O instructions are either special processor instructions or volatile
loads/stores to special memory locations. In that case, you would not
use a "system call" intrinsic; you would use an ioread/iowrite intrinsic
(these are similar to load/store and are briefly documented in the
LLVA-OS paper). If you're doing memory mapped I/O, you could probably
use LLVM volatile load/store instructions and not have to add any
intrinsics.
Second, you could implement these "intrinsics" as...
2006 Nov 09
2
[LLVMdev] LLVM and newlib progress
This is in response to Reid's and John's comments about
intrinsics.
The setting of the work is a project on reconfigurable
processors using the Transport Triggered Architecture (TTA)
<http://en.wikipedia.org/wiki/Transport_triggered_architecture>.
For the compiler this means that the target architecture
is not fixed, but rather an instance of a processor template.
Different
2012 Mar 19
1
[PATCHv2] virtio-pci: add MMIO property
Currently virtio-pci is specified so that configuration of the device is
done through a PCI IO space (via BAR 0 of the virtual PCI device).
However, Linux guests happen to use ioread/iowrite/iomap primitives
for access, and these work uniformly across memory/io BARs.
While PCI IO accesses are faster than MMIO on x86 kvm,
MMIO might be helpful on other systems:
for example, on IBM pSeries machines not all firmware/hypervisor
versions necessarily support PCI PIO access on all domain...
2011 Nov 14
2
[PATCHv2 RFC] virtio-pci: flexible configuration layout
..._dev->ioaddr_notify);
}
/* Handle a configuration change: Tell driver if it wants to know. */
@@ -231,7 +383,8 @@ static irqreturn_t vp_interrupt(int irq, void *opaque)
/* reading the ISR has the effect of also clearing it so it's very
* important to save off the value. */
- isr = ioread8(vp_dev->ioaddr + VIRTIO_PCI_ISR);
+ isr = ioread8(vp_dev->ioaddr_notify +
+ VIRTIO_PCI_ISR - VIRTIO_PCI_QUEUE_NOTIFY);
/* It's definitely not us if the ISR was not high */
if (!isr)
@@ -265,7 +418,7 @@ static void vp_free_vectors(struct virtio_device *vdev)
ioread16(vp_de...
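The clear-on-read pattern that comment describes, sketched in isolation: the ISR is read exactly once and the value is kept in a local before it is tested. VIRTIO_PCI_ISR and VIRTIO_PCI_ISR_CONFIG are the legacy layout constants; the surrounding virtio-pci plumbing is omitted.

#include <linux/interrupt.h>
#include <linux/io.h>
#include <linux/printk.h>
#include <linux/virtio_pci.h>

static irqreturn_t handle_isr(void __iomem *ioaddr)
{
    /* One read both fetches and clears the ISR bits. */
    u8 isr = ioread8(ioaddr + VIRTIO_PCI_ISR);

    if (!isr)
        return IRQ_NONE;        /* definitely not our interrupt */

    if (isr & VIRTIO_PCI_ISR_CONFIG)
        pr_info("configuration change\n");   /* placeholder handling */

    return IRQ_HANDLED;
}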
2011 Nov 22
2
[PATCHv3 RFC] virtio-pci: flexible configuration layout
..._dev->ioaddr_notify);
}
/* Handle a configuration change: Tell driver if it wants to know. */
@@ -231,7 +402,8 @@ static irqreturn_t vp_interrupt(int irq, void *opaque)
/* reading the ISR has the effect of also clearing it so it's very
* important to save off the value. */
- isr = ioread8(vp_dev->ioaddr + VIRTIO_PCI_ISR);
+ isr = ioread8(vp_dev->ioaddr_notify +
+ VIRTIO_PCI_ISR - VIRTIO_PCI_QUEUE_NOTIFY);
/* It's definitely not us if the ISR was not high */
if (!isr)
@@ -265,7 +437,7 @@ static void vp_free_vectors(struct virtio_device *vdev)
ioread16(vp_de...
2023 Apr 27
4
[RFC PATCH v2 0/3] Introduce a PCIe endpoint virtio console
The PCIe endpoint framework provides APIs to implement PCIe endpoint functions.
This framework allows defining various PCIe endpoint function behaviors in
software. This patch extends the framework for a virtio PCI device. Virtio is
defined for communication between a guest on a virtual machine and the host
side. The advantage of virtio is the efficiency of data transfer and the
conciseness of the device implementation
2014 Dec 11
6
[PATCH RFC 0/5] virtio_pci: modern driver
Based on Rusty's patches.
Coding style and funny jokes are his.
Bugs and a star wars reference (should be easy to spot) are mine.
Untested, but useful as a basis for beginning the qemu work.
TODO:
= simplify probing: use a common probe function; probe with the modern driver
first and, if that fails, probe with the legacy driver.
BUGS: ATM legacy driver can win and drive a transitional device
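A sketch of the probing order listed in the TODO: try the modern driver first and fall back to the legacy one. The helper names virtio_pci_modern_probe() and virtio_pci_legacy_probe() are assumptions for the sketch, not necessarily the final API.

#include <linux/pci.h>

int virtio_pci_modern_probe(struct pci_dev *pci_dev);   /* assumed helper */
int virtio_pci_legacy_probe(struct pci_dev *pci_dev);   /* assumed helper */

static int virtio_pci_probe_common(struct pci_dev *pci_dev)
{
    int err = virtio_pci_modern_probe(pci_dev);

    /* -ENODEV from the modern path means "not a modern device";
     * retry with the legacy layout instead of failing the probe. */
    if (err == -ENODEV)
        err = virtio_pci_legacy_probe(pci_dev);

    return err;
}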