search for: kick_tx

Displaying 8 results from an estimated 8 matches for "kick_tx".

2020 Sep 01
3
[PATCH 22/28] sgiseeq: convert from dma_cache_sync to dma_sync_single_for_device
On Tue, Sep 01, 2020 at 07:12:41PM +0200, Thomas Bogendoerfer wrote:
> On Tue, Sep 01, 2020 at 05:22:09PM +0200, Thomas Bogendoerfer wrote:
> > On Wed, Aug 19, 2020 at 08:55:49AM +0200, Christoph Hellwig wrote:
> > > Use the proper modern API to transfer cache ownership for incoherent DMA.
> > >
> > > Signed-off-by: Christoph Hellwig <hch at lst.de>
>
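For context, the conversion named in the subject replaces the old one-shot dma_cache_sync() call with the two-sided ownership-transfer API. A minimal sketch of that pattern; the function and variable names here are placeholders, not the sgiseeq code:

#include <linux/dma-mapping.h>

/* Sketch only: "dev", "desc_dma" and "len" are placeholders. */
static void poll_desc_sketch(struct device *dev, dma_addr_t desc_dma, size_t len)
{
	/* Take CPU ownership before reading what the device wrote. */
	dma_sync_single_for_cpu(dev, desc_dma, len, DMA_FROM_DEVICE);

	/* ... inspect the descriptor through its kernel mapping ... */

	/* Return ownership to the device before it may write again. */
	dma_sync_single_for_device(dev, desc_dma, len, DMA_FROM_DEVICE);
}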
2020 Sep 01
0
[PATCH 22/28] sgiseeq: convert from dma_cache_sync to dma_sync_single_for_device
...ays check for received packets. */
	sgiseeq_rx(dev, sp, hregs, sregs);

so the driver will look at the rx descriptor on every interrupt, so
we cache the rx descriptor on the first interrupt and if there was
no rx packet, we will only see it if the cache line gets flushed for
some other reason. kick_tx() does a busy loop checking tx descriptors,
with just sync_desc_cpu...

Thomas.

--
Crap can work. Given enough thrust pigs will fly, but it's not necessarily a
good idea. [ RFC1925, 2.3 ]
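The failure mode Thomas describes can be sketched as below, using the driver's dma_sync_desc_cpu()/dma_sync_desc_dev() helpers; rx_packet_pending() is a hypothetical stand-in for the real cntinfo test, not the verbatim driver code:

/* Sketch of the bug, not the actual sgiseeq source. */
static void rx_poll_sketch(struct net_device *dev, struct sgiseeq_rx_desc *rd)
{
	dma_sync_desc_cpu(dev, rd);	/* CPU takes ownership to inspect it */

	if (!rx_packet_pending(rd)) {	/* hypothetical "no packet yet" check */
		/*
		 * Bug: returning here without dma_sync_desc_dev(dev, rd)
		 * leaves the descriptor in the CPU cache.  A packet the
		 * HPC3 writes afterwards is only seen once the line happens
		 * to be evicted for some unrelated reason.
		 */
		return;
	}

	/* ... process the packet, then hand the descriptor back ... */
	dma_sync_desc_dev(dev, rd);
}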
2020 Sep 03
1
[PATCH 22/28] sgiseeq: convert from dma_cache_sync to dma_sync_single_for_device
> ...the rx descriptor on every interrupt, so
> we cache the rx descriptor on the first interrupt and if there was
> no rx packet, we will only see it if the cache line gets flushed for
> some other reason.

That means a transfer back to device ownership is missing after a
(negative) check.

> kick_tx() does a busy loop checking tx descriptors,
> with just sync_desc_cpu...
>
> Thomas.
>
> --
> Crap can work. Given enough thrust pigs will fly, but it's not necessarily a
> good idea. [ RFC1925, 2.3 ]
---end quoted text---
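The fix that follows from this observation is visible in the hunks of the follow-up patches below: hand the descriptor back to the device once the check is done. For kick_tx(), per the quoted hunk:

	if (td->tdma.cntinfo & HPCDMA_XIU) {
		dma_sync_desc_dev(dev, td);	/* the added line: back to device ownership */
		hregs->tx_ndptr = VIRT_TO_DMA(sp, td);
		hregs->tx_ctrl = HPC3_ETXCTRL_ACTIVE;
	}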
2020 Sep 15
0
[PATCH 12/18] sgiseeq: convert to dma_alloc_noncoherent
...c[sp->rx_new];
 		dma_sync_desc_cpu(dev, rd);
 	}
+	dma_sync_desc_dev(dev, rd);
+
 	dma_sync_desc_cpu(dev, &sp->rx_desc[orig_end]);
 	sp->rx_desc[orig_end].rdma.cntinfo &= ~(HPCDMA_EOR);
 	dma_sync_desc_dev(dev, &sp->rx_desc[orig_end]);
@@ -443,6 +449,7 @@ static inline void kick_tx(struct net_device *dev,
 		dma_sync_desc_cpu(dev, td);
 	}
 	if (td->tdma.cntinfo & HPCDMA_XIU) {
+		dma_sync_desc_dev(dev, td);
 		hregs->tx_ndptr = VIRT_TO_DMA(sp, td);
 		hregs->tx_ctrl = HPC3_ETXCTRL_ACTIVE;
 	}
@@ -476,6 +483,7 @@ static inline void sgiseeq_tx(struct net_device *d...
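For reference, dma_alloc_noncoherent(), which this patch converts the driver to, returns a kernel virtual address plus a DMA address and takes an explicit direction. A minimal allocation/free sketch; the identifiers and size are illustrative, not the driver's:

#include <linux/dma-mapping.h>

/* Sketch only: "ring_size" and the names are placeholders. */
static int rings_setup_sketch(struct device *dev, size_t ring_size)
{
	dma_addr_t rings_dma;
	void *rings;

	rings = dma_alloc_noncoherent(dev, ring_size, &rings_dma,
				      DMA_BIDIRECTIONAL, GFP_KERNEL);
	if (!rings)
		return -ENOMEM;

	/* CPU accesses must be bracketed by dma_sync_single_for_cpu()
	 * and dma_sync_single_for_device() on rings_dma. */

	dma_free_noncoherent(dev, ring_size, rings, rings_dma,
			     DMA_BIDIRECTIONAL);
	return 0;
}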
2020 Sep 02
1
[PATCH 22/28] sgiseeq: convert from dma_cache_sync to dma_sync_single_for_device
.... */
> 	sgiseeq_rx(dev, sp, hregs, sregs);
>
> so the driver will look at the rx descriptor on every interrupt, so
> we cache the rx descriptor on the first interrupt and if there was
> no rx packet, we will only see it if the cache line gets flushed for
> some other reason. kick_tx() does a busy loop checking tx descriptors,
> with just sync_desc_cpu...

the patch below fixes the problem.

Thomas.

diff --git a/drivers/net/ethernet/seeq/sgiseeq.c b/drivers/net/ethernet/seeq/sgiseeq.c
index 8507ff242014..876e3700a0e4 100644
--- a/drivers/net/ethernet/seeq/sgiseeq.c
+++ b/d...
2020 Sep 14
2
[PATCH 11/17] sgiseeq: convert to dma_alloc_noncoherent
...c[sp->rx_new];
 		dma_sync_desc_cpu(dev, rd);
 	}
+	dma_sync_desc_dev(dev, rd);
+
 	dma_sync_desc_cpu(dev, &sp->rx_desc[orig_end]);
 	sp->rx_desc[orig_end].rdma.cntinfo &= ~(HPCDMA_EOR);
 	dma_sync_desc_dev(dev, &sp->rx_desc[orig_end]);
@@ -443,6 +449,7 @@ static inline void kick_tx(struct net_device *dev,
 		dma_sync_desc_cpu(dev, td);
 	}
 	if (td->tdma.cntinfo & HPCDMA_XIU) {
+		dma_sync_desc_dev(dev, td);
 		hregs->tx_ndptr = VIRT_TO_DMA(sp, td);
 		hregs->tx_ctrl = HPC3_ETXCTRL_ACTIVE;
 	}
@@ -476,6 +483,7 @@ static inline void sgiseeq_tx(struct net_device *d...
2020 Sep 14
20
a saner API for allocating DMA addressable pages v2
Hi all,

this series replaced the DMA_ATTR_NON_CONSISTENT flag to dma_alloc_attrs
with a separate new dma_alloc_pages API, which is available on all
platforms. In addition to cleaning up the convoluted code path, this
ensures that other drivers that have asked for better support for
non-coherent DMA to pages, without incurring bounce buffering, can
finally be properly supported. I'm still a
2020 Sep 15
32
a saner API for allocating DMA addressable pages v3
Hi all,

this series replaced the DMA_ATTR_NON_CONSISTENT flag to dma_alloc_attrs
with a separate new dma_alloc_pages API, which is available on all
platforms. In addition to cleaning up the convoluted code path, this
ensures that other drivers that have asked for better support for
non-coherent DMA to pages, without incurring bounce buffering, can
finally be properly supported. As a follow up I
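A hedged usage sketch of the page-based dma_alloc_pages()/dma_free_pages() interface the cover letter announces, as merged in this series; "dev" and "size" are placeholders:

#include <linux/dma-mapping.h>

/* Sketch only: the surrounding function and names are illustrative. */
static int pages_alloc_sketch(struct device *dev, size_t size)
{
	struct page *page;
	dma_addr_t dma;

	page = dma_alloc_pages(dev, size, &dma, DMA_BIDIRECTIONAL,
			       GFP_KERNEL);
	if (!page)
		return -ENOMEM;

	/* CPU view via page_address(page); ownership handoffs via
	 * dma_sync_single_for_cpu()/dma_sync_single_for_device() on "dma". */

	dma_free_pages(dev, size, page, dma, DMA_BIDIRECTIONAL);
	return 0;
}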