search for: hreg

Displaying 13 results from an estimated 13 matches for "hreg".

2020 Sep 15
0
[PATCH 12/18] sgiseeq: convert to dma_alloc_noncoherent
...t, addr, sizeof(struct sgiseeq_rx_desc),
-			       DMA_TO_DEVICE);
+	struct sgiseeq_private *sp = netdev_priv(dev);
+
+	dma_sync_single_for_device(dev->dev.parent, VIRT_TO_DMA(sp, addr),
+			sizeof(struct sgiseeq_rx_desc), DMA_BIDIRECTIONAL);
 }
 
 static inline void hpc3_eth_reset(struct hpc3_ethregs *hregs)
@@ -403,6 +407,8 @@ static inline void sgiseeq_rx(struct net_device *dev, struct sgiseeq_private *sp
 		rd = &sp->rx_desc[sp->rx_new];
 		dma_sync_desc_cpu(dev, rd);
 	}
+	dma_sync_desc_dev(dev, rd);
+	dma_sync_desc_cpu(dev, &sp->rx_desc[orig_end]);
 	sp->rx_desc[orig...
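[Editor's note: memory from dma_alloc_noncoherent() is device-addressable but not cache-coherent, so cache ownership of each descriptor must be bounced explicitly between CPU and device. A minimal sketch of that pattern, assuming a bidirectional streaming mapping; the helper names are illustrative, not the driver's actual code (the driver wraps this in dma_sync_desc_cpu()/dma_sync_desc_dev()):]

	#include <linux/dma-mapping.h>

	/* Sketch only: take CPU ownership of one descriptor. */
	static void desc_own_cpu(struct device *dev, dma_addr_t addr, size_t len)
	{
		/* Invalidate stale cache lines so the CPU sees device writes. */
		dma_sync_single_for_cpu(dev, addr, len, DMA_BIDIRECTIONAL);
	}

	/* Sketch only: hand the descriptor back to the device. */
	static void desc_own_device(struct device *dev, dma_addr_t addr, size_t len)
	{
		/* Write back CPU stores so the device sees them. */
		dma_sync_single_for_device(dev, addr, len, DMA_BIDIRECTIONAL);
	}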
2013 Jul 02
1
[PATCH] drm/nv50-/disp: Use output specific mask in interrupt
...x 8b42f45..7ffe2f3 100644
--- a/drivers/gpu/drm/nouveau/core/engine/disp/nv50.c
+++ b/drivers/gpu/drm/nouveau/core/engine/disp/nv50.c
@@ -1107,6 +1107,7 @@ nv50_disp_intr_unk20_2(struct nv50_disp_priv *priv, int head)
 	u32 pclk = nv_rd32(priv, 0x610ad0 + (head * 0x540)) & 0x3fffff;
 	u32 hval, hreg = 0x614200 + (head * 0x800);
 	u32 oval, oreg;
+	u32 mask;
 	u32 conf = exec_clkcmp(priv, head, 0xff, pclk, &outp);
 	if (conf != ~0) {
 		if (outp.location == 0 && outp.type == DCB_OUTPUT_DP) {
@@ -1133,6 +1134,7 @@ nv50_disp_intr_unk20_2(struct nv50_disp_priv *priv, int head)
 	oreg...
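[Editor's note: the new mask variable enables a read-modify-write that touches only the bits belonging to this head's output instead of clobbering the whole register. A hedged sketch of that pattern using nouveau's nv_mask() helper of that era; the mask value is illustrative, not the patch's actual constant:]

	/* Sketch only: update just the output-specific field of the head
	 * register.  nv_mask(priv, reg, mask, val) reads reg, clears the
	 * bits in mask, ORs in (val & mask), and writes the result back. */
	u32 mask = 0x00000707;			/* hypothetical field mask */
	nv_mask(priv, hreg, mask, hval);
	nv_mask(priv, oreg, mask, oval);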
2020 Sep 01
3
[PATCH 22/28] sgiseeq: convert from dma_cache_sync to dma_sync_single_for_device
On Tue, Sep 01, 2020 at 07:12:41PM +0200, Thomas Bogendoerfer wrote:
> On Tue, Sep 01, 2020 at 05:22:09PM +0200, Thomas Bogendoerfer wrote:
> > On Wed, Aug 19, 2020 at 08:55:49AM +0200, Christoph Hellwig wrote:
> > > Use the proper modern API to transfer cache ownership for incoherent DMA.
> > >
> > > Signed-off-by: Christoph Hellwig <hch at lst.de>
>
2020 Sep 03
1
[PATCH 22/28] sgiseeq: convert from dma_cache_sync to dma_sync_single_for_device
On Tue, Sep 01, 2020 at 07:38:10PM +0200, Thomas Bogendoerfer wrote:
> this is the problem:
>
>	/* Always check for received packets. */
>	sgiseeq_rx(dev, sp, hregs, sregs);
>
> so the driver will look at the rx descriptor on every interrupt, so
> we cache the rx descriptor on the first interrupt, and if there was
> no rx packet, we will only see it if the cache line gets flushed for
> some other reason.
That means a transfer back to device owne...
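[Editor's note: once the rx descriptor's cache line is populated, the CPU keeps re-reading the stale copy. The fix visible in the later dma_alloc_noncoherent conversions above is to hand the unfilled descriptor back to the device before leaving the loop. A hedged sketch of that shape; dma_sync_desc_cpu()/dma_sync_desc_dev() are the driver helpers from the diffs above, while desc_owned_by_device() and next_rx_desc() are hypothetical stand-ins:]

	static void rx_poll_sketch(struct net_device *dev,
				   struct sgiseeq_private *sp,
				   struct sgiseeq_rx_desc *rd)
	{
		for (;;) {
			dma_sync_desc_cpu(dev, rd);	/* CPU takes ownership  */
			if (desc_owned_by_device(rd))	/* no packet yet: stop  */
				break;
			/* ... hand the received packet up the stack ... */
			rd = next_rx_desc(sp, rd);	/* advance around ring  */
		}
		/*
		 * Key step: return the unfilled descriptor to the device so
		 * the CPU's cached copy is discarded; otherwise the next
		 * interrupt's sync-for-cpu can be satisfied from a stale
		 * cache line.
		 */
		dma_sync_desc_dev(dev, rd);
	}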
2010 Sep 09
0
calling Rf_initEmbeddedR error
...Argv1[] = {"Rtest_1"};
int Argc2 = 1;
char *Argv2[] = {"Rtest_2"};

// Init R (first)
Rf_initEmbeddedR(Argc1, Argv1);

// R package load
SEXP e = R_NilValue;
SEXP r = R_NilValue;
PROTECT(e = lang2(install("source"), mkString("hreg.r")));
r = R_tryEval(e, R_GlobalEnv, NULL);	// -----> success
UNPROTECT(1);

// Function load
SEXP fun;
PROTECT(e = allocVector(LANGSXP, 3));
fun = findFun(install("hreg"), R_GlobalEnv);
if (fun == R_NilValue)	// ----...
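[Editor's note: for comparison, a hedged sketch of the same sequence that routes the call itself through R_tryEval(), so evaluation errors are reported instead of longjmp-ing out of the host application. The argument passed to hreg() is a placeholder, not taken from the post:]

	#include <Rinternals.h>
	#include <Rembedded.h>

	int call_hreg(void)
	{
		int err = 0;
		SEXP call;

		/* source("hreg.r") */
		PROTECT(call = lang2(install("source"), mkString("hreg.r")));
		R_tryEval(call, R_GlobalEnv, &err);
		UNPROTECT(1);
		if (err)
			return -1;	/* file missing or parse error */

		/* hreg(42) -- the argument is illustrative */
		PROTECT(call = lang2(install("hreg"), ScalarReal(42.0)));
		R_tryEval(call, R_GlobalEnv, &err);
		UNPROTECT(1);
		return err ? -1 : 0;
	}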
2005 Feb 09
3
WinXP won't authenticate
I've looked through the archives and can't find much to help my situation. I have a Slackware 10 server running Samba 3.0.4 on a Win2k/2K3 Active Directory Domain. My Win2k clients can connect to the Samba shares but the WinXP clients cannot. I've tried the enableplaintextpassword hreg hack but that doesn't solve the issue. Here is the [global] section of my smb.conf. Can anyone assist me with a resolution?

[global]
	workgroup = AD
	netbios name = %h
	server string = PTS RNW Production Server
	hosts allow = 10. 127.
	load printers = no
	security = domain
	domai...
2020 Sep 14
2
[PATCH 11/17] sgiseeq: convert to dma_alloc_noncoherent
...t, addr, sizeof(struct sgiseeq_rx_desc),
-			       DMA_TO_DEVICE);
+	struct sgiseeq_private *sp = netdev_priv(dev);
+
+	dma_sync_single_for_device(dev->dev.parent, VIRT_TO_DMA(sp, addr),
+			sizeof(struct sgiseeq_rx_desc), DMA_BIDIRECTIONAL);
 }
 
 static inline void hpc3_eth_reset(struct hpc3_ethregs *hregs)
@@ -403,6 +407,8 @@ static inline void sgiseeq_rx(struct net_device *dev, struct sgiseeq_private *sp
 		rd = &sp->rx_desc[sp->rx_new];
 		dma_sync_desc_cpu(dev, rd);
 	}
+	dma_sync_desc_dev(dev, rd);
+	dma_sync_desc_cpu(dev, &sp->rx_desc[orig_end]);
 	sp->rx_desc[orig...
2020 Aug 19
0
[PATCH 22/28] sgiseeq: convert from dma_cache_sync to dma_sync_single_for_device
...t, addr, sizeof(struct sgiseeq_rx_desc),
-			       DMA_TO_DEVICE);
+	struct sgiseeq_private *sp = netdev_priv(dev);
+
+	dma_sync_single_for_device(dev->dev.parent, VIRT_TO_DMA(sp, addr),
+			sizeof(struct sgiseeq_rx_desc), DMA_BIDIRECTIONAL);
 }
 
 static inline void hpc3_eth_reset(struct hpc3_ethregs *hregs)
-- 
2.28.0
2020 Sep 01
0
[PATCH 22/28] sgiseeq: convert from dma_cache_sync to dma_sync_single_for_device
...invalidated it. So it seems like
> we are missing one (or more) ownership transfers to the device. I'll
> try to look at the ownership management in a little more detail
> tomorrow.

this is the problem:

	/* Always check for received packets. */
	sgiseeq_rx(dev, sp, hregs, sregs);

so the driver will look at the rx descriptor on every interrupt, so
we cache the rx descriptor on the first interrupt, and if there was
no rx packet, we will only see it if the cache line gets flushed for
some other reason. kick_tx() does a busy loop checking tx descriptors,
with just sync_...
2020 Sep 02
1
[PATCH 22/28] sgiseeq: convert from dma_cache_sync to dma_sync_single_for_device
...> we are missing one (or more) ownership transfers to the device. I'll
> > try to look at the ownership management in a little more detail
> > tomorrow.
>
> this is the problem:
>
>	/* Always check for received packets. */
>	sgiseeq_rx(dev, sp, hregs, sregs);
>
> so the driver will look at the rx descriptor on every interrupt, so
> we cache the rx descriptor on the first interrupt, and if there was
> no rx packet, we will only see it if the cache line gets flushed for
> some other reason. kick_tx() does a busy loop checking tx des...
2020 Sep 14
20
a saner API for allocating DMA addressable pages v2
Hi all, this series replaces the DMA_ATTR_NON_CONSISTENT flag to dma_alloc_attrs with a separate new dma_alloc_pages API, which is available on all platforms. In addition to cleaning up the convoluted code path, this ensures that other drivers that have asked for better support for non-coherent DMA to pages without incurring bounce buffering can finally be properly supported. I'm still a
2020 Sep 15
32
a saner API for allocating DMA addressable pages v3
Hi all, this series replaces the DMA_ATTR_NON_CONSISTENT flag to dma_alloc_attrs with a separate new dma_alloc_pages API, which is available on all platforms. In addition to cleaning up the convoluted code path, this ensures that other drivers that have asked for better support for non-coherent DMA to pages without incurring bounce buffering can finally be properly supported. As a follow up I
2020 Aug 19
39
a saner API for allocating DMA addressable pages
Hi all, this series replaces the DMA_ATTR_NON_CONSISTENT flag to dma_alloc_attrs with a separate new dma_alloc_pages API, which is available on all platforms. In addition to cleaning up the convoluted code path, this ensures that other drivers that have asked for better support for non-coherent DMA to pages without incurring bounce buffering can finally be properly supported. I'm still a
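[Editor's note: to make the cover letters concrete, a hedged sketch of allocating and freeing a descriptor ring with the dma_alloc_pages()/dma_free_pages() pair the series introduces; the surrounding driver context (dev, ring size, direction) is illustrative:]

	#include <linux/dma-mapping.h>

	/* Sketch only: allocate device-addressable, possibly non-coherent
	 * pages with the new API described above. */
	static struct page *alloc_ring(struct device *dev, size_t size,
				       dma_addr_t *dma_handle)
	{
		struct page *page;

		page = dma_alloc_pages(dev, size, dma_handle,
				       DMA_BIDIRECTIONAL, GFP_KERNEL);
		if (!page)
			return NULL;

		/*
		 * The memory is device-addressable but not guaranteed to be
		 * coherent: bounce cache ownership with
		 * dma_sync_single_for_cpu()/_for_device() around each CPU
		 * access, as in the sgiseeq conversion above.
		 */
		return page;
	}

	static void free_ring(struct device *dev, size_t size,
			      struct page *page, dma_addr_t dma_handle)
	{
		dma_free_pages(dev, size, page, dma_handle, DMA_BIDIRECTIONAL);
	}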