Displaying 20 results from an estimated 10000 matches similar to: "Disable L2 cache on nvidia gpu"
2016 Nov 07 · 0 · How to disable L2 cache on Nvidia GPUs
2018 Sep 04 · 0 · [PATCH] PCI: add prefetch quirk to work around Asus/Nvidia suspend issues
On Tue, Sep 04, 2018 at 09:52:02AM +0800, Daniel Drake wrote:
> # cat /proc/mtrr
> reg00: base=0x0c0000000 ( 3072MB), size= 1024MB, count=1: uncachable
> reg01: base=0x0a0000000 ( 2560MB), size= 512MB, count=1: uncachable
> reg02: base=0x090000000 ( 2304MB), size= 256MB, count=1: uncachable
> reg03: base=0x08c000000 ( 2240MB), size= 64MB, count=1: uncachable
> reg04:
2018 Sep 04 · 2 · [PATCH] PCI: add prefetch quirk to work around Asus/Nvidia suspend issues
On Mon, Sep 3, 2018 at 8:12 PM, Mika Westerberg
<mika.westerberg at linux.intel.com> wrote:
> We have seen one similar issue with LPSS devices when BIOS assigns
> device BARs above 4G (which is not the case here) and it turned out to
> be misconfigured MTRR register or something like that. It may not be
> related at all but it could be worth a try to dump out MTRR registers of
2018 Sep 04 · 1 · [PATCH] PCI: add prefetch quirk to work around Asus/Nvidia suspend issues
On Tue, Sep 4, 2018 at 2:43 PM, Mika Westerberg
<mika.westerberg at linux.intel.com> wrote:
> Yes, can you check if the failing device BAR is included in any of the
> above entries? If not then it is probably not related.
mtrr again for reference:
reg00: base=0x0c0000000 ( 3072MB), size= 1024MB, count=1: uncachable
reg01: base=0x0a0000000 ( 2560MB), size= 512MB, count=1: uncachable
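The check Mika asks for above (is the failing device BAR inside any of the listed MTRR ranges?) is a simple interval-containment test. A minimal sketch: the MTRR entries are the ones from the quoted /proc/mtrr dump, but the BAR base and size are hypothetical examples, since the real BAR address is not shown in this excerpt.

```python
MB = 1024 * 1024

# (base, size) pairs from the quoted /proc/mtrr output, all marked "uncachable"
mtrr_ranges = [
    (0x0C0000000, 1024 * MB),  # reg00
    (0x0A0000000,  512 * MB),  # reg01
    (0x090000000,  256 * MB),  # reg02
    (0x08C000000,   64 * MB),  # reg03
]

def contained(bar_base, bar_size, ranges):
    """Return the first MTRR (base, size) that fully contains the BAR, or None."""
    for base, size in ranges:
        if base <= bar_base and bar_base + bar_size <= base + size:
            return (base, size)
    return None

# Hypothetical 16 MB BAR at 0x0A4000000 -- falls inside reg01
print(contained(0x0A4000000, 16 * MB, mtrr_ranges))
```

If the BAR is not contained in any entry, it inherits the default memory type, which is the case the thread suspects as the cause of the suspend issue.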
2016 Sep 20 · 0 · Re: How to set QEMU qcow2 l2-cache-size using libvirt xml?
On Mon, Sep 19, 2016 at 11:32:26AM -0400, Frank Myhr wrote:
>QEMU's default qcow2 L2 cache size is too small for large images (and small cluster sizes), resulting in very bad performance.
>
>https://blogs.igalia.com/berto/2015/12/17/improving-disk-io-performance-in-qemu-2-5-with-the-qcow2-l2-cache/
>shows huge performance hit for a 20GB qcow2 with default 64kB cluster size:
>
2007 Apr 07 · 1 · OT: general question re processor, l2 and l3 cache etc
Greetings
Please forgive the OT question yet I highly value the experience and wisdom
on this list
I am wondering if anyone here can address the performance difference between
a processor board with, say, 256KB L2 *and* 2048KB L3 cache *versus* the same
processor board with only the L2 cache, in a CentOS server environment...
Please figure that all other necessary and related
2016 Sep 19 · 2 · How to set QEMU qcow2 l2-cache-size using libvirt xml?
QEMU's default qcow2 L2 cache size is too small for large images (and small cluster sizes), resulting in very bad performance.
https://blogs.igalia.com/berto/2015/12/17/improving-disk-io-performance-in-qemu-2-5-with-the-qcow2-l2-cache/
shows huge performance hit for a 20GB qcow2 with default 64kB cluster size:
L2 cache (MiB)    Average IOPS
1 (default)       5100
1.5
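The sizing rule behind those numbers (from the linked blog post) is that each 8-byte L2 table entry maps one cluster, so covering the whole image needs disk_size * 8 / cluster_size bytes of L2 cache. A minimal sketch; the function name is mine:

```python
def qcow2_l2_cache_bytes(disk_size, cluster_size=64 * 1024):
    """Bytes of qcow2 L2 cache needed to cover the whole image:
    one 8-byte L2 table entry per cluster."""
    return disk_size * 8 // cluster_size

# 20 GB image with the default 64 KiB clusters
size = qcow2_l2_cache_bytes(20 * 1024**3)
print(size, size / 1024**2)  # 2621440 bytes = 2.5 MiB
```

So the default 1 MiB cache covers only a fraction of a 20 GB image, which matches the IOPS drop shown in the table above.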
2009 May 03 · 1 · [LLVMdev] L1, L2 Cache line sizes in TargetData?
Hello,
Is there any way for a pass to determine the L1 or L2 cacheline size
of the target before the IR is lowered to machine instructions?
Thanks,
--
Nick Johnson
2016 Feb 14 · 0 · Configure QCow2 L2 Cache through virt domain XML
Hello, I was wondering if it is possible to configure the L2 cache of a
qcow2 image through the domain XML.
I cannot find any documentation about it... the only thing I've found is
that I can pass custom options to the qemu command line through the
<qemu:commandline> tag, but, to my understanding, that would mean entirely
removing the <disk> tag for the image and rewriting it as a list of
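One workaround discussed in this era was to keep the <disk> element as-is and use the <qemu:commandline> namespace with QEMU's -set option, which modifies a property of an already-defined drive instead of replacing it. A rough sketch, where the drive alias ('drive-virtio-disk0') and the 4M value are illustrative; the alias depends on what libvirt generates for the disk:

```xml
<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  <!-- existing <devices>/<disk> definitions stay unchanged -->
  <qemu:commandline>
    <qemu:arg value='-set'/>
    <qemu:arg value='drive.drive-virtio-disk0.l2-cache-size=4M'/>
  </qemu:commandline>
</domain>
```

Newer libvirt versions also grew a dedicated <metadata_cache> sub-element of the disk <driver>, which is worth checking before resorting to the namespace escape hatch.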
2019 Sep 16 · 0 · [PATCH 04/11] drm/nouveau: gp10b: Add custom L2 cache implementation
On Mon, Sep 16, 2019 at 04:35:30PM +0100, Ben Dooks wrote:
> On 16/09/2019 16:04, Thierry Reding wrote:
> > From: Thierry Reding <treding at nvidia.com>
> >
> > There are extra registers that need to be programmed to make the level 2
> > cache work on GP10B, such as the stream ID register that is used when an
> > SMMU is used to translate memory addresses.
2019 Sep 16 · 1 · [PATCH 04/11] drm/nouveau: gp10b: Add custom L2 cache implementation
On Mon, Sep 16, 2019 at 05:49:46PM +0200, Thierry Reding wrote:
> On Mon, Sep 16, 2019 at 04:35:30PM +0100, Ben Dooks wrote:
> > On 16/09/2019 16:04, Thierry Reding wrote:
> > > From: Thierry Reding <treding at nvidia.com>
> > >
> > > There are extra registers that need to be programmed to make the level 2
> > > cache work on GP10B, such as the
2010 Oct 13 · 0 · mtrr error
I need your help, and DELL's tech support doesn't provide any help on this
one.
We have a lot of different type of DELL desktops from old-type
hyper-thread cpu, to dual-core and quad-core cpus (most are Xeons). We
run all versions of CentOS, but most are latest 5.5 (also up-to-date)
and are very happy about that. The primary software on those Linux
systems is IDL, which uses OpenGL
2012 Jul 12 · 0 · Nvidia VGX (GPU hypervisor) with Xen
Hello everyone, I was just reading about nvidia's VGX technology for their
new line of GPUs:
http://www.nvidia.com/object/vgx-hypervisor.html
The page mentions it's implemented by XenServer. Is this a driver thing
that splits up the GPU and then uses PCI passthrough or is it more involved
than that? Anyone have any info about this for open-source Xen?
Thanks,
Chris
2014 May 23 · 0 · [RFC] drm/nouveau: disable caching for VRAM BOs on ARM
Am Freitag, den 23.05.2014, 16:10 +0900 schrieb Alexandre Courbot:
> On Mon, May 19, 2014 at 7:16 PM, Lucas Stach <l.stach at pengutronix.de> wrote:
> > Am Montag, den 19.05.2014, 19:06 +0900 schrieb Alexandre Courbot:
> >> On 05/19/2014 06:57 PM, Lucas Stach wrote:
> >> > Am Montag, den 19.05.2014, 18:46 +0900 schrieb Alexandre Courbot:
> >> >>
2012 Jan 15 · 0 · [CENTOS6] mtrr_cleanup: can not find optimal value - during server startup
After fresh installation of CentOS 6.2 on my server, I get following errors
in my dmesg output:
-------
MTRR default type: uncachable
MTRR fixed ranges enabled:
00000-9FFFF write-back
A0000-BFFFF uncachable
C0000-D7FFF write-protect
D8000-E7FFF uncachable
E8000-FFFFF write-protect
MTRR variable ranges enabled:
0 base 000000000 mask C00000000 write-back
1 base 400000000 mask
2024 Sep 23 · 1 · [RFC 00/29] Introduce NVIDIA GPU Virtualization (vGPU) Support
On Sun, Sep 22, 2024 at 04:11:21PM +0300, Zhi Wang wrote:
> On Sun, 22 Sep 2024 05:49:22 -0700
> Zhi Wang <zhiw at nvidia.com> wrote:
>
> +Ben.
>
> Forgot to add you. My bad.
Please also add the driver maintainers!
I had to fetch the patchset from the KVM list, since they did not hit the
nouveau list (I'm trying to get @nvidia.com addresses whitelisted).
- Danilo
2014 May 23 · 0 · [RFC] drm/nouveau: disable caching for VRAM BOs on ARM
Am Freitag, den 23.05.2014, 18:43 +0900 schrieb Alexandre Courbot:
> On 05/23/2014 06:24 PM, Lucas Stach wrote:
> > Am Freitag, den 23.05.2014, 16:10 +0900 schrieb Alexandre Courbot:
> >> On Mon, May 19, 2014 at 7:16 PM, Lucas Stach <l.stach at pengutronix.de> wrote:
> >>> Am Montag, den 19.05.2014, 19:06 +0900 schrieb Alexandre Courbot:
> >>>> On
2014 Jan 21 · 2 · Re: Double fault panic in L2 upon v2v conversion
On 01/17/2014 04:06 PM, Rom Freiman wrote:
> Kashyap, just to be sure - it happens to you during the v2v
> conversion? on L2?
I haven't done any v2v conversions in L2 (or at any other level).
PS: Sorry, I didn't notice my previous 2 emails didn't go to the list;
that wasn't intended. Rich, feel free to bounce them here if you prefer
(instead of me clumsily forwarding them).
--
2024 Sep 23 · 1 · [RFC 00/29] Introduce NVIDIA GPU Virtualization (vGPU) Support
Hi Zhi,
Thanks for the very detailed cover letter.
On Sun, Sep 22, 2024 at 05:49:22AM -0700, Zhi Wang wrote:
> 1. Background
> =============
>
> NVIDIA vGPU[1] software enables powerful GPU performance for workloads
> ranging from graphics-rich virtual workstations to data science and AI,
> enabling IT to leverage the management and security benefits of
> virtualization as
2014 Jan 31 · 0 · Re: Double fault panic in L2 upon v2v conversion
Hey everybody,
Any news about the topic? I could not find anything relevant yet.
Thanks,
Rom
On Tue, Jan 21, 2014 at 3:34 PM, Rom Freiman <rom@stratoscale.com> wrote:
> Hi,
>
> We all agree that it's not specific to virt-v2v.
>
> I managed to reproduce the same double fault on "normal" L2 boot - without
> libguestfs interference.
> And as Paolo wrote