Displaying 20 results from an estimated 70000 matches similar to: "NVIDIA display hardware documentation update"
2017 Dec 15 (0 replies) - NVIDIA display class method hardware documentation update
Hi,
I'm pleased to announce that NVIDIA has released a new revision of the display
class method hardware documentation. This release includes definitions for the
Pascal and Volta GPU architectures.
As always, you can find the files on download.nvidia.com:
https://download.nvidia.com/open-gpu-doc/Display-Class-Methods/2/
- Robert
2015 Oct 02 (2 replies) - Documentation request for MP warp error 0x10
Hi Robert,
Thanks for the quick response! That is in line with my observation that
these things happen when using an ATOM/RED instruction.
I've checked and rechecked, however, that I'm generating ops with bits
identical to what the proprietary driver emits (and nvdisasm prints
identical output). Could you advise on the proper way of indicating
that the memory is
2015 Oct 26 (2 replies) - Documentation request for MP warp error 0x10
On Fri, Oct 2, 2015 at 6:14 PM, Robert Morell <rmorell at nvidia.com> wrote:
> Hi Ilia,
>
> On Fri, Oct 02, 2015 at 06:05:21PM -0400, Ilia Mirkin wrote:
>> Hi Robert,
>>
>> Thanks for the quick response! That is in line with my observation that
>> these things happen when using an ATOM/RED instruction.
>> I've checked and rechecked, however, that
2018 Sep 16 (1 reply) - missing firmware report
I just built a linux-4.18.7 kernel from kernel.org for openSUSE Linux
42.3. When I issued the command "sudo make modules_install install" I
received a series of error messages:
dracut: Possible missing firmware "nvidia/gv100/sec2/sig.bin" for kernel module "nouveau.ko"
dracut: Possible missing firmware "nvidia/gv100/sec2/image.bin" for kernel module
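For context on where dracut finds these names: below is a minimal sketch, assuming the standard kernel firmware-loader pattern rather than nouveau's actual code, of how a module declares and requests a firmware file. dracut reads the MODULE_FIRMWARE() strings embedded in the module and warns when the named files are not present under /lib/firmware, which is exactly what the messages above report.

/*
 * Minimal sketch (not nouveau's actual code) of the usual kernel
 * firmware pattern.  The path below is one of the files dracut flags.
 */
#include <linux/module.h>
#include <linux/firmware.h>
#include <linux/device.h>

/* Declares the dependency so tools like dracut/modinfo can see it. */
MODULE_FIRMWARE("nvidia/gv100/sec2/sig.bin");

static int example_load_fw(struct device *dev)
{
	const struct firmware *fw;
	int ret;

	/* Asks the firmware loader for the blob under /lib/firmware. */
	ret = request_firmware(&fw, "nvidia/gv100/sec2/sig.bin", dev);
	if (ret)
		return ret;	/* e.g. -ENOENT if the file is missing */

	/* ... upload fw->data (fw->size bytes) to the device here ... */

	release_firmware(fw);
	return 0;
}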
2015 Oct 02 (0 replies) - Documentation request for MP warp error 0x10
Hi Ilia,
On Fri, Oct 02, 2015 at 06:05:21PM -0400, Ilia Mirkin wrote:
> Hi Robert,
>
> Thanks for the quick response! That is in line with my observation that
> these things happen when using an ATOM/RED instruction.
> I've checked and rechecked, however, that I'm generating ops with bits
> identical to what the proprietary driver emits (and nvdisasm prints
2019 Sep 17 (1 reply) - [PATCH 2/6] drm/nouveau: fault: Widen engine field
On Tue, 17 Sep 2019 at 01:18, Thierry Reding <thierry.reding at gmail.com> wrote:
>
> From: Thierry Reding <treding at nvidia.com>
>
> The engine field in the FIFO fault information registers is actually 9
> bits wide.
Looks like this is true for fault buffer parsing too.
>
> Signed-off-by: Thierry Reding <treding at nvidia.com>
> ---
>
2015 Oct 26 (0 replies) - Documentation request for MP warp error 0x10
On Mon, Oct 26, 2015 at 03:28:59PM -0400, Ilia Mirkin wrote:
> On Fri, Oct 2, 2015 at 6:14 PM, Robert Morell <rmorell at nvidia.com> wrote:
> > Hi Ilia,
> >
> > On Fri, Oct 02, 2015 at 06:05:21PM -0400, Ilia Mirkin wrote:
> >> Hi Robert,
> >>
> >> Thanks for the quick response! That is in line with my observation
> >> that
2013 Oct 24 (2 replies) - known MSI errata?
On Fri, Oct 25, 2013 at 7:43 AM, Robert Morell <rmorell at nvidia.com> wrote:
> On Mon, Sep 30, 2013 at 10:44:12AM -0700, Lucas Stach wrote:
>> Hi,
>>
>> Recently we tried to enable MSI interrupts with nouveau. Unfortunately
>> there have been some reports of things failing with certain cards, where
>> it isn't entirely clear if this is a GPU erratum or
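For readers unfamiliar with the mechanics under discussion, here is a generic sketch, not nouveau's actual code, of the usual way a PCI driver opts in to MSI and falls back to legacy INTx when that fails; the thread concerns cards where MSI misbehaves and it is unclear whether the GPU or something else is at fault.

/*
 * Generic MSI setup sketch (illustrative only, not nouveau code).
 */
#include <linux/pci.h>
#include <linux/interrupt.h>

static irqreturn_t example_isr(int irq, void *data)
{
	/* ... read and acknowledge the device's interrupt status ... */
	return IRQ_HANDLED;
}

static int example_setup_irq(struct pci_dev *pdev, void *priv)
{
	bool msi = false;

	/* Try MSI first; cards with errata are where this can go wrong. */
	if (pci_enable_msi(pdev) == 0)
		msi = true;

	/* pdev->irq is the MSI vector if enabled, else the INTx line. */
	if (request_irq(pdev->irq, example_isr, msi ? 0 : IRQF_SHARED,
			"example", priv)) {
		if (msi)
			pci_disable_msi(pdev);
		return -EBUSY;
	}
	return 0;
}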
2015 Nov 06 (0 replies) - Documentation request for MP warp error 0x10
On Fri, Nov 6, 2015 at 3:59 PM, Robert Morell <rmorell at nvidia.com> wrote:
> On Fri, Oct 02, 2015 at 06:05:21PM -0400, Ilia Mirkin wrote:
>> Could you advise on the proper way of indicating
>> that the memory is "global" to the op? I'm sure I'm just missing
>> something simple. If you show me what to look for in SM35 I can
>> probably find it
2019 Sep 16 (0 replies) - [PATCH 2/6] drm/nouveau: fault: Widen engine field
From: Thierry Reding <treding at nvidia.com>
The engine field in the FIFO fault information registers is actually 9
bits wide.
Signed-off-by: Thierry Reding <treding at nvidia.com>
---
drivers/gpu/drm/nouveau/nvkm/subdev/fault/gv100.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/fault/gv100.c
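The patch itself is truncated above, so as a purely hypothetical illustration (register layout and names invented, not taken from the patch): widening a field in a fault-information word usually amounts to growing the mask used when the raw register value is decoded, as sketched below.

/* Hypothetical decode sketch; bit positions are invented. */
#include <stdint.h>

struct fault_info {
	uint16_t engine;	/* needs 9 bits now, not 8 */
};

static void decode_fault(uint32_t info, struct fault_info *f)
{
	/* Before: f->engine = (info >> 8) & 0xff;   -- 8 bits */
	f->engine = (info >> 8) & 0x1ff;		/* 9 bits */
}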
2019 Sep 16 (0 replies) - [PATCH 1/6] drm/nouveau: fault: Store aperture in fault information
From: Thierry Reding <treding at nvidia.com>
The fault information register contains data about the aperture that
caused the failure. This can be useful in debugging aperture-related
programming bugs.
Signed-off-by: Thierry Reding <treding at nvidia.com>
---
drivers/gpu/drm/nouveau/include/nvkm/subdev/fault.h | 1 +
drivers/gpu/drm/nouveau/nvkm/engine/fifo/gk104.c | 3 ++-
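The diff is cut off above, so as a hypothetical sketch only (field layout and names invented): recording the aperture means decoding one more field from the fault-information word into the structure the fault handler already fills in, so it can later be reported alongside the engine and the other fields.

/* Hypothetical sketch; bit positions and names are invented. */
#include <stdint.h>
#include <stdio.h>

struct fault_info {
	uint16_t engine;
	uint8_t  aperture;	/* newly recorded field */
};

static void report_fault(uint32_t info)
{
	struct fault_info f = {
		.engine   = (info >> 8) & 0x1ff,
		.aperture = (info >> 20) & 0x3,	/* invented position */
	};

	printf("fault: engine %u aperture %u\n", f.engine, f.aperture);
}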
2019 Sep 17 (1 reply) - [PATCH 1/6] drm/nouveau: fault: Store aperture in fault information
On Tue, 17 Sep 2019 at 01:18, Thierry Reding <thierry.reding at gmail.com> wrote:
>
> From: Thierry Reding <treding at nvidia.com>
>
> The fault information register contains data about the aperture that
> caused the failure. This can be useful in debugging aperture-related
> programming bugs.
Should this be parsed for fault buffer entries too?
>
>
2015 May 21 (2 replies) - Fermi+ shader header docs
On Thu, May 21, 2015 at 10:05 AM, Robert Morell <rmorell at nvidia.com> wrote:
> Hi Ilia,
>
> On Sat, May 02, 2015 at 12:34:21PM -0400, Ilia Mirkin wrote:
>> Hi,
>>
>> As I'm looking to add some support to nouveau for features like atomic
>> counters and images, I'm running into some confusion about what the
>> first word of the shader header
2015 Nov 06 (1 reply) - Documentation request for MP warp error 0x10
On Fri, Nov 06, 2015 at 04:15:29PM -0500, Ilia Mirkin wrote:
> In order for ATOM.*/RED.* to work, the addresses in question must
> *NOT* be inside the 16MB local/shared windows. So if I'm getting
> that error, the address must be inside.
Yes, that's my understanding.
> If so, this may be a reasonable explanation for what I'm seeing --
Cool, I'm happy it helps.
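To make the constraint in this exchange concrete, here is a hypothetical sketch (window base addresses invented for illustration; the real ones come from the GPU's generic address-space setup) of the check implied above: a generic address is only usable by a global ATOM/RED operation if it does not land inside either 16MB local/shared window.

/* Illustrative only; window bases are made up. */
#include <stdbool.h>
#include <stdint.h>

#define WINDOW_SIZE		(16ull << 20)	/* 16MB */
#define SHARED_WINDOW_BASE	0xfe000000ull	/* invented */
#define LOCAL_WINDOW_BASE	0xff000000ull	/* invented */

static bool in_window(uint64_t addr, uint64_t base)
{
	return addr >= base && addr < base + WINDOW_SIZE;
}

/* Would this generic address be legal for a global atomic? */
static bool ok_for_global_atomic(uint64_t addr)
{
	return !in_window(addr, SHARED_WINDOW_BASE) &&
	       !in_window(addr, LOCAL_WINDOW_BASE);
}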
2013 Oct 24 (0 replies) - known MSI errata?
On Thu, Oct 24, 2013 at 04:03:12PM -0700, Ben Skeggs wrote:
> On Fri, Oct 25, 2013 at 7:43 AM, Robert Morell <rmorell at nvidia.com> wrote:
> > On Mon, Sep 30, 2013 at 10:44:12AM -0700, Lucas Stach wrote:
> >> Hi,
> >>
> >> Recently we tried to enable MSI interrupts with nouveau. Unfortunately
> >> there have been some reports of things failing
2017 Sep 27 (0 replies) - Semi-OT: hardware: NVidia proprietary driver, C7.4
On 27/09/17 16:49, m.roth at 5-cent.us wrote:
> Hi, folks,
>
> Well, still more fun (for values of fun approaching zero):
>
> 1. Went to install CUDA 9.0... well, gee, there is *no* CUDA 9.0.
> Even though I installed the 9 repo, all that I get is 8. I've
> used their webform, and am waiting on a reply.
> 2. I removed all nvidia packages.
2014 Dec 17 (0 replies) - NVIDIA dropping support for older hardware (G8xxx, G9xxx and GT2xx chipsets)
Hi all,
I just wanted to give a heads up to any NVIDIA users that NVIDIA are
dropping support for older hardware based on G8xxx, G9xxx and GT2xx
chipsets in their latest display drivers. The last version to support
these older chipsets will be the current Long Lived 340.xx branch.
If anyone is using NVIDIA driver packages from elrepo, legacy 340.xx
driver packages are now available for those
2017 Sep 27 (0 replies) - Semi-OT: hardware: NVidia proprietary driver, C7.4
On 27/09/17 07:56, Sorin Srbu wrote:
>> -----Original Message-----
>> From: CentOS [mailto:centos-bounces at centos.org] On Behalf Of Phil Perry
>> Sent: 26 September 2017 21:46
>> To: centos at centos.org
>> Subject: Re: [CentOS] Semi-OT: hardware: NVidia proprietary driver, C7.4
>>
>> On 26/09/17 18:40, m.roth at 5-cent.us wrote:
>>> This is
2017 Sep 26 (0 replies) - Semi-OT: hardware: NVidia proprietary driver, C7.4
On Tue, 2017-09-26 at 13:40 -0400, m.roth at 5-cent.us wrote:
> This is really frustrating. I've got a server with two K20c Tesla cards. I
> need to use the proprietary drivers to use the CUDA toolkit. Btw, I had no
> trouble at all with building for CentOS 7.3.
>
> I have what NVidia claims is the correct driver package, a 340 series. It
> appears to build, but then fails to
2017 Sep 26 (0 replies) - Semi-OT: hardware: NVidia proprietary driver, C7.4
On 26/09/17 18:40, m.roth at 5-cent.us wrote:
> This is really frustrating. I've got a server with two K20c Tesla cards. I
> need to use the proprietary drivers to use the CUDA toolkit. Btw, I had no
> trouble at all with building for CentOS 7.3.
>
> I have what NVidia claims is the correct driver package, a 340 series. It
> appears to build, but then fails to load. The only
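When a driver "appears to build, but then fails to load", a quick way to confirm whether the installed stack is actually usable is a tiny program against the CUDA driver API; this is a sketch added for illustration, not something from the thread. If the kernel module is not loaded, cuInit() fails instead of reporting devices.

/* Build with:  cc check.c -o check -lcuda */
#include <stdio.h>
#include <cuda.h>

int main(void)
{
	int ndev = 0, ver = 0;

	if (cuInit(0) != CUDA_SUCCESS) {
		fprintf(stderr, "cuInit failed: driver not usable\n");
		return 1;
	}
	cuDriverGetVersion(&ver);
	cuDeviceGetCount(&ndev);
	printf("driver API version %d, %d CUDA device(s)\n", ver, ndev);
	return 0;
}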