Displaying 20 results from an estimated 3000 matches similar to: "AppdB and Bugzilla suggestion"
2011 Aug 20
0
[LLVMdev] Xilinx zynq-7000 (7030) as a Gallium3D LLVM FPGA target
Luke Kenneth Casson Leighton wrote:
> I was just writing this:
> http://www.gp32x.com/board/index.php?/topic/60228-replicating-the-success-of-the-openpandora-discussion-v20/
>
> when something occurred to me halfway through, and I would
> greatly appreciate some help evaluating whether it's feasible.
>
> put these together:
>
2011 Aug 20
2
[LLVMdev] Xilinx zynq-7000 (7030) as a Gallium3D LLVM FPGA target
I was just writing this:
http://www.gp32x.com/board/index.php?/topic/60228-replicating-the-success-of-the-openpandora-discussion-v20/
when something occurred to me halfway through, and I would
greatly appreciate some help evaluating whether it's feasible.
put these together:
http://www.xilinx.com/products/silicon-devices/epp/zynq-7000/index.htm
2011 Aug 21
4
[LLVMdev] Xilinx zynq-7000 (7030) as a Gallium3D LLVM FPGA target
On Sun, Aug 21, 2011 at 12:48 AM, Nick Lewycky <nicholas at mxc.ca> wrote:
> The way in which Gallium3D targets LLVM is that it waits until it receives
> the shader program from the application, then compiles that down to LLVM IR.
> That's too late to start synthesizing hardware (unless you're planning to
> ship an FPGA as the graphics card, in which case reprogramming
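The timing point is easy to illustrate: the IR for a shader can only be constructed once the application has handed the shader over at runtime. A minimal sketch, not Gallium3D code, using only the LLVM-C API (llvm-c/Core.h):

/* Illustrative only: build a tiny "shader-like" function at runtime with
 * the LLVM-C API -- the step a Gallium3D driver can only start once the
 * application has supplied the shader program. */
#include <llvm-c/Core.h>

static LLVMModuleRef build_shader_module(void)
{
    LLVMModuleRef mod = LLVMModuleCreateWithName("shader");

    /* float scale(float x) { return x * 2.0f; } */
    LLVMTypeRef param = LLVMFloatType();
    LLVMTypeRef fnty  = LLVMFunctionType(LLVMFloatType(), &param, 1, 0);
    LLVMValueRef fn   = LLVMAddFunction(mod, "scale", fnty);

    LLVMBasicBlockRef entry = LLVMAppendBasicBlock(fn, "entry");
    LLVMBuilderRef b = LLVMCreateBuilder();
    LLVMPositionBuilderAtEnd(b, entry);

    LLVMValueRef x   = LLVMGetParam(fn, 0);
    LLVMValueRef two = LLVMConstReal(LLVMFloatType(), 2.0);
    LLVMBuildRet(b, LLVMBuildFMul(b, x, two, "scaled"));

    LLVMDisposeBuilder(b);
    return mod;  /* from here the IR would go to a JIT or code generator */
}

As the message above notes, that is too late a point to start synthesizing hardware.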
2011 Aug 21
0
[LLVMdev] Xilinx zynq-7000 (7030) as a Gallium3D LLVM FPGA target
Luke Kenneth Casson Leighton wrote:
> On Sun, Aug 21, 2011 at 12:48 AM, Nick Lewycky <nicholas at mxc.ca> wrote:
>
>> The way in which Gallium3D targets LLVM is that it waits until it receives
>> the shader program from the application, then compiles that down to LLVM IR.
>> That's too late to start synthesizing hardware (unless you're planning to
>>
2008 Oct 05
1
Nvidia regs (Re: need help relating your post on freedesktop)
On Fri, 3 Oct 2008 18:03:17 -0400 (EDT)
"Kolakkar, Pranay B" <pranay.kolakkar at gatech.edu> wrote:
> Hi Paalanen,
>
> I am a graduate student in Computer Science at Georgia Tech and I came
> across your page on freedesktop.org/~pq/rules-ng
>
> It was interesting that you had listed all the registers and addresses of
> the Nvidia GPU. Please let me know as
2009 Dec 17
1
Question about nv40_draw_array
Hi,
My name is Krzysztof and currently I'm working on porting nouveau
(gallium3d driver + libdrm + drm) to AROS Research OS
(http://www.aros.org). I completed quite a successful port of the "old" drm
(the one from libdrm git, now removed) and currently I'm working on a drm
port from the nouveau kernel tree git.
Right now I'm faced with a rather peculiar memory allocation/access
2017 May 07
2
multiple cards and monitors with xrandr and opengl
Dear Devs,
We have achieved a desktop of up to six monitors, with OpenGL running
successfully on the desktop, with the following setup/features:
* Ubuntu 16+
* Xrandr
* Nouveau driver
* Two gtx750 graphic cards
Each (identical) graphics card has 2x HDMI + 2x DVI connectors, which we
connect to the monitor array.
So far it works with six monitors, but we'd like to achieve eight.
However,
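For reference, the outputs that make up such a combined desktop can be enumerated programmatically through RandR. A minimal C sketch, assuming libX11 and libXrandr are installed (the file name and build line are just examples):

/* Minimal sketch: list RandR outputs on the current X screen and whether
 * each is connected.  Build with: cc list_outputs.c -lX11 -lXrandr */
#include <stdio.h>
#include <X11/Xlib.h>
#include <X11/extensions/Xrandr.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy)
        return 1;

    Window root = DefaultRootWindow(dpy);
    XRRScreenResources *res = XRRGetScreenResourcesCurrent(dpy, root);

    for (int i = 0; i < res->noutput; i++) {
        XRROutputInfo *out = XRRGetOutputInfo(dpy, res, res->outputs[i]);
        printf("%s: %s\n", out->name,
               out->connection == RR_Connected ? "connected" : "disconnected");
        XRRFreeOutputInfo(out);
    }

    XRRFreeScreenResources(res);
    XCloseDisplay(dpy);
    return 0;
}

The xrandr command-line tool prints the same information when run with no arguments.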
2016 Jul 16
7
[Bug 96952] New: Nouveau driver doesn't work well in Kdenlive
https://bugs.freedesktop.org/show_bug.cgi?id=96952
Bug ID: 96952
Summary: Nouveau driver doesn't work well in Kdenlive
Product: xorg
Version: unspecified
Hardware: Other
OS: All
Status: NEW
Severity: normal
Priority: medium
Component: Driver/nouveau
Assignee:
2017 May 07
2
multiple cards and monitors with xrandr and opengl
On 05/07/2017 11:12 PM, Ilia Mirkin wrote:
> On Sun, May 7, 2017 at 7:17 AM, Sampsa Riikonen <sampsa.riikonen at iki.fi> wrote:
>> Dear Devs,
>>
>> We have achieved a desktop of up to six monitors, with OpenGL running
>> successfully on the desktop, with the following setup/features:
>>
>> * Ubuntu 16+
>> * Xrandr
>> * Nouveau driver
>>
2016 Jan 19
4
[Bug 93778] New: Nouveau GeForce GT 640M VBIOS error
https://bugs.freedesktop.org/show_bug.cgi?id=93778
Bug ID: 93778
Summary: Nouveau GeForce GT 640M VBIOS error
Product: xorg
Version: unspecified
Hardware: x86-64 (AMD64)
OS: Linux (All)
Status: NEW
Severity: normal
Priority: medium
Component: Driver/nouveau
Assignee: nouveau at
2013 Mar 14
6
moderate rant on updates
So, another admin I work with rolled out most (but not kernel) updates to
6.4... but including xorg. I log out at the end of the day, and I'm hosed -
no X.
Now, my not-two-year-old workstation has an nvidia card, and I'd installed
kmod-nvidia from elrepo. I figured I'd fix my problem by finishing the
upgrade and rebooting.
Nope.
I try to upgrade kmod-nvidia from elrepo. Anyone got any
2001 Mar 07
1
fixme:pthread_kill_other_threads_np
I just installed Wine. I've read as much of the documentation as I could
find, yet cannot find a list of known problems.
I get this error every time I try to use it:
fixme:pthread_kill_other_threads_np
and then
fixme:console:SetConsoleScreenBufferSize (8,80x25): stub
followed by a lot more of the first error.
Then I get a couple of windows that pop up, one being the program I
2011 Sep 18
2
Segfault in SolidWorks with recent nvidia cards and drivers
Hello,
I upgraded my config to an i7-2600/Quadro 600 setup running Ubuntu Natty 64-bit with a 3.1-rc kernel (needed for correct Asus P8P67 EFI support) and the 280 or 285 Nvidia proprietary drivers, and I'm experiencing systematic segfaults when running 32-bit SolidWorks (version 2010 or 2011): the part is displayed in 3D, and then the application closes immediately. There are no traces in
2017 May 09
1
multiple cards and monitors with xrandr and opengl
Hi,
Thanks for your advice! I have a few follow-up questions (tagged
below Q1, Q2 and Q3). Any help is highly appreciated.
Regarding "reverse prime", etc., I have read the following page:
https://nouveau.freedesktop.org/wiki/Optimus/
So, if we want a single "macro" xscreen that spans two cards, for example:
Card 0, connected to monitor 0
Card 1, connected to
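Whether a given card can act as an output source or sink for that kind of setup is advertised through the RandR 1.4 provider capabilities. A minimal C sketch, assuming libXrandr 1.4 or newer (the file name and build line are just examples):

/* Minimal sketch: print each RandR provider and its capabilities, the
 * bits that PRIME / "reverse prime" configurations depend on.
 * Build with: cc providers.c -lX11 -lXrandr */
#include <stdio.h>
#include <X11/Xlib.h>
#include <X11/extensions/Xrandr.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy)
        return 1;

    Window root = DefaultRootWindow(dpy);
    XRRScreenResources *res = XRRGetScreenResourcesCurrent(dpy, root);
    XRRProviderResources *pr = XRRGetProviderResources(dpy, root);

    for (int i = 0; i < pr->nproviders; i++) {
        XRRProviderInfo *info = XRRGetProviderInfo(dpy, res, pr->providers[i]);
        printf("%s: capabilities 0x%x (source output: %d, sink output: %d)\n",
               info->name, info->capabilities,
               !!(info->capabilities & RR_Capability_SourceOutput),
               !!(info->capabilities & RR_Capability_SinkOutput));
        XRRFreeProviderInfo(info);
    }

    XRRFreeProviderResources(pr);
    XRRFreeScreenResources(res);
    XCloseDisplay(dpy);
    return 0;
}

The same information is available from the command line with xrandr --listproviders.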
2013 Apr 22
0
[LLVMdev] GSoC proposal: TGSI compiler back-end.
Francisco Jerez <currojerez at riseup.net> writes:
> Although I'm sending this as a GSoC proposal, I'm well aware that the
> amount of work that a project of this kind involves largely exceeds the
> scope of the GSoC program. I think that's okay: my work here wouldn't
> be finished at the end of this summer by any means, it would merely be a
> start.
>
2013 Apr 04
2
[LLVMdev] GSoC proposal: TGSI compiler back-end.
Although I'm sending this as a GSoC proposal, I'm well aware that the
amount of work that a project of this kind involves largely exceeds the
scope of the GSoC program. I think that's okay: my work here wouldn't
be finished at the end of this summer by any means, it would merely be a
start.
TGSI is the intermediate representation that all open-source GPU drivers
using the
2008 Jul 17
1
nouveau help/testing
On Thu, 17 Jul 2008 15:49:42 +0200
Michał Wiśniewski <brylozketrzyn at gmail.com> wrote:
> Hi
>
> I'm writing to you because I want to help with nouveau. I have a GF 5200 card and I
> can dump any data you need from its registers, of course, available to dump
> with nvclock. Just write what test you want me to perform (if possible, with
> explanations; I'll set
2012 Feb 07
7
[Bug 45752] New: Debian Wheezy with xfce4-power-manager and nouveau fails to resume from hibernate
https://bugs.freedesktop.org/show_bug.cgi?id=45752
Bug #: 45752
Summary: Debian Wheezy with xfce4-power-manager and nouveau
fails to resume from hibernate
Classification: Unclassified
Product: xorg
Version: 7.7 (2011)
Platform: x86-64 (AMD64)
OS/Version: Linux (All)
Status: NEW
Severity:
2009 Mar 31
1
(patch) Gallium NV50: honor bypass_vs_clip_and_viewport
When trying out the Gallium3D NV50 driver (out of curiosity) with a small OpenGL
program that renders two rotating triangles partially occluding each other,
I noticed that depth buffer clearing by rendering a quad
(st_cb_clear.c/clear_with_quad) didn't work properly.
I found this was because the rasterizer state that is set by clear_with_quad
has bypass_vs_clip_and_viewport = 1 which would only be
2016 May 14
2
R external pointer and GPU memory leak problem
My question is based on a project I have partially done, but there is still something I'm not clear about.
My goal is to create an R package containing GPU functions (some are from the Nvidia CUDA library, some are my self-defined CUDA functions).
My design is quite different from the current R GPU packages: I want to create an R object (an external pointer) that points to a GPU address, and run my GPU function
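On the leak concern specifically, the usual pattern is to tie the device allocation's lifetime to the external pointer with a finalizer, so the GPU memory is released when R's garbage collector drops the handle. A minimal C sketch; the names gpu_alloc and gpu_finalizer are made up for illustration, and it assumes the R headers and the CUDA runtime are available:

/* Minimal sketch: wrap a CUDA device allocation in an R external pointer
 * and register a finalizer so the memory is freed when the R object is
 * garbage collected (or at interpreter exit). */
#include <R.h>
#include <Rinternals.h>
#include <cuda_runtime.h>

static void gpu_finalizer(SEXP ptr)
{
    void *dev = R_ExternalPtrAddr(ptr);
    if (dev != NULL) {
        cudaFree(dev);            /* release the device memory */
        R_ClearExternalPtr(ptr);  /* guard against a double free */
    }
}

/* .Call entry point: allocate n doubles on the GPU, return a handle to R */
SEXP gpu_alloc(SEXP n_)
{
    size_t n = (size_t) asReal(n_);
    void *dev = NULL;

    if (cudaMalloc(&dev, n * sizeof(double)) != cudaSuccess)
        error("cudaMalloc failed");

    SEXP ptr = PROTECT(R_MakeExternalPtr(dev, R_NilValue, R_NilValue));
    R_RegisterCFinalizerEx(ptr, gpu_finalizer, TRUE);  /* TRUE: also run on exit */
    UNPROTECT(1);
    return ptr;
}

From R, the handle would be obtained with something like h <- .Call("gpu_alloc", 1e6) and passed back to other .Call wrappers, which read the device address with R_ExternalPtrAddr.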