Displaying 20 results from an estimated 21 matches for "a097".
2018 Nov 21
2
About Porting Android to nouveau
...ORM, A8B8G8R8, TB),
I want the last param to be “TD” or “ID” (to support PIPE_BIND_DISPLAY_TARGET), so that Android works. I modified the code, but got display issues and a kernel issue:
[ 93.313995] nouveau 0000:01:00.0: gr: DATA_ERROR 0000009c [] ch 5 [007f6c6000 RenderThread[5395]] subc 0 class a097 mthd 15f0 data 002c0014
[ 93.314082] nouveau 0000:01:00.0: gr: DATA_ERROR 0000009c [] ch 5 [007f6c6000 RenderThread[5395]] subc 0 class a097 mthd 15f0 data 002d0016
[ 93.314112] nouveau 0000:01:00.0: gr: DATA_ERROR 0000009c [] ch 5 [007f6c6000 RenderThread[5395]] subc 0 class a097 mthd 15f0 dat...
2019 Feb 16
0
About Porting Android to nouveau
...y bits. Some winsys's get upset
> > if you have both RGBA and BGRA ordering though. (The reason that we
> > prefer BGRA8 on X is largely legacy.)
> >
> >> [ 93.313995] nouveau 0000:01:00.0: gr: DATA_ERROR 0000009c [] ch 5 [007f6c6000 RenderThread[5395]] subc 0 class a097 mthd 15f0 data 002c0014
> >> [ 93.314082] nouveau 0000:01:00.0: gr: DATA_ERROR 0000009c [] ch 5 [007f6c6000 RenderThread[5395]] subc 0 class a097 mthd 15f0 data 002d0016
> >> [ 93.314112] nouveau 0000:01:00.0: gr: DATA_ERROR 0000009c [] ch 5 [007f6c6000 RenderThread[5395]] sub...
2016 Dec 28
1
Resolving Schema & Configuration Replication Issues
...ds].
Is it possible to attempt to retry the replication somehow? We have
since updated the S4 DCs. Something like a "drs replicate --full-sync"?
CN=Schema,CN=Configuration,DC=micore,DC=us
Default-First-Site-Name\TEMP2008R2DC via RPC
DSA object GUID: c8d5c583-a097-4265-858a-cb67797ebb05
Last attempt @ Wed Dec 28 10:46:41 2016 EST failed,
result 58 (WERR_BAD_NET_RESP)
15347 consecutive failure(s).
Last success @ Fri Nov 4 14:59:16 2016 EDT
CN=Configuration,DC=micore,DC=us
Default-First-Site-Name\...
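The retry asked about above maps onto `samba-tool drs replicate`, which pulls a naming context from a source DC onto a destination DC. Below is a dry-run sketch (the command is echoed, not executed): the destination DC name is a hypothetical placeholder, while the source DC and partition DN are taken from the showrepl output above.

```shell
# Dry-run sketch of a full-sync pull of the Schema partition.
# DEST_DC is a hypothetical placeholder; SRC_DC and NC come from the
# showrepl output above. Drop the echo to actually run the command.
DEST_DC=samba-dc1.micore.us
SRC_DC=TEMP2008R2DC
NC='CN=Schema,CN=Configuration,DC=micore,DC=us'

echo samba-tool drs replicate "$DEST_DC" "$SRC_DC" "$NC" --full-sync
```

If the destination still refuses the changes, `samba-tool drs showrepl` on each DC will show whether the consecutive-failure count keeps climbing.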
2007 Oct 07
3
rsync error
Skipped content of type multipart/alternative
-------------- next part --------------
Executing: rsync.exe -v -rlt --delete "/cygdrive/C/Documents and Settings/User/Local Settings/Application Data/Identities/{DFF16927-88E6-4EAA-A097-460B7E65289B}/Microsoft/Outlook Express/" "localhost::Backup/Outlook Express/"
building file list ...
done
./
Deleted Items.dbx
rsync: writefd_unbuffered failed to write 4 bytes: phase "unknown" [sender]: Connection reset by peer (104)
rsync: read error: Connection res...
2016 Nov 16
4
Schema Change Breaks Replication
I believe a schema change on a Windows DC (2008R2) has broken
replication with our S4 DCs. Anyone have any tips or pointers to
resolve this?
I have three S4 DCs [CentOS6] and one Windows 2008R2 DC. The Windows
2008R2 DC has the schema master FSMO, and I believe the Exchange schema
was added.
I am willing to pay US dollars to get this issue resolved. I need the
replication restored, the
2005 Dec 06
2
Mac OS X clients not binding to a Samba+LDAP PDC
...A 192.168.101.50
ldap A 192.168.101.50
pruebas A 192.168.101.50
_ldap._tcp.Default-First-Site-Name._sites.dc._msdcs SRV 0 100 389
pruebas.valeeuro.com
_ldap._tcp.dc._msdcs SRV 0 100 389 pruebas.valeeuro.com
_ldap._tcp.aab455e4-bbb2-408b-a097-bb359f315574.domains._msdcs SRV 0 100
389 pruebas.valeeuro.com
_ldap._tcp.Default-First-Site-Name._sites.gc._msdcs SRV 0 100 389
pruebas.valeeuro.com
_ldap._tcp.gc._msdcs SRV 0 100 389 pruebas.valeeuro.com
_ldap._tcp.pdc._msdcs SRV 0 100 389 pruebas.valeeuro.com
_gc._...
2012 Feb 21
3
libvirt doesn't boot kVM, hangs, qemu on 100% CPU
...9;ll be hugely in debt for your help and welcome any
suggestions, ready to provide as much information as
you need to solve this issue.
So long,
i
--
Igor Galić
Tel: +43 (0) 664 886 22 883
Mail: i.galic at brainsware.org
URL: http://brainsware.org/
GPG: 6880 4155 74BD FD7C B515 2EA5 4B1D 9E08 A097 C9AE
-------------- next part --------------
A non-text attachment was scrubbed...
Name: web.xml
Type: application/xml
Size: 2971 bytes
Desc: not available
URL: <http://listman.redhat.com/archives/libvirt-users/attachments/20120221/acc4428f/attachment.wsdl>
2012 Sep 21
0
[LLVMdev] [PATCH] Building compiler-rt on Solaris
...ding compiler-rt on Solaris
http://llvm.org/bugs/show_bug.cgi?id=13896
but was told on IRC to submit patches to the ML.
This is it.
So long,
i
--
Igor Galić
Tel: +43 (0) 664 886 22 883
Mail: i.galic at brainsware.org
URL: http://brainsware.org/
GPG: 6880 4155 74BD FD7C B515 2EA5 4B1D 9E08 A097 C9AE
-------------- next part --------------
A non-text attachment was scrubbed...
Name: byteorder.patch
Type: text/x-patch
Size: 790 bytes
Desc: not available
URL: <http://lists.llvm.org/pipermail/llvm-dev/attachments/20120921/9bc19277/attachment.bin>
2012 Feb 23
0
[SeaBIOS] RFE: Print amount of RAM
...<bios useserial='yes' /> we could
introduce <bios useserial='debug' /> perhaps.
> -Kevin
Thanks Kevin.
So long,
i
--
Igor Galić
Tel: +43 (0) 664 886 22 883
Mail: i.galic at brainsware.org
URL: http://brainsware.org/
GPG: 6880 4155 74BD FD7C B515 2EA5 4B1D 9E08 A097 C9AE
2017 Jul 07
2
I/O error for one folder within the mountpoint
...8168-d983c4a82475>
<gfid:5b2ea4e4-ce84-4f07-bd66-5a0e17edb2b0>
<gfid:baedf8a2-1a3f-4219-86a1-c19f51f08f4e>
<gfid:8261c22c-e85a-4d0e-b057-196b744f3558>
<gfid:842b30c1-6016-45bd-9685-6be76911bd98>
<gfid:1fcaef0f-c97d-41e6-87cd-cd02f197bf38>
<gfid:9d041c80-b7e4-4012-a097-3db5b09fe471>
<gfid:ff48a14a-c1d5-45c6-a52a-b3e2402d0316>
<gfid:01409b23-eff2-4bda-966e-ab6133784001>
<gfid:c723e484-63fc-4267-b3f0-4090194370a0>
<gfid:fb1339a8-803f-4e29-b0dc-244e6c4427ed>
<gfid:056f3bba-6324-4cd8-b08d-bdf0fca44104>
<gfid:a8f6d7e5-0ff2-4747-89f3...
2016 Nov 13
1
[Bug 98701] New: [NVE6] Desktop freeze, fifo read fault at 0000000000 engine 00 [GR] client 14 [SCC] reason 02 [PTE] on channel 21
...ntact[19175]: Unknown handle 0x0000003c
[ 9655.527542] nouveau 0000:01:00.0: kontact[19175]: validate_init
[ 9655.527546] nouveau 0000:01:00.0: kontact[19175]: validate: -2
[ 9655.534613] nouveau 0000:01:00.0: gr: DATA_ERROR 0000000c [INVALID_BITFIELD]
ch 21 [023e9d4000 kontact[19175]] subc 0 class a097 mthd 2384 data 20050004
[ 9655.537255] nouveau 0000:01:00.0: kontact[19175]: Unknown handle 0x0000003c
[ 9655.537261] nouveau 0000:01:00.0: kontact[19175]: validate_init
[ 9655.537264] nouveau 0000:01:00.0: kontact[19175]: validate: -2
[ 9655.537530] nouveau 0000:01:00.0: fifo: read fault at 000000...
2017 Jul 07
0
I/O error for one folder within the mountpoint
...;gfid:5b2ea4e4-ce84-4f07-bd66-5a0e17edb2b0>
> <gfid:baedf8a2-1a3f-4219-86a1-c19f51f08f4e>
> <gfid:8261c22c-e85a-4d0e-b057-196b744f3558>
> <gfid:842b30c1-6016-45bd-9685-6be76911bd98>
> <gfid:1fcaef0f-c97d-41e6-87cd-cd02f197bf38>
> <gfid:9d041c80-b7e4-4012-a097-3db5b09fe471>
> <gfid:ff48a14a-c1d5-45c6-a52a-b3e2402d0316>
> <gfid:01409b23-eff2-4bda-966e-ab6133784001>
> <gfid:c723e484-63fc-4267-b3f0-4090194370a0>
> <gfid:fb1339a8-803f-4e29-b0dc-244e6c4427ed>
> <gfid:056f3bba-6324-4cd8-b08d-bdf0fca44104>
> &l...
2017 Jul 07
2
I/O error for one folder within the mountpoint
...4f07-bd66-5a0e17edb2b0>
>> <gfid:baedf8a2-1a3f-4219-86a1-c19f51f08f4e>
>> <gfid:8261c22c-e85a-4d0e-b057-196b744f3558>
>> <gfid:842b30c1-6016-45bd-9685-6be76911bd98>
>> <gfid:1fcaef0f-c97d-41e6-87cd-cd02f197bf38>
>> <gfid:9d041c80-b7e4-4012-a097-3db5b09fe471>
>> <gfid:ff48a14a-c1d5-45c6-a52a-b3e2402d0316>
>> <gfid:01409b23-eff2-4bda-966e-ab6133784001>
>> <gfid:c723e484-63fc-4267-b3f0-4090194370a0>
>> <gfid:fb1339a8-803f-4e29-b0dc-244e6c4427ed>
>> <gfid:056f3bba-6324-4cd8-b08d-bdf0...
2017 Jul 07
0
I/O error for one folder within the mountpoint
...b0>
>>> <gfid:baedf8a2-1a3f-4219-86a1-c19f51f08f4e>
>>> <gfid:8261c22c-e85a-4d0e-b057-196b744f3558>
>>> <gfid:842b30c1-6016-45bd-9685-6be76911bd98>
>>> <gfid:1fcaef0f-c97d-41e6-87cd-cd02f197bf38>
>>> <gfid:9d041c80-b7e4-4012-a097-3db5b09fe471>
>>> <gfid:ff48a14a-c1d5-45c6-a52a-b3e2402d0316>
>>> <gfid:01409b23-eff2-4bda-966e-ab6133784001>
>>> <gfid:c723e484-63fc-4267-b3f0-4090194370a0>
>>> <gfid:fb1339a8-803f-4e29-b0dc-244e6c4427ed>
>>> <gfid:056f3bba...
2017 Jul 07
0
I/O error for one folder within the mountpoint
On 07/07/2017 01:23 PM, Florian Leleu wrote:
>
> Hello everyone,
>
> first time on the ML, so excuse me if I'm not following the rules well;
> I'll improve if I get comments.
>
> We have one volume "applicatif" on three nodes (2 plus 1 arbiter); each of
> the following commands was run on node ipvr8.xxx:
>
> # gluster volume info applicatif
>
> Volume
2017 Jul 07
2
I/O error for one folder within the mountpoint
Hello everyone,
first time on the ML, so excuse me if I'm not following the rules well;
I'll improve if I get comments.
We have one volume "applicatif" on three nodes (2 plus 1 arbiter); each of
the following commands was run on node ipvr8.xxx:
# gluster volume info applicatif
Volume Name: applicatif
Type: Replicate
Volume ID: ac222863-9210-4354-9636-2c822b332504
Status: Started
2012 Jun 07
2
Performance optimization tips Gluster 3.3? (small files / directory listings)
Hi,
I'm using Gluster 3.3.0-1.el6.x86_64, on two storage nodes, replicated mode
(fs1, fs2)
Node specs: CentOS 6.2, Intel quad-core 2.8GHz, 4GB RAM, 3ware RAID, 2x500GB
SATA 7200rpm (RAID1 for OS), 6x1TB SATA 7200rpm (RAID10 for /data), 1Gbit
network
I've mounted the data partition to web1, a dual quad-core 2.8GHz with 8GB RAM,
using glusterfs. (also tried NFS -> Gluster mount)
We have 50Gb of
2016 Jan 07
57
[Bug 93629] New: [NVE6] complete system freeze, PGRAPH engine fault on channel 2, SCHED_ERROR [ CTXSW_TIMEOUT ]
https://bugs.freedesktop.org/show_bug.cgi?id=93629
Bug ID: 93629
Summary: [NVE6] complete system freeze, PGRAPH engine fault on
channel 2, SCHED_ERROR [ CTXSW_TIMEOUT ]
Product: xorg
Version: unspecified
Hardware: x86-64 (AMD64)
OS: Linux (All)
Status: NEW
Severity: normal
2017 Oct 26
0
not healing one file
Hey Richard,
Could you share the following information, please?
1. gluster volume info <volname>
2. getfattr output of that file from all the bricks
getfattr -d -e hex -m . <brickpath/filepath>
3. glustershd & glfsheal logs
Regards,
Karthik
On Thu, Oct 26, 2017 at 10:21 AM, Amar Tumballi <atumball at redhat.com> wrote:
> On a side note, try recently released health
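Item 2 of the request above (getfattr output from all the bricks) can be gathered with a small loop. The brick hosts, brick path, and file path below are hypothetical placeholders — the thread doesn't name them — and the per-brick commands are only echoed here, not executed:

```shell
# Print one getfattr command per brick for the unhealed file.
# Hosts, brick path, and RELPATH are hypothetical placeholders.
BRICKS="node1:/data/brick node2:/data/brick arbiter:/data/brick"
RELPATH="problem-dir/some-file"
CMDS=""
for b in $BRICKS; do
  host=${b%%:*}            # part before the colon: the brick host
  path=${b#*:}             # part after the colon: the brick path
  cmd="ssh $host getfattr -d -e hex -m . $path/$RELPATH"
  CMDS="$CMDS$cmd
"
  echo "$cmd"
done
```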
2017 Oct 26
3
not healing one file
On a side note, try the recently released health report tool, and see if it
diagnoses any issues in the setup. Currently you may have to run it on all
three machines.
On 26-Oct-2017 6:50 AM, "Amar Tumballi" <atumball at redhat.com> wrote:
> Thanks for this report. This week many of the developers are at Gluster
> Summit in Prague, will be checking this and respond next