Displaying 9 results from an estimated 9 matches for "4e92".
2012 Jun 27
1
DNS issue.
...irst-Site-Name\PDC
DSA Options: 0x00000001
DSA object GUID: 56003cd3-d15b-4825-915f-37b9e2952f2a
DSA invocationId: ec8a9ed7-ce1a-449e-8321-97c715375445
==== INBOUND NEIGHBORS ====
DC=DomainDnsZones,DC=abc,DC=com
Default-First-Site-Name\BDC via RPC
DSA object GUID: adf1d7c5-4e92-400f-9bfb-17986c6d20a2
Last attempt @ Wed Jun 27 08:51:47 2012 IST failed, result 2 (WERR_BADFILE)
216 consecutive failure(s).
Last success @ NTTIME(0)
DC=ForestDnsZones,DC=abc,DC=com
Default-First-Site-Name\BDC via RPC
DSA ob...
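The replication report above is the shape of output `samba-tool drs showrepl` prints on a DC. A minimal sketch for pulling just the failing partitions and their error codes out of a saved copy (the filename showrepl.txt is an assumption, and the sample file is faked from the lines above rather than captured live):

```shell
# Sketch: on the DC you would capture the status with
#   samba-tool drs showrepl > showrepl.txt
# Here we fake a saved copy using lines from the report above,
# then filter for the failure summary lines.
cat > showrepl.txt <<'EOF'
DC=DomainDnsZones,DC=abc,DC=com
Default-First-Site-Name\BDC via RPC
Last attempt @ Wed Jun 27 08:51:47 2012 IST failed, result 2 (WERR_BADFILE)
216 consecutive failure(s).
EOF
grep -E 'failed, result|consecutive failure' showrepl.txt
```

WERR_BADFILE on the DNS partitions with a Last success of NTTIME(0) means those partitions have never replicated, which is consistent with the "DNS issue" subject.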
2014 Jun 27
1
geo-replication status faulty
...ick):204:errlog] Popen: command "ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.pem -oControlMaster=auto -S /tmp/gsyncd-aux-ssh-DIu2bR/139ffeb96cd3f82a30d4e4ff1ff33f0b.sock root at node003 /nonexistent/gsyncd --session-owner 46d54a00-06a5-4e92-8ea4-eab0aa454c22 -N --listen --timeout 120 gluster://localhost:gluster_vol1" returned with 127, saying:
[2014-06-26 17:09:10.264806] E [resource(/data/glusterfs/vol0/brick0/brick):207:logerr] Popen: ssh> bash: /nonexistent/gsyncd: No such file or directory
[2014-06-26 17:09:10.266753] I [s...
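The key detail in the Popen line above is the return code 127: POSIX shells reserve it for "command not found", which matches the `bash: /nonexistent/gsyncd: No such file or directory` that follows. A hedged sketch reproducing the exit code locally; the fix shown in the comment is the commonly documented one, with session names and paths as illustrative assumptions, not taken from the source:

```shell
# 127 == "command not found" in POSIX shells; reproduce it locally:
sh -c '/nonexistent/gsyncd' 2>/dev/null
echo "exit=$?"   # prints exit=127

# Assumed fix (volume, slave, and gsyncd path are illustrative):
# point the geo-replication session at the slave's real gsyncd, e.g.
#   gluster volume geo-replication gluster_vol1 node003::gluster_vol1 \
#       config remote-gsyncd /usr/libexec/glusterfs/gsyncd
```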
2018 Feb 05
2
Fwd: Troubleshooting glusterfs
...and [2018-02-04 10:50:27.762292]
[2018-02-04 10:55:35.256018] W [MSGID: 109011]
[dht-layout.c:186:dht_layout_search]
48-gv0-dht: no subvolume for hash (value) = 28918667
[2018-02-04 10:55:35.387073] W [fuse-bridge.c:2398:fuse_writev_cbk]
0-glusterfs-fuse: 4006263: WRITE => -1
gfid=54e6f8ea-27d7-4e92-ae64-5e198bd3cb42
fd=0x7ffa38036bf0 (?????? ?????/??????)
[2018-02-04 10:55:35.407554] W [fuse-bridge.c:1377:fuse_err_cbk]
0-glusterfs-fuse: 4006264: FLUSH() ERR => -1 (?????? ?????/??????)
[2018-02-04 10:55:59.677734] W [MSGID: 109011]
[dht-layout.c:186:dht_layout_search]
48-gv0-dht: no subvolu...
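The repeated `dht_layout_search` warning means DHT could not map the file name's hash (28918667 here) to any subvolume, which usually indicates a stale or missing layout on a brick. A sketch for collecting the affected hash values from a client log; the log line format is taken from the warnings above, while the filename mnt-gv0.log is an assumption:

```shell
# Fake a client log with a warning line from the excerpt above,
# then extract the unmapped hash values.
cat > mnt-gv0.log <<'EOF'
48-gv0-dht: no subvolume for hash (value) = 28918667
EOF
awk -F'= ' '/no subvolume for hash/ {print $2}' mnt-gv0.log | sort -u
```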
2018 Feb 05
0
Fwd: Troubleshooting glusterfs
....762292]
> [2018-02-04 10:55:35.256018] W [MSGID: 109011]
> [dht-layout.c:186:dht_layout_search] 48-gv0-dht: no subvolume for hash
> (value) = 28918667
> [2018-02-04 10:55:35.387073] W [fuse-bridge.c:2398:fuse_writev_cbk]
> 0-glusterfs-fuse: 4006263: WRITE => -1 gfid=54e6f8ea-27d7-4e92-ae64-5e198bd3cb42
> fd=0x7ffa38036bf0 (?????? ?????/??????)
> [2018-02-04 10:55:35.407554] W [fuse-bridge.c:1377:fuse_err_cbk]
> 0-glusterfs-fuse: 4006264: FLUSH() ERR => -1 (?????? ?????/??????)
> [2018-02-04 10:55:59.677734] W [MSGID: 109011]
> [dht-layout.c:186:dht_layout_searc...
2018 Feb 05
2
Fwd: Troubleshooting glusterfs
...018-02-04 10:55:35.256018] W [MSGID: 109011]
>> [dht-layout.c:186:dht_layout_search] 48-gv0-dht: no subvolume for hash
>> (value) = 28918667
>> [2018-02-04 10:55:35.387073] W [fuse-bridge.c:2398:fuse_writev_cbk]
>> 0-glusterfs-fuse: 4006263: WRITE => -1 gfid=54e6f8ea-27d7-4e92-ae64-5e198bd3cb42
>> fd=0x7ffa38036bf0 (?????? ?????/??????)
>> [2018-02-04 10:55:35.407554] W [fuse-bridge.c:1377:fuse_err_cbk]
>> 0-glusterfs-fuse: 4006264: FLUSH() ERR => -1 (?????? ?????/??????)
>> [2018-02-04 10:55:59.677734] W [MSGID: 109011]
>> [dht-layout.c:...
2018 Feb 07
0
Fwd: Troubleshooting glusterfs
...35.256018] W [MSGID: 109011]
>>> [dht-layout.c:186:dht_layout_search] 48-gv0-dht: no subvolume for hash
>>> (value) = 28918667
>>> [2018-02-04 10:55:35.387073] W [fuse-bridge.c:2398:fuse_writev_cbk]
>>> 0-glusterfs-fuse: 4006263: WRITE => -1 gfid=54e6f8ea-27d7-4e92-ae64-5e198bd3cb42
>>> fd=0x7ffa38036bf0 (?????? ?????/??????)
>>> [2018-02-04 10:55:35.407554] W [fuse-bridge.c:1377:fuse_err_cbk]
>>> 0-glusterfs-fuse: 4006264: FLUSH() ERR => -1 (?????? ?????/??????)
>>> [2018-02-04 10:55:59.677734] W [MSGID: 109011]
>>...
2017 Oct 26
0
not healing one file
Hey Richard,
Could you share the following information, please?
1. gluster volume info <volname>
2. getfattr output of that file from all the bricks
getfattr -d -e hex -m . <brickpath/filepath>
3. glustershd & glfsheal logs
Regards,
Karthik
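The three items requested above can be gathered in one pass; a hedged sketch that writes them into a small script, where the volume name, brick path, and file name are placeholders and the glustershd/glfsheal log locations are the defaults:

```shell
# Hedged: collect Karthik's three items in one script
# (volume name, brick path, and file are placeholders).
cat > gather-heal-info.sh <<'EOF'
#!/bin/sh
VOL=home                      # 1. volume name (assumed)
BRICK=/data/brick/home        #    brick path (assumed)
FILE=path/to/unhealed/file    #    the file that will not heal
gluster volume info "$VOL"
getfattr -d -e hex -m . "$BRICK/$FILE"
tail -n 100 /var/log/glusterfs/glustershd.log \
            "/var/log/glusterfs/glfsheal-$VOL.log"
EOF
sh -n gather-heal-info.sh && echo "syntax ok"
```

Run the getfattr step on every brick, since comparing the trusted.afr changelog xattrs across bricks is what reveals which copy is marked pending.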
On Thu, Oct 26, 2017 at 10:21 AM, Amar Tumballi <atumball at redhat.com> wrote:
> On a side note, try the recently released health
2017 Oct 26
3
not healing one file
On a side note, try the recently released health report tool and see if it
diagnoses any issues in the setup. Currently you may have to run it on all
three machines.
On 26-Oct-2017 6:50 AM, "Amar Tumballi" <atumball at redhat.com> wrote:
> Thanks for this report. Many of the developers are at Gluster Summit in
> Prague this week; they will be checking this and respond next
2017 Oct 26
2
not healing one file
...496 (e02a8b9b-6b93-4f12-abdf-cf716e2bc652) on home-client-2
[2017-10-25 10:14:12.310381] W [MSGID: 108015] [afr-self-heal-entry.c:56:afr_selfheal_entry_delete] 0-home-replicate-0: expunging file a3f5a769-8859-48e3-96ca-60a988eb9358/B3857D3BD45085F6DA0FF3A502CF388A5764A052 (1ac15de5-7f4b-44da-9f76-974e92b94d16) on home-client-2
[2017-10-25 10:14:12.330767] W [MSGID: 108015] [afr-self-heal-entry.c:56:afr_selfheal_entry_delete] 0-home-replicate-0: expunging file a3f5a769-8859-48e3-96ca-60a988eb9358/ADFFEFF5C34202705A66703CF4A3A372BC83C369 (6981b1d5-c775-49ad-94f7-8a5e8c74e42c) on home-client-2
[2017-...