Search for: 40f6

Displaying 6 results from an estimated 6 matches for "40f6".

2010 Apr 20 (0 replies): 3.4.6 to 3.5.2 difficulties
...issue though, windows 2003 servers couldn't login users into the domain. These servers were joined as domain members. Logs as follows: > The browser service was unable to retrieve a list of servers from the > browser master \\LDAP on the network > \Device\NetBT_Tcpip_{87C2EC8F-2437-40F6-A637-4B7B3A70F5D5}. > > Browser master: \\LDAP > Network: \Device\NetBT_Tcpip_{87C2EC8F-2437-40F6-A637-4B7B3A70F5D5} and also as a result this: > This computer was not able to set up a secure session with a domain > controller in domain USAINTEANNE due to the following: > The...
2013 Dec 09 (0 replies): Gluster - replica - Unable to self-heal contents of '/' (possible split-brain)
...lume Name: stor_fast Type: Distribute Volume ID: ad82b554-8ff0-4903-be32-f8dcb9420f31 Status: Started Number of Bricks: 1 Transport-type: tcp Bricks: Brick1: blade7.xen:/gluster/stor_fast Options Reconfigured: nfs.port: 2049 Volume Name: stor1 Type: Replicate Volume ID: 6bd88164-86c2-40f6-9846-b21e90303e73 Status: Started Number of Bricks: 1 x 2 = 2 Transport-type: tcp Bricks: Brick1: blade7.xen:/gluster/stor1 Brick2: blade6.xen:/gluster/stor1 Options Reconfigured: nfs.port: 2049 [root@blade7 stor1]# gluster volume info Volume Name: stor_fast Type: Distribute Volum...
2013 Aug 29 (2 replies): Puma fails when it restarts itself
...n email to rubyonrails-talk+unsubscribe-/JYPxA39Uh5TLH3MbocFF+G/Ez6ZCGd0@public.gmane.org To post to this group, send email to rubyonrails-talk-/JYPxA39Uh5TLH3MbocFF+G/Ez6ZCGd0@public.gmane.org To view this discussion on the web visit https://groups.google.com/d/msgid/rubyonrails-talk/9ac8c9a9-ea48-40f6-87dd-2312d460925d%40googlegroups.com. For more options, visit https://groups.google.com/groups/opt_out.
2017 Oct 26 (0 replies): not healing one file
Hey Richard, Could you share the following informations please? 1. gluster volume info <volname> 2. getfattr output of that file from all the bricks getfattr -d -e hex -m . <brickpath/filepath> 3. glustershd & glfsheal logs Regards, Karthik On Thu, Oct 26, 2017 at 10:21 AM, Amar Tumballi <atumball at redhat.com> wrote: > On a side note, try recently released health
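The three diagnostic steps Karthik asks for above can be sketched as the following commands. This is a minimal sketch, not taken from the thread verbatim: the volume name `home` is inferred from the `home-replicate-0` entries in the later log excerpt, the brick and file paths are placeholders, and the log locations assume a default `/var/log/glusterfs` install.

```shell
# 1. Volume layout and reconfigured options (run on any node in the pool)
gluster volume info home

# 2. AFR extended attributes of the unhealed file, collected from every
#    brick that hosts it (brick path and file path are placeholders)
getfattr -d -e hex -m . /gluster/home/path/to/file

# 3. Self-heal daemon and heal-info logs (default log directory assumed)
tail -n 100 /var/log/glusterfs/glustershd.log
tail -n 100 /var/log/glusterfs/glfsheal-home.log
```

Comparing the `trusted.afr.*` xattrs returned by step 2 across bricks is what reveals which copy the replicas accuse of being stale.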
2017 Oct 26 (3 replies): not healing one file
On a side note, try recently released health report tool, and see if it does diagnose any issues in setup. Currently you may have to run it in all the three machines. On 26-Oct-2017 6:50 AM, "Amar Tumballi" <atumball at redhat.com> wrote: > Thanks for this report. This week many of the developers are at Gluster > Summit in Prague, will be checking this and respond next
2017 Oct 26 (2 replies): not healing one file
...FE04F76F341B33B (c37750b3-9d10-471b-bc98-3f2aa974e40b) on home-client-2 [2017-10-25 10:14:02.130576] W [MSGID: 108015] [afr-self-heal-entry.c:56:afr_selfheal_entry_delete] 0-home-replicate-0: expunging file a3f5a769-8859-48e3-96ca-60a988eb9358/315D5A2D320DF4FA250F2E139EB05FBA0BD4E288 (23ba4012-4cd9-40f6-b133-64d9aca75ded) on home-client-2 [2017-10-25 10:14:02.161616] W [MSGID: 108015] [afr-self-heal-entry.c:56:afr_selfheal_entry_delete] 0-home-replicate-0: expunging file a3f5a769-8859-48e3-96ca-60a988eb9358/AD01B7D51464EFED3BF168E1BDAA8E688AB39A8B (6ca39064-871b-4f36-9aa5-1f1d29580a21) on home-cli...