search for: 43db

Displaying 7 results from an estimated 7 matches for "43db".

2019 Feb 26
3
Joining a Samba DC to an Existing Active Directory
...e\DC1 DSA Options: 0x00000001 DSA object GUID: 8ba457e4-815d-4bd3-a748-8b5ddb53fd5f DSA invocationId: 834770f4-c5a7-48c7-bc77-66e2cf37e557 ==== INBOUND NEIGHBORS ==== DC=ForestDnsZones,DC=lxcerruti,DC=com         Default-First-Site-Name\DC2 via RPC                 DSA object GUID: 2c8db74e-548c-43db-996a-a5287c6aa557                 Last attempt @ Tue Feb 26 14:28:28 2019 CET failed, result 1232 (WERR_HOST_UNREACHABLE)                 31 consecutive failure(s).                 Last success @ NTTIME(0) and many rows like this in log.smbd: [2019/02/26 14:33:01.184413,  0] ../source4/librpc...
2019 Feb 26
0
Joining a Samba DC to an Existing Active Directory
...e\DC1 DSA Options: 0x00000001 DSA object GUID: 8ba457e4-815d-4bd3-a748-8b5ddb53fd5f DSA invocationId: 834770f4-c5a7-48c7-bc77-66e2cf37e557 ==== INBOUND NEIGHBORS ==== DC=ForestDnsZones,DC=lxcerruti,DC=com         Default-First-Site-Name\DC2 via RPC                 DSA object GUID: 2c8db74e-548c-43db-996a-a5287c6aa557                 Last attempt @ Tue Feb 26 14:28:28 2019 CET failed, result 1232 (WERR_HOST_UNREACHABLE)                 31 consecutive failure(s).                 Last success @ NTTIME(0) and many rows like this in log.smbd: [2019/02/26 14:33:01.184413,  0] ../source4/librpc...
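The replication status in the two snippets above looks like `samba-tool drs showrepl` output taken on DC1, with DC2 as the unreachable inbound partner. As a rough sketch only (the DC and domain names are taken from the snippet, the hostname is a guess, and the actual fix depends on why the host is unreachable), the status can be re-checked and a manual replication attempted like this:

  # Show inbound/outbound replication neighbours and recent failures
  samba-tool drs showrepl

  # WERR_HOST_UNREACHABLE usually points at DNS or basic connectivity,
  # so check that the failing partner resolves before forcing anything
  host -t A dc2.lxcerruti.com

  # Manually pull one naming context from DC2 onto DC1
  samba-tool drs replicate DC1 DC2 DC=ForestDnsZones,DC=lxcerruti,DC=com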
2008 Feb 12
7
san fibrechannel device in HVM domU
Hi, I'm on an HP DL365, amd64, running SLES10sp1, but with the kernel and xen of SLES10sp2, therefore using Xen 3.2. The domU shall be a Windows HVM guest. I want to use the Qlogic SAN card in a domU. I'm following these instructions: http://www.novell.com/communities/node/2880/assign-dedicated-network-card-or-pci-device-xen-virtual-machine well, it is written there that this only
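For context, the Novell article linked above is about assigning a PCI device to a Xen guest. A minimal sketch of the classic Xen 3.x approach, with 0000:06:00.0 used purely as a placeholder PCI address and qla2xxx assumed as the HBA driver; note that handing a device to an HVM guest additionally requires VT-d/IOMMU support, which may be the caveat the truncated sentence refers to:

  # In dom0: detach the HBA from its driver and give it to pciback
  echo -n "0000:06:00.0" > /sys/bus/pci/drivers/qla2xxx/unbind
  echo -n "0000:06:00.0" > /sys/bus/pci/drivers/pciback/new_slot
  echo -n "0000:06:00.0" > /sys/bus/pci/drivers/pciback/bind

  # In the domU configuration file: pass the device through to the guest
  #   pci = [ '0000:06:00.0' ]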
2014 May 08
0
Rails log for rspec tests inside of an engine
...n email to rubyonrails-talk+unsubscribe-/JYPxA39Uh5TLH3MbocFF+G/Ez6ZCGd0@public.gmane.org To post to this group, send email to rubyonrails-talk-/JYPxA39Uh5TLH3MbocFF+G/Ez6ZCGd0@public.gmane.org To view this discussion on the web visit https://groups.google.com/d/msgid/rubyonrails-talk/90e69d93-1c2e-43db-81d7-97778ee50297%40googlegroups.com. For more options, visit https://groups.google.com/d/optout.
2017 Oct 26
0
not healing one file
Hey Richard, Could you share the following information please? 1. gluster volume info <volname> 2. getfattr output of that file from all the bricks: getfattr -d -e hex -m . <brickpath/filepath> 3. glustershd & glfsheal logs Regards, Karthik On Thu, Oct 26, 2017 at 10:21 AM, Amar Tumballi <atumball at redhat.com> wrote: > On a side note, try recently released health
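As a rough illustration of the diagnostics being requested above (the volume name "home" is inferred from the last snippet, the brick and file paths are placeholders, and the log locations are the usual ones under /var/log/glusterfs/):

  # 1. Volume layout and options
  gluster volume info home

  # 2. Extended attributes of the unhealed file, run on every brick
  getfattr -d -e hex -m . /bricks/home/brick1/path/to/file

  # 3. Self-heal daemon and heal-info logs
  less /var/log/glusterfs/glustershd.log
  less /var/log/glusterfs/glfsheal-home.log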
2017 Oct 26
3
not healing one file
On a side note, try the recently released health report tool and see if it diagnoses any issues in the setup. Currently you may have to run it on all three machines. On 26-Oct-2017 6:50 AM, "Amar Tumballi" <atumball at redhat.com> wrote: > Thanks for this report. This week many of the developers are at Gluster > Summit in Prague, will be checking this and respond next
2017 Oct 26
2
not healing one file
...common.c:1327:afr_log_selfheal] 0-home-replicate-0: Completed metadata selfheal on 4727bcf4-0b33-48ac-bb2f-db8396c68e05. sources=0 [2] sinks=1 [2017-10-25 10:40:31.426983] I [MSGID: 108026] [afr-self-heal-common.c:1327:afr_log_selfheal] 0-home-replicate-0: Completed data selfheal on 04558251-237f-43db-8858-de25c205a388. sources=0 [2] sinks=1 [2017-10-25 10:40:31.428320] I [MSGID: 108026] [afr-self-heal-metadata.c:52:__afr_selfheal_metadata_do] 0-home-replicate-0: performing metadata selfheal on 04558251-237f-43db-8858-de25c205a388 [2017-10-25 10:40:31.431407] I [MSGID: 108026] [afr-self-heal-c...
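Finally, for the self-heal log entries in the last snippet: the translator name 0-home-replicate-0 suggests the volume is called "home" (an assumption here), and pending or split-brain entries on such a replicate volume can be listed like this:

  # Files still pending heal on the replicate volume
  gluster volume heal home info

  # Entries gluster considers to be in split-brain
  gluster volume heal home info split-brain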