Displaying 8 results from an estimated 8 matches for "400f".
2012 Jun 27
1
DNS issue.
...Site-Name\PDC
DSA Options: 0x00000001
DSA object GUID: 56003cd3-d15b-4825-915f-37b9e2952f2a
DSA invocationId: ec8a9ed7-ce1a-449e-8321-97c715375445
==== INBOUND NEIGHBORS ====
DC=DomainDnsZones,DC=abc,DC=com
Default-First-Site-Name\BDC via RPC
DSA object GUID: adf1d7c5-4e92-400f-9bfb-17986c6d20a2
Last attempt @ Wed Jun 27 08:51:47 2012 IST failed, result
2 (WERR_BADFILE)
216 consecutive failure(s).
Last success @ NTTIME(0)
DC=ForestDnsZones,DC=abc,DC=com
Default-First-Site-Name\BDC via RPC
DSA object...
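A WERR_BADFILE on every inbound replication attempt like the one above frequently means the partner DC could not be located, which in AD is usually a DNS problem: each DC registers a CNAME record of the form `<DSA object GUID>._msdcs.<forest root domain>` that its partners must resolve. As an illustration only (the helper name and the derivation of the domain from the DN are my own, not from the thread), this sketch builds the record name a partner would need to look up, using the GUID and naming context shown in the output above:

```python
import re

def msdcs_cname(dsa_guid: str, naming_context: str) -> str:
    """Build the replication CNAME a partner DC must resolve:
    <DSA-object-GUID>._msdcs.<forest-root-domain>.
    The domain is derived from the trailing DC= components of the DN."""
    domain = ".".join(re.findall(r"DC=([^,]+)", naming_context, re.I)[-2:])
    return f"{dsa_guid}._msdcs.{domain}"

# GUID and naming context taken from the showrepl output above.
print(msdcs_cname("adf1d7c5-4e92-400f-9bfb-17986c6d20a2",
                  "DC=DomainDnsZones,DC=abc,DC=com"))
```

If that name does not resolve from the failing DC, the replication errors above are consistent with a missing or stale `_msdcs` record.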
2014 Oct 01
1
DHCP with ipv6 tunnel
...NS
lookups are going to Comcast's servers.
I have an IPV6 tunnel through Hurricane Electric (www.tunnelbroker.net).
The tunnel is configured and is up and running on the CentOS server. I can
ping several IPV6 addresses from it just fine:
ping6 -n ipv6.google.com
PING ipv6.google.com(2607:f8b0:400f:801::1006) 56 data bytes
64 bytes from 2607:f8b0:400f:801::1006: icmp_seq=1 ttl=53 time=109 ms
64 bytes from 2607:f8b0:400f:801::1006: icmp_seq=2 ttl=53 time=109 ms
64 bytes from 2607:f8b0:400f:801::1006: icmp_seq=3 ttl=53 time=106 ms
^C
--- ipv6.google.com ping statistics ---
3 packets transmitted...
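The ping output above confirms the tunnel itself carries IPv6 traffic with steady round-trip times. As a small side sketch (not from the thread), the per-packet RTTs can be pulled out of such output and averaged, which is handy when comparing tunnel latency before and after a config change:

```python
import re

# Sample output lines copied from the ping6 run above.
output = """\
64 bytes from 2607:f8b0:400f:801::1006: icmp_seq=1 ttl=53 time=109 ms
64 bytes from 2607:f8b0:400f:801::1006: icmp_seq=2 ttl=53 time=109 ms
64 bytes from 2607:f8b0:400f:801::1006: icmp_seq=3 ttl=53 time=106 ms"""

# Extract each round-trip time and compute the average.
rtts = [float(m) for m in re.findall(r"time=([\d.]+) ms", output)]
avg = sum(rtts) / len(rtts)
print(f"avg rtt: {avg:.1f} ms")
```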
2012 Jun 08
2
btrfs filesystems can only be mounted after an unclean shutdown if btrfsck is run and immediately killed!
...82 /dev/sdc
[ 10.403108] btrfs: force zlib compression
[ 10.403130] btrfs: enabling inode map caching
[ 10.403152] btrfs: disk space caching is enabled
[ 10.403377] btrfs: failed to read the system array on sdc
[ 10.403557] btrfs: open_ctree failed
[ 10.431763] device fsid 7f7be913-e359-400f-8bdb-7ef48aad3f03 devid 2
transid 3916 /dev/sdb
[ 10.432180] btrfs: force zlib compression
[ 10.433040] btrfs: enabling inode map caching
[ 10.433892] btrfs: disk space caching is enabled
[ 10.434930] btrfs: failed to read the system array on sdb
[ 10.435945] btrfs: open_ctree failed
f...
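The dmesg excerpt above shows the kernel finding both member devices of the same multi-device filesystem (same fsid on /dev/sdb and /dev/sdc) yet failing `open_ctree` on each. As an illustration only (the parsing approach is mine, not from the thread), the `device fsid ... devid ...` lines can be grouped to see which block devices the kernel has associated with each btrfs filesystem UUID:

```python
import re
from collections import defaultdict

# One fully visible line from the kernel log above.
dmesg = ("[   10.431763] device fsid 7f7be913-e359-400f-8bdb-7ef48aad3f03 "
         "devid 2 transid 3916 /dev/sdb")

# Group block devices by btrfs filesystem UUID (fsid).
members = defaultdict(list)
for fsid, devid, dev in re.findall(
        r"device fsid (\S+) devid (\d+) transid \d+ (\S+)", dmesg):
    members[fsid].append((int(devid), dev))

print(dict(members))
```

A filesystem whose fsid maps to fewer devices than expected here points at a missing member rather than corruption on the listed ones.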
2012 Jan 20
2
No sound in Wine 1.3.37 (tried known solutions already)
...e regsvr32 dsound.dll
err:ole:CoGetClassObject class {44ec053a-400f-11d0-9dcd-00a0c90391d3} not registered
err:ole:CoGetClassObject no class object {44ec053a-400f-11d0-9dcd-00a0c90...
2002 Nov 19
0
winbindd+ win24
...vwv[21]=32768 (0x8000)
smb_vwv[22]=0 (0x0)
smb_vwv[23]=0 (0x0)
smb_vwv[24]=16 (0x10)
smb_vwv[25]=0 (0x0)
smb_vwv[26]=0 (0x0)
smb_vwv[27]=0 (0x0)
smb_vwv[28]=0 (0x0)
smb_vwv[29]=0 (0x0)
smb_vwv[30]=0 (0x0)
smb_vwv[31]=512 (0x200)
smb_vwv[32]=65280 (0xFF00)
smb_vwv[33]=5 (0x5)
smb_bcc=0
Bind RPC Pipe[400f]: \PIPE\lsarpc
Bind Abstract Syntax: [000] 78 57 34 12 34 12 CD AB EF 00 01 23 45 67 89 AB xW4.4... ...#Eg..
[010] 00 00 00 00 ....
Bind Transfer Syntax: [000] 04 5D 88 8A EB 1C C9 11 9F E8 08 00 2B 10 48 60 .]...... ....+.H`
[010] 02 00 00 00...
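The 16 "Bind Abstract Syntax" bytes above are a DCE/RPC interface UUID in little-endian (mixed-endian) wire order, and the "Bind Transfer Syntax" bytes encode the standard NDR transfer syntax the same way. A quick sketch with Python's `uuid` module decodes both back to their canonical form:

```python
import uuid

# Abstract syntax bytes from the bind trace above.
iface = uuid.UUID(bytes_le=bytes.fromhex("785734123412CDABEF000123456789AB"))
# Transfer syntax bytes from the bind trace above.
xfer = uuid.UUID(bytes_le=bytes.fromhex("045D888AEB1CC9119FE808002B104860"))

print(iface)  # 12345778-1234-abcd-ef00-0123456789ab (the lsarpc interface)
print(xfer)   # 8a885d04-1ceb-11c9-9fe8-08002b104860 (NDR transfer syntax)
```

This confirms the trace is a routine bind of the LSA RPC interface over NDR on `\PIPE\lsarpc`.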
2017 Oct 26
0
not healing one file
Hey Richard,
Could you share the following information, please?
1. gluster volume info <volname>
2. getfattr output of that file from all the bricks
getfattr -d -e hex -m . <brickpath/filepath>
3. glustershd & glfsheal logs
Regards,
Karthik
On Thu, Oct 26, 2017 at 10:21 AM, Amar Tumballi <atumball at redhat.com> wrote:
> On a side note, try recently released health
2017 Oct 26
3
not healing one file
On a side note, try the recently released health report tool and see if it
diagnoses any issues in the setup. Currently you may have to run it on all
three machines.
On 26-Oct-2017 6:50 AM, "Amar Tumballi" <atumball at redhat.com> wrote:
> Thanks for this report. This week many of the developers are at Gluster
> Summit in Prague, will be checking this and respond next
2017 Oct 26
2
not healing one file
...common.c:1327:afr_log_selfheal] 0-home-replicate-0: Completed metadata selfheal on 15eb6b72-4c8a-4af6-8ca8-cb7331deed82. sources=0 [2] sinks=1
[2017-10-25 10:38:58.063079] I [MSGID: 108026] [afr-self-heal-common.c:1327:afr_log_selfheal] 0-home-replicate-0: Completed data selfheal on e55582c1-8979-400f-bb87-92ab4b9409a9. sources=0 [2] sinks=1
[2017-10-25 10:38:58.064804] I [MSGID: 108026] [afr-self-heal-metadata.c:52:__afr_selfheal_metadata_do] 0-home-replicate-0: performing metadata selfheal on e55582c1-8979-400f-bb87-92ab4b9409a9
[2017-10-25 10:38:58.068826] I [MSGID: 108026] [afr-self-heal-c...
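The afr log lines above record each completed self-heal with its type and the file's gfid. As a side sketch (the tallying approach is mine, not from the thread), lines of that shape can be counted per gfid to spot a file that is being healed repeatedly instead of staying healthy:

```python
import re
from collections import Counter

# One full log line copied from the excerpt above.
log = ("[2017-10-25 10:38:58.063079] I [MSGID: 108026] "
       "[afr-self-heal-common.c:1327:afr_log_selfheal] 0-home-replicate-0: "
       "Completed data selfheal on e55582c1-8979-400f-bb87-92ab4b9409a9. "
       "sources=0 [2] sinks=1")

# Count completed self-heals per (type, gfid) pair.
heals = Counter(re.findall(r"Completed (\w+) selfheal on ([0-9a-f-]+)", log))
print(heals)
```

A gfid that keeps reappearing in this tally is a candidate for the per-brick `getfattr` inspection requested earlier in the thread.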