search for: be57

Displaying 8 results from an estimated 8 matches for "be57".

2012 Mar 02
1
xfs, inode64, and NFS
We recently deployed some large XFS file systems on CentOS 6.2 used as NFS servers... I've had some reports of a problem similar to the one reported here... http://www.linuxquestions.org/questions/red-hat-31/xfs-inode64-nfs-export-no_subtree_check-and-stale-nfs-file-handle-message-855844/ These reports are somewhat vague (third indirectly reported via internal corporate channels from
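A workaround often suggested for the stale-file-handle symptom described in that linked thread (an assumption here, not something confirmed by this report) is to pin the filesystem id in /etc/exports so the NFS file handles built for the inode64 XFS export stay stable; the export path below is a placeholder:

    # /etc/exports -- hypothetical export path; fsid= pins the filesystem
    # identifier NFS uses when constructing file handles for this export
    /export/data  *(rw,no_subtree_check,fsid=1)

    # apply and verify the export options
    exportfs -ra
    exportfs -v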
2010 Sep 02
3
Metadata update
...ASCII-art signature block from http://octavio.gnu.org.ve (irc.radiognu.org #gnu); GPG fingerprint: FC69 551B ECB9 62B0 D992 BE57 B551 2497 C78B 870A. [Attachment scrubbed: octavio.vcf, text/x-vcard, 134 bytes, http://lists.xiph.org/pipermail/icecast-dev/attachment...]
2010 Sep 02
4
Metadata update
...ASCII-art signature block from http://octavio.gnu.org.ve (irc.radiognu.org #gnu); GPG fingerprint: FC69 551B ECB9 62B0 D992 BE57 B551 2497 C78B 870A. [Attachment scrubbed: octavio.vcf, text/x-vcard, 134 bytes, http://lists.xiph.org/pipermail/icecast-dev/attachment...]
2017 Mar 08
0
From Networkmanager to self managed configuration files
...l/Networking_Guide/index.html Finally, there was nothing to do with IPv6 in your article. That address was an IPv4 address, and the zeroconf machinery configures the 169.254.0.0/16 network as a link-local network on that interface. If it were IPv6, it would have an address like fe80::33bb:5a14:be57:1690/64, which is an IPv6 link-local address. Regards, James
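To see the two cases side by side on a live system, a quick check (the interface name eth0 is an assumption):

    # IPv4: a 169.254.0.0/16 address means zeroconf assigned a link-local address
    ip -4 addr show dev eth0 | grep '169\.254\.'

    # IPv6: fe80::/64 addresses are the per-interface link-local addresses
    ip -6 addr show dev eth0 scope link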
2017 Mar 08
4
From Networkmanager to self managed configuration files
Hello guys, an update on my post: the cause was an IPv6 route on the same network card, even though only IPv4 was enabled. Sincerely, Andy
2017 Oct 26
0
not healing one file
Hey Richard, Could you share the following information please? 1. gluster volume info <volname> 2. getfattr output of that file from all the bricks: getfattr -d -e hex -m . <brickpath/filepath> 3. glustershd & glfsheal logs Regards, Karthik On Thu, Oct 26, 2017 at 10:21 AM, Amar Tumballi <atumball at redhat.com> wrote: > On a side note, try recently released health
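Spelled out with placeholder names (the volume name home and brick path /bricks/home are assumptions), the requested commands would look like:

    gluster volume info home
    # run on every brick that holds a copy of the unhealed file
    getfattr -d -e hex -m . /bricks/home/path/to/unhealed/file
    # self-heal daemon and heal-info logs usually live under /var/log/glusterfs/
    less /var/log/glusterfs/glustershd.log
    less /var/log/glusterfs/glfsheal-home.log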
2017 Oct 26
3
not healing one file
On a side note, try the recently released health report tool and see if it diagnoses any issues in the setup. Currently you may have to run it on all three machines. On 26-Oct-2017 6:50 AM, "Amar Tumballi" <atumball at redhat.com> wrote: > Thanks for this report. This week many of the developers are at Gluster > Summit in Prague, will be checking this and respond next
2017 Oct 26
2
not healing one file
...heal-common.c:1327:afr_log_selfheal] 0-home-replicate-0: Completed metadata selfheal on acd75c28-ad7d-42d5-9675-4e8503eb4076. sources=0 [2] sinks=1
[2017-10-25 10:40:24.149766] I [MSGID: 108026] [afr-self-heal-common.c:1327:afr_log_selfheal] 0-home-replicate-0: Completed data selfheal on d28669f8-be57-482d-86bb-f3eccf560851. sources=0 [2] sinks=1
[2017-10-25 10:40:24.151489] I [MSGID: 108026] [afr-self-heal-metadata.c:52:__afr_selfheal_metadata_do] 0-home-replicate-0: performing metadata selfheal on d28669f8-be57-482d-86bb-f3eccf560851
[2017-10-25 10:40:24.154941] I [MSGID: 108026] [afr-self-h...
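The gfid in those log entries can be traced back to a file on a brick: Gluster keeps a hard link to every file under the brick's .glusterfs directory, named by the gfid and indexed by its first two byte pairs. A sketch, assuming a brick path of /bricks/home:

    GFID=d28669f8-be57-482d-86bb-f3eccf560851
    BRICK=/bricks/home
    # hard link kept by gluster: .glusterfs/<aa>/<bb>/<gfid>
    GFID_PATH=$BRICK/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID
    # locate the real path by matching the inode of that hard link
    find "$BRICK" -samefile "$GFID_PATH" -not -path '*/.glusterfs/*'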