Displaying 4 results from an estimated 4 matches for "40d6".
2015 Oct 16 · 0 · samba 4.1.17
...Gb
> swap 20 GB
Is everything in / ?
Yes
So my fstab looks like this:
# <file system> <mount point> <type> <options> <dump> <pass>
proc /proc proc nodev,noexec,nosuid 0 0
# / was on /dev/md1 during installation
UUID=3f9a2ca7-21a3-40d6-9d43-06d0334c494a / ext4 user_xattr,acl,barrier=1,errors=remount-ro 1 1
# swap was on /dev/md0 during installation
#UUID=b1f620f6-4763-48c4-a45e-f7ab56e8d398 none swap sw 0 0
/dev/mapper/cryptswap1 none swap sw 0 0
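For reference, each non-comment fstab line has six whitespace-separated fields (device, mount point, type, options, dump, fsck pass). A minimal sketch that checks the entries quoted above mechanically; the file path `/tmp/fstab.example` is just for illustration, not part of the thread:

```shell
# Copy the non-comment entries quoted above into a scratch file.
cat > /tmp/fstab.example <<'EOF'
proc /proc proc nodev,noexec,nosuid 0 0
UUID=3f9a2ca7-21a3-40d6-9d43-06d0334c494a / ext4 user_xattr,acl,barrier=1,errors=remount-ro 1 1
/dev/mapper/cryptswap1 none swap sw 0 0
EOF

# Print device -> mount point (type) and the fsck pass field for each entry,
# skipping comments and blank lines.
awk '!/^#/ && NF { printf "%s -> %s (%s), pass=%s\n", $1, $2, $3, $6 }' /tmp/fstab.example
```

On systems with a recent util-linux, `sudo findmnt --verify` performs a fuller consistency check of the real /etc/fstab.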
Can you see any problems...
2017 Oct 26 · 0 · not healing one file
Hey Richard,
Could you please share the following information?
1. gluster volume info <volname>
2. getfattr output of that file from all the bricks
getfattr -d -e hex -m . <brickpath/filepath>
3. glustershd & glfsheal logs
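The three items above could be collected in one pass along these lines. The volume name, brick path, and log file names below are placeholders and assumptions, not taken from the thread; commands 1 and 2 must run against a live Gluster cluster, and getfattr must be run on every brick host:

```shell
# Placeholders: substitute your own volume name and brick/file paths.
VOLNAME=home
FILEPATH=path/to/unhealed/file

# 1. Volume layout and options.
gluster volume info "$VOLNAME"

# 2. AFR extended attributes of the file, run on EACH brick separately.
getfattr -d -e hex -m . "/bricks/brick1/$FILEPATH"

# 3. Self-heal daemon and heal-info logs (default log directory assumed).
tail -n 200 /var/log/glusterfs/glustershd.log
tail -n 200 /var/log/glusterfs/glfsheal-"$VOLNAME".log
```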
Regards,
Karthik
On Thu, Oct 26, 2017 at 10:21 AM, Amar Tumballi <atumball at redhat.com> wrote:
> On a side note, try recently released health
2017 Oct 26 · 3 · not healing one file
On a side note, try the recently released health report tool and see if it diagnoses any issues in the setup. Currently you may have to run it on all three machines.
On 26-Oct-2017 6:50 AM, "Amar Tumballi" <atumball at redhat.com> wrote:
> Thanks for this report. This week many of the developers are at Gluster
> Summit in Prague, will be checking this and respond next
2017 Oct 26 · 2 · not healing one file
...27:afr_log_selfheal] 0-home-replicate-0: Completed data selfheal on 494a1fd0-ce79-4c13-875b-409d9d0b9bf3. sources=0 [2] sinks=1
[2017-10-25 10:40:20.445148] I [MSGID: 108026] [afr-self-heal-metadata.c:52:__afr_selfheal_metadata_do] 0-home-replicate-0: performing metadata selfheal on bea04f8f-c079-40d6-b827-1bf19ba9379c
[2017-10-25 10:40:20.448937] I [MSGID: 108026] [afr-self-heal-common.c:1327:afr_log_selfheal] 0-home-replicate-0: Completed metadata selfheal on bea04f8f-c079-40d6-b827-1bf19ba9379c. sources=0 [2] sinks=1
[2017-10-25 10:40:20.456782] I [MSGID: 108026] [afr-self-heal-common.c:132...
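Log excerpts like the one above can be scanned mechanically to separate heals that were started from heals that completed. A rough grep sketch over two sample lines copied from the excerpt; the scratch file path is illustrative, the real log would typically be /var/log/glusterfs/glustershd.log:

```shell
# Two representative lines from the glustershd excerpt above.
cat > /tmp/glustershd.sample <<'EOF'
[2017-10-25 10:40:20.445148] I [MSGID: 108026] [afr-self-heal-metadata.c:52:__afr_selfheal_metadata_do] 0-home-replicate-0: performing metadata selfheal on bea04f8f-c079-40d6-b827-1bf19ba9379c
[2017-10-25 10:40:20.448937] I [MSGID: 108026] [afr-self-heal-common.c:1327:afr_log_selfheal] 0-home-replicate-0: Completed metadata selfheal on bea04f8f-c079-40d6-b827-1bf19ba9379c. sources=0 [2] sinks=1
EOF

# Count heals that were started vs. those that finished.
grep -c 'performing .* selfheal' /tmp/glustershd.sample   # heals started
grep -c 'Completed .* selfheal'  /tmp/glustershd.sample   # heals completed
```

A GFID that keeps appearing in "performing" lines without a matching "Completed" line is a candidate for the file that is not healing.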