Displaying 4 results from an estimated 4 matches for "a848".
2013 Aug 28 · 4 · Deploying Rails 4 to VPS
...il to rubyonrails-talk+unsubscribe-/JYPxA39Uh5TLH3MbocFF+G/Ez6ZCGd0@public.gmane.org
To post to this group, send email to rubyonrails-talk-/JYPxA39Uh5TLH3MbocFF+G/Ez6ZCGd0@public.gmane.org
To view this discussion on the web visit https://groups.google.com/d/msgid/rubyonrails-talk/926887d3-e03d-4fba-a848-fe94ddbdd256%40googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.
2017 Oct 26 · 0 · not healing one file
Hey Richard,
Could you please share the following information?
1. gluster volume info <volname>
2. getfattr output of that file from all the bricks
getfattr -d -e hex -m . <brickpath/filepath>
3. glustershd & glfsheal logs
Regards,
Karthik
On Thu, Oct 26, 2017 at 10:21 AM, Amar Tumballi <atumball at redhat.com> wrote:
> On a side note, try recently released health
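A minimal sketch of how the three items requested above could be gathered, assuming the volume is named "home" (taken from the 0-home-replicate-0 prefix in the logs quoted further down) and the default GlusterFS log locations; brick and file paths are placeholders:

# 1. Volume configuration (any node)
gluster volume info home
# 2. Extended attributes of the affected file, run on every brick host;
#    the trusted.afr.* changelog xattrs show which copy is pending heal
getfattr -d -e hex -m . /bricks/home/brick1/path/to/file
# 3. Self-heal daemon and glfsheal logs (assumed default locations, each node)
tail -n 200 /var/log/glusterfs/glustershd.log
tail -n 200 /var/log/glusterfs/glfsheal-home.log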
2017 Oct 26 · 3 · not healing one file
On a side note, try the recently released health report tool and see if it
diagnoses any issues in your setup. Currently you may have to run it on all
three machines.
On 26-Oct-2017 6:50 AM, "Amar Tumballi" <atumball at redhat.com> wrote:
> Thanks for this report. This week many of the developers are at Gluster
> Summit in Prague, will be checking this and respond next
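The tool referred to here is presumably the gluster-health-report project; a rough sketch of running it on each of the three nodes follows, with the install method and command name being assumptions to be checked against the project's README:

# Assumed install and invocation of the health report tool; it only
# inspects the local node, so repeat on all three machines
pip install gluster-health-report
gluster-health-report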
2017 Oct 26 · 2 · not healing one file
...al-common.c:1327:afr_log_selfheal] 0-home-replicate-0: Completed data selfheal on aa1692b0-71ec-47a1-a933-bf8f53b956fb. sources=0 [2] sinks=1
[2017-10-25 10:40:31.115344] I [MSGID: 108026] [afr-self-heal-metadata.c:52:__afr_selfheal_metadata_do] 0-home-replicate-0: performing metadata selfheal on a84896cc-1fc2-4ee9-8ad9-986e89cb6b34
[2017-10-25 10:40:31.118419] I [MSGID: 108026] [afr-self-heal-common.c:1327:afr_log_selfheal] 0-home-replicate-0: Completed metadata selfheal on a84896cc-1fc2-4ee9-8ad9-986e89cb6b34. sources=0 [2] sinks=1
[2017-10-25 10:40:31.126667] I [MSGID: 108026] [afr-self-hea...
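To check whether the file is still pending after these repeated data and metadata self-heals, the standard heal-status commands can be used (a sketch, again assuming the volume is named "home" as in the log prefix):

# Entries each brick still needs to heal
gluster volume heal home info
# Entries the self-heal daemon cannot resolve on its own
gluster volume heal home info split-brain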