Displaying 12 results from an estimated 12 matches for "9bfa".
2017 Jul 21
2
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain alway complain about "unsynced" elements
...ing your suggestion, I've checked the "peer" status and I found that
there are too many names for the hosts; I don't know if this can be the
problem or part of it:
gluster peer status on NODE01:
Number of Peers: 2
Hostname: dnode02.localdomain.local
Uuid: 7c0ebfa3-5676-4d3f-9bfa-7fff6afea0dd
State: Peer in Cluster (Connected)
Other names:
192.168.10.52
dnode02.localdomain.local
10.10.20.90
10.10.10.20
gluster peer status on NODE02:
Number of Peers: 2
Hostname: dnode01.localdomain.local
Uuid: a568bd60-b3e4-4432-a9bc-996c52eaaa12
State: Peer in Clu...
2017 Jul 21
0
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain alway complain about "unsynced" elements
...the "peer" status and I found that
> there are too many names for the hosts; I don't know if this can be the
> problem or part of it:
>
> gluster peer status on NODE01:
> Number of Peers: 2
>
> Hostname: dnode02.localdomain.local
> Uuid: 7c0ebfa3-5676-4d3f-9bfa-7fff6afea0dd
> State: Peer in Cluster (Connected)
> Other names:
> 192.168.10.52
> dnode02.localdomain.local
> 10.10.20.90
> 10.10.10.20
>
> gluster peer status on NODE02:
> Number of Peers: 2
>
> Hostname: dnode01.localdomain.local...
2017 Jul 24
0
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain alway complain about "unsynced" elements
...I found
>> that there are too many names for the hosts; I don't know if this can be the
>> problem or part of it:
>>
>> gluster peer status on NODE01:
>> Number of Peers: 2
>>
>> Hostname: dnode02.localdomain.local
>> Uuid: 7c0ebfa3-5676-4d3f-9bfa-7fff6afea0dd
>> State: Peer in Cluster (Connected)
>> Other names:
>> 192.168.10.52
>> dnode02.localdomain.local
>> 10.10.20.90
>> 10.10.10.20
>>
>> gluster peer status on NODE02:
>> Number of Peers: 2...
2017 Jul 22
3
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain alway complain about "unsynced" elements
...many names for the hosts, I don't know if
> this can be the problem or part of it:
>
> gluster peer status on NODE01:
> Number of Peers: 2
>
> Hostname: dnode02.localdomain.local
> Uuid: 7c0ebfa3-5676-4d3f-9bfa-7fff6afea0dd
> State: Peer in Cluster (Connected)
> Other names:
> 192.168.10.52
> dnode02.localdomain.local
> 10.10.20.90
> 10.10.10.20
>
>...
2017 Jul 24
3
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain alway complain about "unsynced" elements
...are too many names for the hosts, I don't know if this can be the
>>> problem or part of it:
>>>
>>> gluster peer status on NODE01:
>>> Number of Peers: 2
>>>
>>> Hostname: dnode02.localdomain.local
>>> Uuid: 7c0ebfa3-5676-4d3f-9bfa-7fff6afea0dd
>>> State: Peer in Cluster (Connected)
>>> Other names:
>>> 192.168.10.52
>>> dnode02.localdomain.local
>>> 10.10.20.90
>>> 10.10.10.20
>>>
>>> gluster peer stat...
2017 Jul 25
0
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain alway complain about "unsynced" elements
...ts, I don't know if this can be the
>>>> problem or part of it:
>>>>
>>>> gluster peer status on NODE01:
>>>> Number of Peers: 2
>>>>
>>>> Hostname: dnode02.localdomain.local
>>>> Uuid: 7c0ebfa3-5676-4d3f-9bfa-7fff6afea0dd
>>>> State: Peer in Cluster (Connected)
>>>> Other names:
>>>> 192.168.10.52
>>>> dnode02.localdomain.local
>>>> 10.10.20.90
>>>> 10.10.10.20
>>>>
>>...
2014 Mar 26
0
Heroku | Rails 4 | Ruby 2.0 - send_file not presenting file for download in the browser
...end an email to rubyonrails-talk+unsubscribe-/JYPxA39Uh5TLH3MbocFF+G/Ez6ZCGd0@public.gmane.org
To post to this group, send email to rubyonrails-talk-/JYPxA39Uh5TLH3MbocFF+G/Ez6ZCGd0@public.gmane.org
To view this discussion on the web visit https://groups.google.com/d/msgid/rubyonrails-talk/7FCB0CB0-9BFA-4F9F-9CD2-700EC40B0ECC%40gmail.com.
For more options, visit https://groups.google.com/d/optout.
2017 Jul 20
0
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain alway complain about "unsynced" elements
On 07/20/2017 03:42 PM, yayo (j) wrote:
>
> 2017-07-20 11:34 GMT+02:00 Ravishankar N <ravishankar at redhat.com
> <mailto:ravishankar at redhat.com>>:
>
>
> Could you check if the self-heal daemon on all nodes is connected
> to the 3 bricks? You will need to check the glustershd.log for that.
> If it is not connected, try restarting the shd using
2017 Jul 20
3
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain alway complain about "unsynced" elements
2017-07-20 11:34 GMT+02:00 Ravishankar N <ravishankar at redhat.com>:
>
> Could you check if the self-heal daemon on all nodes is connected to the 3
> bricks? You will need to check the glustershd.log for that.
> If it is not connected, try restarting the shd using `gluster volume start
> engine force`, then launch the heal command like you did earlier and see if
> heals
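The restart-and-heal sequence suggested above can be written out as a few shell commands. This is a minimal sketch, assuming the volume is named engine (as in the thread) and the default /var/log/glusterfs log location; it needs a live gluster cluster and root privileges, so adjust names and paths for your setup.

```shell
# Force-start the volume: on an already-started volume this does not
# interrupt I/O, it only respawns any missing daemons such as the shd.
gluster volume start engine force

# Re-trigger the self-heal and then list entries still pending heal.
gluster volume heal engine
gluster volume heal engine info

# Check in the shd log whether the daemon connected to all 3 bricks.
grep -i "connected" /var/log/glusterfs/glustershd.log | tail -n 20
```

If the heal info output keeps listing the same entries, the glustershd.log lines around the connection messages usually show which brick the daemon failed to reach.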
2017 Oct 26
0
not healing one file
Hey Richard,
Could you share the following information, please?
1. gluster volume info <volname>
2. getfattr output of that file from all the bricks
getfattr -d -e hex -m . <brickpath/filepath>
3. glustershd & glfsheal logs
Regards,
Karthik
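The three items requested above can be collected with commands along these lines. This is a sketch: the volume name and brick-side file path below are placeholders you must substitute (take them from gluster volume info), and the log locations are the usual defaults.

```shell
VOL=myvol                          # placeholder: your volume name
BRICK_FILE=/bricks/brick1/somefile # placeholder: brick-side path of the unhealed file

# 1. Volume layout and options
gluster volume info "$VOL"

# 2. Replication (AFR) extended attributes of the file, in hex;
#    run this on every brick that holds a copy of the file.
getfattr -d -e hex -m . "$BRICK_FILE"

# 3. Self-heal daemon and heal-command logs (default locations)
tail -n 100 /var/log/glusterfs/glustershd.log
tail -n 100 /var/log/glusterfs/glfsheal-"$VOL".log
```

The getfattr output from all bricks is what lets the developers see which copy each brick blames as out of date.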
On Thu, Oct 26, 2017 at 10:21 AM, Amar Tumballi <atumball at redhat.com> wrote:
> On a side note, try recently released health
2017 Oct 26
3
not healing one file
On a side note, try the recently released health report tool and see if it
diagnoses any issues in the setup. Currently you may have to run it on all
three machines.
On 26-Oct-2017 6:50 AM, "Amar Tumballi" <atumball at redhat.com> wrote:
> Thanks for this report. This week many of the developers are at Gluster
> Summit in Prague, will be checking this and respond next
2017 Oct 26
2
not healing one file
...ommon.c:1327:afr_log_selfheal] 0-home-replicate-0: Completed data selfheal on ededfa02-6177-4cfb-b9ce-93e7abc28b11. sources=0 [2] sinks=1
[2017-10-25 10:40:22.050093] I [MSGID: 108026] [afr-self-heal-common.c:1327:afr_log_selfheal] 0-home-replicate-0: Completed data selfheal on ada4cd7c-03c0-4ab2-9bfa-288d16b8dd9f. sources=0 [2] sinks=1
[2017-10-25 10:40:22.088147] I [MSGID: 108026] [afr-self-heal-common.c:1327:afr_log_selfheal] 0-home-replicate-0: Completed data selfheal on 7eb36c06-7f81-4105-bdd7-bdc8c37bf619. sources=0 [2] sinks=1
[2017-10-25 10:40:22.099051] I [MSGID: 108026] [afr-self-h...