Displaying 17 results from an estimated 17 matches for "4df1".
2004 Aug 11
0
Asterisk --> Mediatrix 1204 --> returned -1: Operation not permitted
When I try to make a call using the Mediatrix 1204, the following is shown on the CLI:
-- Executing SetCIDNum("SIP/2009-4df1", "1111") in new stack
-- Executing Dial("SIP/2009-4df1", "SIP/2217008@192.168.199.5") in new
stack
Aug 11 15:14:10 WARNING[1211108144]: chan_sip.c:590 __sip_xmit: sip_xmit of
0x8140c5c (len 794) to 192.168.199.5 returned -1: Operation not permitted
-- Ca...
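A sendto() failure with "Operation not permitted" (EPERM) on outbound SIP traffic is commonly caused by a local firewall rule on the Asterisk host; here is a minimal diagnostic sketch, assuming a Linux box with iptables and tcpdump available (the interface name eth0 is a placeholder; the peer address is the one from the console output above):

  # Look for firewall rules that could block outbound SIP (UDP 5060)
  iptables -L OUTPUT -n -v
  iptables -L -n -v | grep 5060

  # While placing a test call, confirm the INVITE actually leaves the box
  tcpdump -n -i eth0 udp port 5060 and host 192.168.199.5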
2018 Jan 16
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
...brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.8 failed
[2018-01-16 15:17:08.279162] W [MSGID: 113096]
[posix-handle.c:770:posix_handle_hard] 0-gv2a2-posix: link
/bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.9 ->
/bricks/brick2/gv2a2/.glusterfs/c9/b7/c9b71b00-a09f-4df1-b874-041820ca8241 failed
[File exists]
[2018-01-16 15:17:08.279162] W [MSGID: 113096]
[posix-handle.c:770:posix_handle_hard] 0-gv2a2-posix: link
/bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.9 ->
/bricks/brick2/gv2a2/.glusterfs/c9/b7/c9b71b00-a09f-4df1-b874-041820ca8241fail...
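For readers hitting the same warning: posix_handle_hard fires when the brick tries to create the .glusterfs gfid hard link for a shard and an entry is already present. A hedged way to inspect the conflict directly on the brick, using only standard tools and the paths quoted in the log above (if the two paths already share an inode, the warning is benign):

  # Compare the shard file and the existing gfid entry on the affected brick
  ls -li /bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.9 \
         /bricks/brick2/gv2a2/.glusterfs/c9/b7/c9b71b00-a09f-4df1-b874-041820ca8241

  # Check which gfid the shard actually carries (trusted.gfid xattr)
  getfattr -d -e hex -m . /bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.9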
2018 Jan 16
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
...62335cb9-c7b5-4735-a879-59cff93fe622.8 failed
> [2018-01-16 15:17:08.279162] W [MSGID: 113096]
> [posix-handle.c:770:posix_handle_hard] 0-gv2a2-posix: link
> /bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.9 ->
> /bricks/brick2/gv2a2/.glusterfs/c9/b7/c9b71b00-a09f-4df1-b874-041820ca8241 failed
> [File exists]
> [2018-01-16 15:17:08.279162] W [MSGID: 113096]
> [posix-handle.c:770:posix_handle_hard] 0-gv2a2-posix: link
> /bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.9 ->
> /bricks/brick2/gv2a2/.glusterfs/c9/b7/c9b71b00-a09f-4...
2018 Jan 16
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
...35-a879-59cff93fe622.8 failed
>> [2018-01-16 15:17:08.279162] W [MSGID: 113096]
>> [posix-handle.c:770:posix_handle_hard] 0-gv2a2-posix: link
>> /bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.9 ->
>> /bricks/brick2/gv2a2/.glusterfs/c9/b7/c9b71b00-a09f-4df1-b874-041820ca8241 failed
>> [File exists]
>> [2018-01-16 15:17:08.279162] W [MSGID: 113096]
>> [posix-handle.c:770:posix_handle_hard] 0-gv2a2-posix: link
>> /bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.9 ->
>> /bricks/brick2/gv2a2/.glusterfs/c...
2018 Jan 17
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
...d/
> 62335cb9-c7b5-4735-a879-59cff93fe622.8 failed
> [2018-01-16 15:17:08.279162] W [MSGID: 113096] [posix-handle.c:770:posix_handle_hard]
> 0-gv2a2-posix: link /bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.9
> -> /bricks/brick2/gv2a2/.glusterfs/c9/b7/c9b71b00-a09f-4df1-b874-041820ca8241 failed
> [File exists]
> [2018-01-16 15:17:08.279162] W [MSGID: 113096] [posix-handle.c:770:posix_handle_hard]
> 0-gv2a2-posix: link /bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.9
> -> /bricks/brick2/gv2a2/.glusterfs/c9/b7/c9b71b00-a09f-4df1-b874-...
2018 Jan 17
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
...[2018-01-16 15:17:08.279162] W [MSGID: 113096]
>>> [posix-handle.c:770:posix_handle_hard] 0-gv2a2-posix: link
>>> /bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.9
>>> ->
>>> /bricks/brick2/gv2a2/.glusterfs/c9/b7/c9b71b00-a09f-4df1-b874-041820ca8241 failed
>>> [File exists]
>>> [2018-01-16 15:17:08.279162] W [MSGID: 113096]
>>> [posix-handle.c:770:posix_handle_hard] 0-gv2a2-posix: link
>>> /bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.9
>>> ->...
2018 Jan 17
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
...-4735-a879-59cff93fe622.8 failed
>> [2018-01-16 15:17:08.279162] W [MSGID: 113096]
>> [posix-handle.c:770:posix_handle_hard] 0-gv2a2-posix: link
>> /bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.9 ->
>> /bricks/brick2/gv2a2/.glusterfs/c9/b7/c9b71b00-a09f-4df1-b874-041820ca8241 failed
>> [File exists]
>> [2018-01-16 15:17:08.279162] W [MSGID: 113096]
>> [posix-handle.c:770:posix_handle_hard] 0-gv2a2-posix: link
>> /bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.9 ->
>> /bricks/brick2/gv2a2/.glusterfs/c9/b7...
2018 Jan 17
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
...[MSGID: 113096]
>>>> [posix-handle.c:770:posix_handle_hard] 0-gv2a2-posix: link
>>>> /bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.9
>>>> ->
>>>> /bricks/brick2/gv2a2/.glusterfs/c9/b7/c9b71b00-a09f-4df1-b874-041820ca8241 failed
>>>> [File exists]
>>>> [2018-01-16 15:17:08.279162] W [MSGID: 113096]
>>>> [posix-handle.c:770:posix_handle_hard] 0-gv2a2-posix: link
>>>> /bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59c...
2018 Jan 17
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
...>>>>> [posix-handle.c:770:posix_handle_hard] 0-gv2a2-posix: link
>>>>> /bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.9
>>>>> ->
>>>>> /bricks/brick2/gv2a2/.glusterfs/c9/b7/c9b71b00-a09f-4df1-b874-041820ca8241 failed
>>>>> [File exists]
>>>>> [2018-01-16 15:17:08.279162] W [MSGID: 113096]
>>>>> [posix-handle.c:770:posix_handle_hard] 0-gv2a2-posix: link
>>>>> /bricks/brick2/gv2a2/.shard/62335cb9-c7...
2018 Jan 16
1
Problem with Gluster 3.12.4, VM and sharding
Also to help isolate the component, could you answer these:
1. on a different volume with shard not enabled, do you see this issue?
2. on a plain 3-way replicated volume (no arbiter), do you see this issue?
On Tue, Jan 16, 2018 at 4:03 PM, Krutika Dhananjay <kdhananj at redhat.com>
wrote:
> Please share the volume-info output and the logs under /var/log/glusterfs/
> from all your
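The checks being asked for map onto standard gluster CLI calls; a rough sketch of gathering them, assuming the affected volume is the gv2a2 seen in the logs elsewhere in this thread (the comparison volume name and brick paths below are placeholders, not from the thread):

  # Confirm whether sharding is enabled on the affected volume, and share its config
  gluster volume info gv2a2

  # Question 1: retry the workload on a comparable volume with sharding disabled
  gluster volume set testvol features.shard off

  # Question 2: a plain 3-way replica (no arbiter) for comparison
  gluster volume create testrep3 replica 3 \
      node1:/bricks/b1/testrep3 node2:/bricks/b1/testrep3 node3:/bricks/b1/testrep3

  # Logs requested in the quoted reply, collected on every node
  tar czf glusterfs-logs-$(hostname).tar.gz /var/log/glusterfs/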
2018 Jan 18
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
...770:posix_handle_hard] 0-gv2a2-posix:
>>>>>> link
>>>>>> /bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.9
>>>>>> ->
>>>>>> /bricks/brick2/gv2a2/.glusterfs/c9/b7/c9b71b00-a09f-4df1-b874-041820ca8241 failed
>>>>>> [File exists]
>>>>>> [2018-01-16 15:17:08.279162] W [MSGID: 113096]
>>>>>> [posix-handle.c:770:posix_handle_hard] 0-gv2a2-posix:
>>>>>> link
>>>>>>...
2018 Jan 18
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
...>>> failed
>>> [2018-01-16 15:17:08.279162] W [MSGID: 113096]
>>> [posix-handle.c:770:posix_handle_hard] 0-gv2a2-posix: link
>>> /bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.9 ->
>>> /bricks/brick2/gv2a2/.glusterfs/c9/b7/c9b71b00-a09f-4df1-b874-041820ca8241 failed
>>> [File exists]
>>> [2018-01-16 15:17:08.279162] W [MSGID: 113096]
>>> [posix-handle.c:770:posix_handle_hard] 0-gv2a2-posix: link
>>> /bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.9 ->
>>> /bricks/brick2/gv...
2018 Jan 19
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
...>>>>>>> 0-gv2a2-posix: link
>>>>>>> /bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.9
>>>>>>> ->
>>>>>>> /bricks/brick2/gv2a2/.glusterfs/c9/b7/c9b71b00-a09f-4df1-b874-041820ca8241 failed
>>>>>>> [File exists]
>>>>>>> [2018-01-16 15:17:08.279162] W [MSGID: 113096]
>>>>>>> [posix-handle.c:770:posix_handle_hard]
>>>>>>> 0-gv2a2-pos...
2013 Nov 29
1
Self heal problem
Hi,
I have a glusterfs volume replicated on three nodes. I am planning to use
the volume as storage for VMware ESXi machines using NFS. The reason for
using three nodes is to be able to configure quorum and avoid
split-brains. However, during my initial testing, when I intentionally and
gracefully restarted the node "ned", a split-brain/self-heal error
occurred.
The log on "todd"
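For reference, the quorum setup described here normally comes down to a pair of volume options plus the heal commands used after a node comes back; a sketch under the assumption of a replica-3 volume (the volume name myvol is a placeholder):

  # Client-side quorum: allow writes only while a majority of replicas is reachable
  gluster volume set myvol cluster.quorum-type auto

  # Server-side quorum: stop bricks on a node that loses the majority of the pool
  gluster volume set myvol cluster.server-quorum-type server

  # After gracefully restarting a node, check and trigger healing
  gluster volume heal myvol info
  gluster volume heal myvol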
2017 Oct 26
0
not healing one file
Hey Richard,
Could you share the following information, please?
1. gluster volume info <volname>
2. getfattr output of that file from all the bricks
getfattr -d -e hex -m . <brickpath/filepath>
3. glustershd & glfsheal logs
Regards,
Karthik
On Thu, Oct 26, 2017 at 10:21 AM, Amar Tumballi <atumball at redhat.com> wrote:
> On a side note, try recently released health
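Spelled out, the three items requested above would look roughly like this on each node, assuming the volume is the "home" volume suggested by the translator names in the related logs (brick and file paths are placeholders; getfattr runs against the brick path, not the client mount, and log file names may differ slightly by version):

  # 1. Volume configuration
  gluster volume info home

  # 2. Replication xattrs of the affected file, collected from every brick
  getfattr -d -e hex -m . /bricks/brick1/home/path/to/affected-file

  # 3. Self-heal daemon and heal-client logs
  less /var/log/glusterfs/glustershd.log
  less /var/log/glusterfs/glfsheal-home.log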
2017 Oct 26
3
not healing one file
On a side note, try the recently released health report tool, and see if it
diagnoses any issues in the setup. Currently you may have to run it on all
three machines.
On 26-Oct-2017 6:50 AM, "Amar Tumballi" <atumball at redhat.com> wrote:
> Thanks for this report. This week many of the developers are at Gluster
> Summit in Prague, will be checking this and respond next
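The health report tool referred to here was published around that time as a standalone Python utility; a hedged sketch of using it, assuming the pip package and command are both named gluster-health-report as in its announcement, run separately on each of the three machines:

  # Install and run the report on every node in the cluster
  pip install gluster-health-report
  gluster-health-report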
2017 Oct 26
2
not healing one file
...8859-48e3-96ca-60a988eb9358/BE252A639408F51C1A9172D683E27CDA2D2C8764 (7861c6cc-3185-49aa-92dd-b66e6e1d63c2) on home-client-2
[2017-10-25 10:14:02.011544] W [MSGID: 108015] [afr-self-heal-entry.c:56:afr_selfheal_entry_delete] 0-home-replicate-0: expunging file a3f5a769-8859-48e3-96ca-60a988eb9358/4604DF1295ABF43448282482090C1A31C7523603 (fe986c0f-5412-4385-9afc-71d48ae0b2c9) on home-client-2
[2017-10-25 10:14:02.033449] W [MSGID: 108015] [afr-self-heal-entry.c:56:afr_selfheal_entry_delete] 0-home-replicate-0: expunging file a3f5a769-8859-48e3-96ca-60a988eb9358/EEB0D72D714FB54640E6EF972FCF8AD68C2842...
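The expunge messages above are the entry self-heal removing stray directory entries on home-client-2; a quick way to watch what is still pending, with the volume name home inferred from the translator names in the log:

  # Entries still waiting to be healed, listed per brick
  gluster volume heal home info

  # Entries the heal could not resolve on its own
  gluster volume heal home info split-brain

  # Self-heal daemon activity while the heal runs
  tail -f /var/log/glusterfs/glustershd.log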