Displaying 13 results from an estimated 13 matches for "4f0f".
2017 Jul 07
1
Gluster 3.11 on ubuntu 16.04 not working
...figure/>
The fstab entries are:
/dev/sdb1 /gluster xfs defaults 0 0
knoten5:/gv0 /glusterfs glusterfs defaults,_netdev,acl,selinux 0 0
After reboot, /gluster is mounted but /glusterfs is not.
Mounting it manually with mount -a works.
gluster peer status shows
Number of Peers: 1
Hostname: knoten5
Uuid: 996c9b7b-9913-4f0f-a0e2-387fbd970129
State: Peer in Cluster (Connected)
Network connectivity is okay
gluster volume info shows
Volume Name: gv0
Type: Replicate
Volume ID: 0e049b18-9fb7-4554-a4b7-b7413753af3a
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: knoten4:/g...
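A workaround often suggested when a GlusterFS fstab mount races glusterd at boot is to let systemd mount the volume on first access. A minimal sketch, assuming a systemd-based host such as Ubuntu 16.04 and the knoten5:/gv0 entry quoted above (the added options are standard systemd mount options, not Gluster-specific):

  # /etc/fstab: defer the mount until first access via a systemd automount unit
  knoten5:/gv0  /glusterfs  glusterfs  defaults,_netdev,acl,noauto,x-systemd.automount  0 0

  # regenerate the systemd mount units from fstab without rebooting
  systemctl daemon-reload
  mount /glusterfs

Whether this is the right fix depends on why the boot-time mount failed in the first place; it papers over a glusterd/network ordering problem rather than solving it.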
2017 May 30
1
Gluster client mount fails in mid flight with signum 15
...eb but cannot find anyone else having the same problem mid-flight.
The clients have four mounts of volumes from the same server; all mounts fail simultaneously.
Peer status looks ok
Volume status looks ok
Volume info looks like this:
Volume Name: GLUSTERVOLUME
Type: Replicate
Volume ID: ca7af017-4f0f-44cc-baf6-43168eed0748
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: GLUSTERSERVER1:/gluster/GLUSTERVOLUME/brick
Brick2: GLUSTERSERVER2:/gluster/GLUSTERVOLUME/brick
Options Reconfigured:
transport.address-family: inet
cluster.self-heal-daemon: ena...
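Signal 15 is SIGTERM, so something on the client asked the glusterfs processes to shut down rather than the mounts crashing on their own. A hedged sketch of where that usually shows up, assuming default log locations (client log names are derived from the mount point, so the exact file names may differ):

  # GlusterFS FUSE client logs live under /var/log/glusterfs/,
  # named after the mount point (e.g. /mnt/data -> mnt-data.log)
  grep -i "signum" /var/log/glusterfs/*.log

  # a line along the lines of "received signum (15), shutting down" confirms
  # the client received SIGTERM; correlate its timestamp with syslog/journal
  # entries to find what sent the signal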
2014 Feb 19
1
problem with nwfilter direction='out'
I tested the following simple filter:
<filter name='nwfilter-test-fedora2' chain='root'>
<uuid>ccbd255f-4be5-4f0f-8835-770ea40cb2c9</uuid>
<rule action='accept' direction='out' priority='500'>
<tcp dstipaddr='10.1.24.0' dstipmask='24' comment='test test test'/>
</rule>
</filter>
but I get strange results (look at the attache...
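For context, a filter like this is normally loaded with virsh and then attached to the guest's interface; a minimal sketch (the file name and interface snippet below are illustrative, not taken from the original post):

  # define or update the network filter from the XML above
  virsh nwfilter-define nwfilter-test-fedora2.xml
  virsh nwfilter-list

  # then reference it from the domain's interface definition, e.g.:
  #   <interface type='network'>
  #     ...
  #     <filterref filter='nwfilter-test-fedora2'/>
  #   </interface>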
2018 Jun 21
2
WERR_BAD_NET_RESP on replication (--full-sync)
... 1: DRSUAPI_SUPPORTED_EXTENSION_GETCHGREQ_V10
0: DRSUAPI_SUPPORTED_EXTENSION_RESERVED_PART2
0: DRSUAPI_SUPPORTED_EXTENSION_RESERVED_PART3
site_guid : 229f5470-27e6-4f0f-994b-4073a5fc4dc5
pid : 0x00000000 (0)
repl_epoch : 0x00000000 (0)
bind_handle : *
bind_handle: struct policy_handle
handle_type : 0x0000...
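The dump above looks like the DRSUAPI bind information printed while a replication request is built. For context, a forced full sync is typically driven along these lines (a hedged sketch with placeholder DC names and partition DN, not the poster's exact invocation):

  # pull the named partition from SOURCE-DC onto DEST-DC,
  # ignoring the usual up-to-dateness vector checks
  samba-tool drs replicate DEST-DC SOURCE-DC dc=example,dc=com --full-sync

  # inspect replication state on the destination afterwards
  samba-tool drs showrepl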
2018 Jun 22
2
WERR_BAD_NET_RESP on replication (--full-sync)
...EXTENSION_GETCHGREQ_V10
>> 0: DRSUAPI_SUPPORTED_EXTENSION_RESERVED_PART2
>> 0: DRSUAPI_SUPPORTED_EXTENSION_RESERVED_PART3
>> site_guid : 229f5470-27e6-4f0f-994b-4073a5fc4dc5
>> pid : 0x00000000 (0)
>> repl_epoch : 0x00000000 (0)
>> bind_handle : *
>> bind_handle: struct policy_handle
>> ...
2018 Jun 21
0
WERR_BAD_NET_RESP on replication (--full-sync)
...1: DRSUAPI_SUPPORTED_EXTENSION_GETCHGREQ_V10
> 0: DRSUAPI_SUPPORTED_EXTENSION_RESERVED_PART2
> 0: DRSUAPI_SUPPORTED_EXTENSION_RESERVED_PART3
> site_guid : 229f5470-27e6-4f0f-994b-4073a5fc4dc5
> pid : 0x00000000 (0)
> repl_epoch : 0x00000000 (0)
> bind_handle : *
> bind_handle: struct policy_handle
> handle_type ...
2018 Jul 02
0
WERR_BAD_NET_RESP on replication (--full-sync)
...>>> 0: DRSUAPI_SUPPORTED_EXTENSION_RESERVED_PART2
>>> 0: DRSUAPI_SUPPORTED_EXTENSION_RESERVED_PART3
>>> site_guid : 229f5470-27e6-4f0f-994b-4073a5fc4dc5
>>> pid : 0x00000000 (0)
>>> repl_epoch : 0x00000000 (0)
>>> bind_handle : *
>>> bind_handle: struct policy_handle
>>> ...
2017 Jun 01
0
Gluster client mount fails in mid flight with signum 15
...eb but cannot find anyone else having the same problem mid-flight.
The clients have four mounts of volumes from the same server; all mounts fail simultaneously.
Peer status looks ok
Volume status looks ok
Volume info looks like this:
Volume Name: GLUSTERVOLUME
Type: Replicate
Volume ID: ca7af017-4f0f-44cc-baf6-43168eed0748
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: GLUSTERSERVER1:/gluster/GLUSTERVOLUME/brick
Brick2: GLUSTERSERVER2:/gluster/GLUSTERVOLUME/brick
Options Reconfigured:
transport.address-family: inet
cluster.self-heal-daemon: ena...
2017 Jun 01
2
Gluster client mount fails in mid flight with signum 15
...eb but cannot find anyone else having the same problem mid-flight.
The clients have four mounts of volumes from the same server; all mounts fail simultaneously.
Peer status looks ok
Volume status looks ok
Volume info looks like this:
Volume Name: GLUSTERVOLUME
Type: Replicate
Volume ID: ca7af017-4f0f-44cc-baf6-43168eed0748
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: GLUSTERSERVER1:/gluster/GLUSTERVOLUME/brick
Brick2: GLUSTERSERVER2:/gluster/GLUSTERVOLUME/brick
Options Reconfigured:
transport.address-family: inet
cluster.self-heal-daemon: ena...
2017 Jun 01
1
Gluster client mount fails in mid flight with signum 15
...he same problem mid-flight
>
> The clients have four mounts of volumes from the same server; all mounts fail simultaneously
> Peer status looks ok
> Volume status looks ok
> Volume info looks like this:
> Volume Name: GLUSTERVOLUME
> Type: Replicate
> Volume ID: ca7af017-4f0f-44cc-baf6-43168eed0748
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 2 = 2
> Transport-type: tcp
> Bricks:
> Brick1: GLUSTERSERVER1:/gluster/GLUSTERVOLUME/brick
> Brick2: GLUSTERSERVER2:/gluster/GLUSTERVOLUME/brick
> Options Reconfigured:
> transport.addre...
2017 Oct 26
0
not healing one file
Hey Richard,
Could you share the following information, please?
1. gluster volume info <volname>
2. getfattr output of that file from all the bricks
getfattr -d -e hex -m . <brickpath/filepath>
3. glustershd & glfsheal logs
Regards,
Karthik
On Thu, Oct 26, 2017 at 10:21 AM, Amar Tumballi <atumball at redhat.com> wrote:
> On a side note, try recently released health
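A hedged sketch of how those three items are usually collected (VOLNAME and the brick path are placeholders; the glfsheal log file name can vary between versions):

  # 1. volume layout and options
  gluster volume info VOLNAME

  # 2. extended attributes of the affected file, taken on every brick host
  getfattr -d -e hex -m . /path/to/brick/path/to/file

  # 3. heal state plus the self-heal daemon and glfsheal logs
  gluster volume heal VOLNAME info
  less /var/log/glusterfs/glustershd.log
  less /var/log/glusterfs/glfsheal-VOLNAME.log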
2017 Oct 26
3
not healing one file
On a side note, try the recently released health report tool and see if it
diagnoses any issues in the setup. Currently you may have to run it on all
three machines.
On 26-Oct-2017 6:50 AM, "Amar Tumballi" <atumball at redhat.com> wrote:
> Thanks for this report. This week many of the developers are at Gluster
> Summit in Prague; they will check this and respond next
2017 Oct 26
2
not healing one file
...8/D664A7FA1F2F099C6D2BB282F40CE3C0AC6723B6 (0830f88c-586f-42cc-8bab-4aa293c86571) on home-client-2
[2017-10-25 10:14:19.337868] W [MSGID: 108015] [afr-self-heal-entry.c:56:afr_selfheal_entry_delete] 0-home-replicate-0: expunging file a3f5a769-8859-48e3-96ca-60a988eb9358/D9B11BE814ECB1EBD29E5E48E80384F0FF7B0BFF (0ae73ee3-2d21-4ea9-aa4c-2cde1f193d0b) on home-client-2
[2017-10-25 10:14:19.357994] W [MSGID: 108015] [afr-self-heal-entry.c:56:afr_selfheal_entry_delete] 0-home-replicate-0: expunging file a3f5a769-8859-48e3-96ca-60a988eb9358/F0DAFD8F1E712F2FABA92D275A037A8057EAE3B0 (e0355db8-7a98-4c83-ac9...
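The GFIDs in parentheses in these log lines can be traced back to the on-brick object through the brick's .glusterfs directory; a hedged sketch using one GFID from the excerpt above (the brick path is a placeholder):

  # on the brick host: each object is stored under
  # .glusterfs/<first two hex chars>/<next two hex chars>/<full gfid>
  ls -l /path/to/brick/.glusterfs/0a/e7/0ae73ee3-2d21-4ea9-aa4c-2cde1f193d0b

  # for regular files this entry is a hard link to the real file, so its
  # inode number can be matched with: find /path/to/brick -inum <inode>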