search for: _testvol

Displaying 8 results from an estimated 8 matches for "_testvol".

2013 Oct 14 · 0 replies · Glusterfs 3.4.1 not able to mount the exports containing soft links
...d I am trying to mount the soft link contained in the volume as NFS from server 2, but it is failing with the error "mount.nfs: an incorrect mount option was specified". Below is the volume on server 1 that I am trying to export: server 1 sh# gluster volume info all Volume Name: _testvol Type: Distribute Volume ID: f36d4ec4-8462-44aa-a0e6-e86c8bd3c914 Status: Started Number of Bricks: 2 Transport-type: tcp Bricks: Brick1: 10.137.108.163:/mnt/gluster Brick2: 10.137.108.163:/root/test_fs Below is the content of the exported volume on server 1: server 1 sh# ls -l total 4...
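A hedged sketch of how this mount is usually made to work: Gluster's built-in NFS server speaks NFS v3 only, and exporting a subdirectory (such as a symlink target) inside a volume requires the nfs.export-dir options. The directory path "/softlink_target" below is hypothetical; the actual path is not shown in the post.

```shell
# On server 1: allow subdirectory exports from the volume
# ("/softlink_target" is a placeholder for the symlinked directory)
gluster volume set _testvol nfs.export-dirs on
gluster volume set _testvol nfs.export-dir "/softlink_target"

# On server 2: force NFS v3, since Gluster's built-in NFS server
# does not speak v4; omitting vers=3 on a modern client can fail with
# "mount.nfs: an incorrect mount option was specified"
mount -t nfs -o vers=3,nolock 10.137.108.163:/_testvol/softlink_target /mnt/nfs
```

Whether the subdirectory export is accepted also depends on the Gluster version; 3.4.x supports nfs.export-dir but the path syntax changed across releases.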
2017 Jun 04 · 2 replies · Rebalance + VM corruption - current status and request for feedback
...[glusterfsd.c:2338:main] >> 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.7.20 >> (args: /usr/sbin/glusterfs --volfile-server=s1 --volfile-server=s2 >> --volfile-server=s3 --volfile-server=s4 --volfile-id=/testvol >> /rhev/data-center/mnt/glusterSD/s1:_testvol) >> [2017-05-26 08:58:40.901204] I [MSGID: 100030] [glusterfsd.c:2338:main] >> 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.7.20 >> (args: /usr/sbin/glusterfs --volfile-server=s1 --volfile-server=s2 >> --volfile-server=s3 --volfile-server=s4 --volfile...
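The repeated log line above shows a FUSE client started with four --volfile-server arguments for failover. A sketch of the equivalent mount invocation (server names s1..s4 and the oVirt mount path are taken from the log excerpt):

```shell
# FUSE mount of /testvol with volfile-server failover:
# s1 is tried first; s2, s3, s4 are fallbacks for fetching the volfile
mount -t glusterfs \
  -o backup-volfile-servers=s2:s3:s4 \
  s1:/testvol /rhev/data-center/mnt/glusterSD/s1:_testvol
```

The backup servers only matter for retrieving the volume file at mount time; after that the client talks to the bricks directly.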
2017 Jun 06 · 2 replies · Rebalance + VM corruption - current status and request for feedback
...[MSGID: 100030] [glusterfsd.c:2338:main] > 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.7.20 > (args: /usr/sbin/glusterfs --volfile-server=s1 --volfile-server=s2 > --volfile-server=s3 --volfile-server=s4 --volfile-id=/testvol > /rhev/data-center/mnt/glusterSD/s1:_testvol) > [2017-05-26 08:58:40.901204] I [MSGID: 100030] [glusterfsd.c:2338:main] > 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.7.20 > (args: /usr/sbin/glusterfs --volfile-server=s1 --volfile-server=s2 > --volfile-server=s3 --volfile-server=s4 --volfile-id=/testvol...
2017 Jun 05 · 0 replies · Rebalance + VM corruption - current status and request for feedback
...338:main] >>> 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.7.20 >>> (args: /usr/sbin/glusterfs --volfile-server=s1 --volfile-server=s2 >>> --volfile-server=s3 --volfile-server=s4 --volfile-id=/testvol >>> /rhev/data-center/mnt/glusterSD/s1:_testvol) >>> [2017-05-26 08:58:40.901204] I [MSGID: 100030] [glusterfsd.c:2338:main] >>> 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.7.20 >>> (args: /usr/sbin/glusterfs --volfile-server=s1 --volfile-server=s2 >>> --volfile-server=s3 --volfile-ser...
2017 Jun 05 · 1 reply · Rebalance + VM corruption - current status and request for feedback
...>>>> 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.7.20 >>>> (args: /usr/sbin/glusterfs --volfile-server=s1 --volfile-server=s2 >>>> --volfile-server=s3 --volfile-server=s4 --volfile-id=/testvol >>>> /rhev/data-center/mnt/glusterSD/s1:_testvol) >>>> [2017-05-26 08:58:40.901204] I [MSGID: 100030] [glusterfsd.c:2338:main] >>>> 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.7.20 >>>> (args: /usr/sbin/glusterfs --volfile-server=s1 --volfile-server=s2 >>>> --volfile-server=...
2017 Jun 06 · 0 replies · Rebalance + VM corruption - current status and request for feedback
...6 08:58:23.647458] I [MSGID: 100030] [glusterfsd.c:2338:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.7.20 (args: /usr/sbin/glusterfs --volfile-server=s1 --volfile-server=s2 --volfile-server=s3 --volfile-server=s4 --volfile-id=/testvol /rhev/data-center/mnt/glusterSD/s1:_testvol) [2017-05-26 08:58:40.901204] I [MSGID: 100030] [glusterfsd.c:2338:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.7.20 (args: /usr/sbin/glusterfs --volfile-server=s1 --volfile-server=s2 --volfile-server=s3 --volfile-server=s4 --volfile-id=/testvol /rhev/data-center/mnt/g...
2017 Jun 06 · 0 replies · Rebalance + VM corruption - current status and request for feedback
...[glusterfsd.c:2338:main] >> 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.7.20 >> (args: /usr/sbin/glusterfs --volfile-server=s1 --volfile-server=s2 >> --volfile-server=s3 --volfile-server=s4 --volfile-id=/testvol >> /rhev/data-center/mnt/glusterSD/s1:_testvol) >> [2017-05-26 08:58:40.901204] I [MSGID: 100030] [glusterfsd.c:2338:main] >> 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.7.20 >> (args: /usr/sbin/glusterfs --volfile-server=s1 --volfile-server=s2 >> --volfile-server=s3 --volfile-server=s4 --volfile...
2013 Oct 02 · 1 reply · Shutting down a GlusterFS server.
Hi, I have a 2-node replica volume running with GlusterFS 3.3.2 on CentOS 6.4. I want to shut down one of the gluster servers for maintenance. Is there any best practice to follow when turning off a server, in terms of services etc., or can I just shut the server down? Thanks & Regards, Bobby Jacob
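A hedged sketch of one commonly suggested sequence for taking a replica node down cleanly. The volume name "testvol" is a placeholder; the original post does not name the volume. On GlusterFS 3.3, stopping glusterd does not stop the brick processes, so they are terminated explicitly.

```shell
# 1. Check that no self-heals are pending before taking a replica down
gluster volume heal testvol info

# 2. Stop the management daemon, then the brick and client-side daemons
#    (glusterd alone does not stop glusterfsd brick processes)
service glusterd stop          # CentOS 6 uses SysV init scripts
pkill glusterfsd               # brick processes
pkill glusterfs                # NFS / self-heal daemon processes

# 3. Power off; after reboot, glusterd restarts the bricks and the
#    self-heal daemon replays the writes the node missed
shutdown -h now
```

While the node is down, clients served by the surviving replica keep working, but the volume has no redundancy until the healed node rejoins.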