I don't have a volume named "gluster_shared_storage".
Here is what I have:
# gluster volume status gluster_shared_storage
Volume gluster_shared_storage does not exist
I added the script
"/var/lib/glusterd/hooks/1/set/post/S32gluster_enable_shared_storage.sh"
and then ran the following commands:
# systemctl restart glusterd
# gluster volume set all cluster.enable-shared-storage disable
# gluster volume set all cluster.enable-shared-storage enable
After several minutes I checked whether the volume exists, but it still
doesn't:
# gluster volume status gluster_shared_storage
Volume gluster_shared_storage does not exist
I checked the log file
"/var/log/glusterfs/run-gluster-shared_storage.log", and it is empty.
Then I checked the log file
"/var/log/glusterfs/etc-glusterfs-glusterd.vol.log"; it has the following
content:
[2016-11-21 10:01:58.847473] I [MSGID: 106499] [glusterd-handler.c:4349:__glusterd_handle_status_volume] 0-management: Received status volume req for volume gluster_shared_storage
[2016-11-21 10:01:58.850019] E [MSGID: 106525] [glusterd-op-sm.c:3914:glusterd_dict_set_volid] 0-management: Volume gluster_shared_storage does not exist
[2016-11-21 10:01:58.850058] E [MSGID: 106289] [glusterd-syncop.c:1894:gd_sync_task_begin] 0-management: Failed to build payload for operation 'Volume Status'
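To pull out just the error-level entries from a log like this, something along these lines could be used (a sketch; the printf demo simply replays shortened versions of the two entries quoted above):

```shell
# A sketch for filtering only error-level ("E") entries of the glusterd log.
# On a live node:
#   grep '] E \[' /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
# Demo replaying abbreviated versions of the entries quoted above:
printf '%s\n' \
  '[2016-11-21 10:01:58.847473] I [MSGID: 106499] Received status volume req' \
  '[2016-11-21 10:01:58.850019] E [MSGID: 106525] Volume gluster_shared_storage does not exist' |
  grep '] E \['
# prints only the second (error) line
```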
I am not sure how to check the rpm, because I installed glusterfs
with "yum install glusterfs-server".
Do you know how to get the rpm name when installing with yum?
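For what it's worth, packages installed via yum on an rpm-based system can be queried with rpm directly; a hedged sketch:

```shell
# List installed gluster packages with their full names.
rpm -qa 'glusterfs*'
# List the files shipped by glusterfs-server (harmless if it is not installed).
rpm -qil glusterfs-server | grep S32gluster_enable_shared_storage || true
```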
Sincerely,
Alexandr
On Mon, Nov 21, 2016 at 9:02 AM, Jiffin Tony Thottan <jthottan at redhat.com> wrote:
>
>
> On 21/11/16 11:13, Alexandr Porunov wrote:
>
> Version of glusterfs is 3.8.5
>
> Here is what I have installed:
> rpm -ivh http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-8.noarch.rpm
> yum install centos-release-gluster
> yum install glusterfs-server
>
>
> It should be part of glusterfs-server. Can you check the files provided by
> that package? Run: rpm -qil <full name of the glusterfs-server rpm>
>
> yum install glusterfs-geo-replication
>
> Unfortunately it doesn't work if I just add the script
> "/var/lib/glusterd/hooks/1/set/post/S32gluster_enable_shared_storage.sh"
> and restart "glusterd".
>
>
> I didn't get that; when you rerun "gluster v set all
> cluster.enable-shared-storage enable" it should work (I guess even a
> glusterd restart is not required).
> Or do you have any volume named "gluster_shared_storage"? If yes, please
> remove it and rerun the cli.
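That cleanup could be scripted roughly as follows (a hedged sketch; `--mode=script` suppresses the y/n prompts, and this should only be run if a stale volume by that name actually exists):

```shell
# Remove a stale gluster_shared_storage volume, then re-trigger the hook.
if command -v gluster >/dev/null 2>&1; then
  gluster --mode=script volume stop gluster_shared_storage
  gluster --mode=script volume delete gluster_shared_storage
  gluster volume set all cluster.enable-shared-storage enable
else
  echo "gluster CLI not found on this host"
fi
```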
>
> --
> Jiffin
>
>
> It seems that I have to install something else..
>
> Sincerely,
> Alexandr
>
>
>
> On Mon, Nov 21, 2016 at 6:58 AM, Jiffin Tony Thottan <jthottan at redhat.com> wrote:
>
>>
>> On 21/11/16 01:07, Alexandr Porunov wrote:
>>
>> I installed it from rpm. No, that file isn't there; the folder
>> "/var/lib/glusterd/hooks/1/set/post/" is empty.
>>
>>
>> Which gluster version and which gluster rpms have you installed?
>> For the time being, just download this file [1], copy it to the above
>> location, and rerun the same cli.
>>
>> [1] https://github.com/gluster/glusterfs/blob/master/extras/hook-scripts/set/post/S32gluster_enable_shared_storage.sh
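A sketch of fetching that file from the command line (the raw.githubusercontent.com URL is an assumption derived from the GitHub link above; installing into the hooks directory needs root):

```shell
# Download the hook script to /tmp, then install it into the hooks directory.
url='https://raw.githubusercontent.com/gluster/glusterfs/master/extras/hook-scripts/set/post/S32gluster_enable_shared_storage.sh'
dest='/var/lib/glusterd/hooks/1/set/post'
if curl -fsSL -o /tmp/S32gluster_enable_shared_storage.sh "$url" 2>/dev/null; then
  echo "downloaded; now run: install -m 0755 /tmp/S32gluster_enable_shared_storage.sh $dest/"
else
  echo "download failed; fetch the script from the GitHub page manually"
fi
```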
>>
>> --
>> Jiffin
>>
>>
>> Sincerely,
>> Alexandr
>>
>> On Sun, Nov 20, 2016 at 2:55 PM, Jiffin Tony Thottan <jthottan at redhat.com> wrote:
>>
>>> Did you install from rpm or directly from sources? Can you check whether
>>> the following script is present?
>>> /var/lib/glusterd/hooks/1/set/post/S32gluster_enable_shared_storage.sh
>>>
>>> --
>>>
>>> Jiffin
>>>
>>>
>>> On 20/11/16 13:33, Alexandr Porunov wrote:
>>>
>>> To enable shared storage I used the following command:
>>> # gluster volume set all cluster.enable-shared-storage enable
>>>
>>> But it seems that it doesn't create gluster_shared_storage automatically.
>>>
>>> # gluster volume status gluster_shared_storage
>>> Volume gluster_shared_storage does not exist
>>>
>>> Do I need to manually create a volume "gluster_shared_storage"? Do I
>>> need to manually create a folder "/var/run/gluster/shared_storage"? Do
>>> I need to mount it manually? Or is none of that necessary?
>>>
>>> If I use 6 cluster nodes and I need shared storage on all of them,
>>> how do I create it?
>>> It says that it has to be replica 2 or replica 3. But if we use shared
>>> storage on all 6 nodes, then we have only 2 ways to create the volume:
>>> 1. Use replica 6
>>> 2. Use replica 3 with distribution.
>>>
>>> Which one should I use?
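For what it's worth, with 6 bricks and replica 3 the usual layout is distributed-replicate; a hedged sketch (the hostnames and brick paths are made up, and the shared-storage hook picks its own replica count, so this only illustrates general volume layout):

```shell
# 6 bricks with replica 3 give a distributed-replicate volume of 2 subvolumes.
# On a live cluster (hypothetical hostnames/brick paths):
#   gluster volume create myvol replica 3 \
#     node1:/bricks/b1 node2:/bricks/b1 node3:/bricks/b1 \
#     node4:/bricks/b1 node5:/bricks/b1 node6:/bricks/b1
bricks=6; replica=3
echo "distribute subvolumes: $((bricks / replica))"
# prints: distribute subvolumes: 2
```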
>>>
>>> Sincerely,
>>> Alexandr
>>>
>>> On Sun, Nov 20, 2016 at 9:07 AM, Jiffin Tony Thottan <jthottan at redhat.com> wrote:
>>>
>>>>
>>>>
>>>> On 19/11/16 21:47, Alexandr Porunov wrote:
>>>>
>>>> Unfortunately I don't have that log file, but I have
>>>> 'run-gluster-shared_storage.log', and it contains errors I don't understand.
>>>>
>>>> Here is the content of 'run-gluster-shared_storage.log':
>>>>
>>>>
>>>> Make sure shared storage is up and running using "gluster volume
>>>> status gluster_shared_storage".
>>>>
>>>> Maybe the issue is related to firewalld or iptables. Try it after
>>>> disabling them.
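Rather than disabling the firewall entirely, the gluster ports could be opened; a sketch under the assumption of firewalld and gluster 3.x default ports (24007-24008 for management, 49152 onward for bricks):

```shell
# Open gluster management and brick ports in firewalld (needs root).
if command -v firewall-cmd >/dev/null 2>&1; then
  firewall-cmd --permanent --add-port=24007-24008/tcp   # glusterd management
  firewall-cmd --permanent --add-port=49152-49251/tcp   # brick port range (assumed)
  firewall-cmd --reload
else
  echo "firewalld not present; check iptables rules instead"
fi
```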
>>>>
>>>> --
>>>>
>>>> Jiffin
>>>>
>>>> [2016-11-19 10:37:01.581737] I [MSGID: 100030] [glusterfsd.c:2454:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.8.5 (args: /usr/sbin/glusterfs --volfile-server=127.0.0.1 --volfile-id=gluster_shared_storage /run/gluster/shared_storage)
>>>> [2016-11-19 10:37:01.641836] I [MSGID: 101190] [event-epoll.c:628:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
>>>> [2016-11-19 10:37:01.642311] E [glusterfsd-mgmt.c:1586:mgmt_getspec_cbk] 0-glusterfs: failed to get the 'volume file' from server
>>>> [2016-11-19 10:37:01.642340] E [glusterfsd-mgmt.c:1686:mgmt_getspec_cbk] 0-mgmt: failed to fetch volume file (key:gluster_shared_storage)
>>>> [2016-11-19 10:37:01.642592] W [glusterfsd.c:1327:cleanup_and_exit] (-->/lib64/libgfrpc.so.0(rpc_clnt_handle_reply+0x90) [0x7f95cd309770] -->/usr/sbin/glusterfs(mgmt_getspec_cbk+0x536) [0x7f95cda3afc6] -->/usr/sbin/glusterfs(cleanup_and_exit+0x6b) [0x7f95cda34b4b] ) 0-: received signum (0), shutting down
>>>> [2016-11-19 10:37:01.642638] I [fuse-bridge.c:5793:fini] 0-fuse: Unmounting '/run/gluster/shared_storage'.
>>>>
>>>> [2016-11-19 10:37:18.798787] I [MSGID: 100030] [glusterfsd.c:2454:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.8.5 (args: /usr/sbin/glusterfs --volfile-server=127.0.0.1 --volfile-id=gluster_shared_storage /run/gluster/shared_storage)
>>>> [2016-11-19 10:37:18.813011] I [MSGID: 101190] [event-epoll.c:628:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
>>>> [2016-11-19 10:37:18.813363] E [glusterfsd-mgmt.c:1586:mgmt_getspec_cbk] 0-glusterfs: failed to get the 'volume file' from server
>>>> [2016-11-19 10:37:18.813386] E [glusterfsd-mgmt.c:1686:mgmt_getspec_cbk] 0-mgmt: failed to fetch volume file (key:gluster_shared_storage)
>>>> [2016-11-19 10:37:18.813592] W [glusterfsd.c:1327:cleanup_and_exit] (-->/lib64/libgfrpc.so.0(rpc_clnt_handle_reply+0x90) [0x7f96ba4c7770] -->/usr/sbin/glusterfs(mgmt_getspec_cbk+0x536) [0x7f96babf8fc6] -->/usr/sbin/glusterfs(cleanup_and_exit+0x6b) [0x7f96babf2b4b] ) 0-: received signum (0), shutting down
>>>> [2016-11-19 10:37:18.813633] I [fuse-bridge.c:5793:fini] 0-fuse: Unmounting '/run/gluster/shared_storage'.
>>>>
>>>> [2016-11-19 10:40:33.115685] I [MSGID: 100030] [glusterfsd.c:2454:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.8.5 (args: /usr/sbin/glusterfs --volfile-server=127.0.0.1 --volfile-id=gluster_shared_storage /run/gluster/shared_storage)
>>>> [2016-11-19 10:40:33.124218] I [MSGID: 101190] [event-epoll.c:628:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
>>>> [2016-11-19 10:40:33.124722] E [glusterfsd-mgmt.c:1586:mgmt_getspec_cbk] 0-glusterfs: failed to get the 'volume file' from server
>>>> [2016-11-19 10:40:33.124738] E [glusterfsd-mgmt.c:1686:mgmt_getspec_cbk] 0-mgmt: failed to fetch volume file (key:gluster_shared_storage)
>>>> [2016-11-19 10:40:33.124869] W [glusterfsd.c:1327:cleanup_and_exit] (-->/lib64/libgfrpc.so.0(rpc_clnt_handle_reply+0x90) [0x7f23576a9770] -->/usr/sbin/glusterfs(mgmt_getspec_cbk+0x536) [0x7f2357ddafc6] -->/usr/sbin/glusterfs(cleanup_and_exit+0x6b) [0x7f2357dd4b4b] ) 0-: received signum (0), shutting down
>>>> [2016-11-19 10:40:33.124896] I [fuse-bridge.c:5793:fini] 0-fuse: Unmounting '/run/gluster/shared_storage'.
>>>>
>>>> [2016-11-19 10:44:36.029838] I [MSGID: 100030] [glusterfsd.c:2454:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.8.5 (args: /usr/sbin/glusterfs --volfile-server=127.0.0.1 --volfile-id=gluster_shared_storage /run/gluster/shared_storage)
>>>> [2016-11-19 10:44:36.043705] I [MSGID: 101190] [event-epoll.c:628:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
>>>> [2016-11-19 10:44:36.044082] E [glusterfsd-mgmt.c:1586:mgmt_getspec_cbk] 0-glusterfs: failed to get the 'volume file' from server
>>>> [2016-11-19 10:44:36.044106] E [glusterfsd-mgmt.c:1686:mgmt_getspec_cbk] 0-mgmt: failed to fetch volume file (key:gluster_shared_storage)
>>>> [2016-11-19 10:44:36.044302] W [glusterfsd.c:1327:cleanup_and_exit] (-->/lib64/libgfrpc.so.0(rpc_clnt_handle_reply+0x90) [0x7fbd9dced770] -->/usr/sbin/glusterfs(mgmt_getspec_cbk+0x536) [0x7fbd9e41efc6] -->/usr/sbin/glusterfs(cleanup_and_exit+0x6b) [0x7fbd9e418b4b] ) 0-: received signum (0), shutting down
>>>> [2016-11-19 10:44:36.044356] I [fuse-bridge.c:5793:fini] 0-fuse: Unmounting '/run/gluster/shared_storage'.
>>>>
>>>> Can you help me to figure out what I am doing wrong?
>>>>
>>>> Sincerely,
>>>> Alexandr
>>>>
>>>> On Sat, Nov 19, 2016 at 3:18 PM, Saravanakumar Arumugam <sarumuga at redhat.com> wrote:
>>>>
>>>>>
>>>>>
>>>>> On 11/19/2016 04:13 PM, Alexandr Porunov wrote:
>>>>>
>>>>> It still doesn't work.
>>>>>
>>>>> I have created that dir:
>>>>> # mkdir -p /var/run/gluster/shared_storage
>>>>>
>>>>> and then:
>>>>> # mount -t glusterfs 127.0.0.1:gluster_shared_storage /var/run/gluster/shared_storage
>>>>> Mount failed. Please check the log file for more details.
>>>>>
>>>>> Where do I find the right log file? "/var/log/glusterfs/" contains a
>>>>> lot of log files.
>>>>>
>>>>>
>>>>> You can find the mount log, named after the mounted directory (i.e.
>>>>> "directory_mounted".log), inside /var/log/glusterfs.
>>>>> There is some issue in your setup; check this log and share it here.
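As far as I can tell, the mount log name is simply the mount path with slashes turned into dashes; a small sketch:

```shell
# /run/gluster/shared_storage -> /var/log/glusterfs/run-gluster-shared_storage.log
mountpoint=/run/gluster/shared_storage
logname="$(echo "$mountpoint" | sed 's|^/||; s|/|-|g').log"
echo "/var/log/glusterfs/$logname"
# prints: /var/log/glusterfs/run-gluster-shared_storage.log
```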
>>>>>
>>>>>
>>>>> Sincerely,
>>>>> Alexandr
>>>>>
>>>>> On Sat, Nov 19, 2016 at 11:16 AM, Saravanakumar Arumugam <sarumuga at redhat.com> wrote:
>>>>>
>>>>>>
>>>>>> On 11/19/2016 01:39 AM, Alexandr Porunov wrote:
>>>>>>
>>>>>>> Hello,
>>>>>>>
>>>>>>> I am trying to enable shared storage for Geo-Replication, but I am
>>>>>>> not sure that I am doing it properly.
>>>>>>>
>>>>>>> Here is what I do:
>>>>>>> # gluster volume set all cluster.enable-shared-storage enable
>>>>>>> volume set: success
>>>>>>>
>>>>>>> # mount -t glusterfs 127.0.0.1:gluster_shared_storage /var/run/gluster/shared_storage
>>>>>>> ERROR: Mount point does not exist
>>>>>>> Please specify a mount point
>>>>>>> Usage:
>>>>>>> man 8 /sbin/mount.glusterfs
>>>>>>>
>>>>>>>
>>>>>> This error means the /var/run/gluster/shared_storage directory does
>>>>>> NOT exist.
>>>>>>
>>>>>> But running the command (gluster volume set all
>>>>>> cluster.enable-shared-storage enable) should carry out the mounting
>>>>>> automatically, so there is no need to mount manually.
>>>>>>
>>>>>> After running "gluster volume set all cluster.enable-shared-storage
>>>>>> enable", check:
>>>>>> 1. gluster volume info
>>>>>> 2. that a glusterfs process started with volfile-id
>>>>>> gluster_shared_storage.
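The second check can be scripted with pgrep, e.g. (a sketch; `pgrep -f` matches against the full command line):

```shell
# Look for the glusterfs client that mounts the shared-storage volume.
pgrep -af 'volfile-id=gluster_shared_storage' 2>/dev/null \
  || echo "no glusterfs process with volfile-id=gluster_shared_storage"
```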
>>>>>>
>>>>>> Thanks,
>>>>>> Saravana
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>> _______________________________________________
>>>> Gluster-users mailing list
>>>> Gluster-users at gluster.org
>>>> http://www.gluster.org/mailman/listinfo/gluster-users
>>>>