2018 Feb 05
Fwd: Troubleshooting glusterfs
Hi,
I see a lot of the following messages in the logs:
[2018-02-04 03:22:01.544446] I [glusterfsd-mgmt.c:1821:mgmt_getspec_cbk]
0-glusterfs: No change in volfile,continuing
[2018-02-04 07:41:16.189349] W [MSGID: 109011]
[dht-layout.c:186:dht_layout_search]
48-gv0-dht: no subvolume for hash (value) = 122440868
[2018-02-04 07:41:16.244261] W [fuse-bridge.c:2398:fuse_writev_cbk]
0-glusterfs-fuse:
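The "no subvolume for hash" warning above comes from DHT's layout search: each subvolume owns a contiguous range of the 32-bit hash space, a file's hashed name selects the owning subvolume, and the warning fires when no range covers the hash (a layout hole). As a rough illustration (a simplified model, not GlusterFS's actual C code; the subvolume names and ranges below are hypothetical):

```python
# Simplified model of dht_layout_search: each subvolume owns a
# contiguous [start, stop] range of the 32-bit hash space.
def layout_search(layout, hashed):
    """Return the subvolume whose range covers `hashed`, or None."""
    for subvol, start, stop in layout:
        if start <= hashed <= stop:
            return subvol
    return None  # the "no subvolume for hash" case (W, MSGID 109011)

# Hypothetical 2-brick layout with a one-value hole at 0x7FFFFFFF:
layout = [
    ("gv0-client-0", 0x00000000, 0x7FFFFFFE),
    ("gv0-client-1", 0x80000000, 0xFFFFFFFF),
]
```

A hash that falls into the hole (here 0x7FFFFFFF) matches no subvolume, which is the condition the log reports; a rebalance with `fix-layout` is the usual way to repair such holes.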
2018 Feb 05
Fwd: Troubleshooting glusterfs
On 5 February 2018 at 15:40, Nithya Balachandran <nbalacha at redhat.com>
wrote:
> Hi,
>
>
> I see a lot of the following messages in the logs:
> [2018-02-04 03:22:01.544446] I [glusterfsd-mgmt.c:1821:mgmt_getspec_cbk]
> 0-glusterfs: No change in volfile,continuing
> [2018-02-04 07:41:16.189349] W [MSGID: 109011]
> [dht-layout.c:186:dht_layout_search] 48-gv0-dht: no
2018 Feb 05
Fwd: Troubleshooting glusterfs
Hello Nithya!
Thank you so much, I think we are close to building a stable storage solution
based on your recommendations. Here's our rebalance log - please don't pay
attention to the error messages after 9AM - that is when we manually
destroyed the volume to recreate it for further testing. Also, all the
remove-brick operations you can see in the log were executed manually while
recreating the volume.
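When sharing a log where everything after a known teardown time should be ignored, it can help to trim it first. A small sketch, assuming the standard gluster log timestamp prefix `[YYYY-MM-DD HH:MM:SS.ffffff]` and an assumed cutoff date of 2018-02-05 (the "9AM" from the message):

```python
from datetime import datetime

# Assumed cutoff: 9AM on the day the volume was manually destroyed.
CUTOFF = datetime(2018, 2, 5, 9, 0, 0)

def before_cutoff(line):
    """Keep only log lines timestamped before the manual teardown."""
    # Gluster log lines start with "[YYYY-MM-DD HH:MM:SS.ffffff]".
    try:
        ts = datetime.strptime(line[1:27], "%Y-%m-%d %H:%M:%S.%f")
    except ValueError:
        return False  # continuation lines / lines without a timestamp
    return ts < CUTOFF
```

Filtering the rebalance log through this before attaching it keeps only the messages that are relevant to the actual problem.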
2018 Feb 07
Fwd: Troubleshooting glusterfs
Hello Nithya! Thank you for your help in figuring this out!
We changed our configuration and, after a successful test yesterday, we have
run into a new issue today.
The test, involving moderate read/write load (~20-30 Mb/s) and scaling of the
storage, had been running for about 3 hours when the system got stuck.
At the user level, we see errors like this when trying to work with the
filesystem:
OSError:
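The message cuts off before showing the full OSError. If it turns out to be a transient error from the FUSE mount (for example ENOTCONN or EIO while a brick connection is re-established), a client-side retry wrapper can sometimes ride it out. A minimal sketch, assuming transient errno values; this is a workaround idea, not a fix for the underlying gluster issue:

```python
import errno
import time

def retry_on_oserror(fn, attempts=3, delay=0.1):
    """Retry fn() on transient OSError (assumed: ENOTCONN, EIO),
    re-raising immediately on anything else or after the last attempt."""
    for i in range(attempts):
        try:
            return fn()
        except OSError as e:
            if i == attempts - 1 or e.errno not in (errno.ENOTCONN, errno.EIO):
                raise
            time.sleep(delay)  # brief back-off before retrying
```

Usage would be something like `retry_on_oserror(lambda: open("/mnt/gv0/some-file").read())`, with the hypothetical mount path standing in for wherever the volume is mounted.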