similar to: Strange messages in mnt-xxx.log

Displaying 20 results from an estimated 1000 matches similar to: "Strange messages in mnt-xxx.log"

2018 Jan 17
0
Strange messages in mnt-xxx.log
Hi,

On 16 January 2018 at 18:56, Ing. Luca Lazzeroni - Trend Servizi Srl <luca at trendservizi.it> wrote:
> Hi,
>
> I'm testing gluster 3.12.4 and, by inspecting the log file
> /var/log/glusterfs/mnt-gv0.log (gv0 is the volume name), I found many lines
> saying:
>
> [2018-01-15 09:45:41.066914] I [MSGID: 109063] [dht-layout.c:716:dht_layout_normalize]
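To pull out just these DHT messages for review, a minimal grep along these lines should work (the log path and MSGID are the ones quoted above):

    grep 'MSGID: 109063' /var/log/glusterfs/mnt-gv0.log | tail -n 20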
2018 Jan 17
1
[Possibile SPAM] Re: Strange messages in mnt-xxx.log
Here's the volume info:

Volume Name: gv2a2
Type: Replicate
Volume ID: 83c84774-2068-4bfc-b0b9-3e6b93705b9f
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: gluster1:/bricks/brick2/gv2a2
Brick2: gluster3:/bricks/brick3/gv2a2
Brick3: gluster2:/bricks/arbiter_brick_gv2a2/gv2a2 (arbiter)
Options Reconfigured:
storage.owner-gid: 107
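For reference, a layout like the one above would typically be created with something along these lines (a sketch only; the brick paths and the option value are taken from the output above):

    gluster volume create gv2a2 replica 3 arbiter 1 \
        gluster1:/bricks/brick2/gv2a2 \
        gluster3:/bricks/brick3/gv2a2 \
        gluster2:/bricks/arbiter_brick_gv2a2/gv2a2
    gluster volume set gv2a2 storage.owner-gid 107
    gluster volume start gv2a2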
2018 Jan 23
1
[Possibile SPAM] Re: Strange messages in mnt-xxx.log
On 17 January 2018 at 16:04, Ing. Luca Lazzeroni - Trend Servizi Srl <luca at trendservizi.it> wrote:
> Here's the volume info:
>
> Volume Name: gv2a2
> Type: Replicate
> Volume ID: 83c84774-2068-4bfc-b0b9-3e6b93705b9f
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x (2 + 1) = 3
> Transport-type: tcp
> Bricks:
> Brick1:
2018 Jan 19
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
After another test (I'm trying to convince myself about gluster reliability :-) I've found that with performance.write-behind off the VM works without problems. Now I'll try with write-behind on and flush-behind on too.

On 18/01/2018 13:30, Krutika Dhananjay wrote:
> Thanks for that input. Adding Niels since the issue is reproducible
> only with libgfapi.
>
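The toggles being tested are per-volume options; a sketch, assuming the gv2a2 volume from the earlier message:

    # disable write-behind (the configuration the author reports as working)
    gluster volume set gv2a2 performance.write-behind off
    # then re-enable it together with flush-behind for the next test
    gluster volume set gv2a2 performance.write-behind on
    gluster volume set gv2a2 performance.flush-behind on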
2018 Jan 18
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
Thanks for that input. Adding Niels since the issue is reproducible only with libgfapi.

-Krutika

On Thu, Jan 18, 2018 at 1:39 PM, Ing. Luca Lazzeroni - Trend Servizi Srl <luca at gvnet.it> wrote:
> Another update.
>
> I've set up a replica 3 volume without sharding and tried to install a VM
> on a qcow2 volume on that device; however the result is the same and the vm
2018 Jan 18
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
Another update.

I've set up a replica 3 volume without sharding and tried to install a VM on a qcow2 volume on that device; however the result is the same and the VM image has been corrupted, exactly at the same point. Here's the volume info of the created volume:

Volume Name: gvtest
Type: Replicate
Volume ID: e2ddf694-ba46-4bc7-bc9c-e30803374e9d
Status: Started
Snapshot Count: 0
Number
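A minimal sketch of such a no-shard replica 3 setup (hostnames and brick paths here are hypothetical):

    gluster volume create gvtest replica 3 \
        host1:/bricks/brick1/gvtest \
        host2:/bricks/brick1/gvtest \
        host3:/bricks/brick1/gvtest
    gluster volume start gvtest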
2018 Jan 17
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
Hi, after our IRC chat I've rebuilt a virtual machine with a FUSE-based virtual disk. Everything worked flawlessly. Now I'm sending you the output of the requested getfattr command on the disk image:

# file: TestFUSE-vda.qcow2
trusted.afr.dirty=0x000000000000000000000000
trusted.gfid=0x40ffafbbe987445692bb31295fa40105
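An xattr dump in the form quoted above is what getfattr produces when run directly on the brick; a sketch, with a hypothetical brick path:

    # run as root on a brick host; -e hex prints values in the hex form shown above
    getfattr -d -m . -e hex /bricks/brick2/gv2a2/TestFUSE-vda.qcow2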
2018 Jan 17
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
I actually use FUSE and it works. If I try to use the "libgfapi" direct interface to gluster in qemu-kvm, the problem appears.

On 17/01/2018 11:35, Krutika Dhananjay wrote:
> Really? Then which protocol exactly do you see this issue with?
> libgfapi? NFS?
>
> -Krutika
>
> On Wed, Jan 17, 2018 at 3:59 PM, Ing. Luca Lazzeroni - Trend Servizi
> Srl <luca at
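The difference is in how the disk is attached to qemu-kvm; illustrative -drive fragments only (mount point, volume, and file names are hypothetical):

    # FUSE: qemu goes through the kernel mount point
    -drive file=/mnt/gv0/Test-vda.qcow2,format=qcow2,if=virtio
    # libgfapi: qemu talks to gluster directly via a gluster:// URI
    -drive file=gluster://gluster1/gv0/Test-vda.qcow2,format=qcow2,if=virtio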
2018 Jan 17
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
Really? Then which protocol exactly do you see this issue with? libgfapi? NFS?

-Krutika

On Wed, Jan 17, 2018 at 3:59 PM, Ing. Luca Lazzeroni - Trend Servizi Srl <luca at gvnet.it> wrote:
> Of course. Here's the full log. Please note that in FUSE mode everything
> works apparently without problems. I've installed 4 VMs and updated them
> without problems.
>
2018 Jan 17
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
Of course. Here's the full log. Please note that in FUSE mode everything works apparently without problems. I've installed 4 VMs and updated them without problems.

On 17/01/2018 11:00, Krutika Dhananjay wrote:
>
> On Tue, Jan 16, 2018 at 10:47 PM, Ing. Luca Lazzeroni - Trend Servizi
> Srl <luca at gvnet.it> wrote:
>
2018 Jan 17
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
On Tue, Jan 16, 2018 at 10:47 PM, Ing. Luca Lazzeroni - Trend Servizi Srl <luca at gvnet.it> wrote:
> I've made the test with the raw image format (preallocated too) and the
> corruption problem is still there (but without errors in the bricks' log files).
>
> What does the "link" error in the bricks' log files mean?
>
> I've seen the source code looking for the
2018 Jan 16
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
I've made the test with the raw image format (preallocated too) and the corruption problem is still there (but without errors in the bricks' log files).

What does the "link" error in the bricks' log files mean?

I've looked through the source code for the lines where it happens, and it seems to be a warning (it doesn't imply a failure).

On 16/01/2018 17:39, Ing. Luca Lazzeroni - Trend
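To locate the warning in question, the bricks' logs can be searched directly; a sketch, assuming the default brick log directory:

    grep -n 'link failed' /var/log/glusterfs/bricks/*.log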
2018 Jan 16
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
An update: for my tests, I've tried to create the VM volume as

    qemu-img create -f qcow2 -o preallocation=full gluster://gluster1/Test/Test-vda.img 20G

et voila! No errors at all, neither in the bricks' log files (the "link failed" message disappeared) nor in the VM (no corruption, and it installed successfully). I'll do another test with a fully preallocated raw image.

On
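The follow-up raw test would presumably look like this (a sketch; it assumes the qemu-img build supports preallocation on the gluster driver):

    qemu-img create -f raw -o preallocation=full gluster://gluster1/Test/Test-vda.img 20G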
2018 Jan 16
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
I've just done all the steps to reproduce the problem. The VM volume has been created via "qemu-img create -f qcow2 Test-vda2.qcow2 20G" on the gluster volume mounted via FUSE. I've also tried to create the volume with preallocated metadata, which moves the problem a bit further away (in time). The volume is a replica 3 arbiter 1 volume hosted on XFS bricks. Here are the
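The preallocated-metadata variant mentioned would be along these lines (same file name as in the quoted command):

    qemu-img create -f qcow2 -o preallocation=metadata Test-vda2.qcow2 20G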
2018 Jan 16
1
Problem with Gluster 3.12.4, VM and sharding
Also, to help isolate the component, could you answer these:

1. On a different volume with shard not enabled, do you see this issue?
2. On a plain 3-way replicated volume (no arbiter), do you see this issue?

On Tue, Jan 16, 2018 at 4:03 PM, Krutika Dhananjay <kdhananj at redhat.com> wrote:
> Please share the volume-info output and the logs under /var/log/glusterfs/
> from all your
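For question 1, whether sharding is enabled can be checked per volume; a sketch, with a hypothetical volume name:

    gluster volume get gvtest features.shard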
2018 Jan 16
2
Problem with Gluster 3.12.4, VM and sharding
Hi everyone. I've got a strange problem with a gluster setup: 3 nodes with CentOS 7.4, Gluster 3.12.4 from CentOS/Gluster repositories, and QEMU-KVM version 2.9.0 (compiled from RHEL sources). I'm running volumes in replica 3 arbiter 1 mode (but I've got a volume in "pure" replica 3 mode too). I've applied the "virt" group settings to my volumes since they
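The "virt" group settings mentioned are applied per volume; a sketch, with a hypothetical volume name:

    gluster volume set gv0 group virt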
2018 Jan 16
0
Problem with Gluster 3.12.4, VM and sharding
Please share the volume-info output and the logs under /var/log/glusterfs/ from all your nodes for investigating the issue.

-Krutika

On Tue, Jan 16, 2018 at 1:30 PM, Ing. Luca Lazzeroni - Trend Servizi Srl <luca at gvnet.it> wrote:
> Hi everyone.
>
> I've got a strange problem with a gluster setup: 3 nodes with CentOS 7.4,
> Gluster 3.12.4 from CentOS/Gluster
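A minimal way to gather what is being requested, run on each node (the archive name is arbitrary):

    gluster volume info > volume-info-$(hostname).txt
    tar czf glusterfs-logs-$(hostname).tar.gz /var/log/glusterfs/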
2017 Nov 09
2
Error logged in fuse-mount log file
Resending mail from another ID; not sure whether the mail reaches the mailing list.

---------- Forwarded message ----------
From: Amudhan P <amudhan83 at gmail.com>
Date: Tue, Nov 7, 2017 at 6:43 PM
Subject: error logged in fuse-mount log file
To: Gluster Users <gluster-users at gluster.org>

Hi, I am using
2017 Nov 10
0
Error logged in fuse-mount log file
Hi,

Comments inline.

Regards,
Nithya

On 9 November 2017 at 15:05, Amudhan Pandian <amudh_an at hotmail.com> wrote:
> Resending mail from another ID; not sure whether the mail reaches the mailing list.
>
> ---------- Forwarded message ----------
> From: *Amudhan P* <amudhan83 at gmail.com>
> Date: Tue, Nov 7, 2017 at 6:43 PM
> Subject: error logged in fuse-mount log
2017 Nov 13
2
Error logged in fuse-mount log file
Hi Nithya,

I have checked the gfid in all the bricks in the disperse set for the folder. It's all the same; there is no difference.

Regards,
Amudhan P

On Fri, Nov 10, 2017 at 9:02 AM, Nithya Balachandran <nbalacha at redhat.com> wrote:
> Hi,
>
> Comments inline.
>
> Regards,
> Nithya
>
> On 9 November 2017 at 15:05, Amudhan Pandian <amudh_an at hotmail.com> wrote:
>
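The per-brick gfid check described would look something like this (the brick path is hypothetical; run it on each brick host for the folder in question and compare the values):

    getfattr -n trusted.gfid -e hex /bricks/brick1/vol/path/to/folder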