Displaying 20 results from an estimated 1000 matches similar to: "Problem with Gluster 3.12.4, VM and sharding"
2018 Jan 19
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
After further testing (I'm trying to convince myself of Gluster's reliability
:-) I've found that with
performance.write-behind off
the VM works without problems. Now I'll try with write-behind on and
flush-behind on too.
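For reference, these are ordinary volume options; a minimal sketch of how they can be toggled (the volume name is a placeholder):

# disable write-behind on the volume hosting the VM images
gluster volume set <volname> performance.write-behind off
# re-enable it, together with flush-behind, for the next test
gluster volume set <volname> performance.write-behind on
gluster volume set <volname> performance.flush-behind on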
On 18/01/2018 13:30, Krutika Dhananjay wrote:
> Thanks for that input. Adding Niels since the issue is reproducible
> only with libgfapi.
>
>
2018 Jan 18
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
Thanks for that input. Adding Niels since the issue is reproducible only
with libgfapi.
-Krutika
On Thu, Jan 18, 2018 at 1:39 PM, Ing. Luca Lazzeroni - Trend Servizi Srl <
luca at gvnet.it> wrote:
> Another update.
>
> I've set up a replica 3 volume without sharding and tried to install a VM
> on a qcow2 volume on that device; however, the result is the same and the VM
>
2018 Jan 17
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
Hi,
after our IRC chat I've rebuilt a virtual machine with a FUSE-based
virtual disk. Everything worked flawlessly.
Now I'm sending you the output of the requested getfattr command on the
disk image:
# file: TestFUSE-vda.qcow2
trusted.afr.dirty=0x000000000000000000000000
trusted.gfid=0x40ffafbbe987445692bb31295fa40105
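For reference, output of this kind is produced by running getfattr directly against the image file on one of the bricks, along these lines (the brick path is illustrative):

getfattr -d -m . -e hex /bricks/<brick>/TestFUSE-vda.qcow2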
2018 Jan 17
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
I actually use FUSE and it works. If I try to use the "libgfapi" direct
interface to Gluster in qemu-kvm, the problem appears.
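For context, the two access paths differ only in how qemu references the disk; a rough sketch (hostname, volume and image names are illustrative):

# FUSE: the image is addressed through the glusterfs mount point
qemu-system-x86_64 ... -drive file=/mnt/gvtest/Test-vda.qcow2,format=qcow2,if=virtio
# libgfapi: the image is addressed directly via qemu's gluster block driver
qemu-system-x86_64 ... -drive file=gluster://gluster1/gvtest/Test-vda.qcow2,format=qcow2,if=virtio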
On 17/01/2018 11:35, Krutika Dhananjay wrote:
> Really? Then which protocol exactly do you see this issue with?
> libgfapi? NFS?
>
> -Krutika
>
> On Wed, Jan 17, 2018 at 3:59 PM, Ing. Luca Lazzeroni - Trend Servizi
> Srl <luca at
2018 Jan 18
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
Another update.
I've set up a replica 3 volume without sharding and tried to install a VM
on a qcow2 volume on that device; however, the result is the same and the
VM image has been corrupted at exactly the same point.
Here's the volume info of the created volume:
Volume Name: gvtest
Type: Replicate
Volume ID: e2ddf694-ba46-4bc7-bc9c-e30803374e9d
Status: Started
Snapshot Count: 0
Number
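For reference, a plain replica 3 volume like the one above is typically created along these lines (brick paths are illustrative):

gluster volume create gvtest replica 3 \
  gluster1:/bricks/brick1/gvtest \
  gluster2:/bricks/brick1/gvtest \
  gluster3:/bricks/brick1/gvtest
gluster volume start gvtest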
2018 Jan 17
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
Really? Then which protocol exactly do you see this issue with? libgfapi?
NFS?
-Krutika
On Wed, Jan 17, 2018 at 3:59 PM, Ing. Luca Lazzeroni - Trend Servizi Srl <
luca at gvnet.it> wrote:
> Of course. Here's the full log. Please note that in FUSE mode everything
> apparently works without problems. I've installed 4 VMs and updated them
> without problems.
>
>
>
2018 Jan 17
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
Of course. Here's the full log. Please note that in FUSE mode
everything apparently works without problems. I've installed 4 VMs and
updated them without problems.
On 17/01/2018 11:00, Krutika Dhananjay wrote:
>
>
> On Tue, Jan 16, 2018 at 10:47 PM, Ing. Luca Lazzeroni - Trend Servizi
> Srl <luca at gvnet.it <mailto:luca at gvnet.it>> wrote:
>
>
2018 Jan 17
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
On Tue, Jan 16, 2018 at 10:47 PM, Ing. Luca Lazzeroni - Trend Servizi Srl <
luca at gvnet.it> wrote:
> I've made the test with the raw image format (preallocated too) and the
> corruption problem is still there (but without errors in the bricks' log files).
>
> What does the "link" error in the bricks' log files mean?
>
> I've seen the source code looking for the
2018 Jan 16
0
Problem with Gluster 3.12.4, VM and sharding
Please share the volume-info output and the logs under /var/log/glusterfs/
from all your nodes for investigating the issue.
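For example, the requested information can be gathered with something like this on each node (the volume name is a placeholder):

gluster volume info <volname>
tar -czf /tmp/glusterfs-logs-$(hostname).tar.gz /var/log/glusterfs/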
-Krutika
On Tue, Jan 16, 2018 at 1:30 PM, Ing. Luca Lazzeroni - Trend Servizi Srl <
luca at gvnet.it> wrote:
> Hi everyone.
>
> I've got a strange problem with a gluster setup: 3 nodes with CentOS 7.4,
> Gluster 3.12.4 from CentOS/Gluster
2018 Jan 16
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
I've made the test with the raw image format (preallocated too) and the
corruption problem is still there (but without errors in the bricks' log files).
What does the "link" error in the bricks' log files mean?
I've looked through the source code for the lines where it happens, and it
seems to be a warning (it doesn't imply a failure).
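For context, the raw preallocated test mentioned above presumably used an invocation along these lines (the path is illustrative; the raw format accepts preallocation=off/falloc/full):

qemu-img create -f raw -o preallocation=full /mnt/gvtest/Test-vda.raw 20G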
On 16/01/2018 17:39, Ing. Luca Lazzeroni - Trend
2018 Jan 16
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
An update:
For my tests I've tried to create the VM volume as
qemu-img create -f qcow2 -o preallocation=full
gluster://gluster1/Test/Test-vda.img 20G
et voila!
No errors at all, neither in the bricks' log files (the "link failed" message
disappeared) nor in the VM (no corruption, and it installed successfully).
I'll do another test with a fully preallocated raw image.
On
2018 Jan 16
1
Problem with Gluster 3.12.4, VM and sharding
Also to help isolate the component, could you answer these:
1. on a different volume with shard not enabled, do you see this issue?
2. on a plain 3-way replicated volume (no arbiter), do you see this issue?
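For reference, a rough sketch of these two checks (volume names, hosts and brick paths are placeholders):

# 1. confirm whether sharding is enabled on the volume under test
gluster volume get <volname> features.shard
# 2. create a plain 3-way replicated volume (no arbiter) and retry the VM install on it
gluster volume create testrep3 replica 3 host1:/bricks/b1/testrep3 host2:/bricks/b1/testrep3 host3:/bricks/b1/testrep3
gluster volume start testrep3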
On Tue, Jan 16, 2018 at 4:03 PM, Krutika Dhananjay <kdhananj at redhat.com>
wrote:
> Please share the volume-info output and the logs under /var/log/glusterfs/
> from all your
2018 Jan 16
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
I've just done all the steps to reproduce the problem.
The VM volume has been created via "qemu-img create -f qcow2
Test-vda2.qcow2 20G" on the gluster volume mounted via FUSE. I've also tried
to create the volume with preallocated metadata, which pushes the
problem a bit further away (in time). The volume is a replica 3 arbiter 1
volume hosted on XFS bricks.
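For completeness, the preallocated-metadata variant mentioned above would look roughly like this, run against the FUSE mount (the path is illustrative):

qemu-img create -f qcow2 -o preallocation=metadata /mnt/gvtest/Test-vda2.qcow2 20G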
Here are the
2018 Jan 16
2
Strange messages in mnt-xxx.log
Hi,
I'm testing Gluster 3.12.4 and, by inspecting the log file
/var/log/glusterfs/mnt-gv0.log (gv0 is the volume name), I found many
lines saying:
[2018-01-15 09:45:41.066914] I [MSGID: 109063]
[dht-layout.c:716:dht_layout_normalize] 0-gv0-dht: Found anomalies in
(null) (gfid = 00000000-0000-0000-0000-000000000000). Holes=1 overlaps=0
[2018-01-15 09:45:45.755021] I [MSGID: 109063]
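As a rough diagnostic sketch (brick and directory paths are placeholders), the layout xattr that DHT is complaining about can be inspected directly on a brick:

getfattr -n trusted.glusterfs.dht -e hex /bricks/<brick>/gv0/<directory>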
2018 Jan 17
1
[Possibile SPAM] Re: Strange messages in mnt-xxx.log
Here's the volume info:
Volume Name: gv2a2
Type: Replicate
Volume ID: 83c84774-2068-4bfc-b0b9-3e6b93705b9f
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: gluster1:/bricks/brick2/gv2a2
Brick2: gluster3:/bricks/brick3/gv2a2
Brick3: gluster2:/bricks/arbiter_brick_gv2a2/gv2a2 (arbiter)
Options Reconfigured:
storage.owner-gid: 107
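For context, these ownership options are normally set so that qemu can access the image files; a sketch, assuming gid 107 (and a matching uid) belongs to the qemu/kvm account on these hosts:

gluster volume set gv2a2 storage.owner-uid 107
gluster volume set gv2a2 storage.owner-gid 107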
2018 Jan 17
0
Strange messages in mnt-xxx.log
Hi,
On 16 January 2018 at 18:56, Ing. Luca Lazzeroni - Trend Servizi Srl <
luca at trendservizi.it> wrote:
> Hi,
>
> I'm testing Gluster 3.12.4 and, by inspecting the log file
> /var/log/glusterfs/mnt-gv0.log (gv0 is the volume name), I found many lines
> saying:
>
> [2018-01-15 09:45:41.066914] I [MSGID: 109063]
> [dht-layout.c:716:dht_layout_normalize]
2018 Jan 23
1
[Possibile SPAM] Re: Strange messages in mnt-xxx.log
On 17 January 2018 at 16:04, Ing. Luca Lazzeroni - Trend Servizi Srl <
luca at trendservizi.it> wrote:
> Here's the volume info:
>
>
> Volume Name: gv2a2
> Type: Replicate
> Volume ID: 83c84774-2068-4bfc-b0b9-3e6b93705b9f
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x (2 + 1) = 3
> Transport-type: tcp
> Bricks:
> Brick1:
2018 Mar 26
0
Sharding problem - multiple shard copies with mismatching gfids
The gfid mismatch here is between the shard and its "link-to" file, the
creation of which happens at a layer below that of shard translator on the
stack.
Adding DHT devs to take a look.
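For anyone following along, a hedged sketch of how the copies can be compared on the bricks (brick path, base gfid and shard index are placeholders):

# run on each brick that holds a copy of the affected shard
getfattr -d -m . -e hex /bricks/<brick>/<volname>/.shard/<base-gfid>.<index>
# trusted.gfid should be identical on every copy; a zero-byte copy carrying
# trusted.glusterfs.dht.linkto is the DHT "link-to" file referred to above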
-Krutika
On Mon, Mar 26, 2018 at 1:09 AM, Ian Halliday <ihalliday at ndevix.com> wrote:
> Hello all,
>
> We are having a rather interesting problem with one of our VM storage
>
2018 Mar 26
3
Sharding problem - multiple shard copies with mismatching gfids
On Mon, Mar 26, 2018 at 12:40 PM, Krutika Dhananjay <kdhananj at redhat.com>
wrote:
> The gfid mismatch here is between the shard and its "link-to" file, the
> creation of which happens at a layer below that of shard translator on the
> stack.
>
> Adding DHT devs to take a look.
>
Thanks, Krutika. I assume shard doesn't do any dentry operations like
rename,
2018 Mar 25
2
Sharding problem - multiple shard copies with mismatching gfids
Hello all,
We are having a rather interesting problem with one of our VM storage
systems. The GlusterFS client is throwing errors relating to GFID
mismatches. We traced this down to multiple shards being present on the
gluster nodes, with different gfids.
Hypervisor gluster mount log:
[2018-03-25 18:54:19.261733] E [MSGID: 133010]
[shard.c:1724:shard_common_lookup_shards_cbk]
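A hedged sketch of how the duplicate shard copies can be located (run on every node; the brick root and shard name are placeholders):

find /bricks -path '*/.shard/<base-gfid>.*' -exec getfattr -n trusted.gfid -e hex {} \;
# copies of the same shard that report different trusted.gfid values are the mismatch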