similar to: user added to a local group doesn't get permissions

Displaying 20 results from an estimated 2000 matches similar to: "user added to a local group doesn't get permissions"

2018 Jan 19
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
After another test (I'm trying to convince myself about Gluster reliability :-) I've found that with performance.write-behind off the VM works without problems. Now I'll try with write-behind on and flush-behind on too. On 18/01/2018 13:30, Krutika Dhananjay wrote: > Thanks for that input. Adding Niels since the issue is reproducible > only with libgfapi. > >
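A minimal sketch of how those tests could be set up, assuming the volume is named "gvtest" as in the related thread (the option names are standard Gluster volume tunables):
    gluster volume set gvtest performance.write-behind off   # configuration that worked in the test above
    gluster volume set gvtest performance.write-behind on    # follow-up test: write-behind back on...
    gluster volume set gvtest performance.flush-behind on    # ...together with flush-behind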
2018 Jan 18
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
Thanks for that input. Adding Niels since the issue is reproducible only with libgfapi. -Krutika On Thu, Jan 18, 2018 at 1:39 PM, Ing. Luca Lazzeroni - Trend Servizi Srl < luca at gvnet.it> wrote: > Another update. > > I've set up a replica 3 volume without sharding and tried to install a VM > on a qcow2 image on that volume; however, the result is the same and the VM >
2018 Jan 18
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
Another update. I've set up a replica 3 volume without sharding and tried to install a VM on a qcow2 image on that volume; however, the result is the same and the VM image has been corrupted, at exactly the same point. Here's the volume info of the created volume: Volume Name: gvtest Type: Replicate Volume ID: e2ddf694-ba46-4bc7-bc9c-e30803374e9d Status: Started Snapshot Count: 0 Number
2018 Jan 17
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
Hi, after our IRC chat I've rebuilt a virtual machine with a FUSE-based virtual disk. Everything worked flawlessly. Now I'm sending you the output of the requested getfattr command on the disk image: # file: TestFUSE-vda.qcow2 trusted.afr.dirty=0x000000000000000000000000 trusted.gfid=0x40ffafbbe987445692bb31295fa40105
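For reference, a sketch of the kind of getfattr invocation that produces output in that form, assuming it is run as root against the image file on a brick (the brick path below is a hypothetical placeholder):
    getfattr -d -m . -e hex /path/to/brick/TestFUSE-vda.qcow2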
2007 Nov 13
1
[Fwd: Re: VoiceMail hangup]
Hi Neofita, Doug and all. I think I have the same problem, but I don't know if it's related to the bug suggested below. Let me describe the behaviour: - I dial the voicemail extension. - I hear: "You have 1 new message. Press 1 for new messages, press 2 for... or # to exit" (I listen to the complete message, or most of it) - I press 1 - I can hear the first recorded message.
2018 Jan 17
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
I actually use FUSE and it works. If I try to use the "libgfapi" direct interface to Gluster in qemu-kvm, the problem appears. On 17/01/2018 11:35, Krutika Dhananjay wrote: > Really? Then which protocol exactly do you see this issue with? > libgfapi? NFS? > > -Krutika > > On Wed, Jan 17, 2018 at 3:59 PM, Ing. Luca Lazzeroni - Trend Servizi > Srl <luca at
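A sketch of the relevant -drive arguments for the two attachment methods, assuming host "gluster1", volume "gvtest" and a FUSE mount at /mnt/gvtest (the names are illustrative, taken from this thread):
    # disk specification via the FUSE mount (works in the tests above):
    -drive file=/mnt/gvtest/Test-vda.qcow2,format=qcow2,if=virtio
    # disk specification via libgfapi (where the corruption appears):
    -drive file=gluster://gluster1/gvtest/Test-vda.qcow2,format=qcow2,if=virtio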
2018 Jan 17
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
Really? Then which protocol exactly do you see this issue with? libgfapi? NFS? -Krutika On Wed, Jan 17, 2018 at 3:59 PM, Ing. Luca Lazzeroni - Trend Servizi Srl < luca at gvnet.it> wrote: > Of course. Here's the full log. Please note that in FUSE mode everything > apparently works without problems. I've installed 4 VMs and updated them > without problems. > > >
2018 Jan 23
1
[Possibile SPAM] Re: Strange messages in mnt-xxx.log
On 17 January 2018 at 16:04, Ing. Luca Lazzeroni - Trend Servizi Srl < luca at trendservizi.it> wrote: > Here's the volume info: > > > Volume Name: gv2a2 > Type: Replicate > Volume ID: 83c84774-2068-4bfc-b0b9-3e6b93705b9f > Status: Started > Snapshot Count: 0 > Number of Bricks: 1 x (2 + 1) = 3 > Transport-type: tcp > Bricks: > Brick1:
2018 Jan 17
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
Of course. Here's the full log. Please note that in FUSE mode everything apparently works without problems. I've installed 4 VMs and updated them without problems. On 17/01/2018 11:00, Krutika Dhananjay wrote: > > > On Tue, Jan 16, 2018 at 10:47 PM, Ing. Luca Lazzeroni - Trend Servizi > Srl <luca at gvnet.it <mailto:luca at gvnet.it>> wrote: > >
2018 Jan 17
1
[Possibile SPAM] Re: Strange messages in mnt-xxx.log
Here's the volume info: Volume Name: gv2a2 Type: Replicate Volume ID: 83c84774-2068-4bfc-b0b9-3e6b93705b9f Status: Started Snapshot Count: 0 Number of Bricks: 1 x (2 + 1) = 3 Transport-type: tcp Bricks: Brick1: gluster1:/bricks/brick2/gv2a2 Brick2: gluster3:/bricks/brick3/gv2a2 Brick3: gluster2:/bricks/arbiter_brick_gv2a2/gv2a2 (arbiter) Options Reconfigured: storage.owner-gid: 107
2018 Jan 17
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
On Tue, Jan 16, 2018 at 10:47 PM, Ing. Luca Lazzeroni - Trend Servizi Srl < luca at gvnet.it> wrote: > I've run the test with the raw image format (preallocated too) and the > corruption problem is still there (but without errors in the bricks' log files). > > What does the "link" error in the bricks' log files mean? > > I've looked at the source code for the
2018 Jan 16
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
I've run the test with the raw image format (preallocated too) and the corruption problem is still there (but without errors in the bricks' log files). What does the "link" error in the bricks' log files mean? I've looked at the source code for the lines where it happens, and it seems to be a warning (it doesn't imply a failure). On 16/01/2018 17:39, Ing. Luca Lazzeroni - Trend
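A sketch of how the preallocated raw-image variant of that test could be created, reusing the host and volume names that appear later in this thread (preallocation=full is also accepted by qemu-img for raw images):
    qemu-img create -f raw -o preallocation=full gluster://gluster1/Test/Test-vda.img 20G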
2018 Jan 17
0
Strange messages in mnt-xxx.log
Hi, On 16 January 2018 at 18:56, Ing. Luca Lazzeroni - Trend Servizi Srl < luca at trendservizi.it> wrote: > Hi, > > I'm testing gluster 3.12.4 and, by inspecting log files > /var/log/glusterfs/mnt-gv0.log (gv0 is the volume name), I found many lines > saying: > > [2018-01-15 09:45:41.066914] I [MSGID: 109063] > [dht-layout.c:716:dht_layout_normalize]
2003 Apr 17
1
An OK too much when starting smb service
Hi, I've got a question, mostly out of curiosity, because my Mandrake Linux 9.0 + Samba 2.2.8a is actually working fine. Whenever I (re)start Samba by issuing "service smb start", I get the following output: [root@localhost marco]# service smb start Avvio servizi SMB: [ OK ] Avvio servizi NMB:
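"Avvio servizi SMB/NMB" is the Italian-locale initscript message for "Starting SMB/NMB services". One way to trace where the extra [ OK ] comes from, as a sketch assuming a standard Mandrake /etc/init.d/smb initscript, is to run the script through the shell tracer:
    sh -x /etc/init.d/smb start 2>&1 | less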
2018 Jan 16
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
An update: for my tests I've tried to create the VM image as qemu-img create -f qcow2 -o preallocation=full gluster://gluster1/Test/Test-vda.img 20G et voilà! No errors at all, neither in the bricks' log files (the "link failed" message disappeared) nor in the VM (no corruption, and it installed successfully). I'll do another test with a fully preallocated raw image. On
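As a follow-up check, a sketch of inspecting the created image directly over libgfapi (qemu-img accepts the same gluster:// URI used above):
    qemu-img info gluster://gluster1/Test/Test-vda.img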
2018 Jan 16
0
Problem with Gluster 3.12.4, VM and sharding
Please share the volume-info output and the logs under /var/log/glusterfs/ from all your nodes for investigating the issue. -Krutika On Tue, Jan 16, 2018 at 1:30 PM, Ing. Luca Lazzeroni - Trend Servizi Srl < luca at gvnet.it> wrote: > Hi everyone. > > I've got a strange problem with a Gluster setup: 3 nodes with CentOS 7.4, > Gluster 3.12.4 from the CentOS/Gluster
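A minimal sketch of one way to collect what was requested on each node (the output file names are arbitrary):
    gluster volume info > volume-info-$(hostname).txt
    tar czf glusterfs-logs-$(hostname).tar.gz /var/log/glusterfs/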
2018 Jan 16
2
Strange messages in mnt-xxx.log
Hi, I'm testing gluster 3.12.4 and, by inspecting log files /var/log/glusterfs/mnt-gv0.log (gv0 is the volume name), I found many lines saying: [2018-01-15 09:45:41.066914] I [MSGID: 109063] [dht-layout.c:716:dht_layout_normalize] 0-gv0-dht: Found anomalies in (null) (gfid = 00000000-0000-0000-0000-000000000000). Holes=1 overlaps=0 [2018-01-15 09:45:45.755021] I [MSGID: 109063]
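For reference, the FUSE client log name follows the mount point (slashes become dashes), so a log called mnt-gv0.log corresponds to a mount like the sketch below (the server name is hypothetical):
    mount -t glusterfs gluster1:/gv0 /mnt/gv0
    # client log: /var/log/glusterfs/mnt-gv0.log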
2018 Jan 16
2
Problem with Gluster 3.12.4, VM and sharding
Hi everyone. I've got a strange problem with a Gluster setup: 3 nodes with CentOS 7.4, Gluster 3.12.4 from the CentOS/Gluster repositories, QEMU-KVM version 2.9.0 (compiled from RHEL sources). I'm running volumes in replica 3 arbiter 1 mode (but I've also got a volume in "pure" replica 3 mode). I've applied the "virt" group settings to my volumes since they
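A sketch of how such a volume is typically created and tuned, reusing the brick paths shown elsewhere on this page (a replica 3 arbiter 1 set plus the stock "virt" option group; the exact names are illustrative):
    gluster volume create gv2a2 replica 3 arbiter 1 \
        gluster1:/bricks/brick2/gv2a2 \
        gluster3:/bricks/brick3/gv2a2 \
        gluster2:/bricks/arbiter_brick_gv2a2/gv2a2
    gluster volume set gv2a2 group virt
    gluster volume start gv2a2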
2023 Mar 21
1
How to configure?
I killed glfsheal; after a day there were 218 processes, and then they got killed by the OOM killer during the weekend. Now there are no active processes. Trying to run "heal info" reports lots of files quite quickly but does not spawn any glfsheal process, and neither does restarting glusterd. Is there some way to run glfsheal selectively, to fix one brick at a time? Diego On 21/03/2023 01:21,
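A sketch of two quick checks, assuming a recent GlusterFS release ("info summary" gives per-brick counts without listing every entry; the volume name is a placeholder):
    pgrep -c glfsheal                            # count leftover glfsheal processes
    gluster volume heal <VOLNAME> info summary   # per-brick pending-heal counts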
2023 Mar 21
1
How to configure?
I have no clue. Have you checked for errors in the logs? Maybe you'll find something useful. Best Regards, Strahil Nikolov On Tue, Mar 21, 2023 at 9:56, Diego Zuccato <diego.zuccato at unibo.it> wrote: Killed glfsheal, after a day there were 218 processes, then they got killed by OOM during the weekend. Now there are no processes active. Trying to run "heal info" reports
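Following up on "check for errors in the logs": Gluster log lines carry a severity letter after the timestamp (the " I " visible in earlier excerpts on this page), so a quick sketch for pulling only error-level lines is:
    grep -n ' E \[' /var/log/glusterfs/*.log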