search for: samppah

Displaying 16 results from an estimated 16 matches for "samppah".

2018 Jan 29
2
Stale locks on shards
On 29 Jan 2018 10:50 am, "Samuli Heinonen" <samppah at neutraali.net> wrote: Hi! Yes, thank you for asking. I found out this line in the production environment: lgetxattr("/tmp/zone2-ssd1-vmstor1.s6jvPu//.shard/f349ffbd- a423-4fb2-b83c-2d1d5e78e1fb.32", "glusterfs.clrlk.tinode.kblocked", 0x7f2d7c4379f0, 4096) = -1 EPERM (Ope...
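
A note on the excerpt above: the failing lgetxattr appears to be the request that "gluster volume clear-locks ... kind blocked inode" issues internally through a temporary mount. A rough way to reproduce it by hand, assuming the volume name inferred from the temporary mount path and a placeholder server and mount point, is:

    mount -t glusterfs server1:/zone2-ssd1-vmstor1 /mnt/vmstor1
    # Request the same clear-locks virtual xattr on the shard from the trace;
    # an EPERM here matches the failure shown above.
    getfattr -n glusterfs.clrlk.tinode.kblocked \
        /mnt/vmstor1/.shard/f349ffbd-a423-4fb2-b83c-2d1d5e78e1fb.32
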
2018 Jan 29
0
Stale locks on shards
Pranith Kumar Karampuri kirjoitti 29.01.2018 07:32: > On 29 Jan 2018 10:50 am, "Samuli Heinonen" <samppah at neutraali.net> > wrote: > >> Hi! >> >> Yes, thank you for asking. I found out this line in the production >> environment: >> > lgetxattr("/tmp/zone2-ssd1-vmstor1.s6jvPu//.shard/f349ffbd-a423-4fb2-b83c-2d1d5e78e1fb.32", >> "gluster...
2018 Jan 29
0
Stale locks on shards
Hi, Did you find the command from strace? On 25 Jan 2018 1:52 pm, "Pranith Kumar Karampuri" <pkarampu at redhat.com> wrote: > > > On Thu, Jan 25, 2018 at 1:49 PM, Samuli Heinonen <samppah at neutraali.net> > wrote: > >> Pranith Kumar Karampuri kirjoitti 25.01.2018 07:09: >> >>> On Thu, Jan 25, 2018 at 2:27 AM, Samuli Heinonen >>> <samppah at neutraali.net> wrote: >>> >>> Hi! >>>> >>>> Thank you ve...
2018 Jan 25
2
Stale locks on shards
On Thu, Jan 25, 2018 at 1:49 PM, Samuli Heinonen <samppah at neutraali.net> wrote: > Pranith Kumar Karampuri kirjoitti 25.01.2018 07:09: > >> On Thu, Jan 25, 2018 at 2:27 AM, Samuli Heinonen >> <samppah at neutraali.net> wrote: >> >> Hi! >>> >>> Thank you very much for your help so far. Could you p...
2018 Jan 25
2
Stale locks on shards
On Thu, Jan 25, 2018 at 2:27 AM, Samuli Heinonen <samppah at neutraali.net> wrote: > Hi! > > Thank you very much for your help so far. Could you please tell an example > command how to use aux-gid-mount to remove locks? "gluster vol clear-locks" > seems to mount volume by itself. > You are correct, sorry, this was implemen...
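
For the example command being asked about here, the documented clear-locks form and the aux-gfid-mount option look roughly as follows; volume name, path and server are placeholders, and the GFID is the one appearing in the trace earlier, used only for illustration:

    # Clear granted inode locks on a file addressed by its path inside the volume:
    gluster volume clear-locks VOLNAME /path/to/file kind granted inode 0,0-0

    # To reach a file by GFID instead of path, mount with aux-gfid-mount
    # (the option referred to above as "aux-gid-mount"):
    mount -t glusterfs -o aux-gfid-mount server1:/VOLNAME /mnt/aux
    stat /mnt/aux/.gfid/f349ffbd-a423-4fb2-b83c-2d1d5e78e1fb
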
2018 Jan 25
0
Stale locks on shards
Pranith Kumar Karampuri kirjoitti 25.01.2018 07:09: > On Thu, Jan 25, 2018 at 2:27 AM, Samuli Heinonen > <samppah at neutraali.net> wrote: > >> Hi! >> >> Thank you very much for your help so far. Could you please tell an >> example command how to use aux-gid-mount to remove locks? "gluster >> vol clear-locks" seems to mount volume by itself. > > You are co...
2018 Jan 24
0
Stale locks on shards
...gid-mount to remove locks? "gluster vol clear-locks" seems to mount volume by itself. Best regards, Samuli Heinonen > Pranith Kumar Karampuri <mailto:pkarampu at redhat.com> > 23 January 2018 at 10.30 > > > On Tue, Jan 23, 2018 at 1:38 PM, Samuli Heinonen > <samppah at neutraali.net <mailto:samppah at neutraali.net>> wrote: > > Pranith Kumar Karampuri kirjoitti 23.01.2018 09:34: > > On Mon, Jan 22, 2018 at 12:33 AM, Samuli Heinonen > <samppah at neutraali.net <mailto:samppah at neutraali.net>> wrote: >...
2012 Dec 15
3
IRC channel stuck on invite only
Hi, Any ops here? irc channel seems broken. Ta, Andrew
2018 Jan 23
2
Stale locks on shards
On Tue, Jan 23, 2018 at 1:38 PM, Samuli Heinonen <samppah at neutraali.net> wrote: > Pranith Kumar Karampuri kirjoitti 23.01.2018 09:34: > >> On Mon, Jan 22, 2018 at 12:33 AM, Samuli Heinonen >> <samppah at neutraali.net> wrote: >> >> Hi again, >>> >>> here is more information regarding issue descr...
2018 Jan 21
0
Stale locks on shards
...:a52055bd-e2e9-42dd-92a3-e96b693bcafe> (a52055bd-e2e9-42dd-92a3-e96b693bcafe) ==> (Operation not permitted) [Operation not permitted] Is there anyways to force self heal to stop? Any help would be very much appreciated :) Best regards, Samuli Heinonen > Samuli Heinonen <mailto:samppah at neutraali.net> > 20 January 2018 at 21.57 > Hi all! > > One hypervisor on our virtualization environment crashed and now some > of the VM images cannot be accessed. After investigation we found out > that there was lots of images that still had active lock on crashed >...
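
Regarding the question about forcing self-heal to stop: at the 3.8 level the usual knobs are volume options rather than a dedicated command. A hedged sketch, with the volume name as a placeholder:

    # Stop the self-heal daemon crawls and client-side healing while debugging:
    gluster volume set VOLNAME cluster.self-heal-daemon off
    gluster volume set VOLNAME cluster.data-self-heal off
    gluster volume set VOLNAME cluster.metadata-self-heal off
    gluster volume set VOLNAME cluster.entry-self-heal off
    # Set the same options back to "on" once the stale locks are cleared.
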
2018 Jan 23
3
Stale locks on shards
On Mon, Jan 22, 2018 at 12:33 AM, Samuli Heinonen <samppah at neutraali.net> wrote: > Hi again, > > here is more information regarding issue described earlier > > It looks like self healing is stuck. According to "heal statistics" crawl > began at Sat Jan 20 12:56:19 2018 and it's still going on (It's around Sun >...
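
The "heal statistics" output mentioned here comes from the heal CLI; the related commands for watching crawl progress (volume name is a placeholder) are:

    gluster volume heal VOLNAME statistics            # crawl start/end times and counts
    gluster volume heal VOLNAME statistics heal-count # entries still pending heal
    gluster volume heal VOLNAME info                  # files currently awaiting heal
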
2018 Jan 20
3
Stale locks on shards
Hi all! One hypervisor on our virtualization environment crashed and now some of the VM images cannot be accessed. After investigation we found out that there was lots of images that still had active lock on crashed hypervisor. We were able to remove locks from "regular files", but it doesn't seem possible to remove locks from shards. We are running GlusterFS 3.8.15 on all
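
To see which client still holds locks like the ones described in this post, a statedump of the bricks is the usual starting point. A minimal sketch, assuming the default dump directory and a placeholder volume name:

    gluster volume statedump VOLNAME
    # On each brick node, look through the dump files for inode lock entries
    # still held (granted) by the crashed hypervisor's connection:
    grep -A3 ACTIVE /var/run/gluster/*.dump.*
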
2013 Sep 16
1
Gluster 3.4 QEMU and Permission Denied Errors
Hey List, I'm trying to test out using Gluster 3.4 for virtual machine disks. My enviroment consists of two Fedora 19 hosts with gluster and qemu/kvm installed. I have a single volume on gluster called vmdata that contains my qcow2 formated image created like this: qemu-img create -f qcow2 gluster://localhost/vmdata/test1.qcow 8G I'm able to boot my created virtual machine but in the
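
The permission-denied errors described here are commonly caused by gluster rejecting connections from unprivileged ports when qemu talks to the volume via libgfapi as a non-root user. A hedged sketch of the usual settings, using the "vmdata" volume from the post; the uid/gid of 107 for the qemu user is a Fedora-specific assumption:

    gluster volume set vmdata server.allow-insecure on
    # In /etc/glusterfs/glusterd.vol on every node, add the line below and
    # restart glusterd:
    #   option rpc-auth-allow-insecure on
    gluster volume set vmdata storage.owner-uid 107
    gluster volume set vmdata storage.owner-gid 107
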
2018 Jan 23
0
Stale locks on shards
Pranith Kumar Karampuri kirjoitti 23.01.2018 09:34: > On Mon, Jan 22, 2018 at 12:33 AM, Samuli Heinonen > <samppah at neutraali.net> wrote: > >> Hi again, >> >> here is more information regarding issue described earlier >> >> It looks like self healing is stuck. According to "heal statistics" >> crawl began at Sat Jan 20 12:56:19 2018 and it's still goi...
2013 Mar 20
2
Geo-replication broken in 3.4 alpha2?
Dear all, I'm running GlusterFS 3.4 alpha2 together with oVirt 3.2. This is solely a test system and it doesn't have much data or anything important in it. Currently it has only 2 VM's running and disk usage is around 15 GB. I have been trying to set up a geo-replication for disaster recovery testing. For geo-replication I did following: All machines are running CentOS 6.4 and using
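
For context, the 3.4-era geo-replication workflow referred to here looked roughly like the following; master volume name, slave host and slave directory are placeholders:

    gluster volume geo-replication master-vol slave-host:/data/geo-rep start
    gluster volume geo-replication master-vol slave-host:/data/geo-rep status
    gluster volume geo-replication master-vol slave-host:/data/geo-rep config
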
2010 May 04
1
Posix warning : Access to ... is crossing device
I have a distributed/replicated setup with Glusterfs 3.0.2, that I'm testing on 4 servers, each with access to /mnt/gluster (which consists of all directories /mnt/data01 - data24) on each server. I'm using configs I built from volgen, but every time I access a file (via an 'ls -l') for the first time, I get all of these messages in my logs on each server: [2010-05-04 10:50:30] W
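
The "crossing device" warning comes from the storage/posix translator when a brick directory contains mount points of several underlying filesystems, as with the /mnt/data01 - data24 layout described here. In 3.0-era volfiles this was typically addressed with the span-devices option; a sketch only, with the device count assumed from the post:

    volume posix
      type storage/posix
      option directory /mnt/gluster
      option span-devices 24
    end-volume
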