similar to: Stale locks on shards

Displaying 20 results from an estimated 700 matches similar to: "Stale locks on shards"

2018 Jan 25
2
Stale locks on shards
On Thu, Jan 25, 2018 at 2:27 AM, Samuli Heinonen <samppah at neutraali.net> wrote: > Hi! > > Thank you very much for your help so far. Could you please give an example > command for using aux-gid-mount to remove locks? "gluster vol clear-locks" > seems to mount the volume by itself. > You are correct, sorry, this was implemented around 7 years back and I forgot
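For reference, the CLI form of the lock-clearing command under discussion looks roughly like this; the volume name and shard path below are invented for illustration, and as the thread notes, the CLI performs its own temporary mount:

    # Syntax: gluster volume clear-locks <VOLNAME> <path> kind {blocked|granted|all} \
    #         {inode [range] | entry [basename] | posix [range]}
    # Clear granted inode locks on one shard (names are hypothetical):
    gluster volume clear-locks vmstor1 /.shard/<gfid>.32 kind granted inode 0,0-0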
2018 Jan 23
2
Stale locks on shards
On Tue, Jan 23, 2018 at 1:38 PM, Samuli Heinonen <samppah at neutraali.net> wrote: > Pranith Kumar Karampuri wrote on 23.01.2018 09:34: >> On Mon, Jan 22, 2018 at 12:33 AM, Samuli Heinonen >> <samppah at neutraali.net> wrote: >> >> Hi again, >>> >>> here is more information regarding the issue described earlier >>> >>>
2018 Jan 25
2
Stale locks on shards
On Thu, Jan 25, 2018 at 1:49 PM, Samuli Heinonen <samppah at neutraali.net> wrote: > Pranith Kumar Karampuri wrote on 25.01.2018 07:09: >> On Thu, Jan 25, 2018 at 2:27 AM, Samuli Heinonen >> <samppah at neutraali.net> wrote: >> >> Hi! >>> >>> Thank you very much for your help so far. Could you please give an >>> example
2018 Jan 23
3
Stale locks on shards
On Mon, Jan 22, 2018 at 12:33 AM, Samuli Heinonen <samppah at neutraali.net> wrote: > Hi again, > > here is more information regarding the issue described earlier > > It looks like self healing is stuck. According to "heal statistics", the crawl > began at Sat Jan 20 12:56:19 2018 and it's still going on (it's around Sun > Jan 21 20:30 when writing this).
2018 Jan 21
0
Stale locks on shards
Hi again, here is more information regarding the issue described earlier. It looks like self healing is stuck. According to "heal statistics", the crawl began at Sat Jan 20 12:56:19 2018 and it's still going on (it's around Sun Jan 21 20:30 when writing this). However, glustershd.log says that the last heal was completed at "2018-01-20 11:00:13.090697" (which is 13:00 UTC+2).
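The "heal statistics" output cited here comes from the self-heal CLI; a quick sketch with an assumed volume name (subcommands vary somewhat by GlusterFS version):

    gluster volume heal vmstor1 statistics             # crawl history and timings
    gluster volume heal vmstor1 statistics heal-count  # number of entries still pending
    gluster volume heal vmstor1 info                   # files/gfids awaiting heal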
2018 Jan 29
2
Stale locks on shards
On 29 Jan 2018 10:50 am, "Samuli Heinonen" <samppah at neutraali.net> wrote: Hi! Yes, thank you for asking. I found this line in the production environment: lgetxattr("/tmp/zone2-ssd1-vmstor1.s6jvPu//.shard/f349ffbd-a423-4fb2-b83c-2d1d5e78e1fb.32", "glusterfs.clrlk.tinode.kblocked", 0x7f2d7c4379f0, 4096) = -1 EPERM (Operation not permitted) I was
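The strace line shows that clear-locks is implemented as a virtual getxattr whose name encodes the lock type and kind (glusterfs.clrlk.t<type>.k<kind>). A hedged sketch of replaying it by hand with getfattr, reusing the mount point and shard name from the trace; the EPERM above suggests it may need to run as root on a suitably mounted client:

    getfattr -n glusterfs.clrlk.tinode.kblocked \
        /tmp/zone2-ssd1-vmstor1.s6jvPu/.shard/f349ffbd-a423-4fb2-b83c-2d1d5e78e1fb.32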
2018 Jan 24
0
Stale locks on shards
Hi! Thank you very much for your help so far. Could you please give an example command for using aux-gid-mount to remove locks? "gluster vol clear-locks" seems to mount the volume by itself. Best regards, Samuli Heinonen > Pranith Kumar Karampuri <mailto:pkarampu at redhat.com> > 23 January 2018 at 10.30 > > > On Tue, Jan 23, 2018 at 1:38 PM, Samuli Heinonen >
2018 Jan 25
0
Stale locks on shards
Pranith Kumar Karampuri wrote on 25.01.2018 07:09: > On Thu, Jan 25, 2018 at 2:27 AM, Samuli Heinonen > <samppah at neutraali.net> wrote: > >> Hi! >> >> Thank you very much for your help so far. Could you please give an >> example command for using aux-gid-mount to remove locks? "gluster >> vol clear-locks" seems to mount the volume by itself.
2018 Jan 29
0
Stale locks on shards
Hi, Did you find the command from strace? On 25 Jan 2018 1:52 pm, "Pranith Kumar Karampuri" <pkarampu at redhat.com> wrote: > > > On Thu, Jan 25, 2018 at 1:49 PM, Samuli Heinonen <samppah at neutraali.net> > wrote: > >> Pranith Kumar Karampuri kirjoitti 25.01.2018 07:09: >> >>> On Thu, Jan 25, 2018 at 2:27 AM, Samuli Heinonen
2018 Jan 29
0
Stale locks on shards
Pranith Kumar Karampuri wrote on 29.01.2018 07:32: > On 29 Jan 2018 10:50 am, "Samuli Heinonen" <samppah at neutraali.net> > wrote: > >> Hi! >> >> Yes, thank you for asking. I found this line in the production >> environment: >> > lgetxattr("/tmp/zone2-ssd1-vmstor1.s6jvPu//.shard/f349ffbd-a423-4fb2-b83c-2d1d5e78e1fb.32",
2018 Jan 23
0
Stale locks on shards
Pranith Kumar Karampuri wrote on 23.01.2018 09:34: > On Mon, Jan 22, 2018 at 12:33 AM, Samuli Heinonen > <samppah at neutraali.net> wrote: > >> Hi again, >> >> here is more information regarding the issue described earlier >> >> It looks like self healing is stuck. According to "heal statistics", >> the crawl began at Sat Jan 20 12:56:19 2018
2013 Mar 20
2
Geo-replication broken in 3.4 alpha2?
Dear all, I'm running GlusterFS 3.4 alpha2 together with oVirt 3.2. This is solely a test system and it doesn't have much data or anything important in it. Currently it has only 2 VMs running and disk usage is around 15 GB. I have been trying to set up geo-replication for disaster recovery testing. For geo-replication I did the following: All machines are running CentOS 6.4 and using
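For the 3.4-era CLI, a geo-replication session between a master volume and a remote directory was started roughly like this; the volume name, slave host, and path below are placeholders:

    # Master volume "vmdata", slave host "backup1" (all names assumed):
    gluster volume geo-replication vmdata backup1:/data/georep start
    gluster volume geo-replication vmdata backup1:/data/georep status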
2013 Sep 16
1
Gluster 3.4 QEMU and Permission Denied Errors
Hey List, I'm trying to test out using Gluster 3.4 for virtual machine disks. My environment consists of two Fedora 19 hosts with gluster and qemu/kvm installed. I have a single volume on gluster called vmdata that contains my qcow2 formatted image created like this: qemu-img create -f qcow2 gluster://localhost/vmdata/test1.qcow 8G I'm able to boot my created virtual machine but in the
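Permission-denied errors with qemu over libgfapi are commonly traced to two settings: glusterd must accept connections from unprivileged ports, and the image files must be owned by the qemu user. A hedged sketch using the volume name from the post; the 107:107 uid/gid is Fedora's usual qemu account and should be verified with 'id qemu':

    gluster volume set vmdata server.allow-insecure on
    gluster volume set vmdata storage.owner-uid 107
    gluster volume set vmdata storage.owner-gid 107
    # glusterd itself also needs, in /etc/glusterfs/glusterd.vol:
    #     option rpc-auth-allow-insecure on
    # followed by a restart of glusterd.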
2012 May 29
1
need help to find type I error rate for modified F statistic
Hello everyone, I want to calculate the type I error rate for a modified F statistic for one-way robust ANOVA. I need to find the group-j trimmed mean and winsorized sum of squared deviations. Here I have attached my code for j=2 to keep it simple; originally I have j=4. Hope you can help. I need to run it 1000 times. My problem is: i) the value of the F-test obtained from my simulation below is negative
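For reference, the standard definitions behind these quantities in Yuen/Welch-style robust ANOVA, assuming a trimming proportion \gamma with g_j = \lfloor \gamma n_j \rfloor observations trimmed from each tail of group j (ordered observations X_{(1)j} \le \dots \le X_{(n_j)j}):

    \bar{X}_{tj} = \frac{1}{n_j - 2 g_j} \sum_{i = g_j + 1}^{n_j - g_j} X_{(i)j},
    \qquad
    \mathrm{SSD}_{wj} = \sum_{i=1}^{n_j} \left( Y_{ij} - \bar{Y}_{wj} \right)^2,
    \quad \text{where } Y_{ij} =
    \begin{cases}
      X_{(g_j + 1)j}   & i \le g_j, \\
      X_{(i)j}         & g_j < i \le n_j - g_j, \\
      X_{(n_j - g_j)j} & i > n_j - g_j,
    \end{cases}

and \bar{Y}_{wj} is the mean of the winsorized values Y_{ij}. Since an F statistic is a ratio of non-negative sums of squares, a negative value points to a sign or indexing slip in the simulation code rather than a property of the test.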
2012 Dec 15
3
IRC channel stuck on invite only
Hi, Any ops here? The IRC channel seems broken. Ta, Andrew
2009 Dec 03
5
L2ARC in clusters
Hi, When deploying ZFS in a cluster environment it would be nice to be able to have some SSDs as local drives (not on SAN) and when the pool switches over to the other node ZFS would pick up the node's local disk drives as L2ARC. To better clarify what I mean, let's assume there is a 2-node cluster with 1x 2540 disk array. Now let's put 4x SSDs in each node (as internal/local drives). Now
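On a single node, attaching local SSDs as L2ARC is just a matter of adding cache vdevs; whether the surviving node transparently picks up its own local cache devices after a failover is exactly the open question here. Pool and device names below are placeholders:

    # Attach two local SSDs as cache (L2ARC) devices to pool "tank":
    zpool add tank cache c1t2d0 c1t3d0
    # Cache devices can be removed again without affecting the pool:
    zpool remove tank c1t2d0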
2002 Mar 09
3
3.1p1 + OpenSSL 0.9.5a -> no can do
What should I do in order to be able to compile OpenSSH 3.1p1 with OpenSSL 0.9.5a? I get a lot of EVP-related compile errors in cipher.c, appended. Thanks. Samuli. cipher.c: In function `cipher_init': cipher.c:200: void value not ignored as it ought to be cipher.c:206: warning: implicit declaration of function `EVP_CIPHER_CTX_set_key_length' cipher.c:210: void value not ignored as it
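The undeclared EVP_CIPHER_CTX_set_key_length symbol suggests an OpenSSL older than the 0.9.6 series, which is what OpenSSH 3.1p1 expects; building a newer OpenSSL into a private prefix and pointing OpenSSH's configure at it is the usual route. The prefix below is illustrative:

    # After installing OpenSSL >= 0.9.6 under /usr/local/openssl:
    ./configure --with-ssl-dir=/usr/local/openssl
    make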
2017 Apr 10
2
Prompting the user for input in the PXE boot menu?
Hi, I would like to get some data from the user in the PXE boot menu before launching the OS install. Something along these lines: Please enter hostname for the system: <user types the hostname> Please enter harddisk encryption password: <user types the password> The values thus obtained would be fed to the kernel command line and from there to the OS installer. In our use case it
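SYSLINUX/PXELINUX has no free-form input fields, so a common workaround is to enable the boot: prompt (PROMPT 1) and have the user type extra kernel parameters after the label name, then parse them on the installer side. A sketch of that parsing half in shell; the parameter names hostname= and cryptpw= are invented for illustration:

    # e.g. the user typed:  install hostname=web01 cryptpw=s3cret
    for arg in $(cat /proc/cmdline); do
        case "$arg" in
            hostname=*) HOSTNAME="${arg#hostname=}" ;;
            cryptpw=*)  CRYPTPW="${arg#cryptpw=}" ;;
        esac
    done
    echo "Installing as ${HOSTNAME}"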
2012 Jul 12
3
A few patches to git MASTER
Gentoo bundles flac 1.2.1 with numerous patches applied. Some of these patches are already included in upstream flac-dev (for example the gcc 4.3 cstring issue). Below are the patches supplied by Gentoo (merged against MASTER), together with the ChangeLog explanations and the discussion link for each patch: *flac-9999-asm.patch:* *28 Sep 2007; Samuli Suominen <drac at
2011 Apr 07
3
Ubuntu Execution of '/etc/puppet/etckeeper-commit-pre' returned 1:
Hi, I have just put puppet onto a new Ubuntu install and it ran a couple of times, but now I get "Execution of '/etc/puppet/etckeeper-commit-pre' returned 1:" whenever puppetd runs. Grepping on etckeeper-commit turns up lots of posts to Ubuntu and Debian forums about changes made in February. Does anyone know what the story is? I've ended up with a screwed pam
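On Debian/Ubuntu the hook is wired in through puppet.conf, and the usual fixes are either installing etckeeper or removing the hook lines; paths below are the stock package locations:

    # See how the hook is invoked:
    grep -n etckeeper /etc/puppet/puppet.conf
    # Fix 1: provide what the hook expects
    apt-get install etckeeper
    # Fix 2: comment these out in /etc/puppet/puppet.conf
    #     prerun_command  = /etc/puppet/etckeeper-commit-pre
    #     postrun_command = /etc/puppet/etckeeper-commit-post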