similar to: IRC channel stuck on invite only

Displaying 20 results from an estimated 900 matches similar to: "IRC channel stuck on invite only"

2018 Jan 29
2
Stale locks on shards
On 29 Jan 2018 10:50 am, "Samuli Heinonen" <samppah at neutraali.net> wrote: Hi! Yes, thank you for asking. I found out this line in the production environment: lgetxattr("/tmp/zone2-ssd1-vmstor1.s6jvPu//.shard/f349ffbd-a423-4fb2-b83c-2d1d5e78e1fb.32", "glusterfs.clrlk.tinode.kblocked", 0x7f2d7c4379f0, 4096) = -1 EPERM (Operation not permitted) I was
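The strace line above suggests that "gluster vol clear-locks" is implemented as a getxattr on a virtual "glusterfs.clrlk.*" key, issued against a temporary mount that glusterd creates under /tmp. A rough reconstruction of the same call from a manually created mount might look like the sketch below; the server name, volume name and mount point are illustrative, and whether a plain client mount is even allowed to issue this getxattr is exactly what the EPERM in this thread puts in question.

    mount -t glusterfs server1:/zone2-ssd1-vmstor1 /mnt/vmstor1-aux
    getfattr -n glusterfs.clrlk.tinode.kblocked \
        /mnt/vmstor1-aux/.shard/f349ffbd-a423-4fb2-b83c-2d1d5e78e1fb.32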
2018 Jan 29
0
Stale locks on shards
Pranith Kumar Karampuri wrote on 29.01.2018 07:32: > On 29 Jan 2018 10:50 am, "Samuli Heinonen" <samppah at neutraali.net> > wrote: > >> Hi! >> >> Yes, thank you for asking. I found out this line in the production >> environment: >> > lgetxattr("/tmp/zone2-ssd1-vmstor1.s6jvPu//.shard/f349ffbd-a423-4fb2-b83c-2d1d5e78e1fb.32",
2018 Jan 25
2
Stale locks on shards
On Thu, Jan 25, 2018 at 1:49 PM, Samuli Heinonen <samppah at neutraali.net> wrote: > Pranith Kumar Karampuri wrote on 25.01.2018 07:09: > >> On Thu, Jan 25, 2018 at 2:27 AM, Samuli Heinonen >> <samppah at neutraali.net> wrote: >> >> Hi! >>> >>> Thank you very much for your help so far. Could you please give an >>> example
2018 Jan 29
0
Stale locks on shards
Hi, Did you find the command from strace? On 25 Jan 2018 1:52 pm, "Pranith Kumar Karampuri" <pkarampu at redhat.com> wrote: > > > On Thu, Jan 25, 2018 at 1:49 PM, Samuli Heinonen <samppah at neutraali.net> > wrote: > >> Pranith Kumar Karampuri wrote on 25.01.2018 07:09: >> >>> On Thu, Jan 25, 2018 at 2:27 AM, Samuli Heinonen
2018 Jan 25
2
Stale locks on shards
On Thu, Jan 25, 2018 at 2:27 AM, Samuli Heinonen <samppah at neutraali.net> wrote: > Hi! > > Thank you very much for your help so far. Could you please give an example > command showing how to use aux-gid-mount to remove locks? "gluster vol clear-locks" > seems to mount the volume by itself. > You are correct, sorry, this was implemented around 7 years back and I forgot
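For reference, the documented shape of the clear-locks command being discussed here is "gluster volume clear-locks VOLNAME path kind {blocked|granted|all} {inode|entry|posix}". A hedged example matching this thread, with the volume name inferred from the temporary mount path seen earlier and the shard path purely illustrative:

    gluster volume clear-locks zone2-ssd1-vmstor1 \
        /.shard/f349ffbd-a423-4fb2-b83c-2d1d5e78e1fb.32 kind blocked inode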
2018 Jan 23
2
Stale locks on shards
On Tue, Jan 23, 2018 at 1:38 PM, Samuli Heinonen <samppah at neutraali.net> wrote: > Pranith Kumar Karampuri wrote on 23.01.2018 09:34: > >> On Mon, Jan 22, 2018 at 12:33 AM, Samuli Heinonen >> <samppah at neutraali.net> wrote: >> >> Hi again, >>> >>> here is more information regarding the issue described earlier >>> >>>
2018 Jan 25
0
Stale locks on shards
Pranith Kumar Karampuri wrote on 25.01.2018 07:09: > On Thu, Jan 25, 2018 at 2:27 AM, Samuli Heinonen > <samppah at neutraali.net> wrote: > >> Hi! >> >> Thank you very much for your help so far. Could you please give an >> example command showing how to use aux-gid-mount to remove locks? "gluster >> vol clear-locks" seems to mount the volume by itself.
2018 Jan 24
0
Stale locks on shards
Hi! Thank you very much for your help so far. Could you please give an example command showing how to use aux-gid-mount to remove locks? "gluster vol clear-locks" seems to mount the volume by itself. Best regards, Samuli Heinonen > Pranith Kumar Karampuri <pkarampu at redhat.com> > 23 January 2018 at 10.30 > > > On Tue, Jan 23, 2018 at 1:38 PM, Samuli Heinonen >
2018 Jan 20
3
Stale locks on shards
Hi all! One hypervisor in our virtualization environment crashed and now some of the VM images cannot be accessed. After investigation we found out that there were lots of images that still had an active lock held by the crashed hypervisor. We were able to remove locks from "regular files", but it doesn't seem possible to remove locks from shards. We are running GlusterFS 3.8.15 on all
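One way to see which client still holds locks on a shard before trying to clear them is a statedump of the volume; a minimal sketch, assuming the volume is named zone2-ssd1-vmstor1 and that the dumps land in the default /var/run/gluster directory on each brick host:

    gluster volume statedump zone2-ssd1-vmstor1
    # on a brick host, look for lock entries that are still ACTIVE or BLOCKED
    grep -E 'ACTIVE|BLOCKED' /var/run/gluster/*.dump.* | less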
2018 Jan 21
0
Stale locks on shards
Hi again, here is more information regarding the issue described earlier. It looks like self-healing is stuck. According to "heal statistics", the crawl began at Sat Jan 20 12:56:19 2018 and it's still going on (it's around Sun Jan 21 20:30 when writing this). However, glustershd.log says that the last heal was completed at "2018-01-20 11:00:13.090697" (which is 13:00 UTC+2).
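The checks described above map roughly to the commands below; a hedged sketch with the volume name assumed, included only to make the report easier to follow. Note that glustershd.log timestamps are in UTC, which accounts for the two-hour offset mentioned in the message:

    gluster volume heal zone2-ssd1-vmstor1 statistics
    gluster volume heal zone2-ssd1-vmstor1 info
    tail -n 50 /var/log/glusterfs/glustershd.log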
2018 Jan 23
3
Stale locks on shards
On Mon, Jan 22, 2018 at 12:33 AM, Samuli Heinonen <samppah at neutraali.net> wrote: > Hi again, > > here is more information regarding the issue described earlier > > It looks like self-healing is stuck. According to "heal statistics", the crawl > began at Sat Jan 20 12:56:19 2018 and it's still going on (it's around Sun > Jan 21 20:30 when writing this).
2018 Jan 23
0
Stale locks on shards
Pranith Kumar Karampuri wrote on 23.01.2018 09:34: > On Mon, Jan 22, 2018 at 12:33 AM, Samuli Heinonen > <samppah at neutraali.net> wrote: > >> Hi again, >> >> here is more information regarding the issue described earlier >> >> It looks like self-healing is stuck. According to "heal statistics", >> the crawl began at Sat Jan 20 12:56:19 2018
2013 Mar 20
2
Geo-replication broken in 3.4 alpha2?
Dear all, I'm running GlusterFS 3.4 alpha2 together with oVirt 3.2. This is solely a test system and it doesn't have much data or anything important in it. Currently it has only 2 VMs running and disk usage is around 15 GB. I have been trying to set up geo-replication for disaster recovery testing. For geo-replication I did the following: All machines are running CentOS 6.4 and using
2017 Dec 06
1
reminder: Community meeting at 15:00 UTC in #gluster-meeting on Freenode IRC
Hi all, This is a reminder about today's meeting. The meeting will start later today at 15:00 UTC. You can convert that to a local time with the following command in your terminal: $ date -d '15:00 UTC' Because the meeting is an open floor, topics and updates need to be added to the meeting pad at: https://bit.ly/gluster-community-meetings Hope to see you later online, Niels
2013 Sep 16
1
Gluster 3.4 QEMU and Permission Denied Errors
Hey List, I'm trying to test out using Gluster 3.4 for virtual machine disks. My environment consists of two Fedora 19 hosts with gluster and qemu/kvm installed. I have a single volume on gluster called vmdata that contains my qcow2-formatted image created like this: qemu-img create -f qcow2 gluster://localhost/vmdata/test1.qcow 8G I'm able to boot my created virtual machine but in the
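A hedged sketch of the volume options that are usually involved when qemu accesses a Gluster volume over libgfapi as a non-root user; the uid/gid of 107 is the typical qemu account on Fedora and is an assumption, not something confirmed in this thread:

    gluster volume set vmdata server.allow-insecure on
    gluster volume set vmdata storage.owner-uid 107
    gluster volume set vmdata storage.owner-gid 107
    # additionally, "option rpc-auth-allow-insecure on" in /etc/glusterfs/glusterd.vol
    # on each server, followed by a glusterd restart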
2012 Aug 17
1
Fwd: vm pxe fail
----- Forwarded Message ----- From: "Andrew Holway" <a.holway at syseleven.de> To: "Alex Jia" <ajia at redhat.com> Cc: kvm at vger.kernel.org Sent: Friday, August 17, 2012 4:24:33 PM Subject: Re: [libvirt-users] vm pxe fail Hello, On Aug 17, 2012, at 4:34 AM, Alex Jia wrote: > Hi Andrew, > I can't confirm a root cause based on your information, perhaps
2023 Mar 29
1
gluster csi driver
Looking at this code, it's way more than I was looking for, too. I just need a replacement for the in-tree driver. I have a volume. I have about a half dozen pods that use that volume. I just need the same capabilities as the in-tree driver to satisfy that need. I want to use kadalu to replace the hacky thing I'm still doing using hostpath_pv, but last time I checked, it didn't build
2013 Jul 02
1
problem expanding a volume
Hello, I am having trouble expanding a volume. Every time I try to add bricks to the volume, I get this error: [root at gluster1 sdb1]# gluster volume add-brick vg0 gluster5:/export/brick2/sdb1 gluster6:/export/brick2/sdb1 /export/brick2/sdb1 or a prefix of it is already part of a volume Here is the volume info: [root at gluster1 sdb1]# gluster volume info vg0 Volume Name: vg0 Type:
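The "already part of a volume" message usually means the brick directory (or one of its parents) still carries GlusterFS metadata from a previous volume or a failed attempt. A commonly cited cleanup, sketched here against the paths in the error message and safe only if the brick holds no data that is still needed:

    # run on each host whose brick is rejected
    setfattr -x trusted.glusterfs.volume-id /export/brick2/sdb1
    setfattr -x trusted.gfid /export/brick2/sdb1
    rm -rf /export/brick2/sdb1/.glusterfs
    # then retry the add-brick command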
2012 Aug 15
3
samba problem with kernel 2.6.32-279.*
Hello, We use Norton Ghost, running in a PXE-booted DOS, to handle Windows XP images. The images are stored on a Samba share on our CentOS 6 server. This has worked without any problems for years. Since kernel 2.6.32-279.* it has stopped working. The symptom is that if I boot in DOS, and do: net use x: \\myserver\ghostimages dir x: I get an infinite loop where the first file name is listed
2010 Mar 18
2
[Bug 27152] New: [G72M] Screen corruption when using KMS. Dell Latitude D620 / Quadro NVS 110M/GeForce Go 7300
http://bugs.freedesktop.org/show_bug.cgi?id=27152 Summary: [G72M] Screen corruption when using KMS. Dell Latitude D620 / Quadro NVS 110M/GeForce Go 7300 Product: xorg Version: unspecified Platform: Other OS/Version: All Status: NEW Severity: normal Priority: medium Component: