Displaying 12 results from an estimated 12 matches for "vmstor1".

2018 Jan 20 · 3 · Stale locks on shards
...hypervisor. We were able to remove locks from "regular files", but it doesn't seem possible to remove locks from shards. We are running GlusterFS 3.8.15 on all nodes. Here is part of statedump that shows shard having active lock on crashed node: [xlator.features.locks.zone2-ssd1-vmstor1-locks.inode] path=/.shard/75353c17-d6b8-485d-9baf-fd6c700e39a1.21 mandatory=0 inodelk-count=1 lock-dump.domain.domain=zone2-ssd1-vmstor1-replicate-0:metadata lock-dump.domain.domain=zone2-ssd1-vmstor1-replicate-0:self-heal lock-dump.domain.domain=zone2-ssd1-vmstor1-replicate-0 inodelk.inodelk[0](AC...
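The `[xlator.features.locks...]` excerpts in these results come from GlusterFS brick statedumps. As a rough sketch of how such a dump is produced and its lock sections located (volume name taken from the excerpts above; the dump directory is an assumption and varies by installation, commonly /var/run/gluster):

```shell
# Trigger a statedump for all bricks of the volume seen in this thread.
gluster volume statedump zone2-ssd1-vmstor1

# Look for inodelk entries recorded by the locks xlator in the dump files.
# Adjust the path if your glusterd writes dumps elsewhere.
grep -A 8 'xlator.features.locks' /var/run/gluster/*.dump.*
```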
2018 Jan 21 · 0 · Stale locks on shards
...en running now for over 16 hours without any information. In statedump I can see that storage nodes have locks on files and some of those are blocked. Ie. Here again it says that ovirt8z2 is having active lock even ovirt8z2 crashed after the lock was granted.: [xlator.features.locks.zone2-ssd1-vmstor1-locks.inode] path=/.shard/3d55f8cc-cda9-489a-b0a3-fd0f43d67876.27 mandatory=0 inodelk-count=3 lock-dump.domain.domain=zone2-ssd1-vmstor1-replicate-0:self-heal inodelk.inodelk[0](ACTIVE)=type=WRITE, whence=0, start=0, len=0, pid = 18446744073709551610, owner=d0c6d857a87f0000, client=0x7f885845efa0,...
2018 Jan 23 · 3 · Stale locks on shards
...ver 16 hours without any information. > In statedump I can see that storage nodes have locks on files and some of > those are blocked. Ie. Here again it says that ovirt8z2 is having active > lock even ovirt8z2 crashed after the lock was granted.: > > [xlator.features.locks.zone2-ssd1-vmstor1-locks.inode] > path=/.shard/3d55f8cc-cda9-489a-b0a3-fd0f43d67876.27 > mandatory=0 > inodelk-count=3 > lock-dump.domain.domain=zone2-ssd1-vmstor1-replicate-0:self-heal > inodelk.inodelk[0](ACTIVE)=type=WRITE, whence=0, start=0, len=0, pid = > 18446744073709551610, owner=d0c6d857a87...
2018 Jan 25 · 2 · Stale locks on shards
...s on files and some >> of those >> are blocked. Ie. Here again it says that ovirt8z2 is >> having active >> lock even ovirt8z2 crashed after the lock was granted.: >> >> [xlator.features.locks.zone2-ssd1-vmstor1-locks.inode] >> path=/.shard/3d55f8cc-cda9-489a-b0a3-fd0f43d67876.27 >> mandatory=0 >> inodelk-count=3 >> lock-dump.domain.domain=zone2-ssd1-vmstor1-replicate-0:self- >> heal >> inodelk.inodelk[0](ACTIVE...
2018 Jan 23 · 2 · Stale locks on shards
...statedump >>> I can see that storage nodes have locks on files and some of those >>> are blocked. Ie. Here again it says that ovirt8z2 is having active >>> lock even ovirt8z2 crashed after the lock was granted.: >>> >>> [xlator.features.locks.zone2-ssd1-vmstor1-locks.inode] >>> path=/.shard/3d55f8cc-cda9-489a-b0a3-fd0f43d67876.27 >>> mandatory=0 >>> inodelk-count=3 >>> lock-dump.domain.domain=zone2-ssd1-vmstor1-replicate-0:self-heal >>> inodelk.inodelk[0](ACTIVE)=type=WRITE, whence=0, start=0, len=0, pid >&g...
2018 Jan 24 · 0 · Stale locks on shards
...storage nodes have locks on files and some > of those > are blocked. Ie. Here again it says that ovirt8z2 is > having active > lock even ovirt8z2 crashed after the lock was granted.: > > [xlator.features.locks.zone2-ssd1-vmstor1-locks.inode] > path=/.shard/3d55f8cc-cda9-489a-b0a3-fd0f43d67876.27 > mandatory=0 > inodelk-count=3 > lock-dump.domain.domain=zone2-ssd1-vmstor1-replicate-0:self-heal > inodelk.inodelk[0](ACTIVE)=type=WRITE, whence=0, start=...
2018 Jan 25 · 0 · Stale locks on shards
...have locks on files and >>> some >>> of those >>> are blocked. Ie. Here again it says that ovirt8z2 is >>> having active >>> lock even ovirt8z2 crashed after the lock was >>> granted.: >>> >>> [xlator.features.locks.zone2-ssd1-vmstor1-locks.inode] >>> path=/.shard/3d55f8cc-cda9-489a-b0a3-fd0f43d67876.27 >>> mandatory=0 >>> inodelk-count=3 >>> >>> lock-dump.domain.domain=zone2-ssd1-vmstor1-replicate-0:self-heal >>> inodelk.inodelk[0](ACTIVE)=type=WRITE, whence=0, >>>...
2018 Jan 25 · 2 · Stale locks on shards
...>>>> some >>>> of those >>>> are blocked. Ie. Here again it says that ovirt8z2 is >>>> having active >>>> lock even ovirt8z2 crashed after the lock was >>>> granted.: >>>> >>>> [xlator.features.locks.zone2-ssd1-vmstor1-locks.inode] >>>> path=/.shard/3d55f8cc-cda9-489a-b0a3-fd0f43d67876.27 >>>> mandatory=0 >>>> inodelk-count=3 >>>> >>>> lock-dump.domain.domain=zone2-ssd1-vmstor1-replicate-0:self-heal >>>> inodelk.inodelk[0](ACTIVE)=type=WRITE,...
2018 Jan 29 · 0 · Stale locks on shards
...>>>>> of those >>>>> are blocked. Ie. Here again it says that ovirt8z2 is >>>>> having active >>>>> lock even ovirt8z2 crashed after the lock was >>>>> granted.: >>>>> >>>>> [xlator.features.locks.zone2-ssd1-vmstor1-locks.inode] >>>>> path=/.shard/3d55f8cc-cda9-489a-b0a3-fd0f43d67876.27 >>>>> mandatory=0 >>>>> inodelk-count=3 >>>>> >>>>> lock-dump.domain.domain=zone2-ssd1-vmstor1-replicate-0:self-heal >>>>> inodelk.inodel...
2018 Jan 29 · 2 · Stale locks on shards
On 29 Jan 2018 10:50 am, "Samuli Heinonen" <samppah at neutraali.net> wrote: Hi! Yes, thank you for asking. I found out this line in the production environment: lgetxattr("/tmp/zone2-ssd1-vmstor1.s6jvPu//.shard/f349ffbd-a423-4fb2-b83c-2d1d5e78e1fb.32", "glusterfs.clrlk.tinode.kblocked", 0x7f2d7c4379f0, 4096) = -1 EPERM (Operation not permitted) I was expecting .kall instead of .blocked, did you change the cli to kind blocked? And this one in test environment (with posix...
2018 Jan 23 · 0 · Stale locks on shards
...any information. In statedump >> I can see that storage nodes have locks on files and some of those >> are blocked. Ie. Here again it says that ovirt8z2 is having active >> lock even ovirt8z2 crashed after the lock was granted.: >> >> [xlator.features.locks.zone2-ssd1-vmstor1-locks.inode] >> path=/.shard/3d55f8cc-cda9-489a-b0a3-fd0f43d67876.27 >> mandatory=0 >> inodelk-count=3 >> lock-dump.domain.domain=zone2-ssd1-vmstor1-replicate-0:self-heal >> inodelk.inodelk[0](ACTIVE)=type=WRITE, whence=0, start=0, len=0, pid >> = 184467440737095...
2018 Jan 29 · 0 · Stale locks on shards
...itti 29.01.2018 07:32: > On 29 Jan 2018 10:50 am, "Samuli Heinonen" <samppah at neutraali.net> > wrote: > >> Hi! >> >> Yes, thank you for asking. I found out this line in the production >> environment: >> > lgetxattr("/tmp/zone2-ssd1-vmstor1.s6jvPu//.shard/f349ffbd-a423-4fb2-b83c-2d1d5e78e1fb.32", >> "glusterfs.clrlk.tinode.kblocked", 0x7f2d7c4379f0, 4096) = -1 EPERM >> (Operation not permitted) > > I was expecting .kall instead of .blocked, > did you change the cli to kind blocked? > Yes, I wa...
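The `glusterfs.clrlk` getxattr seen in the traces above is what the gluster CLI issues under the hood when clearing locks. A sketch of the corresponding command for the blocked inode locks on the shard discussed in this thread (volume name and shard path taken from the excerpts; verify the exact syntax against your GlusterFS version before running it on production storage):

```shell
# Clear blocked inode locks on one shard of the vmstor1 volume.
# Kind may be blocked, granted, or all; lock type inode, entry, or posix.
gluster volume clear-locks zone2-ssd1-vmstor1 \
    /.shard/3d55f8cc-cda9-489a-b0a3-fd0f43d67876.27 kind blocked inode
```

Using `kind all` instead of `kind blocked` would match the `.kall` xattr suffix mentioned in the 29 Jan message.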