search for: b0a3

Displaying 12 results from an estimated 12 matches for "b0a3".

2018 Jan 21 · 0 · Stale locks on shards

...tion. In statedump I can see that storage nodes have locks on files and some of those are blocked. Ie. Here again it says that ovirt8z2 is having active lock even ovirt8z2 crashed after the lock was granted.:

[xlator.features.locks.zone2-ssd1-vmstor1-locks.inode]
path=/.shard/3d55f8cc-cda9-489a-b0a3-fd0f43d67876.27
mandatory=0
inodelk-count=3
lock-dump.domain.domain=zone2-ssd1-vmstor1-replicate-0:self-heal
inodelk.inodelk[0](ACTIVE)=type=WRITE, whence=0, start=0, len=0, pid = 18446744073709551610, owner=d0c6d857a87f0000, client=0x7f885845efa0, connection-id=sto2z2.xxx-10975-2018/01/20-10:56:...
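
The excerpt above is a fragment of a brick statedump. A minimal sketch of producing and searching such a dump, assuming the volume name zone2-ssd1-vmstor1 from the excerpt and the default dump directory /var/run/gluster (configurable via server.statedump-path):

    # ask glusterd to dump the state of every brick of the volume
    gluster volume statedump zone2-ssd1-vmstor1

    # on each brick host, search the fresh dumps for the shard's lock section
    grep -A 8 'b0a3-fd0f43d67876.27' /var/run/gluster/*.dump.*

One detail worth noting: the pid in the ACTIVE lock, 18446744073709551610, is -6 when read as a signed 64-bit integer. Gluster uses negative pseudo-pids for internal clients, which is consistent with the lock sitting in the self-heal domain rather than belonging to an ordinary application process.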

2018 Jan 25 · 2 · Stale locks on shards

...blocked. Ie. Here again it says that ovirt8z2 is
>> having active
>> lock even ovirt8z2 crashed after the lock was granted.:
>>
>> [xlator.features.locks.zone2-ssd1-vmstor1-locks.inode]
>> path=/.shard/3d55f8cc-cda9-489a-b0a3-fd0f43d67876.27
>> mandatory=0
>> inodelk-count=3
>> lock-dump.domain.domain=zone2-ssd1-vmstor1-replicate-0:self-heal
>> inodelk.inodelk[0](ACTIVE)=type=WRITE, whence=0, start=0,
>> len=0, pid ...

2018 Jan 23 · 2 · Stale locks on shards

...files and some of those
>>> are blocked. Ie. Here again it says that ovirt8z2 is having active
>>> lock even ovirt8z2 crashed after the lock was granted.:
>>>
>>> [xlator.features.locks.zone2-ssd1-vmstor1-locks.inode]
>>> path=/.shard/3d55f8cc-cda9-489a-b0a3-fd0f43d67876.27
>>> mandatory=0
>>> inodelk-count=3
>>> lock-dump.domain.domain=zone2-ssd1-vmstor1-replicate-0:self-heal
>>> inodelk.inodelk[0](ACTIVE)=type=WRITE, whence=0, start=0, len=0, pid
>>> = 18446744073709551610, owner=d0c6d857a87f0000, ...

2018 Jan 23 · 3 · Stale locks on shards

...can see that storage nodes have locks on files and some of
> those are blocked. Ie. Here again it says that ovirt8z2 is having active
> lock even ovirt8z2 crashed after the lock was granted.:
>
> [xlator.features.locks.zone2-ssd1-vmstor1-locks.inode]
> path=/.shard/3d55f8cc-cda9-489a-b0a3-fd0f43d67876.27
> mandatory=0
> inodelk-count=3
> lock-dump.domain.domain=zone2-ssd1-vmstor1-replicate-0:self-heal
> inodelk.inodelk[0](ACTIVE)=type=WRITE, whence=0, start=0, len=0, pid =
> 18446744073709551610, owner=d0c6d857a87f0000, client=0x7f885845efa0,
> connection-id=sto2z2...
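
This fuller excerpt also carries the owner and connection-id fields, which identify the client connection holding the lock (the connection-id embeds the client hostname, process id and connection timestamp). A hedged sketch of using the owner to find every lock left behind by one node, again assuming dumps under the default /var/run/gluster:

    # print each lock section held by the same owner across all brick dumps
    grep -B 6 'owner=d0c6d857a87f0000' /var/run/gluster/*.dump.*

A lock whose connection-id points at a host that has since crashed is exactly the stale-lock signature this thread is chasing.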

2018 Jan 24 · 0 · Stale locks on shards

...
> are blocked. Ie. Here again it says that ovirt8z2 is
> having active
> lock even ovirt8z2 crashed after the lock was granted.:
>
> [xlator.features.locks.zone2-ssd1-vmstor1-locks.inode]
> path=/.shard/3d55f8cc-cda9-489a-b0a3-fd0f43d67876.27
> mandatory=0
> inodelk-count=3
> lock-dump.domain.domain=zone2-ssd1-vmstor1-replicate-0:self-heal
> inodelk.inodelk[0](ACTIVE)=type=WRITE, whence=0,
> start=0,
> len=0, pid
> = 18446744073709551...

2018 Jan 20 · 3 · Stale locks on shards

Hi all! One hypervisor in our virtualization environment crashed, and now some of the VM images cannot be accessed. After investigation we found that lots of images still had an active lock held by the crashed hypervisor. We were able to remove the locks from "regular files", but it doesn't seem possible to remove locks from shards. We are running GlusterFS 3.8.15 on all
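
In command form, the removal described above goes through Gluster's clear-locks facility; a minimal sketch, assuming the volume and shard names from the statedump excerpts and a hypothetical image path /path/to/image.img (exact kind and range arguments per gluster volume help):

    # clearing locks on a regular image file visible through the mount
    gluster volume clear-locks zone2-ssd1-vmstor1 /path/to/image.img kind all inode

    # a shard lives under the hidden .shard directory, named by gfid and shard index
    gluster volume clear-locks zone2-ssd1-vmstor1 /.shard/3d55f8cc-cda9-489a-b0a3-fd0f43d67876.27 kind all inode

The catch the poster runs into is that .shard is internal to the shard translator and not exposed on a client mount, which is likely why the shard case behaves differently from regular files.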

2018 Jan 25 · 0 · Stale locks on shards

...
>>> are blocked. Ie. Here again it says that ovirt8z2 is
>>> having active
>>> lock even ovirt8z2 crashed after the lock was
>>> granted.:
>>>
>>> [xlator.features.locks.zone2-ssd1-vmstor1-locks.inode]
>>> path=/.shard/3d55f8cc-cda9-489a-b0a3-fd0f43d67876.27
>>> mandatory=0
>>> inodelk-count=3
>>>
>>> lock-dump.domain.domain=zone2-ssd1-vmstor1-replicate-0:self-heal
>>> inodelk.inodelk[0](ACTIVE)=type=WRITE, whence=0,
>>> start=0,
>>> len=0, pid
>>> = 18446744073709...

2018 Jan 25 · 2 · Stale locks on shards

...Ie. Here again it says that ovirt8z2 is
>>>> having active
>>>> lock even ovirt8z2 crashed after the lock was
>>>> granted.:
>>>>
>>>> [xlator.features.locks.zone2-ssd1-vmstor1-locks.inode]
>>>> path=/.shard/3d55f8cc-cda9-489a-b0a3-fd0f43d67876.27
>>>> mandatory=0
>>>> inodelk-count=3
>>>>
>>>> lock-dump.domain.domain=zone2-ssd1-vmstor1-replicate-0:self-heal
>>>> inodelk.inodelk[0](ACTIVE)=type=WRITE, whence=0,
>>>> start=0,
>>>> len=0, pid ...

2018 Jan 29 · 0 · Stale locks on shards

...that ovirt8z2 is
>>>>> having active
>>>>> lock even ovirt8z2 crashed after the lock was
>>>>> granted.:
>>>>>
>>>>> [xlator.features.locks.zone2-ssd1-vmstor1-locks.inode]
>>>>> path=/.shard/3d55f8cc-cda9-489a-b0a3-fd0f43d67876.27
>>>>> mandatory=0
>>>>> inodelk-count=3
>>>>>
>>>>> lock-dump.domain.domain=zone2-ssd1-vmstor1-replicate-0:self-heal
>>>>> inodelk.inodelk[0](ACTIVE)=type=WRITE, whence=0,
>>>>> start=0, ...

2018 Jan 29 · 2 · Stale locks on shards

...ovirt8z2 is
> having active
> lock even ovirt8z2 crashed after the lock was
> granted.:
>
> [xlator.features.locks.zone2-ssd1-vmstor1-locks.inode]
> path=/.shard/3d55f8cc-cda9-489a-b0a3-fd0f43d67876.27
> mandatory=0
> inodelk-count=3
>
> lock-dump.domain.domain=zone2-ssd1-vmstor1-replicate-0:self-heal
> inodelk.inodelk[0](ACTIVE)=type=WRITE, whence=0,
> start...

2018 Jan 23 · 0 · Stale locks on shards

...nodes have locks on files and some of those
>> are blocked. Ie. Here again it says that ovirt8z2 is having active
>> lock even ovirt8z2 crashed after the lock was granted.:
>>
>> [xlator.features.locks.zone2-ssd1-vmstor1-locks.inode]
>> path=/.shard/3d55f8cc-cda9-489a-b0a3-fd0f43d67876.27
>> mandatory=0
>> inodelk-count=3
>> lock-dump.domain.domain=zone2-ssd1-vmstor1-replicate-0:self-heal
>> inodelk.inodelk[0](ACTIVE)=type=WRITE, whence=0, start=0, len=0, pid
>> = 18446744073709551610, owner=d0c6d857a87f0000,
>> client=0x7f885845ef...

2018 Jan 29 · 0 · Stale locks on shards

...says that
>>> ovirt8z2 is
>>> having active
>>> lock even ovirt8z2 crashed after the lock was
>>> granted.:
>>>
>>> [xlator.features.locks.zone2-ssd1-vmstor1-locks.inode]
>>> path=/.shard/3d55f8cc-cda9-489a-b0a3-fd0f43d67876.27
>>> mandatory=0
>>> inodelk-count=3
>>>
>>> lock-dump.domain.domain=zone2-ssd1-vmstor1-replicate-0:self-heal
>>> inodelk.inodelk[0](ACTIVE)=type=WRITE,
>>> whence=0,
>>> start=0,
>>> len=0, pid ...