search for: 75353c17

Displaying 12 results from an estimated 12 matches for "75353c17".

2018 Jan 20
3
Stale locks on shards
...remove locks from "regular files", but it doesn't seem possible to remove locks from shards. We are running GlusterFS 3.8.15 on all nodes. Here is part of statedump that shows shard having active lock on crashed node:

[xlator.features.locks.zone2-ssd1-vmstor1-locks.inode]
path=/.shard/75353c17-d6b8-485d-9baf-fd6c700e39a1.21
mandatory=0
inodelk-count=1
lock-dump.domain.domain=zone2-ssd1-vmstor1-replicate-0:metadata
lock-dump.domain.domain=zone2-ssd1-vmstor1-replicate-0:self-heal
lock-dump.domain.domain=zone2-ssd1-vmstor1-replicate-0
inodelk.inodelk[0](ACTIVE)=type=WRITE, whence=0, start=0...
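For context, the excerpt above comes from a GlusterFS statedump of the locks translator. A statedump is triggered with the `gluster` CLI, and granted locks on a path can normally be released with `gluster volume clear-locks`; the thread's problem is that shard files live under the hidden `/.shard` directory, which is not exposed through the client mount, so the path argument cannot target them directly. A minimal sketch of the commands involved (volume name and shard path taken from the excerpt; the dump directory location can vary by distribution and version):

```shell
# Trigger a statedump for the volume; dump files are written on each
# brick node (typically under /var/run/gluster/).
gluster volume statedump zone2-ssd1-vmstor1

# Inspect the lock sections of the newest dump on the affected brick.
grep -A 8 'locks.*inode' /var/run/gluster/*.dump.*

# Clear granted inode locks on a file path visible on the mount.
gluster volume clear-locks zone2-ssd1-vmstor1 /path/to/file kind granted inode

# The locked shard from the statedump, however, sits under the hidden
# .shard directory, which the client mount does not expose -- this is
# the core issue the thread discusses:
#   /.shard/75353c17-d6b8-485d-9baf-fd6c700e39a1.21
```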
2018 Jan 21
0
Stale locks on shards
2018 Jan 25
2
Stale locks on shards
2018 Jan 23
2
Stale locks on shards
2018 Jan 23
3
Stale locks on shards
2018 Jan 24
0
Stale locks on shards
2018 Jan 25
0
Stale locks on shards
2018 Jan 25
2
Stale locks on shards
2018 Jan 29
0
Stale locks on shards
2018 Jan 29
2
Stale locks on shards
2018 Jan 23
0
Stale locks on shards
2018 Jan 29
0
Stale locks on shards