search for: cda9

Displaying 14 results from an estimated 14 matches for "cda9".

2018 Jan 21
0
Stale locks on shards
...ny information. In the statedump I can see that storage nodes hold locks on files and some of those are blocked. I.e. here again it says that ovirt8z2 is holding an active lock even though ovirt8z2 crashed after the lock was granted:

[xlator.features.locks.zone2-ssd1-vmstor1-locks.inode]
path=/.shard/3d55f8cc-cda9-489a-b0a3-fd0f43d67876.27
mandatory=0
inodelk-count=3
lock-dump.domain.domain=zone2-ssd1-vmstor1-replicate-0:self-heal
inodelk.inodelk[0](ACTIVE)=type=WRITE, whence=0, start=0, len=0, pid = 18446744073709551610, owner=d0c6d857a87f0000, client=0x7f885845efa0, connection-id=sto2z2.xxx-10975-2018/01...
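A statedump like the one above is normally triggered from the gluster CLI; a minimal sketch, assuming the volume name from the dump and the default dump directory (the location can be changed via the server.statedump-path option):

    # ask the brick processes of the volume to write a statedump
    gluster volume statedump zone2-ssd1-vmstor1
    # the dump files land under /var/run/gluster/ by default; look up the shard there
    grep -A 6 'cda9-489a-b0a3-fd0f43d67876.27' /var/run/gluster/*.dump.*

The pid in the ACTIVE entry, 18446744073709551610, is 2^64 - 6, i.e. -6 read as a signed 64-bit integer; together with the self-heal lock domain this suggests the lock was taken by an internal GlusterFS client rather than by a regular application process.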
2018 Jan 25
2
Stale locks on shards
...are blocked. Ie. Here again it says that ovirt8z2 is >> having active >> lock even ovirt8z2 crashed after the lock was granted.: >> >> [xlator.features.locks.zone2-ssd1-vmstor1-locks.inode] >> path=/.shard/3d55f8cc-cda9-489a-b0a3-fd0f43d67876.27 >> mandatory=0 >> inodelk-count=3 >> lock-dump.domain.domain=zone2-ssd1-vmstor1-replicate-0:self- >> heal >> inodelk.inodelk[0](ACTIVE)=type=WRITE, whence=0, start=0, >> len=0,...
2018 Jan 23
2
Stale locks on shards
...e locks on files and some of those >>> are blocked. Ie. Here again it says that ovirt8z2 is having active >>> lock even ovirt8z2 crashed after the lock was granted.: >>> >>> [xlator.features.locks.zone2-ssd1-vmstor1-locks.inode] >>> path=/.shard/3d55f8cc-cda9-489a-b0a3-fd0f43d67876.27 >>> mandatory=0 >>> inodelk-count=3 >>> lock-dump.domain.domain=zone2-ssd1-vmstor1-replicate-0:self-heal >>> inodelk.inodelk[0](ACTIVE)=type=WRITE, whence=0, start=0, len=0, pid >>> = 18446744073709551610, owner=d0c6d857a87f0000...
2018 Jan 23
3
Stale locks on shards
...atedump I can see that storage nodes have locks on files and some of > those are blocked. Ie. Here again it says that ovirt8z2 is having active > lock even ovirt8z2 crashed after the lock was granted.: > > [xlator.features.locks.zone2-ssd1-vmstor1-locks.inode] > path=/.shard/3d55f8cc-cda9-489a-b0a3-fd0f43d67876.27 > mandatory=0 > inodelk-count=3 > lock-dump.domain.domain=zone2-ssd1-vmstor1-replicate-0:self-heal > inodelk.inodelk[0](ACTIVE)=type=WRITE, whence=0, start=0, len=0, pid = > 18446744073709551610, owner=d0c6d857a87f0000, client=0x7f885845efa0, > connection...
2018 Jan 24
0
Stale locks on shards
...of those > are blocked. Ie. Here again it says that ovirt8z2 is > having active > lock even ovirt8z2 crashed after the lock was granted.: > > [xlator.features.locks.zone2-ssd1-vmstor1-locks.inode] > path=/.shard/3d55f8cc-cda9-489a-b0a3-fd0f43d67876.27 > mandatory=0 > inodelk-count=3 > lock-dump.domain.domain=zone2-ssd1-vmstor1-replicate-0:self-heal > inodelk.inodelk[0](ACTIVE)=type=WRITE, whence=0, start=0, > len=0, pid > = 1844674...
2018 Jan 20
3
Stale locks on shards
Hi all! One hypervisor in our virtualization environment crashed and now some of the VM images cannot be accessed. After investigation we found out that there were lots of images that still had an active lock on the crashed hypervisor. We were able to remove locks from "regular files", but it doesn't seem possible to remove locks from shards. We are running GlusterFS 3.8.15 on all
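For regular files, stale locks in this situation are usually cleared with the clear-locks command; a hedged sketch of what that looks like (volume name taken from the statedumps in the thread, the file path and the 0,0-0 range are placeholders following the admin guide's examples):

    # take a statedump first to find which file holds a granted-but-stale inodelk
    gluster volume statedump zone2-ssd1-vmstor1
    # then clear the granted inode locks on that file, addressed by its path on the volume
    gluster volume clear-locks zone2-ssd1-vmstor1 /path/to/vm-image kind granted inode 0,0-0

Shard files, however, live under the hidden /.shard directory (e.g. /.shard/3d55f8cc-cda9-489a-b0a3-fd0f43d67876.27), which is not exposed through the client mount, which is presumably why the same approach does not carry over to shards directly.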
2018 Jan 25
0
Stale locks on shards
...f those >>> are blocked. Ie. Here again it says that ovirt8z2 is >>> having active >>> lock even ovirt8z2 crashed after the lock was >>> granted.: >>> >>> [xlator.features.locks.zone2-ssd1-vmstor1-locks.inode] >>> path=/.shard/3d55f8cc-cda9-489a-b0a3-fd0f43d67876.27 >>> mandatory=0 >>> inodelk-count=3 >>> >>> lock-dump.domain.domain=zone2-ssd1-vmstor1-replicate-0:self-heal >>> inodelk.inodelk[0](ACTIVE)=type=WRITE, whence=0, >>> start=0, >>> len=0, pid >>> = 1844...
2018 Jan 25
2
Stale locks on shards
...e blocked. Ie. Here again it says that ovirt8z2 is >>>> having active >>>> lock even ovirt8z2 crashed after the lock was >>>> granted.: >>>> >>>> [xlator.features.locks.zone2-ssd1-vmstor1-locks.inode] >>>> path=/.shard/3d55f8cc-cda9-489a-b0a3-fd0f43d67876.27 >>>> mandatory=0 >>>> inodelk-count=3 >>>> >>>> lock-dump.domain.domain=zone2-ssd1-vmstor1-replicate-0:self-heal >>>> inodelk.inodelk[0](ACTIVE)=type=WRITE, whence=0, >>>> start=0, >>>> len...
2018 Jan 29
0
Stale locks on shards
...n it says that ovirt8z2 is >>>>> having active >>>>> lock even ovirt8z2 crashed after the lock was >>>>> granted.: >>>>> >>>>> [xlator.features.locks.zone2-ssd1-vmstor1-locks.inode] >>>>> path=/.shard/3d55f8cc-cda9-489a-b0a3-fd0f43d67876.27 >>>>> mandatory=0 >>>>> inodelk-count=3 >>>>> >>>>> lock-dump.domain.domain=zone2-ssd1-vmstor1-replicate-0:self-heal >>>>> inodelk.inodelk[0](ACTIVE)=type=WRITE, whence=0, >>>>> start...
2018 Jan 29
2
Stale locks on shards
...says that ovirt8z2 is > having active > lock even ovirt8z2 crashed after the lock was > granted.: > > [xlator.features.locks.zone2-ssd1-vmstor1-locks.inode] > path=/.shard/3d55f8cc-cda9-489a-b0a3-fd0f43d67876.27 > mandatory=0 > inodelk-count=3 > > lock-dump.domain.domain=zone2- > ssd1-vmstor1-replicate-0:self-heal > inodelk.inodelk[0](ACTIVE)=type=WRITE, whence=0, >...
2018 Jan 23
0
Stale locks on shards
...t storage nodes have locks on files and some of those >> are blocked. Ie. Here again it says that ovirt8z2 is having active >> lock even ovirt8z2 crashed after the lock was granted.: >> >> [xlator.features.locks.zone2-ssd1-vmstor1-locks.inode] >> path=/.shard/3d55f8cc-cda9-489a-b0a3-fd0f43d67876.27 >> mandatory=0 >> inodelk-count=3 >> lock-dump.domain.domain=zone2-ssd1-vmstor1-replicate-0:self-heal >> inodelk.inodelk[0](ACTIVE)=type=WRITE, whence=0, start=0, len=0, pid >> = 18446744073709551610, owner=d0c6d857a87f0000, >> client=0x...
2018 Jan 29
0
Stale locks on shards
...e again it says that >>> ovirt8z2 is >>> having active >>> lock even ovirt8z2 crashed after the lock was >>> granted.: >>> >>> >>> [xlator.features.locks.zone2-ssd1-vmstor1-locks.inode] >>> >>> path=/.shard/3d55f8cc-cda9-489a-b0a3-fd0f43d67876.27 >>> mandatory=0 >>> inodelk-count=3 >>> >>> >>> lock-dump.domain.domain=zone2-ssd1-vmstor1-replicate-0:self-heal >>> inodelk.inodelk[0](ACTIVE)=type=WRITE, >>> whence=0, >>> start=0, >>> len...
2018 Feb 07
0
Fwd: Troubleshooting glusterfs
...torage total free space was getting lower than a threshold value (in this test it was 70Gb in the beginning and then was changed to 150Gb). I can say it was ~50-60% used all the time. When stopping the test the volume looked like this:

Volume Name: gv1
Type: Distribute
Volume ID: fcdae350-cda9-4da3-bb70-63558ab11f56
Status: Started
Snapshot Count: 0
Number of Bricks: 22
Transport-type: tcp
Bricks:
Brick1: dev-gluster1.qencode.com:/var/storage/brick/gv1
Brick2: dev-gluster2.qencode.com:/var/storage/brick/gv1
Brick3: master-59e8248a0ac511e892e90671029ed6b8.qencode.com:/var/storage...
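Since the test revolves around a free-space threshold on a pure Distribute volume, the per-brick usage report and DHT's min-free-disk setting are the usual things to check; a short sketch, with gv1 from the output above and the 10% value only as an illustration:

    # per-brick capacity, free space and inode usage for the volume
    gluster volume status gv1 detail
    # DHT avoids placing new files on bricks whose free space has dropped below this limit
    gluster volume set gv1 cluster.min-free-disk 10%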
2018 Feb 05
2
Fwd: Troubleshooting glusterfs
Hello Nithya! Thank you so much, I think we are close to building a stable storage solution according to your recommendations. Here's our rebalance log - please don't pay attention to the error messages after 9AM - that is when we manually destroyed the volume to recreate it for further testing. Also, all remove-brick operations you can see in the log were executed manually when recreating the volume.
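For reference, the rebalance and remove-brick operations mentioned here are normally driven and inspected along these lines (gv1 and the brick path are taken from the volume info above; this is only a sketch of the usual sequence, and the rebalance log itself typically ends up under /var/log/glusterfs/ as gv1-rebalance.log):

    # start a rebalance after the brick layout changed, then follow its progress
    gluster volume rebalance gv1 start
    gluster volume rebalance gv1 status

    # remove-brick first migrates data off the brick; commit only once status reports completion
    gluster volume remove-brick gv1 dev-gluster2.qencode.com:/var/storage/brick/gv1 start
    gluster volume remove-brick gv1 dev-gluster2.qencode.com:/var/storage/brick/gv1 status
    gluster volume remove-brick gv1 dev-gluster2.qencode.com:/var/storage/brick/gv1 commit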