Displaying 20 results from an estimated 43 matches for "inodelk".
2018 Jan 21
0
Stale locks on shards
...storage nodes have
locks on files and some of those are blocked. I.e. here again it says
that ovirt8z2 has an active lock even though ovirt8z2 crashed after the
lock was granted:
[xlator.features.locks.zone2-ssd1-vmstor1-locks.inode]
path=/.shard/3d55f8cc-cda9-489a-b0a3-fd0f43d67876.27
mandatory=0
inodelk-count=3
lock-dump.domain.domain=zone2-ssd1-vmstor1-replicate-0:self-heal
inodelk.inodelk[0](ACTIVE)=type=WRITE, whence=0, start=0, len=0, pid =
18446744073709551610, owner=d0c6d857a87f0000, client=0x7f885845efa0,
connection-id=sto2z2.xxx-10975-2018/01/20-10:56:14:649541-zone2-ssd1-vmstor1-client-...
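A lock table like the one above comes from a brick statedump. As a minimal sketch (assuming shell access to a storage node; the volume name is taken from the excerpt and the dump directory is the usual default, which may differ on your install):

  # Ask every brick process of this volume to dump its state
  # (locks, open fds, memory pools); dumps usually land under
  # /var/run/gluster on each storage node.
  gluster volume statedump zone2-ssd1-vmstor1

  # Pull the lock section for one shard out of the dump files.
  grep -A10 '3d55f8cc-cda9-489a-b0a3-fd0f43d67876.27' /var/run/gluster/*.dump.*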
2018 Jan 20
3
Stale locks on shards
...sn't seem possible to remove locks from shards.
We are running GlusterFS 3.8.15 on all nodes.
Here is part of a statedump that shows a shard having an active lock on
the crashed node:
[xlator.features.locks.zone2-ssd1-vmstor1-locks.inode]
path=/.shard/75353c17-d6b8-485d-9baf-fd6c700e39a1.21
mandatory=0
inodelk-count=1
lock-dump.domain.domain=zone2-ssd1-vmstor1-replicate-0:metadata
lock-dump.domain.domain=zone2-ssd1-vmstor1-replicate-0:self-heal
lock-dump.domain.domain=zone2-ssd1-vmstor1-replicate-0
inodelk.inodelk[0](ACTIVE)=type=WRITE, whence=0, start=0, len=0, pid =
3568, owner=14ce372c397f0000, clien...
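The usual CLI for dropping such locks is gluster volume clear-locks, though as the excerpt above notes it did not seem to help for shard paths in this case. This is a hedged sketch only, not the thread's confirmed resolution; the volume and shard path come from the statedump above, and clear-locks releases locks unconditionally, so check a fresh statedump first:

  # Release granted inode locks recorded for this shard.
  gluster volume clear-locks zone2-ssd1-vmstor1 \
      /.shard/75353c17-d6b8-485d-9baf-fd6c700e39a1.21 kind granted inode

  # Locks live in brick-process memory, so if clear-locks has no effect,
  # killing and restarting the one brick (glusterfsd) process that still
  # holds them also releases them; bring it back and let self-heal catch up.
  gluster volume start zone2-ssd1-vmstor1 force
  gluster volume heal zone2-ssd1-vmstor1 info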
2018 Jan 25
2
Stale locks on shards
...ng active
>> lock even though ovirt8z2 crashed after the lock was granted:
>>
>> [xlator.features.locks.zone2-ssd1-vmstor1-locks.inode]
>> path=/.shard/3d55f8cc-cda9-489a-b0a3-fd0f43d67876.27
>> mandatory=0
>> inodelk-count=3
>> lock-dump.domain.domain=zone2-ssd1-vmstor1-replicate-0:self-heal
>> inodelk.inodelk[0](ACTIVE)=type=WRITE, whence=0, start=0,
>> len=0, pid
>> = 18446744073709551610, owner=d0c6d857a87f0000,
>>...
2018 Jan 23
2
Stale locks on shards
...again it says that ovirt8z2 has an active
>>> lock even though ovirt8z2 crashed after the lock was granted:
>>>
>>> [xlator.features.locks.zone2-ssd1-vmstor1-locks.inode]
>>> path=/.shard/3d55f8cc-cda9-489a-b0a3-fd0f43d67876.27
>>> mandatory=0
>>> inodelk-count=3
>>> lock-dump.domain.domain=zone2-ssd1-vmstor1-replicate-0:self-heal
>>> inodelk.inodelk[0](ACTIVE)=type=WRITE, whence=0, start=0, len=0, pid
>>> = 18446744073709551610, owner=d0c6d857a87f0000,
>>> client=0x7f885845efa0,
>>>
>>> connecti...
2018 Jan 23
3
Stale locks on shards
...les and some of
> those are blocked. I.e. here again it says that ovirt8z2 has an active
> lock even though ovirt8z2 crashed after the lock was granted:
>
> [xlator.features.locks.zone2-ssd1-vmstor1-locks.inode]
> path=/.shard/3d55f8cc-cda9-489a-b0a3-fd0f43d67876.27
> mandatory=0
> inodelk-count=3
> lock-dump.domain.domain=zone2-ssd1-vmstor1-replicate-0:self-heal
> inodelk.inodelk[0](ACTIVE)=type=WRITE, whence=0, start=0, len=0, pid =
> 18446744073709551610, owner=d0c6d857a87f0000, client=0x7f885845efa0,
> connection-id=sto2z2.xxx-10975-2018/01/20-10:56:14:649541-zo...
2018 Jan 24
0
Stale locks on shards
...is
> having an active
> lock even though ovirt8z2 crashed after the lock was granted:
>
> [xlator.features.locks.zone2-ssd1-vmstor1-locks.inode]
> path=/.shard/3d55f8cc-cda9-489a-b0a3-fd0f43d67876.27
> mandatory=0
> inodelk-count=3
> lock-dump.domain.domain=zone2-ssd1-vmstor1-replicate-0:self-heal
> inodelk.inodelk[0](ACTIVE)=type=WRITE, whence=0, start=0,
> len=0, pid
> = 18446744073709551610, owner=d0c6d857a87f0000,
> client=0x7f885845efa0,
>...
2018 Jan 25
0
Stale locks on shards
...2 is
>>> having an active
>>> lock even though ovirt8z2 crashed after the lock was
>>> granted:
>>>
>>> [xlator.features.locks.zone2-ssd1-vmstor1-locks.inode]
>>> path=/.shard/3d55f8cc-cda9-489a-b0a3-fd0f43d67876.27
>>> mandatory=0
>>> inodelk-count=3
>>>
>>> lock-dump.domain.domain=zone2-ssd1-vmstor1-replicate-0:self-heal
>>> inodelk.inodelk[0](ACTIVE)=type=WRITE, whence=0,
>>> start=0,
>>> len=0, pid
>>> = 18446744073709551610, owner=d0c6d857a87f0000,
>>> client=0x7f885845...
2018 Jan 25
2
Stale locks on shards
...tive
>>>> lock even though ovirt8z2 crashed after the lock was
>>>> granted:
>>>>
>>>> [xlator.features.locks.zone2-ssd1-vmstor1-locks.inode]
>>>> path=/.shard/3d55f8cc-cda9-489a-b0a3-fd0f43d67876.27
>>>> mandatory=0
>>>> inodelk-count=3
>>>>
>>>> lock-dump.domain.domain=zone2-ssd1-vmstor1-replicate-0:self-heal
>>>> inodelk.inodelk[0](ACTIVE)=type=WRITE, whence=0,
>>>> start=0,
>>>> len=0, pid
>>>> = 18446744073709551610, owner=d0c6d857a87f0000,
>>...
2018 Jan 29
0
Stale locks on shards
...ck even though ovirt8z2 crashed after the lock was
>>>>> granted:
>>>>>
>>>>> [xlator.features.locks.zone2-ssd1-vmstor1-locks.inode]
>>>>> path=/.shard/3d55f8cc-cda9-489a-b0a3-fd0f43d67876.27
>>>>> mandatory=0
>>>>> inodelk-count=3
>>>>>
>>>>> lock-dump.domain.domain=zone2-ssd1-vmstor1-replicate-0:self-heal
>>>>> inodelk.inodelk[0](ACTIVE)=type=WRITE, whence=0,
>>>>> start=0,
>>>>> len=0, pid
>>>>> = 18446744073709551610, owner=d...
2018 Jan 29
2
Stale locks on shards
...ven ovirt8z2 crashed after the lock was
> granted:
>
> [xlator.features.locks.zone2-ssd1-vmstor1-locks.inode]
> path=/.shard/3d55f8cc-cda9-489a-b0a3-fd0f43d67876.27
> mandatory=0
> inodelk-count=3
>
> lock-dump.domain.domain=zone2-ssd1-vmstor1-replicate-0:self-heal
> inodelk.inodelk[0](ACTIVE)=type=WRITE, whence=0,
> start=0,
> len=0, pid
> = 1844674407370955161...
2018 Jan 23
0
Stale locks on shards
... are blocked. I.e. here again it says that ovirt8z2 has an active
>> lock even though ovirt8z2 crashed after the lock was granted:
>>
>> [xlator.features.locks.zone2-ssd1-vmstor1-locks.inode]
>> path=/.shard/3d55f8cc-cda9-489a-b0a3-fd0f43d67876.27
>> mandatory=0
>> inodelk-count=3
>> lock-dump.domain.domain=zone2-ssd1-vmstor1-replicate-0:self-heal
>> inodelk.inodelk[0](ACTIVE)=type=WRITE, whence=0, start=0, len=0, pid
>> = 18446744073709551610, owner=d0c6d857a87f0000,
>> client=0x7f885845efa0,
>>
> connection-id=sto2z2.xxx-10975-2018...
2018 Jan 29
0
Stale locks on shards
...ive
>>> lock even though ovirt8z2 crashed after the lock was
>>> granted:
>>>
>>>
>>> [xlator.features.locks.zone2-ssd1-vmstor1-locks.inode]
>>>
>>> path=/.shard/3d55f8cc-cda9-489a-b0a3-fd0f43d67876.27
>>> mandatory=0
>>> inodelk-count=3
>>>
>>>
>>> lock-dump.domain.domain=zone2-ssd1-vmstor1-replicate-0:self-heal
>>> inodelk.inodelk[0](ACTIVE)=type=WRITE,
>>> whence=0,
>>> start=0,
>>> len=0, pid
>>> = 18446744073709551610,
>>> owner=d0c6d857...
2018 Jan 05
0
Another VM crashed
...-datastore2-server: disconnecting connection from srvpve2-63690-2017/12/07-07:58:44:188020-datastore2-client-0-0-0
[2018-01-02 12:42:01.140829] I [MSGID: 115013] [server-helpers.c:293:do_fd_cleanup] 0-datastore2-server: fd cleanup on /images/201/vm-201-disk-2.qcow2
[2018-01-02 12:42:01.140830] W [inodelk.c:399:pl_inodelk_log_cleanup] 0-datastore2-server: releasing lock on a8d82b3d-1cf9-45cf-9858-d8546710b49c held by {client=0x7f4c840efc50, pid=0 lk-owner=5c6000d0397f0000}
[2018-01-02 12:42:01.140849] I [MSGID: 115013] [server-helpers.c:293:do_fd_cleanup] 0-datastore2-server: fd cleanup on /images/...
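The W line above is the brick-side locks translator releasing inode locks that a disconnected client still held while its fds are cleaned up. A minimal sketch for locating such events, assuming the default brick log location (/var/log/glusterfs/bricks):

  # Find lock-cleanup and fd-cleanup events for recent client disconnects.
  grep -n 'pl_inodelk_log_cleanup\|do_fd_cleanup' /var/log/glusterfs/bricks/*.log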
2018 May 30
2
[ovirt-users] Re: Gluster problems, cluster performance issues
...00 us 645037.00 us 598 OPENDIR
> 0.08 17360.97 us 149.00 us 962156.00 us 94 SETATTR
> 0.12 2733.36 us 50.00 us 1683945.00 us 877 FSTAT
> 0.21 3041.55 us 29.00 us 1021732.00 us 1368 INODELK
> 0.24 389.93 us 107.00 us 550203.00 us 11840 FXATTROP
> 0.34 14820.49 us 38.00 us 1935527.00 us 444 STAT
> 0.88 3765.77 us 42.00 us 1341978.00 us 4581 LOOKUP
> 15.06 19624.04 us 26.00 us...
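Tables like the one above come from gluster's volume profiling; the columns are percentage of total latency, average, minimum and maximum latency in microseconds, number of calls, and the FOP name. A minimal sketch, with VOLNAME as a placeholder since the excerpt does not show the volume name:

  gluster volume profile VOLNAME start   # begin collecting per-brick FOP statistics
  gluster volume profile VOLNAME info    # print cumulative and interval tables like the one above
  gluster volume profile VOLNAME stop    # stop collecting once the data is gathered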
2018 May 30
0
[ovirt-users] Re: Gluster problems, cluster performance issues
...598 OPENDIR
>> 0.08 17360.97 us 149.00 us 962156.00 us 94 SETATTR
>> 0.12 2733.36 us 50.00 us 1683945.00 us 877 FSTAT
>> 0.21 3041.55 us 29.00 us 1021732.00 us 1368 INODELK
>> 0.24 389.93 us 107.00 us 550203.00 us 11840 FXATTROP
>> 0.34 14820.49 us 38.00 us 1935527.00 us 444 STAT
>> 0.88 3765.77 us 42.00 us 1341978.00 us 4581 LOOKUP
>> 15.0...
2018 May 30
1
[ovirt-users] Re: Gluster problems, cluster performance issues
...>>> 0.08 17360.97 us 149.00 us 962156.00 us 94 SETATTR
>>> 0.12 2733.36 us 50.00 us 1683945.00 us 877 FSTAT
>>> 0.21 3041.55 us 29.00 us 1021732.00 us 1368 INODELK
>>> 0.24 389.93 us 107.00 us 550203.00 us 11840 FXATTROP
>>> 0.34 14820.49 us 38.00 us 1935527.00 us 444 STAT
>>> 0.88 3765.77 us 42.00 us 1341978.00 us 4581 L...
2018 Jun 01
0
[ovirt-users] Re: Gluster problems, cluster performance issues
...08 17360.97 us 149.00 us 962156.00 us 94 SETATTR
>>>> 0.12 2733.36 us 50.00 us 1683945.00 us 877 FSTAT
>>>> 0.21 3041.55 us 29.00 us 1021732.00 us 1368 INODELK
>>>> 0.24 389.93 us 107.00 us 550203.00 us 11840 FXATTROP
>>>> 0.34 14820.49 us 38.00 us 1935527.00 us 444 STAT
>>>> 0.88 3765.77 us 42.00 us 1341978.00 us...
2011 Oct 18
2
gluster rebalance taking three months
Hi guys,
we have a rebalance running on eight bricks since July and this is
what the status looks like right now:
===Tue Oct 18 13:45:01 CST 2011 ====
rebalance step 1: layout fix in progress: fixed layout 223623
There are roughly 8T of photos in the storage, so how long should this
rebalance take?
What does the number (in this case 223623) represent?
Our gluster information:
Repository
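The counter in a "layout fix in progress" line is, in general, the number of directories whose layout has been fixed so far, and progress can be polled from the CLI. A minimal sketch, with VOLNAME as a placeholder since the excerpt cuts off before the volume details:

  # Per-node progress of the running rebalance / fix-layout step.
  gluster volume rebalance VOLNAME status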
2011 Feb 04
1
3.1.2 Debian - client_rpc_notify "failed to get the port number for remote subvolume"
I have glusterfs 3.1.2 running on Debian. I'm able to start the volume
and mount it via mount -t glusterfs, and I can see everything. I am
still seeing the following error in /var/log/glusterfs/nfs.log:
[2011-02-04 13:09:16.404851] E
[client-handshake.c:1079:client_query_portmap_cbk]
bhl-volume-client-98: failed to get the port number for remote
subvolume
[2011-02-04 13:09:16.404909] I
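A "failed to get the port number for remote subvolume" error generally means this client (here the built-in NFS server) could not look up a brick's listening port from glusterd. A minimal troubleshooting sketch, assuming shell access to the servers; the volume name bhl-volume is inferred from the client name in the log:

  # Confirm the volume and all of its bricks are defined and started.
  gluster volume info bhl-volume

  # On each brick server, confirm the brick (glusterfsd) processes are
  # running and listening, so glusterd can hand their ports to clients.
  ps ax | grep glusterfsd
  netstat -tlnp | grep gluster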
2018 May 30
1
[ovirt-users] Re: Gluster problems, cluster performance issues
...313379 FSTAT
>> 0.23 1059.84 us 25.00 us 2716124.00 us 38255 LOOKUP
>> 0.47 1024.11 us 54.00 us 6197164.00 us 81455 FXATTROP
>> 1.72 2984.00 us 15.00 us 37098954.00 us 103020 FINODELK
>> 5.92 44315.32 us 51.00 us 24731536.00 us 23957 FSYNC
>> 13.27 2399.78 us 25.00 us 22089540.00 us 991005 READ
>> 37.00 5980.43 us 52.00 us 22099889.00 us 1108976 WRITE
>> 41....