zhu.shangzhong at zte.com.cn
2018-Feb-26 09:01 UTC
[Samba] [ctdb] Unable to take recovery lock - contention
When ctdb is starting, the "Unable to take recovery lock - contention" message is logged continuously. In which cases is this "unable to take lock" error output? Thanks!

The ctdb logs follow:

2018/02/12 19:38:51.147959 ctdbd[5615]: CTDB starting on node
2018/02/12 19:38:51.528921 ctdbd[6602]: Starting CTDBD (Version 4.6.10) as PID: 6602
2018/02/12 19:38:51.529060 ctdbd[6602]: Created PID file /run/ctdb/ctdbd.pid
2018/02/12 19:38:51.529120 ctdbd[6602]: Listening to ctdb socket /var/run/ctdb/ctdbd.socket
2018/02/12 19:38:51.529146 ctdbd[6602]: Set real-time scheduler priority
2018/02/12 19:38:51.648117 ctdbd[6602]: Starting event daemon /usr/libexec/ctdb/ctdb_eventd -e /etc/ctdb/events.d -s /var/run/ctdb/eventd.sock -P 6602 -l file:/var/log/log.ctdb -d NOTICE
2018/02/12 19:38:51.648390 ctdbd[6602]: connect() failed, errno=2
2018/02/12 19:38:51.693790 ctdb-eventd[6633]: listening on /var/run/ctdb/eventd.sock
2018/02/12 19:38:51.693893 ctdb-eventd[6633]: daemon started, pid=6633
2018/02/12 19:38:52.648474 ctdbd[6602]: Set runstate to INIT (1)
2018/02/12 19:38:54.505780 ctdbd[6602]: PNN is 1
2018/02/12 19:38:54.574993 ctdbd[6602]: Vacuuming is disabled for persistent database ctdb.tdb
2018/02/12 19:38:54.576297 ctdbd[6602]: Attached to database '/var/lib/ctdb/persistent/ctdb.tdb.1' with flags 0x400
2018/02/12 19:38:54.576322 ctdbd[6602]: Ignoring persistent database 'ctdb.tdb.2'
2018/02/12 19:38:54.576339 ctdbd[6602]: Ignoring persistent database 'ctdb.tdb.0'
2018/02/12 19:38:54.576364 ctdbd[6602]: Freeze db: ctdb.tdb
2018/02/12 19:38:54.576393 ctdbd[6602]: Set lock helper to "/usr/libexec/ctdb/ctdb_lock_helper"
2018/02/12 19:38:54.579527 ctdbd[6602]: Set runstate to SETUP (2)
2018/02/12 19:38:54.881828 ctdbd[6602]: Keepalive monitoring has been started
2018/02/12 19:38:54.881873 ctdbd[6602]: Set runstate to FIRST_RECOVERY (3)
2018/02/12 19:38:54.882020 ctdb-recoverd[7182]: monitor_cluster starting
2018/02/12 19:38:54.882620 ctdb-recoverd[7182]: Initial recovery master set - forcing election
2018/02/12 19:38:54.882702 ctdbd[6602]: This node (1) is now the recovery master
2018/02/12 19:38:55.882735 ctdbd[6602]: CTDB_WAIT_UNTIL_RECOVERED
2018/02/12 19:38:56.902874 ctdbd[6602]: CTDB_WAIT_UNTIL_RECOVERED
2018/02/12 19:38:57.885800 ctdb-recoverd[7182]: Election period ended
2018/02/12 19:38:57.886134 ctdb-recoverd[7182]: Node:1 was in recovery mode. Start recovery process
2018/02/12 19:38:57.886160 ctdb-recoverd[7182]: ../ctdb/server/ctdb_recoverd.c:1267 Starting do_recovery
2018/02/12 19:38:57.886187 ctdb-recoverd[7182]: Attempting to take recovery lock (/share-fs/export/ctdb/.ctdb/reclock)
2018/02/12 19:38:57.886243 ctdb-recoverd[7182]: Set cluster mutex helper to "/usr/libexec/ctdb/ctdb_mutex_fcntl_helper"
2018/02/12 19:38:57.899722 ctdb-recoverd[7182]: Unable to take recovery lock - contention
2018/02/12 19:38:57.899763 ctdb-recoverd[7182]: Unable to get recovery lock - retrying recovery
2018/02/12 19:38:57.903138 ctdbd[6602]: CTDB_WAIT_UNTIL_RECOVERED
2018/02/12 19:38:58.887310 ctdb-recoverd[7182]: ../ctdb/server/ctdb_recoverd.c:1267 Starting do_recovery
2018/02/12 19:38:58.887353 ctdb-recoverd[7182]: Attempting to take recovery lock (/share-fs/export/ctdb/.ctdb/reclock)
2018/02/12 19:38:58.893531 ctdb-recoverd[7182]: Unable to take recovery lock - contention
2018/02/12 19:38:58.893571 ctdb-recoverd[7182]: Unable to get recovery lock - retrying recovery
2018/02/12 19:38:58.903314 ctdbd[6602]: CTDB_WAIT_UNTIL_RECOVERED
2018/02/12 19:38:59.891024 ctdb-recoverd[7182]: ../ctdb/server/ctdb_recoverd.c:1267 Starting do_recovery
2018/02/12 19:38:59.891080 ctdb-recoverd[7182]: Attempting to take recovery lock (/share-fs/export/ctdb/.ctdb/reclock)
2018/02/12 19:38:59.898336 ctdb-recoverd[7182]: Unable to take recovery lock - contention
2018/02/12 19:38:59.898397 ctdb-recoverd[7182]: Unable to get recovery lock - retrying recovery
2018/02/12 19:38:59.904710 ctdbd[6602]: CTDB_WAIT_UNTIL_RECOVERED
2018/02/12 19:39:00.893673 ctdb-recoverd[7182]: ../ctdb/server/ctdb_recoverd.c:1267 Starting do_recovery
2018/02/12 19:39:00.893741 ctdb-recoverd[7182]: Attempting to take recovery lock (/share-fs/export/ctdb/.ctdb/reclock)
2018/02/12 19:39:00.901094 ctdb-recoverd[7182]: Unable to take recovery lock - contention
2018/02/12 19:39:00.901152 ctdb-recoverd[7182]: Unable to get recovery lock - retrying recovery
2018/02/12 19:39:00.911007 ctdbd[6602]: CTDB_WAIT_UNTIL_RECOVERED
2018/02/12 19:39:01.895044 ctdb-recoverd[7182]: ../ctdb/server/ctdb_recoverd.c:1267 Starting do_recovery
2018/02/12 19:39:01.895106 ctdb-recoverd[7182]: Attempting to take recovery lock (/share-fs/export/ctdb/.ctdb/reclock)
2018/02/12 19:39:01.902379 ctdb-recoverd[7182]: Unable to take recovery lock - contention
2018/02/12 19:39:01.902451 ctdb-recoverd[7182]: Unable to get recovery lock - retrying recovery
2018/02/12 19:39:01.912054 ctdbd[6602]: CTDB_WAIT_UNTIL_RECOVERED
2018/02/12 19:39:02.896539 ctdb-recoverd[7182]: ../ctdb/server/ctdb_recoverd.c:1267 Starting do_recovery
2018/02/12 19:39:02.896597 ctdb-recoverd[7182]: Attempting to take recovery lock (/share-fs/export/ctdb/.ctdb/reclock)
2018/02/12 19:39:02.904674 ctdb-recoverd[7182]: Unable to take recovery lock - contention
2018/02/12 19:39:02.904736 ctdb-recoverd[7182]: Unable to get recovery lock - retrying recovery
2018/02/12 19:39:02.912896 ctdbd[6602]: CTDB_WAIT_UNTIL_RECOVERED
2018/02/12 19:39:03.898495 ctdb-recoverd[7182]: ../ctdb/server/ctdb_recoverd.c:1267 Starting do_recovery
2018/02/12 19:39:03.898548 ctdb-recoverd[7182]: Attempting to take recovery lock (/share-fs/export/ctdb/.ctdb/reclock)
2018/02/12 19:39:03.904876 ctdb-recoverd[7182]: Unable to take recovery lock - contention
2018/02/12 19:39:03.904929 ctdb-recoverd[7182]: Unable to get recovery lock - retrying recovery
2018/02/12 19:39:03.913736 ctdbd[6602]: CTDB_WAIT_UNTIL_RECOVERED
2018/02/12 19:39:04.899872 ctdb-recoverd[7182]: ../ctdb/server/ctdb_recoverd.c:1267 Starting do_recovery
2018/02/12 19:39:04.899928 ctdb-recoverd[7182]: Attempting to take recovery lock (/share-fs/export/ctdb/.ctdb/reclock)
2018/02/12 19:39:04.907784 ctdb-recoverd[7182]: Unable to take recovery lock - contention
2018/02/12 19:39:04.907837 ctdb-recoverd[7182]: Unable to get recovery lock - retrying recovery
2018/02/12 19:39:04.914048 ctdbd[6602]: CTDB_WAIT_UNTIL_RECOVERED
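Background on the error being asked about: the helper named in the log, ctdb_mutex_fcntl_helper, tries to take an exclusive fcntl (POSIX byte-range) lock on the reclock file, so "contention" means some other process already holds that lock on the shared filesystem - typically the recovery daemon on another node, a stale holder, or a cluster filesystem that is not granting the lock. A minimal sketch of that failure mode (assumptions: Linux/POSIX, a throwaway temp file standing in for the real reclock path):

```python
import fcntl
import os
import tempfile

# Stand-in for the reclock file. The real path is whatever CTDB's
# recovery lock setting points at on the shared filesystem.
tmp_fd, path = tempfile.mkstemp()
os.close(tmp_fd)

fd = os.open(path, os.O_RDWR)
# First "node": take an exclusive POSIX record lock, non-blocking.
fcntl.lockf(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)

r, w = os.pipe()
child = os.fork()
if child == 0:
    # Second "node". It must be a separate process: fcntl record locks
    # are owned per-process, so the child does not share the parent's
    # lock ownership and its attempt genuinely conflicts.
    os.close(r)
    cfd = os.open(path, os.O_RDWR)
    try:
        fcntl.lockf(cfd, fcntl.LOCK_EX | fcntl.LOCK_NB)
        os.write(w, b"taken")
    except OSError:
        # EAGAIN/EACCES: the lock is already held elsewhere.
        os.write(w, b"contention")
    os._exit(0)

os.close(w)
result = os.read(r, 32).decode()
os.waitpid(child, 0)
print(result)  # -> contention
```

This is only an illustration of the locking primitive, not CTDB's actual helper code; the point is that the "contention" message is reported whenever the non-blocking lock attempt finds an existing holder, whoever that holder is.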
Rowland Penny
2018-Feb-26 09:12 UTC
[Samba] [ctdb] Unable to take recovery lock - contention
On Mon, 26 Feb 2018 17:01:29 +0800 (CST), "zhu.shangzhong--- via samba" <samba at lists.samba.org> wrote:

> [original message, base64-encoded in the raw mail, omitted]

Excuse me, but could you say that again, but this time in a readable form?

Rowland
On Monday, 26 February 2018, 17:01:29 CET, zhu.shangzhong--- via samba wrote:

Decoded base64 encoded body: [the same question and ctdb logs as in the original post above]

--
Regards,
Harry Jede