Displaying 12 results from an estimated 12 matches for "dead_nod".
2007 May 17
1
[PATCH] ocfs2: use list_for_each_entry where beneficial
...alive */
-		lock = list_entry (iter, struct dlm_lock, list);
 		if (lock->ml.node != dlm->node_num) {
 			spin_unlock(&res->spinlock);
 			return lock->ml.node;
@@ -3234,8 +3219,7 @@ static int dlm_add_migration_mle(struct
 void dlm_clean_master_list(struct dlm_ctxt *dlm, u8 dead_node)
 {
-	struct list_head *iter, *iter2;
-	struct dlm_master_list_entry *mle;
+	struct dlm_master_list_entry *mle, *next;
 	struct dlm_lock_resource *res;
 	unsigned int hash;
@@ -3245,9 +3229,7 @@ top:
 	/* clean the master list */
 	spin_lock(&dlm->master_lock);
-	list_for_each_safe(iter...
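For context, the conversion this patch performs swaps the raw iterator plus a
manual list_entry() cast for the type-safe list_for_each_entry_safe() helper.
A minimal kernel-style sketch of the before/after shape (struct
dlm_lock_sketch and both function names are illustrative, not from the patch):

#include <linux/types.h>
#include <linux/list.h>

struct dlm_lock_sketch {		/* stand-in for struct dlm_lock */
	u8 node;
	struct list_head list;
};

/* Before: raw iterators, with a manual list_entry() cast on each pass. */
static void drop_dead_locks_old(struct list_head *granted, u8 dead_node)
{
	struct list_head *iter, *iter2;
	struct dlm_lock_sketch *lock;

	list_for_each_safe(iter, iter2, granted) {
		lock = list_entry(iter, struct dlm_lock_sketch, list);
		if (lock->node == dead_node)
			list_del_init(&lock->list);
	}
}

/* After: list_for_each_entry_safe() hides the iterator and the cast,
 * and still tolerates deleting the current entry mid-walk. */
static void drop_dead_locks_new(struct list_head *granted, u8 dead_node)
{
	struct dlm_lock_sketch *lock, *next;

	list_for_each_entry_safe(lock, next, granted, list) {
		if (lock->node == dead_node)
			list_del_init(&lock->list);
	}
}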
2009 Feb 03
10
Convert mle list to a hash
These patches convert the mle list to a hash. The same patches apply to
ocfs2 1.4 as well.
Currently, we use the same number of hash pages for mles and lockres'.
This will be addressed in a future patch that will make both of them
configurable.
Sunil
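Roughly, the series replaces a linear scan of one long master list with
per-bucket hlist lookups. A simplified sketch of the lookup side (the bucket
count, the names, and the mle_find() helper are made up for illustration; the
four-argument hlist_for_each_entry() matches the 2009-era kernels these
patches target):

#include <linux/list.h>
#include <linux/string.h>

#define MLE_HASH_BUCKETS 64	/* illustrative size only */

struct mle_sketch {		/* stand-in for struct dlm_master_list_entry */
	const char *name;
	unsigned int namelen;
	struct hlist_node hash_node;
};

static struct hlist_head mle_hash[MLE_HASH_BUCKETS];

/* Hash lookup: O(1) on average, instead of walking one long list. */
static struct mle_sketch *mle_find(const char *name, unsigned int namelen,
				   unsigned int hashval)
{
	struct hlist_head *bucket = &mle_hash[hashval % MLE_HASH_BUCKETS];
	struct hlist_node *pos;
	struct mle_sketch *mle;

	hlist_for_each_entry(mle, pos, bucket, hash_node) {
		if (mle->namelen == namelen &&
		    !memcmp(mle->name, name, namelen))
			return mle;
	}
	return NULL;
}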
2009 Feb 26
13
o2dlm mle hash patches - round 2
The changes from the last drop are:
1. Patch 11 removes struct dlm_lock_name.
2. Patch 12 is an unrelated bugfix; actually, it is related to a bugfix
that we are currently retracting in mainline. The patch may need more testing:
while I did hit the condition in my testing, Marcos hasn't. I am sending it
because it can be queued for 2.6.30, which gives us more time to test.
3. Patch 13 will be useful
2009 Apr 17
26
OCFS2 1.4: Patches backported from mainline
Please review the list of patches being applied to the ocfs2 1.4 tree.
All patches list the mainline commit hash.
Thanks
Sunil
2013 Apr 28
2
Is this one issue? Do you have any good ideas? Thanks a lot.
...m 3) at 185.200.1.15:7100 shutdown, state 7
Apr 27 17:44:22 ZHJD-VM6 kernel: [ 4236.214606] o2cb: o2dlm has evicted node 1 from domain AB92EF420A5A475ABD6C139B0C7DDD1C
Apr 27 17:44:22 ZHJD-VM6 kernel: [ 4236.214613] (kworker/u:2,19288,6):dlm_begin_reco_handler:2728 AB92EF420A5A475ABD6C139B0C7DDD1C: dead_node previously set to 1, node 4 changing it to 1
Apr 27 17:44:22 ZHJD-VM6 kernel: [ 4236.317544] o2dlm: Node 4 (he) is the Recovery Master for the dead node 1 in domain AB92EF420A5A475ABD6C139B0C7DDD1C
Apr 27 17:44:22 ZHJD-VM6 kernel: [ 4236.317548] o2dlm: End recovery on domain AB92EF420A5A475ABD6C13...
2013 Jul 26
0
[PATCH] ocfs2: dlm_request_all_locks() should deal with the status sent from target node
...quest_from, NULL);
+				 &lr, sizeof(lr), request_from, &status);
 	/* negative status is handled by caller */
 	if (ret < 0)
 		mlog(ML_ERROR, "%s: Error %d send LOCK_REQUEST to node %u "
 		     "to recover dead node %u\n", dlm->name, ret,
 		     request_from, dead_node);
+	else
+		ret = status;
 	// return from here, then
 	// sleep until all received or error
 	return ret;
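The pattern behind the fix: o2net_send_message() can return 0, meaning the
message was delivered, while the remote handler still reports failure through
the final int * out-parameter, so both values must be checked. A
self-contained userspace sketch, with a hypothetical send_msg() stub standing
in for the o2net call:

#include <errno.h>

/* Hypothetical transport stub: delivery succeeds, but the remote
 * handler reports -ENOMEM through the out-parameter. */
static int send_msg(void *data, int len, int target, int *status)
{
	(void)data; (void)len; (void)target;
	*status = -ENOMEM;
	return 0;
}

static int request_locks(void *lr, int len, int request_from)
{
	int status = 0;
	int ret;

	ret = send_msg(lr, len, request_from, &status);
	if (ret < 0)
		return ret;	/* transport-level failure */

	/* Without this, a remote handler error is silently treated
	 * as success: exactly the bug the patch fixes. */
	return status;
}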
2013 Aug 27
0
[patch 07/22] ocfs2: dlm_request_all_locks() should deal with the status sent from target node
...quest_from, NULL);
+				 &lr, sizeof(lr), request_from, &status);
 	/* negative status is handled by caller */
 	if (ret < 0)
 		mlog(ML_ERROR, "%s: Error %d send LOCK_REQUEST to node %u "
 		     "to recover dead node %u\n", dlm->name, ret,
 		     request_from, dead_node);
+	else
+		ret = status;
 	// return from here, then
 	// sleep until all received or error
 	return ret;
2009 Apr 22
1
[PATCH 1/1] OCFS2: speed up dlm_lock_resource hash_table lookups
...he owner
 	 * if necessary */
 	for (i = 0; i < DLM_HASH_BUCKETS; i++) {
-		bucket = &(dlm->lockres_hash[i]);
+		bucket = dlm_lockres_hash(dlm, i);
 		hlist_for_each_entry(res, hash_iter, bucket, hash_node) {
 			if (res->state & DLM_LOCK_RES_RECOVERING) {
 				if (res->owner == dead_node) {
@@ -2259,7 +2259,7 @@ static void dlm_do_local_recovery_cleanu
 	 * need to be fired as a result.
 	 */
 	for (i = 0; i < DLM_HASH_BUCKETS; i++) {
-		bucket = &(dlm->lockres_hash[i]);
+		bucket = dlm_lockres_hash(dlm, i);
 		hlist_for_each_entry(res, iter, bucket, hash_node) {...
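The dlm_lockres_hash() accessor exists because the bucket array is no longer
one flat allocation but is split across page-sized chunks. A simplified
sketch of the indexing (modeled loosely on mainline's
fs/ocfs2/dlm/dlmcommon.h; the sizes and names here are illustrative):

#include <linux/list.h>

#define HASH_PAGES		2	/* illustrative */
#define BUCKETS_PER_PAGE	(4096 / sizeof(struct hlist_head))

struct dlm_ctxt_sketch {		/* stand-in for struct dlm_ctxt */
	struct hlist_head *lockres_hash[HASH_PAGES];	/* one page each */
};

/* Map a flat bucket index onto (page, offset).  Callers keep
 * iterating 0..DLM_HASH_BUCKETS-1 exactly as before; only the
 * accessor changes, which is why the diff above is so mechanical. */
static inline struct hlist_head *
lockres_hash(struct dlm_ctxt_sketch *dlm, unsigned int i)
{
	return dlm->lockres_hash[(i / BUCKETS_PER_PAGE) % HASH_PAGES] +
	       (i % BUCKETS_PER_PAGE);
}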
2009 May 01
0
[PATCH 1/3] OCFS2: speed up dlm_lock_resource hash_table lookups
...he owner
 	 * if necessary */
 	for (i = 0; i < DLM_HASH_BUCKETS; i++) {
-		bucket = &(dlm->lockres_hash[i]);
+		bucket = dlm_lockres_hash(dlm, i);
 		hlist_for_each_entry(res, hash_iter, bucket, hash_node) {
 			if (res->state & DLM_LOCK_RES_RECOVERING) {
 				if (res->owner == dead_node) {
@@ -2259,7 +2259,7 @@ static void dlm_do_local_recovery_cleanu
 	 * need to be fired as a result.
 	 */
 	for (i = 0; i < DLM_HASH_BUCKETS; i++) {
-		bucket = &(dlm->lockres_hash[i]);
+		bucket = dlm_lockres_hash(dlm, i);
 		hlist_for_each_entry(res, iter, bucket, hash_node) {...
2009 Mar 17
33
[git patches] Ocfs2 updates for 2.6.30
Hi,
The following patches comprise the bulk of Ocfs2 updates for the
2.6.30 merge window. Aside from larger, more involved fixes, we're adding
the following features, which I will describe in the order their patches are
mailed.
Sunil's exported some more state to our debugfs files, and
consolidated some other aspects of our debugfs infrastructure. This will
further aid us in debugging
2008 Apr 02
10
[PATCH 0/62] Ocfs2 updates for 2.6.26-rc1
The following series of patches comprises the bulk of our outstanding
changes for Ocfs2.
Aside from the usual set of cleanups and fixes that were inappropriate for
2.6.25, there are a few highlights:
The '/sys/o2cb' directory has been moved to '/sys/fs/o2cb'. The new location
meshes better with modern sysfs layout. A symbolic link has been placed in
the old location so as to