Junxiao Bi
2016-Jan-20 03:13 UTC
[Ocfs2-devel] ocfs2: o2hb: not fence self if storage down
Hi,

This series of patches fixes the issue that when the storage goes down,
all nodes fence themselves due to write timeout. With this patch set,
all nodes will keep running until the storage comes back online, except
when one of the following happens, in which case nodes fence themselves
as before:

1. an io error is returned
2. the network between nodes goes down
3. a node panics

Junxiao Bi (6):
  ocfs2: o2hb: add negotiate timer
  ocfs2: o2hb: add NEGO_TIMEOUT message
  ocfs2: o2hb: add NEGOTIATE_APPROVE message
  ocfs2: o2hb: add some user/debug log
  ocfs2: o2hb: don't negotiate if last hb fail
  ocfs2: o2hb: fix hb hung time

 fs/ocfs2/cluster/heartbeat.c | 181 ++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 175 insertions(+), 6 deletions(-)

Thanks,
Junxiao.
Junxiao Bi
2016-Jan-20 03:13 UTC
[Ocfs2-devel] [PATCH 1/6] ocfs2: o2hb: add negotiate timer
When the storage goes down, all nodes fence themselves due to write
timeout. The negotiate timer is designed to avoid this: with it, a node
will wait until the storage comes back up.

The negotiate timer works in the following way:

1. The timer expires before the write timeout timer; its timeout is half
   of the write timeout for now. It is re-queued along with the write
   timeout timer. When it expires, it sends a NEGO_TIMEOUT message to the
   master node (the node with the lowest node number). This message does
   nothing but mark a bit in a bitmap on the master node recording which
   nodes are negotiating a timeout.

2. If the storage is down, all nodes send this message to the master
   node. When the master node finds that its bitmap covers all online
   nodes, it sends a NEGO_APPROVE message to all nodes one by one; this
   message re-queues both the write timeout timer and the negotiate
   timer.

Any node that doesn't receive this message, or hits an issue while
handling it, will be fenced. If the storage comes back up at any time,
o2hb_thread runs and re-queues all the timers, so nothing is affected
by these two steps.

Signed-off-by: Junxiao Bi <junxiao.bi at oracle.com>
Reviewed-by: Ryan Ding <ryan.ding at oracle.com>
---
 fs/ocfs2/cluster/heartbeat.c | 52 ++++++++++++++++++++++++++++++++++++++----
 1 file changed, 48 insertions(+), 4 deletions(-)

diff --git a/fs/ocfs2/cluster/heartbeat.c b/fs/ocfs2/cluster/heartbeat.c
index a3cc6d2fc896..b601ee95de50 100644
--- a/fs/ocfs2/cluster/heartbeat.c
+++ b/fs/ocfs2/cluster/heartbeat.c
@@ -272,6 +272,10 @@ struct o2hb_region {
 	struct delayed_work	hr_write_timeout_work;
 	unsigned long		hr_last_timeout_start;
 
+	/* negotiate timer, used to negotiate extending hb timeout. */
+	struct delayed_work	hr_nego_timeout_work;
+	unsigned long		hr_nego_node_bitmap[BITS_TO_LONGS(O2NM_MAX_NODES)];
+
 	/* Used during o2hb_check_slot to hold a copy of the block
 	 * being checked because we temporarily have to zero out the
 	 * crc field. */
@@ -320,7 +324,7 @@ static void o2hb_write_timeout(struct work_struct *work)
 	o2quo_disk_timeout();
 }
 
-static void o2hb_arm_write_timeout(struct o2hb_region *reg)
+static void o2hb_arm_timeout(struct o2hb_region *reg)
 {
 	/* Arm writeout only after thread reaches steady state */
 	if (atomic_read(&reg->hr_steady_iterations) != 0)
@@ -338,11 +342,50 @@ static void o2hb_arm_write_timeout(struct o2hb_region *reg)
 	reg->hr_last_timeout_start = jiffies;
 	schedule_delayed_work(&reg->hr_write_timeout_work,
 			      msecs_to_jiffies(O2HB_MAX_WRITE_TIMEOUT_MS));
+
+	cancel_delayed_work(&reg->hr_nego_timeout_work);
+	/* negotiate timeout must be less than write timeout. */
+	schedule_delayed_work(&reg->hr_nego_timeout_work,
+			      msecs_to_jiffies(O2HB_MAX_WRITE_TIMEOUT_MS)/2);
+	memset(reg->hr_nego_node_bitmap, 0, sizeof(reg->hr_nego_node_bitmap));
 }
 
-static void o2hb_disarm_write_timeout(struct o2hb_region *reg)
+static void o2hb_disarm_timeout(struct o2hb_region *reg)
 {
 	cancel_delayed_work_sync(&reg->hr_write_timeout_work);
+	cancel_delayed_work_sync(&reg->hr_nego_timeout_work);
+}
+
+static void o2hb_nego_timeout(struct work_struct *work)
+{
+	struct o2hb_region *reg =
+		container_of(work, struct o2hb_region,
+			     hr_nego_timeout_work.work);
+	unsigned long live_node_bitmap[BITS_TO_LONGS(O2NM_MAX_NODES)];
+	int master_node;
+
+	o2hb_fill_node_map(live_node_bitmap, sizeof(live_node_bitmap));
+	/* lowest node as master node to make negotiate decision. */
+	master_node = find_next_bit(live_node_bitmap, O2NM_MAX_NODES, 0);
+
+	if (master_node == o2nm_this_node()) {
+		set_bit(master_node, reg->hr_nego_node_bitmap);
+		if (memcmp(reg->hr_nego_node_bitmap, live_node_bitmap,
+				sizeof(reg->hr_nego_node_bitmap))) {
+			/* check negotiate bitmap every second to do timeout
+			 * approve decision.
+			 */
+			schedule_delayed_work(&reg->hr_nego_timeout_work,
+				msecs_to_jiffies(1000));
+
+			return;
+		}
+
+		/* approve negotiate timeout request. */
+	} else {
+		/* negotiate timeout with master node. */
+	}
 }
 
 static inline void o2hb_bio_wait_init(struct o2hb_bio_wait_ctxt *wc)
@@ -1033,7 +1076,7 @@ static int o2hb_do_disk_heartbeat(struct o2hb_region *reg)
 	/* Skip disarming the timeout if own slot has stale/bad data */
 	if (own_slot_ok) {
 		o2hb_set_quorum_device(reg);
-		o2hb_arm_write_timeout(reg);
+		o2hb_arm_timeout(reg);
 	}
 
 bail:
@@ -1115,7 +1158,7 @@ static int o2hb_thread(void *data)
 		}
 	}
 
-	o2hb_disarm_write_timeout(reg);
+	o2hb_disarm_timeout(reg);
 
 	/* unclean stop is only used in very bad situation */
 	for(i = 0; !reg->hr_unclean_stop && i < reg->hr_blocks; i++)
@@ -1762,6 +1805,7 @@ static ssize_t o2hb_region_dev_store(struct config_item *item,
 	}
 
 	INIT_DELAYED_WORK(&reg->hr_write_timeout_work, o2hb_write_timeout);
+	INIT_DELAYED_WORK(&reg->hr_nego_timeout_work, o2hb_nego_timeout);
 
 	/*
 	 * A node is considered live after it has beat LIVE_THRESHOLD
-- 
1.7.9.5
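To make the flow of the patch above easier to follow, here is a minimal
user-space model of the master election and the bitmap check. The
8-node arrays, node numbers, and helper names are illustrative
stand-ins for the kernel-side bitmaps and find_next_bit(), not real
o2hb API:

#include <stdio.h>

#define MAX_NODES 8

/* stand-in for find_next_bit() on the live-node bitmap */
static int lowest_live_node(const int *live, int n)
{
	for (int i = 0; i < n; i++)
		if (live[i])
			return i;
	return n;
}

int main(void)
{
	int live[MAX_NODES] = { 0, 1, 1, 0, 1, 0, 0, 0 }; /* nodes 1, 2, 4 live */
	int nego[MAX_NODES] = { 0, 0, 0, 0, 1, 0, 0, 0 }; /* who sent NEGO_TIMEOUT */
	int this_node = 1;
	int master = lowest_live_node(live, MAX_NODES);

	if (master != this_node) {
		printf("node %d: send NEGO_TIMEOUT to master %d\n", this_node, master);
		return 0;
	}

	nego[master] = 1; /* the master marks itself directly */
	for (int i = 0; i < MAX_NODES; i++)
		if (live[i] && !nego[i]) {
			/* the kernel code re-checks the bitmap every second */
			printf("node %d not negotiating yet, re-check later\n", i);
			return 0;
		}
	printf("all live nodes hung, approve timeout extension\n");
	return 0;
}

In this run node 2 is live but has not negotiated yet, so the master
would re-queue the one-second re-check instead of approving.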
Junxiao Bi
2016-Jan-20 03:13 UTC
[Ocfs2-devel] [PATCH 2/6] ocfs2: o2hb: add NEGO_TIMEOUT message
This message is sent to the master node when a non-master node's
negotiate timer expires. The master node records these nodes in a
bitmap, which is used to make the write timeout timer re-queue
decision.

Signed-off-by: Junxiao Bi <junxiao.bi at oracle.com>
Reviewed-by: Ryan Ding <ryan.ding at oracle.com>
---
 fs/ocfs2/cluster/heartbeat.c | 66 +++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 65 insertions(+), 1 deletion(-)

diff --git a/fs/ocfs2/cluster/heartbeat.c b/fs/ocfs2/cluster/heartbeat.c
index b601ee95de50..ecf8a5e21c38 100644
--- a/fs/ocfs2/cluster/heartbeat.c
+++ b/fs/ocfs2/cluster/heartbeat.c
@@ -280,6 +280,10 @@ struct o2hb_region {
 	 * being checked because we temporarily have to zero out the
 	 * crc field. */
 	struct o2hb_disk_heartbeat_block *hr_tmp_block;
+
+	/* Message key for negotiate timeout message. */
+	unsigned int hr_key;
+	struct list_head hr_handler_list;
 };
 
 struct o2hb_bio_wait_ctxt {
@@ -288,6 +292,14 @@ struct o2hb_bio_wait_ctxt {
 	int               wc_error;
 };
 
+enum {
+	O2HB_NEGO_TIMEOUT_MSG = 1,
+};
+
+struct o2hb_nego_msg {
+	u8 node_num;
+};
+
 static void o2hb_write_timeout(struct work_struct *work)
 {
 	int failed, quorum;
@@ -356,6 +368,24 @@ static void o2hb_disarm_timeout(struct o2hb_region *reg)
 	cancel_delayed_work_sync(&reg->hr_nego_timeout_work);
 }
 
+static int o2hb_send_nego_msg(int key, int type, u8 target)
+{
+	struct o2hb_nego_msg msg;
+	int status, ret;
+
+	msg.node_num = o2nm_this_node();
+again:
+	ret = o2net_send_message(type, key, &msg, sizeof(msg),
+			target, &status);
+
+	if (ret == -EAGAIN || ret == -ENOMEM) {
+		msleep(100);
+		goto again;
+	}
+
+	return ret;
+}
+
 static void o2hb_nego_timeout(struct work_struct *work)
 {
 	struct o2hb_region *reg =
@@ -384,8 +414,24 @@ static void o2hb_nego_timeout(struct work_struct *work)
 
 		/* approve negotiate timeout request. */
 	} else {
 		/* negotiate timeout with master node. */
+		o2hb_send_nego_msg(reg->hr_key, O2HB_NEGO_TIMEOUT_MSG,
+			master_node);
 	}
+}
+
+static int o2hb_nego_timeout_handler(struct o2net_msg *msg, u32 len, void *data,
+				void **ret_data)
+{
+	struct o2hb_region *reg = (struct o2hb_region *)data;
+	struct o2hb_nego_msg *nego_msg;
+
+	nego_msg = (struct o2hb_nego_msg *)msg->buf;
+	if (nego_msg->node_num < O2NM_MAX_NODES)
+		set_bit(nego_msg->node_num, reg->hr_nego_node_bitmap);
+	else
+		mlog(ML_ERROR, "got nego timeout message from bad node.\n");
+
+	return 0;
 }
 
 static inline void o2hb_bio_wait_init(struct o2hb_bio_wait_ctxt *wc)
@@ -1493,6 +1539,7 @@ static void o2hb_region_release(struct config_item *item)
 	list_del(&reg->hr_all_item);
 	spin_unlock(&o2hb_live_lock);
 
+	o2net_unregister_handler_list(&reg->hr_handler_list);
 	kfree(reg);
 }
 
@@ -2039,13 +2086,30 @@ static struct config_item *o2hb_heartbeat_group_make_item(struct config_group *g
 
 	config_item_init_type_name(&reg->hr_item, name, &o2hb_region_type);
 
+	/* this is the same way to generate msg key as dlm, for local heartbeat,
+	 * name is also the same, so make initial crc value different to avoid
+	 * message key conflict.
+	 */
+	reg->hr_key = crc32_le(reg->hr_region_num + O2NM_MAX_REGIONS,
+		name, strlen(name));
+	INIT_LIST_HEAD(&reg->hr_handler_list);
+	ret = o2net_register_handler(O2HB_NEGO_TIMEOUT_MSG, reg->hr_key,
+			sizeof(struct o2hb_nego_msg),
+			o2hb_nego_timeout_handler,
+			reg, NULL, &reg->hr_handler_list);
+	if (ret)
+		goto free;
+
 	ret = o2hb_debug_region_init(reg, o2hb_debug_dir);
 	if (ret) {
 		config_item_put(&reg->hr_item);
-		goto free;
+		goto free_handler;
 	}
 
 	return &reg->hr_item;
+
+free_handler:
+	o2net_unregister_handler_list(&reg->hr_handler_list);
 free:
 	kfree(reg);
 	return ERR_PTR(ret);
-- 
1.7.9.5
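The o2hb_send_nego_msg() helper added above retries transient send
failures forever. A user-space sketch of that retry policy, with a
hypothetical send_once() standing in for o2net_send_message() (which
likewise returns negative errno values):

#include <errno.h>
#include <stdio.h>
#include <unistd.h>

static int send_once(int type, int key, int target)
{
	static int attempts;
	/* pretend the first two attempts fail transiently */
	return ++attempts < 3 ? -EAGAIN : 0;
}

static int send_with_retry(int type, int key, int target)
{
	int ret;

	for (;;) {
		ret = send_once(type, key, target);
		/* only transient errors are retried; anything else,
		 * including success, goes back to the caller */
		if (ret != -EAGAIN && ret != -ENOMEM)
			return ret;
		usleep(100 * 1000); /* the kernel code uses msleep(100) */
	}
}

int main(void)
{
	printf("send returned %d\n", send_with_retry(1, 0x1234, 0));
	return 0;
}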
Junxiao Bi
2016-Jan-20 03:13 UTC
[Ocfs2-devel] [PATCH 3/6] ocfs2: o2hb: add NEGOTIATE_APPROVE message
This message is used to re-queue the write timeout timer and the
negotiate timer when all nodes suffer a write hang to the storage.
This keeps nodes from fencing themselves if the storage is down.

Signed-off-by: Junxiao Bi <junxiao.bi at oracle.com>
Reviewed-by: Ryan Ding <ryan.ding at oracle.com>
---
 fs/ocfs2/cluster/heartbeat.c | 28 +++++++++++++++++++++++++++-
 1 file changed, 27 insertions(+), 1 deletion(-)

diff --git a/fs/ocfs2/cluster/heartbeat.c b/fs/ocfs2/cluster/heartbeat.c
index ecf8a5e21c38..d5ef8dce08da 100644
--- a/fs/ocfs2/cluster/heartbeat.c
+++ b/fs/ocfs2/cluster/heartbeat.c
@@ -294,6 +294,7 @@ struct o2hb_bio_wait_ctxt {
 
 enum {
 	O2HB_NEGO_TIMEOUT_MSG = 1,
+	O2HB_NEGO_APPROVE_MSG = 2,
 };
 
 struct o2hb_nego_msg {
@@ -392,7 +393,7 @@ static void o2hb_nego_timeout(struct work_struct *work)
 		container_of(work, struct o2hb_region,
 			     hr_nego_timeout_work.work);
 	unsigned long live_node_bitmap[BITS_TO_LONGS(O2NM_MAX_NODES)];
-	int master_node;
+	int master_node, i;
 
 	o2hb_fill_node_map(live_node_bitmap, sizeof(live_node_bitmap));
 	/* lowest node as master node to make negotiate decision. */
@@ -412,6 +413,17 @@ static void o2hb_nego_timeout(struct work_struct *work)
 		}
 
 		/* approve negotiate timeout request. */
+		o2hb_arm_timeout(reg);
+
+		i = -1;
+		while ((i = find_next_bit(live_node_bitmap,
+				O2NM_MAX_NODES, i + 1)) < O2NM_MAX_NODES) {
+			if (i == master_node)
+				continue;
+
+			o2hb_send_nego_msg(reg->hr_key,
+					O2HB_NEGO_APPROVE_MSG, i);
+		}
 	} else {
 		/* negotiate timeout with master node. */
 		o2hb_send_nego_msg(reg->hr_key, O2HB_NEGO_TIMEOUT_MSG,
@@ -434,6 +446,13 @@ static int o2hb_nego_timeout_handler(struct o2net_msg *msg, u32 len, void *data,
 	return 0;
 }
 
+static int o2hb_nego_approve_handler(struct o2net_msg *msg, u32 len, void *data,
+				void **ret_data)
+{
+	o2hb_arm_timeout((struct o2hb_region *)data);
+	return 0;
+}
+
 static inline void o2hb_bio_wait_init(struct o2hb_bio_wait_ctxt *wc)
 {
 	atomic_set(&wc->wc_num_reqs, 1);
@@ -2100,6 +2119,13 @@ static struct config_item *o2hb_heartbeat_group_make_item(struct config_group *g
 	if (ret)
 		goto free;
 
+	ret = o2net_register_handler(O2HB_NEGO_APPROVE_MSG, reg->hr_key,
+			sizeof(struct o2hb_nego_msg),
+			o2hb_nego_approve_handler,
+			reg, NULL, &reg->hr_handler_list);
+	if (ret)
+		goto free_handler;
+
 	ret = o2hb_debug_region_init(reg, o2hb_debug_dir);
 	if (ret) {
 		config_item_put(&reg->hr_item);
-- 
1.7.9.5
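The approve broadcast introduced here walks the live-node bitmap and
skips the master itself. A compact user-space sketch of that loop (the
live map, master number, and send_approve() are illustrative, not o2hb
API):

#include <stdio.h>

#define MAX_NODES 8

static void send_approve(int node)
{
	printf("NEGO_APPROVE -> node %d\n", node);
}

int main(void)
{
	int live[MAX_NODES] = { 0, 1, 1, 0, 1, 0, 0, 0 }; /* nodes 1, 2, 4 live */
	int master = 1;                                   /* lowest live node */

	for (int i = 0; i < MAX_NODES; i++) {
		if (!live[i] || i == master)
			continue;
		/* each receiver re-arms its write timeout and negotiate
		 * timers on receipt */
		send_approve(i);
	}
	return 0;
}

Note the master re-arms its own timers directly before broadcasting, so
it never needs to send the message to itself.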
Junxiao Bi
2016-Jan-20 03:13 UTC
[Ocfs2-devel] [PATCH 4/6] ocfs2: o2hb: add some user/debug log
Signed-off-by: Junxiao Bi <junxiao.bi at oracle.com>
Reviewed-by: Ryan Ding <ryan.ding at oracle.com>
---
 fs/ocfs2/cluster/heartbeat.c | 39 ++++++++++++++++++++++++++++++++-------
 1 file changed, 32 insertions(+), 7 deletions(-)

diff --git a/fs/ocfs2/cluster/heartbeat.c b/fs/ocfs2/cluster/heartbeat.c
index d5ef8dce08da..6c57fd21e597 100644
--- a/fs/ocfs2/cluster/heartbeat.c
+++ b/fs/ocfs2/cluster/heartbeat.c
@@ -292,6 +292,8 @@ struct o2hb_bio_wait_ctxt {
 	int               wc_error;
 };
 
+#define O2HB_NEGO_TIMEOUT_MS (O2HB_MAX_WRITE_TIMEOUT_MS/2)
+
 enum {
 	O2HB_NEGO_TIMEOUT_MSG = 1,
 	O2HB_NEGO_APPROVE_MSG = 2,
@@ -359,7 +361,7 @@ static void o2hb_arm_timeout(struct o2hb_region *reg)
 	cancel_delayed_work(&reg->hr_nego_timeout_work);
 	/* negotiate timeout must be less than write timeout. */
 	schedule_delayed_work(&reg->hr_nego_timeout_work,
-			      msecs_to_jiffies(O2HB_MAX_WRITE_TIMEOUT_MS)/2);
+			      msecs_to_jiffies(O2HB_NEGO_TIMEOUT_MS));
 	memset(reg->hr_nego_node_bitmap, 0, sizeof(reg->hr_nego_node_bitmap));
 }
 
@@ -393,14 +395,19 @@ static void o2hb_nego_timeout(struct work_struct *work)
 		container_of(work, struct o2hb_region,
 			     hr_nego_timeout_work.work);
 	unsigned long live_node_bitmap[BITS_TO_LONGS(O2NM_MAX_NODES)];
-	int master_node, i;
+	int master_node, i, ret;
 
 	o2hb_fill_node_map(live_node_bitmap, sizeof(live_node_bitmap));
 	/* lowest node as master node to make negotiate decision. */
 	master_node = find_next_bit(live_node_bitmap, O2NM_MAX_NODES, 0);
 
 	if (master_node == o2nm_this_node()) {
-		set_bit(master_node, reg->hr_nego_node_bitmap);
+		if (!test_bit(master_node, reg->hr_nego_node_bitmap)) {
+			printk(KERN_NOTICE "o2hb: node %d hb write hung for %ds on region %s (%s).\n",
+				o2nm_this_node(), O2HB_NEGO_TIMEOUT_MS/1000,
+				config_item_name(&reg->hr_item), reg->hr_dev_name);
+			set_bit(master_node, reg->hr_nego_node_bitmap);
+		}
 		if (memcmp(reg->hr_nego_node_bitmap, live_node_bitmap,
 				sizeof(reg->hr_nego_node_bitmap))) {
 			/* check negotiate bitmap every second to do timeout
@@ -412,6 +419,8 @@ static void o2hb_nego_timeout(struct work_struct *work)
 			return;
 		}
 
+		printk(KERN_NOTICE "o2hb: all nodes hb write hung, maybe region %s (%s) is down.\n",
+			config_item_name(&reg->hr_item), reg->hr_dev_name);
 		/* approve negotiate timeout request. */
 		o2hb_arm_timeout(reg);
 
@@ -421,13 +430,23 @@ static void o2hb_nego_timeout(struct work_struct *work)
 			if (i == master_node)
 				continue;
 
-			o2hb_send_nego_msg(reg->hr_key,
+			mlog(ML_HEARTBEAT, "send NEGO_APPROVE msg to node %d\n", i);
+			ret = o2hb_send_nego_msg(reg->hr_key,
 					O2HB_NEGO_APPROVE_MSG, i);
+			if (ret)
+				mlog(ML_ERROR, "send NEGO_APPROVE msg to node %d fail %d\n",
+					i, ret);
 		}
 	} else {
 		/* negotiate timeout with master node. */
-		o2hb_send_nego_msg(reg->hr_key, O2HB_NEGO_TIMEOUT_MSG,
-				master_node);
+		printk(KERN_NOTICE "o2hb: node %d hb write hung for %ds on region %s (%s), negotiate timeout with node %d.\n",
+			o2nm_this_node(), O2HB_NEGO_TIMEOUT_MS/1000, config_item_name(&reg->hr_item),
+			reg->hr_dev_name, master_node);
+		ret = o2hb_send_nego_msg(reg->hr_key, O2HB_NEGO_TIMEOUT_MSG,
+				master_node);
+		if (ret)
+			mlog(ML_ERROR, "send NEGO_TIMEOUT msg to node %d fail %d\n",
+				master_node, ret);
 	}
 }
 
@@ -438,6 +457,8 @@ static int o2hb_nego_timeout_handler(struct o2net_msg *msg, u32 len, void *data,
 	struct o2hb_nego_msg *nego_msg;
 
 	nego_msg = (struct o2hb_nego_msg *)msg->buf;
+	printk(KERN_NOTICE "o2hb: receive negotiate timeout message from node %d on region %s (%s).\n",
+		nego_msg->node_num, config_item_name(&reg->hr_item), reg->hr_dev_name);
 	if (nego_msg->node_num < O2NM_MAX_NODES)
 		set_bit(nego_msg->node_num, reg->hr_nego_node_bitmap);
 	else
@@ -449,7 +470,11 @@ static int o2hb_nego_timeout_handler(struct o2net_msg *msg, u32 len, void *data,
 static int o2hb_nego_approve_handler(struct o2net_msg *msg, u32 len, void *data,
 				void **ret_data)
 {
-	o2hb_arm_timeout((struct o2hb_region *)data);
+	struct o2hb_region *reg = (struct o2hb_region *)data;
+
+	printk(KERN_NOTICE "o2hb: negotiate timeout approved by master node on region %s (%s).\n",
+		config_item_name(&reg->hr_item), reg->hr_dev_name);
+	o2hb_arm_timeout(reg);
 	return 0;
 }
 
-- 
1.7.9.5
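Assuming a two-node cluster with node 0 as master, a storage outage
would now leave a kernel-log trail roughly like the one below. All
values are illustrative: the region name, device, and node numbers are
made up, and the hung-time figure is O2HB_NEGO_TIMEOUT_MS/1000, which
depends on the configured write timeout:

(on master node 0)
o2hb: node 0 hb write hung for 60s on region MYREGION (sdb1).
o2hb: receive negotiate timeout message from node 1 on region MYREGION (sdb1).
o2hb: all nodes hb write hung, maybe region MYREGION (sdb1) is down.

(on non-master node 1)
o2hb: node 1 hb write hung for 60s on region MYREGION (sdb1), negotiate timeout with node 0.
o2hb: negotiate timeout approved by master node on region MYREGION (sdb1).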
Junxiao Bi
2016-Jan-20 03:13 UTC
[Ocfs2-devel] [PATCH 5/6] ocfs2: o2hb: don't negotiate if last hb fail
Sometimes an io error is returned when the storage has been down for a
while. For an iscsi device, for example, the storage is made offline
when the session times out, and this makes all io return -EIO. In this
case, nodes shouldn't negotiate a timeout; they should fence
themselves. So let nodes fence themselves when o2hb_do_disk_heartbeat
returns an error; this is the same behavior as o2hb without the
negotiate timer.

Signed-off-by: Junxiao Bi <junxiao.bi at oracle.com>
Reviewed-by: Ryan Ding <ryan.ding at oracle.com>
---
 fs/ocfs2/cluster/heartbeat.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/fs/ocfs2/cluster/heartbeat.c b/fs/ocfs2/cluster/heartbeat.c
index 6c57fd21e597..cb931381f474 100644
--- a/fs/ocfs2/cluster/heartbeat.c
+++ b/fs/ocfs2/cluster/heartbeat.c
@@ -284,6 +284,9 @@ struct o2hb_region {
 	/* Message key for negotiate timeout message. */
 	unsigned int hr_key;
 	struct list_head hr_handler_list;
+
+	/* last hb status, 0 for success, other value for error. */
+	int hr_last_hb_status;
 };
 
 struct o2hb_bio_wait_ctxt {
@@ -397,6 +400,12 @@ static void o2hb_nego_timeout(struct work_struct *work)
 	unsigned long live_node_bitmap[BITS_TO_LONGS(O2NM_MAX_NODES)];
 	int master_node, i, ret;
 
+	/* don't negotiate timeout if last hb failed since it is very
+	 * possible io failed. Should let write timeout fence self.
+	 */
+	if (reg->hr_last_hb_status)
+		return;
+
 	o2hb_fill_node_map(live_node_bitmap, sizeof(live_node_bitmap));
 	/* lowest node as master node to make negotiate decision. */
 	master_node = find_next_bit(live_node_bitmap, O2NM_MAX_NODES, 0);
@@ -1230,6 +1239,7 @@ static int o2hb_thread(void *data)
 		before_hb = ktime_get_real();
 
 		ret = o2hb_do_disk_heartbeat(reg);
+		reg->hr_last_hb_status = ret;
 
 		after_hb = ktime_get_real();
 
-- 
1.7.9.5
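The guard added above yields a simple decision rule at negotiate-timer
expiry. A tiny self-contained sketch of it (should_negotiate() is an
illustrative name, not a kernel function):

#include <stdio.h>

/* returns 1 when the node should negotiate, 0 when it should let the
 * write-timeout path fence it as before */
static int should_negotiate(int last_hb_status)
{
	/* an explicit io error (e.g. -EIO after an iscsi session timeout)
	 * means the device is gone for this node only; don't negotiate */
	return last_hb_status == 0;
}

int main(void)
{
	printf("write hung, no error -> negotiate: %d\n", should_negotiate(0));
	printf("write failed (-EIO)  -> negotiate: %d\n", should_negotiate(-5));
	return 0;
}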
Junxiao Bi
2016-Jan-20 03:13 UTC
[Ocfs2-devel] [PATCH 6/6] ocfs2: o2hb: fix hb hung time
hr_last_timeout_start should be set to the last time when hb was still
OK, so that when an hb write times out, the hung time is
(jiffies - hr_last_timeout_start).

Signed-off-by: Junxiao Bi <junxiao.bi at oracle.com>
Reviewed-by: Ryan Ding <ryan.ding at oracle.com>
---
 fs/ocfs2/cluster/heartbeat.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fs/ocfs2/cluster/heartbeat.c b/fs/ocfs2/cluster/heartbeat.c
index cb931381f474..a3ce5a734b7b 100644
--- a/fs/ocfs2/cluster/heartbeat.c
+++ b/fs/ocfs2/cluster/heartbeat.c
@@ -357,7 +357,6 @@ static void o2hb_arm_timeout(struct o2hb_region *reg)
 		spin_unlock(&o2hb_live_lock);
 	}
 	cancel_delayed_work(&reg->hr_write_timeout_work);
-	reg->hr_last_timeout_start = jiffies;
 	schedule_delayed_work(&reg->hr_write_timeout_work,
 			      msecs_to_jiffies(O2HB_MAX_WRITE_TIMEOUT_MS));
 
@@ -1176,6 +1175,7 @@ static int o2hb_do_disk_heartbeat(struct o2hb_region *reg)
 	if (own_slot_ok) {
 		o2hb_set_quorum_device(reg);
 		o2hb_arm_timeout(reg);
+		reg->hr_last_timeout_start = jiffies;
 	}
 
 bail:
-- 
1.7.9.5
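A small user-space illustration of the accounting this patch fixes: the
timestamp is now taken only after a successful heartbeat write, so the
hang is measured from the last good write rather than from the last
time the timer was armed (here time() stands in for jiffies):

#include <stdio.h>
#include <time.h>

int main(void)
{
	/* pretend the last successful heartbeat finished 75 seconds ago */
	time_t last_ok = time(NULL) - 75;

	/* kernel equivalent: jiffies - reg->hr_last_timeout_start */
	printf("hb hung for %lds\n", (long)(time(NULL) - last_ok));
	return 0;
}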
Hi Junxiao,

Thanks for your fix. Just one quick question: does this fix only affect
the OCFS2 O2CB case? If the user selects pacemaker as the cluster
stack, will the OCFS2 file system encounter the same problem?

Thanks
Gang

>>> 
> Hi,
>
> This series of patches fixes the issue that when the storage goes
> down, all nodes fence themselves due to write timeout.
> [...]
Hi Junxiao,

Thanks for the patch set. In case only one node's storage link goes
down, if that node doesn't fence itself, the other nodes will still
check and mark it dead, which will cause a cluster membership
inconsistency. I cannot see any logic in your patch set to handle this.
Am I missing something?

On 2016/1/20 11:13, Junxiao Bi wrote:
> Hi,
>
> This series of patches fixes the issue that when the storage goes
> down, all nodes fence themselves due to write timeout.
> [...]
Hi, Junxiao!

We can't find the expected fencing log after a node fences itself. We
know there is a message like the following in the source code:

    printk(KERN_ERR "*** ocfs2 is very sorry to be fencing this "
           "system by restarting ***\n");

But we NEVER found this message in /var/log/messages or in the last
"dmesg". Do you mean we can find this message in the local fs log after
applying this patch set? Or is there any way to capture this output
(without netconsole)? Thanks.

rwxybh

From: Junxiao Bi
Date: 2016-01-20 11:13
To: ocfs2-devel
CC: mfasheh
Subject: [Ocfs2-devel] ocfs2: o2hb: not fence self if storage down

> Hi,
>
> This series of patches fixes the issue that when the storage goes
> down, all nodes fence themselves due to write timeout.
> [...]