Displaying 20 results from an estimated 40 matches for "node_num".
2006 Jan 09
0
[PATCH 01/11] ocfs2: event-driven quorum
...after %u "
"milliseconds\n", reg->hr_dev_name,
@@ -588,6 +589,7 @@ static void o2hb_queue_node_event(struct
{
assert_spin_locked(&o2hb_live_lock);
+ INIT_LIST_HEAD(&event->hn_item);
event->hn_event_type = type;
event->hn_node = node;
event->hn_node_num = node_num;
@@ -598,6 +600,18 @@ static void o2hb_queue_node_event(struct
list_add_tail(&event->hn_item, &o2hb_node_events);
}
+void o2hb_notify(enum o2hb_callback_type type, struct o2nm_node *node,
+ int node_num)
+{
+ struct o2hb_node_event event;
+
+ spin_lock(&am...
2010 Aug 26
1
[PATCH 2/5] ocfs2/dlm: add lockres as parameter to dlm_new_lock()
...>flags |= DLM_LKSB_GET_LVB;
+ mlog(0, "set DLM_LKSB_GET_LVB flag\n");
+ }
+
dlm_lock_attach_lockres(newlock, res);
status = dlmlock_master(dlm, res, newlock, be32_to_cpu(create->flags));
@@ -678,16 +679,6 @@ retry_convert:
goto error;
}
- dlm_get_next_cookie(dlm->node_num, &tmpcookie);
- lock = dlm_new_lock(mode, dlm->node_num, tmpcookie, lksb);
- if (!lock) {
- dlm_error(status);
- goto error;
- }
-
- if (!recovery)
- dlm_wait_for_recovery(dlm);
-
/* find or create the lock resource */
res = dlm_get_lock_resource(dlm, name, namelen, flags);...
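For context, the hunk above removes the early dlm_new_lock() call, apparently so the lock can be created once the lock resource is available, matching the new signature that takes the lockres. A minimal userspace sketch of that refactor shape, with hypothetical stand-in names rather than the dlm types:

/* Sketch only: after the refactor, the constructor receives the resource
 * directly and attaches it at creation time.  Names are stand-ins. */
#include <stdio.h>
#include <stdlib.h>

struct resource { const char *name; };
struct lock { int mode; struct resource *res; };

static struct lock *new_lock(int mode, struct resource *res)
{
        struct lock *lk = calloc(1, sizeof(*lk));

        if (!lk)
                return NULL;
        lk->mode = mode;
        lk->res = res;        /* attach at creation time */
        return lk;
}

int main(void)
{
        struct resource res = { .name = "demo" };
        /* Find or create the resource first, then build the lock on it. */
        struct lock *lk = new_lock(3, &res);

        if (!lk)
                return 1;
        printf("lock mode %d on %s\n", lk->mode, lk->res->name);
        free(lk);
        return 0;
}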
2008 Apr 02
10
[PATCH 0/62] Ocfs2 updates for 2.6.26-rc1
The following series of patches comprises the bulk of our outstanding
changes for Ocfs2.
Aside from the usual set of cleanups and fixes that were inappropriate for
2.6.25, there are a few highlights:
The '/sys/o2cb' directory has been moved to '/sys/fs/o2cb'. The new location
meshes better with modern sysfs layout. A symbolic link has been placed in
the old location so as to
2013 Nov 01
1
How to break out of the endless loop in the recovery thread? Thanks a lot.
...fs2_recovery_thread, there may be an endless loop which results in a super-large syslog file.
__ocfs2_recovery_thread
{
................................................
while (rm->rm_used) {
.............................................
status = ocfs2_recover_node(osb, node_num, slot_num);
skip_recovery:
if (!status) {
ocfs2_recovery_map_clear(osb, node_num);
} else {
mlog(ML_ERROR,
"Error %d recovering node %d on device (%u,%u)!\n",...
2007 May 17
1
[PATCH] ocfs: use list_for_each_entry where beneficial
...43,7 @@ static int dlm_is_lockres_migrateable(st
ret = 0;
queue = &res->granted;
for (i = 0; i < 3; i++) {
- list_for_each(iter, queue) {
- lock = list_entry(iter, struct dlm_lock, list);
+ list_for_each_entry(lock, queue, list) {
++count;
if (lock->ml.node == dlm->node_num) {
mlog(0, "found a lock owned by this node still "
@@ -2923,18 +2912,16 @@ again:
static void dlm_remove_nonlocal_locks(struct dlm_ctxt *dlm,
struct dlm_lock_resource *res)
{
- struct list_head *iter, *iter2;
struct list_head *queue = &res->granted;
int i, bi...
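The conversion in this patch is the standard list_for_each_entry() idiom: the macro performs the list_entry() (container_of) step itself, so the extra iterator variable and manual conversion disappear. A self-contained userspace sketch of the before/after shape, using a miniature stand-in for <linux/list.h> and illustrative struct names:

#include <stddef.h>
#include <stdio.h>

struct list_head { struct list_head *next, *prev; };

#define LIST_HEAD_INIT(name) { &(name), &(name) }
#define container_of(ptr, type, member) \
        ((type *)((char *)(ptr) - offsetof(type, member)))
#define list_entry(ptr, type, member) container_of(ptr, type, member)
#define list_for_each(pos, head) \
        for (pos = (head)->next; pos != (head); pos = pos->next)
#define list_for_each_entry(pos, head, member) \
        for (pos = list_entry((head)->next, typeof(*pos), member); \
             &pos->member != (head); \
             pos = list_entry(pos->member.next, typeof(*pos), member))

static void list_add_tail(struct list_head *entry, struct list_head *head)
{
        entry->prev = head->prev;
        entry->next = head;
        head->prev->next = entry;
        head->prev = entry;
}

struct demo_lock { int node; struct list_head list; };

int main(void)
{
        struct list_head granted = LIST_HEAD_INIT(granted);
        struct demo_lock a = { .node = 1 }, b = { .node = 2 };
        struct list_head *iter;
        struct demo_lock *lock;

        list_add_tail(&a.list, &granted);
        list_add_tail(&b.list, &granted);

        /* Old style: iterate over list_heads, convert each one by hand. */
        list_for_each(iter, &granted) {
                lock = list_entry(iter, struct demo_lock, list);
                printf("old-style: node %d\n", lock->node);
        }

        /* New style: the macro does the list_entry() conversion for us. */
        list_for_each_entry(lock, &granted, list)
                printf("new-style: node %d\n", lock->node);

        return 0;
}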
2009 Mar 04
2
[PATCH 1/1] Patch to recover orphans in offline slots during recovery and mount
...rack online/offline slots, so we could recover
+ * offline slots during recovery and mount
+ */
+
+struct ocfs2_replay_map {
+ unsigned int rm_slots;
+ unsigned char rm_replay_slots[0];
+};
+
+int ocfs2_compute_replay_slots(struct ocfs2_super *osb)
+{
+ struct ocfs2_replay_map *replay_map;
+ int i, node_num;
+
+ if (osb->replay_map)
+ return 0;
+
+ replay_map = kzalloc(sizeof(struct ocfs2_replay_map) +
+ (osb->max_slots * sizeof(char)), GFP_KERNEL);
+ if (!replay_map) {
+ mlog_errno(-ENOMEM);
+ return -ENOMEM;
+ }
+
+ spin_lock(&osb->osb_lock);
+
+ replay_map->rm_slots = osb-...
2014 Sep 26
2
One node hangs up: issue requiring a good idea, thanks
...dlm->ast_lock);
while (!list_empty(&dlm->pending_asts)) {
lock = list_entry(dlm->pending_asts.next,
@@ -539,9 +542,16 @@ static void dlm_flush_asts(struct dlm_ct
spin_unlock(&dlm->ast_lock);
if (lock->ml.node != dlm->node_num) {
- ret = dlm_do_remote_ast(dlm, res, lock);
- if (ret < 0)
+ ret = dlm_do_remote_ast(dlm, res, lock);
+ if (ret < 0) {
mlog_errno(ret);
+ while (...
2010 Oct 08
23
O2CB global heartbeat - hopefully final drop!
All,
This is hopefully the final drop of the patches for adding global heartbeat
to the o2cb stack.
The diff from the previous set is here:
http://oss.oracle.com/~smushran/global-hb-diff-2010-10-07
Implemented most of the suggestions provided by Joel and Wengang.
The most important one was to activate the feature only at the end.
Also, got a mostly clean run with checkpatch.pl.
Sunil
2009 Apr 07
1
Backport to 1.4 of patch that recovers orphans from offline slots
The following patch is a backport of the patch that recovers orphans from offline
slots. It is being backported from mainline to 1.4.
mainline patch: 0001-Patch-to-recover-orphans-in-offline-slots-during-rec.patch
Thanks,
--Srini
2009 Mar 06
0
[PATCH 1/1] ocfs2: recover orphans in offline slots during recovery and mount
...If we've already queued the replay, we don't have any more to do */
+ if (osb->replay_map->rm_state == REPLAY_DONE)
+ return;
+
+ osb->replay_map->rm_state = state;
+}
+
+int ocfs2_compute_replay_slots(struct ocfs2_super *osb)
+{
+ struct ocfs2_replay_map *replay_map;
+ int i, node_num;
+
+ /* If replay map is already set, we don't do it again */
+ if (osb->replay_map)
+ return 0;
+
+ replay_map = kzalloc(sizeof(struct ocfs2_replay_map) +
+ (osb->max_slots * sizeof(char)), GFP_KERNEL);
+
+ if (!replay_map) {
+ mlog_errno(-ENOMEM);
+ return -ENOMEM;
+ }
+
+ spi...
2009 Feb 19
2
Patch to recover orphans in offline slots
This patch is against ocfs2-1.4 and also applies to ocfs2-1.2. ocfs2 mainline
requires only the first portion of the patch, and hence I will make a separate
patch for that.
2009 Mar 06
1
[PATCH 1/1] Patch to recover orphans in offline slots during recovery and mount (revised)
...If we've already queued the replay, we don't have any more to do */
+ if (osb->replay_map->rm_state == REPLAY_DONE)
+ return;
+
+ osb->replay_map->rm_state = state;
+}
+
+int ocfs2_compute_replay_slots(struct ocfs2_super *osb)
+{
+ struct ocfs2_replay_map *replay_map;
+ int i, node_num;
+
+ /* If replay map is already set, we don't do it again */
+ if (osb->replay_map)
+ return 0;
+
+ replay_map = kzalloc(sizeof(struct ocfs2_replay_map) +
+ (osb->max_slots * sizeof(char)), GFP_KERNEL);
+
+ if (!replay_map) {
+ mlog_errno(-ENOMEM);
+ return -ENOMEM;
+ }
+
+ spi...
2008 Aug 01
1
[git patches] Ocfs2 and Configfs fixes
The only non-fix here is Joel's new configfs convenience macros, but nobody
is using them yet, so I think the patch is safe.
By the way, these patches (as usual) are all rebased on top of your latest
tree. I think that since the vast majority of ocfs2 and configfs patches are
self-contained and within a small area of the kernel, this should
probably be fine. If you feel otherwise however,
2007 Aug 24
2
[git patch] klibc bzero, mount fixes + random stuff
...ee(d);
}
-int link_to_name(char *link_name, char *link_target)
+static int link_to_name(char *link_name, char *link_target)
{
int res = link(link_target, link_name);
return res;
@@ -352,7 +339,7 @@ static void hash_insert(struct inode_val *new_value)
/* Associate FILE_NAME with the inode NODE_NUM. (Insert into hash table.) */
-void
+static void
add_inode(unsigned long node_num, char *file_name, unsigned long major_num,
unsigned long minor_num)
{
@@ -399,7 +386,7 @@ add_inode(unsigned long node_num, char *file_name, unsigned long major_num,
hash_num++;
}
-char *find_inode_file...
2023 Jun 13
1
[BUG] ocfs2/dlm: possible data races in dlm_drop_lockres_ref_done() and dlm_get_lock_resource()
..._worker() --> Line 2439 in dlmmaster.c
dlm_drop_lockres_ref_done() --> Line 2459 in dlmmaster.c
lockname = res->lockname.name; --> Line 2416 in dlmmaster.c (Access
res->lockname.name)
dlm_get_lock_resource() --> Line 701 in dlmmaster.c
if (res->owner != dlm->node_num) --> Line 1023 in dlmmaster.c (Access
res->owner)
The variables res->lockname.name and res->owner are accessed respectively
without holding the lock res->spinlock, and thus data races can occur.
I am not quite sure whether these possible data races are real and how to
fix
them if t...
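The pattern the report is asking about is whether fields such as res->owner may be read without holding res->spinlock. A userspace sketch of the lock-protected read the reporter has in mind, with a pthread mutex standing in for the spinlock and illustrative names; whether the ocfs2 accesses actually need this is exactly what the follow-up below debates:

#include <pthread.h>
#include <stdio.h>

struct lock_resource {
        pthread_mutex_t lock;     /* stands in for res->spinlock */
        int owner;                /* stands in for res->owner */
};

static int read_owner_locked(struct lock_resource *res)
{
        int owner;

        pthread_mutex_lock(&res->lock);
        owner = res->owner;       /* snapshot taken under the lock */
        pthread_mutex_unlock(&res->lock);
        return owner;
}

int main(void)
{
        struct lock_resource res = {
                .lock = PTHREAD_MUTEX_INITIALIZER,
                .owner = 3,
        };

        printf("owner=%d\n", read_owner_locked(&res));
        return 0;
}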
2008 Sep 11
4
Some more debug stuff
Added two debugfs entries... one to dump o2hb livenodes and the other
to dump osb.
$ cat /sys/kernel/debug/ocfs2/BC4F4550BEA74F92BDCC746AAD2EC0BF/fs_state
Device => Id: 8,65 Uuid: BC4F4550BEA74F92BDCC746AAD2EC0BF Gen: 0xA02024F2 Label: sunil-xattr
Volume => State: 1 Flags: 0x0
Sizes => Block: 4096 Cluster: 4096
Features => Compat: 0x1 Incompat: 0x350 ROcompat: 0x1
2023 Jun 16
1
[BUG] ocfs2/dlm: possible data races in dlm_drop_lockres_ref_done() and dlm_get_lock_resource()
...Line 2416 in dlmmaster.c (Access
> res->lockname.name)
lockname won't be changed during the lockres lifecycle.
So this won't cause any real problem since now it holds a reference.
>
> dlm_get_lock_resource() --> Line 701 in dlmmaster.c
> if (res->owner != dlm->node_num) --> Line 1023 in dlmmaster.c (Access
> res->owner)
Do you mean in dlm_wait_for_lock_mastery()?
Even if owner changes suddenly, it will recheck, so I think it is also fine.
Thanks,
Joseph
>
> The variables res->lockname.name and res->owner are accessed respectively
> wit...
2009 Apr 17
26
OCFS2 1.4: Patches backported from mainline
Please review the list of patches being applied to the ocfs2 1.4 tree.
All patches list the mainline commit hash.
Thanks
Sunil
2009 Feb 26
13
o2dlm mle hash patches - round 2
The changes from the last drop are:
1. Patch 11 removes struct dlm_lock_name.
2. Patch 12 is an unrelated bugfix. Actually, it is related to a bugfix
that we are retracting in mainline currently. The patch may need more testing.
While I did hit the condition in my testing, Marcos hasn't. I am sending it
because it can be queued for 2.6.30. Give us more time to test.
3. Patch 13 will be useful
2004 Jun 06
1
[PATCH] use sb_getblk
...g.c (working copy)
@@ -496,7 +491,7 @@
}
blocknum = lock_off >> sb->s_blocksize_bits;
- bh = getblk(OCFS_GET_BLOCKDEV(sb), blocknum, sb->s_blocksize);
+ bh = sb_getblk(sb, blocknum);
if (bh == NULL) {
LOG_ERROR_STATUS (status = -EIO);
goto finally;
@@ -646,7 +641,7 @@
((node_num + OCFS_VOLCFG_HDR_SECTORS) * osb->sect_size);
blocknum = offset >> sb->s_blocksize_bits;
- bh = getblk(OCFS_GET_BLOCKDEV(sb), blocknum, sb->s_blocksize);
+ bh = sb_getblk(sb, blocknum);
if (bh == NULL) {
status = -EIO;
LOG_ERROR_STATUS(status);
Index: src/buffer_head_io.c...
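The conversion in this patch is a pure simplification: sb_getblk() derives the block device and block size from the super_block, so callers stop passing them by hand. A userspace sketch of that wrapper relationship, with stand-in types rather than the kernel buffer_head API:

#include <stdio.h>

struct buffer_head { unsigned long blocknr; unsigned int size; };
struct super_block { unsigned int s_blocksize; int s_dev; };

static struct buffer_head demo_bh;

/* Stand-in for the old interface: caller supplies device and size. */
static struct buffer_head *getblk(int dev, unsigned long block, unsigned int size)
{
        (void)dev;
        demo_bh.blocknr = block;
        demo_bh.size = size;
        return &demo_bh;
}

/* Stand-in for sb_getblk(): derive both arguments from the super_block. */
static struct buffer_head *sb_getblk(struct super_block *sb, unsigned long block)
{
        return getblk(sb->s_dev, block, sb->s_blocksize);
}

int main(void)
{
        struct super_block sb = { .s_blocksize = 4096, .s_dev = 8 };
        struct buffer_head *bh = sb_getblk(&sb, 42);

        printf("block %lu, size %u\n", bh->blocknr, bh->size);
        return 0;
}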