Displaying 20 results from an estimated 35 matches for "debug_locks".
2014 Oct 29
1
smbstatus hang with CTDB 2.5.4 and Samba 4.1.13
...as different flags for node 1. It has 0x02 vs our 0x00
2014/10/29 11:12:48.462448 [recoverd:6488266]: Use flags 0x00 from local
recmaster node for cluster update of node 1 flags
2014/10/29 11:12:48.483362 [3932342]: Freeze priority 1
2014/10/29 11:12:58.574548 [3932342]:
/usr/smb_cluster/etc/ctdb/debug_locks.sh[23]: flock: not found
2014/10/29 11:13:08.569593 [3932342]:
/usr/smb_cluster/etc/ctdb/debug_locks.sh[23]: flock: not found
2014/10/29 11:13:18.604124 [3932342]:
/usr/smb_cluster/etc/ctdb/debug_locks.sh[23]: flock: not found
2014/10/29 11:13:28.638279 [3932342]:
/usr/smb_cluster/etc/ctdb/de...
2017 Apr 19
6
CTDB problems
...for 500 seconds
===== Start of debug locks PID=9372 =====
8084 /usr/sbin/smbd brlock.tdb.2 7044 7044
20931 /usr/libexec/ctdb/ctdb_lock_helper brlock.tdb.2 7044 7044 W
21665 /usr/sbin/smbd brlock.tdb.2 174200 174200
----- Stack trace for PID=21665 -----
2017/04/19 12:11:39.571097 [ 5417]: /etc/ctdb/debug_locks.sh: line 73:
gstack: command not found
----- Stack trace for PID=8084 -----
2017/04/19 12:11:39.571346 [ 5417]: /etc/ctdb/debug_locks.sh: line 73:
gstack: command not found
===== End of debug locks PID=9372 =====
2017/04/19 12:37:19.547636 [vacuum-locking.tdb: 3790]:
tdb(/var/lib/ctdb/locking.td...
2017 Oct 27
2
ctdb vacuum timeouts and record locks
...his in the ctdb logs:
ctdbd[89]: Vacuuming child process timed out for db locking.tdb
ctdbd[89]: Vacuuming child process timed out for db locking.tdb
ctdbd[89]: Unable to get RECORD lock on database locking.tdb for 10 seconds
ctdbd[89]: Set lock debugging helper to
"/usr/local/samba/etc/ctdb/debug_locks.sh"
/usr/local/samba/etc/ctdb/debug_locks.sh: 142:
/usr/local/samba/etc/ctdb/debug_locks.sh: cannot create : Directory
nonexistent
sh: echo: I/O error
sh: echo: I/O error
sh: echo: I/O error
sh: echo: I/O error
cat: write error: Broken pipe
sh: echo: I/O error
ctdbd[89]: Unable to get RECORD...
2017 Oct 27
3
ctdb vacuum timeouts and record locks
Hi Martin,
Thanks for reading and taking the time to reply.
>> ctdbd[89]: Unable to get RECORD lock on database locking.tdb for 20 seconds
>> /usr/local/samba/etc/ctdb/debug_locks.sh: 142:
>> /usr/local/samba/etc/ctdb/debug_locks.sh: cannot create : Directory
>> nonexistent
>> sh: echo: I/O error
>> sh: echo: I/O error
>
> That's weird. The only file really created by that script is the lock
> file that is used to make sure we don't...
2017 Oct 27
0
ctdb vacuum timeouts and record locks
...> ctdbd[89]: Vacuuming child process timed out for db locking.tdb
> ctdbd[89]: Vacuuming child process timed out for db locking.tdb
> ctdbd[89]: Unable to get RECORD lock on database locking.tdb for 10 seconds
> ctdbd[89]: Set lock debugging helper to
> "/usr/local/samba/etc/ctdb/debug_locks.sh"
> /usr/local/samba/etc/ctdb/debug_locks.sh: 142:
> /usr/local/samba/etc/ctdb/debug_locks.sh: cannot create : Directory
> nonexistent
> sh: echo: I/O error
> sh: echo: I/O error
> sh: echo: I/O error
> sh: echo: I/O error
> cat: write error: Broken pipe
> sh: ec...
2017 Apr 19
1
CTDB problems
On Wed, 19 Apr 2017 18:06:35 +0200, David Disseldorp via samba wrote:
> > 2017/04/19 10:40:31.294250 [ 7423]: /etc/ctdb/debug_locks.sh: line 73:
> > gstack: command not found
>
> This script attempts to dump the stack trace of the blocked process,
> but can't as gstack isn't installed - it should be available in the
> gdb package.
>
> @Martin: would the attached (untested) patch make sense?...
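As described above, debug_locks.sh shells out to gstack to capture a backtrace of the blocked process, and gstack is normally just a small wrapper shipped with the gdb package. A minimal C sketch of that idea -- try gstack, and fall back to driving gdb in batch mode if it is absent -- follows; the function and variable names are illustrative only, and this is not the attached patch.

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Try "gstack <pid>" first; if the binary is missing (as in the log
 * above), ask gdb directly for the same "thread apply all bt" output. */
static void dump_stack(pid_t pid)
{
    char pidstr[32];
    snprintf(pidstr, sizeof(pidstr), "%d", (int)pid);

    pid_t child = fork();
    if (child == 0) {
        execlp("gstack", "gstack", pidstr, (char *)NULL);
        if (errno == ENOENT) {
            /* gstack not installed: fall back to gdb in batch mode. */
            execlp("gdb", "gdb", "-p", pidstr, "-batch",
                   "-ex", "thread apply all bt", (char *)NULL);
        }
        _exit(127);
    }
    if (child > 0)
        waitpid(child, NULL, 0);
}

int main(int argc, char **argv)
{
    if (argc > 1)
        dump_stack((pid_t)atoi(argv[1]));
    return 0;
}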
2017 Nov 02
0
ctdb vacuum timeouts and record locks
...g on since last
night. The need to fix it was urgent (when isn't it?) so I didn't have
time to poke around for clues, but immediately restarted the lxc
container. But this time it wouldn't restart, which I had time to trace
to a hung smbd process, and between that and a run of the debug_locks.sh
script, I traced it to the user reporting the problem. Given that the
user was primarily having problems with files in a given folder, I am
thinking this is because of some kind of lock on a file within that
folder.
Ended up rebooting both physical machines; problem solved, for now.
So,...
2015 Jun 06
0
[PATCH 2/5] threads: Acquire and release the lock around each public guestfs_* API.
Since each ACQUIRE_LOCK/RELEASE_LOCK call must balance, this code is
difficult to debug. Enable DEBUG_LOCK to add some prints which can
help.
The only definitive list of public APIs is found indirectly in the
generator (in generator/c.ml : globals).
---
generator/c.ml | 18 ++++++++++++++
src/errors.c | 66 ++++++++++++++++++++++++++++++++++++++++++++++----
src/events.c
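The patch body is not shown in this snippet, but the pattern the cover letter describes -- every public entry point bracketing its work with a balanced ACQUIRE_LOCK/RELEASE_LOCK pair, with optional tracing enabled by DEBUG_LOCK -- is roughly the following generic sketch. Apart from the two macro names taken from the cover letter, all identifiers here are invented for illustration and are not the libguestfs code.

#include <pthread.h>
#include <stdio.h>

/* Flip to 1 to trace every acquire/release, which makes unbalanced
 * pairs easy to spot in the output. */
#define DEBUG_LOCK 0

#if DEBUG_LOCK
#define TRACE_LOCK(op) fprintf (stderr, "%s: %s\n", __func__, op)
#else
#define TRACE_LOCK(op) do {} while (0)
#endif

#define ACQUIRE_LOCK(g) \
  do { TRACE_LOCK ("acquire"); pthread_mutex_lock (&(g)->lock); } while (0)
#define RELEASE_LOCK(g) \
  do { TRACE_LOCK ("release"); pthread_mutex_unlock (&(g)->lock); } while (0)

struct handle {
  pthread_mutex_t lock;
  const char *last_error;
};

/* Every public function must leave the lock exactly as it found it. */
const char *
handle_last_error (struct handle *g)
{
  const char *r;

  ACQUIRE_LOCK (g);
  r = g->last_error;
  RELEASE_LOCK (g);
  return r;
}

int main (void)
{
  struct handle g = { PTHREAD_MUTEX_INITIALIZER, "no error" };
  printf ("%s\n", handle_last_error (&g));
  return 0;
}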
2017 Nov 06
2
ctdb vacuum timeouts and record locks
...> night. The need to fix it was urgent (when isn't it?) so I didn't have
> time to poke around for clues, but immediately restarted the lxc
> container. But this time it wouldn't restart, which I had time to trace
> to a hung smbd process, and between that and a run of the debug_locks.sh
> script, I traced it to the user reporting the problem. Given that the
> user was primarily having problems with files in a given folder, I am
> thinking this is because of some kind of lock on a file within that
> folder.
>
> Ended up rebooting both physical machines, p...
2017 Nov 02
2
ctdb vacuum timeouts and record locks
...> night. The need to fix it was urgent (when isn't it?) so I didn't have
> time to poke around for clues, but immediately restarted the lxc
> container. But this time it wouldn't restart, which I had time to trace
> to a hung smbd process, and between that and a run of the debug_locks.sh
> script, I traced it to the user reporting the problem. Given that the
> user was primarily having problems with files in a given folder, I am
> thinking this is because of some kind of lock on a file within that folder.
>
> Ended up rebooting both physical machines, problem...
2012 Nov 06
1
[PATCH] xen/events: xen/events: fix RCU warning
...--------------------
[ 2.513183] include/linux/rcupdate.h:725 rcu_read_lock() used illegally while idle!
[ 2.513271]
[ 2.513271] other info that might help us debug this:
[ 2.513271]
[ 2.513388]
[ 2.513388] RCU used illegally from idle CPU!
[ 2.513388] rcu_scheduler_active = 1, debug_locks = 1
[ 2.513511] RCU used illegally from extended quiescent state!
[ 2.513572] 1 lock held by swapper/0/0:
[ 2.513626] #0: (rcu_read_lock){......}, at: [<ffffffff810e9fe0>] __atomic_notifier_call_chain+0x0/0x140
[ 2.513815]
[ 2.513815] stack backtrace:
[ 2.513897] Pid: 0, c...
2012 Nov 06
1
[PATCH] xen/events: xen/events: fix RCU warning
...--------------------
[ 2.513183] include/linux/rcupdate.h:725 rcu_read_lock() used illegally while idle!
[ 2.513271]
[ 2.513271] other info that might help us debug this:
[ 2.513271]
[ 2.513388]
[ 2.513388] RCU used illegally from idle CPU!
[ 2.513388] rcu_scheduler_active = 1, debug_locks = 1
[ 2.513511] RCU used illegally from extended quiescent state!
[ 2.513572] 1 lock held by swapper/0/0:
[ 2.513626] #0: (rcu_read_lock){......}, at: [<ffffffff810e9fe0>] __atomic_notifier_call_chain+0x0/0x140
[ 2.513815]
[ 2.513815] stack backtrace:
[ 2.513897] Pid: 0, c...
2015 Jun 06
7
[PATCH 0/5] Add support for thread-safe handle.
This patch isn't ready to go upstream. In fact, I think we might do a
quick 1.30 release soon, and save this patch, and also the extensive
changes proposed for the test suite[1], until after 1.30.
Currently it is not safe to use the same handle from multiple threads,
unless you implement your own mutexes. See:
http://libguestfs.org/guestfs.3.html#multiple-handles-and-multiple-threads
These
2015 Feb 04
1
[PATCH v3 18/18] vhost: vhost_scsi_handle_vq() should just use copy_from_user()
...memcpy_fromiovecend(unsigned char *kdata, const struct iovec *iov,
- int offset, int len);
#endif
diff --git a/lib/Makefile b/lib/Makefile
index 3c3b30b..1071d06 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -24,7 +24,7 @@ obj-y += lockref.o
obj-y += bcd.o div64.o sort.o parser.o halfmd4.o debug_locks.o random32.o \
bust_spinlocks.o hexdump.o kasprintf.o bitmap.o scatterlist.o \
- gcd.o lcm.o list_sort.o uuid.o flex_array.o iovec.o clz_ctz.o \
+ gcd.o lcm.o list_sort.o uuid.o flex_array.o clz_ctz.o \
bsearch.o find_last_bit.o find_next_bit.o llist.o memweight.o kfifo.o \
percpu-refcou...
2015 Feb 04
1
[PATCH v3 18/18] vhost: vhost_scsi_handle_vq() should just use copy_from_user()
...memcpy_fromiovecend(unsigned char *kdata, const struct iovec *iov,
- int offset, int len);
#endif
diff --git a/lib/Makefile b/lib/Makefile
index 3c3b30b..1071d06 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -24,7 +24,7 @@ obj-y += lockref.o
obj-y += bcd.o div64.o sort.o parser.o halfmd4.o debug_locks.o random32.o \
bust_spinlocks.o hexdump.o kasprintf.o bitmap.o scatterlist.o \
- gcd.o lcm.o list_sort.o uuid.o flex_array.o iovec.o clz_ctz.o \
+ gcd.o lcm.o list_sort.o uuid.o flex_array.o clz_ctz.o \
bsearch.o find_last_bit.o find_next_bit.o llist.o memweight.o kfifo.o \
percpu-refcou...
2011 Nov 18
3
[PATCH] vhost-net: Acquire device lock when releasing device
...uspicious RCU usage. ]
[ 2025.645182] -------------------------------
[ 2025.645927] drivers/vhost/vhost.c:475 suspicious rcu_dereference_protected() usage!
[ 2025.647329]
[ 2025.647330] other info that might help us debug this:
[ 2025.647331]
[ 2025.649042]
[ 2025.649043] rcu_scheduler_active = 1, debug_locks = 1
[ 2025.650235] no locks held by trinity/21042.
[ 2025.650971]
[ 2025.650972] stack backtrace:
[ 2025.651789] Pid: 21042, comm: trinity Not tainted 3.2.0-rc2-sasha-00057-ga9098b3 #5
[ 2025.653342] Call Trace:
[ 2025.653792] [<ffffffff810b4a6a>] lockdep_rcu_suspicious+0xaf/0xb9
[ 2025.6549...
2011 Nov 18
3
[PATCH] vhost-net: Acquire device lock when releasing device
...uspicious RCU usage. ]
[ 2025.645182] -------------------------------
[ 2025.645927] drivers/vhost/vhost.c:475 suspicious rcu_dereference_protected() usage!
[ 2025.647329]
[ 2025.647330] other info that might help us debug this:
[ 2025.647331]
[ 2025.649042]
[ 2025.649043] rcu_scheduler_active = 1, debug_locks = 1
[ 2025.650235] no locks held by trinity/21042.
[ 2025.650971]
[ 2025.650972] stack backtrace:
[ 2025.651789] Pid: 21042, comm: trinity Not tainted 3.2.0-rc2-sasha-00057-ga9098b3 #5
[ 2025.653342] Call Trace:
[ 2025.653792] [<ffffffff810b4a6a>] lockdep_rcu_suspicious+0xaf/0xb9
[ 2025.6549...
2006 Apr 23
1
fsck_ufs locked in snaplk
Colleagues,
one of my servers had to be rebooted uncleanly, and then the backgrounded
fsck got locked for more than an hour in snaplk:
742 root 1 -4 4 1320K 688K snaplk 0:02 0.00% fsck_ufs
File system in question is 200G gmirror on SATA. Usually making a snapshot
(e.g., for making dumps) consumes 3-4 minutes for that fs, so it seems to me
that the filesystem is in a deadlock.
Any
2020 Jan 07
0
locking warnings in drm/virtio code
...rc5+ #605 Not tainted
[ 37.707522] -----------------------------
[ 37.708015] include/linux/dma-resv.h:247 suspicious
rcu_dereference_protected() usage!
[ 37.708899]
[ 37.708899] other info that might help us debug this:
[ 37.708899]
[ 37.709856]
[ 37.709856] rcu_scheduler_active = 2, debug_locks = 1
[ 37.710771] 3 locks held by Xorg/1869:
[ 37.711266] #0: ffff8880a976fa48 (crtc_ww_class_acquire){+.+.}, at:
drm_mode_cursor_common (linux/drivers/gpu/drm/drm_plane.c:949)
[ 37.712372] #1: ffff8880b32e00a8 (crtc_ww_class_mutex){+.+.}, at:
drm_modeset_lock (linux/drivers/gpu/drm/drm_modese...
2015 Jun 11
1
Re: [PATCH 2/5] threads: Acquire and release the lock around each public guestfs_* API.
Hi,
On Saturday 06 June 2015 14:20:38 Richard W.M. Jones wrote:
> Since each ACQUIRE_LOCK/RELEASE_LOCK call must balance, this code is
> difficult to debug. Enable DEBUG_LOCK to add some prints which can
> help.
There's a way this could be simplified:
> const char *
> guestfs_last_error (guestfs_h *g)
> {
> - return g->last_error;
> + const char *r;
>