Displaying 6 results from an estimated 6 matches similar to: "Deadlock in btrfs-cleaner, related to snapshot deletion"
2012 Jul 31 (2 messages): Btrfs Intermittent ENOSPC Issues
I've been working on running down intermittent ENOSPC issues.
I can only seem to replicate ENOSPC errors when running zlib
compression. However, I have been seeing similar ENOSPC errors to a
lesser extent when playing with the LZ4HC patches.
I apologize for not following up on this sooner, but I had drifted
away from using zlib, and didn't notice there was still an issue.
My
2010 Nov 18 (9 messages): Interesting problem with write data.
Hi,
Recently I created a btrfs filesystem to use, and I ran into a slowness
problem. While trying to diagnose it, I found this:
1. dd if=/dev/zero of=test count=1024 bs=1MB
This is fast, at about 25MB/s, and reasonable iowait.
2. dd if=/dev/zero of=test count=1 bs=1GB
This is pretty slow, at about 1.5MB/s, and 90%+ iowait, constantly.
May I know why it works like this? Thanks.
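For reference, the two dd invocations above boil down to the write pattern sketched below. This is an illustrative userspace sketch, not anything from the thread or btrfs-specific; the output file name "test" and the decimal 1 MB / 1 GB block sizes are taken from the quoted commands.

/*
 * Sketch of what the two dd commands do: 1024 writes of 1 MB each vs. a
 * single 1 GB write. dd's "MB"/"GB" suffixes are decimal, hence the
 * 1000-based sizes.
 */
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>

static void write_zeros(const char *path, size_t block_size, size_t count)
{
    char *buf = calloc(1, block_size);        /* zero-filled, like /dev/zero */
    int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);

    if (!buf || fd < 0) {
        perror("setup");
        exit(1);
    }

    for (size_t i = 0; i < count; i++) {
        if (write(fd, buf, block_size) < 0) { /* one write() syscall per block */
            perror("write");
            break;
        }
    }

    close(fd);
    free(buf);
}

int main(void)
{
    /* Case 1: dd count=1024 bs=1MB (many moderate-sized writes) */
    write_zeros("test", 1000 * 1000, 1024);

    /* Case 2: dd count=1 bs=1GB (one huge write from a 1 GB buffer) */
    write_zeros("test", 1000UL * 1000 * 1000, 1);

    return 0;
}

Compiled with gcc and run on the filesystem under test, this reproduces the same I/O pattern without dd.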
2013 Jun 10 (1 message): btrfs-cleaner Blocked on xfstests 068
I'm running into a problem with the btrfs-cleaner thread becoming
blocked on xfstests 068.
The test locks up indefinitely without completing (normally it
finishes in about 45 seconds on my test box).
I've replicated the issue on 3.10.0-rc5 and the for-linus branch of 3.9.0.
I ran a git bisect on the 3.9.0 for-linus branch, and tracked my issue
to the following commit:
commit
2012 Aug 01 (7 messages): [PATCH] Btrfs: barrier before waitqueue_active
We need an smp_mb() before waitqueue_active to avoid missing wakeups.
Before, Mitch was hitting a deadlock between the ordered flushers and the
transaction commit because the ordered flushers were waiting for more refs
and were never woken up, so those smp_mb()'s are the most important.
Everything else I added for correctness' sake and to avoid getting bitten by
this again somewhere else.
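For context, the pattern the patch is talking about looks roughly like the kernel-style sketch below. The names (cond, my_wq, waiter, waker) are made up for illustration and are not the actual Btrfs code; the point is that waitqueue_active() checks the wait queue without taking its lock, so the waker needs a full barrier between setting the condition and that check.

/*
 * Kernel-style sketch of the missed-wakeup pattern; names are illustrative.
 */
#include <linux/wait.h>

static DECLARE_WAIT_QUEUE_HEAD(my_wq);
static int cond;

/* Waiter side: enqueues itself and sleeps until cond becomes true. */
static void waiter(void)
{
    wait_event(my_wq, cond);
}

/* Waker side: sets the condition, then wakes any sleepers. */
static void waker(void)
{
    cond = 1;

    /*
     * waitqueue_active() peeks at the wait queue without taking its lock.
     * Without a full barrier here, that load can be hoisted above the
     * store to cond: the waker then observes an empty wait list (from
     * before the waiter enqueued itself) and skips wake_up(), while the
     * waiter reads the stale value of cond and sleeps forever, i.e. the
     * missed wakeup described above.
     */
    smp_mb();
    if (waitqueue_active(&my_wq))
        wake_up(&my_wq);
}

On the waiter side, wait_event() already supplies the matching barrier via set_current_state(), which is why only the waking paths need the explicit smp_mb().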
2017 Oct 18 (2 messages): Null dereference panic in CentOS-6.5
Hi,
I got a panic when running CentOS-6.5:
crash> bt
PID: 106074 TASK: ffff8839c1e32ae0 CPU: 4 COMMAND: "flushd4[cbd-sd-"
#0 [ffff8839c2a91900] machine_kexec at ffffffff81038fa9
#1 [ffff8839c2a91960] crash_kexec at ffffffff810c5992
#2 [ffff8839c2a91a30] oops_end at ffffffff81515c90
#3 [ffff8839c2a91a60] no_context at ffffffff81049f1b
#4 [ffff8839c2a91ab0]
2012 Oct 17 (28 messages): Xen PVM: Strange lockups when running PostgreSQL load
I am currently looking at a bug report[1] which is happening when
a Xen PVM guest with multiple VCPUs is running a high IO database
load (a test script is available in the bug report).
In experimenting, it seems that this happens (or gets more likely) when
the number of VCPUs is 8 or higher (though I have not tried 6, only 2
and 4); having autogroup enabled seems to make it more likely, too