Displaying 20 results from an estimated 900 matches similar to: "Re: [btrfs-transacti] & btrfs-endio-wri] - WAS: Re: [btrfs-delalloc-]"
2011 Jun 27
7
[btrfs-delalloc-]
Hello all.
What we have:
SL6 - kernel 2.6.32-131.2.1.el6.x86_64
btrfs on mdadm RAID5 with 8 HDD - 27T partition.
I see this at top:
1182 root 20 0 0 0 0 R 100.0 0.0 16:39.73
[btrfs-delalloc-]
And the load average is growing. What is this and how can I fix it?
--
Best regards,
Proskurin Kirill
2013 Apr 13
0
btrfs crash (and softlockup btrfs-endio-wri)
I am using NFS over btrfs (vanilla 3.8.5) for heavy CoW to clone virtual
disks with sizes 20-50GB. It worked OK for a couple of days, but
yesterday it crashed. Reboot fixed the problem and I do not see any data
corruption. I have a couple of different kdumps, I will include one as
text and attach the other ones.
I am using Fedora 18 with vanilla 3.8.5. The filesystem is created over
a SAN volume
2012 Aug 01
7
[PATCH] Btrfs: barrier before waitqueue_active
We need an smp_mb() before waitqueue_active to avoid missing wakeups.
Before, Mitch was hitting a deadlock between the ordered flushers and the
transaction commit because the ordered flushers were waiting for more refs
and were never woken up, so those smp_mb()'s are the most important.
Everything else I added for correctness sake and to avoid getting bitten by
this again somewhere else.
2011 Aug 09
17
Re: Applications using fsync cause hangs for several seconds every few minutes
On 06/21/2011 01:15 PM, Jan Stilow wrote:
> Hello,
>
> Nirbheek Chauhan <nirbheek <at> gentoo.org> writes:
>> [...]
>>
>> Every few minutes, (I guess) when applications do fsync (firefox,
>> xchat, vim, etc), all applications that use fsync() hang for several
>> seconds, and applications that use general IO suffer extreme
>> slowdowns.
2013 Oct 08
3
[PATCH] Btrfs: limit delalloc pages outside of find_delalloc_range
Liu fixed part of this problem and unfortunately I steered him in slightly the
wrong direction and so didn't completely fix the problem. The problem is we
limit the size of the delalloc range we are looking for to max bytes and then we
try to lock that range. If we fail to lock the pages in that range we will
shrink the max bytes to a single page and re-loop. However if our first page
2012 Nov 03
0
btrfs kernel threads producing high load and slow system down
Hello,
I have the problems described here:
https://btrfs.wiki.kernel.org/index.php/Gotchas:
Files with a lot of random writes can become heavily fragmented
(10000+ extents), causing thrashing on HDDs and excessive multi-second
spikes of CPU load on systems with an SSD or large amount of RAM.
On servers and workstations this affects databases and virtual machine images.
The nodatacow mount option
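The snippet cuts off at the nodatacow mention. As a hedged sketch of the two usual workarounds the wiki describes (the device name and paths below are illustrative, not from this thread; the fstab line is written to a local example file rather than /etc/fstab):

```shell
# 1) Mount the whole filesystem with copy-on-write disabled via an
#    fstab entry (example file used here for illustration):
echo '/dev/md0  /srv  btrfs  nodatacow,noatime  0  0' >> fstab.example
grep nodatacow fstab.example

# 2) Or mark only the VM-image / database directory NOCOW; the 'C'
#    attribute only applies to files created after it is set:
#      chattr +C /srv/vm-images
#      lsattr -d /srv/vm-images
```

Note that nodatacow also disables checksumming and compression for the affected files, so it trades those features for reduced fragmentation.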
2013 Jun 20
0
[PATCH] Btrfs: stop using try_to_writeback_inodes_sb_nr to flush delalloc
try_to_writeback_inodes_sb_nr returns 1 if writeback is already underway, which
is completely fraking useless for us as we need to make sure pages are actually
written before we go and check if there are ordered extents. So replace this
with an open coding of try_to_writeback_inodes_sb_nr minus the writeback
underway check so that we are sure to actually have flushed some dirty pages out
and will
2013 Oct 28
0
[PATCH] Btrfs: make sure the delalloc workers actually flush compressed writes
When using delalloc workers in a non-waiting way (like for enospc handling) we
can end up not actually waiting for the dirty pages to be started if we have
compression. We need to add an extra filemap flush to make sure any async
extents that have started are actually moved along before returning. Thanks,
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
---
fs/btrfs/inode.c | 18
2010 Mar 12
2
[PATCH] Btrfs: force delalloc flushing when things get desperate
When testing with max_extents=4k, we enospc out really really early. The reason
for this is we really overwhelm the system with our worst case calculation.
When we try to flush delalloc, we don't want everybody to wait around forever,
so we wake up the waiters when we've done some of the work in hopes that it's
enough work to get everything they need done. The problem with this
2011 Oct 06
26
[PATCH v0 00/18] btrfs: Subvolume Quota Groups
This is a first draft of a subvolume quota implementation. It is possible
to limit subvolumes and any group of subvolumes and also to track the amount
of space that will get freed when deleting snapshots.
The current version is functionally incomplete, with the main missing feature
being the initial scan and rescan of an existing filesystem.
I put some effort into writing an introduction into
2013 May 22
1
Top shows btrfs-cache-1 and btrfs-endio-met while the hard drives seem busy
Hi,
About a year ago I set up a BTRFS RAID 1 filesystem on two 2TB
Western Digital WD20EARS hard drives, and I created subvolumes
that I mount regularly as I need them. I put mostly music, videos and
various files on them as well as some Git bare repositories for my
work files but for the last couple of weeks, there seems to be some
activity happening on the drives for a few minutes and
2013 Oct 23
0
Soft lockup btrfs-transacti:680
When I try to umount the btrfs filesystem I always get this error with
kernels 3.11.4 and 3.11.3, but I can mount and umount without error on
kernel 3.11.2.
Exact error messages are:
BUG: soft lockup - CPU#0 stuck for 23s! [btrfs-transacti:680]
BUG: soft lockup - CPU#1 stuck for 23s! [umount:1575]
I'm on Fedora 19
I have run scrub and there are no errors:
# btrfs scrub status /home
scrub
2013 Jun 11
1
btrfs-transacti:1014 blocked for more than 120 seconds.
Hey,
I've got a 2x4TB RAID1 setup with btrfs on kernel 3.8.0. Under high I/O load
(a BackupPC dump or writing a large file over gigabit) I get messages in
syslog such as the one mentioned in the subject.
The full non-logcheck-ignored log is under [1].
A BackupPC dump between the same exact machines onto a 2TB ext4 volume
takes 90 minutes on average; the run on the btrfs volume took 465
2011 Sep 06
3
btrfs-delalloc - threaded?
Hi all.
I was doing some testing with writing out data to a BTRFS filesystem
with the compress-force option. With 1 program running, I saw
btrfs-delalloc taking about 1 CPU worth of time, much as could be
expected. I then started up 2 programs at the same time, writing data
to the BTRFS volume. btrfs-delalloc still only used 1 CPU worth of
time. Is btrfs-delalloc threaded, to where it can use
2013 Feb 09
2
v3.8-rc6: btrfs-transacti Tainted: GF in btrfs_orphan_commit_root
Running an Ubuntu Raring VM which was built a week ago that is now
running 3.8-rc6, I was booting it last night when it hung. After a few
forced reboots, it came back up and I found the attached in kern.log.
Mostly, the VM has been used for testing Ansible deployment, so not a
lot of work, just upgrading and installing software, then rebooting.
Are these reports useful? Is there any
2008 Oct 16
3
Multiple "mail" field in one LDAP account
Hello all!
#pkg_info | grep dovecot
dovecot-1.1.3_1
dovecot-managesieve-0.10.3
dovecot-sieve-1.1.5_1
I'm trying to do this:
I have an LDAP account with multiple "mail" fields, like this (many
lines cut):
dn: uid=k.proskurin,ou=Users,dc=Moscow,dc=CAS
uid: k.proskurin
userPassword: {CRYPT}$1$ETadxf6G$O2bNUQVSHxksUp08V/iY2.
mail: sysadmin at domain.off
mail: proskurin-kv at domain.off
2009 Apr 24
3
1 Dovecot proxy to 2 real IMAP servers
Hello all.
I have 2 Dovecot IMAP servers with different mailboxes, serving
different email domains.
I want to add one Dovecot proxy server and have it decide, based on the
user's domain, where to forward the connection.
I use LDAP auth based on the "mail" attribute.
Is this possible?
P.S. The proxy documentation at dovecot.org does not help at all and
seems incomplete. :-(
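Dovecot can do this: when a passdb lookup returns the extra fields proxy and host, the login is forwarded to that host. A minimal sketch of the LDAP passdb side, assuming Dovecot 2.x syntax and an LDAP attribute mailHost that names each user's backend server (the attribute name is an assumption; the fragment is written to a local file here for illustration rather than /etc/dovecot):

```shell
cat > dovecot-ldap.conf.ext <<'EOF'
# Look the user up by the mail attribute, as in the original setup.
pass_filter = (&(objectClass=mailUser)(mail=%u))
# Return the backend server (mailHost is an assumed attribute) plus a
# static proxy=y flag, which makes Dovecot proxy the connection:
pass_attrs = mailHost=host, =proxy=y
EOF
grep proxy dovecot-ldap.conf.ext
```

With this, each server only needs its own users in LDAP carrying the right mailHost value, and the proxy routes per-user rather than per-domain lists maintained by hand.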
--
2010 Mar 22
4
Dovecot-1.2 + Sieve + Managesieve on Debian
Hello!
I am thinking about migrating from FreeBSD to Linux because I need DRBD
for clustering.
I want to use Debian Lenny but ran into some problems.
I want near-latest Dovecot packages. The native repos are too old.
Backports seem not to have Sieve & ManageSieve support, and we need it
(am I wrong?). Stephan Bosch's auto packages are not for production.
How can I solve this without building my own
2008 Sep 08
3
LDAP filters
Hello all.
I have this problem:
I want to authenticate users by the "uid" field, but this field is
unique only within one LDAP container. So I want to tell Dovecot where
to look up a given "uid" via the "mail" field of that user.
Can I do this:
pass_filter = (&(objectClass=mailUser)(mail=*@%d)(uid=%n))
Will this construction work?
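For reference, in Dovecot %n expands to the part of the login before the @ and %d to the domain, so the construction should work whenever the entry carries a matching mail value. A small sketch simulating the expansion (the user and domain values are illustrative):

```shell
# What %n and %d would expand to for a login "k.proskurin@domain.off":
user='k.proskurin'; domain='domain.off'
filter="(&(objectClass=mailUser)(mail=*@${domain})(uid=${user}))"
echo "$filter"
# → (&(objectClass=mailUser)(mail=*@domain.off)(uid=k.proskurin))
```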
--
Best regards,
Proskurin Kirill
2008 Jul 08
2
Dovecot CRAM-MD5 & DIGEST-MD5
Hello all.
I'm trying to set up SMTP auth using Dovecot SASL.
I use swaks for tests.
I store users in LDAP.
As I understand it, for CRAM-MD5 and DIGEST-MD5 we need to store the
password in clear text?... OK.
mail: admin3 at domain.off
userPassword: 123 <- Clear text
What I do:
%swaks -a CRAM-MD5 -au admin3 at domain.off -ap 123
To: admin3 at domain.off
=== Trying mx.domain.off:25...
=== Connected to