Josef Bacik
2011-Aug-01 15:21 UTC
PLEASE TEST: Everybody who is seeing weird and long hangs
Hello,

We've seen a lot of reports of people hitting these constant long pauses when
doing things like sync or such.  The stack traces usually all look the same:
btrfs-transaction is stuck in btrfs_wait_marked_extents and btrfs-submit-# is
stuck in get_request_wait.  I had originally thought this was due to the new
plugging stuff, but I think it just makes the problem happen more quickly, as
we've seen that 2.6.38, which we thought was ok, will still hit the problem
given enough time.

I _think_ this is because of the way we write out metadata in the transaction
commit phase.  We're doing write_one_page for every dirty page in the btree
during the commit.  This sucks because basically we end up with one bio per
page, which makes us blow out our nr_requests constantly, which is why
btrfs-submit-# is always stuck in get_request_wait.  What we need to do
instead is use filemap_fdatawrite, which will do a WB_SYNC_ALL writeback but
via writepages, so hopefully we will get fewer bios and this problem will go
away.  Please try this very hastily put together patch if you are
experiencing this problem and let me know if it fixes it for you.
Thanks,

Josef

diff --git a/fs/btrfs/transaction.c b/fs/btrfs/transaction.c
index eb55863..86217a4 100644
--- a/fs/btrfs/transaction.c
+++ b/fs/btrfs/transaction.c
@@ -577,15 +577,17 @@ int btrfs_end_transaction_dmeta(struct btrfs_trans_handle *trans,
 int btrfs_write_marked_extents(struct btrfs_root *root,
 			       struct extent_io_tree *dirty_pages, int mark)
 {
-	int ret;
-	int err = 0;
-	int werr = 0;
-	struct page *page;
+//	int ret;
+//	int err = 0;
+//	int werr = 0;
+//	struct page *page;
 	struct inode *btree_inode = root->fs_info->btree_inode;
-	u64 start = 0;
-	u64 end;
-	unsigned long index;
+//	u64 start = 0;
+//	u64 end;
+//	unsigned long index;

+	return filemap_fdatawrite(btree_inode->i_mapping);
+	/*
 	while (1) {
 		ret = find_first_extent_bit(dirty_pages, start, &start, &end,
 					    mark);
@@ -624,7 +626,8 @@ int btrfs_write_marked_extents(struct btrfs_root *root,
 	}
 	if (err)
 		werr = err;
-	return werr;
+	*/
+//	return werr;
 }

 /*
--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Chris Mason
2011-Aug-01 15:45 UTC
Re: PLEASE TEST: Everybody who is seeing weird and long hangs
Excerpts from Josef Bacik's message of 2011-08-01 11:21:34 -0400:
> [ original report and patch snipped ]

I'm definitely curious to hear if this helps, but I think it might cause a
different set of problems.  It writes everything that is dirty on the
btree, which includes a lot of things we've cow'd in the current
transaction and marked dirty.  They will have to go through COW again if
someone wants to modify them again.

The btrfs writepage code does this:

	ret = __extent_writepage(page, wbc, &epd);

	extent_write_cache_pages(tree, mapping, &wbc_writepages,
				 __extent_writepage, &epd, flush_write_bio);
	flush_epd_write_bio(&epd);

So during the commit phase we'll be grabbing adjacent pages already.  My
bet is that our problem will go away if we remove this extra IO completely.
-chris
Josef Bacik
2011-Aug-01 16:03 UTC
Re: PLEASE TEST: Everybody who is seeing weird and long hangs
On 08/01/2011 11:45 AM, Chris Mason wrote:
> [ original report snipped ]
>
> I'm definitely curious to hear if this helps, but I think it might cause
> a different set of problems.  It writes everything that is dirty on the
> btree, which includes a lot of things we've cow'd in the current
> transaction and marked dirty.  They will have to go through COW again
> if someone wants to modify them again.

But this is happening in the commit after we've done all of our work; we
shouldn't be dirtying anything else at this point, right?

> The btrfs writepage code does this:
>
>	ret = __extent_writepage(page, wbc, &epd);
>
>	extent_write_cache_pages(tree, mapping, &wbc_writepages,
>				 __extent_writepage, &epd, flush_write_bio);
>	flush_epd_write_bio(&epd);

Yeah, but nr_to_write is 1, so after the __extent_writepage it will be 0
and extent_write_cache_pages will just return since there's nothing to
write, so we'll still end up with 1 page at a time being written out.

Thanks,

Josef
Chris Mason
2011-Aug-01 17:54 UTC
Re: PLEASE TEST: Everybody who is seeing weird and long hangs
Excerpts from Josef Bacik's message of 2011-08-01 12:03:34 -0400:
> [ earlier discussion snipped ]
>
> But this is happening in the commit after we've done all of our work, we
> shouldn't be dirtying anything else at this point right?

The commit code is setup to unblock people before we start the IO:

	trans->transaction->blocked = 0;
	spin_lock(&root->fs_info->trans_lock);
	root->fs_info->running_transaction = NULL;
	root->fs_info->trans_no_join = 0;
	spin_unlock(&root->fs_info->trans_lock);
	mutex_unlock(&root->fs_info->reloc_mutex);

	wake_up(&root->fs_info->transaction_wait);

	ret = btrfs_write_and_wait_transaction(trans, root);

So, we should have concurrent FS mods for a new transaction while we are
writing out this old transaction.

> Yeah but nr_to_write is 1, so after the __extent_writepage it will be 0
> and extent_write_cache_pages will just return since there's nothing to
> write, so we'll still end up with 1 page at a time being written out.

We bump nr_to_write to 64:

	struct writeback_control wbc_writepages = {
		.sync_mode	= wbc->sync_mode,
		.older_than_this = NULL,
		.nr_to_write	= 64,
		.range_start	= page_offset(page) + PAGE_CACHE_SIZE,
		.range_end	= (loff_t)-1,
	};

	ret = __extent_writepage(page, wbc, &epd);

	extent_write_cache_pages(tree, mapping, &wbc_writepages,
				 __extent_writepage, &epd, flush_write_bio);
	flush_epd_write_bio(&epd);

-chris
Josef Bacik
2011-Aug-01 18:01 UTC
Re: PLEASE TEST: Everybody who is seeing weird and long hangs
On 08/01/2011 01:54 PM, Chris Mason wrote:
> [ earlier discussion snipped ]
>
> The commit code is setup to unblock people before we start the IO:
> [ code snipped ]
>
> So, we should have concurrent FS mods for a new transaction while we are
> writing out this old transaction.

Ah right, but then this brings up another question: we shouldn't cow them
again since we would have set the new transid.  And isn't this kind of bad,
since somebody could come in and dirty a piece of metadata before we have a
chance to write it out for this transaction, so we end up writing out the
new data instead of what we are trying to commit?

Also, the writepages() approach would get around this problem: since we are
SYNC_ALL, it now tags all dirty pages as TOWRITE and then writes those
pages instead of writing all dirty pages, so anything being dirtied once we
started writepages would be fine.

So this really could explain why this is sucking for people: we are just
walking through and writing everything that's dirty, and then doing the
same thing in wait_marked_extents() again, so we could be writing out
things that aren't in the transaction that we committed, which would mean
we're writing way more than we need to.

> We bump nr_to_write to 64:
> [ code snipped ]

Oops, I missed that.

Thanks,

Josef
Chris Mason
2011-Aug-01 18:21 UTC
Re: PLEASE TEST: Everybody who is seeing weird and long hangs
Excerpts from Josef Bacik's message of 2011-08-01 14:01:35 -0400:
> [ earlier discussion snipped ]
>
> Ah right, but then this brings up another question, we shouldn't cow
> them again since we would have set the new transid.  And isn't this kind
> of bad, since somebody could come in and dirty a piece of metadata
> before we have a chance to write it out for this transaction, so we end
> up writing out the new data instead of what we are trying to commit?

I think we're mixing together different ideas here.  If we're doing a
commit on transaction N, we allow N+1 to start while we're doing the
btrfs_write_and_wait_transaction().  N+1 might allocate and dirty a new
block, which btrfs_write_and_wait_transaction might start IO on.

Strictly speaking this isn't a problem.  It doesn't break any rules of
COW because we're allowed to write metadata at any time.  But, once we
do write it, we must COW it again if we want to change it.  So, anything
that btrfs_write_and_wait_transaction() catches from transaction N+1 is
likely to make more work for us because future mods will have to
allocate a new block.  Basically it's wasted IO.

But, it's also free IO, assuming it was contiguous.  The problem is that
write_cache_pages isn't actually making sure it was contiguous, so we
end up doing many more writes than we could have.

-chris
cwillu
2011-Aug-01 23:28 UTC
Re: PLEASE TEST: Everybody who is seeing weird and long hangs
On Mon, Aug 1, 2011 at 12:21 PM, Chris Mason <chris.mason@oracle.com> wrote:
> [ earlier discussion snipped ]
>
> But, it's also free IO, assuming it was contiguous.  The problem is that
> write_cache_pages isn't actually making sure it was contiguous, so we
> end up doing many more writes than we could have.

First user ("youagree") reported back on irc:

<youagree> guys, just came to report its much worse with josef's patch
<youagree> now i can hardly start anything, it's slowed down most of the time
Chris Mason
2011-Aug-02 00:09 UTC
Re: PLEASE TEST: Everybody who is seeing weird and long hangs
Excerpts from cwillu's message of 2011-08-01 19:28:35 -0400:
> [ earlier discussion snipped ]
>
> First user ("youagree") reported back on irc:
>
> <youagree> guys, just came to report its much worse with josef's patch
> <youagree> now i can hardly start anything, it's slowed down most of the time

Josef's filemap_fdatawrite patch?  He sent a second one to the list that
gets rid of the extra IO done by the current code.  That's the one we hope
will fix things.

-chris