Hi all,

I have a 10TB array (zpool = 2x 5-disk raidz1). I had dedup enabled on a couple of filesystems which I decided to delete last week; the first contained about 6GB of data and was deleted in about 30 minutes, but the second (about 100GB of VMs) is still being deleted (I think) 4.5 days later!

Now, I've seen deletes of dedup-enabled filesystems take a while before (2 days), but 4.5 days is a surprise.

I am wondering what (if anything) I can do to speed this up. My server only has 4GB RAM; would it be beneficial/safe for me to switch off and upgrade to 8GB? I am assuming this may help the delete operation, as more memory should mean that more of the dedup table is stored in RAM?

Or is there anything else I can do to speed things up, or indeed determine how much longer is left?

I'd appreciate any advice, cheers
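[Aside, not part of the original message: one way to gauge whether the dedup table (DDT) fits in RAM is to look at zdb's dedup statistics and apply the commonly cited rough figure of ~320 bytes of metadata per DDT entry. A minimal sketch, assuming the pool is named "tank":

  # Print dedup table statistics for the pool ("tank" is a placeholder name)
  zdb -DD tank

  # Rule-of-thumb RAM estimate: total DDT entries * ~320 bytes each,
  # e.g. 1,000,000 unique blocks * 320 bytes ~= 305 MB of metadata that
  # ideally stays cached in the ARC while the deduped data is being freed.

If that estimate is much larger than the memory available for ARC metadata, destroying deduped data can slow to a crawl, since every freed block needs DDT reference-count updates that keep missing the cache.]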
On Sun, Aug 15, 2010 at 2:30 PM, Marc Emmerson <marc.emmerson at gmail.com> wrote:
> [...] I am wondering what (if anything) I can do to speed this up, my server
> only has 4GB RAM, would it be beneficial/safe for me to switch off, upgrade
> to 8GB? I am assuming this may help the delete operation as more memory
> should mean that more of the dedup table is stored in RAM?

It would be extremely beneficial for you to switch off and upgrade to 8GB.

--Tim
thanks Tim, I have just chucked in another 4GB, hopefully I'll have my server back come the morning!

cheers,
Marc
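[Aside, not part of the original message: after a RAM upgrade, two quick sanity checks are that the OS actually sees the new memory and that the ARC ceiling has grown to match. A sketch using standard Solaris tools:

  # Confirm the installed memory is visible to the OS
  prtconf | grep -i 'memory size'

  # Confirm the ARC's upper bound (c_max) has grown accordingly
  kstat -p zfs:0:arcstats:c_max
]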
Victor Latushkin
2010-Aug-15 22:41 UTC
[zfs-discuss] Help! Dedup delete FS advice needed!!
On Aug 15, 2010, at 11:30 PM, Marc Emmerson wrote:
> Hi all,
> I have a 10TB array (zpool = 2x 5 disk raidz1), I had dedup enabled on a
> couple of filesystems which I decided to delete last week, the first
> contained about 6GB of data and was deleted in about 30 minutes, the second
> (about 100GB of VMs) is still being deleted (I think) 4.5 days later!

Could you please post output of

echo "::arc" | mdb -k

victor
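[Aside, not part of the original message: "::arc" is an mdb dcmd that dumps the live ARC statistics from the running kernel, which is presumably why it is requested here; roughly the same counters are also exposed through kstat. A sketch, assuming suitable privileges:

  # Dump ARC statistics via the kernel debugger (read-only)
  echo "::arc" | pfexec mdb -k

  # The same counters via kstat, in parseable name=value form
  kstat -p zfs:0:arcstats

For a slow dedup delete, the fields of most interest are arc_meta_used versus arc_meta_limit, since the dedup table is cached as ARC metadata.]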
Hi Victor,
I just woke up and checked my server and the delete operation has completed; however, I ran your command anyway and here is the output:

marc@server:~$ echo "::arc" | pfexec mdb -k
hits                      = 352207629
misses                    = 2291912
demand_data_hits          = 270352
demand_data_misses        = 6955
demand_metadata_hits      = 42142882
demand_metadata_misses    = 1707403
prefetch_data_hits        = 698
prefetch_data_misses      = 1526
prefetch_metadata_hits    = 309793697
prefetch_metadata_misses  = 576028
mru_hits                  = 1893108
mru_ghost_hits            = 1001360
mfu_hits                  = 279741307
mfu_ghost_hits            = 733122
deleted                   = 394887
recycle_miss              = 377618
mutex_miss                = 24
evict_skip                = 40043727
evict_l2_cached           = 185477632
evict_l2_eligible         = 6408233984
evict_l2_ineligible       = 1307796992
hash_elements             = 22851
hash_elements_max         = 510829
hash_collisions           = 2565282
hash_chains               = 1878
hash_chain_max            = 15
p                         = 722 MB
c                         = 2183 MB
c_min                     = 862 MB
c_max                     = 6903 MB
size                      = 717 MB
hdr_size                  = 101617104
data_size                 = 608756736
other_size                = 41600128
l2_hits                   = 7684
l2_misses                 = 2280245
l2_feeds                  = 23245
l2_rw_clash               = 0
l2_read_bytes             = 31473664
l2_write_bytes            = 358850560
l2_writes_sent            = 321
l2_writes_done            = 321
l2_writes_error           = 0
l2_writes_hdr_miss        = 2
l2_evict_lock_retry       = 0
l2_evict_reading          = 0
l2_free_on_write          = 2678
l2_abort_lowmem           = 0
l2_cksum_bad              = 0
l2_io_error               = 0
l2_size                   = 43856384
l2_hdr_size               = 0
memory_throttle_count     = 0
arc_no_grow               = 0
arc_tempreserve           = 0 MB
arc_meta_used             = 253 MB
arc_meta_limit            = 1725 MB
arc_meta_max              = 2153 MB

I'd be interested to know if there is anything significant here.

Thanks,

Marc
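[Aside, not part of the original message: in the output above, c_max = 6903 MB is consistent with the newly installed 8GB of RAM, and arc_meta_used (253 MB) sits well below arc_meta_limit (1725 MB), so by this point the DDT no longer appears to be starved for cache. On builds of this era the metadata limit could also be raised explicitly via the zfs_arc_meta_limit tunable in /etc/system; the value below is purely illustrative, not a recommendation from the thread:

  # Append an ARC metadata-limit tunable to /etc/system (illustrative value:
  # 2 GB, given in bytes); back up /etc/system first, takes effect on reboot.
  echo "set zfs:zfs_arc_meta_limit=0x80000000" | pfexec tee -a /etc/system
]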
Tim, thanks, you were right; it looks like the destroy completed in about an hour or so after the additional memory was added.

Much appreciated,
Marc
Victor Latushkin
2010-Aug-16 08:54 UTC
[zfs-discuss] Help! Dedup delete FS advice needed!!
On Aug 16, 2010, at 12:29 PM, Marc Emmerson wrote:
> Hi Victor,
> I just woke up and checked my server and the delete operation has completed,
> however I ran your command anyway and here is the output:
> [...]

If all is well, then requested information is no longer relevant ;-)

victor