Robert Milkowski
2006-Sep-05 17:48 UTC
[zfs-discuss] ZFS forces system to paging to the point it is unresponsive
Hi.

v440, S10U2 + patches

OS and Kernel Version: SunOS XXXXX 5.10 Generic_118833-20 sun4u sparc SUNW,Sun-Fire-V440

NFS server with ZFS as the local storage.

We were rsyncing a UFS filesystem to a ZFS filesystem exported over NFS. After some time the server exporting ZFS over NFS became unresponsive. The operator decided to force a panic and reboot the server. Further examination showed that the system was paging heavily, probably due to ZFS, as no other services run there.

I have just hit another problem that looks similar to the last one. I decided to put nfsd into the RT class.

I guess ZFS is using all memory for its caches and after some time it fails to free it and forces the system to page. This is BAD, really BAD.

More details on the previous problem.

bash-3.00# savecore /f3-1/
System dump time: Sat Sep  2 03:31:18 2006
Constructing namelist /f3-1//unix.0
Constructing corefile /f3-1//vmcore.0
100% done: 1043993 of 1043993 pages saved
bash-3.00# cd /f3-1/
bash-3.00#
bash-3.00# mdb 0
Loading modules: [ unix krtld genunix dtrace specfs ufs sd md ip sctp usba fcp fctl qlc ssd lofs zfs random logindmux ptm cpc nfs ipc ]
> ::status
debugging crash dump vmcore.0 (64-bit) from XXXXXX
operating system: 5.10 Generic_118833-20 (sun4u)
panic message: sync initiated
dump content: kernel pages only
> ::spa
ADDR                 STATE NAME
0000060001271680    ACTIVE f3-1
0000060003bd4dc0    ACTIVE f3-2
> ::memstat
Page Summary                Pages                MB  %Tot
------------     ----------------  ----------------  ----
Kernel                    1016199              7939   98%
Anon                         4420                34    0%
Exec and libs                 736                 5    0%
Page cache                     36                 0    0%
Free (cachelist)             1962                15    0%
Free (freelist)             18338               143    2%

Total                     1041691              8138
Physical                  1024836              8006
> ::swapinfo
ADDR              VNODE          PAGES     FREE NAME
00000600034ab5a0  600012ff8c0  1048763  1028489 /dev/md/dsk/d15
>

We were synchronizing a lot of small files over NFS, writing to f3-1/d611. I would say that with ZFS it's expected to be low on memory most of the time, but not to the point where the host starts paging.

bash-3.00# sar -g

SunOS XXXXX 5.10 Generic_118833-20 sun4u    09/02/2006

00:00:00  pgout/s ppgout/s pgfree/s pgscan/s %ufs_ipf
[...]
02:15:01     0.03     0.04     0.02     0.00     0.00
02:20:00     0.04     0.04     0.02     0.00     0.00
02:25:00     0.02     0.03     0.01     0.00     0.00
02:30:00     0.02     0.03     0.01     0.00     0.00
02:35:00     0.03     0.03     0.01     0.00     0.00
02:40:01     0.03     0.04     0.03     0.00     0.00
02:45:02     5.98    82.77    93.20 65115.59     0.00
03:39:28   unix restarts
03:40:00     0.35     0.61     0.61     0.00    60.00
03:45:00     0.03     0.06     0.06     0.00     0.00
03:50:00     0.02     0.03     0.02     0.00     0.00
03:55:00     0.02     0.02     0.02     0.00     0.00

bash-3.00# sar -u

SunOS XXXX 5.10 Generic_118833-20 sun4u    09/02/2006

00:00:00    %usr    %sys    %wio   %idle
[...]
02:00:00       0       1       0      99
02:05:00       0       1       0      99
02:10:00       0       1       0      99
02:15:01       0       1       0      99
02:20:00       0      15       0      85
02:25:00       0      34       0      66
02:30:00       0      20       0      80
02:35:00       0      22       0      78
02:40:01       0      45       0      55
02:45:02       0      61       0      38
03:39:28   unix restarts
03:40:00       5      10       0      84
03:45:00       1       1       0      98
03:50:00       0       0       0     100

bash-3.00# sar -q

SunOS xxx 5.10 Generic_118833-20 sun4u    09/02/2006

00:00:00 runq-sz %runocc swpq-sz %swpocc
[...]
02:00:00     0.0       0     0.0       0
02:05:00     1.0       0     0.0       0
02:10:00     0.0       0     0.0       0
02:15:01     0.0       0     0.0       0
02:20:00     1.1       5     0.0       0
02:25:00     1.4      12     0.0       0
02:30:00     2.1       6     0.0       0
02:35:00     3.4       9     0.0       0
02:40:01     2.8      25     0.0       0
02:45:02     4.0      44   116.6      12
03:39:28   unix restarts
03:40:00     1.0       3     0.0       0
03:45:00     0.0       0     0.0       0
03:50:00     0.0       0     0.0       0

A crash dump can be provided off-list, not for public eyes.

This message posted from opensolaris.org
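The ARC state that later messages in this thread pull out of the crash dump can also be watched on a live system while the rsync runs. A minimal sketch, assuming mdb -k is usable on the box and that the arc structure exposes the same members shown later in this thread (size, p, c, c_min, c_max); on other builds the member names may differ:

    # sample ARC size/targets (hex) and free memory every 10 seconds
    while true; do
        echo "arc::print size p c c_min c_max" | mdb -k
        vmstat 1 2 | tail -1     # "free" and "sr" columns show free memory and scan rate
        sleep 10
    done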
Mark Maybee
2006-Sep-05 19:25 UTC
[zfs-discuss] ZFS forces system to paging to the point it is unresponsive
Robert,

I would be interested in seeing your crash dump. ZFS will consume much of your memory *in the absence of memory pressure*, but it should be responsive to memory pressure and give up memory when this happens. It looks like you have 8GB of memory on your system? ZFS should never consume more than 7GB of this under any circumstances. Note there are a few outstanding bugs that could be coming into play here:

6456888 zpool scrubbing leads to memory exhaustion and system hang
6416757 zfs could still use less memory
6447701 ZFS hangs when iSCSI Target attempts to initialize its backing store

-Mark

P.S. It would be useful to see the output of:

	> arc::print
and
	> ::kmastat

Robert Milkowski wrote:
> Hi.
>
> v440, S10U2 + patches
>
> OS and Kernel Version: SunOS XXXXX 5.10 Generic_118833-20 sun4u sparc SUNW,Sun-Fire-V440
>
> NFS server with ZFS as the local storage.
>
> We were rsyncing a UFS filesystem to a ZFS filesystem exported over NFS. After some time
> the server exporting ZFS over NFS became unresponsive. The operator decided to force a
> panic and reboot the server. Further examination showed that the system was paging
> heavily, probably due to ZFS, as no other services run there.
>
> [... mdb and sar output trimmed; quoted in full in the original message above ...]
>
> A crash dump can be provided off-list, not for public eyes.
>
> This message posted from opensolaris.org
Robert Milkowski
2006-Sep-05 20:29 UTC
[zfs-discuss] Re: ZFS forces system to paging to the point it is
Yes, the server has 8GB of RAM. Most of the time there's about 1GB of free RAM.

bash-3.00# mdb 0
Loading modules: [ unix krtld genunix dtrace specfs ufs sd md ip sctp usba fcp fctl qlc ssd lofs zfs random logindmux ptm cpc nfs ipc ]
> arc::print
{
    anon = ARC_anon
    mru = ARC_mru
    mru_ghost = ARC_mru_ghost
    mfu = ARC_mfu
    mfu_ghost = ARC_mfu_ghost
    size = 0x8b72ae00
    p = 0xfe41b00
    c = 0xfe51b00
    c_min = 0xfe51b00
    c_max = 0x1bca36000
    hits = 0x29fc378
    misses = 0x32b8693
    deleted = 0x5d625d0
    skipped = 0x1389d5
    hash_elements = 0x22da4
    hash_elements_max = 0xbe25d
    hash_collisions = 0x2f54753
    hash_chains = 0x985c
Segmentation Fault
bash-3.00#
bash-3.00# mdb 0
Loading modules: [ unix krtld genunix dtrace specfs ufs sd md ip sctp usba fcp fctl qlc ssd lofs zfs random logindmux ptm cpc nfs ipc ]
> ::kmastat
cache                        buf    buf    buf    memory     alloc alloc
name                        size in use  total    in use   succeed  fail
------------------------- ------ ------ ------ --------- --------- -----
kmem_magazine_1               16    375   48768    786432    1908294  0
kmem_magazine_3               32    376   28448    917504    1246964  0
kmem_magazine_7               64    417   33274   2146304    1704779  0
kmem_magazine_15             128    271    8946   1163264    1307460  0
kmem_magazine_31             256     64    1209    319488     725291  0
kmem_magazine_47             384     16     399    155648     217361  0
kmem_magazine_63             512      8     135     73728      63905  0
kmem_magazine_95             768     32     360    294912     913867  0
kmem_magazine_143           1152     81     455    532480     450249  0
kmem_slab_cache               56 176501  379030  21413888    5102165  0
kmem_bufctl_cache             24 915296 1268877  30662656    7584708  0
kmem_bufctl_audit_cache      128      0       0         0          0  0
kmem_va_8192                8192 592169  629216 859570176    1693115  0
kmem_va_16384              16384 130413  130448 2137260032   4533541  0
kmem_va_24576              24576     25     210   5505024     181465  0
kmem_va_32768              32768     28     160   5242880     103622  0
kmem_va_40960              40960     27     144   6291456     176034  0
kmem_va_49152              49152      4      35   1835008      29017  0
kmem_va_57344              57344     28     100   6553600     106836  0
kmem_va_65536              65536    440     488  31981568      15484  0
kmem_alloc_8                   8 232577 1385154  11157504  100981528  0
kmem_alloc_16                 16  48379   52832    851968  407819422  0
kmem_alloc_24                 24  68298   82377   1990656  705360190  0
kmem_alloc_32                 32  18649  129032   4161536 1760620942  0
kmem_alloc_40                 40 166942  792309  31973376  193330254  0
kmem_alloc_48                 48 317960  495001  23994368  493946685  0
kmem_alloc_56                 56  59913   62785   3547136   45094251  0
kmem_alloc_64                 64   8360   13970    901120  159881658  0
kmem_alloc_80                 80 297457  351985  28549120   99137034  0
kmem_alloc_96                 96 1998619 1999032 194953216 137727632  0
kmem_alloc_112               112 369605  682992  77709312  434889863  0
kmem_alloc_128               128 122477  184275  23961600   53130599  0
kmem_alloc_160               160   1543    4250    696320  256162141  0
kmem_alloc_192               192    377    1092    212992    2372982  0
kmem_alloc_224               224    335    1440    327680  119325910  0
kmem_alloc_256               256 139861  139996  36995072  114233111  0
kmem_alloc_320               320   8552    9000   2949120   20605159  0
kmem_alloc_384               384  50585   51072  19922944   96252224  0
kmem_alloc_448               448    213     252    114688    1275511  0
kmem_alloc_512               512    626    1350    737280    1705092  0
kmem_alloc_640               640    155    1944   1327104  674845891  0
kmem_alloc_768               768     36      60     49152     204824  0
kmem_alloc_896               896    177     207    188416     566538  0
kmem_alloc_1152             1152   2269    2478   2899968  104579113  0
kmem_alloc_1344             1344    105     126    172032    1573342  0
kmem_alloc_1600             1600     22      45     73728     182562  0
kmem_alloc_2048             2048    179     216    442368     878831  0
kmem_alloc_2688             2688     51      75    204800     861400  0
kmem_alloc_4096             4096     90      98    401408    1961901  0
kmem_alloc_8192             8192   3470    3480  28508160   16181656  0
kmem_alloc_12288           12288     35      44    540672     176207  0
kmem_alloc_16384           16384     65      69   1130496     144095  0
streams_mblk                  64   6634    6731    434176  314926037  0
streams_dblk_16              128    325     630     81920   27137119  0
streams_dblk_80              192    923    1092    212992  173564322  0
streams_dblk_144 256 33 93 24576 18206939 0 streams_dblk_208 320 201 550 180224 5069250 0 streams_dblk_272 384 21 189 73728 2272180 0 streams_dblk_336 448 8 72 32768 1687014 0 streams_dblk_528 640 0 60 40960 1869745 0 streams_dblk_1040 1152 1 35 40960 2957380 0 streams_dblk_1488 1600 0 40 65536 113453 0 streams_dblk_1936 2048 3 192 393216 298324611 0 streams_dblk_2576 2688 23 24 65536 175466 0 streams_dblk_3920 4032 0 6 24576 960179 0 streams_dblk_8192 112 0 63 8192 1813518 0 streams_dblk_12112 12224 0 6 73728 102802 0 streams_dblk_16384 112 0 63 8192 60643 0 streams_dblk_20304 20416 0 6 122880 25679 0 streams_dblk_24576 112 0 63 8192 51544 0 streams_dblk_28496 28608 0 6 172032 18602 0 streams_dblk_32768 112 0 63 8192 1552487 0 streams_dblk_36688 36800 0 8 294912 851233 0 streams_dblk_40960 112 0 63 8192 882 0 streams_dblk_44880 44992 0 0 0 0 0 streams_dblk_49152 112 0 0 0 0 0 streams_dblk_53072 53184 0 0 0 0 0 streams_dblk_57344 112 0 0 0 0 0 streams_dblk_61264 61376 0 0 0 0 0 streams_dblk_65536 112 0 63 8192 13 0 streams_dblk_69456 69568 0 0 0 0 0 streams_dblk_73728 112 0 0 0 0 0 streams_dblk_esb 112 4012 4536 589824 514487088 0 streams_fthdr 264 0 0 0 0 0 streams_ftblk 232 0 0 0 0 0 multidata 248 0 0 0 0 0 multidata_pdslab 7112 0 0 0 0 0 multidata_pattbl 32 0 0 0 0 0 log_cons_cache 48 18 169 8192 110 0 taskq_ent_cache 56 2924 6525 368640 21745380 0 taskq_cache 216 89 111 24576 187 0 id32_cache 32 0 0 0 0 0 bp_map_16384 16384 0 32 524288 2719732 0 bp_map_32768 32768 0 16 524288 1432680 0 bp_map_49152 49152 0 10 524288 26932 0 bp_map_65536 65536 0 8 524288 144 0 bp_map_81920 81920 0 6 524288 70 0 bp_map_98304 98304 0 5 524288 41 0 bp_map_114688 114688 0 4 524288 13 0 bp_map_131072 131072 0 4 524288 29 0 memseg_cache 112 0 0 0 0 0 mod_hash_entries 24 221 678 16384 177617 0 ipp_mod 304 0 0 0 0 0 ipp_action 368 0 0 0 0 0 ipp_packet 64 0 0 0 0 0 sfmmuid_cache 176 69 138 24576 149233 0 sfmmu_tsbinfo_cache 64 68 254 16384 309862 0 sfmmu_tsb8k_cache 8192 0 0 0 0 0 sfmmu_tsb_cache 8192 24 28 229376 154898 0 sfmmu8_cache 312 118896 129038 40656896 12450074 0 sfmmu1_cache 88 510 8096 720896 611953 0 pa_hment_cache 64 50 254 16384 162653 0 ism_blk_cache 272 0 0 0 0 0 ism_ment_cache 32 0 0 0 0 0 seg_cache 72 5231 5989 434176 6128673 0 dev_info_node_cache 480 313 400 204800 1594 0 segkmem_ppa_4096 4096 0 8 32768 239 0 segkp_8192 8192 65 80 655360 62334 0 segkp_16384 16384 0 0 0 0 0 segkp_24576 24576 0 0 0 0 0 segkp_32768 32768 2004 2060 67502080 17178 0 segkp_40960 40960 0 0 0 0 0 umem_np_8192 8192 0 32 262144 1068 0 umem_np_16384 16384 0 16 262144 529 0 umem_np_24576 24576 0 0 0 0 0 umem_np_32768 32768 0 8 262144 467 0 umem_np_40960 40960 0 0 0 0 0 umem_np_49152 49152 0 0 0 0 0 umem_np_57344 57344 0 0 0 0 0 umem_np_65536 65536 0 8 524288 464 0 thread_cache 792 1534 1550 1269760 230916 0 lwp_cache 904 1534 1557 1417216 26374 0 turnstile_cache 64 1998 2159 139264 285230 0 cred_cache 148 129 212 32768 113567 0 rctl_cache 40 1009 1421 57344 957412 0 rctl_val_cache 64 1991 2540 163840 2105277 0 task_cache 104 47 156 16384 1639 0 cyclic_id_cache 64 8 127 8192 10 0 dnlc_space_cache 24 0 339 8192 173 0 vn_cache 240 2400324 2507745 662691840 6307891 0 file_cache 56 548 725 40960 15754899 0 stream_head_cache 400 295 340 139264 245787 0 queue_cache 656 749 804 548864 536767 0 syncq_cache 160 22 50 8192 453 0 qband_cache 64 2 127 8192 2 0 linkinfo_cache 48 13 169 8192 13 0 ciputctrl_cache 256 4 31 8192 8 0 serializer_cache 64 29 127 8192 630 0 as_cache 216 68 148 32768 149231 0 marker_cache 128 0 63 8192 701688 
0 anon_cache 48 24808 54925 2662400 5659998 0 anonmap_cache 48 3144 3718 180224 2879378 0 segvn_cache 104 5231 5772 606208 5582882 0 flk_edges 48 0 169 8192 30 0 fdb_cache 104 0 234 24576 77961 0 timer_cache 136 1 59 8192 34 0 physio_buf_cache 248 0 32 8192 8366 0 snode_cache 152 389 477 73728 6222421 0 ufs_inode_cache 368 10891 10912 4063232 10949 0 directio_buf_cache 272 0 0 0 0 0 lufs_save 24 0 339 8192 25569 0 lufs_bufs 256 0 62 16384 27352 0 lufs_mapentry_cache 112 0 216 24576 708438 0 pcisch3_dvma_8192 8192 25 68 557056 38461904 0 mpt0_cache 480 12 48 24576 1598159 0 dv_node_cache 120 50 469 57344 7091 0 clnt_clts_endpnt_cache 88 0 0 0 0 0 md_stripe_parent 96 0 168 16384 1068770 0 md_stripe_child 312 0 26 8192 1245175 0 md_mirror_parent 160 0 50 8192 595729 0 md_mirror_child 304 0 26 8192 1243454 0 md_mirror_wow 16440 0 8 139264 560 0 pcisch2_dvma_8192 8192 16 16 131072 16 0 kcf_sreq_cache 48 0 0 0 0 0 kcf_areq_cache 272 0 0 0 0 0 kcf_context_cache 88 0 0 0 0 0 ipsec_actions 72 0 113 8192 1324 0 ipsec_selectors 72 0 0 0 0 0 ipsec_policy 72 0 0 0 0 0 ipsec_info 304 0 26 8192 1324 0 ip_minor_arena_1 1 138 256 256 134434 0 ipcl_conn_cache 464 63 90 49152 120958 0 ipcl_tcpconn_cache 1640 93 153 278528 22394 0 ire_cache 344 103 138 49152 6102 0 tcp_timercache 88 155 368 32768 4842107 0 tcp_sack_info_cache 80 41 202 16384 13038 0 tcp_iphc_cache 120 92 201 24576 21146 0 squeue_cache 136 4 42 8192 4 0 sctp_conn_cache 2208 1 11 24576 1 0 sctp_faddr_cache 168 0 0 0 0 0 sctp_set_cache 24 0 0 0 0 0 sctp_ftsn_set_cache 16 0 0 0 0 0 sctpsock 568 0 0 0 0 0 sctp_assoc 64 0 0 0 0 0 socktpi_cache 408 77 114 49152 63094 0 socktpi_unix_cache 408 6 38 16384 516 0 ncafs_cache 456 0 0 0 0 0 mac_impl_cache 752 0 0 0 0 0 dls_cache 168 0 0 0 0 0 soft_ring_cache 176 0 0 0 0 0 dls_vlan_cache 48 0 0 0 0 0 dls_link_cache 624 0 0 0 0 0 dld_ctl_1 1 0 0 0 0 0 dld_str_cache 248 0 32 8192 1 0 udp_cache 384 51 84 32768 120467 0 process_cache 3048 71 96 294912 80652 0 exacct_object_cache 40 0 203 8192 1510308 0 ch_private_cache 8208 4 8 73728 4 0 fctl_cache 112 0 72 8192 28 0 pcisch1_dvma_8192 8192 24 104 851968 41115591 0 tl_cache 432 41 72 32768 1139 0 keysock_1 1 0 0 0 0 0 spdsock_1 1 0 64 64 4 0 fnode_cache 176 5 42 8192 51 0 pipe_cache 320 34 75 24576 53158 0 fp2_cache 720 1 11 8192 17 0 fp0_cache 720 1 11 8192 15 0 fcp0_cache 1160 0 42 49152 49136516 0 fcp2_cache 1160 0 28 32768 48423491 0 kssl_cache 1560 0 0 0 0 0 namefs_inodes_1 1 27 64 64 27 0 port_cache 80 2 101 8192 2 0 pcisch0_dvma_8192 8192 16 36 294912 100564305 0 ip_minor_1 1 0 0 0 0 0 ar_minor_1 1 0 0 0 0 0 lnode_cache 32 2 254 8192 2 0 icmp_minor_1 1 0 0 0 0 0 pty_map 56 4 145 8192 17 0 dtrace_state_cache 2048 0 4 8192 1 0 qif_head_cache 264 0 0 0 0 0 mpt1_cache 480 0 16 8192 95 0 md_raid_parent 120 0 0 0 0 0 md_raid_child 1040 0 0 0 0 0 md_raid_cbufs 376 0 0 0 0 0 md_trans_parent 80 0 0 0 0 0 md_trans_child 248 0 0 0 0 0 authkern_cache 72 0 113 8192 26288490 0 authloopback_cache 72 0 113 8192 7812 0 authdes_cache_handle 80 0 0 0 0 0 rnode_cache 648 0 96 65536 699048 0 nfs_access_cache 56 0 10150 573440 6865576 0 client_handle_cache 32 0 254 8192 76 0 rnode4_cache 960 0 0 0 0 0 svnode_cache 40 0 0 0 0 0 nfs4_access_cache 56 0 0 0 0 0 client_handle4_cache 32 0 0 0 0 0 nfs4_ace4vals_cache 48 0 0 0 0 0 nfs4_ace4_list_cache 264 0 0 0 0 0 NFS_idmap_cache 48 0 0 0 0 0 lm_vnode 184 2 44 8192 11 0 lm_xprt 32 3 254 8192 3 0 lm_sysid 160 2 50 8192 24 0 lm_client 128 0 63 8192 43 0 lm_async 32 0 0 0 0 0 lm_sleep 96 0 0 0 0 0 lm_config 80 3 101 8192 1202 0 
zio_buf_512 512 2388292 2388330 1304346624 176134688 0 zio_buf_1024 1024 18 96 98304 17058709 0 zio_buf_1536 1536 0 30 49152 2791254 0 zio_buf_2048 2048 0 20 40960 1051435 0 zio_buf_2560 2560 0 33 90112 1716360 0 zio_buf_3072 3072 0 40 122880 1902497 0 zio_buf_3584 3584 0 225 819200 3918593 0 zio_buf_4096 4096 3 34 139264 20336550 0 zio_buf_5120 5120 0 144 737280 8932632 0 zio_buf_6144 6144 0 36 221184 5274922 0 zio_buf_7168 7168 0 16 114688 3350804 0 zio_buf_8192 8192 0 11 90112 9131264 0 zio_buf_10240 10240 0 12 122880 2268700 0 zio_buf_12288 12288 0 8 98304 3258896 0 zio_buf_14336 14336 0 60 860160 15853089 0 zio_buf_16384 16384 142762 142793 2339520512 74889652 0 zio_buf_20480 20480 0 6 122880 1299564 0 zio_buf_24576 24576 0 5 122880 1063597 0 zio_buf_28672 28672 0 6 172032 712545 0 zio_buf_32768 32768 0 4 131072 1339604 0 zio_buf_40960 40960 0 6 245760 1736172 0 zio_buf_49152 49152 0 4 196608 609853 0 zio_buf_57344 57344 0 5 286720 428139 0 zio_buf_65536 65536 520 522 34209792 8839788 0 zio_buf_73728 73728 0 5 368640 284979 0 zio_buf_81920 81920 0 5 409600 133392 0 zio_buf_90112 90112 0 6 540672 96787 0 zio_buf_98304 98304 0 4 393216 133942 0 zio_buf_106496 106496 0 5 532480 91769 0 zio_buf_114688 114688 0 5 573440 72130 0 zio_buf_122880 122880 0 5 614400 52151 0 zio_buf_131072 131072 100 107 14024704 7326248 0 dmu_buf_impl_t 328 2531066 2531232 863993856 237052643 0 dnode_t 648 2395209 2395212 1635131392 83304588 0 arc_buf_hdr_t 128 142786 390852 50823168 155745359 0 arc_buf_t 40 142786 347333 14016512 160502001 0 zil_lwb_cache 208 28 468 98304 30507668 0 zfs_znode_cache 192 2388224 2388246 465821696 83149771 0 nfslog_small_rec 512 0 0 0 0 0 nfslog_medium_rec 8192 0 0 0 0 0 nfslog_large_rec 32768 0 0 0 0 0 exi_cache_handle 40 12 203 8192 7880 0 md_softpart_parent 88 0 0 0 0 0 md_softpart_child 304 0 0 0 0 0 ------------------------- ------ ------ ------ --------- --------- ----- Total [static] 1359872 393573 0 Total [hat_memload] 40656896 12450074 0 Total [kmem_msb] 58466304 21225043 0 Total [kmem_va] 3054239744 6839114 0 Total [kmem_default] 3616178176 125605033 0 Total [bp_map] 4194304 4179641 0 Total [kmem_tsb_default] 229376 154898 0 Total [hat_memload1] 720896 611953 0 Total [segkmem_ppa] 32768 239 0 Total [umem_np] 1310720 2528 0 Total [segkp] 68157440 79512 0 Total [pcisch3_dvma] 557056 38461904 0 Total [pcisch2_dvma] 131072 16 0 Total [ip_minor_arena] 256 134434 0 Total [pcisch1_dvma] 851968 41115591 0 Total [spdsock] 64 4 0 Total [namefs_inodes] 64 27 0 Total [pcisch0_dvma] 294912 100564305 0 ------------------------- ------ ------ ------ --------- --------- ----- vmem memory memory memory alloc alloc name in use total import succeed fail ------------------------- --------- ---------- --------- --------- ----- heap 1107183198208 4398046511104 0 547060 0 vmem_metadata 111730688 111935488 111935488 12927 0 vmem_seg 105259008 105259008 105259008 12849 0 vmem_hash 6124160 6144000 6144000 79 0 vmem_vmem 309720 362480 327680 97 0 static 1392640 1392640 1392640 162 0 static_alloc 24576 24576 24576 3 0 hat_memload 40656896 40656896 40656896 5120 0 kstat 300576 311296 245760 1355 0 kmem_metadata 62562304 120848384 120848384 26203 0 kmem_msb 58466304 58466304 58466304 25563 0 kmem_cache 228000 245760 245760 402 0 kmem_hash 3845632 3850240 3850240 2449 0 kmem_log 262880 270336 270336 6 0 kmem_firewall_va 0 0 0 0 0 kmem_firewall 0 0 0 0 0 mod_sysfile 641 8192 8192 19 0 kmem_oversize 139801616 145031168 145031168 4014986 0 kmem_va 7354638336 7354638336 7354638336 538418 0 kmem_default 
7911145472 7911145472 7911145472 5783273 0 little_endian 1250976 1482752 1482752 399 0 big_endian 8721548 8962048 8962048 1547 0 bp_map 4194304 4194304 4194304 18 0 ksyms 4780480 5046272 5046272 284 0 ctf 120819 147456 147456 281 0 kmem_tsb 4194304 4194304 4194304 1 0 kmem_tsb_default 1327104 4194304 4194304 15791 0 hat_memload1 720896 720896 720896 108 0 segkmem_ppa 32768 32768 32768 2 0 umem_np 1310720 1310720 1310720 915 0 heap32 5398592 134217728 0 542 0 id32 0 0 0 0 0 module_data 2535106 5505024 5242880 390 0 promplat 0 0 0 519 0 heaptext 42295296 134217728 0 96 0 module_text 9263996 10854400 8732672 281 0 logminor_space 34 262137 0 126 0 taskq_id_arena 55 2147483647 0 152 0 heap_lp 1027604480 1099511627776 0 239 0 kmem_lp 1027604480 1027604480 1027604480 239 80 segkp 68386816 2147483648 0 40183 0 rctl_ids 27 32767 0 27 0 zoneid_space 0 9998 0 0 0 taskid_space 47 999999 0 1481 0 pool_ids 0 999998 0 0 0 contracts 48 2147483646 0 1486 0 regspec 9404416 5368709120 0 106 0 pcisch3_dvma 38404096 503316480 0 59522 0 pcisch2_dvma 131072 503316480 0 4 0 ip_minor_arena 256 262140 0 4 0 dld_ctl 0 4294967295 0 0 0 dld_minor_arena 1 4294967295 0 1 0 pcisch1_dvma 38715392 503316480 0 3593573 0 tl_minor_space 41 262138 0 983 0 keysock 0 4294967295 0 0 0 spdsock 64 4294967295 0 1 0 namefs_inodes 64 65536 0 1 0 pcisch0_dvma 75726848 503316480 0 13095 0 ip_minor 0 262142 0 0 0 ar_minor 0 262142 0 0 0 icmp_minor 0 262142 0 0 0 ptms_minor 4 16 0 17 0 devfsadm_event_channel 1 101 0 1 0 devfsadm_event_channel 1 2 0 1 0 syseventconfd_event_channel 0 101 0 0 0 syseventconfd_event_channel 1 2 0 1 0 syseventd_channel 4 101 0 4 0 syseventd_channel 1 2 0 1 0 dtrace 86299 4294967295 0 92680 0 dtrace_minor 0 4294967293 0 1 0 heaptext_holesrc_14 704512 2097152 0 12 0 heaptext_hole_14 684416 704512 704512 40 0 heaptext_holesrc_13 622592 2097152 0 6 0 heaptext_hole_13 590856 622592 622592 16 0 heaptext_holesrc_15 245760 2097152 0 19 0 heaptext_hole_15 236080 245760 245760 44 0 module_text_holesrc 1409024 4194304 0 40 0 heaptext_hole_0 1368876 1409024 1409024 98 0 heaptext_holesrc_16 466944 2097152 0 14 0 heaptext_hole_16 431156 466944 466944 33 0 heaptext_holesrc_12 409600 2097152 0 8 0 heaptext_hole_12 407784 409600 409600 12 0 lmsysid_space 3 16383 0 25 0 heaptext_holesrc_11 286720 2097152 0 1 0 heaptext_hole_11 284120 286720 286720 2 0 msqids 0 128 0 0 0 shmids 0 128 0 0 0 semids 0 128 0 0 0 logdmux_minor 0 256 0 0 0 ------------------------- --------- ---------- --------- --------- ----- This message posted from opensolaris.org
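The ZFS-related rows are buried in the middle of that ::kmastat listing. A quick way to pull just those caches out of the same dump (a sketch; the cache names are the ones visible above, and "0" is the crash-dump argument already used in this thread) is:

    echo ::kmastat | mdb 0 | egrep '^(zio_buf|dmu_buf_impl_t|dnode_t|arc_buf|zfs_znode_cache|vn_cache)'

That subset is roughly what gets analyzed in the next message.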
Robert Milkowski
2006-Sep-06 12:29 UTC
[zfs-discuss] Re: ZFS forces system to paging to the point it is unresponsive
It looks like I discovered a workaround.

I've got another zpool within an RG in SC. The other zpool does not have production data (yet), so I can switch it between nodes freely. By doing this every 3 minutes I can stay safe on free memory, at least so far. I guess it frees some ARC cache.

What is not that clear, however, is that when I created another pool and just did export/import, sometimes I got some memory back and sometimes not. In the end it works worse than switching the entire RG, which means not only exporting the zpool but also restarting nfsd. It looks like those two together help.

This message posted from opensolaris.org
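A rough sketch of the export/import half of this workaround, assuming a scratch pool (here called "scratch", a hypothetical name) that carries no production data; exporting a pool drops its znodes/dnodes and gives the ARC a chance to shrink:

    #!/bin/sh
    # crude memory-relief loop: bounce a non-production pool every 3 minutes
    POOL=scratch        # hypothetical pool name; must not hold production data
    while :; do
        zpool export $POOL && zpool import $POOL
        sleep 180
    done

As noted above, export/import alone did not always give memory back; failing the whole SC resource group over (which also restarts nfsd) was more reliable.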
Mark Maybee
2006-Sep-06 17:39 UTC
[zfs-discuss] Re: ZFS forces system to paging to the point it is
Hmmm, interesting data. See comments in-line:

Robert Milkowski wrote:
> Yes, the server has 8GB of RAM.
> Most of the time there's about 1GB of free RAM.
>
> bash-3.00# mdb 0
> Loading modules: [ unix krtld genunix dtrace specfs ufs sd md ip sctp usba fcp fctl qlc ssd lofs zfs random logindmux ptm cpc nfs ipc ]
>
>> arc::print
>
> {
>     anon = ARC_anon
>     mru = ARC_mru
>     mru_ghost = ARC_mru_ghost
>     mfu = ARC_mfu
>     mfu_ghost = ARC_mfu_ghost
>     size = 0x8b72ae00

We are referencing about 2.2GB of data from the ARC.

>     p = 0xfe41b00
>     c = 0xfe51b00

We are trying to get down to our minimum target size of 16MB. So we are obviously feeling memory pressure and trying to react.

>     c_min = 0xfe51b00
>     c_max = 0x1bca36000
...
>> ::kmastat
>
> cache                        buf    buf    buf    memory     alloc alloc
> name                        size in use  total    in use   succeed  fail
> ------------------------- ------ ------ ------ --------- --------- -----
...
> vn_cache                     240 2400324 2507745 662691840    6307891  0

This is very interesting: 2.4 million vnodes are "active".

...
> zio_buf_512                  512 2388292 2388330 1304346624 176134688  0
> zio_buf_1024                1024     18      96     98304   17058709  0
> zio_buf_1536                1536      0      30     49152    2791254  0
> zio_buf_2048                2048      0      20     40960    1051435  0
> zio_buf_2560                2560      0      33     90112    1716360  0
> zio_buf_3072                3072      0      40    122880    1902497  0
> zio_buf_3584                3584      0     225    819200    3918593  0
> zio_buf_4096                4096      3      34    139264   20336550  0
> zio_buf_5120                5120      0     144    737280    8932632  0
> zio_buf_6144                6144      0      36    221184    5274922  0
> zio_buf_7168                7168      0      16    114688    3350804  0
> zio_buf_8192                8192      0      11     90112    9131264  0
> zio_buf_10240              10240      0      12    122880    2268700  0
> zio_buf_12288              12288      0       8     98304    3258896  0
> zio_buf_14336              14336      0      60    860160   15853089  0
> zio_buf_16384              16384 142762  142793 2339520512  74889652  0
> zio_buf_20480              20480      0       6    122880    1299564  0
> zio_buf_24576              24576      0       5    122880    1063597  0
> zio_buf_28672              28672      0       6    172032     712545  0
> zio_buf_32768              32768      0       4    131072    1339604  0
> zio_buf_40960              40960      0       6    245760    1736172  0
> zio_buf_49152              49152      0       4    196608     609853  0
> zio_buf_57344              57344      0       5    286720     428139  0
> zio_buf_65536              65536    520     522  34209792    8839788  0
> zio_buf_73728              73728      0       5    368640     284979  0
> zio_buf_81920              81920      0       5    409600     133392  0
> zio_buf_90112              90112      0       6    540672      96787  0
> zio_buf_98304              98304      0       4    393216     133942  0
> zio_buf_106496            106496      0       5    532480      91769  0
> zio_buf_114688            114688      0       5    573440      72130  0
> zio_buf_122880            122880      0       5    614400      52151  0
> zio_buf_131072            131072    100     107  14024704    7326248  0
> dmu_buf_impl_t               328 2531066 2531232 863993856  237052643  0
> dnode_t                      648 2395209 2395212 1635131392  83304588  0
> arc_buf_hdr_t                128 142786  390852  50823168  155745359  0
> arc_buf_t                     40 142786  347333  14016512  160502001  0
> zil_lwb_cache                208     28     468     98304   30507668  0
> zfs_znode_cache              192 2388224 2388246 465821696  83149771  0
...

Because of all of those vnodes, we are seeing a lot of extra memory being used by ZFS:
    - about 1.5GB for the dnodes
    - another 800MB for the dbufs
    - plus 1.3GB for the "bonus buffers" (not accounted for in the arc)
    - plus about 400MB for the znodes
This totals to another 4GB, plus the .6GB held in the vnodes themselves.

The question is who is holding these vnodes in memory... Could you do a

	> ::dnlc!wc

and let me know what it comes back with?

-Mark
Robert Milkowski
2006-Sep-06 18:36 UTC
[zfs-discuss] Re: Re: ZFS forces system to paging to the point it is
> ::dnlc!wc
1048545 3145811 76522461
>

This message posted from opensolaris.org
Mark Maybee
2006-Sep-06 20:17 UTC
[zfs-discuss] Re: Re: ZFS forces system to paging to the point it is
Robert Milkowski wrote:
>> ::dnlc!wc
>
> 1048545 3145811 76522461
>

Well, that explains half your problem... and maybe all of it:

We have a thread that *should* be trying to free up these entries in the DNLC, however it appears to be blocked:

stack pointer for thread 2a10014fcc0: 2a10014edd1
[ 000002a10014edd1 turnstile_block+0x5e8() ]
  000002a10014ee81 mutex_vector_enter+0x424(181bad0, fffe9bac45d5c95c, 6000e685bc8, 30001119340, 30001119340, 0)
  000002a10014ef31 zfs_zinactive+0x24(300a9196f00, 6000e685bc8, 6000e685a58, 6000e685980, 300a9196f28, 300c11d7b40)
  000002a10014efe1 zfs_inactive+0x168(6000e6859d8, 60001001ee8, 2a10014f948, 2, 0, 300a9196f00)
  000002a10014f091 fop_inactive+0x50(300c11d7b40, 60001001ee8, 2000, 60004511f00, 1, 7b763864)
  000002a10014f151 do_dnlc_reduce_cache+0x210(0, 1853da0, 1863c70, 6000175c868, 18ab838, 60cbd)
  000002a10014f201 taskq_d_thread+0x88(60003f9f4a0, 300002878c0, 6000100b520, 0, 1636a70535248, 60003f9f4d0)
  000002a10014f2d1 thread_start+4(60003f9f4a0, 0, 60000057b48, ffbffc6f00000000, 4558505f53544445, 5252000000000000)

We are trying to obtain a mutex that is currently held by another thread trying to get memory.

I suspect that the rest of the active vnodes are probably being held by the arc, as a side-effect of the fact that it's holding onto the associated dnodes (and it's holding onto these dnodes because they are in the same block as some still-dnlc-referenced vnode/dnode). Note that almost all of the arc memory is tied up in the MRU cache:

> ARC_mru::print
{
    list = 0x80
    lsize = 0x200
    size = 0x8a030400
    hits = 0x16dcadb
    mtx = {
        _opaque = [ 0 ]
    }
}

Almost none of this is freeable, so the arc cannot shrink in size.

-Mark
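For reference, the owner of that mutex can be chased down in the same dump. A sketch, assuming the dump is loaded as in the earlier messages; 6000e685bc8 is the mutex address visible in the mutex_vector_enter()/zfs_zinactive() frames above, and the exact ::mutex output format varies by release:

    > 6000e685bc8::mutex
    > <owner-thread-address>::findstack -v

::mutex reports the thread that owns an adaptive mutex; ::findstack -v on that thread should show whether it is blocked in the kernel memory allocator (KM_SLEEP), as suspected.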
Mark Maybee
2006-Sep-06 22:32 UTC
[zfs-discuss] Re: Re: ZFS forces system to paging to the point it is
Robert Milkowski wrote:
>
> On Wed, 6 Sep 2006, Mark Maybee wrote:
>
>> Robert Milkowski wrote:
>>
>>>>> ::dnlc!wc
>>>>
>>>> 1048545 3145811 76522461
>>>>
>>> Well, that explains half your problem... and maybe all of it:
>
> After I reduced the vdev prefetch from 64K to 8K, for the last few hours
> the system has been working properly without the workaround and free
> memory stays at about 1GB.
>
> Reducing the vdev prefetch to 8K also reduced read throughput 10x.
>
> I believe this is somehow related - maybe the vdev cache was so aggressive
> (I got 40-100MB/s of reads) and consuming memory so fast that the thread
> which is supposed to regain some memory couldn't keep up?

I suppose, although the data volume doesn't seem that high... maybe you are just operating at the hairy edge here. Anyway, I have filed a bug to track this issue:

6467963 do_dnlc_reduce_cache() can be blocked by ZFS_OBJ_HOLD_ENTER()

-Mark
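Robert does not say which variable he actually lowered. In the ZFS source of this vintage the vdev-level read inflation is controlled by the vdev cache tunables, and zfs_vdev_cache_bshift (default 16, i.e. 64K) is the one that matches the numbers he quotes, so treat the following as an assumption about his setup rather than his exact change. On a live kernel such a change could be applied and checked with:

    # assumes zfs_vdev_cache_bshift exists in this build; 2^13 = 8K
    echo 'zfs_vdev_cache_bshift/W 0t13' | mdb -kw
    echo 'zfs_vdev_cache_bshift/D' | mdb -k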
Jürgen Keil
2006-Sep-07 11:30 UTC
[zfs-discuss] Re: Re: Re: ZFS forces system to paging to the point it is
> We are trying to obtain a mutex that is currently held
> by another thread trying to get memory.

Hmm, this reminds me a bit of the zvol swap hang I got some time ago:

http://www.opensolaris.org/jive/thread.jspa?threadID=11956&tstart=150

I guess if the other thread is stuck trying to get memory, then it is allocating the memory with KM_SLEEP while holding a mutex?

This message posted from opensolaris.org
Philippe Magerus - SUN Service - Luxembourg
2006-Sep-07 13:05 UTC
[zfs-discuss] Re: Re: ZFS forces system to paging to the point it is
Hi,

This same dump has now shown up as a P1 pts-kernel escalation of which I am the lucky owner.

I noticed that arc.size is far smaller than the sum of all the zio... caches. This of course might be caused by:

6456888 zpool scrubbing leads to memory exhaustion and system hang

except that there is no resilvering going on. There is actually no mirroring/raidz configured. I have not created a new bug for this yet.

One of the shares has ~40M files, and this might explain the huge number of accessed znodes.

As for the huge number of cached znodes, dnlc size is not the only issue, as we have 1M dnlc entries for 2.3M znodes (dnodes and vnodes): there should be a tunable for the maximum number of cached znodes/dnodes, as there is in other file systems. The default should not go beyond ncsize. Created:

6468211 node/dnode caches should not be allowed to grow without bounds

As for arc.c_max, it should be settable via /etc/system. Created:

6468214 arc.c_max should be tunable

And last but not least, since the dump has the same size as system RAM, the zio... caches are dumped as well. On big systems this will lead to dump failures, hampering the resolution of non-ZFS issues as well as ZFS ones. The zio... caches should be skipped by the dump process.

Philippe

Mark Maybee wrote:
> Robert Milkowski wrote:
>>
>> On Wed, 6 Sep 2006, Mark Maybee wrote:
>>
>>> Robert Milkowski wrote:
>>>
>>>>> ::dnlc!wc
>>>>
>>>> 1048545 3145811 76522461
>>>>
>>> Well, that explains half your problem... and maybe all of it:
>>
>> After I reduced the vdev prefetch from 64K to 8K, for the last few hours
>> the system has been working properly without the workaround and free
>> memory stays at about 1GB.
>>
>> Reducing the vdev prefetch to 8K also reduced read throughput 10x.
>>
>> I believe this is somehow related - maybe the vdev cache was so aggressive
>> (I got 40-100MB/s of reads) and consuming memory so fast that the thread
>> which is supposed to regain some memory couldn't keep up?
>
> I suppose, although the data volume doesn't seem that high... maybe you
> are just operating at the hairy edge here. Anyway, I have filed a bug
> to track this issue:
>
> 6467963 do_dnlc_reduce_cache() can be blocked by ZFS_OBJ_HOLD_ENTER()
>
> -Mark

-- 
_____________________________________________________________________
Philippe Magerus        philippe.magerus at sun.com
PTS Emea Kernel         (+352) 49.11.33.73
http://luxweb.luxembourg/~philippm
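A note for later readers: the arc.c_max RFE above was eventually addressed; newer OpenSolaris builds and later Solaris 10 updates expose a zfs_arc_max variable that caps the ARC and can be set from /etc/system. It does not exist in the Generic_118833-20 kernel discussed in this thread, so the following is only a sketch for releases that have the tunable:

    * /etc/system -- cap the ARC at 1GB (value in bytes); requires a reboot
    set zfs:zfs_arc_max = 0x40000000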
Mark Maybee
2006-Sep-07 13:40 UTC
[zfs-discuss] Re: Re: Re: ZFS forces system to paging to the point it is
Jürgen Keil wrote:
>> We are trying to obtain a mutex that is currently held
>> by another thread trying to get memory.
>
> Hmm, this reminds me a bit of the zvol swap hang I got some time ago:
>
> http://www.opensolaris.org/jive/thread.jspa?threadID=11956&tstart=150
>
> I guess if the other thread is stuck trying to get memory, then it is
> allocating the memory with KM_SLEEP while holding a mutex?

Yup, this is essentially another instance of this problem.

-Mark
Robert Milkowski
2006-Sep-07 14:50 UTC
[zfs-discuss] Re: Re: ZFS forces system to paging to the point it is
Hello Mark,

Thursday, September 7, 2006, 12:32:32 AM, you wrote:

MM> Robert Milkowski wrote:
>> On Wed, 6 Sep 2006, Mark Maybee wrote:
>>
>>> Robert Milkowski wrote:
>>>
>>>>> ::dnlc!wc
>>>>
>>>> 1048545 3145811 76522461
>>>>
>>> Well, that explains half your problem... and maybe all of it:
>>
>> After I reduced the vdev prefetch from 64K to 8K, for the last few hours
>> the system was working properly without the workaround and free memory
>> stayed at about 1GB.
>>
>> Reducing the vdev prefetch to 8K also reduced read throughput 10x.
>>
>> I believe this is somehow related - maybe the vdev cache was so aggressive
>> (I got 40-100MB/s of reads) and consuming memory so fast that the thread
>> which is supposed to regain some memory couldn't keep up?

MM> I suppose, although the data volume doesn't seem that high... maybe you
MM> are just operating at the hairy edge here. Anyway, I have filed a bug
MM> to track this issue:

MM> 6467963 do_dnlc_reduce_cache() can be blocked by ZFS_OBJ_HOLD_ENTER()

Well, it was working so far, and then in less than 5 minutes free memory went to "0" and the system was so unresponsive I couldn't log in.

So I guess exporting/importing the pool, in addition to lowering the vdev prefetch to 8K, is needed here. Hope it will stay longer that way. :(

-- 
Best regards,
 Robert                          mailto:rmilkowski at task.gda.pl
                                 http://milek.blogspot.com
Matthew Ahrens
2006-Sep-07 19:00 UTC
[zfs-discuss] Re: Re: ZFS forces system to paging to the point it is
Philippe Magerus - SUN Service - Luxembourg wrote:
> there should be a tunable for max number of cached znodes/dnodes as
> there is in other file systems.
...
> As for arc.c_max, it should be settable via /etc/system.

No, there should not be tunables. The system should simply work. We need to diagnose down to the root cause of this problem, not simply introduce workarounds.

> zio... caches should be skipped by the dump process.

Yep, there is a bug on this:

4894692 "caching data in heap inflates crash dump"

--matt
Robert Milkowski
2006-Sep-12 20:19 UTC
[zfs-discuss] Re: Re: ZFS forces system to paging to the point it is
Hello Philippe,

It was recommended to lower ncsize, and I did (to the default, ~128K). So far it has been working ok for the last few days, staying at about 1GB of free RAM (fluctuating between 900MB and 1.4GB).

Do you think this is a long-term solution, or could the problem surface again with more load and more data, even with the current ncsize value?

-- 
Best regards,
 Robert                          mailto:rmilkowski at task.gda.pl
                                 http://milek.blogspot.com
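For reference, ncsize is set from /etc/system and takes effect at boot. A sketch of the kind of change described above (the message only says "default ~128K", so the value below is illustrative):

    * /etc/system -- pin the DNLC at roughly 128K entries; requires a reboot
    set ncsize = 0x20000

By default ncsize is derived from maxusers at boot, which is why a large-memory NFS server can otherwise end up with a very large DNLC unless it is capped explicitly.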
Mark Maybee
2006-Sep-13 14:53 UTC
[zfs-discuss] Re: Re: ZFS forces system to paging to the point it is
Robert Milkowski wrote:
> Hello Philippe,
>
> It was recommended to lower ncsize, and I did (to the default, ~128K).
> So far it has been working ok for the last few days, staying at about
> 1GB of free RAM (fluctuating between 900MB and 1.4GB).
>
> Do you think this is a long-term solution, or could the problem surface
> again with more load and more data, even with the current ncsize value?

Robert,

I don't think this should be impacted too much by load/data; as long as the DNLC is able to evict, you should be in good shape. We are still working on a fix for the root cause of this issue, however.

-Mark