Hello,

I have a FreeBSD 10.1 system with a raidz2 ZFS pool and two SSDs for
L2ARC. It is running '10.1-STABLE FreeBSD 10.1-STABLE #0 r278805'.
I'm currently running tests before it goes to production, but I have the
following issue: after a while the L2ARC devices report 16.0E of free
space and start 'consuming' more than they can hold:

  cache            -      -      -      -      -      -
    gpt/l2arc1   107G  16.0E      0      2      0  92.7K
    gpt/l2arc2  68.3G  16.0E      0      1      0  60.8K

It ran fine for a while, with data being evicted from the cache so it
could be filled with newer data (free space always stayed around
200-300 MB).

I've read about similar issues that should have been fixed by various
commits, but I'm running the latest stable 10.1 kernel right now. (One
of the most recent similar reports is
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=197164 .) Another
similar report at FreeNAS, https://bugs.freenas.org/issues/5347 ,
suggested a hardware issue, but I have two servers that show the same
problem; one has a Crucial M500 drive and the other an M550. Both have
a 64G partition for L2ARC.

What is really going on here?

Regards,

Frank de Bot

IIRC this was fixed by r273060; if you remove your cache device and then
add it back, I think you should be good.

On 16/02/2015 00:23, Frank de Bot (lists) wrote:
> Hello,
>
> I have a FreeBSD 10.1 system with a raidz2 ZFS pool and two SSDs for
> L2ARC. It is running '10.1-STABLE FreeBSD 10.1-STABLE #0 r278805'.
> [...]
> After a while the L2ARC devices report 16.0E of free space and start
> 'consuming' more than they can hold.
> [...]
> What is really going on here?
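
For what it's worth, this is roughly what I'd run. The pool name "tank"
is only a placeholder here; substitute your own pool name and the actual
cache device names:

  # "tank" is a placeholder pool name; cache devices as shown above.
  # Removing cache vdevs only discards cached data, nothing on the pool.
  zpool remove tank gpt/l2arc1 gpt/l2arc2
  zpool status tank              # confirm the cache vdevs are gone
  zpool add tank cache gpt/l2arc1 gpt/l2arc2
  zpool iostat -v tank 10        # watch whether alloc/free stay sane
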
Frank de Bot (lists) wrote:
> Hello,
>
> I have a FreeBSD 10.1 system with a raidz2 ZFS pool and two SSDs for
> L2ARC. It is running '10.1-STABLE FreeBSD 10.1-STABLE #0 r278805'.
> [...]
> After a while the L2ARC devices report 16.0E of free space and start
> 'consuming' more than they can hold.

I've tried to 'debug' this with dtrace and found a few things:

- l2arc_write_buffers sometimes causes vdev->vdev_stat.vs_alloc to grow
  larger than vdev->vdev_asize.
- l2arc_dev->l2ad_end is larger than vdev->vdev_asize.
- At some point l2arc_evict isn't doing anything, even though
  l2arc_dev->l2ad_evict is higher than l2arc_dev->l2ad_hand and taddr
  matches l2arc_dev->l2ad_evict, so I would expect it to evict that
  space. l2arc_write_buffers then just keeps writing because there seems
  to be room enough; I guess that is because vdev_asize - vs_alloc has
  gone negative, which shows up as 16.0E of free space.

It could be that I'm assuming or interpreting things incorrectly.
Please let me know.

Frank de Bot
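
For reference, this is roughly the kind of probe I've been using to watch
those counters. It is only a rough sketch: it assumes the kernel and zfs
module carry CTF info, that l2arc_write_buffers() has not been inlined so
the fbt probe exists, and that its second argument is still the
l2arc_dev_t pointer on this kernel (if args[] isn't typed here, you would
have to cast arg1 by hand instead):

  dtrace -n '
  fbt::l2arc_write_buffers:entry
  {
          /* assumption: args[1] is the l2arc_dev_t * for this cache dev */
          printf("asize=%d alloc=%d hand=%d evict=%d end=%d",
              args[1]->l2ad_vdev->vdev_asize,
              args[1]->l2ad_vdev->vdev_stat.vs_alloc,
              args[1]->l2ad_hand,
              args[1]->l2ad_evict,
              args[1]->l2ad_end);
  }'

The idea is to catch the moment vs_alloc creeps past vdev_asize, which is
when the 16.0E free value shows up in zpool iostat.
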