Hi all,

Please feel free to direct me to a list that is more suitable.

We are trying to set up a fileserver solution for a web application that
we are building. The fileserver is running FreeBSD 10.2 and ZFS. Files
are written over CIFS, with Samba running on the fileserver host.

However, we are seeing an exponential decrease in write performance as
the number of files in a directory grows: at around ~6000 files the share
becomes unusable, and the time to write a file has gone from a fraction
of a second to ten seconds.

We ran the same setup on a Linux machine with an ext4 filesystem, which
did NOT suffer from this performance degradation.

Our first reaction was to remove Samba from the equation. I ran a test
where I copied a folder with a large number of files, and then ran the
same test with that folder packed as a single zip.

So,

cp -r folder_with_lots_of_files copy_of_folder_with_lots_of_files

gives an iostat output that looks like this for the zpool
(zpool iostat frosting 1):

               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
frosting    48.5G   299G      2      0   267K  8.56K
frosting    48.5G   299G    401      0  50.2M      0
frosting    48.6G   299G    384     94  47.9M  7.79M
frosting    48.6G   299G    471      0  58.9M      0
frosting    48.6G   299G    492      0  61.4M      0
frosting    48.6G   299G    393      0  49.0M      0
frosting    48.6G   299G    426      0  53.3M      0
frosting    48.6G   299G    421    147  52.5M  9.71M
frosting    48.6G   299G    507      0  63.4M      0
frosting    48.6G   299G    376      0  47.0M      0
frosting    48.6G   299G    447      0  55.8M      0
frosting    48.6G   299G    433     13  54.2M  1.62M
frosting    48.6G   299G    431     85  53.8M  6.95M
frosting    48.6G   299G    288      0  36.1M      0
frosting    48.6G   299G    329      0  41.2M      0
frosting    48.6G   299G    340      0  42.4M      0
frosting    48.6G   299G    398      9  49.8M  1.14M
frosting    48.6G   299G    324    126  40.4M  7.08M
frosting    48.6G   299G    391      0  48.9M      0
frosting    48.6G   299G    261      0  32.5M      0
frosting    48.6G   299G    314      0  39.3M      0
frosting    48.6G   299G    317      0  39.6M      0
frosting    48.6G   299G    346     79  43.3M  6.36M

Are these "holes" in write speed normal? This is the exact symptom we
are getting when the network writes start to be slow.

If I instead copy a single large file, I get this I/O behavior:

               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
frosting    50.1G   298G      7      0   953K  34.5K
frosting    50.1G   298G    224    215  27.9M  26.8M
frosting    50.2G   298G    224    364  27.8M  38.6M
frosting    50.2G   298G    225     57  27.9M  7.23M
frosting    50.3G   298G    173    477  21.5M  56.1M
frosting    50.3G   298G    219      0  27.3M      0
frosting    50.3G   298G    265    353  33.0M  44.0M
frosting    50.3G   298G    294    172  36.6M  18.3M
frosting    50.3G   298G    237    436  29.4M  54.2M
frosting    50.4G   298G    257    108  31.9M  9.69M
frosting    50.4G   298G    211    382  26.1M  47.5M
frosting    50.4G   298G    305    162  38.0M  16.4M
frosting    50.4G   298G    253    369  31.5M  45.9M
frosting    50.5G   297G    176    177  21.8M  18.0M
frosting    50.5G   297G    197    167  24.6M  20.9M
frosting    50.6G   297G    248    375  30.9M  42.8M
frosting    50.6G   297G    322    605  39.9M  68.0M
frosting    50.6G   297G    164     36  20.4M  1.57M
frosting    50.6G   297G    259     96  32.2M  12.0M

which looks more like what I would expect, and is also similar to the
I/O behavior we get when copying the folder with many files on an ext4
filesystem.
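To watch the per-file write time as the directory fills up, with Samba
out of the picture, a loop along these lines can be used (a rough sketch:
/frosting/testdir is a hypothetical path on the pool, and the file size
and counts are arbitrary):

#!/bin/sh
# Create 10000 small files and time each batch of 100; on a healthy
# filesystem the batch times should stay roughly flat as the
# directory grows.
DIR=/frosting/testdir
mkdir -p "$DIR"
i=0
while [ "$i" -lt 10000 ]; do
    start=$(date +%s)
    j=0
    while [ "$j" -lt 100 ]; do
        # one 64 KiB file per iteration
        dd if=/dev/zero of="$DIR/file.$i.$j" bs=64k count=1 2>/dev/null
        j=$((j + 1))
    done
    echo "files $i-$((i + 99)) took $(($(date +%s) - start))s"
    i=$((i + 100))
done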
Any help or tips for getting this to work would be highly appreciated!

Cheers,
Albert Cervin

On 24 November 2015 at 14:00, Albert Cervin <albert at acervin.com> wrote:
> However, we are seeing an exponential decrease in performance to write
> to the file server when the number of files in the directory grows
> [...]

Make sure atime is off, for starters, on the filesystem.
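Something along these lines should do it (assuming the files land in the
pool root dataset, frosting, seen in the iostat output; note that with
atime=off, last-access times will no longer be updated):

zfs get atime frosting       # check the current setting
zfs set atime=off frosting   # stop reads from triggering metadata writes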
On 11/24/2015 9:00 AM, Albert Cervin wrote:
> However, we are seeing an exponential decrease in performance to write
> to the file server when the number of files in the directory grows
> (when it goes up to ~6000 files it becomes unusable and the write time
> has gone from a fraction of a second to ten seconds).

Have you tried adjusting vfs.zfs.arc_meta_limit to a higher value? Is
there any memory pressure on your server? Have a look at this thread:

https://lists.freebsd.org/pipermail/freebsd-fs/2013-February/016492.html
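For example, to inspect and raise it (the 4 GB value is only an
illustration, size it to your RAM; if your 10.x build exposes it as a
boot-time tunable rather than a writable sysctl, set it in
/boot/loader.conf instead):

sysctl vfs.zfs.arc_meta_limit              # current limit, in bytes
sysctl vfs.zfs.arc_meta_used               # how much is actually in use
sysctl vfs.zfs.arc_meta_limit=4294967296   # raise to 4 GB at runtime

# or persistently, in /boot/loader.conf:
# vfs.zfs.arc_meta_limit="4294967296"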
        ---Mike

--
-------------------
Mike Tancsa, tel +1 519 651 3400
Sentex Communications, mike at sentex.net
Providing Internet services since 1994  www.sentex.net
Cambridge, Ontario Canada  http://www.tancsa.com/

On Tue, Nov 24, 2015 at 8:00 AM, Albert Cervin <albert at acervin.com> wrote:
> We ran the same setup on a Linux machine with an ext4 file system which
> did NOT suffer from this performance degradation.

I should hope not; an ext4 vs. ZFS comparison isn't fair to either one.

> Are these "holes" in write speed normal? This is the exact symptom we
> are getting when the network writes start to be slow.

Totally normal. You'll want to reference:

https://wiki.freebsd.org/ZFSTuningGuide

In particular, for that issue see vfs.zfs.txg.timeout and the tuning
related to NFS. Performance is also heavily dependent on pool structure
and I/O characteristics. For example, a pool of three 2-disk mirrors is
in general going to be much faster than a single 6-disk raidz2.

--
Adam
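A sketch of both of those points (the timeout value and the disk names
are purely illustrative, not recommendations):

sysctl vfs.zfs.txg.timeout     # seconds between txg commits, default 5
sysctl vfs.zfs.txg.timeout=2   # flush in smaller, more frequent bursts

# persistently, in /etc/sysctl.conf:
# vfs.zfs.txg.timeout=2

# Pool structure: three striped 2-disk mirrors give roughly three
# vdevs' worth of write IOPS ...
zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5

# ... while a single 6-disk raidz2 vdev writes at roughly the IOPS of
# one disk.
zpool create tank raidz2 da0 da1 da2 da3 da4 da5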