JeffleXu
2021-Aug-17 13:08 UTC
[Virtio-fs] [PATCH v4 0/8] fuse,virtiofs: support per-file DAX
On 8/17/21 6:09 PM, Miklos Szeredi wrote:
> On Tue, 17 Aug 2021 at 11:32, Dr. David Alan Gilbert
> <dgilbert at redhat.com> wrote:
>>
>> * Miklos Szeredi (miklos at szeredi.hu) wrote:
>>> On Tue, 17 Aug 2021 at 04:22, Jeffle Xu <jefflexu at linux.alibaba.com> wrote:
>>>>
>>>> This patchset adds support of per-file DAX for virtiofs, which is
>>>> inspired by Ira Weiny's work on ext4[1] and xfs[2].
>>>
>>> Can you please explain the background of this change in detail?
>>>
>>> Why would an admin want to enable DAX for a particular virtiofs file
>>> and not for others?
>>
>> Where we're contending on virtiofs dax cache size it makes a lot of
>> sense; it's quite expensive for us to map something into the cache
>> (especially if we push something else out), so selectively DAXing files
>> that are expected to be hot could help reduce cache churn.
> 
> If this is a performance issue, it should be fixed in a way that
> doesn't require hand tuning like you suggest, I think.
> 
> I'm not sure what the ext4/xfs case for per-file DAX is. Maybe that
> can help understand the virtiofs case as well.
> 

Some hints why ext4/xfs support per-file DAX can be found in [1] and [2].

"Boaz Harrosh wondered why someone might want to turn DAX off for a
persistent memory device. Hellwig said that the performance "could
suck"; Williams noted that the page cache could be useful for some
applications as well. Jan Kara pointed out that reads from persistent
memory are close to DRAM speed, but that writes are not; the page cache
could be helpful for frequent writes. Applications need to change to
fully take advantage of DAX, Williams said; part of the promise of
adding a flag is that users can do DAX on smaller granularities than a
full filesystem."

In summary, page cache is preferable in some cases, and thus a more
fine-grained way of DAX control is needed.

As for virtiofs, Dr. David Alan Gilbert has mentioned that various files
may compete for the limited DAX window resource.
Besides, supporting DAX for small files can be expensive. Small files
can consume the DAX window resource rapidly, and if small files are
accessed only once, the cost of mmap/munmap on the host cannot be
ignored.

[1] https://lore.kernel.org/lkml/20200428002142.404144-1-ira.weiny at intel.com/
[2] https://lwn.net/Articles/787973/

-- 
Thanks,
Jeffle
Vivek Goyal
2021-Aug-17 14:54 UTC
[Virtio-fs] [PATCH v4 0/8] fuse,virtiofs: support per-file DAX
On Tue, Aug 17, 2021 at 09:08:35PM +0800, JeffleXu wrote:
> 
> 
> On 8/17/21 6:09 PM, Miklos Szeredi wrote:
> > On Tue, 17 Aug 2021 at 11:32, Dr. David Alan Gilbert
> > <dgilbert at redhat.com> wrote:
> >>
> >> * Miklos Szeredi (miklos at szeredi.hu) wrote:
> >>> On Tue, 17 Aug 2021 at 04:22, Jeffle Xu <jefflexu at linux.alibaba.com> wrote:
> >>>>
> >>>> This patchset adds support of per-file DAX for virtiofs, which is
> >>>> inspired by Ira Weiny's work on ext4[1] and xfs[2].
> >>>
> >>> Can you please explain the background of this change in detail?
> >>>
> >>> Why would an admin want to enable DAX for a particular virtiofs file
> >>> and not for others?
> >>
> >> Where we're contending on virtiofs dax cache size it makes a lot of
> >> sense; it's quite expensive for us to map something into the cache
> >> (especially if we push something else out), so selectively DAXing files
> >> that are expected to be hot could help reduce cache churn.
> > 
> > If this is a performance issue, it should be fixed in a way that
> > doesn't require hand tuning like you suggest, I think.
> > 
> > I'm not sure what the ext4/xfs case for per-file DAX is. Maybe that
> > can help understand the virtiofs case as well.
> > 
> 
> Some hints why ext4/xfs support per-file DAX can be found [1] and [2].
> 
> "Boaz Harrosh wondered why someone might want to turn DAX off for a
> persistent memory device. Hellwig said that the performance "could
> suck"; Williams noted that the page cache could be useful for some
> applications as well. Jan Kara pointed out that reads from persistent
> memory are close to DRAM speed, but that writes are not; the page cache
> could be helpful for frequent writes. Applications need to change to
> fully take advantage of DAX, Williams said; part of the promise of
> adding a flag is that users can do DAX on smaller granularities than a
> full filesystem."
> 
> In summary, page cache is preferable in some cases, and thus more fine
> grained way of DAX control is needed.

In case of virtiofs, we are using the page cache on the host. So this
probably is not a factor for us. Writes will go into the page cache of
the host.

> 
> As for virtiofs, Dr. David Alan Gilbert has mentioned that various files
> may compete for limited DAX window resource.
> 
> Besides, supporting DAX for small files can be expensive. Small files
> can consume DAX window resource rapidly, and if small files are accessed
> only once, the cost of mmap/munmap on host can not be ignored.

W.r.t. access pattern, the same applies to large files as well. If a
section of a large file is accessed only once, it will consume a dax
window too and will have to be reclaimed. Dax in virtiofs provides a
speed gain only if we map a file once and access it multiple times. If
that pattern does not hold, then dax does not seem to provide speed
gains and in fact might be slower than non-dax.

So if there is a pattern where we know some files are accessed
repeatedly while others are not, then enabling/disabling dax selectively
will make sense. The question is how many workloads really know that,
and how will you make that decision. Do you have any data to back that
up?

W.r.t. small files, is that a real concern? If such a file is accessed
multiple times, then we will still see the speed gain. The only downside
is a little wastage of resources, because our minimum dax mapping
granularity is 2MB. I am wondering whether we could handle that by
supporting other dax mapping granularities as well, say 256K, and let
users choose.

Thanks
Vivek

> 
> 
> [1]
> https://lore.kernel.org/lkml/20200428002142.404144-1-ira.weiny at intel.com/
> [2] https://lwn.net/Articles/787973/
> 
> -- 
> Thanks,
> Jeffle
> 