Hi,

I'm testing out ceph_vms vs a cephfs mount with a cifs export.

I currently have 3 active ceph mds servers to maximise throughput and
when I have configured a cephfs mount with a cifs export, I'm getting
reasonable benchmark results.

However, when I tried some benchmarking with the ceph_vms module, I
only got a 3rd of the comparable write throughput.

I'm just wondering if this is expected, or if there is an obvious
configuration setup that I'm missing.

Kind regards,
Tom
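P.S. For context, the cephfs-mount-plus-cifs-export case is just a
kernel CephFS mount re-shared through a plain Samba share, roughly
along these lines (the monitor address, client name and paths below are
placeholders, not my actual setup):

  # kernel CephFS mount
  mount -t ceph 10.0.0.1:6789:/ /mnt/cephfs \
      -o name=samba,secretfile=/etc/ceph/samba.secret

  # smb.conf share on top of the kernel mount
  [cephfs]
      path = /mnt/cephfs
      read only = no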
Hi Tom,

On Wed, 23 May 2018 09:15:15 +0200, Thomas Bennett via samba wrote:

> Hi,
>
> I'm testing out ceph_vms vs a cephfs mount with a cifs export.

I take it you mean the Ceph VFS module (vfs_ceph)?

> I currently have 3 active ceph mds servers to maximise throughput and
> when I have configured a cephfs mount with a cifs export, I'm getting
> reasonable benchmark results.

Keep in mind that increasing the number of active MDS servers doesn't
necessarily mean that you'll see better performance, especially if the
client workload is spread across the full filesystem tree, rather than
isolated into the corresponding sharded MDS subdirectories.

> However, when I tried some benchmarking with the ceph_vms module, I
> only got a 3rd of the comparable write throughput.
>
> I'm just wondering if this is expected, or if there is an obvious
> configuration setup that I'm missing.

My initial assumption would be that the improved CephFS kernel mount
performance is mostly due to the Linux page cache, which is utilised
for buffered client I/O.

Cheers, David
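P.S. In case it's useful: spreading load across multiple active MDS
ranks generally requires the workload to be split across separate
directory subtrees, and subtrees can be pinned to specific ranks via an
extended attribute. A rough sketch (the directory names and rank
numbers below are only an example, not taken from your setup):

  # pin two top-level share directories to different MDS ranks
  setfattr -n ceph.dir.pin -v 0 /mnt/cephfs/shareA
  setfattr -n ceph.dir.pin -v 1 /mnt/cephfs/shareB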
On Wed, May 23, 2018 at 02:13:30PM +0200, David Disseldorp via samba wrote:
> Hi Tom,
>
> On Wed, 23 May 2018 09:15:15 +0200, Thomas Bennett via samba wrote:
>
> > Hi,
> >
> > I'm testing out ceph_vms vs a cephfs mount with a cifs export.
>
> I take it you mean the Ceph VFS module (vfs_ceph)?
>
> > I currently have 3 active ceph mds servers to maximise throughput and
> > when I have configured a cephfs mount with a cifs export, I'm getting
> > reasonable benchmark results.
>
> Keep in mind that increasing the number of active MDS servers doesn't
> necessarily mean that you'll see better performance, especially if the
> client workload is spread across the full filesystem tree, rather than
> isolated into the corresponding sharded MDS subdirectories.
>
> > However, when I tried some benchmarking with the ceph_vms module, I
> > only got a 3rd of the comparable write throughput.
> >
> > I'm just wondering if this is expected, or if there is an obvious
> > configuration setup that I'm missing.
>
> My initial assumption would be that the improved CephFS kernel mount
> performance is mostly due to the Linux page cache, which is utilised
> for buffered client I/O.

Interesting. You might be able to improve the Samba vfs_ceph
performance by tuning the Samba write cache size (although for SMB2
leases this isn't used).
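Something along these lines in the share definition, purely as a sketch
to experiment with (the share name, vfs_ceph options and the 256k value
are placeholders, not a recommendation):

  [cephfs_vfs]
      path = /
      vfs objects = ceph
      ceph:config_file = /etc/ceph/ceph.conf
      ceph:user_id = samba
      kernel share modes = no
      write cache size = 262144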
Hi David,

> > Hi,
> >
> > I'm testing out ceph_vms vs a cephfs mount with a cifs export.
>
> I take it you mean the Ceph VFS module (vfs_ceph)?

Yes. Keyboard slip :)

> > I currently have 3 active ceph mds servers to maximise throughput and
> > when I have configured a cephfs mount with a cifs export, I'm getting
> > reasonable benchmark results.
>
> Keep in mind that increasing the number of active MDS servers doesn't
> necessarily mean that you'll see better performance, especially if the
> client workload is spread across the full filesystem tree, rather than
> isolated into the corresponding sharded MDS subdirectories.

Thanks. I've not figured out mds ranks yet - my naive assumption was
that they were sharing the whole workload, but I see that's not the
case.

I'm doing some pretty simple benchmarking with iozone and sysbench
fileio workloads writing to a mounted directory. However, the benchmark
workloads were identical for each test scenario, so I was expecting, at
a minimum, equivalent performance.

> > However, when I tried some benchmarking with the ceph_vms module, I
> > only got a 3rd of the comparable write throughput.
> >
> > I'm just wondering if this is expected, or if there is an obvious
> > configuration setup that I'm missing.
>
> My initial assumption would be that the improved CephFS kernel mount
> performance is mostly due to the Linux page cache, which is utilised
> for buffered client I/O.

I'm writing large amounts of data (double my machine's memory) onto the
mount point, so I'm expecting caching to have minimal effect on my
testing. I also flush the page cache before performing read tests, just
to be sure.

Thanks for the feedback.

Cheers, Tom
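P.S. For completeness, I'm clearing the Linux page cache between runs
with the usual drop_caches knob, and the iozone runs look roughly like
the line below (the record size, file size and thread count here are
just an illustration, not my exact parameters):

  # drop page cache, dentries and inodes before read tests
  sync
  echo 3 > /proc/sys/vm/drop_caches

  # throughput run against the mounted share (write then read)
  cd /mnt/share && iozone -i 0 -i 1 -r 1m -s 16g -t 4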