Displaying results from an estimated 7000 matches similar to: "Samba async performance - bottleneck or bug?"

2019 Jul 19
3
Samba async performance - bottleneck or bug?
Hi David, Thanks for your reply.

> Hmm, so this "async" (sync=disabled?) ZFS tunable means that it completely ignores O_SYNC and O_DIRECT and runs the entire workload in RAM? I know nothing about ZFS, but that sounds like a mighty dangerous setting for production deployments.

Yes, you are correct - sync writes will flush to RAM, just like async writes, and will stay in RAM for
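For context, the behaviour described here is controlled per dataset by the ZFS sync property. A minimal sketch of inspecting and toggling it (the pool/dataset name tank/share is hypothetical):

    # Show the current sync policy for the dataset
    zfs get sync tank/share
    # Treat sync requests as async: fsync()/O_SYNC return once the data is in
    # RAM; it reaches stable storage only at the next transaction group commit
    zfs set sync=disabled tank/share
    # Restore the default behaviour (honour sync requests)
    zfs set sync=standard tank/share

As the quoted reply points out, this trades the durability of the most recent writes for latency, so it is only appropriate where losing the last few seconds of writes after a crash is acceptable.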
2019 Jul 19
0
Samba async performance - bottleneck or bug?
Hi,

On Thu, 18 Jul 2019 19:04:47 +0000, douxevip via samba wrote:
> Hi,
>
> I have a ZFS dataset that has sync writes disabled (setting sync=disabled) which means that it will only do async writes, and sync requests get converted to async writes. The ZFS dataset is hosted on a single Samsung 840 Pro 512GB SATA SSD.
> I have this same dataset served as a Samba share, using Proxmox VE
2019 Aug 06
1
Samba async performance - bottleneck or bug?
Hi David,

> You're still using direct I/O with fio, which will likely disallow client side caching with oplocks/leases.

Is there a way to bypass this with settings in smb.conf at all and transform all writes to async?

> I'd recommend checking that your (cifs.ko?) client is using a relatively modern SMB2+ dialect and that leases are enabled on both sides.

Yes, I
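For anyone wanting to experiment with this server-side, these are the smb.conf parameters usually discussed for relaxing sync behaviour. This is a sketch, not a recommendation - the share name and path are hypothetical, and as the reply above notes, a client doing O_DIRECT I/O may see no benefit regardless:

    [sambatest]
        path = /tank/share
        read only = no
        # do not force a disk flush on every SMB flush/sync request
        strict sync = no
        # never add extra fsync calls of Samba's own
        sync always = no
        # hand reads/writes at or above this size to the async I/O engine
        aio read size = 1
        aio write size = 1

On the client side, mounting with a modern dialect (for example mount.cifs with -o vers=3.1.1) lets cifs.ko negotiate leases, which is what permits aggressive client-side caching.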
2019 Jul 25
0
Samba async performance - bottleneck or bug?
Hi,

On Fri, 19 Jul 2019 23:26:55 +0000, douxevip wrote:
> So to summarize, this is the situation:
>
> 1) I run a fio benchmark requesting small, random, async writes. The command is "fio --direct=1 --sync=0 --rw=randwrite --bs=4K --numjobs=1 --iodepth=1 --runtime=60 --group_reporting --name=sambatest --size=32k --time_based". I run this command both on the host and on the Samba
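For readability, the benchmark quoted above as a single shell command, unchanged apart from line wrapping:

    # run once directly on the ZFS host and once against the mounted Samba share
    fio --direct=1 --sync=0 --rw=randwrite --bs=4K \
        --numjobs=1 --iodepth=1 --runtime=60 --time_based \
        --group_reporting --name=sambatest --size=32k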
2013 Jan 31
4
[RFC][PATCH 2/2] Btrfs: implement unlocked dio write
This idea is from ext4. With this patch, we can make dio writes run in parallel and improve performance. We needn't worry about the race between dio write and truncate, because truncate has to wait until all in-flight dio writes end. And we also needn't worry about the race between dio write and punch hole, because we have the extent lock to protect our operation. I ran fio to test the
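The fio job the author mentions is cut off in this excerpt; a hypothetical job of the same shape (multiple O_DIRECT writers hitting one file on a btrfs mount, so the writes can actually run in parallel) might look like:

    # 4 concurrent direct-I/O random writers against a single btrfs file
    fio --name=dio-write --filename=/mnt/btrfs/testfile \
        --rw=randwrite --bs=4k --size=1G \
        --direct=1 --ioengine=libaio --iodepth=32 --numjobs=4 \
        --group_reporting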
2018 Mar 20
2
Gluster very poor performance when copying small files (1x (2+1) = 3, SSD)
On Tue, Mar 20, 2018 at 8:57 AM, Sam McLeod <mailinglists at smcleod.net> wrote:
> Hi Raghavendra,
>
> On 20 Mar 2018, at 1:55 pm, Raghavendra Gowdappa <rgowdapp at redhat.com> wrote:
>
> Aggregating large number of small writes by write-behind into large writes
> has been merged on master:
> https://github.com/gluster/glusterfs/issues/364
2018 Mar 20
0
Gluster very poor performance when copying small files (1x (2+1) = 3, SSD)
Excellent description, thank you.

With performance.write-behind-trickling-writes ON (default):

## 4k randwrite
# fio --randrepeat=1 --ioengine=libaio --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=32 --size=256MB --readwrite=randwrite
test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=32
fio-3.1
Starting 1 process
Jobs: 1
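The tunable being compared can be flipped per volume. A sketch, with the volume name as a placeholder:

    # Let write-behind aggregate small writes instead of trickling them down
    gluster volume set myvol performance.write-behind-trickling-writes off
    # Confirm the current value
    gluster volume get myvol performance.write-behind-trickling-writes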
2017 Oct 10
2
small files performance
2017-10-10 8:25 GMT+02:00 Karan Sandha <ksandha at redhat.com>:
> Hi Gandalf,
>
> We have multiple tunings to do for small files which decrease the time for negative lookups, meta-data caching, parallel readdir. Bumping the server and client event threads will help you out in increasing the small file performance.
>
> gluster v set <vol-name> group
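The event-thread bump mentioned above is applied with volume options. A sketch, with the volume name and thread counts as placeholders:

    # Raise client- and server-side event threads for small-file workloads
    gluster volume set myvol client.event-threads 4
    gluster volume set myvol server.event-threads 4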
2014 May 30
4
[PATCH] block: virtio_blk: don't hold spin lock during world switch
Firstly, it isn't necessary to hold vblk->vq_lock while notifying the hypervisor about queued I/O. Secondly, virtqueue_notify() causes a world switch and may take a long time on some hypervisors (such as qemu-arm), so it isn't good to hold the lock and block other vCPUs. On an arm64 quad-core VM (qemu-kvm), the patch can increase I/O performance a lot with VIRTIO_RING_F_EVENT_IDX
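The commit text does not include the exact benchmark; a generic way to measure this kind of change from inside the guest would be something along these lines (the device path /dev/vdb is hypothetical):

    # Random 4k reads against the virtio-blk disk, with enough parallelism
    # that several vCPUs are submitting and completing requests at once
    fio --name=vblk --filename=/dev/vdb --rw=randread --bs=4k \
        --direct=1 --ioengine=libaio --iodepth=32 --numjobs=4 \
        --runtime=60 --time_based --group_reporting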
2017 Oct 10
0
small files performance
I just tried setting: performance.parallel-readdir on features.cache-invalidation on features.cache-invalidation-timeout 600 performance.stat-prefetch performance.cache-invalidation performance.md-cache-timeout 600 network.inode-lru-limit 50000 performance.cache-invalidation on and clients could not see their files with ls when accessing via a fuse mount. The files and directories were there,
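Written out as volume options, the settings listed above map to commands like the following (the volume name is a placeholder, and the two options quoted without a value are assumed to be "on"):

    gluster volume set myvol performance.parallel-readdir on
    gluster volume set myvol features.cache-invalidation on
    gluster volume set myvol features.cache-invalidation-timeout 600
    gluster volume set myvol performance.stat-prefetch on        # value assumed
    gluster volume set myvol performance.cache-invalidation on
    gluster volume set myvol performance.md-cache-timeout 600
    gluster volume set myvol network.inode-lru-limit 50000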
2012 Jun 01
4
[PATCH v3] virtio_blk: unlock vblk->lock during kick
Holding the vblk->lock across kick causes poor scalability in SMP guests. If one CPU is doing a virtqueue kick and another CPU touches the vblk->lock, it will have to spin until the virtqueue kick completes. This patch reduces system% CPU utilization in SMP guests that are running multithreaded I/O-bound workloads. The improvements are small but become more visible as iops and the number of vCPUs increase. Khoa Huynh
2015 Nov 18
3
[RFC PATCH 0/2] Google extension to improve qemu-nvme performance
Hi Rob & Mihai, I wrote vhost-nvme patches on top of Christoph's NVMe target. vhost-nvme still uses mmio, so the guest OS can run an unmodified NVMe driver. But the tests I have done didn't show competitive performance compared to virtio-blk/virtio-scsi. The bottleneck is in mmio. Your nvme vendor extension patches greatly reduce the number of MMIO writes. So I'd like to push it
2012 Jul 12
3
[PATCH v2] Btrfs: improve multi-thread buffer read
While testing with my buffer read fio jobs[1], I find that btrfs does not perform well enough. Here is a scenario in the fio jobs: we have 4 threads, "t1 t2 t3 t4", starting to buffer-read the same file, and all of them will race on add_to_page_cache_lru(), and if one thread successfully puts its page into the page cache, it takes the responsibility to read the page's data. And
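The referenced fio jobs ([1]) are not included in this excerpt; a hypothetical job matching the description - four threads doing buffered reads of the same file so they race in the page cache - might look like:

    # Four buffered readers of one large file on a btrfs mount (direct=0 keeps
    # the page cache in play, which is where add_to_page_cache_lru() races)
    fio --name=bufread --filename=/mnt/btrfs/bigfile \
        --rw=read --bs=128k --size=4G \
        --direct=0 --ioengine=psync --numjobs=4 \
        --group_reporting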
2012 Jul 10
6
[PATCH RFC] Btrfs: improve multi-thread buffer read
While testing with my buffer read fio jobs[1], I find that btrfs does not perform well enough. Here is a scenario in the fio jobs: we have 4 threads, "t1 t2 t3 t4", starting to buffer-read the same file, and all of them will race on add_to_page_cache_lru(), and if one thread successfully puts its page into the page cache, it takes the responsibility to read the page's data. And
2018 Jul 23
2
[RFC 0/4] Virtio uses DMA API for all devices
On 07/20/2018 06:46 PM, Michael S. Tsirkin wrote:
> On Fri, Jul 20, 2018 at 09:29:37AM +0530, Anshuman Khandual wrote:
>> This patch series is the follow up on the discussions we had before about
>> the RFC titled [RFC,V2] virtio: Add platform specific DMA API translation
>> for virtio devices (https://patchwork.kernel.org/patch/10417371/). There
>> were suggestions
2015 Nov 20
2
[PATCH -qemu] nvme: support Google vendor extension
On 20/11/2015 09:11, Ming Lin wrote:
> On Thu, 2015-11-19 at 11:37 +0100, Paolo Bonzini wrote:
>>
>> On 18/11/2015 06:47, Ming Lin wrote:
>>> @@ -726,7 +798,11 @@ static void nvme_process_db(NvmeCtrl *n, hwaddr addr, int val)
>>>      }
>>>
>>>      start_sqs = nvme_cq_full(cq) ? 1 : 0;
>>> -    cq->head = new_head;