similar to: Slow Read Performance With Samba GlusterFS VFS

Displaying 20 results from an estimated 3000 matches similar to: "Slow Read Performance With Samba GlusterFS VFS"

2013 Aug 21
1
Gluster 3.4 Samba VFS writes slow in Win 7 clients
Hello, We have used glusterfs 3.4 with the latest samba-glusterfs-vfs lib to test samba performance from a Windows client. Two glusterfs server nodes export a share named "gvol". Hardware: each brick is a RAID 5 logical disk made of 8 * 2T SATA HDDs, over a 10G network connection. One Linux client mounts "gvol" with the cmd: [root@localhost current]# mount.cifs //192.168.100.133/gvol
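For reference, a full mount of such a share from a Linux client would look roughly like this; the mount point and credentials below are placeholders, not values from the original post:

    # mount the Samba-exported gluster volume over CIFS (illustrative options)
    mkdir -p /mnt/gvol
    mount.cifs //192.168.100.133/gvol /mnt/gvol -o user=test,pass=secret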
2017 Oct 27
0
Poor gluster performance on large files.
Why don't you set the LSI controller to passthrough mode and use one brick per HDD? Regards, Bartosz > Message written by Brandon Bates <brandon at brandonbates.com> on 27.10.2017 at 08:47: > > Hi gluster users, > I've spent several months trying to get any kind of high performance out of gluster. The current XFS/samba array is used for video editing and 300-400MB/s for
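A one-brick-per-HDD layout as suggested would be created along these lines; the host name and brick paths are placeholders:

    # each HDD carries its own filesystem and brick directory
    gluster volume create gvol server1:/bricks/hdd1/brick server1:/bricks/hdd2/brick server1:/bricks/hdd3/brick
    gluster volume start gvol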
2017 Oct 30
0
Poor gluster performance on large files.
Hi Brandon, Can you please turn OFF client-io-threads, as we have seen performance degradation with io-threads ON for both sequential and random reads/writes? Server event-threads defaults to 1 and client event-threads to 2. Thanks & Regards On Fri, Oct 27, 2017 at 12:17 PM, Brandon Bates <brandon at brandonbates.com> wrote: > Hi gluster users, > I've spent several
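The suggested change maps to documented gluster volume options; the volume name below is a placeholder:

    # disable client-side io-threads on the volume
    gluster volume set gvol performance.client-io-threads off
    # event threads, if you want to pin them explicitly to the defaults mentioned above
    gluster volume set gvol client.event-threads 2
    gluster volume set gvol server.event-threads 1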
2012 Oct 11
0
samba performance downgrade with glusterfs backend
Hi folks, We found that samba performance degrades a lot with a glusterfs backend. Volume info is as follows:
Volume Name: vol1
Type: Distribute
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: pana53:/data/
Options Reconfigured:
auth.allow: 192.168.*
features.quota: on
nfs.disable: on
Using dd (bs=1MB) or iozone (block=1MB) to test write performance gives about 400MB/s. #dd
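The write test described would be invoked something like this; the file paths and sizes are illustrative, not from the original post:

    # sequential write with 1 MB blocks, bypassing the page cache
    dd if=/dev/zero of=/mnt/vol1/testfile bs=1M count=4096 oflag=direct
    # equivalent iozone run: 1 MB records, 4 GB file, write/rewrite test only
    iozone -i 0 -r 1m -s 4g -f /mnt/vol1/iozone.tmp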
2023 Aug 22
1
Increase data length for SMB2 write and read requests for Windows 10 clients
On Tue, Aug 22, 2023 at 08:50:03AM +0200, Ralph Boehme wrote: >On 8/21/23 22:55, Jeremy Allison wrote: >>On Mon, Aug 21, 2023 at 03:19:59PM +0200, Ralph Boehme wrote: >>>On 8/21/23 11:53, Jones Syue ??? via samba wrote: >>>>>OH - that's *really* interesting ! I wonder how it is >>>>>changing the SMB3+ redirector to do this ? >>>>
2023 Aug 22
1
Increase data length for SMB2 write and read requests for Windows 10 clients
On 8/21/23 22:55, Jeremy Allison wrote: > On Mon, Aug 21, 2023 at 03:19:59PM +0200, Ralph Boehme wrote: >> On 8/21/23 11:53, Jones Syue ??? via samba wrote: >>>> OH - that's *really* interesting ! I wonder how it is >>>> changing the SMB3+ redirector to do this ? >>> >>> It looks like applications could do something and give a hint to SMB3+
2023 Aug 21
1
Increase data length for SMB2 write and read requests for Windows 10 clients
On Mon, Aug 21, 2023 at 03:19:59PM +0200, Ralph Boehme wrote: >On 8/21/23 11:53, Jones Syue ??? via samba wrote: >>>OH - that's *really* interesting ! I wonder how it is >>>changing the SMB3+ redirector to do this ? >> >>It looks like applications could do something and give a hint to SMB3+ >>redirector, so far not quite sure how to make it, >>per
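For context on the sizes discussed in this thread, Samba exposes the SMB2/3 I/O limits as smb.conf parameters; the values shown are the 8 MiB protocol cap mentioned later in the thread, not settings taken from any post:

    [global]
        # SMB2/3 maximum data lengths, in bytes
        smb2 max read = 8388608
        smb2 max write = 8388608
        smb2 max trans = 8388608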
2017 Oct 27
5
Poor gluster performance on large files.
Hi gluster users, I've spent several months trying to get any kind of high performance out of gluster. The current XFS/samba array is used for video editing, and 300-400MB/s for at least 4 clients is the minimum (currently a single Windows client gets at least 700/700 MB/s over samba, peaking to 950 at times using the Blackmagic speed test). Gluster has been getting me as low as
2004 Jun 26
1
OCFS Performance on a Hitachi SAN
I've been reading this group for a while and I've noticed a variety of comments regarding running OCFS on top of path-management packages such as EMC's Powerpath, and it brought to mind a problem I've been having. I'm currently testing a six-node cluster connected to a Hitachi 9570V SAN storage array, using OCFS 1.0.12. I have six LUNs presented to the hosts using HDLM,
2013 Mar 18
2
How to evaluate the glusterfs performance with small file workload?
Hi guys, I have run into some trouble trying to evaluate glusterfs performance with a small-file workload. 1: What kind of benchmark should I use to test small-file operations? We can use iozone to test large-file operations, but because of the memory cache, the results of testing small-file operations with iozone will not be correct.
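One common way around the cache problem is to run the benchmark with direct I/O; a sketch, with record and file sizes chosen purely for illustration:

    # -I requests O_DIRECT so small reads/writes bypass the page cache
    iozone -I -i 0 -i 1 -r 4k -s 64k -f /mnt/gvol/smallfile.tmp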
2023 Aug 21
2
Increase data length for SMB2 write and read requests for Windows 10 clients
Hello Jeremy, > OH - that's *really* interesting ! I wonder how it is > changing the SMB3+ redirector to do this ? It looks like applications could do something to give a hint to the SMB3+ redirector; so far I'm not quite sure how they do it. Process Monitor (procmon) shows that the write I/O size seems to be passed down from the application layers,
2023 Aug 18
1
Increase data length for SMB2 write and read requests for Windows 10 clients
Hello Ivan, 'FastCopy' has an option to adjust the max I/O size and it works for SMB :) It is a file-transfer tool that can be installed on win10; download here: https://fastcopy.jp/ Here is an example for writing: a job writes a file named '1GB.img' from a local disk 'H:' to a remote SMB-mounted net disk 'Z:'. Open the 'FastCopy' tool and specify
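FastCopy also has a command line; the /bufsize and /to options below appear in its documentation, but the exact invocation here is an assumption to be checked against fastcopy.jp, not something taken from this thread:

    REM copy with an enlarged I/O buffer (size in MB); verify syntax against the FastCopy docs
    FastCopy.exe /cmd=force_copy /bufsize=8 H:\1GB.img /to=Z:\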
2023 Aug 18
1
Increase data length for SMB2 write and read requests for Windows 10 clients
On Fri, Aug 18, 2023 at 04:25:28PM +0000, Jones Syue ??? wrote: >Hello Ivan, > >'FastCopy' has an option to revise max I/O size and works for SMB :) >it is a tool for file transferring and could be installed to win10, >download here: https://fastcopy.jp/ > >This is an example for writing, a job would write a file named '1GB.img' >from a local disk
2010 Aug 06
0
Re: PATCH 3/6 - direct-io: do not merge logically non-contiguous requests
On Fri, May 21, 2010 at 15:37:45AM -0400, Josef Bacik wrote: > On Fri, May 21, 2010 at 11:21:11AM -0400, Christoph Hellwig wrote: >> On Wed, May 19, 2010 at 04:24:51PM -0400, Josef Bacik wrote: >> > Btrfs cannot handle having logically non-contiguous requests submitted. For >> > example if you have >> > >> > Logical: [0-4095][HOLE][8192-12287]
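To make the quoted layout concrete, a file with exactly that hole can be produced like this (a sketch assuming a 4 KiB block size):

    # write block 0, skip block 1 (the hole), write block 2
    dd if=/dev/zero of=holey.img bs=4096 count=1
    dd if=/dev/zero of=holey.img bs=4096 count=1 seek=2 conv=notrunc
    # logical layout is now [0-4095][HOLE][8192-12287]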
2008 Jul 16
1
[Fwd: [Fwd: The results of iozone stress on NFS/ZFS and SF X4500 shows the very bad performance in read but good in write]]
Dear all, IHAC (I have a customer) who would like to use a Sun Fire X4500 as the NFS server for their backend services, and would like to see the potential performance gain compared to their existing systems. However, the output of the iozone I/O stress test shows mixed results: * The read performance sharply degrades (almost down to 1/20, i.e. from 2,000,000 down to 100,000) when the
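An iozone run of this kind over an NFS mount is typically invoked like this; the server name, mount point, and sizes are placeholders, not values from the report (iozone reports throughput in KB/s, matching the figures above):

    # mount the X4500 export, then measure sequential write and read throughput
    mount -o vers=3 x4500:/export /mnt/nfs
    iozone -i 0 -i 1 -r 128k -s 2g -f /mnt/nfs/iozone.tmp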
2023 Aug 22
1
Increase data length for SMB2 write and read requests for Windows 10 clients
On 8/22/23 18:02, Jeremy Allison wrote: > On Tue, Aug 22, 2023 at 08:50:03AM +0200, Ralph Boehme wrote: >> I don't get it. Iirc 8 MB is the default max io size the kernel client >> will use which is also, iirc, the limit of the protocol. > > Don't get what ? The FastCopy.exe above is using 8MB write sizes. sure, so? FastCopy.exe uses 8 meg, Explorer and Robocopy use
2011 Dec 08
0
folder with no permissions
Hi Matt, Can you please provide us with more information? 1. What version of glusterfs are you using? 2. Was iozone run as root or as a user? a. If a user, did it have the required permissions? 3. Steps to reproduce the problem 4. Any other errors related to stripe in the client log? With regards, Shishir ________________________________________ From: gluster-users-bounces at gluster.org
2014 Jun 09
0
Performance optimization of glusterfs with samba-glusterfs-vfs plugin
Hi guys, I have spent many days recently debugging the read/write performance of glusterfs with the samba-glusterfs-vfs plugin. The write performance is okay, but the read speed is always disappointing. The testing environment is like this: OS: CentOS 6.4; hosts: two hosts, one for the samba server on which glusterfs with samba-glusterfs-vfs runs, the other for the samba client; glusterfs version:
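A share wired to gluster through the VFS plugin is typically defined like this in smb.conf, per the vfs_glusterfs parameters; the share and volume names are placeholders:

    [gvol]
        path = /
        read only = no
        # route I/O through libgfapi instead of a FUSE mount
        vfs objects = glusterfs
        glusterfs:volume = gvol
        glusterfs:volfile_server = localhost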
2013 Sep 06
2
[Gluster-devel] GlusterFS 3.3.1 client crash (signal received: 6)
It's a pity I don't know how to re-create the issue, but 1-2 of our 120 clients crash every day. Below is the gdb result:
(gdb) where
#0 0x0000003267432885 in raise () from /lib64/libc.so.6
#1 0x0000003267434065 in abort () from /lib64/libc.so.6
#2 0x000000326746f7a7 in __libc_message () from /lib64/libc.so.6
#3 0x00000032674750c6 in malloc_printerr () from
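For reference, a backtrace like that is captured by pointing gdb at the client binary and its core dump; the paths below are illustrative:

    # load the crashed glusterfs client together with its core file
    gdb /usr/sbin/glusterfs /path/to/core.12345
    # then print the stack at the crash site
    (gdb) where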
2007 Nov 19
0
Solaris 8/07 Zfs Raidz NFS dies during iozone test on client host
Hi, Well, I have a freshly built system with ZFS raidz: Intel P4 2.4 GHz, 1GB RAM, a Marvell Technology Group Ltd. MV88SX6081 8-port SATA II PCI-X controller, and (2) Intel dual-port 1Gbit NICs. I have (5) 300GB disks in a raidz1 with ZFS. I've created a couple of filesystems on this: /export/downloads /export/music /export/musicraw I've shared these out as well. First with ZFS 'zfs
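The setup described reads like the standard raidz1-plus-sharenfs recipe on Solaris; a sketch with placeholder pool and device names:

    # 5-disk raidz1 pool, one filesystem per share, exported over NFS
    zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0
    zfs create tank/downloads
    zfs set sharenfs=on tank/downloads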