Displaying 20 results from an estimated 10000 matches similar to: "Occasional loss of connection between Windows clients and Samba under stress"
2008 Jul 16
1
[Fwd: [Fwd: The results of an iozone stress test on NFS/ZFS and SF X4500 show very poor read performance but good write performance]]
Dear ALL,
I have a customer (IHAC) who would like to use a Sun Fire X4500 as the NFS
server for their backend services, and who would like to see the potential
performance gain compared to their existing systems. However, the output of
an I/O stress test with iozone shows mixed results:
* Read performance degrades sharply (almost to 1/20th, i.e.
from 2,000,000 down to 100,000) when the
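For context, a sequential read/write run of this kind is usually driven from the NFS client like the following (a sketch only; the server name, mount point, file size, and record size here are illustrative, not taken from the report):
  # mount the X4500 export over NFSv3 (hypothetical names)
  mount -o vers=3 x4500:/export/data /mnt/x4500
  # -i 0 = write/rewrite, -i 1 = read/reread
  iozone -i 0 -i 1 -s 8g -r 128k -f /mnt/x4500/iozone.tmp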
2017 Sep 07
2
3.10.5 vs 3.12.0 huge performance loss
It is a sequential write with a 2GB file size. The same behavior is
observed with 3.11.3 too.
On Thu, Sep 7, 2017 at 12:43 AM, Shyam Ranganathan <srangana at redhat.com> wrote:
> On 09/06/2017 05:48 AM, Serkan Çoban wrote:
>>
>> Hi,
>>
>> Just did some ingestion tests on a 40-node, 16+4 EC, 19PB single volume.
>> 100 clients are writing, each with 5 threads, 500 threads in total.
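A write workload of that shape can be approximated from a single client with iozone's throughput mode (a sketch; the mount point and record size are assumptions, while the 5 threads and 2GB file size follow the message):
  # 5 writer threads, 2GB sequential write each, onto the gluster mount
  iozone -i 0 -t 5 -s 2g -r 1m -F /mnt/gv/f1 /mnt/gv/f2 /mnt/gv/f3 /mnt/gv/f4 /mnt/gv/f5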
2016 Jan 25
2
How to make performance test in samba4
Hi everybody,
I have several VMs running on XenServer 6.5.
I have 2 Samba servers (1 DC & 1 file server). They are both running Debian
Linux 8.2 (Jessie).
I'm using the Samba Debian package (4.1.17).
My VM has 8GB RAM and 4 vCPUs.
How can I test whether read/write performance is good?
Could I get better performance with the latest Samba release?
Thanks,
Pierre
--
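One straightforward way to get comparable read/write numbers is to mount the share from a test client and run iozone against it, repeating the same run after any Samba upgrade (a sketch; server, share, and credentials are made up):
  # mount the Samba share from a Linux test client
  mount.cifs //fileserver/share /mnt/smbtest -o user=testuser
  # -i 0 = write/rewrite, -i 1 = read/reread
  iozone -i 0 -i 1 -s 1g -r 64k -f /mnt/smbtest/iozone.tmp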
2017 Sep 06
0
3.10.5 vs 3.12.0 huge performance loss
On 09/06/2017 05:48 AM, Serkan Çoban wrote:
> Hi,
>
> Just did some ingestion tests on a 40-node, 16+4 EC, 19PB single volume.
> 100 clients are writing, each with 5 threads, 500 threads in total.
> With 3.10.5 each server shows 800MB/s of network traffic, cluster total 32GB/s.
> With 3.12.0 each server shows 200MB/s of network traffic, cluster total 8GB/s.
> I did not change any volume
2017 Sep 11
0
3.10.5 vs 3.12.0 huge performance loss
Here are my results:
Summary: I am not able to reproduce the problem; in other words, I get
roughly equivalent numbers for sequential IO against both 3.10.5 and 3.12.0.
Next steps:
- Could you pass along your volfiles: both the client vol file (from
/var/lib/glusterd/vols/<yourvolname>/patchy.tcp-fuse.vol) and a brick vol
file from the same place?
- I want to check
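The requested files can be bundled with something like this (keeping the volume-name placeholder from the message):
  tar czf volfiles.tar.gz /var/lib/glusterd/vols/<yourvolname>/*.vol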
2009 Dec 15
1
IOZone: Number of outstanding requests..
Hello:
Sorry for asking an iozone question on this mailing list, but I couldn't
find any mailing list for iozone...
In IOzone, is there a way to configure the number of outstanding requests
the client sends to the server side? Something along the lines of the
IOMeter option "Number of outstanding requests".
Thanks a lot!
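iozone has no exact equivalent of IOMeter's queue-depth knob, but two options come close (a sketch; file and record sizes are illustrative): -H issues a given number of outstanding POSIX async I/O operations, and -t approximates queue depth with concurrent threads:
  # 8 outstanding async I/Os, single stream
  iozone -H 8 -s 4g -r 64k -i 0 -i 1 -f /mnt/test/iozone.tmp
  # or: 8 threads, each keeping one request in flight
  iozone -t 8 -s 512m -r 64k -i 0 -i 1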
2008 Feb 19
1
ZFS and small block random I/O
Hi,
We're doing some benchmarking at a customer site (using IOzone), and for some
specific small-block random tests, the performance of their X4500 is very
poor (~1.2 MB/s aggregate throughput for a 5+1 RAID-Z). Specifically,
the test is the IOzone multithreaded throughput test with an 8GB file size
and an 8KB record size, with the server physmem'd to 2GB.
I noticed a couple of peculiar
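The test described corresponds roughly to the following invocation, and a common tuning step for this workload is matching the ZFS recordsize to the 8KB record size (a sketch; pool/filesystem names and thread count are hypothetical):
  # multithreaded throughput test: 4 threads x 2GB files, 8KB records,
  # write (-i 0) and random read/write (-i 2)
  iozone -t 4 -s 2g -r 8k -i 0 -i 2 -F /tank/fs/f1 /tank/fs/f2 /tank/fs/f3 /tank/fs/f4
  # align recordsize with the workload (affects newly written files only)
  zfs set recordsize=8k tank/fs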
2015 Apr 14
3
VM Performance using KVM Vs. VMware ESXi
Hi All
We are currently testing our product using KVM as the hypervisor. We are
not using KVM as a bare-metal hypervisor. We use it on top of a RHEL
installation. So basically RHEL acts as our host and using KVM we deploy
guests on this system.
We have all along tested and shipped our application image for VMware
ESXi installations , So this it the first time we are trying our
application
2008 Jul 03
2
iozone remove_suid oops...
Having done a current checkout, creating a new FS and running iozone
[1] on it results in an oops [2]: remove_suid is called, accessing
offset 14 of a NULL pointer.
Let me know if you'd like me to test any fix, do further debugging, or
gather more information.
Thanks,
Daniel
--- [1]
# mkfs.btrfs /dev/sda4
# mount /dev/sda4 /mnt
/mnt# iozone -a .
--- [2]
[ 899.118926] BUG: unable to
2017 Oct 27
0
Poor gluster performance on large files.
Why don't you set the LSI to passthrough mode and use one brick per HDD?
Regards,
Bartosz
> Message from Brandon Bates <brandon at brandonbates.com>, written on 27.10.2017 at 08:47:
>
> Hi gluster users,
> I've spent several months trying to get any kind of high performance out of gluster. The current XFS/samba array is used for video editing and 300-400MB/s for
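In outline, the suggested layout is one XFS filesystem per disk, each serving as its own brick (a sketch; device, directory, and server names are made up):
  # after switching the controller to passthrough/JBOD, for each disk:
  mkfs.xfs -i size=512 /dev/sdb
  mkdir -p /bricks/sdb && mount /dev/sdb /bricks/sdb
  # then build the volume from one brick per HDD across the servers
  gluster volume create gvol server1:/bricks/sdb/brick server2:/bricks/sdb/brick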
2017 Oct 30
0
Poor gluster performance on large files.
Hi Brandon,
Can you please turn OFF client-io-threads, as we have seen performance
degrade with io-threads ON for sequential and random reads/writes? Server
event threads default to 1 and client event threads default to 2.
Thanks & Regards
On Fri, Oct 27, 2017 at 12:17 PM, Brandon Bates <brandon at brandonbates.com>
wrote:
> Hi gluster users,
> I've spent several
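The suggested change is a single volume option (volume name is a placeholder):
  gluster volume set <volname> performance.client-io-threads off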
2017 Oct 27
5
Poor gluster performance on large files.
Hi gluster users,
I've spent several months trying to get any kind of high performance out of
gluster. The current XFS/samba array is used for video editing, and
300-400MB/s for at least 4 clients is the minimum requirement (currently a
single Windows client gets at least 700/700 MB/s over Samba, peaking at 950
at times in the Blackmagic speed test). Gluster has been getting me as low
as
2013 Aug 21
1
Gluster 3.4 Samba VFS writes slow in Win 7 clients
Hello,
We have been using GlusterFS 3.4 with the latest samba-glusterfs-vfs library
to test Samba performance from a Windows client.
Two GlusterFS server nodes export a share named "gvol".
Hardware:
* each brick uses a RAID 5 logical disk with 8 x 2TB SATA HDDs
* 10G network connection
One Linux client mounts "gvol" with the command:
[root at localhost current]# mount.cifs //192.168.100.133/gvol
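For reference, wiring the samba-glusterfs-vfs module into smb.conf usually looks roughly like this (the share and volume names follow the message; the log path is an assumption):
  [gvol]
      path = /
      read only = no
      vfs objects = glusterfs
      glusterfs:volume = gvol
      ; the log path below is an assumption
      glusterfs:logfile = /var/log/samba/glusterfs-gvol.log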
2011 Jan 08
1
how to graph iozone output using OpenOffice?
Hi all,
Can anyone please steer me in the right direction with this one? I've
searched the net, but couldn't find a clear answer.
How do I actually generate graphs from iozone using OpenOffice? Every
website I've been to simply mentions that iozone can output an xls
file that can be used in MS Excel to generate a 3D graph, but I
can't see how it's actually done. Can anyone
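The step most write-ups skip is iozone's -b flag, which writes the Excel-compatible spreadsheet directly; Calc can then open it, and the 3D chart is built by hand from the result matrix (a sketch; file name and size cap are illustrative):
  # run the automatic test matrix, capped at 1GB files, and write results.xls
  iozone -a -g 1g -b results.xls
  # open in OpenOffice/LibreOffice Calc, select one report block
  # (e.g. the writer report), then Insert > Chart and pick a 3D type
  soffice --calc results.xls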
2009 Apr 09
8
ZIL SSD performance testing... -IOzone works great, others not so great
Hi folks,
I would appreciate it if someone could help me understand some weird
results I'm seeing while trying to do performance testing with an
SSD-offloaded ZIL.
I'm attempting to improve my infrastructure's burstable write capacity
(ZFS-based WebDAV servers), and naturally I'm looking at implementing
SSD-based ZIL devices.
I have a test machine with the
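For reference, attaching a dedicated SSD log device to an existing pool is a one-liner, and it can be detached again on releases that support log removal (a sketch; pool and device names are hypothetical):
  # add the SSD as a separate intent log (slog)
  zpool add tank log c4t0d0
  # remove it again (needs a zpool version with log-device removal)
  zpool remove tank c4t0d0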
2018 Sep 18
1
Re: NUMA issues on virtualized hosts
On 09/17/2018 04:59 PM, Lukas Hejtmanek wrote:
> Hello,
>
> so the current domain configuration:
> <cpu mode='host-passthrough'>
>   <topology sockets='8' cores='4' threads='1'/>
>   <numa>
>     <cell cpus='0-3' memory='62000000'/>
>     <cell cpus='4-7' memory='62000000'/>
>     <cell cpus='8-11'
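When debugging a layout like this, it helps to compare the guest definition against the host's actual topology and current pinning (a sketch; the domain name is a placeholder):
  # host side: NUMA nodes and per-node memory
  numactl --hardware
  # guest side: memory placement policy and vCPU pinning
  virsh numatune <domain>
  virsh vcpupin <domain>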
2004 Jun 26
1
OCFS Performance on a Hitachi SAN
I've been reading this group for a while and I've noticed a variety of comments regarding running OCFS on top of path-management packages such as EMC's Powerpath, and it brought to mind a problem I've been having.
I'm currently testing a six-node cluster connected to a Hitachi 9570V SAN storage array, using OCFS 1.0.12. I have six LUNs presented to the hosts using HDLM,
2010 Oct 04
1
samba 3.3 - poor performance (compared to NFS)
I have a system that I'm vetting as a NAS server. It has a 2.0TB XFS filesystem mounted on /storage, and I'm running benchmarks over NFSv3, NFSv4, and Samba. I'm testing via iozone by mounting the filesystem from my "NAS client" box and then running iozone on the mounted filesystem. NFS seems pretty fast, i.e., several orders of magnitude faster than Samba, and I'm
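A sketch of that methodology, for anyone reproducing it (host, share, and mount points are made up; the /storage export follows the message):
  # NFSv3
  mount -t nfs -o vers=3 nas:/storage /mnt/nfs3
  iozone -i 0 -i 1 -s 2g -r 64k -f /mnt/nfs3/iozone.tmp
  # CIFS/Samba against the same filesystem
  mount.cifs //nas/storage /mnt/smb -o user=testuser
  iozone -i 0 -i 1 -s 2g -r 64k -f /mnt/smb/iozone.tmp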
2007 Sep 13
3
3Ware 9550SX and latency/system responsiveness
Dear list,
I thought I'd just share my experiences with this 3Ware card, and see
if anyone might have any suggestions.
System: Supermicro H8DA8 with 2 x Opteron 250 2.4GHz and 4GB RAM
installed. 9550SX-8LP hosting 4x Seagate ST3250820SV 250GB in a RAID
1 plus 2 hot spare config. The array is properly initialized, write
cache is on, as is queueing (and supported by the drives). StoreSave
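The settings in question can be inspected, and the StorSave policy adjusted, with 3ware's tw_cli (a sketch; controller and unit numbers are illustrative, and the exact syntax should be checked against the 9550SX manual):
  tw_cli /c0 show            # controller and unit overview
  tw_cli /c0/u0 show all     # cache, queueing, and storsave policy for unit 0
  tw_cli /c0/u0 set storsave=balance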
2017 Sep 06
2
3.10.5 vs 3.12.0 huge performance loss
Hi,
Just did some ingestion tests on a 40-node, 16+4 EC, 19PB single volume.
100 clients are writing, each with 5 threads, 500 threads in total.
With 3.10.5 each server shows 800MB/s of network traffic, cluster total 32GB/s.
With 3.12.0 each server shows 200MB/s of network traffic, cluster total 8GB/s.
I did not change any volume options in either config.
Any thoughts?
Serkan
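When chasing a regression like this, gluster's built-in profiler gives per-brick latency and throughput counters that can be compared across the two versions (volume name is a placeholder):
  gluster volume profile <volname> start
  # run the ingestion workload, then:
  gluster volume profile <volname> info
  gluster volume profile <volname> stop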