similar to: Gluster 3.4 Samba VFS writes slow in Win 7 clients

Displaying 20 results from an estimated 2000 matches similar to: "Gluster 3.4 Samba VFS writes slow in Win 7 clients"

2016 Jan 25
2
How to make performance test in samba4
Hi everybody, I have several VMs running on XenServer 6.5, including 2 Samba servers (1 DC & 1 file server). Both run Debian Linux 8.2 Jessie with the Samba Debian package (4.1.17). My VM has 8 GB RAM and 4 vCPUs. How can I test whether the read/write performance is reasonably good? Could I get better performance with the latest Samba release? Thanks, Pierre --
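A simple baseline, if it helps: time a large sequential copy through a CIFS mount from a test client. The sketch below assumes a hypothetical //fileserver/share and /mnt/smbtest mount point, neither of which appears in the original post.

    # mount the share from a test client (server, share and user are placeholders)
    mount -t cifs //fileserver/share /mnt/smbtest -o username=testuser
    # sequential write: 4 GB, flushed to the server before dd reports a rate
    dd if=/dev/zero of=/mnt/smbtest/ddtest.bin bs=1M count=4096 conv=fdatasync
    # drop the client page cache, then time a sequential read of the same file
    echo 3 > /proc/sys/vm/drop_caches
    dd if=/mnt/smbtest/ddtest.bin of=/dev/null bs=1M

Comparing these numbers before and after a Samba upgrade answers the second question more reliably than a synthetic benchmark alone.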
2017 Sep 13
1
[3.11.2] Bricks disconnect from gluster with 0-transport: EPOLLERR
I ran into something like this in 3.10.4 and filed two bugs for it: https://bugzilla.redhat.com/show_bug.cgi?id=1491059 https://bugzilla.redhat.com/show_bug.cgi?id=1491060 Please see the above bugs for full details. In summary, my issue was related to glusterd's handling of pid files when it starts self-heal and brick processes. The issues are: a. brick pid file leaves stale pid and brick fails
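The stale-pid symptom can be spot-checked by comparing the pid recorded in a brick's pid file with the brick processes actually running. A rough sketch, assuming the 3.x default path layout and a volume named gv2 (both may differ on other installs):

    # pid recorded by glusterd for each brick of the volume
    cat /var/lib/glusterd/vols/gv2/run/*.pid
    # brick processes actually running; compare their pids with the files above
    pgrep -af glusterfsd

A pid file pointing at a process that no longer exists (or at an unrelated one) matches the stale-pid behaviour described in the bugs.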
2017 Aug 06
1
[3.11.2] Bricks disconnect from gluster with 0-transport: EPOLLERR
Hi, I have a distributed volume which runs on Fedora 26 systems with glusterfs 3.11.2 from gluster.org repos:
----------
[root at taupo ~]# glusterd --version
glusterfs 3.11.2
gluster> volume info gv2
Volume Name: gv2
Type: Distribute
Volume ID: 6b468f43-3857-4506-917c-7eaaaef9b6ee
Status: Started
Snapshot Count: 0
Number of Bricks: 6
Transport-type: tcp
Bricks:
Brick1:
2017 Jun 23
1
Introducing minister
Hi All, Kubernetes and OpenShift have amazing projects called minikube and minishift which make it very easy to set up those distributed systems for development. As the Gluster ecosystem grows, we have more external projects which require easy setup of a multi-node Gluster cluster. Hence, along those lines, I introduce to you... minister (mini + Glu"ster"). Please do check out the
2008 Aug 01
1
file descriptor in bad state
I've just set up a simple Gluster storage system on CentOS 5.2 x64 with Gluster 1.3.10. I have three storage bricks and one client. Every time I run iozone across this setup, I seem to get a bad file descriptor around the 4k mark. Any thoughts why? I'm sure more info is wanted; I'm just not sure what else to include at this point. Thanks. [root at green gluster]# cat
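For reference, iozone's automatic mode is what the report describes; pinning the suspect record size reproduces the failure faster. A sketch, with /mnt/gluster standing in for the client mount point (not named in the post):

    cd /mnt/gluster
    # full automatic sweep over file and record sizes, as in the report
    iozone -a
    # or pin the 4k record size: write/rewrite (-i 0) and read/reread (-i 1) only
    iozone -i 0 -i 1 -r 4k -s 64m -f /mnt/gluster/iozone.tmp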
2017 May 31
1
Glusterfs 3.10.3 has been tagged
Glusterfs 3.10.3 has been tagged. Packages for the various distributions will be available in a few days, and with that a more formal release announcement will be made.
- Tagged code: https://github.com/gluster/glusterfs/tree/v3.10.3
- Release notes: https://github.com/gluster/glusterfs/blob/release-3.10/doc/release-notes/3.10.3.md
Thanks, Raghavendra Talur
NOTE: Tracker bug for 3.10.3 will be
2017 Nov 06
1
Gluster Developer Conversations - Nov 28 at 15:00 UTC
Awesome! You're on the list. Anyone else want to present? - amye
On Fri, Nov 3, 2017 at 3:01 AM, Raghavendra Talur <rtalur at redhat.com> wrote:
> I propose a talk
>
> "Life of a gluster client process"
>
> We will have a look at one complete life cycle of a client process
> which includes:
> * mount script and parsing of args
> * contacting glusterd
2017 Oct 27
5
Poor gluster performance on large files.
Hi gluster users, I've spent several months trying to get any kind of high performance out of Gluster. The current XFS/Samba array is used for video editing, and 300-400 MB/s for each of at least 4 clients is the minimum requirement (currently a single Windows client gets at least 700/700 MB/s over Samba, peaking at 950 at times in the Blackmagic speed test). Gluster has been getting me as low as
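For sequential large-file workloads like this, a repeatable per-client baseline on the FUSE mount helps separate Gluster from the Samba layer. A minimal sketch using fio; the tool choice, the /mnt/gv0 mount point and the sizes are illustrative, not from the thread:

    # sequential write, 4 jobs to mimic multiple streams, data fsynced before the rate is reported
    fio --name=seqwrite --directory=/mnt/gv0 --rw=write --bs=1M --size=8g \
        --numjobs=4 --end_fsync=1 --group_reporting
    # repeat with --rw=read (after dropping caches) for the read side

If the FUSE numbers already fall short of 300-400 MB/s per client, tuning Samba on top of it will not recover the difference.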
2013 Feb 27
1
Slow read performance
Help please - I am running 3.3.1 on CentOS over a 10GbE network. I get reasonable write speeds, although I think they could be faster, but my read speeds are REALLY slow. Executive summary:
On the gluster client: writes average about 700-800 MB/s; reads average about 70-80 MB/s.
On the server: writes average about 1-1.5 GB/s; reads average about 2-3 GB/s.
Any thoughts? Here are some additional details:
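When client reads are an order of magnitude slower than client writes while the servers read fast locally, a cold-cache comparison between the mount and a brick is a useful first step. A rough sketch; the file and brick paths are placeholders:

    # on the gluster client: drop caches, then time a large sequential read from the mount
    echo 3 > /proc/sys/vm/drop_caches
    dd if=/mnt/gluster/bigfile of=/dev/null bs=1M
    # on a server: read the same file straight from the brick to rule out disk or network limits
    echo 3 > /proc/sys/vm/drop_caches
    dd if=/data/brick1/bigfile of=/dev/null bs=1M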
2017 Jul 27
2
gluster-heketi-kubernetes
Hi Talur, I've successfully deployed Gluster as a DaemonSet using the k8s spec file glusterfs-daemonset.json from https://github.com/heketi/heketi/tree/master/extras/kubernetes, but when I then try deploying heketi using the heketi-deployment.json spec file, I end up with a CrashLoopBackOff pod.
# kubectl get pods
NAME    READY    STATUS    RESTARTS    AGE
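For a CrashLoopBackOff the usual first step is to pull the container's logs and events; standard kubectl commands, with <heketi-pod> standing in for the actual pod name from the listing:

    kubectl logs <heketi-pod>               # output from the current attempt
    kubectl logs <heketi-pod> --previous    # output from the last crashed container
    kubectl describe pod <heketi-pod>       # events: failed probes, mount errors, etc.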
2017 Sep 07
2
3.10.5 vs 3.12.0 huge performance loss
It is a sequential write with a file size of 2 GB. The same behavior is observed with 3.11.3 too.
On Thu, Sep 7, 2017 at 12:43 AM, Shyam Ranganathan <srangana at redhat.com> wrote:
> On 09/06/2017 05:48 AM, Serkan Çoban wrote:
>>
>> Hi,
>>
>> Just did some ingestion tests on a 40-node 16+4 EC 19 PB single volume.
>> 100 clients are writing, each with 5 threads, 500 threads in total.
2017 Jun 29
2
Persistent storage for docker containers from a Gluster volume
On 28-Jun-2017 5:49 PM, "mabi" <mabi at protonmail.ch> wrote: Anyone?
-------- Original Message --------
Subject: Persistent storage for docker containers from a Gluster volume
Local Time: June 25, 2017 6:38 PM
UTC Time: June 25, 2017 4:38 PM
From: mabi at protonmail.ch
To: Gluster Users <gluster-users at gluster.org>
Hello, I have a two node replica 3.8 GlusterFS
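One common pattern, assuming the Gluster volume is already FUSE-mounted on each Docker host, is simply to bind-mount a directory from it into the container. A minimal sketch; the server, volume, path and image names are illustrative, not from the thread:

    # mount the gluster volume on the docker host
    mount -t glusterfs node1:/gv0 /mnt/gv0
    mkdir -p /mnt/gv0/webdata
    # bind-mount a subdirectory into a container as its persistent storage
    docker run -d --name web -v /mnt/gv0/webdata:/usr/share/nginx/html nginx

Docker volume plugins for Gluster exist as well, but the plain bind mount has the fewest moving parts.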
2008 Mar 26
3
HW experience
Hi, we would like to set up a small Lustre instance. For the OSTs we are planning to use standard Dell PE1950 servers (2x quad-core + 16 GB RAM), and for the disks a JBOD (MD1000) driven by the PE1950's internal RAID controller (RAID-6). Any experience (good or bad) with such a config? Thanks, Martin
2017 Nov 01
2
Gluster Developer Conversations - Nov 28 at 15:00 UTC
Hi all! Based on the demand for more lightning talks at Gluster Summit, we'll be trying something new: Gluster Developer Conversations. This will be a one-hour meeting on November 28th at 15:00 UTC, with five 5-minute lightning talks and time for discussion in between. The meeting will be recorded, and I'll be posting the individual talks separately in our community channels.
2008 Jul 03
2
iozone remove_suid oops...
Having done a current checkout, creating a new FS and running iozone [1] on it results in an oops [2]. remove_suid is called, accessing offset 14 of a NULL pointer. Let me know if you'd like me to test any fix, do further debugging or get more information. Thanks, Daniel
---
[1]
# mkfs.btrfs /dev/sda4
# mount /dev/sda4 /mnt
/mnt# iozone -a .
---
[2]
[ 899.118926] BUG: unable to
2013 May 23
11
raid6: rmw writes all the time?
Hi all, we got a new test system here and I just also tested btrfs raid6 on it. Write performance is slightly lower than hw-raid (LSI megasas) and md-raid6, but it probably would be much better than either of these two if it wouldn't read all the time during the writes. Is this a known issue? This is with linux-3.9.2. Thanks, Bernd
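The read-during-write behaviour can be observed directly with iostat while a sequential write is streaming. A quick sketch; the mount point and member disk names are placeholders:

    # stream a large sequential write onto the btrfs raid6 filesystem
    dd if=/dev/zero of=/mnt/btrfs/rmwtest.bin bs=1M count=16384 conv=fdatasync &
    # watch per-disk traffic; sustained rkB/s on the member disks during the write points at read-modify-write
    iostat -x 1 sdb sdc sdd sde sdf sdg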
2009 Dec 15
1
IOZone: Number of outstanding requests..
Hello: Sorry for asking an iozone question on this mailing list, but I couldn't find any mailing list for iozone... In IOzone, is there a way to configure the number of outstanding requests the client sends to the server side? Something along the lines of the IOMeter option "Number of outstanding requests". Thanks a lot!
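If switching tools is acceptable, fio exposes the number of in-flight requests directly through its iodepth option, which maps closely to IOMeter's "outstanding requests". A sketch, with a made-up test file path:

    # 8 KB random reads with 32 requests kept in flight per job
    fio --name=qdtest --filename=/mnt/test/fio.bin --size=4g \
        --rw=randread --bs=8k --ioengine=libaio --iodepth=32 --direct=1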
2017 Oct 30
0
Poor gluster performance on large files.
Hi Brandon, can you please turn OFF client-io-threads? We have seen performance degradation with io-threads ON for sequential and random reads/writes. Server event threads default to 1 and client event threads default to 2. Thanks & Regards
On Fri, Oct 27, 2017 at 12:17 PM, Brandon Bates <brandon at brandonbates.com> wrote:
> Hi gluster users,
> I've spent several
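The option being referred to is toggled per volume; the usual form of the command, with <VOLNAME> as a placeholder:

    gluster volume set <VOLNAME> performance.client-io-threads off
    # confirm the current value
    gluster volume get <VOLNAME> performance.client-io-threads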
2015 Apr 14
3
VM Performance using KVM Vs. VMware ESXi
Hi All, we are currently testing our product using KVM as the hypervisor. We are not using KVM as a bare-metal hypervisor; we use it on top of a RHEL installation, so RHEL acts as our host and we deploy guests on it using KVM. We have all along tested and shipped our application image for VMware ESXi installations, so this is the first time we are trying our application
2008 Feb 19
1
ZFS and small block random I/O
Hi, we're doing some benchmarking at a customer (using IOzone) and for some specific small-block random tests, performance of their X4500 is very poor (~1.2 MB/s aggregate throughput for a 5+1 RAIDZ). Specifically, the test is the IOzone multithreaded throughput test with an 8 GB file size and 8 KB record size, with the server physmem'd to 2 GB. I noticed a couple of peculiar
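For reference, the multithreaded throughput test described is usually invoked along these lines; the thread count, the per-thread size split and the path are assumptions, since the snippet doesn't state them:

    cd /pool/testfs
    # throughput mode: 4 threads, 8 KB records, 8 GB file per thread,
    # initial write (-i 0) so the random read/write pass (-i 2) has data to work on
    iozone -t 4 -r 8k -s 8g -i 0 -i 2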