
Displaying 20 results from an estimated 4000 matches similar to: "Gluster-users Digest, Vol 59, Issue 15 - GlusterFS performance"

2013 Nov 07
0
GlusterFS with NFS client hang up some times
I have the following setup with GlusterFS. Servers: 4 - CPU: Intel(R) Xeon(R) CPU E5-2620 0 @ 2.00GHz - RAM: 32G - HDD: 1T, 7200 RPM (x 10) - Network card: 1G x 4 (bonding) OS: CentOS 6.4 - File system: XFS > Disk /dev/sda: 1997.1 GB, 1997149306880 bytes > 255 heads, 63 sectors/track, 242806 cylinders > Units = cylinders of 16065 * 512 = 8225280 bytes >
2008 Dec 04
1
page cache keeps growing until system runs out of memory on a MIPS platform
Hi, I have samba-3.0.28a cross-compiled and running on a MIPS platform. The development system has about 150MB of free RAM after system bootup and no swap space. The system also has a USB interface, to which an external USB hard disk is connected. When I try to transfer huge files (above 100MB) from a client onto the USB hard disk, I find that the page cache eats up almost 100MB and
2012 Apr 17
1
Help needed with NFS issue
I have four NFS servers running on Dell hardware (PE2900) under CentOS 5.7, x86_64. The number of NFS clients is about 170. A few days ago, one of the four, with no apparent changes, stopped responding to NFS requests for two minutes every half an hour (approx). Let's call this "the hang". It has been doing this for four days now. There are no log messages of any kind pertaining
2017 Jun 01
0
Who's using OpenStack Cinder & Gluster? [ Was Re: [Gluster-devel] Fwd: Re: GlusterFS removal from Openstack Cinder]
Joe, Agree with you on turning this around into something more positive. One aspect that would really help us decide on our next steps here is the actual number of deployments that will be affected by the removal of the gluster driver in Cinder. If you are running or aware of a deployment of OpenStack Cinder & Gluster, can you please respond on this thread or to me & Niels in private
2015 Jan 07
0
reboot - is there a timeout on filesystem flush?
On Tue, Jan 6, 2015 at 6:12 PM, Les Mikesell <> wrote: > I've had a few systems with a lot of RAM and very busy filesystems > come up with filesystem errors that took a manual 'fsck -y' after what > should have been a clean reboot. This is particularly annoying on > remote systems where I have to talk someone else through the recovery. > > Is there some time
2011 Oct 05
1
Performance tuning questions for mail server
Hi, I have a fedora15 x86_64 host with one fedora15 guest running amavis+spamassassin+postfix and performance is horrible. The host is a quad-core E13240 with 16GB and 3 1TB Seagate ST31000524NS and all partitions are ext4. I've allocated 4 processors and 8GB of RAM to this guest. I really hoped someone could help me identify areas in which performance can be improved at both the guest and
2012 Oct 01
3
Tuning - cache write (database)
Hi, First, sorry if this isn't the place to get this kind of help... If not, I'd appreciate some link or forum where I can try to get some answers... My problem: * Using btrfs + compression, a flush of 60 MB/s takes 4 minutes... (during these 4 minutes there is constant I/O of about 4 MB/s on the disks) (flush from an Informix database) The environment: * Virtualized environment * OpenSuse 12.1 64bits,
2017 Dec 18
0
Gluster consulting
Thanks for the replies Joe. Yes, it does seem that Gluster is a very in-demand expertise. And it's hard to justify the cost of Red Hat's commercial offering without first putting a POC in place to confirm viability. Thanks again, HB On Mon, Dec 18, 2017 at 12:08 PM, Joe Julian <joe at julianfamily.org> wrote: > Yeah, unfortunately that's all that have come forward as
2016 Apr 13
3
Bug#820862: xen-hypervisor-4.4-amd64: Xen VM on Jessie freezes often with INFO: task jbd2/xvda2-8:111 blocked for more than 120 seconds
Package: xen-hypervisor-4.4-amd64 Version: 4.4.1-9+deb8u4 Severity: grave Justification: renders package unusable Dear Maintainer, * What led up to the situation? Running Backup Exec or a copy command to an NFS share causes the VM regularly to freeze. First message on VM console: ---
2008 Nov 21
2
Bug#506407: /etc/init.d/xend doesn't terminate all processes
Package: xen-utils-3.2-1 Version: 3.2.1-2 Severity: normal Hi, # ps awux | grep /usr/lib/xen <nothing> # /etc/init.d/xend start Starting XEN control daemon: xend. # /etc/init.d/xend stop Stopping XEN control daemon: xend. # ps awux | grep /usr/lib/xen root 29826 0.2 0.1 2160 848 ? S 09:37 0:00 /usr/lib/xen-3.2-1/bin/xenstored --pid-file /var/run/xenstore.pid root
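The report above shows xenstored surviving `/etc/init.d/xend stop`. A minimal cleanup sketch; the `/usr/lib/xen` path comes from the `ps` output in the report, but the loop itself is an illustration, not the official fix:

```shell
# Find any Xen helper processes left behind after "xend stop" and
# terminate them. The /usr/lib/xen path matches the ps output above;
# everything else here is an illustrative sketch.
leftover=$(pgrep -f '/usr/lib/xen' || true)
if [ -n "$leftover" ]; then
    echo "Stopping leftover Xen processes: $leftover"
    kill $leftover
else
    echo "No leftover Xen processes."
fi
```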
2015 Jan 07
5
reboot - is there a timeout on filesystem flush?
On Jan 6, 2015, at 4:28 PM, Fran Garcia <franchu.garcia at gmail.com> wrote: > > On Tue, Jan 6, 2015 at 6:12 PM, Les Mikesell <> wrote: >> I've had a few systems with a lot of RAM and very busy filesystems >> come up with filesystem errors that took a manual 'fsck -y' after what >> should have been a clean reboot. This is particularly annoying on
2009 Jan 27
1
paravirtualized vs HVM disk interference (85% vs 15%)
Hi, We have found that there is a huge degradation in performance when doing I/O to disk images contained in single files from a paravirtualized domain and from an HVM at the same time. The problem was found on a Xen box with Fedora 8 x86_64 binaries installed (Xen 3.1.0 + dom0 Linux 2.6.21). The test hardware was a rack-mounted server with two 2.66 GHz Xeon X5355 (4 cores each, 128 Kb L1
2016 Dec 07
0
vm.dirty_ratio
Dear, Please check whether the error log also contains an OOM kill. Xlord -----Original Message----- From: CentOS-virt [mailto:centos-virt-bounces at centos.org] On Behalf Of Gokan Atmaca Sent: Wednesday, December 7, 2016 9:16 PM To: Discussion about the virtualization on CentOS <centos-virt at centos.org> Subject: [CentOS-virt] vm.dirty_ratio Hello I get the following error on the server.
2016 Dec 07
1
vm.dirty_ratio
> Please check whether the error log also contains an OOM kill. It does not appear. On Wed, Dec 7, 2016 at 4:41 PM, -=X.L.O.R.D=- <xlord.sl at gmail.com> wrote: > Dear, > Please check whether the error log also contains an OOM kill. > > Xlord > > -----Original Message----- > From: CentOS-virt [mailto:centos-virt-bounces at centos.org] On Behalf Of Gokan > Atmaca >
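The excerpt ends before the error itself, but vm.dirty_ratio tuning usually starts from the current writeback thresholds; a minimal sketch, where the lowered values are illustrative and not a recommendation taken from the thread:

```shell
# Show the current writeback thresholds (Linux defaults are commonly
# vm.dirty_ratio=20 and vm.dirty_background_ratio=10).
sysctl vm.dirty_ratio vm.dirty_background_ratio

# Illustrative lower values so the kernel starts background flushing
# earlier and throttles writers sooner, smoothing out writeback bursts.
# Requires root; persist in /etc/sysctl.conf to survive a reboot.
sysctl -w vm.dirty_background_ratio=5
sysctl -w vm.dirty_ratio=10
```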
2013 Feb 27
4
GlusterFS performance
Hello! I have a GlusterFS installation with these parameters: - 4 servers, connected by a 1Gbit/s network (760-800 Mbit/s by iperf) - Distributed-replicated volume with 4 bricks and a 2x4 redundancy formula. - Replicated volume with 2 bricks and a 2x2 formula. I found some trouble: if I try to copy a huge number of files (94000 files, 3Gb size), this process takes a terribly long time (from 20 to 40 minutes). I
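The excerpt is cut off, but the classic mitigation for copying tens of thousands of small files onto a FUSE mount is to batch them into one stream instead of per-file operations; a sketch, demonstrated on throwaway local directories (on Gluster the destination would be the mounted volume, e.g. a hypothetical /mnt/glustervol):

```shell
# A tar pipe turns many open/write/close round-trips into one
# sequential stream, which helps most on network filesystems.
# Throwaway temp directories stand in for the source tree and the
# Gluster mount point here.
src=$(mktemp -d)
dst=$(mktemp -d)
printf 'hello' > "$src/a.txt"
printf 'world' > "$src/b.txt"
tar -C "$src" -cf - . | tar -C "$dst" -xf -
ls "$dst"
```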
2017 Jul 12
0
Gluster native mount is really slow compared to nfs
Hello, While there are probably other interesting parameters and options in gluster itself, for us the largest difference in this speed test, and also for our website (real-world performance), was the negative-timeout value during mount. This one option seems to solve so many problems; is there anyone knowledgeable about why this is the case? This would be a better default, I suppose ... I'm still
2017 Jul 11
1
Gluster native mount is really slow compared to nfs
Hello Vijay, What do you mean exactly? What info is missing? PS: I already found out that for this particular test all the difference is made by: negative-timeout=600; when removing it, it's much much slower again. Regards Jo -----Original message----- From: Vijay Bellur <vbellur at redhat.com> Sent: Tue 11-07-2017 18:16 Subject: Re: [Gluster-users] Gluster native mount is
2017 Jul 11
2
Gluster native mount is really slow compared to nfs
Hello Joe, I just did a mount like this (added the bold): mount -t glusterfs -o attribute-timeout=600,entry-timeout=600,negative-timeout=600,fopen-keep-cache,use-readdirp=no,log-level=WARNING,log-file=/var/log/glusterxxx.log 192.168.140.41:/www /var/www Results: root at app1:~/smallfile-master# ./smallfile_cli.py --top /var/www/test --host-set 192.168.140.41 --threads 8 --files 5000
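For readability, the quoted one-line mount can be reflowed; server, volume, and options are exactly those from the message above, with negative-timeout=600 (caching of negative lookups) being the option the thread identifies as decisive:

```shell
# Same mount as quoted above, split across lines for readability.
# mount(8) concatenates multiple -o flags into one option string.
mount -t glusterfs \
  -o attribute-timeout=600,entry-timeout=600,negative-timeout=600 \
  -o fopen-keep-cache,use-readdirp=no \
  -o log-level=WARNING,log-file=/var/log/glusterxxx.log \
  192.168.140.41:/www /var/www
```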
2015 Jan 06
2
reboot - is there a timeout on filesystem flush?
I've had a few systems with a lot of RAM and very busy filesystems come up with filesystem errors that took a manual 'fsck -y' after what should have been a clean reboot. This is particularly annoying on remote systems where I have to talk someone else through the recovery. Is there some time limit on the cache write with a 'reboot' (no options) command or is ext4 that
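A clean `reboot` is supposed to flush everything, but the pending writeback can be inspected and forced out by hand before rebooting; a minimal sketch:

```shell
# How much dirty (not-yet-written) page cache is pending right now.
grep -E '^(Dirty|Writeback):' /proc/meminfo

# sync(1) does not return until the data has been handed off to the
# devices, so running it before reboot takes the flush out of the
# shutdown path's time budget.
sync
grep '^Dirty:' /proc/meminfo
```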
2007 Sep 13
3
3Ware 9550SX and latency/system responsiveness
Dear list, I thought I'd just share my experiences with this 3Ware card, and see if anyone might have any suggestions. System: Supermicro H8DA8 with 2 x Opteron 250 2.4GHz and 4GB RAM installed. 9550SX-8LP hosting 4x Seagate ST3250820SV 250GB in a RAID 1 plus 2 hot spare config. The array is properly initialized, write cache is on, as is queueing (and supported by the drives). StoreSave