similar to: performance

Displaying 20 results from an estimated 100000 matches similar to: "performance"

2006 Jan 15
0
Samba Migration
Hi all, I have a Samba system running on a standard purchased SuSE Linux 9.3 box. I want to transfer the data to a new Samba installation running on a standard purchased SuSE Linux 10.0 box, on an x86_64 cluster platform. The new configuration is as follows: I have one central SuSE Linux 10.0 server cluster for all data. As an NFS server, it exports several shares to a number of NFS clients. (The
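A minimal sketch of how such a data move could be done, assuming the old share is at /srv/samba/old and the new cluster share at /srv/samba/new (both paths are hypothetical, not from the post); rsync with archive, ACL and xattr flags preserves the metadata Samba cares about:

# Hypothetical paths; -aAX keeps permissions, ACLs and extended attributes,
# which Samba uses for DOS attributes and Windows ACLs.
rsync -aAX --numeric-ids /srv/samba/old/ /srv/samba/new/
# Re-run once more after stopping smbd on the old box to catch late changes.
rsync -aAX --numeric-ids --delete /srv/samba/old/ /srv/samba/new/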
2017 Aug 09
1
Gluster performance with VM's
Hi community, please help me with my problem. I have 2 Gluster nodes, with 2 bricks on each. Configuration: Node1 brick1 replicated on Node0 brick0, Node0 brick1 replicated on Node1 brick0. Volume Name: gm0 Type: Distributed-Replicate Volume ID: 5e55f511-8a50-46e4-aa2f-5d4f73c859cf Status: Started Snapshot Count: 0 Number of Bricks: 2 x 2 = 4 Transport-type: tcp Bricks: Brick1:
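For reference, a 2 x 2 distributed-replicate volume laid out like the one described is typically created along these lines (hostnames and brick paths below are assumptions, not taken from the post):

# With replica 2, consecutive bricks form a replica pair, so brick1 on each
# node mirrors brick0 on the other node, matching the layout above.
gluster volume create gm0 replica 2 \
    node0:/data/brick0 node1:/data/brick1 \
    node1:/data/brick0 node0:/data/brick1
gluster volume start gm0
gluster volume info gm0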
2017 Nov 27
1
ls performance on directories with small number of items
Also note, Sam's example is comparing apples and orchards. Feeding one person from an orchard is not as efficient as feeding one person an apple, but if you're feeding 10000 people... Also in question with the NFS example: how long until that chown was flushed? How long until another client could see those changes? And that ignores the biggie: what happens when the NFS server goes down?
2012 Jul 30
0
ocfs2 read only and unable to access data
2017 Nov 27
0
ls performance on directories with small number of items
Hi Aaron, We also find that Gluster is perhaps not the most performant when performing actions on directories containing large numbers of files. For example, with a single NFS server a recursive chown on (many!) files took about 18 seconds from the client side, while our simple two-replica Gluster setup took over 15 minutes. Having said that, while I'm new to the gluster world, things seem to be
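A trivial way to reproduce that kind of comparison, assuming the same tree is reachable via both an NFS mount and a Gluster FUSE mount (mount points and owner below are hypothetical):

# Hypothetical mount points; run the same recursive chown on each and compare.
time chown -R user:group /mnt/nfs/shared/project
time chown -R user:group /mnt/gluster/shared/project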
2017 Sep 21
2
Performance drop from 3.8 to 3.10
Upgraded recently from 3.8.15 to 3.10.5 and have seen a fairly substantial drop in read/write performance. env: - 3 node, replica 3 cluster - Private dedicated network: 1G x3, bond: balance-alb - was able to down the volume for the upgrade and reboot each node - Usage: VM hosting (qemu) - Sharded volume - sequential read performance in VMs has dropped from 700 Mbps to 300 Mbps - Seq Write
2019 Dec 27
0
GFS performance under heavy traffic
Hi David, Gluster supports live rolling upgrades, so there is no need to redeploy at all - but the migration notes should be checked, as some features must be disabled first. Also, the gluster clients should remount in order to bump the gluster op-version. What kind of workload do you have? I'm asking as there are predefined (and recommended) settings located at /var/lib/gluster/groups. You
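As a hedged sketch, applying one of those predefined option groups and bumping the op-version usually looks like this (the volume name and the op-version number are placeholders, not values from the post):

# Apply the predefined "virt" option group to a volume (placeholder name).
gluster volume set myvol group virt
# Once all nodes and clients run the new version, bump the cluster op-version
# (placeholder number; use the value matching your installed release).
gluster volume set all cluster.op-version 70200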
2017 Sep 22
0
Performance drop from 3.8 to 3.10
Could you disable cluster.eager-lock and try again? -Krutika On Thu, Sep 21, 2017 at 6:31 PM, Lindsay Mathieson <lindsay.mathieson at gmail.com> wrote: > Upgraded recently from 3.8.15 to 3.10.5 and have seen a fairly substantial > drop in read/write performance > > env: > > - 3 node, replica 3 cluster > > - Private dedicated Network: 1Gx3, bond: balance-alb >
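For anyone following along, turning that option off is a single volume-set call (the volume name below is a placeholder), and it can be reverted the same way:

# Placeholder volume name; check the current value first, then disable.
gluster volume get myvol cluster.eager-lock
gluster volume set myvol cluster.eager-lock off
# Revert if it makes no difference:
gluster volume set myvol cluster.eager-lock on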
2017 Jul 25
0
[Questions] About small files performance
Dear all, Recently I did some work to test small-file performance for the gNFSv3 transport. Following is my scenario. #####environment##### ==2 cluster nodes (nodeA/nodeB)== each is equipped with E5-2650*2, 128G memory and 10GB*2 netcards nodeA: 10.254.3.77 10.128.3.77 nodeB: 10.254.3.78 10.128.3.78 ==2 stress nodes (clientA/clientB)== each is equipped with E5-2650*2, 128G memory and 10GB*2
2007 Aug 02
3
Dovecot strong or not for a big Webmail architecture
Hello, Do you think Dovecot could handle millions of active users in a big architecture? This cluster could be, for example (each server is a dual quad-core Xeon 2.66 GHz): - 40 Dovecot servers - 4 LVS - 20 Apache+PHP - 2 OpenLDAP - 20 Postfix + ClamAV + SpamAssassin - 1 NetApp NFS filer For millions of users it could be multiple different clusters of 40 Dovecot servers with 1 NetApp for each cluster. I
2018 Mar 19
0
Gluster very poor performance when copying small files (1x (2+1) = 3, SSD)
On 3/19/2018 5:42 AM, Ondrej Valousek wrote: Removing NFS or NFS-Ganesha from the equation, I'm not very impressed with my own setup either. For the writes it's doing, that's a lot of CPU usage in top. It seems bottlenecked on a single execution core somewhere while servicing reads/writes to the other bricks. Writes to the gluster FS from within one of the participating gluster bricks:
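One hedged way to confirm a single-core bottleneck like that is to look at per-thread CPU usage of the brick process (the process name glusterfsd is an assumption about the setup):

# Show per-thread CPU for the brick daemon; one thread pinned near 100%
# while the others sit idle suggests a single-core bottleneck.
top -H -p "$(pidof glusterfsd | tr ' ' ',')"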
2011 Mar 07
2
connection speeds between nodes
Hi All, I've been asked to set up a 3D render farm at our office. At the start it will contain about 8 nodes, but it should be built for growth. The setup I had in mind is as follows: all the data is already stored on a StorNext SAN filesystem (Quantum). This should be mounted on a CentOS server through fibre optics, which in turn shares the FS over NFS to all the render nodes
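A minimal sketch of that layout, with placeholder paths, subnet and hostname, would be an export on the CentOS head node plus a matching mount on each render node:

# On the head node (placeholder path/subnet): /etc/exports
/stornext/projects 192.168.10.0/24(rw,async,no_subtree_check)
# Reload exports and verify the effective options:
exportfs -ra
exportfs -v
# On each render node: /etc/fstab entry (placeholder hostname)
headnode:/stornext/projects  /mnt/projects  nfs  defaults,noatime  0 0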
2003 Nov 12
1
samba (vs. nfs) in all unix environment
Hi, I'm sorry if this is a very frequently asked question; I've been googling around and searching the list archive, and I'll gladly accept RTFMs with reasonably precise URLs (including URLs to the list archives). I'm at the drawing board (no equipment yet) for a server farm that will have a SteelEye Linux cluster behind it to provide (among other services) networked file access. The setup is
2018 Jan 07
0
performance.readdir-ahead on volume folders not showing with ls command 3.13.1-1.el7
With performance.readdir-ahead set to on on the volume, folders on mounts become invisible to the ls command, but it shows files fine. It shows folders fine with ls on the bricks. What am I missing? Maybe some settings are incompatible; I guess over-tuning happened. vm1:/t1 /home/t1 glusterfs
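A quick, hedged way to confirm whether that option is the culprit, using the volume and mount point from the fstab line above (vm1:/t1 on /home/t1):

# Check the current value, turn it off, then remount the client and retry ls.
gluster volume get t1 performance.readdir-ahead
gluster volume set t1 performance.readdir-ahead off
umount /home/t1 && mount -t glusterfs vm1:/t1 /home/t1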
2018 Mar 19
0
Gluster very poor performance when copying small files (1x (2+1) = 3, SSD)
Hi, I've done some similar tests and experienced similar performance issues (see my 'gluster for home directories?' thread on the list). If I read your mail correctly, you are comparing an NFS mount of the brick disk against a gluster mount (using the FUSE client)? Which options do you have set on the NFS export (sync or async)? From my tests, I concluded that the issue was not
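To answer the sync/async question for an existing export, assuming a standard Linux NFS server is in play:

# On the NFS server: list each export with its effective options
# (sync or async appears in the option list).
exportfs -v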
2018 Mar 18
0
Gluster very poor performance when copying small files (1x (2+1) = 3, SSD)
On 3/18/2018 6:13 PM, Sam McLeod wrote: Even your NFS transfers are 12.5 or so MB per second or less. 1) Did you use fdisk and LVM under that XFS filesystem? 2) Did you benchmark the XFS with something like bonnie++? (There are probably newer benchmark suites now.) 3) Did you benchmark your network transfer speeds? Perhaps your NIC negotiated a lower speed. 4) I've done XFS tuning
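Hedged examples of the checks suggested above; the mount point, interface name, peer hostname and size are assumptions, not details from the thread:

# 2) Disk benchmark on the XFS mount (placeholder path/user; the size, in MiB,
#    should be roughly twice RAM so the page cache doesn't hide the disk).
bonnie++ -d /bricks/brick1/bench -u root -s 16384
# 3) Check what speed the NIC actually negotiated (placeholder interface),
#    then measure raw node-to-node throughput with iperf3.
ethtool eth0 | grep -i speed
iperf3 -s                # on one node
iperf3 -c node1 -t 30    # on the other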
2013 Oct 06
0
Options to turn off/on for reliable virtual machine writes & write performance
In a replicated cluster, the client writes to all replicas at the same time. This is likely why you are only getting half the speed for writes: the data goes to two servers and therefore maxes out your gigabit network. That is, unless I am misunderstanding how you are measuring the 60MB/s write speed. I don't have any advice on the other bits... sorry. Todd -----Original Message----- From:
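The arithmetic behind that, as a rough back-of-the-envelope check:

1 Gbit/s is about 125 MB/s of raw bandwidth. With replica 2, each client write
is sent to both servers over the same link, so usable client throughput is at
most about 125 / 2, roughly 62 MB/s, which is consistent with the observed
~60 MB/s.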
2019 Dec 28
1
GFS performance under heavy traffic
Hi David, It seems that I have misread your quorum options, so just ignore that part of my previous e-mail. Best Regards, Strahil Nikolov On Dec 27, 2019 15:38, Strahil <hunter86_bg at yahoo.com> wrote: > > Hi David, > > Gluster supports live rolling upgrades, so there is no need to redeploy at all - but the migration notes should be checked, as some features must be disabled first.
2017 Sep 06
0
Slow performance of gluster volume
Do you see any improvement with 3.11.1, as that has a patch that improves performance for this kind of workload? Also, could you disable eager-lock and check if that helps? I see that most of the time is being spent acquiring locks. -Krutika On Wed, Sep 6, 2017 at 1:38 PM, Abi Askushi <rightkicktech at gmail.com> wrote: > Hi Krutika, > > Is there anything in the profile indicating what is
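For context, the kind of profile data being discussed is gathered like this (the volume name is a placeholder):

# Start profiling, run the workload, then dump per-FOP latency stats;
# high latencies on lock FOPs such as INODELK/ENTRYLK point at lock contention.
gluster volume profile myvol start
gluster volume profile myvol info
gluster volume profile myvol stop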
2018 Mar 13
0
Expected performance for WORM scenario
Well, it might be close to the _synchronous_ NFS, but it is still well behind the asynchronous NFS performance. A simple script (a bit extreme, I know, but it helps to draw the picture):

#!/bin/csh
set HOSTNAME=`/bin/hostname`
set j=1
while ($j <= 7000)
  echo ahoj > test.$HOSTNAME.$j
  @ j++
end
rm -rf test.$HOSTNAME.*

Takes 9 seconds to execute on the NFS share, but 90 seconds on