similar to: gluster across internet

Displaying 20 results from an estimated 80000 matches similar to: "gluster across internet"

2012 Dec 11
4
Gluster machines slowing down over time
I have 2 gluster servers in replicated mode on EC2 with ~4G RAM. CPU and RAM look fine, but over time the system becomes sluggish, particularly networking. I notice that sshing into the machine takes ages and running remote commands with capistrano takes longer and longer. Any kernel settings people typically use? Thanks, Tom
2011 Jul 25
1
Problem with Gluster Geo Replication, status faulty
Hi, I've set up Gluster Geo Replication according to the manual:
# sudo gluster volume geo-replication flvol ssh://root at ec2-67-202-22-159.compute-1.amazonaws.com:file:///mnt/slave config log-level DEBUG
# sudo gluster volume geo-replication flvol ssh://root at ec2-67-202-22-159.compute-1.amazonaws.com:file:///mnt/slave start
# sudo gluster volume geo-replication flvol ssh://root at
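
When a session like this ends up faulty, the usual next step is to query its status and log location for the same master/slave pair; a sketch reusing the volume name and slave URL from the post (the obfuscated address is written with a literal @ here):

    gluster volume geo-replication flvol ssh://root@ec2-67-202-22-159.compute-1.amazonaws.com:file:///mnt/slave status
    gluster volume geo-replication flvol ssh://root@ec2-67-202-22-159.compute-1.amazonaws.com:file:///mnt/slave config log-file
    # tail the file printed by the second command to see why the session is marked faulty
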
2012 Feb 18
1
Re: iscsi with gluster i want to make it
Hi Viraj, Gluster Ver. 3.x supports replication over WAN, but it is currently very limited. I assume it will expand as time moves on. As for iSCSI, I doubt GlusterFS will ever support it directly: iSCSI operates at the block level, and Gluster only works at the filesystem level. The only way to have iSCSI on Gluster would be to export an iSCSI target that is a file on gluster. As for snapshot and
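
As a rough illustration of that last suggestion (not spelled out in the thread itself), a file living on a Gluster mount can be exported as an iSCSI LUN with targetcli's fileio backstore; the mount point, file name, size and IQN below are made up for the example:

    # back a LUN with a file on the gluster mount, then expose it via a target
    targetcli /backstores/fileio create name=gvol-lun0 file_or_dev=/mnt/gluster/iscsi/lun0.img size=100G
    targetcli /iscsi create iqn.2012-02.net.example:gvol-target
    targetcli /iscsi/iqn.2012-02.net.example:gvol-target/tpg1/luns create /backstores/fileio/gvol-lun0
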
2011 Jan 19
2
tuning gluster performance vs. nfs
I have been tweaking and researching for a while now and can't seem to get "good" performance out of Gluster. I'm using Gluster to replace an NFS server (c1.xlarge) that serves files to an array of web servers, all in EC2. In my tests Gluster is significantly slower than NFS on average. I'm using a distributed replicated volume on two (m1.large) bricks: Volume Name: ebs
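
For reference, a replica-2 volume across two EC2 bricks like the one being tested would typically be created along these lines; the hostnames and brick paths are placeholders, not the poster's actual layout:

    gluster volume create ebs replica 2 transport tcp server1:/export/ebs-brick server2:/export/ebs-brick
    gluster volume start ebs
    gluster volume info ebs   # prints the volume details that the post's output truncates
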
2017 Sep 30
0
Gluster high inode usage on root EC2 volume
Hello, I have one Gluster server in a cluster of four that serves a single volume. Three out of the four servers have fine inode usage - hovering around 11-12% - but one server is using every inode available, causing us to get no space left on the disk. I've listed out the files; there were a ton being used in /var/lib/misc/glusterfsd/farmcommand/... related to CHANGELOGs, which I assumed was
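
A hedged sketch of how one would typically confirm inode exhaustion and find the directories responsible (the changelog directory is the one named in the post; everything else is generic):

    df -i                                              # compare IUse% across the four servers
    du --inodes -x / 2>/dev/null | sort -n | tail -20  # directories holding the most inodes (GNU coreutils 8.22+)
    ls /var/lib/misc/glusterfsd/farmcommand | wc -l    # count the CHANGELOG files mentioned above
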
2010 Aug 11
2
glusterfs on 32 bit - experiences?
I was wondering about general stability of glusterfs on 32 bit x86 Linux. I have it running without problems on some lightly used 32 bit systems, but this scares me a bit if I decided to use it in production[1]: While the 3.x versions of Gluster will compile on 32bit systems we do not QA or test on 32-bit systems. We strongly suggest you do NOT run Gluster in a 32-bit environment. I was
2012 Aug 02
0
Pre-populating volumes
Hey guys, I'm a server admin working on some high-availability web server setups with Amazon's EC2. We're running Gluster to keep our assets sync'd up between regions globally using replicated volumes with one brick in each geographic region. Each brick is a single EBS drive, so it's easy to snapshot and create a new volume from the snapshot, taking maybe 30 seconds total for
2018 Feb 05
0
Very slow rsync to gluster volume UNLESS `ls` or `find` scan dir on gluster volume first
Are you mounting it to the local bricks? I'm struggling with the same performance issues; try using this volume setting: http://lists.gluster.org/pipermail/gluster-users/2018-January/033397.html performance.stat-prefetch: on might be it. It seems like once it gets into the cache it is fast - those stat fetches, which seem to come from .gluster, are slow. On Sun, Feb 4, 2018 at 3:45 AM, Artem Russakovskii <archon810 at
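
For anyone wanting to try it, the setting referenced there is applied per volume; a minimal sketch with a placeholder volume name:

    gluster volume set myvol performance.stat-prefetch on
    gluster volume get myvol performance.stat-prefetch   # confirm the option took effect
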
2018 Feb 27
2
Very slow rsync to gluster volume UNLESS `ls` or `find` scan dir on gluster volume first
Any updates on this one? On Mon, Feb 5, 2018 at 8:18 AM, Tom Fite <tomfite at gmail.com> wrote: > Hi all, > > I have seen this issue as well, on Gluster 3.12.1. (3 bricks per box, 2 > boxes, distributed-replicate) My testing shows the same thing -- running a > find on a directory dramatically increases lstat performance. To add > another clue, the performance degrades
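
The workaround under discussion amounts to warming the metadata cache with a scan before the rsync; a minimal sketch with hypothetical paths:

    find /mnt/glustervol/uploads -type f > /dev/null   # prime the lstat/metadata caches first
    rsync -a /srcdir/uploads/ /mnt/glustervol/uploads/
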
2018 Mar 26
0
rhev/gluster questions , possibly restoring from a failed node
In my lab, one of my RAID cards started acting up and took one of my three gluster nodes offline (two nodes with data and an arbiter node). I'm hoping it's simply the backplane, but during the time spent troubleshooting and waiting for parts, the hypervisors were fenced. Since the firewall was replaced and now several VMs are not starting correctly, fsck, scandisk and xfs_repair on the
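
Before restoring anything, the usual first checks once the node is back are peer and heal status on the surviving nodes; a sketch with a placeholder volume name:

    gluster peer status
    gluster volume heal datavol info               # entries still pending heal
    gluster volume heal datavol info split-brain   # anything needing manual resolution
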
2018 Apr 18
1
Very slow rsync to gluster volume UNLESS `ls` or `find` scan dir on gluster volume first
Nithya, Amar, Any movement here? There could be a significant performance gain here that may also affect other bottlenecks that I'm experiencing which make gluster close to unusable at times. Sincerely, Artem -- Founder, Android Police <http://www.androidpolice.com>, APK Mirror <http://www.apkmirror.com/>, Illogical Robot LLC beerpla.net | +ArtemRussakovskii
2010 Apr 30
1
gluster-volgen - syntax for mirroring/distributing across 6 nodes
NOTE: posted this to gluster-devel when I meant to post it to gluster-users
01 | 02 mirrored --|
03 | 04 mirrored --| distributed
05 | 06 mirrored --|
1) Would this command work for that? glusterfs-volgen --name repstore1 --raid 1 clustr-01:/mnt/data01 clustr-02:/mnt/data01 --raid 1 clustr-03:/mnt/data01 clustr-04:/mnt/data01 --raid 1 clustr-05:/mnt/data01 clustr-06:/mnt/data01 So the
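
On later Gluster releases the same layout (three mirrored pairs, distributed across them) is expressed with the volume CLI rather than glusterfs-volgen; a sketch reusing the host names from the post, not a verdict on the volgen invocation itself:

    gluster volume create repstore1 replica 2 transport tcp \
        clustr-01:/mnt/data01 clustr-02:/mnt/data01 \
        clustr-03:/mnt/data01 clustr-04:/mnt/data01 \
        clustr-05:/mnt/data01 clustr-06:/mnt/data01
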
2018 Feb 04
2
Very slow rsync to gluster volume UNLESS `ls` or `find` scan dir on gluster volume first
An update, and a very interesting one! After I started stracing rsync, all I could see was lstat calls, quite slow ones, over and over, which is expected. For example: lstat("uploads/2016/10/nexus2cee_DSC05339_thumb-161x107.jpg", {st_mode=S_IFREG|0664, st_size=4043, ...}) = 0 I googled around and found https://gist.github.com/nh2/1836415489e2132cf85ed3832105fcc1, which is seeing this
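
For anyone reproducing the measurement, strace can also report how long each lstat takes; a sketch with placeholder rsync arguments:

    strace -f -T -e trace=lstat rsync -a /srcdir/uploads/ /mnt/glustervol/uploads/ 2>&1 | grep lstat
    # -T appends the time spent in each syscall, making the slow lstat calls obvious
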
2018 Jan 04
0
So how badly will Gluster be affected by the Intel 'fix'
On 01/04/2018 12:58 PM, WK wrote: > I'm reading that the new kernel will slow down context switches. That is > of course a big deal with FUSE mounts. > > Has anybody installed the new kernels yet and observed any performance > degradation? We are in the process of testing the same out. Hopefully later next week we would be able to post the numbers that we observe. Other
2012 Dec 13
1
Rebalance may never finish, Gluster 3.2.6
Hi Guys, I have a rebalance that is going so slowly it may never end. Particulars on the system: 3 nodes, 6 bricks, ~55TB, about 10% full. The use of data is very active during the day and less so at night. All are CentOS 6.3, x86_64, Gluster 3.2.6 [root at node01 ~]# gluster volume rebalance data01 status rebalance step 2: data migration in progress: rebalanced 1378203 files of size 308570266988
2017 Jul 12
0
Gluster native mount is really slow compared to nfs
Hello, While there are probably other interesting parameters and options in gluster itself, for us the largest difference with this speedtest and also for our website (real world performance) was the negative-timeout value during mount. Only 1 seems to solve so many problems; is there anyone knowledgeable about why this is the case? This would better be the default, I suppose... I'm still
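
The option in question is a FUSE mount option, passed at mount time; a minimal sketch with placeholder server, volume and mount point:

    mount -t glusterfs -o negative-timeout=1 gluster01:/webvol /mnt/webvol
    # or persistently via /etc/fstab:
    # gluster01:/webvol  /mnt/webvol  glusterfs  defaults,negative-timeout=1  0 0
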
2017 Sep 13
1
[3.11.2] Bricks disconnect from gluster with 0-transport: EPOLLERR
I ran into something like this in 3.10.4 and filed two bugs for it: https://bugzilla.redhat.com/show_bug.cgi?id=1491059 https://bugzilla.redhat.com/show_bug.cgi?id=1491060 Please see the above bugs for full detail. In summary, my issue was related to glusterd's handling of pid files when it starts self-heal and bricks. The issues are: a. brick pid file leaves stale pid and brick fails
2017 Jun 30
0
Multi petabyte gluster
> Thanks for the reply. We will mainly use this for archival - near-cold storage.
Archival usage is good for EC.
> Anything, from your experience, to keep in mind while planning large installations?
I am using 3.7.11 and the only problem is slow rebuild time when a disk fails. It takes 8 days to heal an 8TB disk. (This might be related to my EC configuration, 16+4.) 3.9+ versions have some
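
For context, a 16+4 erasure-coded (disperse) volume of the kind described would be created roughly like this; the hostnames and brick paths are placeholders:

    gluster volume create ecvol disperse-data 16 redundancy 4 \
        server{1..20}:/bricks/brick1/ecvol
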
2018 Feb 27
0
Gluster performance / Dell Idrac enterprise conflict
What is your gluster setup? Please share the volume details for where the VMs are stored. It could be that the slow host is holding the arbiter volume. Alex On Feb 26, 2018 13:46, "Ryan Wilkinson" <ryanwilk at gmail.com> wrote: > Here is info. about the Raid controllers. Doesn't seem to be the culprit. > > Slow host: > Name PERC H710 Mini (Embedded) > Firmware Version
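
A quick way to check that suspicion (volume name is a placeholder): in the volume info output, arbiter setups show a brick count like "1 x (2 + 1) = 3", and on recent releases the arbiter brick itself is tagged in the brick list:

    gluster volume info vmvol | grep -E 'Number of Bricks|Brick[0-9]'
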
2018 Feb 16
0
Fwd: gluster performance
I am forwarding this for Ryan, @Ryan - did you join the gluster users mailing list yet? That may be why you are having issues sending messages. ----- Forwarded Message ----- From: "Ryan Wilkinson" <ryan at centriserve.net> To: Bturner at redhat.com Sent: Wednesday, February 14, 2018 4:46:10 PM Subject: gluster performance I have a 3 host gluster replicated cluster that is