similar to: Performance optimization tips Gluster 3.3? (small files / directory listings)

Displaying 20 results from an estimated 13000 matches similar to: "Performance optimization tips Gluster 3.3? (small files / directory listings)"

2012 Jun 23
4
Can't run KVM Virtual Machines on a Gluster volume
I just built a 2-node (4-brick) Distributed-Replicated volume and everything mounts fine. Each node mounts using the GlusterFS client on its own hostname (mount -t glusterfs hostname:VOLUME /virtual-machines). When creating a new virtual machine using virt-manager, it creates the file on the storage, but when trying to power it on, it doesn't work and gives back an error message. (See below. Yes, the folder has
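A minimal sketch of the mount described above, assuming the native FUSE client and placeholder host/volume names; the permission check is only a guess at the usual first step when virt-manager can create an image but not start the VM:

    # Mount the Gluster volume on each node via the native FUSE client
    mount -t glusterfs hostname:/VOLUME /virtual-machines
    # qemu/libvirt often runs as an unprivileged user, so check ownership
    # and permissions on the image directory before suspecting Gluster itself
    ls -ld /virtual-machines
    df -hT /virtual-machines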
2012 Jun 11
1
"mismatching layouts" flooding in the logs
The following is being appended to the Gluster logs at around 100 kB per second, on all 10 Gluster servers: [2012-06-11 15:08:15.729429] I [dht-layout.c:682:dht_layout_dir_mismatch] 0-sites-dht: subvol: sites-client-41; inode layout - 966367638 - 1002159031; disk layout - 930576244 - 966367637 [2012-06-11 15:08:15.729465] I [dht-common.c:525:dht_revalidate_cbk] 0-sites-dht: mismatching layouts
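These messages point at a mismatch between the in-memory and on-disk DHT directory layouts. A hedged sketch of the commonly suggested remedy, assuming the volume is called sites (inferred from the 0-sites-dht prefix) and that a layout fix is actually what is needed here:

    # Recalculate directory layouts without migrating any data
    gluster volume rebalance sites fix-layout start
    # Watch progress
    gluster volume rebalance sites status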
2012 Nov 02
8
Very slow directory listing and high CPU usage on replicated volume
Hi all, I am having problems with painfully slow directory listings on a freshly created replicated volume. The configuration is as follows: 2 nodes with 3 replicated drives each. The total volume capacity is 5.6 TB. We would like to expand the storage capacity much more, but first we need to figure this problem out. Soon after loading up about 100 MB of small files (about 300 kB each), the
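Slow listings of many small files on replicated volumes are usually attacked through volume options first. A hedged sketch with a placeholder volume name VOLNAME and illustrative values, not settings taken from the thread:

    # Give the client-side cache and brick I/O threads more room
    gluster volume set VOLNAME performance.cache-size 256MB
    gluster volume set VOLNAME performance.io-thread-count 32
    # Keep stat prefetch on so directory listings can reuse cached metadata
    gluster volume set VOLNAME performance.stat-prefetch on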
2012 Jun 14
4
RAID options for Gluster
I think this discussion has probably come up here already, but I couldn't find much in the archives. Would you be able to comment on or correct whatever might look wrong? What options do people think are most adequate to use with Gluster in terms of the RAID underneath, and a good balance between cost, usable space and performance? I have thought about two main options with their pros and cons: No RAID (individual
2012 Feb 07
1
Recommendations for busy static web server replacement
Hi all, after being a silent reader for some time and not being very successful in getting good performance out of our test set-up, I'm finally coming to the list with questions. Right now, we are operating a web server serving out 4 MB files for a distributed computing project. Data is requested from all over the world at a rate of about 650k to 800k downloads a day. Each data file is usually
2008 Jun 11
1
software raid performance
Are there known performance issues with using GlusterFS on software RAID? I've been playing with a variety of configs (AFR, AFR with Unify) on a two-server setup. Everything seems to work well, but performance (creating files, reading files, appending to files) is very slow. Using the same configs on two non-software-RAID machines shows significant performance increases. Before I go a
2017 Aug 20
2
Glusterd not working with systemd in redhat 7
Hi! I am having the same issue, but I am running Ubuntu 16.04. It does not mount during boot, but works if I mount it manually. I am running the Gluster server on the same machines (3 machines). Here is the /etc/fstab file: /dev/sdb1 /data/gluster ext4 defaults 0 0 web1.dasilva.network:/www /mnt/glusterfs/www glusterfs defaults,_netdev,log-level=debug,log-file=/var/log/gluster.log 0 0
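When glusterd runs on the same host that mounts the volume, the fstab mount often races the daemon at boot. A hedged sketch of a commonly used systemd workaround, reusing the hostname and paths from the entry above; whether this matches the poster's failure is an assumption:

    # /etc/fstab: defer the Gluster mount until first access so glusterd is up
    web1.dasilva.network:/www  /mnt/glusterfs/www  glusterfs  defaults,_netdev,noauto,x-systemd.automount  0 0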
2012 Aug 13
1
Problem with too many small files
I am not sure how it works in Gluster, but to mitigate the problem of listing a lot of small files, wouldn't it be suitable to keep a copy of the directory tree on every node? I think Isilon does that, and there is probably a lot to be learned from them, as it seems to be quite a mature technology. It could also have another interesting thing added in the future: a local SSD to keep the file system metadata
2017 Aug 21
0
Glusterd not working with systemd in redhat 7
On Mon, Aug 21, 2017 at 2:49 AM, Cesar da Silva <thunderlight1 at gmail.com> wrote: > Hi! > I am having the same issue but I am running Ubuntu 16.04. > It does not mount during boot, but works if I mount it manually. I am > running the Gluster-server on the same machines (3 machines) > Here is the /etc/fstab file > > /dev/sdb1 /data/gluster ext4 defaults 0 0 > >
2012 Mar 10
1
High CPU Usage After Glusterfs install
Hi guys, I have 2 servers with a fresh install of GlusterFS and I am seeing a very high CPU load. I am trying to do just a very basic config to get started and, for the life of me, I don't know what could be causing it. The CPU goes up to 100% across all 4 CPUs on each Gluster node and I am seeing timeouts coming from the VMs that I am testing with. I simply copied the
2017 Aug 21
1
Glusterd not working with systemd in redhat 7
Hi! Please see below. Note that web1.dasilva.network is the address of the local machine where one of the bricks is installed and that tries to mount. [2017-08-20 20:30:40.359236] I [MSGID: 100030] [glusterfsd.c:2476:main] 0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version 3.11.2 (args: /usr/sbin/glusterd -p /var/run/glusterd.pid) [2017-08-20 20:30:40.973249] I [MSGID: 106478]
2013 Feb 27
4
GlusterFS performance
Hello! I have a GlusterFS installation with the following parameters: - 4 servers, connected by a 1 Gbit/s network (760-800 Mbit/s by iperf) - Distributed-replicated volume with 4 bricks and a 2x4 redundancy formula. - Replicated volume with 2 bricks and a 2x2 formula. I have found a problem: if I try to copy a huge number of files (94,000 files, 3 GB in size), the process takes a terribly long time (from 20 to 40 minutes). I
2012 Nov 14
2
Avoid Split-brain and other stuff
Hi! I just gave GlusterFS a try and experienced two problems. First, some background: - I want to set up a file server with synchronous replication between branch offices, similar to Windows DFS-Replication. The goal is _not_ high availability or cluster scale-out, but just having all files locally available at each branch office. - To test GlusterFS, I installed two virtual machines
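For a two-node replica, client-side quorum is the usual split-brain guard, though with only two bricks it effectively freezes writes when one side is down, which may conflict with the branch-office goal above. A hedged sketch with a placeholder volume name:

    # Require a majority of replica bricks to be reachable before allowing writes
    gluster volume set VOLNAME cluster.quorum-type auto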
2012 May 04
1
'Transport endpoint not connected'
This should be a pretty easy issue to reproduce; at least it seems to happen to me very often (gluster 3.2.5). After the storage backend(s) have been rebooted, the client mounts are often broken until you unmount and remount. Example from this morning: I had rebooted the storage servers to upgrade them to Ubuntu 12.04. Now at the client side: $ ls /gluster/scratch ls: cannot access /gluster/scratch:
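A hedged sketch of the unmount/remount recovery the message describes, with a placeholder server and volume name; the lazy unmount is only needed if processes still hold the stale mount:

    # Release the stale FUSE mount, then remount from a reachable server
    umount -l /gluster/scratch
    mount -t glusterfs storage1:/scratch /gluster/scratch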
2012 Mar 02
2
Write performance in a replicated/distributed setup with KVM?
This has probably been discussed before, but since I'm new on the list I hope you have patience with me. I have a four-brick distributed/replicated setup. The computers are multi-core with 16 GB memory and 2 x 2.0 TB SATA disks in RAID1 locally. The nodes are connected by 1 Gb Ethernet. All nodes have GlusterFS 3.3beta2 installed and they are running Debian 6 64-bit. The underlying filesystems are
2012 Dec 11
4
Gluster machines slowing down over time
I have 2 Gluster servers in replicated mode on EC2 with ~4 GB RAM. CPU and RAM look fine, but over time the system becomes sluggish, particularly networking. I notice that sshing into the machine takes ages and running remote commands with Capistrano takes longer and longer. Any kernel settings people typically use? Thanks, Tom
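The kernel-settings question is left open in the excerpt; the following is only an illustrative, hedged sketch of the sort of sysctl tuning sometimes tried on small EC2 Gluster nodes, with placeholder values rather than recommendations from the thread:

    # /etc/sysctl.d/90-gluster.conf (illustrative values only)
    vm.swappiness = 10            # avoid swapping the gluster daemons on a ~4 GB host
    net.core.rmem_max = 4194304   # allow larger TCP receive buffers
    net.core.wmem_max = 4194304   # allow larger TCP send buffers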
2012 Oct 18
1
GlusterFS failover with UCarp
Hi, we've successfully configured GlusterFS mirroring across two identical nodes [1]. We're running the file share under a virtual IP address using UCarp. We have different clients connected using NFS, CIFS and GlusterFS. When we simulate a node failure by unplugging it, it takes about 5 seconds for the CIFS and GlusterFS clients to refresh the connection and continue operation. The
2012 Nov 14
1
Howto find out volume topology
Hello, I would like to find out the topology of an existing volume. For example, if I have a distributed replicated volume, which bricks are the replication partners? Fred
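A hedged sketch of how the topology is usually read off, assuming a placeholder volume name VOLNAME: in a distributed-replicated volume, the bricks listed by volume info form replica sets in order, replica-count bricks at a time.

    gluster volume info VOLNAME
    # With "Type: Distributed-Replicate" and "Number of Bricks: 2 x 2 = 4",
    # Brick1+Brick2 form one replica pair and Brick3+Brick4 the other.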
2012 Dec 03
1
"gluster peer status" messed up
I have three machines, all Ubuntu 12.04 running gluster 3.3.1. storage1 192.168.6.70 on 10G, 192.168.5.70 on 1G storage2 192.168.6.71 on 10G, 192.168.5.71 on 1G storage3 192.168.6.72 on 10G, 192.168.5.72 on 1G Each machine has two NICs, but on each host, /etc/hosts lists the 10G interface on all machines. storage1 and storage3 were taken away for hardware changes, which included
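A hedged sketch of the kind of check this situation calls for; the hostnames and addresses come from the message, but the exact recovery depends on what the peer records now contain:

    # On each node, compare what the cluster believes about its peers
    gluster peer status
    # Peer records live under /var/lib/glusterd/peers/<UUID>; the hostnames
    # stored there should resolve to the intended 10G addresses via /etc/hosts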
2012 Jun 01
3
Striped replicated volumes in Gluster 3.3.0
Hi all, I'm very happy to see the release of 3.3.0. One of the features I was waiting for is striped replicated volumes. We plan to store KVM images (from an OpenStack installation) on it. I read through the docs and found the following phrase: "In this release, configuration of this volume type is supported only for Map Reduce workloads." What does that mean exactly? Hopefully not,
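For reference, a hedged sketch of how a striped replicated volume is created in 3.3; server names and brick paths are placeholders, and whether this layout suits KVM images is exactly the question the message raises:

    # 4 bricks combined as stripe 2 x replica 2
    gluster volume create VOLNAME stripe 2 replica 2 transport tcp \
        server1:/export/brick1 server2:/export/brick1 \
        server3:/export/brick1 server4:/export/brick1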