
Displaying 20 results from an estimated 30000 matches similar to: "How reliable is XFS under Gluster?"

2013 Jul 15
4
GlusterFS 3.4.0 and 3.3.2 released!
Hi All, 3.4.0 and 3.3.2 releases of GlusterFS are now available. GlusterFS 3.4.0 can be downloaded from [1] and release notes are available at [2]. Upgrade instructions can be found at [3]. If you would like to propose bug fix candidates or minor features for inclusion in 3.4.1, please add them at [4]. 3.3.2 packages can be downloaded from [5]. A big note of thanks to everyone who helped in
2013 Jul 03
1
Recommended filesystem for GlusterFS bricks.
Hi, Which filesystem is recommended for the bricks in GlusterFS? XFS, EXT3, EXT4, etc.? Thanks & Regards, Bobby Jacob Senior Technical Systems Engineer | eGroup SAVE TREES. Please don't print this e-mail unless you really need to.
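The usual recommendation in the Gluster documentation of this period is XFS with a 512-byte inode size, so that Gluster's extended attributes fit inside the inode. A minimal brick-preparation sketch, assuming a spare partition /dev/sdb1 and a brick mount point /data/brick (both names are assumptions):

  # Format with 512-byte inodes so Gluster xattrs stay in-inode
  mkfs.xfs -i size=512 /dev/sdb1
  mkdir -p /data/brick
  echo '/dev/sdb1 /data/brick xfs defaults 0 0' >> /etc/fstab
  mount /data/brick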
2013 Aug 22
2
Error when creating volume
Hello, I've removed a volume and I can't re-create it:

gluster volume create gluster-export gluster-6:/export gluster-5:/export gluster-4:/export gluster-3:/export
/export or a prefix of it is already part of a volume

I've formatted the partition and reinstalled the 4 gluster servers and the error still appears. Any idea? Thanks.
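This error usually means the brick directory still carries the volume-id extended attributes from the old volume, which survive a plain directory removal. A hedged cleanup sketch, run on each server, using the /export brick path from the post:

  # Strip the leftover Gluster metadata from the reused brick directory
  setfattr -x trusted.glusterfs.volume-id /export
  setfattr -x trusted.gfid /export
  rm -rf /export/.glusterfs
  service glusterd restart

If reformatting the partition did not help, one common cause is that /export is actually a directory on the root filesystem rather than on the partition that was formatted.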
2013 Nov 01
1
Gluster "Cheat Sheet"
Greetings, One of the best things I've seen at conferences this year has been a bookmark distributed by the RDO folks with the most common and/or useful commands for OpenStack users. Some people at Red Hat were wondering about doing the same for Gluster, and I thought it would be a great idea. Paul Cuzner, the author of the gluster-deploy project, took a first cut, pasted below. What do you
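For a sense of what such a bookmark might carry, a few of the everyday CLI checks (volume name gv0 is an assumption):

  gluster peer status            # health of the trusted storage pool
  gluster volume info gv0        # volume layout and configured options
  gluster volume status gv0      # per-brick process and port status
  gluster volume heal gv0 info   # pending self-heal entries on replicas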
2013 Dec 17
2
Quick start guide
Hi all, I am trying to follow the instructions from the Quick Start guide, but I am running into problems when issuing the following command:

mkfs.xfs -i size=512 /dev/sdb1
/dev/sdb1: No such file or directory

If I create the sdb1 folder in the /dev folder, I get the following error when issuing the mkfs.xfs command:

mkfs.xfs: cannot open /dev/sdb1: Is a directory

Any assistance would be most
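The first error means the partition /dev/sdb1 does not exist yet; device nodes are not ordinary folders, so creating a directory under /dev cannot fix it, as the second error shows. A sketch of creating the partition first, assuming an empty spare disk /dev/sdb:

  lsblk                                          # confirm /dev/sdb is present and unused
  parted -s /dev/sdb mklabel gpt                 # new partition table (destroys existing data)
  parted -s /dev/sdb mkpart primary xfs 0% 100%  # creates /dev/sdb1
  mkfs.xfs -i size=512 /dev/sdb1                 # now the Quick Start command succeeds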
2013 Sep 11
1
Possible memory leak ?
Hi, I am using gluster 3.3.1 on CentOS 6, installed from the glusterfs-3.3.1-1.el6.x86_64.rpm packages. I am seeing the Committed_AS memory continually increasing, and the processes using the memory are glusterfsd instances; see http://imgur.com/K3dalTW for a graph. Both nodes exhibit the same behaviour. I have tried the suggested echo 2 > /proc/sys/vm/drop_caches but it made no
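A hedged sketch for tracking that growth from both sides, assuming a volume named gv0; "gluster volume status ... mem" is available in the 3.3 CLI and dumps Gluster's own per-brick memory accounting:

  ps -o pid,rss,vsz,cmd -C glusterfsd   # resident size of each brick daemon
  gluster volume status gv0 mem         # Gluster's internal allocator stats
  grep Committed_AS /proc/meminfo       # kernel-side committed memory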
2013 Dec 12
3
Is Gluster the wrong solution for us?
We are about to abandon GlusterFS as a solution for our object storage needs. I'm hoping to get some feedback to tell me whether we have missed something and are making the wrong decision. We're already a year into this project after evaluating a number of solutions. I'd like not to abandon GlusterFS if we just misunderstand how it works. Our use case is fairly straightforward.
2013 Apr 30
1
3.3.1 distributed-striped-replicated volume
I'm hitting the 'cannot find stripe size' bug:

[2013-04-29 17:42:24.508332] E [stripe-helpers.c:268:stripe_ctx_handle] 0-gv0-stripe-0: Failed to get stripe-size
[2013-04-29 17:42:24.513013] W [fuse-bridge.c:968:fuse_err_cbk] 0-glusterfs-fuse: 867: FSYNC() ERR => -1 (Invalid argument)

Is there a fix for this in 3.3.1 or do we need to move to git HEAD to make this work? M.
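For context, the volume type in question is created along these lines (host and brick names are assumptions); with eight bricks, stripe 2 x replica 2 gives two distribute subvolumes of four bricks each:

  gluster volume create gv0 stripe 2 replica 2 \
      s1:/b s2:/b s3:/b s4:/b s5:/b s6:/b s7:/b s8:/b
  gluster volume start gv0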
2013 Jun 03
2
recovering gluster volume || startup failure
Hello Gluster users: sorry for the long post; I have run out of ideas here. Kindly let me know if I am looking at the right places for logs, and any suggested actions. Thanks. A sudden power loss caused a hard reboot, and now the volume does not start.

GlusterFS 3.3.1 on CentOS 6.1
Transport: TCP
Sharing the volume over NFS for VM storage (VHD files)
Type: distributed, only 1 node (brick)
XFS (LVM)
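A first-pass triage sketch for a volume that will not start after a hard reboot (volume and device names are assumptions): check that the management daemon and the brick filesystem are healthy before forcing a restart.

  service glusterd status                       # is the management daemon up?
  tail -n 50 /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
  xfs_repair -n /dev/vg0/brick                  # dry-run check of the brick FS (unmounted)
  gluster volume start VOLNAME force            # retry once the brick mounts cleanly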
2013 Aug 21
1
FileSize changing in GlusterNodes
Hi, When I upload files into the gluster volume, it replicates all the files to both gluster nodes, but the file size varies slightly (by 4-10KB), which changes the md5sum of the file. Command used to check file size: du -k *. I'm using GlusterFS 3.3.1 with CentOS 6.4. This is creating inconsistency between the files on the two bricks. What is the reason for this changed file size and how can
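Worth noting: du -k reports allocated blocks, which can legitimately differ between bricks, while the byte length and content should not. A hedged way to compare what actually matters (paths are assumptions); if the checksums differ through the mount point, that is a real replication problem, but if only du differs, it is just block accounting:

  stat -c '%s %n' /data/brick/file.bin           # exact byte size on each brick
  md5sum /mnt/glustervol/file.bin                # checksum through the mount point
  getfattr -d -m . -e hex /data/brick/file.bin   # xattrs add on-disk overhead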
2013 Aug 28
1
GlusterFS extended attributes, "system" namespace
Hi, I'm running GlusterFS 3.3.2 and I'm having trouble getting geo-replication to work. I think it is a problem with extended attributes. I'm using ssh with a normal user to perform the replication. In the server log in /var/log/glusterfs/geo-replication/VOLNAME/ssh?.log I'm getting an error "RepceClient: call ?:?:? (xtime) failed on peer with OSError". On the
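The xtime markers that geo-replication compares live in the trusted xattr namespace on the bricks, which only root may read; that restriction is a common stumbling block for non-root setups. A hedged inspection sketch (brick path is an assumption):

  # Dump trusted.* xattrs on a brick directory; run as root on the server.
  # Geo-replication markers appear as trusted.glusterfs.<uuid>.xtime.
  getfattr -d -m trusted -e hex /data/brick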
2013 Jun 13
2
incomplete listing of a directory, sometimes getdents loops until out of memory
Hello, We're having an issue with our distributed gluster filesystem:

* gluster 3.3.1 servers and clients
* distributed volume -- 69 bricks (4.6T each) split evenly across 3 nodes
* xfs backend
* nfs clients
* nfs.enable-ino32: On
* servers: CentOS 6.3, 2.6.32-279.14.1.el6.centos.plus.x86_64
* clients: CentOS 5.7, 2.6.18-274.12.1.el5

We have a directory containing 3,343 subdirectories. On
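One way to confirm the loop is to trace the directory stream directly on a client (mount path is an assumption); in a loop, getdents keeps returning the same batch of entries instead of advancing to end-of-directory:

  strace -f -e trace=getdents,getdents64 ls -f /mnt/gluster/bigdir > /dev/null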
2013 Dec 10
4
Structure needs cleaning on some files
Hi All, When reading some files we get this error:

md5sum: /path/to/file.xml: Structure needs cleaning

In /var/log/glusterfs/mnt-sharedfs.log we see these errors:

[2013-12-10 08:07:32.256910] W [client-rpc-fops.c:526:client3_3_stat_cbk] 1-testvolume-client-0: remote operation failed: No such file or directory
[2013-12-10 08:07:32.257436] W [client-rpc-fops.c:526:client3_3_stat_cbk]
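"Structure needs cleaning" is the EUCLEAN error XFS raises when it detects on-disk corruption, so the brick filesystem itself is the first suspect. A hedged repair sketch (device and mount point are assumptions):

  umount /data/brick          # the filesystem must be offline for repair
  xfs_repair -n /dev/sdb1     # -n: check only, report what would be fixed
  xfs_repair /dev/sdb1        # actual repair if the dry run finds damage
  mount /data/brick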
2013 Sep 13
1
glusterfs-3.4.1qa2 released
RPM: http://bits.gluster.org/pub/gluster/glusterfs/3.4.1qa2/ SRC: http://bits.gluster.org/pub/gluster/glusterfs/src/glusterfs-3.4.1qa2.tar.gz This release is made off jenkins-release-42 -- Gluster Build System
2013 Oct 02
1
Shutting down a GlusterFS server.
Hi, I have a 2-node replica volume running with GlusterFS 3.3.2 on CentOS 6.4, and I want to shut down one of the gluster servers for maintenance. Is there any best practice to follow when turning off a server, in terms of services etc., or can I just shut the server down? Thanks & Regards, Bobby Jacob
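One commonly used sequence, as a sketch (volume name gv0 is an assumption): confirm nothing is pending heal, stop the Gluster daemons, then power off; the surviving replica serves clients, and self-heal catches the node up after it returns.

  gluster volume heal gv0 info   # wait until no entries are pending
  service glusterd stop          # stop the management daemon
  pkill glusterfsd               # stop the brick daemons on this node
  shutdown -h now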
2013 Aug 20
1
files got sticky permissions T--------- after gluster volume rebalance
Dear gluster experts, We're running glusterfs 3.3 and we have met file permission problems after a gluster volume rebalance. Files got sticky permissions T--------- after the rebalance, which breaks our clients' normal fops unexpectedly. Has anyone seen this issue? Thank you for your help.
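For what it's worth, zero-length files with the sticky-bit-only mode and a trusted.glusterfs.dht.linkto xattr are DHT link files, pointers that rebalance creates when a file's hash moves to another brick. A hedged way to spot them on a brick (path is an assumption):

  find /data/brick -type f -perm 1000 -size 0 \
    -exec getfattr -n trusted.glusterfs.dht.linkto {} \;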
2013 Jun 20
1
Rev Your (RDMA) Engines for the RDMA GlusterFest
It's that time again: we want to test the GlusterFS 3.4 beta before we unleash it on the world. Like our last test fest, we want you to put the latest GlusterFS beta through real-world usage scenarios that will show you how it compares to previous releases. Unlike the last time, we want to focus this round of testing on InfiniBand and RDMA hardware. For a description of how to do this, see
2013 Oct 23
3
Samba vfs_glusterfs Quota Support?
Hi All, I'm setting up a gluster cluster that will be accessed via SMB, and I was hoping the quotas would be honoured on the shares. I've configured a quota on the path itself:

# gluster volume quota gfsv0 list
path                    limit_set  size
---------------------------------------
/shares/testsharedave   10GB       8.0KB

And I've
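For reference, a minimal smb.conf share wired through the vfs_glusterfs module, as a sketch (the share and volume names come from the post; the remaining settings are assumptions):

  [testsharedave]
      vfs objects = glusterfs
      glusterfs:volume = gfsv0
      glusterfs:logfile = /var/log/samba/glusterfs-gfsv0.%M.log
      path = /shares/testsharedave
      read only = no
      kernel share modes = no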
2013 May 03
1
GlusterFS VS OpenVZ kernel
Hi, We have a problem with GlusterFS. We are using it on a CentOS 5 machine with an OpenVZ kernel. The Gluster daemon and gluster clients run on the host, not in a container. Recently we noticed a problem when we upgraded the OpenVZ kernel: after the upgrade there are strange errors when accessing the gluster volume. There are a few folders that can't be seen with 'ls', but if you
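A quick way to demonstrate that symptom, as a hedged sketch (paths are assumptions): the entry is missing from the readdir output, yet a direct lookup by name still succeeds:

  ls /mnt/gluster/parent | grep missingdir   # absent from the listing
  stat /mnt/gluster/parent/missingdir        # but direct lookup works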
2013 Apr 29
1
Replicated and Non Replicated Bricks on Same Partition
Gluster-Users, We currently have a 30 node Gluster Distributed-Replicate 15 x 2 filesystem. Each node has a ~20TB xfs filesystem mounted to /data and the bricks live on /data/brick. We have been very happy with this setup, but are now collecting more data that doesn't need to be replicated because it can be easily regenerated. Most of the data lives on our replicated volume and is
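Nothing prevents two volumes from keeping bricks on the same XFS filesystem as long as each volume gets its own directory; a sketch with assumed host names:

  # Replicated volume on the existing brick directories
  gluster volume create repvol replica 2 \
      node01:/data/brick node02:/data/brick
  # Separate pure-distribute volume for regenerable data, same partitions
  gluster volume create scratchvol \
      node01:/data/scratch node02:/data/scratch

The caveat is that both volumes then draw from the same pool of free space on /data.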