similar to: Striped replicated volumes in Gluster 3.3.0

Displaying 20 results from an estimated 20000 matches similar to: "Striped replicated volumes in Gluster 3.3.0"

2011 Dec 08
1
Can't create striped replicated volume
Hi, I'm trying to create a striped replicated volume but am getting this error: gluster volume create cloud stripe 4 replica 2 transport tcp nebula1:/dataPool nebula2:/dataPool nebula3:/dataPool nebula4:/dataPool wrong brick type: replica, use <HOSTNAME>:<export-dir-abs-path> Usage: volume create <NEW-VOLNAME> [stripe <COUNT>] [replica <COUNT>]
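A hedged sketch of what a corrected command might look like (assuming Gluster 3.3 syntax; brick names taken from the post): stripe 2 replica 2 consumes exactly the four bricks listed, whereas stripe 4 replica 2 would require eight.

    # Sketch, not verified: 2 stripes x 2 replicas = 4 bricks;
    # adjacent bricks form each replica pair.
    gluster volume create cloud stripe 2 replica 2 transport tcp \
        nebula1:/dataPool nebula2:/dataPool nebula3:/dataPool nebula4:/dataPool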
2009 Mar 11
1
Enterprise Application with O_DIRECT access
Hello everyone, I am learning about and evaluating GlusterFS for film/video editing facilities. Some major realtime film/video editing applications use O_DIRECT file access for video/audio data files. The GLFS client, via the FUSE mechanism, disallows opening files with the O_DIRECT flag. I wrote a small sample program that reads a file with the O_DIRECT flag and tried opening files on GLFS volumes. It
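One way to reproduce this without custom C code (a sketch, untested here; /mnt/glfs is a hypothetical mount point) is dd with iflag=direct, which opens its input file with O_DIRECT:

    # Attempt an O_DIRECT read from a GlusterFS mount; older FUSE
    # clients reject the open with EINVAL.
    dd if=/mnt/glfs/sample.dat of=/dev/null bs=4k count=1 iflag=direct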
2013 Jul 26
5
[FEEDBACK] Governance of GlusterFS project
Hello everyone, We are in the process of formalizing the governance model of the GlusterFS project. Historically, the governance of the project has been loosely structured. This is an invitation to all of you to participate in this discussion and provide your feedback and suggestions on how we should evolve a formal model. Feedback from this thread will be considered to the extent possible in
2008 Oct 07
4
gluster over infiniband....
Hey guys, I am running gluster over infiniband, and I have a couple of questions. We have four servers, each with 1 disk that I am trying to access over infiniband using gluster. The servers look like they start okay, here are the last 10 or so lines of a client log (they are all identical): 2008-10-07 07:18:40 D [spec.y:196:section_sub] parser: child:stripe0->remote1 2008-10-07 07:18:40 D
2013 Sep 05
1
NFS can't be used by ESXi with Striped Volume
After some testing, I can confirm that ESXi can't use a Striped-Replicate volume over GlusterFS's NFS, but it does succeed on Distributed-Replicate. Does anyone know how or why? 2013/9/5 higkoohk <higkoohk at gmail.com> > Thanks Vijay ! > > It ran successfully after 'volume set images-stripe nfs.nlm off'. > > Now I can use ESXi with GlusterFS's NFS export. > > Many
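The generic form of the workaround quoted in the thread (a sketch; substitute your own volume name for images-stripe):

    # Disable NLM on the volume's built-in NFS server before
    # mounting the export from ESXi.
    gluster volume set images-stripe nfs.nlm off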
2012 Oct 10
1
Change transport type on volume from tcp to rdma
Hello I have two peers set up and working with 2 bricks each. They have been working via tcp for the last 4-5 months. I just got two Infiniband cards and put them in the peers. I want to change the transport type to rdma instead of tcp, but I don't see an easy way to do this. Can you please help me with proper instructions? Best Regards Ivan Dimitrov
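One commonly suggested approach (a sketch only; verify config.transport against your Gluster version's documentation, and note that myvol is a placeholder volume name):

    # Stop the volume, switch its transport, then start it again.
    gluster volume stop myvol
    gluster volume set myvol config.transport rdma
    gluster volume start myvol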
2012 Sep 10
1
A problem with gluster 3.3.0 and Sun Grid Engine
Hi, We have a huge problem on our Sun Grid Engine cluster with glusterfs 3.3.0. Could somebody help me? Based on my understanding, if a folder is removed and recreated on another client node, a program that tries to create a new file under that folder fails very often. We partially fixed this problem by running "ls" on the folder before doing anything in our command; however, Sun Grid Engine
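A minimal sketch of that workaround (paths are hypothetical): force a fresh lookup of the directory before creating anything in it.

    # Re-read the directory so the client refreshes its view,
    # then create the output file.
    ls /shared/jobdir > /dev/null 2>&1
    touch /shared/jobdir/output.log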
2008 Aug 01
1
file descriptor in bad state
I've just set up a simple gluster storage system on CentOS 5.2 x64 with gluster 1.3.10. I have three storage bricks and one client. Every time I run iozone across this setup, I seem to get a bad file descriptor around the 4k mark. Any thoughts why? I'm sure more info is wanted; I'm just not sure what else to include at this point. thanks [root at green gluster]# cat
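For reference, an iozone run of the kind described might look like this (a sketch; the mount point is hypothetical):

    # Automatic mode across record and file sizes up to 16 MB,
    # writing the test file on the gluster mount.
    iozone -a -n 64k -g 16m -f /mnt/gluster/iozone.tmp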
2011 May 06
2
single storage server
I have a single storage server which exports /data to a number of clients. Is it OK to access the data on the storage server directly (i.e. not via a glusterfs mount)? (I know this causes problems when there are multiple servers.) This would simplify some configurations. Nick
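A common answer to this pattern (hedged; not taken from the thread itself) is to keep all access on the client path by mounting the volume locally on the server:

    # Access data through a local glusterfs client mount instead of
    # the exported directory itself ('data' is the volume name here).
    mount -t glusterfs localhost:/data /mnt/data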
2011 Oct 18
2
gluster rebalance taking three months
Hi guys, we have had a rebalance running on eight bricks since July, and this is what the status looks like right now: ===Tue Oct 18 13:45:01 CST 2011 ==== rebalance step 1: layout fix in progress: fixed layout 223623 There are roughly 8 TB of photos in the storage, so how long should this rebalance take? What does the number (in this case 223623) represent? Our gluster information: Repository
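For progress beyond the layout-fix counter, the rebalance status command reports per-node totals (a sketch; myvol is a placeholder volume name):

    # Show rebalance progress: files scanned, rebalanced, and failures.
    gluster volume rebalance myvol status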
2018 Apr 02
2
Proposal to make Design Spec and Document for a feature mandatory.
Hi all, Better documentation about features, and information about how to use them, is one of the major asks of the community when they want to use glusterfs, or want to contribute by helping build the features, bug fixes for features, etc. Finally, we have taken some baby steps to get that ask of having better design and documentation resolved. We had discussed this in our
2018 Jan 31
3
df does not show full volume capacity after update to 3.12.4
Amar, Thanks for your prompt reply. No, I do not plan to fix the code and re-compile. I was hoping it could be fixed by setting the shared-brick-count or some other option. Since this is a production system, we will wait until a fix is in a release. Thanks, Eva (865) 574-6894 From: Amar Tumballi <atumball at redhat.com> Date: Wednesday, January 31, 2018 at 12:15 PM To: Eva Freer
2018 Jan 02
3
2018 - Plans and Expectations on Gluster Community
Hi All, First of all, happy new year 2018! Hope all of your wishes come true this year, and hope you will have time to contribute to the Gluster Project this year too :-) As a contributor and one of the maintainers of the project, I would like to propose the plans below for the Gluster Project; please share your feedback and comments on them. - *Improved Automation to reduce the process burden*
2012 May 04
1
'Transport endpoint not connected'
This should be a pretty easy issue to reproduce, at least it seems to happen to me very often. (gluster-3.2.5) After storage backend(s) have been rebooted, the client mounts are often broken until you unmount and remount. Example from this morning: I had rebooted storage servers to upgrade them to ubuntu 12.04. Now at the client side: $ ls /gluster/scratch ls: cannot access /gluster/scratch:
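The remount workaround described would look something like this (a sketch; the server and volume names are hypothetical):

    # Drop the dead connection by remounting the volume on the client.
    umount /gluster/scratch
    mount -t glusterfs storage1:/scratch /gluster/scratch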
2018 Feb 01
0
df does not show full volume capacity after update to 3.12.4
Hi, I think we have a workaround until we have a fix in the code. The following worked on my system. Copy the attached file to /usr/lib/glusterfs/3.12.4/filter/. (You might need to create the 'filter' directory in this path.) Make sure the file has execute permissions. On my system: [root at rhgsserver1 fuse2]# cd /usr/lib/glusterfs/3.12.5/ [root at rhgsserver1 3.12.5]# l total 4.0K
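Condensed into commands, the steps above might look like this (a sketch; the attachment's filename is not given in the thread, so filter.py is hypothetical):

    # Create the filter directory for this glusterfs version, install
    # the attached script, and make it executable.
    mkdir -p /usr/lib/glusterfs/3.12.4/filter
    cp filter.py /usr/lib/glusterfs/3.12.4/filter/
    chmod +x /usr/lib/glusterfs/3.12.4/filter/filter.py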
2018 Jan 02
0
2018 - Plans and Expectations on Gluster Community
Hi Amar, If I can say something about the development of GlusterFS, it is that there are 2 things missing: 1. Breakage between releases. I'm "stuck" using GlusterFS 3.8 because the support for enabling NFS-Ganesha from the gluster command has vanished, without the error message mentioning what other command replaces it. Judging from other people's answers - the
2018 Jan 31
0
df does not show full volume capacity after update to 3.12.4
Hi Freer, Our analysis is that this issue is caused by https://review.gluster.org/17618. Specifically, in 'gd_set_shared_brick_count()' from https://review.gluster.org/#/c/17618/9/xlators/mgmt/glusterd/src/glusterd-utils.c . But even if we fix it today, I don't think we have a release planned immediately for shipping this. Are you planning to fix the code and re-compile? Regards,
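For context, the interim workaround circulated for this bug (hedged; paths vary by installation and myvol is a placeholder) was to reset shared-brick-count in the generated brick volfiles:

    # Force shared-brick-count back to 1 in every volfile for the
    # volume, then restart glusterd to pick up the change.
    sed -i 's/option shared-brick-count [0-9]*/option shared-brick-count 1/' \
        /var/lib/glusterd/vols/myvol/*.vol
    systemctl restart glusterd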
2018 Apr 13
0
Proposal to make Design Spec and Document for a feature mandatory.
All, Thanks to Nigel, this is now deployed, and any new patches referencing github (i.e., new features) need the 'DocApproved' and 'SpecApproved' labels. Regards, Amar On Mon, Apr 2, 2018 at 10:40 AM, Amar Tumballi <atumball at redhat.com> wrote: > Hi all, > > A better documentation about the feature, and also information about how > to use the features are one
2011 May 31
2
Files are duplicated after renaming (with glusterfs+zfs-fuse)
Hi all, I installed glusterfs (version 3.1.3) with zfs-fuse (0.6.9) as the underlying filesystem. After renaming a file, I found the file duplicated. Following is my test scenario. root at ubuntu:/# zpool create tank /dev/sdb root at ubuntu:/# gluster volume create test-volume ubuntu:/tank/exp1 ubuntu:/exp2 root at ubuntu:/# gluster volume start test-volume root at ubuntu:/# mount -t glusterfs
2018 Jan 31
4
df does not show full volume capacity after update to 3.12.4
Nithya, I will be out of the office for ~10 days starting tomorrow. Is there any way we could possibly resolve it today? Thanks, Eva (865) 574-6894 From: Nithya Balachandran <nbalacha at redhat.com> Date: Wednesday, January 31, 2018 at 11:26 AM To: Eva Freer <freereb at ornl.gov> Cc: "Greene, Tami McFarlin" <greenet at ornl.gov>, "gluster-users at