similar to: gluster WITHOUT nfs

Displaying 20 results from an estimated 20000 matches similar to: "gluster WITHOUT nfs"

2010 Jan 03
2
Where is log file of GlusterFS 3.0?
I cannot find the log file of GlusterFS 3.0! Previously I installed GlusterFS 2.0.6 without problems, and the server and client log files were placed in /var/log/glusterfs/... But after installing GlusterFS 3.0 (on CentOS 5.4 64-bit), (4 servers + 1 client), I start the GlusterFS servers and client, and typing *df -H* at the client gives: "Transport endpoint is not connected". I want to track down the bug, but I cannot find
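For context, a stock GlusterFS 3.x build logs under /var/log/glusterfs on both servers and clients; a quick sketch of where to look (paths assume the default install prefix):

```shell
# Server side: one log per daemon and per exported brick (default prefix assumed).
ls /var/log/glusterfs/ /var/log/glusterfs/bricks/

# Client side: the FUSE mount log is named after the mount point,
# e.g. mnt-gluster.log for a mount on /mnt/gluster.
tail -n 50 /var/log/glusterfs/*.log
```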
2010 Nov 11
1
NFS Mounted GlusterFS, secondary groups not working
Howdy, I have a GlusterFS 3.1 volume mounted on a client using NFS. From the client I created a directory under the mount point and set the permissions to root:groupa 750. My user account is a member of groupa on the client, yet I am unable to list the contents of the directory: $ ls -l /gfs/dir1 ls: /gfs/dir1/: Permission denied $ ls -ld /gfs/dir1 drwxr-x--- 9 root groupa 73728 Nov 9
2012 Sep 10
1
A problem with gluster 3.3.0 and Sun Grid Engine
Hi, we have hit a serious problem on our Sun Grid Engine cluster with glusterfs 3.3.0. Could somebody help me? From what I can tell, if a folder is removed and recreated on another client node, a program that tries to create a new file under that folder fails very often. We partially worked around this by running "ls" on the folder before doing anything in our command, however, Sun Grid Engine
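The "ls first" workaround described above forces the client to do a fresh lookup of the directory before using it. A minimal sketch of such a job wrapper (the path and file names are hypothetical, not from the original post):

```shell
#!/bin/sh
# Hypothetical SGE job wrapper illustrating the workaround: list the
# directory first to refresh this client's view of it, then create files.
JOBDIR=/gfs/jobs/$1            # hypothetical job directory on the gluster mount
ls "$JOBDIR" > /dev/null 2>&1  # force a lookup before any create
touch "$JOBDIR/output.$$"      # now file creation is far less likely to fail
```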
2011 Mar 30
1
Disabling NFS
Howdy, With 3.1.3 an option was added to disable the built-in Gluster NFS server. Does that mean that the following scenario should work: 1. Disable Gluster NFS "gluster volume set <VOLUME> nfs.disable on" 2. Restart the gluster servers for good measure 3. On one of the gluster servers, mount the volume using the gluster fuse client: mkdir /export/users # In /etc/fstab add
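The scenario above can be sketched end to end as follows (<VOLUME> stays a placeholder as in the original; the fstab line is an assumption about a typical FUSE mount, not quoted from the post):

```shell
# 1. Disable the built-in Gluster NFS server for the volume
gluster volume set <VOLUME> nfs.disable on

# 2. Restart the gluster servers for good measure
service glusterd restart

# 3. Mount via the native FUSE client on one of the servers
mkdir -p /export/users
# /etc/fstab entry (assumed form):
#   localhost:/<VOLUME>  /export/users  glusterfs  defaults,_netdev  0 0
mount /export/users
```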
2017 Jun 29
3
Some bricks are offline after restart, how to bring them online gracefully?
Hi all, Gluster and Ganesha are amazing. Thank you for this great work! I'm struggling with one issue and I think that you might be able to help me. I spent some time playing with Gluster and Ganesha, and after gaining some experience I decided to go into production, but this one issue remains. I have 3x node CentOS 7.3 with the most current Gluster and Ganesha from
2010 Nov 13
3
Gluster At SC10 ?
Howdy, are any of the Gluster folks going to SC10 next week? Mike
2017 Aug 07
2
Slow write times to gluster disk
Hi Soumya, We just had the opportunity to try disabling the kernel NFS server and restarting glusterd to start gNFS. However, the gluster NFS daemon crashes immediately on startup. What additional information, beyond what we provide below, would help with debugging? Thanks, Pat -------- Forwarded Message -------- Subject: gluster-nfs crashing on start Date: Mon, 7 Aug 2017 16:05:09
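For reference, the switch being attempted above usually looks like the sketch below on a systemd distribution (service and volume names are assumptions; gNFS cannot register its NFS/mountd ports with rpcbind while the kernel NFS server holds them):

```shell
# Stop and disable the kernel NFS server so its port registrations are freed
systemctl stop nfs-server
systemctl disable nfs-server

# Restart glusterd so it (re)spawns the gluster NFS daemon
systemctl restart glusterd

# Ensure gNFS is enabled for the volume (myvol is a placeholder)
gluster volume set myvol nfs.disable off

# Verify the gNFS exports are now registered
showmount -e localhost
```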
2017 Aug 08
0
Slow write times to gluster disk
----- Original Message ----- > From: "Pat Haley" <phaley at mit.edu> > To: "Soumya Koduri" <skoduri at redhat.com>, gluster-users at gluster.org, "Pranith Kumar Karampuri" <pkarampu at redhat.com> > Cc: "Ben Turner" <bturner at redhat.com>, "Ravishankar N" <ravishankar at redhat.com>, "Raghavendra
2017 Sep 08
1
pausing scrub crashed scrub daemon on nodes
Hi, I am using glusterfs 3.10.1 with 30 nodes of 36 bricks each and 10 nodes of 16 bricks each in a single cluster. By default I have paused the scrub process so that it can be run manually. The first time I ran scrub on demand it was running fine, but after some time I decided to pause the scrub process due to high CPU usage and users reporting that folder listing was taking a long time. But scrub
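The scrub operations mentioned above map onto the BitRot CLI; a sketch of the commands involved (volume name is a placeholder; this assumes bitrot detection is already enabled on the volume):

```shell
# Kick off a manual scrub run instead of waiting for the schedule
gluster volume bitrot myvol scrub ondemand

# Pause and later resume the scrubber (the operation that crashed above)
gluster volume bitrot myvol scrub pause
gluster volume bitrot myvol scrub resume

# Inspect scrubber progress and any corrupted objects found
gluster volume bitrot myvol scrub status
```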
2017 Aug 08
1
Slow write times to gluster disk
Soumya, its [root at mseas-data2 ~]# glusterfs --version glusterfs 3.7.11 built on Apr 27 2016 14:09:20 Repository revision: git://git.gluster.com/glusterfs.git Copyright (c) 2006-2013 Red Hat, Inc. <http://www.redhat.com/> GlusterFS comes with ABSOLUTELY NO WARRANTY. It is licensed to you under your choice of the GNU Lesser General Public License, version 3 or any later version (LGPLv3
2006 Mar 20
9
jEdit Snippets for Ruby on Rails
----------------------------------------------------- Announcing: jEdit Snippets for Ruby on Rails ----------------------------------------------------- I thought I'd "give a little back to the community" and whip up some SuperAbbrev files for ruby and rhtml that mimic all of the Textmate Rails bundle snippets. Note: This was totally inspired by Textmate and the syncPEOPLE
2010 Nov 10
1
ACL with GlusterFS 3.1?
Howdy, Are access control lists (ACL, i.e. setfacl / getfacl) supported in GlusterFS 3.1? If yes, beyond mounting the bricks with "defaults,acl" what do I need to do to enable ACL for both NFS and native Gluster clients? Google isn't returning anything useful on this topic. Thanks, Mike ================================= Mike Hanby mhanby at uab.edu UAB School of Engineering
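For what it's worth, the usual recipe has two halves: the brick filesystems need ACL support, and native clients need the acl mount option. A sketch (server, volume, user, and path names are placeholders):

```shell
# Brick side: the backing filesystem must be mounted with ACL support,
# e.g. in /etc/fstab on each server:
#   /dev/sdb1  /bricks/b1  ext4  defaults,acl  0 0

# Native FUSE client: pass acl at mount time
mount -t glusterfs -o acl server1:/myvol /mnt/gluster

# Then the standard ACL tools apply on the mount
setfacl -m u:alice:rx /mnt/gluster/shared
getfacl /mnt/gluster/shared
```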
2006 Jun 03
9
MergeJS - Easily merge, compress, cache, and version your javascript!
After reading Cal Henderson's Vitamin article Serving Javascript Fast<http://synthesis.sbecker.net/articles/2006/06/03/www.thinkvitamin.com/features/webapps/serving-javascript-fast>I was immediately inspired to create a plugin to easily facilitate this in Ruby on Rails. I whipped up most of it right then, and finally got around to polishing it for release today. Told myself I
2011 Apr 04
1
rdma or tcp?
Is there a document with some guidelines for setting up bricks with tcp or rdma transport? I'm looking at a new deployment where the storage cluster hosts connect via 10GigE, but clients are on 1GigE. Over time, there will be 10GigE clients, but the majority will remain on 1GigE. In this setup, should the storage bricks use tcp or rdma? If tcp is the better choice, and at some point in the
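One option for the mixed setup described above is to create the volume with both transports, so existing 1GigE clients use tcp while future clients can choose rdma at mount time. A sketch (host names, brick paths, and volume name are placeholders):

```shell
# Create the volume supporting both transports
gluster volume create myvol transport tcp,rdma \
    server1:/bricks/b1 server2:/bricks/b1
gluster volume start myvol

# Clients then pick a transport when mounting; tcp is the default,
# rdma can be requested explicitly:
mount -t glusterfs -o transport=rdma server1:/myvol /mnt/gluster
```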
2010 Oct 21
1
Some client problems with TCP-only NFS in Gluster 3.1
I see that the built-in NFS support registers mountd in portmap only with tcp and not udp. While this makes sense for a TCP-only NFS implementation, it does cause problems for some clients: Ubuntu 10.04 and 7.04 mount just fine. Ubuntu 8.04 gives "requested NFS version or transport protocol is not supported", unless you specify "-o mountproto=tcp" as a mount option, in
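The workaround mentioned above, spelled out for an affected client (server and volume names are placeholders; vers=3 is assumed since gNFS speaks NFSv3 only):

```shell
# Force both the NFS protocol and the mount protocol over TCP,
# since gluster's mountd is registered with tcp only:
mount -t nfs -o vers=3,proto=tcp,mountproto=tcp server1:/myvol /mnt/nfs
```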
2011 Oct 25
1
problems with gluster 3.2.4
Hi, we have 4 test machines (gluster01 to gluster04). I've created a replicated volume across the 4 machines. Then on the client machine I've executed: mount -t glusterfs gluster01:/volume01 /mnt/gluster And everything works fine. The problem occurs on every client machine where I do: umount /mnt/gluster and then mount -t glusterfs gluster01:/volume01 /mnt/gluster again. The client
2017 Nov 13
2
snapshot mount fails in 3.12
Hi, quick question about snapshot mounting: Were there changes in 3.12 that were not mentioned in the release notes for snapshot mounting? I recently upgraded from 3.10 to 3.12 on CentOS (using centos-release-gluster312). The upgrade worked flawless. The volume works fine too. But mounting a snapshot fails with those two error messages: [2017-11-13 08:46:02.300719] E
2013 Jun 03
2
recovering gluster volume || startup failure
Hello Gluster users: sorry for the long post; I have run out of ideas here, so kindly let me know whether I am looking in the right places for logs, and any suggested actions... Thanks. A sudden power loss caused a hard reboot - now the volume does not start. Glusterfs 3.3.1 on CentOS 6.1, transport: TCP, sharing the volume over NFS for VM storage - VHD files. Type: distributed - only 1 node (brick), XFS (LVM)
2008 Jun 27
3
Glusterfs could not open spec file
Dear Team, I have installed and configured gluster on one server and one client. It worked once, but later it stopped working. My configuration files: server [root at rhel2 ~]# cat /etc/glusterfs/glusterfs-server.vol volume rhel2 type storage/posix # POSIX FS translator option directory /opt # Export this directory end-volume volume rhel2 type
2010 Nov 27
1
GlusterFS replica question
Hi, For a small lab environment I want to use GlusterFS with only ONE node. After some time I would like to add a second node as a redundant node (replica). Is this possible in GlusterFS 3.1 without downtime? Cheers PK
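In later 3.x releases the usual answer is add-brick with an increased replica count; whether 3.1 itself accepts a replica-count change on add-brick is an assumption here, so treat this as a sketch of the general approach (host, brick, and volume names are placeholders):

```shell
# Grow a 1-brick distribute volume into a 2-way replica in place
gluster volume add-brick myvol replica 2 node2:/bricks/b1

# On early 3.x releases, walking the mount triggers self-heal of
# existing files onto the new replica:
find /mnt/gluster -noleaf -print0 | xargs -0 stat > /dev/null
```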