similar to: Gluster update | need your support

Displaying 20 results from an estimated 3000 matches similar to: "Gluster update | need your support"

2009 Jan 07
12
glusterfs alternative ? :P
I know that this is not the appropriate place :). Does anyone know of an alternative to glusterfs? :)
2009 Jan 07
3
What version has the HA Translator?
Does the stable version of GlusterFS, GlusterFS 1.3-SUSKE RELEASE, have the High Availability (HA) translator, or is it only in v1.4? John
2008 Dec 20
14
building 1.4.0rc6
I am trying to build the latest release candidate and have run into a bit of a problem. When I run ./configure, I get:

GlusterFS configure summary
===========================
FUSE client        : no
Infiniband verbs   : no
epoll IO multiplex : yes
Berkeley-DB        : no
libglusterfsclient : yes
mod_glusterfs      : no ()
argp-standalone    : no

I am going to need the gluster FUSE client now
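The configure script turns the FUSE client off when it cannot find the FUSE development headers at configure time. A minimal sketch of the usual remedy, assuming a Red Hat-style system and the GlusterFS-patched FUSE packages discussed elsewhere in these threads (package names may differ per distribution):

  # Install the FUSE userspace headers, then re-run configure.
  yum install fuse fuse-devel
  ./configure
  # The summary should now show "FUSE client : yes"; only then run make.
  make && make install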
2008 Jun 27
3
Glusterfs could not open spec file
Dear Team, I have installed and configured Gluster on one server and one client. It worked fine at first; later it stopped working. My configuration files:

server
[root@rhel2 ~]# cat /etc/glusterfs/glusterfs-server.vol
volume rhel2
  type storage/posix        # POSIX FS translator
  option directory /opt     # Export this directory
end-volume
volume rhel2
  type
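Note that the snippet declares two volumes with the same name (rhel2), which by itself will make the spec file unparsable. A minimal sketch of a working server spec for this layout, in the 1.3-era syntax (the auth line is an assumption for open test access; later releases spell it auth.addr):

  volume brick
    type storage/posix
    option directory /opt
  end-volume

  volume server
    type protocol/server
    option transport-type tcp/server
    subvolumes brick
    option auth.ip.brick.allow *    # assumption: allow all clients while testing
  end-volume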
2008 Oct 17
6
GlusterFS compared to KosmosFS (now called cloudstore)?
Hi. I'm evaluating GlusterFS for our DFS implementation, and wondered how it compares to KFS/CloudStore? These features here look especially nice (http://kosmosfs.sourceforge.net/features.html). Any idea which of them exist in GlusterFS as well? Regards.
2008 Jul 21
1
1.3.10 compile failure on FC9 w/ Gluster-patched Fuse
Hello all, I am attempting to compile Gluster 1.3.10 on a newly-installed Fedora Core 9 machine, without success. The only non-stock package on the otherwise vanilla system is the Gluster version of Fuse.

-----------------------
# rpm -qa | grep fuse
fuse-devel-2.7.3glfs10-1.i386
fuse-2.7.3glfs10-1.i386
fuse-libs-2.7.3glfs10-1.i386
# rpmbuild -ta glusterfs-1.3.10.tar.gz
(..snip..)
make[5]:
2008 Dec 16
5
Self-heal's behavior: problem on "replace" -- it leaves garbage.
Hi. I'm using GlusterFS v1.3.12 (glusterfs-1.3.12.tar.gz) via FUSE (fuse-2.7.3glfs10.tar.gz) on CentOS 5.2 x86_64 (Linux kernel 2.6.18-92.el5) now. The nodes are HP Proliant DL360 G5 (as GlusterFS Client) and DL180 G5 (as GlusterFS Servers). And the connections are all TCP/IP on Gigabit ethernet. Then, I tested self-heal and I found a technical problem about "replace" -- self-heal
2009 Jan 14
4
locks feature not loading ? (2.0.0rc1)
Hi all, I upgraded from 1.4.0rc3 to 2.0.0rc1 in my test environment, and while the upgrade itself went smoothly, I appear to be having problems with the (posix-)locks feature. :( The feature is clearly declared in the server config file, and according to the DEBUG-level logs it is loaded successfully at runtime; however, when Gluster attempts to lock an object (for the purposes of AFR
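For comparison, a minimal sketch of how the locks feature is usually declared in a server spec, layered directly over the storage translator (volume names and the export directory are placeholders; the 2.0 series refers to the translator as features/locks, the older 1.x name being features/posix-locks):

  volume posix
    type storage/posix
    option directory /data/export    # placeholder
  end-volume

  volume locks
    type features/locks              # features/posix-locks in 1.x specs
    subvolumes posix
  end-volume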
2008 Aug 01
8
Sharing home directories between two symetric nodes?
Hello, I just discovered Gluster a couple of weeks ago, went through the initial documentation, and got it compiled. It looks very promising both for my home network and for work. For now I'm concentrating on home - we have two Ubuntu 8.04 desktops, one for me and one for my wife. We generally try to keep them off when not used, but at any time either one of them could be up or down. I was
2008 Jun 11
1
software raid performance
Are there known performance issues with using glusterfs on software raid? I've been playing with a variety of configs (AFR, AFR with Unify) on a two server setup. Everything seems to work well, but performance (creating files, reading files, appending to files) is very slow. Using the same configs on two non-software raid machines shows significant performance increases. Before I go a
2008 Dec 16
4
GlusterFS process consumes a lot of memory
Hello!!! I am trying to use GlusterFS with OpenVZ, but the glusterfs process's memory usage grows by about 2 MB every minute. How can I fix this? (P.S. Sorry about my bad English.) Cluster information: 1) 3 nodes (server-client), conf:

##############
# local data #
##############
volume vz
  type storage/posix
  option directory /home/local
end-volume

volume vz-locks
  type features/posix-locks
  subvolumes vz
end-volume
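Before tuning anything, it helps to quantify the growth. A minimal sketch that logs the resident set size of the GlusterFS processes once a minute (the process names are assumptions; adjust if your server binary is named differently):

  while true; do
    date
    ps -o pid,rss,cmd -C glusterfs,glusterfsd   # RSS is in kilobytes
    sleep 60
  done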
2008 Dec 10
3
AFR healing problem after returning one node.
I've got a configuration which, in brief, combines AFRs and unify - the servers export n[1-3]-brick[12] and n[1-3]-ns, and the client has this cluster configuration:

volume afr-ns
  type cluster/afr
  subvolumes n1-ns n2-ns n3-ns
  option data-self-heal on
  option metadata-self-heal on
  option entry-self-heal on
end-volume

volume afr1
  type cluster/afr
  subvolumes n1-brick2
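In the AFR design of this era, self-heal is driven from the client side and only runs when a file is accessed, so after a node returns it is common practice to walk the whole mountpoint to force every file to be examined. A minimal sketch, assuming the volume is mounted at /mnt/glusterfs:

  # Reading the first byte of each file makes AFR compare the replicas
  # and repair the stale copy.
  find /mnt/glusterfs -type f -print0 | xargs -0 head -c1 > /dev/null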
2008 Nov 18
1
gluster, where have you been all my life?
Hi All I've been looking for something like Gluster for a while and stumbled on it today via the wikipedia pages on Filesystems etc. I have a few very very simple questions that might even be too simple to be on the FAQ, but if you think any of them are decent please add them there. I think it might help if I start with what I want to achieve, then ask the questions. We want to build a high
2008 Nov 09
3
Still problem with trivial self heal
Hi! I have a trivial problem with self-healing. Maybe somebody will be able to tell me what I am doing wrong, and why the files do not heal as I expect. Configuration: Servers: two nodes A, B
---------
volume posix
  type storage/posix
  option directory /ext3/glusterfs13/brick
end-volume

volume brick
  type features/posix-locks
  option mandatory on
  subvolumes posix
end-volume

volume server
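For self-heal to run at all, the AFR translator has to sit on the client side over both servers' bricks. A minimal sketch of the matching client spec under that assumption (hostnames are placeholders):

  volume client-a
    type protocol/client
    option transport-type tcp/client
    option remote-host nodeA          # placeholder
    option remote-subvolume brick
  end-volume

  volume client-b
    type protocol/client
    option transport-type tcp/client
    option remote-host nodeB          # placeholder
    option remote-subvolume brick
  end-volume

  volume afr
    type cluster/afr
    subvolumes client-a client-b
  end-volume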
2008 Aug 15
6
Add/remove new server volumes on the fly?
Hi list, 1. While learning GlusterFS, I was wondering if it's possible to add server volumes to increase the space capacity of my cluster "on the fly" - I mean, a hot upgrade. 2. Second, when using a file-replicating strategy (scheduler), is it possible to remove a server node without stopping the whole cluster (e.g. for hardware maintenance, or to add more disk/RAM to the
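In the 1.x series there is no online brick addition: the usual approach is to edit the client spec so the unify volume lists the new brick, then remount. A minimal sketch under that assumption (all names are placeholders):

  volume unify0
    type cluster/unify
    option scheduler rr               # round-robin placement of new files
    option namespace brick-ns         # placeholder namespace volume
    subvolumes brick1 brick2 brick3   # brick3 is the newly added server volume
  end-volume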
2008 Oct 02
0
FW: Why does glusterfs not automatically fix these kinds of problems?
Fwd'ing this since it seems my reply and your response didn't actually go to the mailing list.

-----Original Message-----
From: Keith Freedman [mailto:freedman@FreeFormIT.com]
Sent: Thursday, October 02, 2008 2:02 PM
To: Will Rouesnel
Subject: RE: [Gluster-users] Why does glusterfs not automatically fix these kinds of problems?

At 08:47 PM 10/1/2008, you wrote:
>Unison operates on
2008 Oct 27
1
Transport endpoint is not connected
Hi, I have the following scenario:

#############################################################################
SERVER SIDE (64-bit architecture)
#############################################################################

Two storage machines with:

HARDWARE
DELL PE2900 III
Intel Quad Core Xeon E5420 2.5GHz, 2x6MB cache, 1333MHz FSB
RAM 4 GB FB 667MHz (2x2GB)
8 HDD 1 TB, Near Line SAS 3.5"
2009 Jun 11
2
Issue with files on glusterfs becoming unreadable.
elbert@host1:~$ dpkg -l | grep glusterfs
ii  glusterfs-client  1.3.8-0pre2  GlusterFS fuse client
ii  glusterfs-server  1.3.8-0pre2  GlusterFS fuse server
ii  libglusterfs0     1.3.8-0pre2  GlusterFS libraries and translator modules

I have 2 hosts set up to use AFR with
2008 Oct 31
3
Problem with xlator
Hi, I have the following scenario:

#############################################################################
SERVER SIDE (64-bit architecture)
#############################################################################

Two storage machines with:

HARDWARE
DELL PE2900 III
Intel Quad Core Xeon E5420 2.5GHz, 2x6MB cache, 1333MHz FSB
RAM 4 GB FB 667MHz (2x2GB)
8 HDD 1 TB,
2008 Jun 04
1
balancing redundancy with space utilization
Currently it would seem that AFR will simply copy everything to every brick in the AFR. If I did something like...

volume afr-example
  type cluster/afr
  subvolumes brick1 brick2 brick3 brick4 brick5 brick6 brick7 brick8
end-volume

I would wind up with 8 copies of every file. Clearly, this is too many. What I would rather have is maybe 3 copies of each file distributed randomly across
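The usual answer in this era is to build several three-brick AFR sets and spread files across them with cluster/unify and a scheduler, so each file gets three copies instead of eight. A minimal sketch under that assumption (only six of the eight bricks are shown, to keep the sets evenly three-way; the namespace volume is a placeholder):

  volume afr-set1
    type cluster/afr
    subvolumes brick1 brick2 brick3
  end-volume

  volume afr-set2
    type cluster/afr
    subvolumes brick4 brick5 brick6
  end-volume

  volume unify0
    type cluster/unify
    option scheduler rr               # new files alternate between the sets
    option namespace brick-ns         # placeholder namespace volume
    subvolumes afr-set1 afr-set2
  end-volume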