Displaying 20 results from an estimated 10000 matches similar to: "Building a new storage setup"
2013 Sep 05
1
NFS can't be used by ESXi with a Striped volume
After some testing, I can confirm that ESXi can't use a Striped-Replicate volume
over GlusterFS's NFS export.
But it does work with Distributed-Replicate.
Does anyone know how or why?
2013/9/5 higkoohk <higkoohk at gmail.com>
> Thanks, Vijay!
>
> It ran successfully after 'volume set images-stripe nfs.nlm off'.
>
> Now I can use ESXi with GlusterFS's NFS export.
>
> Many
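The quoted reply is truncated above. A minimal sketch of the workaround it describes, using the volume name from the quoted command:

# disable the NFS lock manager (NLM) on the volume, as the thread suggests
gluster volume set images-stripe nfs.nlm off
# verify the option is set before retrying the ESXi NFS datastore mount
gluster volume info images-stripe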
2008 Oct 27
1
Transport endpoint is not connected
Hi,
I have the following scenario:
#############################################################################
SERVER SIDE (64-bit architecture)
#############################################################################
Two Storage Machines with:
HARDWARE
DELL PE2900 III, Intel Quad Core Xeon E5420 2.5 GHz, 2x6 MB cache, 1333 MHz FSB
RAM: 4 GB FB-DIMM 667 MHz (2x2 GB)
8x 1 TB HDD, Nearline SAS 3.5"
2011 Jun 29
1
Possible new bug in 3.1.5 discovered
"May you live in interesting times"
Is this a curse or a blessing? :)
I've just tested a 3.1.5 GlusterFS native client against a 3.1.3 storage pool using this volume:
Volume Name: pfs-rw1
Type: Distributed-Replicate
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: jc1letgfs16-pfs1:/export/read-write/g01
Brick2: jc1letgfs13-pfs1:/export/read-write/g01
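The brick listing above is truncated (only two of the four bricks are shown). For context, a 2 x 2 distributed-replicate volume like this one would typically have been created along these lines; the last two brick hosts below are placeholders:

# bricks are replicated in pairs, and the pairs are distributed
gluster volume create pfs-rw1 replica 2 transport tcp \
    jc1letgfs16-pfs1:/export/read-write/g01 jc1letgfs13-pfs1:/export/read-write/g01 \
    hostC:/export/read-write/g01 hostD:/export/read-write/g01
gluster volume start pfs-rw1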
2013 Nov 23
1
Maildir issue.
We brought up a test cluster to investigate GlusterFS.
Following the Quick Start instructions, we brought up a two-server replicated
setup (one brick per server) and mounted it from a third box with the FUSE
client (all version 3.4.1).
# gluster volume info
Volume Name: mailtest
Type: Replicate
Volume ID: 9e412774-b8c9-4135-b7fb-bc0dd298d06a
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
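The brick listing is cut off above. For anyone reproducing the setup, the Quick Start style configuration described in this post boils down to roughly the following; server names and brick paths are placeholders:

# on server1: pool the two servers, then create and start a replica-2 volume
gluster peer probe server2
gluster volume create mailtest replica 2 server1:/export/brick1 server2:/export/brick1
gluster volume start mailtest
# on the third box: mount with the FUSE client
mount -t glusterfs server1:/mailtest /mnt/mailtest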
2008 Nov 20
1
My ignorance and Fuse (or glusterfs)
I have a very simple test setup of 2 servers, each working as a glusterfs server and a glusterfs client to the other in an AFR capacity.
The gluster client and server both start up with no errors and are handshaking properly.
On one server, I get the expected behaviour: I touch a file in the export dir and it magically appears in the other's mount point. On the other server, however, the file
2008 Oct 15
1
Glusterfs performance with large directories
We at Wiseguys are looking into GlusterFS to run our Internet Archive.
The archive stores webpages collected by our spiders.
The test setup consists of three data machines, each exporting a volume
of about 3.7TB and one nameserver machine.
File layout is such that each host has its own directory; for example, the
GlusterFS website would be located in:
2009 Apr 28
1
glusterfs and samba (file-max limit reached)
Recently a Gluster volume I set up got mounted on a server that exports it
through Samba. It appears to work up to a point: unexpectedly, under heavy
usage the nodes hit the open file descriptor limit very easily.
Does anybody else have experience with this? Is that kind of usage supported?
Currently one node seems to have surpassed about 3M open files, even though
the samba server claims to have
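The excerpt is truncated above. Not an answer from this thread, but when a node hits the kernel-wide file descriptor limit, the usual first step is to inspect and raise fs.file-max; the value below is illustrative only:

# allocated, free, and maximum file handles on this node
cat /proc/sys/fs/file-nr
# raise the system-wide limit at runtime and persist it across reboots
sysctl -w fs.file-max=2097152
echo 'fs.file-max = 2097152' >> /etc/sysctl.conf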
2011 Feb 04
1
3.1.2 Debian - client_rpc_notify "failed to get the port number for remote subvolume"
I have glusterfs 3.1.2 running on Debian. I'm able to start the volume
and mount it via mount -t glusterfs, and I can see everything. I am
still seeing the following error in /var/log/glusterfs/nfs.log:
[2011-02-04 13:09:16.404851] E
[client-handshake.c:1079:client_query_portmap_cbk]
bhl-volume-client-98: failed to get the port number for remote
subvolume
[2011-02-04 13:09:16.404909] I
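The log excerpt is truncated above. As a general diagnostic rather than this thread's answer, that portmap error usually means the brick process behind the named subvolume is not running or not registered with glusterd, which can be checked with ordinary tools on the server:

# is a glusterfsd brick process running for the volume?
ps ax | grep glusterfsd
# which ports are the gluster daemons actually listening on?
netstat -ntlp | grep gluster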
2013 Feb 19
1
Problems running dbench on 3.3
To test gluster's behavior under heavy load, I'm currently doing this on two machines sharing a common /mnt/gfs gluster mount:
ssh bal-6.example.com apt-get install dbench && dbench 6 -t 60 -D /mnt/gfs
ssh bal-7.example.com apt-get install dbench && dbench 6 -t 60 -D /mnt/gfs
One of the processes usually dies pretty quickly like this:
[608] open
2008 Jun 11
1
software raid performance
Are there known performance issues with using glusterfs on software RAID? I've
been playing with a variety of configs (AFR, AFR with Unify) on a two-server
setup. Everything seems to work well, but performance (creating files,
reading files, appending to files) is very slow. Using the same configs on
two machines without software RAID shows significant performance increases.
Before I go a
2008 Dec 10
3
AFR healing problem after one node returns.
I've got a configuration which, put simply, combines AFRs and unify: the
servers export n[1-3]-brick[12] and n[1-3]-ns, and the client has this
cluster configuration:
volume afr-ns
type cluster/afr
subvolumes n1-ns n2-ns n3-ns
option data-self-heal on
option metadata-self-heal on
option entry-self-heal on
end-volume
volume afr1
type cluster/afr
subvolumes n1-brick2
2009 Jan 14
4
locks feature not loading ? (2.0.0rc1)
Hi all,
I upgraded from 1.4.0rc3 to 2.0.0rc1 in my test environment, and while
the upgrade itself went smoothly, I appear to be having problems with
the (posix-)locks feature. :( The feature is clearly declared in the
server config file, and according to the DEBUG-level logs, it is loaded
successfully at runtime; however, when Gluster attempts to lock an
object (for the purposes of AFR
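The post is truncated above. For reference, declaring the locks feature on the server side in that era's volfile syntax follows the stacked-translator pattern used elsewhere in these threads; a minimal sketch with placeholder names and paths:

volume brick-posix
type storage/posix
# placeholder export directory
option directory /export/data
end-volume
volume brick-locks
# posix locks layered over the posix store
type features/posix-locks
subvolumes brick-posix
end-volume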
2008 Oct 07
4
gluster over infiniband....
Hey guys,
I am running gluster over InfiniBand, and I have a couple of questions.
We have four servers, each with one disk, that I am trying to access over InfiniBand using gluster. The servers look like they start okay; here are the last 10 or so lines of a client log (they are all identical):
2008-10-07 07:18:40 D [spec.y:196:section_sub] parser: child:stripe0->remote1
2008-10-07 07:18:40 D
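The client log is truncated above. For orientation, a client-side protocol/client volume over InfiniBand in that era's volfile syntax looks roughly like the sketch below; the host, remote subvolume name, and the exact verbs transport string are placeholders and should be checked against the installed version:

volume remote1
type protocol/client
# verbs transport instead of tcp/client
option transport-type ib-verbs/client
option remote-host server1
option remote-subvolume brick
end-volume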
2008 Nov 04
1
fuse_setlk_cbk error
I'm building a two-node cluster to run vserver systems on. I've set up
glusterfs with this config:
# node a
volume data-posix
type storage/posix
option directory /export/cluster
end-volume
volume data1
type features/posix-locks
subvolumes data-posix
end-volume
volume data2
type protocol/client
option transport-type tcp/client
option remote-host
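The volfile is cut off inside the protocol/client block above. The usual next step in a two-node mirror like this is an AFR volume over the local locked brick and the remote client, along the lines of the cluster/afr examples elsewhere in these threads; a sketch only, not the poster's actual config:

volume data-afr
type cluster/afr
subvolumes data1 data2
end-volume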
2010 Apr 22
1
Transport endpoint not connected
Hey guys,
I've recently implemented gluster to share web content read-write between
two servers.
Version: glusterfs 3.0.4 built on Apr 19 2010 16:37:50
Fuse: 2.7.2-1ubuntu2.1
Platform: Ubuntu 8.04 LTS
I used the following command to generate my configs:
/usr/local/bin/glusterfs-volgen --name repstore1 --raid 1
10.10.130.11:/data/export
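The command above is truncated after the first brick. glusterfs-volgen with --raid 1 takes both bricks, so the full invocation would look roughly like this; the second server address is a placeholder:

/usr/local/bin/glusterfs-volgen --name repstore1 --raid 1 \
    10.10.130.11:/data/export <second-server>:/data/export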
2008 Dec 18
3
Feedback and Questions on afr+unify
Hi,
I just installed and configured a couple of machines with glusterfs
(1.4.0-rc3). It seems to work great. Thanks for the amazing software!
I've been looking for something like this for years.
I have some feedback and questions. My configuration is a bit
complicated. I have two machines, each with two disks, each of which
has two partitions that I wanted to use (i.e. 8
2012 Sep 10
1
A problem with gluster 3.3.0 and Sun Grid Engine
Hi,
We have a huge problem on our Sun Grid Engine cluster with glusterfs
3.3.0. Could somebody help me?
Based on my understanding, if a folder is removed and recreated on
another client node, a program that tries to create a new file under that
folder fails very often.
We partially fixed this problem by running "ls" on the folder before doing
anything in our command; however, Sun Grid Engine
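A minimal sketch of the partial workaround described above, as it might appear in a job script; the paths are hypothetical:

# force a fresh lookup of the possibly removed-and-recreated directory
ls /mnt/gfs/job_output > /dev/null 2>&1
# the create under that directory is then much less likely to fail
touch /mnt/gfs/job_output/result.dat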
2008 Oct 31
3
Problem with xlator
Hi,
I have the following scenario:
#############################################################################
SERVER SIDE (64-bit architecture)
#############################################################################
Two Storage Machines with:
HARDWARE
DELL PE2900 III, Intel Quad Core Xeon E5420 2.5 GHz, 2x6 MB cache, 1333 MHz FSB
RAM: 4 GB FB-DIMM 667 MHz (2x2 GB)
8x 1 TB HDD,
2008 Nov 04
1
Re-exporting glusterfs to NFS fails
Hi,
I have a machine that must re-export glusterfs over NFS.
CONFIGURATION
2 glusterfs servers
        |
        |
        |
1 glusterfs client
1 nfs server
        |
        |
        |
1 nfs client
#***********************************************
# GLUSTERFS SERVER
#***********************************************
# Export with glusterfs
$ glusterfs -f /etc/glusterfs/glusterfs-server.vol
$ cat
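The configuration listing is cut off above. As general background rather than anything from this thread, re-exporting a FUSE-mounted GlusterFS volume through the kernel NFS server needs an explicit fsid in /etc/exports, roughly as follows; the mount point, client network, and fsid value are placeholders:

# /etc/exports on the machine acting as glusterfs client + nfs server
/mnt/glusterfs  192.168.0.0/24(rw,sync,fsid=14,no_subtree_check)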
2012 Feb 18
1
Gluster NFS and symlink
Hi list,
Is there a Gluster configuration that makes symlinks work over Gluster NFS exports?
When I try to create a symlink on a GlusterFS NFS mount I get:
ln: creating symbolic link `test' to `httpdocs': Unknown error 526
From nfs.log:
[2012-02-18 01:27:27.541155] E [client3_1-fops.c:173:client3_1_symlink_cbk] 0-dcm-gluster-backup1-client-0: remote operation failed: Operation not