Displaying 12 results from an estimated 12 matches similar to: "Strange server locks issues with 2.0.7 - updating"
2011 Sep 26 (1 reply)
Is gluster suitable and production ready for email/web servers?
I've been leaning towards actually deploying gluster in one of my
projects for a while and finally a probable candidate project came up.
However, researching the specific use case, it seems that gluster
isn't really suitable for load profiles that deal with lots of
concurrent small files; see, e.g.:
http://www.techforce.com.br/news/linux_blog/glusterfs_tuning_small_files
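For what it's worth, the tuning advice in articles like the one above largely comes down to turning up the client-side caching translators. A rough sketch with the gluster CLI (the volume name "mailvol" and the values are only illustrative):

  # raise/enable the caching translators that help small-file workloads
  gluster volume set mailvol performance.cache-size 256MB
  gluster volume set mailvol performance.io-thread-count 16
  gluster volume set mailvol performance.quick-read on
  gluster volume set mailvol performance.stat-prefetch on

Even so, much of the per-file cost is in lookups across subvolumes, which caching cannot fully remove; that is why such workloads remain hard for gluster.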
2010 Jun 22 (0 replies)
Performance questions/tweaks on Gluster
I've now got Gluster 3.0.4 up and running on my servers.
Setup:
gluster1 - back end file server
gluster2 - ditto. Redundant. These are set up with the --raid 1 option
app1 - my app server mounts /data/export on /home
app2 - ditto
The appropriate conf files are attached.
My simple test involves reading an input file and writing it to the
Gluster home versus local /tmp.
These tests were
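The snippet is cut off here, but a minimal sketch of the kind of comparison described (the file names and the /home mount point are assumptions):

  # time a copy onto the Gluster-backed /home vs. local /tmp
  dd if=/dev/urandom of=/tmp/input.dat bs=1M count=100
  time cp /tmp/input.dat /home/output.dat    # Gluster mount
  time cp /tmp/input.dat /tmp/output.dat     # local disk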
2013 Sep 06 (2 replies)
[Gluster-devel] GlusterFS 3.3.1 client crash (signal received: 6)
It's a pity that I don't know how to re-create the issue; out of 120 clients in total, 1-2 crash every day.
Below is gdb result:
(gdb) where
#0 0x0000003267432885 in raise () from /lib64/libc.so.6
#1 0x0000003267434065 in abort () from /lib64/libc.so.6
#2 0x000000326746f7a7 in __libc_message () from /lib64/libc.so.6
#3 0x00000032674750c6 in malloc_printerr () from
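The backtrace is cut off here. Since the crash cannot be reproduced on demand, one option is to let the clients write core dumps and inspect them after the fact; a minimal sketch (the binary and core file paths vary by distribution):

  # allow core dumps in the shell that starts the client
  ulimit -c unlimited
  # after the next crash, pull a full backtrace from the core
  gdb /usr/sbin/glusterfs /path/to/core
  (gdb) thread apply all bt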
2011 Jan 13 (0 replies)
distribute-replicate setup GFS Client crashed
Hi there,
I'm running glusterfs version 3.1.0.
The client crashed after some time with the stack below.
[2011-01-13 08:33:49.230976] I [afr-common.c:2568:afr_notify] replicate-1:
Subvolume 'distribute-1' came back up; going online.
[2011-01-13 08:33:49.499909] I [afr-open.c:393:afr_openfd_sh] replicate-1:
data self-heal triggered. path:
/streaming/set3/work/reduce.12.1294902171.dplog.temp,
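The log is cut off here. In releases of this vintage, the usual way to force a full self-heal pass after such events was to crawl the mount point; a minimal sketch (the /mnt/glusterfs mount point is an assumption):

  # stat every file through the mount to trigger AFR self-heal
  find /mnt/glusterfs -noleaf -print0 | xargs -0 stat >/dev/null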
2007 Dec 09 (38 replies)
libevent
Hello,
I have been looking at Ruby/EventMachine. First let me say it looks very
good. The reactor model with no threads makes for a fast, reliable server, and I
have read about the marvelous Twisted framework for Python and am glad to see
something similar for Ruby.
I am writing a network app with Ruby threads now and it is very slow, and I tried
the new Ruby 1.9 with native threads, which made it much slower.
2012 Sep 04 (4 replies)
Advice on partitioning a Dell MD1200 disk array
Hi,
I've just taken possession of a Dell PE R720 with 2 MD1200 disk
enclosures.
Both MD1200s are fully populated with 12 x 3 TB disks.
The system will basically be a student file server running CentOS 6.x,
serving files of various sizes, from small C programs to multi-gigabyte
audio and video files, over gigabit Ethernet.
The first MD1200 will be configured as the NFS disk. The requirements
are for 6
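One constraint worth noting: 3 TB disks (and any RAID virtual disk over 2 TB) need a GPT label rather than an MS-DOS one. A minimal sketch, assuming the first array shows up as /dev/sdb:

  # GPT label, one large partition, XFS for the mixed-size file workload
  parted /dev/sdb mklabel gpt
  parted -a optimal /dev/sdb mkpart primary 0% 100%
  mkfs.xfs /dev/sdb1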
2012 Feb 05 (2 replies)
Would difference in size (and content) of a file on replicated bricks be healed?
Hi...
I've started playing with gluster, and the heal function is my "target" for
testing.
Short description of my test
----------------------------
* 4 replicas on single machine
* glusterfs mounted locally
* Create file on glusterfs-mounted directory: date >data.txt
* Append to file on one of the bricks: hostname >>data.txt
* Trigger a self-heal with: stat data.txt
=>
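Spelled out as commands, the test looks roughly like this (the mount point and brick paths are assumptions):

  # file created through the mount, then modified behind gluster's back
  date > /mnt/gluster/data.txt             # via the glusterfs mount
  hostname >> /export/brick1/data.txt      # directly on one of the bricks
  stat /mnt/gluster/data.txt               # trigger self-heal via the mount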
2008 Dec 10 (3 replies)
AFR healing problem after one node returns.
I've got a configuration which, put simply, combines AFR and unify: the
servers export n[1-3]-brick[12] and n[1-3]-ns, and the client has this
cluster configuration:
volume afr-ns
  type cluster/afr
  subvolumes n1-ns n2-ns n3-ns
  option data-self-heal on
  option metadata-self-heal on
  option entry-self-heal on
end-volume
volume afr1
  type cluster/afr
  subvolumes n1-brick2
2010 Mar 15 (1 reply)
Glusterfs 3.0.X crashed on Fedora 11
glusterfs 3.0.X crashed on Fedora 12 with a buffer overflow; it seems fine on Fedora 11.
fuse 2.8.1-4.fc12 (x86_64)
glibc 2.11.1-1 (x86_64)
complete log:
======================================================================================================
[root@test_machine06 ~]# glusterfsd
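The log is cut off here. To capture more than the abort message, one option is to run the daemon in the foreground with debug logging; a sketch (the volfile path is an assumption):

  # run glusterfsd in the foreground with debug-level logging
  glusterfsd -N -L DEBUG -l /dev/stdout -f /etc/glusterfs/glusterfsd.vol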
2010 Apr 22 (1 reply)
Transport endpoint not connected
Hey guys,
I've recently implemented gluster to share web content read-write between
two servers.
Version : glusterfs 3.0.4 built on Apr 19 2010 16:37:50
Fuse : 2.7.2-1ubuntu2.1
Platform : ubuntu 8.04LTS
I used the following command to generate my configs:
/usr/local/bin/glusterfs-volgen --name repstore1 --raid 1
10.10.130.11:/data/export
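The command is cut off here. For what it's worth, "Transport endpoint is not connected" usually means the FUSE mount has died underneath the kernel; the common recovery is a lazy unmount followed by a remount (the mount point and volfile name below are assumptions):

  # detach the dead FUSE mount, then remount the volume
  umount -l /var/www
  glusterfs -f /etc/glusterfs/repstore1-tcp.vol /var/www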
2011 Jun 09 (1 reply)
NFS problem
Hi,
I've got the same problem as Juergen.
My volume is a simple replicated volume with 2 hosts and GlusterFS 3.2.0:
Volume Name: poolsave
Type: Replicate
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: ylal2950:/soft/gluster-data
Brick2: ylal2960:/soft/gluster-data
Options Reconfigured:
diagnostics.brick-log-level: DEBUG
network.ping-timeout: 20
performance.cache-size: 512MB
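When the built-in NFS server is the suspect, the first things to check from a client are usually the portmapper registrations and the export list; a short sketch (gluster NFS only speaks NFSv3 over TCP; the host name is taken from the brick list above, the mount point is an assumption):

  # is the gluster NFS server registered and exporting the volume?
  rpcinfo -p ylal2950      # expect nfs (100003) and mountd (100005)
  showmount -e ylal2950    # should list /poolsave
  mount -t nfs -o vers=3,tcp ylal2950:/poolsave /mnt/test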
2007 Nov 14 (10 replies)
[GE users] Apple Leopard has dtrace -- anyone used the SGE probes/scripts yet?
Hi,
Chris (cc) and I are trying to get the SGE master monitor to work with Apple
Leopard's dtrace. Unfortunately we are stuck with the error message below.
Does anyone have an idea what the cause could be? What I can rule out as the
cause is function inlining, for the reasons explained below.
Background information on SGE master monitor implementation is under
http://wiki.gridengine.info/wiki/index.php/Dtrace
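Since inlined functions get no pid-provider probes, a quick sanity check is to list the probes dtrace can actually see in the running master; a minimal sketch (the pid and function name placeholders are assumptions):

  # list entry probes the pid provider can attach to in sge_qmaster
  sudo dtrace -l -n 'pid$target:::entry' -p <pid-of-sge_qmaster> | grep <function>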