Displaying 20 results from an estimated 100 matches similar to: "Crashing (signal received: 11)"
2010 Nov 11
1
Possible split-brain
Hi all,
I have 4 glusterd servers running a single glusterfs volume. The volume was created using the gluster command line, with no changes from default. The same machines all mount the volume using the native glusterfs client:
[root at localhost ~]# gluster volume create datastore replica 2 transport tcp 192.168.253.1:/glusterfs/primary 192.168.253.3:/glusterfs/secondary
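For reference, a minimal sketch of the native-client mount described above, assuming a mount point of /mnt/datastore (not given in the original post):
# run on each of the four servers; the mount point is an assumption
mkdir -p /mnt/datastore
mount -t glusterfs 192.168.253.1:/datastore /mnt/datastore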
2012 Nov 30
2
"layout is NULL", "Failed to get node-uuid for [...] and other errors during rebalancing in 3.3.1
I started rebalancing my volume after updating from 3.2.7 to 3.3.1.
After a few hours, I noticed a large number of failures in the rebalance
status:
> Node       Rebalanced-files   size     scanned    failures   status
> ---------  ----------------   ------   --------   --------   ------------
> localhost  0                  0Bytes   4288805
2013 Feb 19
1
Problems running dbench on 3.3
To test gluster's behavior under heavy load, I'm currently doing this on two machines sharing a common /mnt/gfs gluster mount:
ssh bal-6.example.com 'apt-get install dbench && dbench 6 -t 60 -D /mnt/gfs'
ssh bal-7.example.com 'apt-get install dbench && dbench 6 -t 60 -D /mnt/gfs'
One of the processes usually dies pretty quickly like this:
[608] open
2013 Dec 03
3
Self Heal Issue GlusterFS 3.3.1
Hi,
I'm running glusterFS 3.3.1 on Centos 6.4.
# gluster volume status
Status of volume: glustervol
Gluster process                                         Port    Online  Pid
------------------------------------------------------------------------------
Brick KWTOCUATGS001:/mnt/cloudbrick                     24009   Y       20031
Brick KWTOCUATGS002:/mnt/cloudbrick
2012 Jan 13
1
Quota problems with Gluster3.3b2
Hi everyone,
I'm playing with Gluster3.3b2, and everything is working fine when
uploading stuff through swift. However, when I enable quotas on Gluster,
I randomly get permission errors. Sometimes I can upload files, most
times I can't.
I'm mounting the partitions with the acl flag. I've tried wiping
everything out and starting from scratch, with the same result. As soon as I
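A minimal sketch of the quota-plus-acl setup being described, assuming a volume named vol0, a server named server1, and a 10GB limit purely for illustration:
# enable quota and set a directory limit (names and limit are assumptions)
gluster volume quota vol0 enable
gluster volume quota vol0 limit-usage / 10GB
# mount with POSIX ACLs enabled, as the poster describes
mount -t glusterfs -o acl server1:/vol0 /mnt/vol0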
2011 Jun 29
1
Possible new bug in 3.1.5 discovered
"May you live in interesting times"
Is this a curse or a blessing? :)
I've just tested a 3.1.5 GlusterFS native client against a 3.1.3 storage pool using this volume:
Volume Name: pfs-rw1
Type: Distributed-Replicate
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: jc1letgfs16-pfs1:/export/read-write/g01
Brick2: jc1letgfs13-pfs1:/export/read-write/g01
2013 Dec 09
0
Gluster - replica - Unable to self-heal contents of '/' (possible split-brain)
Hello,
I'm trying to build a replica volume on two servers.
The servers are blade6 and blade7 (there is another blade1 in the peer list,
but with no volumes).
The volume seems OK, but I cannot mount it over NFS.
Here are some logs:
[root@blade6 stor1]# df -h
/dev/mapper/gluster_stor1 882G 200M 837G 1% /gluster/stor1
[root@blade7 stor1]# df -h
/dev/mapper/gluster_fast
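A sketch of how the suspected split-brain could be confirmed on a 3.3+ volume; the volume name stor1 is inferred from the mount point, so treat both names below as assumptions:
# list entries the self-heal daemon considers split-brain
gluster volume heal stor1 info split-brain
# inspect the AFR changelog xattrs on each brick's copy of a suspect file
getfattr -d -m trusted.afr -e hex /gluster/stor1/<path-to-file>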
2011 Mar 03
3
Mac / NFS problems
Hello,
We're having issues with Macs writing to our Gluster system.
Gluster vol info at end.
On a Mac, if I create a file in the shell, I get the following message:
smoke:hunter david$ echo hello > test
-bash: test: Operation not permitted
And the file is made but is zero size.
smoke:hunter david$ ls -l test
-rw-r--r-- 1 david realise 0 Mar 3 08:44 test
glusterfs/nfslog logs thus:
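A hedged sketch of the usual workaround for this Mac symptom: the OS X NFS client uses a non-reserved source port by default, which Gluster's NFS server rejects. Whether that is the cause here is an assumption:
# on the gluster side, accept NFS calls from non-reserved ports
gluster volume set <volname> nfs.ports-insecure on
# or, on the Mac, force a reserved source port when mounting
sudo mount -t nfs -o resvport server:/volname /mnt/gluster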
2011 Oct 25
1
problems with gluster 3.2.4
Hi, we have 4 test machines (gluster01 to gluster04).
I've created a replicated volume with the 4 machines.
Then on the client machine I've executed:
mount -t glusterfs gluster01:/volume01 /mnt/gluster
And everything works ok.
The problem occurs on every client machine where I do:
umount /mnt/gluster
and then
mount -t glusterfs gluster01:/volume01 /mnt/gluster
The client
2013 Sep 05
1
NFS can't be used by ESXi with Striped Volume
After some testing, I can confirm that ESXi can't use a Striped-Replicate
volume over GlusterFS's NFS, but it does work with Distributed-Replicate.
Does anyone know how or why?
2013/9/5 higkoohk <higkoohk at gmail.com>
> Thanks Vijay !
>
> It runs successfully after 'volume set images-stripe nfs.nlm off'.
>
> Now I can use ESXi with GlusterFS's NFS export.
>
> Many
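For readers hitting the same thing, the fix quoted above as a complete command (images-stripe is the volume name from the original thread):
# disable the NLM lock manager on the volume's NFS export
gluster volume set images-stripe nfs.nlm off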
2011 Aug 24
1
Input/output error
Hi, everyone.
It's nice to meet you.
I am poor at English...
I am writing because I'd like to update GlusterFS to 3.2.2-1, and I want
to change from a gluster mount to an NFS mount.
I installed GlusterFS 3.2.1 a week ago, with replica 2 across two servers.
OS:CentOS5.5 64bit
RPM:glusterfs-core-3.2.1-1
glusterfs-fuse-3.2.1-1
command
gluster volume create syncdata replica 2 transport tcp
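A minimal sketch of the two mount styles being compared, assuming the syncdata volume from the create command and a hypothetical server name server1; Gluster's built-in NFS server speaks NFSv3 only:
# native (FUSE) client mount
mount -t glusterfs server1:/syncdata /mnt/syncdata
# NFS mount of the same volume
mount -t nfs -o vers=3,tcp,nolock server1:/syncdata /mnt/syncdata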
2013 Jan 07
0
access a file on one node, split brain, while it's normal on another node
Hi, everyone:
We have a glusterfs cluster running version 3.2.7. The volume info is as below:
Volume Name: gfs1
Type: Distributed-Replicate
Status: Started
Number of Bricks: 94 x 3 = 282
Transport-type: tcp
We mount the volume natively on all nodes. When we access the file
'/XMTEXT/gfs1_000/000/000/095' on one node, the error is split brain,
while we can access the same file on
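On 3.2.x, the usual manual recovery, sketched under the assumption that one replica is known good; the brick and mount paths below are hypothetical:
# on the server holding the bad copy, remove it directly from the brick
rm /bricks/gfs1/XMTEXT/gfs1_000/000/000/095
# from a client mount, re-stat the file so the good copy is re-replicated
stat /mnt/gfs1/XMTEXT/gfs1_000/000/000/095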
2012 Jun 16
5
Not real confident in 3.3
I do not mean to be argumentative, but I have to admit a little
frustration with Gluster. I know an enormous amount of effort has gone
into this product, and I just can't believe that with all the effort
behind it and so many people using it, it could be so fragile.
So here goes. Perhaps someone here can point to the error of my ways. I
really want this to work because it would be ideal
2012 Mar 12
0
Data consistency with Gluster 3.2.5
I have set up a replicated, four-node gluster config for a web farm. The
idea is that each web node is its own Gluster server and has its own copy
of the entire web root locally; it then mounts the volume from itself.
We're running it over dual GigE NICs, bonded.
The problem I am having is when we switch live traffic to nodes in the
cluster, they almost immediately get
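A sketch of the self-mount pattern being described, as an /etc/fstab line; the volume name webroot and the mount point /var/www are assumptions:
# each web node mounts the replicated volume from itself
localhost:/webroot  /var/www  glusterfs  defaults,_netdev  0 0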
2013 Feb 26
0
Replicated Volume Crashed
Hi,
I have a gluster volume that consists of 22 bricks and includes a single
folder with 3.6 million files. Yesterday the volume crashed and became
completely unresponsive, and I was forced to hard-reboot all gluster
servers: they were so heavily overloaded that they could not execute a
reboot command issued from the shell. Each gluster server has 12 CPU cores
2013 Nov 27
0
NFS client problems
I have created a 2-node replicated cluster with GlusterFS 3.4.1 on CentOS 6.4. Mounting the volume locally on each server using the native client works fine; however, I am having issues with a separate client-only server from which I wish to mount the gluster volume over NFS.
Volume Name: glustervol
Type: Replicate
Volume ID: 6a5dde86-...
Status: Started
Number of Bricks: 1 x 2 = 2
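A hedged sketch of mounting glustervol from the client-only box over NFS; the server name gluster1 is an assumption, and Gluster 3.4's built-in NFS server requires NFSv3 over TCP:
mount -t nfs -o vers=3,tcp,nolock gluster1:/glustervol /mnt/glustervol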
2012 Feb 05
2
Would difference in size (and content) of a file on replicated bricks be healed?
Hi...
Started playing with gluster, and the heal function is my "target" for
testing.
Short description of my test
----------------------------
* 4 replicas on single machine
* glusterfs mounted locally
* Create file on glusterfs-mounted directory: date >data.txt
* Append to file on one of the bricks: hostname >>data.txt
* Trigger a self-heal with: stat data.txt
=>
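The test as a runnable sketch, assuming the volume is mounted at /mnt/testvol and one brick lives at /bricks/brick1 (both paths are assumptions; writing to a brick directly is the deliberate corruption step):
# create a file through the glusterfs mount
date > /mnt/testvol/data.txt
# diverge one brick's copy behind gluster's back
hostname >> /bricks/brick1/data.txt
# a lookup from the mount should trigger self-heal on the file
stat /mnt/testvol/data.txt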
2011 Dec 14
1
glusterfs crash when one of the replicate nodes restarts
Hi, we have used glusterfs for two years. After upgrading to 3.2.5, we
discovered that when one of the replicate nodes reboots and starts the
glusterd daemon, gluster crashes because the other replicate node's CPU
usage reaches 100%.
Our gluster info:
Type: Distributed-Replicate
Status: Started
Number of Bricks: 5 x 2 = 10
Transport-type: tcp
Options Reconfigured:
performance.cache-size: 3GB
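A sketch of checking and lowering the option shown above, in case the 3GB io-cache is implicated in the 100% CPU spin; the volume name and the lower value are assumptions, not advice from the thread:
# inspect the current setting, then reduce the io-cache size
gluster volume info | grep performance.cache-size
gluster volume set <volname> performance.cache-size 512MB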
2011 Jan 13
0
distribute-replicate setup GFS Client crashed
Hi there,
I'm running glusterfs version 3.1.0.
The client crashed after some time with the stack below.
[2011-01-13 08:33:49.230976] I [afr-common.c:2568:afr_notify] replicate-1:
Subvolume 'distribute-1' came back up; going online.
[2011-01-13 08:33:49.499909] I [afr-open.c:393:afr_openfd_sh] replicate-1:
data self-heal triggered. path:
/streaming/set3/work/reduce.12.1294902171.dplog.temp,
2012 May 29
2
When is self-healing triggered?
Hi, when is self-healing triggered? As you can see below, it has been
triggered; however, I checked the logs and there was no disconnection from
the FTP servers, so I can't understand why it was triggered. Client-7
comes back online, so could the image differ due to some corrupted file?
Or was the FTP server for some reason unable to write to one of the
replicated storages (client-6 and
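In AFR, self-heal is triggered on lookup when a file's changelog xattrs record pending operations for a replica, for example after a write failed on one copy, so a full disconnection is not strictly required. A sketch of inspecting the pending state (paths and names are assumptions):
# inspect the pending-operation changelog on one brick's copy
getfattr -d -m trusted.afr -e hex /bricks/brick1/path/to/file
# trigger a heal check from the client mount
stat /mnt/vol/path/to/file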