Displaying 20 results from an estimated 8000 matches similar to: "copying failed once brick-replace is starting"
2008 Nov 18
1
gluster, where have you been all my life?
Hi All
I've been looking for something like Gluster for a while and stumbled on it
today via the wikipedia pages on Filesystems etc.
I have a few very very simple questions that might even be too simple to be
on the FAQ, but if you think any of them are decent please add them there.
I think it might help if I start with what I want to achieve, then ask the
questions. We want to build a high
2013 Oct 26
1
Crashing (signal received: 11)
I am seeing this crash happening; I am also working on the self-heal errors, and I'm not sure whether the two are related. I would appreciate any direction on resolving the issue, as I have clients dropping their connections daily.
[2013-10-26 15:35:46.935903] E [afr-self-heal-common.c:2160:afr_self_heal_completion_cbk] 0-ENTV04EP-replicate-9: background meta-data self-heal failed on /
[2013-10-26
2008 Dec 10
1
df returns weird values
Hi,
I'm starting to play with glusterfs, and I'm having a problem with the df
output.
The value seems to be wrong.
(on the client)
/var/mule-client$ du -sh
584K .
/var/mule-client$ df -h /var/mule-client/
Filesystem Size Used Avail Use% Mounted on
glusterfs 254G 209G 32G 88% /var/mule-client
(on the server)
/var/mule$ du -sh
584K .
Is it a known
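This mismatch is expected: df on a glusterfs mount reports the size and usage of the bricks' underlying filesystems (added up across distribute subvolumes), so it counts everything living on those filesystems, while du only walks the files in the directory tree. A quick way to confirm, reusing the paths from the post:
# on the client: the volume-level view
df -h /var/mule-client/
# on the server: the brick's backing filesystem; expect the same totals
df -h /var/mule
# du only counts the data inside the brick directory
du -sh /var/mule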
2013 Oct 14
1
Error while running make on Linux machine - unable to install glusterfs
I am trying to install glusterfs on a Linux machine.
glusterfs version 3.3.2
uname -orm
GNU/Linux 2.6.22-6.4.3-amd64-2527508 x86_64

Ran ./configure --prefix=mytempdir

No errors reported here.
When I ran make, I got this error. Any help appreciated. I am a newbie.
Thanks,
CR
make --no-print-directory --quiet all-recursive
Making all in argp-standalone
Making all in .
Making all
2013 Feb 19
1
Problems running dbench on 3.3
To test gluster's behavior under heavy load, I'm currently doing this on two machines sharing a common /mnt/gfs gluster mount:
ssh bal-6.example.com apt-get install dbench && dbench 6 -t 60 -D /mnt/gfs
ssh bal-7.example.com apt-get install dbench && dbench 6 -t 60 -D /mnt/gfs
One of the processes usually dies pretty quickly like this:
[608] open
2013 Nov 29
1
Self heal problem
Hi,
I have a glusterfs volume replicated on three nodes. I am planning to use
the volume as storage for VMware ESXi machines using NFS. The reason for
using three nodes is to be able to configure quorum and avoid
split-brains. However, during my initial testing, when I intentionally and
gracefully restarted the node "ned", a split-brain/self-heal error
occurred.
The log on "todd"
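For a three-way replica, the usual way to avoid this is to enable quorum, so that writes are refused (or bricks shut down) when a majority of the replica set is not reachable. A minimal sketch; the volume name "vmstore" is an assumption, not from the post:
# client-side quorum: writes need a majority of the replica set
gluster volume set vmstore cluster.quorum-type auto
# server-side quorum: bricks are stopped when the trusted pool loses quorum
gluster volume set vmstore cluster.server-quorum-type server
# after restarting a node such as "ned", check what still needs healing
gluster volume heal vmstore info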
2013 Dec 03
3
Self Heal Issue GlusterFS 3.3.1
Hi,
I'm running glusterFS 3.3.1 on Centos 6.4.
gluster volume status
Status of volume: glustervol
Gluster process Port Online Pid
------------------------------------------------------------------------------
Brick KWTOCUATGS001:/mnt/cloudbrick 24009 Y 20031
Brick KWTOCUATGS002:/mnt/cloudbrick
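On 3.3.x the self-heal daemon can be queried from the CLI; something along these lines (using the volume name from the post) is usually the first step:
# files still pending heal
gluster volume heal glustervol info
# files the daemon failed to heal, and detected split-brains
gluster volume heal glustervol info heal-failed
gluster volume heal glustervol info split-brain
# force a full crawl if index-based healing is not picking things up
gluster volume heal glustervol full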
2013 Aug 28
1
volume on btrfs brick and copy-on-write
Hello
Is it possible to take advantage of copy-on-write implemented in btrfs if
all bricks are stored on it? If not, is there any other mechanism (in
glusterfs) which supports CoW?
regards
--
Maciej Gałkiewicz
Shelly Cloud Sp. z o. o., Sysadmin
http://shellycloud.com/, macias at shellycloud.com
KRS: 0000440358 REGON: 101504426
2013 Mar 20
2
Writing to the data brick path instead of fuse mount?
So I noticed that if I create files in the data brick path, the files travel to
the other hosts too. Can I use the data brick path instead of a fuse
mount? I'm running two machines with two replicas. What happens
if I do stripes? Some machines are clients as well as servers. Thanks!
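Writing directly into a brick directory bypasses the client-side translators, so replication (and the afr changelog xattrs it relies on) is not maintained; data should only be written through a glusterfs or NFS mount. A minimal sketch, with host and volume names assumed:
# mount the volume over FUSE and write here, never into the brick path itself
mount -t glusterfs server1:/myvol /mnt/gfs
cp somefile /mnt/gfs/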
2013 Nov 28
1
how to recover an accidentally deleted brick directory?
hi all,
I accidentally removed the brick directory of a volume on one node; the
replica count for this volume is 2.
Now the situation is: there is no corresponding glusterfsd process on
this node, and 'gluster volume status' shows that the brick is offline,
like this:
Brick 192.168.64.11:/opt/gluster_data/eccp_glance N/A Y 2513
Brick 192.168.64.12:/opt/gluster_data/eccp_glance
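The usual recovery for a deleted brick directory on a replica 2 volume is to recreate it, restore the volume-id xattr that the brick process checks at startup, and then let self-heal repopulate it. A sketch reusing the path from the post; the volume name "eccp_glance" is a guess based on that path:
# on the affected node: recreate the empty brick directory
mkdir -p /opt/gluster_data/eccp_glance
# on the healthy node: read the volume-id xattr from the surviving brick
getfattr -n trusted.glusterfs.volume-id -e hex /opt/gluster_data/eccp_glance
# on the affected node: set the same value on the recreated directory
setfattr -n trusted.glusterfs.volume-id -v 0x<value-from-good-brick> /opt/gluster_data/eccp_glance
# restart the missing brick process and trigger a full heal
gluster volume start eccp_glance force
gluster volume heal eccp_glance full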
2013 Jan 07
0
access a file on one node, split brain, while it's normal on another node
Hi, everyone:
We have a glusterfs cluster, version 3.2.7. The volume info is as below:
Volume Name: gfs1
Type: Distributed-Replicate
Status: Started
Number of Bricks: 94 x 3 = 282
Transport-type: tcp
We natively mount the volume on all nodes. When we access the file
"/XMTEXT/gfs1_000/000/000/095" on one node, the error is split brain,
while we can access the same file on
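On 3.2.x there is no heal CLI, so the usual first step is to compare the afr changelog xattrs of the file on each brick that holds a replica: each copy records pending-operation counters against the other replicas, and when both copies accuse each other the file is in split-brain. The copy chosen as bad is typically removed from its brick so heal can copy from the good one. A sketch, with the brick path assumed:
# run as root on each server holding a replica of the file
getfattr -d -m . -e hex /export/brick1/XMTEXT/gfs1_000/000/000/095
# look at the trusted.afr.gfs1-client-N values in the output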
2013 Apr 30
1
Volume heal daemon 3.4alpha3
gluster> volume heal dyn_coldfusion
Self-heal daemon is not running. Check self-heal daemon log file.
gluster>
Is there a specific log? When I check /var/log/glusterfs/glustershd.log
glustershd.log:[2013-04-30 15:51:40.463259] E
[afr-self-heald.c:409:_crawl_proceed] 0-dyn_coldfusion-replicate-0:
Stopping crawl for dyn_coldfusion-client-1 , subvol went down
Is there a specific log? When
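When glustershd is reported as not running, 'gluster volume start <vol> force' will restart any missing brick, NFS, or self-heal daemon processes without disturbing the ones already up; roughly:
# confirm which daemons are down (the self-heal daemon is listed in status output)
gluster volume status dyn_coldfusion
# respawn the missing processes, then retry the heal
gluster volume start dyn_coldfusion force
gluster volume heal dyn_coldfusion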
2013 Jan 03
2
"Failed to perform brick order check,..."
Hi guys:
I have just installed gluster on a single instance, and the command:
gluster volume create gv0 replica 2 server.n1:/export/brick1
server.n1:/export/brick2
returns with:
"Failed to perform brick order check... do you want to continue ..? y/N"?
What is the meaning of this error message, and why does brick order matter?
--
Jay Vyas
http://jayunit100.blogspot.com
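The check exists because, with replica 2, consecutive bricks in the create command are grouped into replica pairs, so the command above puts both copies of the data on server.n1 and a single server failure would take the whole pair offline. On two servers the bricks would normally be spread like this (the second hostname is assumed for illustration):
gluster volume create gv0 replica 2 server.n1:/export/brick1 server.n2:/export/brick1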
2017 Jul 25
0
recovering from a replace-brick gone wrong
Hi All,
I have a 4 node cluster with a 4 brick distribute replica 2 volume on
it running version 3.9.0-2 on CentOS 7. I use the cluster to provide
shared volumes in a virtual environment as our storage only serves
block storage.
For some reason I decided to make the bricks for this volume directly
on the block device rather than abstracting with LVM for easy space
management. The bricks have
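For reference, on 3.x releases of this vintage the supported way to swap a brick is a single commit-force replace, after which self-heal rebuilds the new brick from its replica partner. A sketch with hypothetical host and path names:
gluster volume replace-brick myvol oldhost:/bricks/old newhost:/bricks/new commit force
# then trigger and monitor the rebuild of the new brick
gluster volume heal myvol full
gluster volume heal myvol info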
2013 Feb 26
0
Replicated Volume Crashed
Hi,
I have a gluster volume that consists of 22 bricks and includes a single
folder with 3.6 million files. Yesterday the volume crashed and turned out
to be completely unresponsive, and I was forced to perform a hard reboot on
all gluster servers because they were so heavily overloaded that they could
not execute a reboot command issued from the shell. Each gluster
server has 12 CPU cores
2017 Nov 17
0
Help with reconnecting a faulty brick
On 11/17/2017 03:41 PM, Daniel Berteaud wrote:
> On Thursday, November 16, 2017 at 13:07 CET, Ravishankar N <ravishankar at redhat.com> wrote:
>
>> On 11/16/2017 12:54 PM, Daniel Berteaud wrote:
>>> Any way in this situation to check which file will be healed from
>>> which brick before reconnecting ? Using some getfattr tricks ?
>> Yes, there are afr
2008 Nov 20
1
My ignorance and Fuse (or glusterfs)
I have a very simple test setup of 2 servers each working as a glusterfs-server and glusterfs-client to the other in an afr capacity.
The gluster-c and gluster-s both start up with no errors and are handshaking properly.
On one server, I get the expected behaviour: I touch a file in the export dir and it magically appears in the other's mount point. On the other server, however, the file
2012 Nov 02
8
Very slow directory listing and high CPU usage on replicated volume
Hi all,
I am having problems with painfully slow directory listings on a freshly
created replicated volume. The configuration is as follows: 2 nodes with
3 replicated drives each. The total volume capacity is 5.6T. We would
like to expand the storage capacity much more, but first we need to figure
this problem out.
Soon after loading up about 100 MB of small files (about 300kb each), the
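For diagnosing where the time goes on listings like this, gluster's per-brick profiling is often the quickest tool; a sketch, with the volume name as a placeholder:
# start collecting per-FOP latency counts on every brick
gluster volume profile <VOLNAME> start
# reproduce a slow directory listing, then dump the statistics
gluster volume profile <VOLNAME> info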
2008 Nov 04
1
fuse_setlk_cbk error
I'm building a two-node cluster to run vserver systems on. I've set up
glusterfs with this config:
# node a
volume data-posix
type storage/posix
option directory /export/cluster
end-volume
volume data1
type features/posix-locks
subvolumes data-posix
end-volume
volume data2
type protocol/client
option transport-type tcp/client
option remote-host
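The excerpt is cut off, but in this legacy volfile style the client-side replication is normally expressed as a cluster/afr volume layered over the local locked subvolume and the protocol/client pointing at the peer. A rough sketch of that shape, not the poster's actual file:
volume data-replicate
type cluster/afr
subvolumes data1 data2
end-volume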
2017 Nov 17
2
Help with reconnecting a faulty brick
On Thursday, November 16, 2017 at 13:07 CET, Ravishankar N <ravishankar at redhat.com> wrote:
> On 11/16/2017 12:54 PM, Daniel Berteaud wrote:
> > Any way in this situation to check which file will be healed from
> > which brick before reconnecting ? Using some getfattr tricks ?
> Yes, there are afr xattrs that determine the heal direction for each
> file. The good copy
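The afr changelog xattrs mentioned here can be read directly off the bricks: each replica of a file carries one trusted.afr.<volume>-client-N attribute per brick in the replica set, and non-zero counters mean pending operations against that brick, which is what decides the heal direction. A hedged example, with the brick and file paths assumed:
# run as root on each brick holding a copy of the file
getfattr -d -m . -e hex /data/brick1/path/to/file
# compare the trusted.afr.<volume>-client-N values across the bricks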