similar to: How to ensure the new data write to other bricks, if one brick offline of gluster distributed volume

Displaying 20 results from an estimated 5000 matches similar to: "How to ensure the new data write to other bricks, if one brick offline of gluster distributed volume"

2013 Sep 28
0
Gluster NFS Replicate bricks different size
I've mounted a gluster 1x2 replica through NFS in oVirt. The NFS share holds the qcow images of the VMs. I recently nuked a whole replica brick in a 1x2 array (for numerous other reasons, including split-brain); the brick self-healed and restored back to the same state as its partner. 4 days later, they've become unbalanced. The direct `du` of the /brick are showing different sizes by
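Before trusting raw `du` numbers on replica bricks, it is worth checking whether self-heal has fully caught up; sparse qcow images and `.glusterfs` internals can also make direct brick sizes diverge legitimately. A hedged sketch of the usual checks (volume name `gv0` and brick path `/brick` are placeholders):

```shell
# List entries still pending self-heal on each brick (gluster 3.3+)
gluster volume heal gv0 info
gluster volume heal gv0 info split-brain

# Compare apparent size instead of allocated blocks; sparse qcow2
# images can occupy different amounts of space on each replica
du -sh --apparent-size /brick
```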
2013 Aug 28
1
volume on btrfs brick and copy-on-write
Hello Is it possible to take advantage of copy-on-write implemented in btrfs if all bricks are stored on it? If not, is there any other mechanism (in glusterfs) which supports CoW? regards -- Maciej Gałkiewicz Shelly Cloud Sp. z o. o., Sysadmin http://shellycloud.com/, macias at shellycloud.com KRS: 0000440358 REGON: 101504426
2013 Jul 02
1
files do not show up on gluster volume
I am trying to touch files on a mounted gluster mount point. gluster1:/gv0 24G 786M 22G 4% /mnt [root at centos63 ~]# cd /mnt [root at centos63 mnt]# ll total 0 [root at centos63 mnt]# touch hi [root at centos63 mnt]# ll total 0 The files don't show up after I ls them, but if I try to do a mv operation something very strange happens: [root at centos63 mnt]# mv /tmp/hi . mv:
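When `touch` succeeds but `ls` shows nothing, a common cause is that the FUSE client silently lost its connection to the bricks, or that the directory being listed is the brick itself rather than the client mount. A few hedged checks, using the volume `gv0` and mount point `/mnt` from the post:

```shell
mount | grep glusterfs       # confirm /mnt really is a fuse.glusterfs mount
gluster volume status gv0    # confirm every brick shows Online: Y
umount /mnt && mount -t glusterfs gluster1:/gv0 /mnt   # remount the client
```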
2013 Jul 03
1
One Volume Per User - Possible with Gluster?
I've been looking into using Gluster to replace a system that we currently use for storing data for several thousand users. With our current networked file system, each user can create volumes and only that user has access to their volumes with authentication. I see that Gluster also offers a username/password auth system, which is great, but there are several issues about it that bother me:
2013 Aug 08
2
not able to restart the brick for distributed volume
Hi All, I am facing issues restarting the gluster volume. When I start the volume after stopping it, gluster fails to start it. Below is the message that I get on the CLI: /root> gluster volume start _home volume start: _home: failed: Commit failed on localhost. Please check the log file for more details. The log says it was unable to start the brick [2013-08-08
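When `volume start` hits "Commit failed on localhost" because one brick process will not come up, a commonly suggested sequence is to force the start and then read that brick's own log. A sketch, assuming a default install layout:

```shell
gluster volume start _home force             # retry, skipping the failed commit
gluster volume status _home                  # check which brick stayed offline
tail -n 50 /var/log/glusterfs/bricks/*.log   # brick-side error messages
```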
2013 Sep 06
1
Gluster native client very slow
Hello, I'm testing a two-node glusterfs distributed cluster (version 3.3.1-1) on Debian 7. The two nodes write to the same iscsi volume on a SAN. When I try to write a 1G file with dd, I get the following results: NFS: 107 Mbytes/s; Gluster client: 8 Mbytes/s. My /etc/fstab on the client: /etc/glusterfs/cms.vol /data/cms glusterfs defaults 0 0 I'd like to use the gluster
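A large gap between NFS and the native client often comes down to caching semantics: the kernel NFS client buffers writes aggressively, while the FUSE client pays a round trip per write unless write-behind absorbs it. A hedged way to compare like for like (volume name `cms` assumed from the volfile name):

```shell
# Bypass page-cache effects so both transports are measured the same way
dd if=/dev/zero of=/data/cms/testfile bs=1M count=1024 oflag=direct

# Enlarging the write-behind window sometimes helps streaming writes
gluster volume set cms performance.write-behind-window-size 4MB
```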
2013 Dec 16
1
Gluster Management Console
I see references to the Gluster Management Console in some of the older (3.0 and 3.1) documentation. Is this feature available in version 3.4 as well? If so, could someone point me to documentation on how to access? Thanks.
2013 Sep 16
1
Gluster Cluster
Hi all I have a glusterfs cluster underpinning a KVM virtual server in a two host setup. Today the cluster just froze and stopped working. I have rebooted both nodes, brought up the storage again. I can see all the vm files there but when I try to start the vm the machine just hangs. How can I see if gluster is trying to synchronise files between the two servers? thanks Shaun
2013 Oct 31
1
changing volume from Distributed-Replicate to Distributed
hi all, as the title says - i'm looking to change a volume from dist/repl -> dist. we're currently running 3.2.7. a few questions for you gurus out there: - is this possible to do on 3.2.7? - is this possible to do with 3.4.1? (would involve upgrade) - are there any pitfalls i should be aware of? many thanks in advance, regards, paul
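For reference, on 3.3/3.4 the usual way to drop the replica leg is a `remove-brick` call that also lowers the replica count. This is a sketch only: volume and brick names are placeholders, and it should be rehearsed on a throwaway test volume before touching production data.

```shell
# Remove one brick of each replica pair while setting replica to 1
gluster volume remove-brick myvol replica 1 server2:/export/brick1 force
gluster volume info myvol   # verify the type is now Distribute
```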
2013 Jul 03
1
Recommended filesystem for GlusterFS bricks.
Hi, Which is the recommended filesystem to be used for the bricks in GlusterFS? XFS/EXT3/EXT4 etc.? Thanks & Regards, Bobby Jacob Senior Technical Systems Engineer | eGroup P SAVE TREES. Please don't print this e-mail unless you really need to.
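The answer most often given on this list is XFS with 512-byte inodes, so gluster's extended attributes fit inside the inode instead of spilling into extra blocks. A hedged example (device and mount point are placeholders):

```shell
mkfs.xfs -i size=512 /dev/sdb1                      # room for gluster xattrs
mount -o noatime,inode64 /dev/sdb1 /export/brick1   # typical brick mount options
```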
2013 Jul 18
1
Gluster & PHP - stat problem?
Has anyone ever run into a problem in which PHP's stat() call to a file on a Gluster-backed volume randomly fails, yet /usr/bin/stat never fails? Running strace against both reveals that the underlying system calls succeed. I realize this is probably a PHP problem since I cannot replicate with a non-PHP-based script; however, I was hoping someone on this list might have seen this before. RHEL
2013 Nov 28
1
how to recover an accidentally deleted brick directory?
hi all, I accidentally removed the brick directory of a volume on one node; the replica count for this volume is 2. Now there is no corresponding glusterfsd process on this node, and 'gluster volume status' shows that the brick is offline, like this: Brick 192.168.64.11:/opt/gluster_data/eccp_glance N/A Y 2513 Brick 192.168.64.12:/opt/gluster_data/eccp_glance
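With replica 2 and one surviving copy, the usual recovery is to recreate the brick directory, force-start the volume so the missing glusterfsd respawns, and trigger a full heal. Sketch only, using the paths from the post; verify against your own version's documentation first:

```shell
mkdir -p /opt/gluster_data/eccp_glance
gluster volume start eccp_glance force   # respawn the missing brick process
gluster volume heal eccp_glance full     # resync everything from the good replica

# If the brick still refuses to start, the trusted.glusterfs.volume-id xattr
# may need to be copied from the healthy brick with getfattr/setfattr first.
```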
2013 Sep 06
2
[Gluster-devel] GlusterFS 3.3.1 client crash (signal received: 6)
It's a pity I don't know how to re-create the issue, yet 1-2 of our 120 clients crash every day. Below is the gdb result: (gdb) where #0 0x0000003267432885 in raise () from /lib64/libc.so.6 #1 0x0000003267434065 in abort () from /lib64/libc.so.6 #2 0x000000326746f7a7 in __libc_message () from /lib64/libc.so.6 #3 0x00000032674750c6 in malloc_printerr () from
2013 Oct 01
1
Gluster on ZFS: cannot open empty files
Hi everyone, I've got glusterfs-server/glusterfs-client version 3.4.0final-ubuntu1~precise1 (from the semiosis PPA) running on Ubuntu 13.04. I'm trying to share ZFS (ZFS on Linux 0.6.2-1~precise from the zfs-stable PPA) using GlusterFS. When creating the ZFS filesystem and the Gluster volume, I accepted all the defaults and then: - I enabled deduplication for the ZFS filesystem (zfs set
2013 Jun 17
1
Ability to change replica count on an active volume
Hi, all As the title says, I found that glusterfs 3.3 has the ability to change the replica count in the official document: http://www.gluster.org/community/documentation/index.php/WhatsNew3.3 But I couldn't find any manual about how to do it. Has this feature been added already, or will it be supported soon? thanks. Wang Li
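Since 3.3, the replica count can be changed as a side effect of add-brick/remove-brick, even though the feature page gives no walkthrough. A hedged example growing replica 2 to replica 3 (server and brick names are placeholders):

```shell
# Add one new brick per existing replica set and raise the count
gluster volume add-brick myvol replica 3 server3:/export/brick1
gluster volume heal myvol full   # populate the new replica
```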
2013 Dec 15
2
puppet-gluster from zero: hangout?
Hey james and JMW: Can/Should we schedule a google hangout where james spins up a puppet-gluster based gluster deployment on fedora from scratch? Would love to see it in action (and possibly steal it for our own vagrant recipes). To speed this along: Assuming James is in England here , correct me if im wrong, but if so ~ Let me propose a date: Tuesday at 12 EST (thats 5 PM in london - which i
2013 Aug 20
1
files got sticky permissions T--------- after gluster volume rebalance
Dear gluster experts, We're running glusterfs 3.3 and we have met file permission problems after gluster volume rebalance. Files got sticky permissions T--------- after rebalance, which breaks our clients' normal fops unexpectedly. Has anyone seen this issue? Thank you for your help.
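Mode-1000 (`---------T`) files on a brick are normally DHT link files that rebalance creates to point at a file's new location, not corruption; they carry a linkto xattr naming the real subvolume. A hedged check, run directly on a brick (the path is a placeholder):

```shell
# A genuine DHT link file is 0 bytes, mode 1000, and carries this xattr
getfattr -m . -d -e hex /export/brick1/path/to/file | grep dht.linkto
```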
2013 May 13
0
Fwd: Seeing non-priv port + auth issue in the gluster brick log
Fwd.ing to Gluster users, in the hope that more people will see this and can hopefully provide clues thanx, deepak -------- Original Message -------- Subject: [Gluster-devel] Seeing non-priv port + auth issue in the gluster brick log Date: Sat, 11 May 2013 12:43:20 +0530 From: Deepak C Shetty <deepakcs at linux.vnet.ibm.com> Organization: IBM India Pvt. Ltd. To: Gluster
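The fix usually discussed for non-privileged-port auth failures is to allow insecure ports on both the volume and the management daemon. A hedged sketch (volume name is a placeholder):

```shell
gluster volume set myvol server.allow-insecure on

# and in /etc/glusterfs/glusterd.vol, then restart glusterd:
#   option rpc-auth-allow-insecure on
```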
2013 Jul 09
2
Gluster Self Heal
Hi, I have a 2-node gluster setup with 3 TB storage. 1) I believe "glusterfsd" is responsible for the self-healing between the 2 nodes. 2) Due to some network error, the replication stopped for some reason, but the application was still accessing the data from node1. When I manually try to start the "glusterfsd" service, it's not starting. Please advise on how I can maintain
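A small correction that usually resolves this confusion: glusterfsd is the per-brick server process (spawned automatically, not a service you start by hand), glusterd is the management daemon, and on 3.3+ a separate self-heal daemon does the healing. Hedged commands, with the volume name as a placeholder:

```shell
service glusterd restart         # the daemon you actually (re)start by hand
gluster volume heal myvol        # kick off healing of pending entries
gluster volume heal myvol info   # see what still differs between the nodes
```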
2013 Sep 23
1
Mounting a sub directory of a glusterfs volume
I am not sure if posting with the subject copied from the mailing-list archive page of an existing thread will loop my response into the same thread. Apologies if it doesn't. I am trying to figure out a way to mount a directory within a gluster volume on a web server. This directory has quota enabled to limit a user's usage. gluster config: Volume Name: test-volume features.limit-usage:
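On the 3.x FUSE client only whole volumes can be mounted, but Gluster's built-in NFS server can export and mount a subdirectory, which fits the per-user quota directory case. A hedged example (server name, directory, and mount target are placeholders):

```shell
# NFSv3 mount of a single directory inside the volume
mount -t nfs -o vers=3 server1:/test-volume/userdir /var/www/data
```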