similar to: Changing volume log file location

Displaying 20 results from an estimated 11000 matches similar to: "Changing volume log file location"

2012 Sep 13
1
how to monitor glusterfs mounted client
Dear gluster experts, I want to ask a question about how to monitor the active mounted clients of a GlusterFS volume. I want to know how many clients have mounted a volume, and I also want to know the clients' hostnames/IPs. I walked through the GlusterFS admin guide and couldn't find out how. Can anyone help? -- Yongtao Fu
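For reference, GlusterFS 3.3 and later can list the clients connected to each brick of a volume; a minimal sketch, assuming a volume named myvol (a placeholder name):

```shell
# Show connected clients (hostname:port) for each brick of the volume.
# "myvol" is a hypothetical volume name; substitute your own.
gluster volume status myvol clients
```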
2015 Dec 07
0
Dovecot cluster using GlusterFS
We ran a load test using glusterfs and were able to deliver mail (I can't remember specifically how much per second, maybe 100 messages per second?) without any issues. We did use the glusterfs fuse client and not nfs, and used regular maildir. We developed a mail bot cluster that would deliver mail, and simultaneously receive and delete it with pop and IMAP and we ran into zero issues. We
2013 Aug 23
1
Slow writing on mounted glusterfs volume via Samba
Hi guys, I have configured GlusterFS in replication mode on two Ubuntu servers. Windows users access the mounted volume via Samba shares. Basically, my setup is that the client machines on each site connect to their local file server, so they get the fastest connection. The two file servers are connected via a VPN tunnel which has really high bandwidth. Right now it is very slow to write files to the
2013 Dec 23
0
How to ensure the new data write to other bricks, if one brick offline of gluster distributed volume
Hi all, how can I ensure that new data is written to the other bricks if one brick of a Gluster distributed volume goes offline? Can the client write data that would originally land on the offline brick to the other online bricks? Does the distributed volume break even if only one brick is offline? That seems very unreliable. And when the failed brick comes back online, how does it rejoin the original distributed volume? We don't want the newly written data to
2013 Jul 02
1
problem expanding a volume
Hello, I am having trouble expanding a volume. Every time I try to add bricks to the volume, I get this error: [root at gluster1 sdb1]# gluster volume add-brick vg0 gluster5:/export/brick2/sdb1 gluster6:/export/brick2/sdb1 /export/brick2/sdb1 or a prefix of it is already part of a volume Here is the volume info: [root at gluster1 sdb1]# gluster volume info vg0 Volume Name: vg0 Type:
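The "or a prefix of it is already part of a volume" error usually means the brick directory still carries GlusterFS extended attributes from a previous use. A commonly cited cleanup, assuming the paths from the post and that the bricks hold no data you need (run as root; destructive):

```shell
# Remove the leftover volume markers so add-brick will accept the directory.
setfattr -x trusted.glusterfs.volume-id /export/brick2/sdb1
setfattr -x trusted.gfid /export/brick2/sdb1
rm -rf /export/brick2/sdb1/.glusterfs
```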
2012 Oct 05
0
No subject
for all three nodes: Brick1: gluster-0-0:/mseas-data-0-0 Brick2: gluster-0-1:/mseas-data-0-1 Brick3: gluster-data:/data Which node are you trying to mount to /data? If it is not the gluster-data node, then it will fail if there is not a /data directory. In this case, it is a good thing, since mounting to /data on gluster-0-0 or gluster-0-1 would not accomplish what you need. To clarify, there
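Mounting the volume (rather than a brick path) works from any node; a sketch, assuming the volume is named mseas-data (hypothetical; the post only shows brick names) and the peer gluster-data is reachable:

```shell
# Create the mount point, then mount via the FUSE client.
# Any peer hostname can serve as the mount server; it only fetches the volfile.
mkdir -p /data
mount -t glusterfs gluster-data:/mseas-data /data
```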
2012 Dec 17
1
striped volume in 3.4.0qa5 with horrible read performance
Dear folks, I had been trying to use replicated striped volumes with 3.3, unsuccessfully, due to https://bugzilla.redhat.com/show_bug.cgi?id=861423, so I proceeded to try 3.4.0qa5. I found that the bug was fixed and I could use a replicated striped volume with the new version. Write performance was quite astonishing. The problem I'm facing now is in the read process:
2012 Oct 22
1
How to add new bricks to a volume?
Hi, dear glfs experts: I've been using glusterfs (version 3.2.6) for months, and so far it works very well. Now I'm facing the problem of adding two new bricks to an existing replicated (rep=2) volume, which consists of only two bricks and is mounted by multiple clients. Can I just use the following commands to add the new bricks, without stopping the services that are using the volume, as mentioned?
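Since the volume is replica 2, bricks can be added online in pairs without changing the replica count; a sketch with hypothetical volume and brick names:

```shell
# Add one new replica pair (myvol, server3 and server4 are placeholders).
gluster volume add-brick myvol server3:/export/brick1 server4:/export/brick1
# Spread existing data onto the new bricks.
gluster volume rebalance myvol start
```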
2015 Dec 06
1
Dovecot cluster using GlusterFS
Hello Alessio and Gordon, thank you for your answers. The dsync-based architecture looks promising, but I would prefer to stay with GlusterFS for now, as I also use it as storage for other components. So director is the way to go. I don't want to set up more than two nodes, to keep this setup as simple as possible - so I will probably update to 2.2.19 and have the director and backend on the same servers
2013 Aug 28
1
volume on btrfs brick and copy-on-write
Hello Is it possible to take advantage of the copy-on-write implemented in btrfs if all bricks are stored on it? If not, is there any other mechanism (in glusterfs) which supports CoW? regards -- Maciej Gałkiewicz Shelly Cloud Sp. z o. o., Sysadmin http://shellycloud.com/, macias at shellycloud.com KRS: 0000440358 REGON: 101504426
2013 Oct 31
1
changing volume from Distributed-Replicate to Distributed
hi all, as the title says - i'm looking to change a volume from dist/repl -> dist. we're currently running 3.2.7. a few questions for you gurus out there: - is this possible to do on 3.2.7? - is this possible to do with 3.4.1? (would involve an upgrade) - are there any pitfalls i should be aware of? many thanks in advance, regards, paul
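On newer releases (3.3/3.4), remove-brick accepts a new replica count, which is the usual route from distributed-replicated to plain distributed; whether 3.2.7 supports this is doubtful, so treat the following as a sketch for 3.4.x with placeholder names, removing one brick from each replica pair:

```shell
# Reduce replica 2 to replica 1 by dropping one brick per replica set.
# Volume and brick names are placeholders; check "gluster volume info" first.
gluster volume remove-brick myvol replica 1 server2:/export/brick1 force
```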
2015 Dec 05
6
Dovecot cluster using GlusterFS
Hello, I have recently set up a mailserver solution using a 2-node master-master setup (mainly based on MySQL M-M replication and GlusterFS with a 2-replica volume) on Ubuntu 14.04 (Dovecot 2.2.9). Unfortunately, even with the shared-storage-aware settings: mail_nfs_index = yes mail_nfs_storage = yes mail_fsync = always mmap_disable = yes ... I have hit strange issues pretty soon, especially when a user was
2013 Aug 08
2
not able to restart the brick for distributed volume
Hi All, I am facing issues restarting a gluster volume. When I start the volume after stopping it, gluster fails to start it. Below is the message that I get on the CLI: /root> gluster volume start _home volume start: _home: failed: Commit failed on localhost. Please check the log file for more details. The logs say that it was unable to start the brick [2013-08-08
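When a commit fails on localhost because one brick process will not come up, a commonly suggested first step is a forced start, then reading the brick log; a sketch using the volume name from the post:

```shell
# Start the volume even if some brick processes fail to start cleanly.
gluster volume start _home force
# The per-brick logs usually explain why a brick would not start.
less /var/log/glusterfs/bricks/*.log
```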
2013 Jun 17
1
Ability to change replica count on an active volume
Hi, all As the title says, I found that GlusterFS 3.3 has the ability to change the replica count, per the official document: http://www.gluster.org/community/documentation/index.php/WhatsNew3.3 But I couldn't find any manual on how to do it. Has this feature been added already, or will it be supported soon? thanks. Wang Li
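In 3.3 the replica count of a live volume can be raised by passing a new count to add-brick; a sketch with placeholder names, assuming a replica-2 volume with a single replica set:

```shell
# Grow replica 2 to replica 3 by adding one brick per replica set.
# "myvol" and "server3" are hypothetical names.
gluster volume add-brick myvol replica 3 server3:/export/brick1
```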
2012 Nov 26
1
Heal not working
Hi, I have a volume created of 12 bricks and with 3x replication (no stripe). We had to take one server (2 bricks per server, but configured such that first brick from every server, then second brick from every server so there should not be 1 server multiple times in any replica groups) for maintenance. The server was down for 40 minutes and after it came up I saw that gluster volume heal home0
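After the server returns, self-heal progress can be inspected, and a full heal triggered explicitly; a sketch using the volume name from the post:

```shell
# Show entries still pending heal on volume home0.
gluster volume heal home0 info
# Force a full self-heal crawl across all bricks.
gluster volume heal home0 full
```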
2010 Sep 23
1
proposed new doco for "Gluster 3.1: Installing GlusterFS on OpenSolaris"
Hi all Reference: http://support.zresearch.com/community/documentation/index.php/Gluster_3.1:_Installing_GlusterFS_on_OpenSolaris I have found this guide to be too brief/terse, and have endeavoured to improve it via more of a recipe/howto approach - and possibly misunderstood the intent of the brief directions in the process. Please advise if there are any errors? Once the procedure is
2013 Jul 02
1
files do not show up on gluster volume
I am trying to touch files on a mounted gluster mount point. gluster1:/gv0 24G 786M 22G 4% /mnt [root at centos63 ~]# cd /mnt [root at centos63 mnt]# ll total 0 [root at centos63 mnt]# touch hi [root at centos63 mnt]# ll total 0 The files don't show up after I ls them, but if I try to do a mv operation something very strange happens: [root at centos63 mnt]# mv /tmp/hi . mv:
2012 Oct 23
1
Problems with striped-replicated volumes on 3.3.1
Good afternoon, I am playing around with GlusterFS 3.3.1 in CentOS 6 virtual machines to see if I can get a proof of concept for a bigger project. In my setup, I have 4 GlusterFS servers with two bricks each of 10GB with XFS (per your quick-start guide). So, I have a total of 8 bricks. I have no problem with distributed-replicated volumes. However, when I set up a striped replicated
2013 Apr 30
1
3.3.1 distributed-striped-replicated volume
I'm hitting the 'cannot find stripe size' bug: [2013-04-29 17:42:24.508332] E [stripe-helpers.c:268:stripe_ctx_handle] 0-gv0-stripe-0: Failed to get stripe-size [2013-04-29 17:42:24.513013] W [fuse-bridge.c:968:fuse_err_cbk] 0-glusterfs-fuse: 867: FSYNC() ERR => -1 (Invalid argument) Is there a fix for this in 3.3.1 or do we need to move to git HEAD to make this work? M. --