similar to: Hosed installation

Displaying 20 results from an estimated 20000 matches similar to: "Hosed installation"

2013 Oct 06
0
Options to turn off/on for reliable virtual machine writes & write performance
In a replicated cluster, the client writes to all replicas at the same time. This is likely why you are only getting half the speed for writes: each write goes to two servers and therefore maxes out your gigabit network. That is, unless I am misunderstanding how you are measuring the 60MB/s write speed. I don't have any advice on the other bits... sorry. Todd
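The halving Todd describes falls straight out of the arithmetic; a minimal sketch (the 117 MB/s payload figure for gigabit is an approximation):

```shell
# With client-side replication the client sends each write to every
# replica, so usable write throughput is roughly the client's link
# speed divided by the replica count.
link_mbs=117   # approximate payload rate of a 1 GbE link, in MB/s
replicas=2
echo "expected write speed: $((link_mbs / replicas)) MB/s"
```

which lines up with the ~60MB/s the poster measured.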
2013 Sep 28
0
Gluster NFS Replicate bricks different size
I've mounted a gluster 1x2 replica through NFS in oVirt. The NFS share holds the qcow images of the VMs. I recently nuked a whole replica brick in a 1x2 array (for numerous reasons, including split-brain); the brick self-healed and restored back to the same state as its partner. Four days later, they've become imbalanced: a direct `du` of the /brick paths shows different sizes by
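To quantify how far two replica bricks have drifted, a du-based comparison run against each brick path works; a minimal sketch (the directories below are throwaway stand-ins for real brick paths, which would each live on their own node):

```shell
#!/bin/sh
# Sketch: compare on-disk usage of two replica brick directories.
# On a healthy 1x2 replica the totals should be close; a persistent
# gap suggests unhealed files, and 'gluster volume heal <vol> info'
# is the next thing to check.
brick_usage() {
    du -sk "$1" | cut -f1     # kilobytes used under the given path
}

# demo with two throwaway directories standing in for bricks
b1=$(mktemp -d); b2=$(mktemp -d)
dd if=/dev/zero of="$b1/img.qcow2" bs=1024 count=64 2>/dev/null
dd if=/dev/zero of="$b2/img.qcow2" bs=1024 count=64 2>/dev/null

if [ "$(brick_usage "$b1")" = "$(brick_usage "$b2")" ]; then
    echo "bricks match"
else
    echo "bricks differ"
fi
rm -rf "$b1" "$b2"
```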
2013 Oct 25
1
GlusterFS 3.4 Fuse Client Performance
Dear GlusterFS Engineer, I have questions about whether my glusterfs server and FUSE client perform properly on the specification below. It can write only *65MB*/s through the FUSE client to 1 glusterfs server (1 brick and no replica for 1 volume). Network bandwidth is enough for now; I've checked it with iftop. However, it can write *120MB*/s when I mount NFS on the same volume. Could anyone check if the
2012 Nov 26
1
Heal not working
Hi, I have a volume created from 12 bricks with 3x replication (no stripe). We had to take one server down for maintenance (2 bricks per server, but configured so that the first brick comes from every server, then the second brick from every server, so no server should appear multiple times in any replica group). The server was down for 40 minutes, and after it came up I saw that gluster volume heal home0
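After an outage like this, the usual sequence is to check what is still pending and then force a crawl; a sketch of the 3.3-era commands, using the volume name from the post:

```shell
gluster volume heal home0 info               # files still queued for self-heal
gluster volume heal home0 info split-brain   # entries needing manual resolution
gluster volume heal home0 full               # force a full crawl of the volume
```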
2013 Jun 17
1
Ability to change replica count on an active volume
Hi, all. As the title says, I found that GlusterFS 3.3 has the ability to change the replica count, per the official document: http://www.gluster.org/community/documentation/index.php/WhatsNew3.3 But I couldn't find any manual on how to do it. Has this feature been added already, or will it be supported soon? Thanks. Wang Li
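The mechanism referenced in that document is add-brick with a new replica value; a sketch, with hypothetical volume, host, and brick names:

```shell
# going from replica 2 to replica 3 requires one new brick per replica set
gluster volume add-brick myvol replica 3 server3:/export/brick1
# then let self-heal copy the existing data onto the new brick
gluster volume heal myvol full
```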
2013 Nov 28
1
how to recover an accidentally deleted brick directory?
hi all, I accidentally removed the brick directory of a volume on one node; the replica count for this volume is 2. Now the situation is: there is no corresponding glusterfsd process on this node, and `gluster volume status` shows that the brick is offline, like this: Brick 192.168.64.11:/opt/gluster_data/eccp_glance N/A Y 2513 Brick 192.168.64.12:/opt/gluster_data/eccp_glance
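One commonly used recovery path for a deleted brick directory is to recreate it, restore the volume-id xattr read from the surviving replica, and force-start the volume; a sketch (the xattr value is elided and must be copied from the healthy node, and the volume name is a placeholder):

```shell
# on the healthy node: read the volume-id the brick directory must carry
getfattr -n trusted.glusterfs.volume-id -e hex /opt/gluster_data/eccp_glance

# on the damaged node: recreate the directory and restore the xattr
mkdir -p /opt/gluster_data/eccp_glance
setfattr -n trusted.glusterfs.volume-id -v 0x... /opt/gluster_data/eccp_glance

# respawn the missing glusterfsd and repopulate the brick
gluster volume start <volname> force
gluster volume heal <volname> full
```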
2013 Oct 07
1
glusterd service fails to start on one peer
I'm hoping that someone here can point me in the right direction to help me solve a problem I am having. I've got 3 gluster peers, and for some reason glusterd will not start on one of them. All are running glusterfs version 3.4.0-8.el6 on CentOS 6.4 (2.6.32-358.el6.x86_64). In /var/log/glusterfs/etc-glusterfs-glusterd.vol.log I see this error repeated 36 times (alternating between brick-0
2013 Dec 09
0
Gluster - replica - Unable to self-heal contents of '/' (possible split-brain)
Hello, I'm trying to build a replica volume on two servers. The servers are blade6 and blade7 (another node, blade1, is in the peer group but holds no volumes). The volume seems OK, but I cannot mount it over NFS. Here are some logs: [root@blade6 stor1]# df -h /dev/mapper/gluster_stor1 882G 200M 837G 1% /gluster/stor1 [root@blade7 stor1]# df -h /dev/mapper/gluster_fast
2013 Jul 02
1
problem expanding a volume
Hello, I am having trouble expanding a volume. Every time I try to add bricks to the volume, I get this error: [root@gluster1 sdb1]# gluster volume add-brick vg0 gluster5:/export/brick2/sdb1 gluster6:/export/brick2/sdb1 /export/brick2/sdb1 or a prefix of it is already part of a volume Here is the volume info: [root@gluster1 sdb1]# gluster volume info vg0 Volume Name: vg0 Type:
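That error usually means the new brick directory still carries xattrs from a previous volume life; the widely used workaround is to strip them and retry (run on each rejected brick, and only if you are sure any leftover data is disposable):

```shell
setfattr -x trusted.glusterfs.volume-id /export/brick2/sdb1
setfattr -x trusted.gfid /export/brick2/sdb1
rm -rf /export/brick2/sdb1/.glusterfs
# then retry the expansion:
gluster volume add-brick vg0 gluster5:/export/brick2/sdb1 gluster6:/export/brick2/sdb1
```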
2014 Sep 10
1
Questions about gluster rebalance
Hello, Recently I spent a bit of time understanding rebalance, since I want to know its performance given that more and more bricks could be added to my glusterfs volume, and there will be more and more files and directories in the existing volume. During the test I saw something I'm really confused about. Steps: SW versions: glusterfs 3.4.4 + CentOS 6.5. Initial
2013 Aug 23
1
Slow writing on mounted glusterfs volume via Samba
Hi guys, I have configured glusterfs in replication mode on two Ubuntu servers. Windows users use Samba sharing to access the mounted volume. Basically my setup is that client machines on each site connect to their local file server, so each has the fastest connection. The two file servers are connected via a VPN tunnel which has really high bandwidth. Right now it is very slow to write files to the
2013 Jul 24
2
Healing in glusterfs 3.3.1
Hi, I have a glusterfs 3.3.1 setup with 2 servers and a replicated volume used by 4 clients. Sometimes, from some clients, I can't access some of the files. After I force a full heal on the brick, I see several files healed. Is this behavior normal? Thanks -- Paulo Silva <paulojjs at gmail.com>
2014 Sep 05
2
glusterfs replica volume self-heal of a dir is very slow!! Why?
Hi all, I did the following test: I created a glusterfs replica volume (replica count 2) with two server nodes (server A and server B), then mounted the volume on a client node. Then I shut down the network on server A. On the client node I copied a dir which has a lot of small files; the dir size is 2.9 GByte. When the copy finished, I started the network on server A again. Now
2013 Oct 31
1
changing volume from Distributed-Replicate to Distributed
hi all, as the title says, I'm looking to change a volume from dist/repl -> dist. We're currently running 3.2.7. A few questions for you gurus out there: - is this possible to do on 3.2.7? - is this possible to do with 3.4.1? (would involve an upgrade) - are there any pitfalls I should be aware of? Many thanks in advance. Regards, Paul
2013 Dec 15
0
Introducing... JMWBot (the alter-ego of johnmark)
Yes, it's true. I've been up late hacking on Gluster (and Puppet-Gluster)... While waiting for my code to compile, patch review (*cough*), and for JMW (aka johnmark) to take care of a few todo items, I realized I had never written an IRC bot!! Now I never aspired to be the bot master that JoeJulian is, but I figured I needed this notch on my hacker belt... Therefore, I'd like to
2013 Jun 26
2
Hi Guys
Hi, I recently configured a 2-node replica glusterfs, and I am having a couple of issues. 1. As soon as I reboot node2, the glusterfs on node1 is not available, but when I reboot/shutdown node1 the glusterfs is available on node 0, so please let me know if you guys have encountered the same issue. 2. I am not able to mount the glusterfs at boot time; I had to do it manually
2012 Nov 15
0
Why does geo-replication stop when a replica member goes down
Hi, We are testing glusterfs. We have a setup like this: Site A: 4 nodes, 2 bricks per node, 1 volume, distributed-replicated, replica count 2. Site B: 2 nodes, 2 bricks per node, 1 volume, distributed. Geo-replication setup: master: site A, node 1; slave: site B, node 1, over ssh. Replica sets on Site A: node 1, brick 1 + node 3, brick 1; node 2, brick 1 + node 4, brick 1; node 2, brick 2 + node 3, brick
2013 Jan 03
2
"Failed to perform brick order check,..."
Hi guys: I have just installed gluster on a single instance, and the command: gluster volume create gv0 replica 2 server.n1:/export/brick1 server.n1:/export/brick2 returns with: "Failed to perform brick order check... do you want to continue ..? y/N"? What is the meaning of this error message, and why does brick order matter? -- Jay Vyas http://jayunit100.blogspot.com
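Brick order matters because replica sets are formed from consecutive bricks on the command line: with both bricks on server.n1, the command above puts both copies of every file on one host, so losing that host loses the data (hence the warning on a single-node install). A sketch of an ordering that spreads each pair across two hypothetical servers:

```shell
gluster volume create gv0 replica 2 \
    server.n1:/export/brick1 server.n2:/export/brick1 \
    server.n1:/export/brick2 server.n2:/export/brick2
```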
2013 Oct 02
1
Shutting down a GlusterFS server.
Hi, I have a 2-node replica volume running with GlusterFS 3.3.2 on CentOS 6.4. I want to shut down one of the gluster servers for maintenance. Is there any best practice to be followed when turning off a server, in terms of services etc., or can I just shut down the server? Thanks & Regards, Bobby Jacob
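A common pre-shutdown sequence on 3.3-era CentOS is to stop the management daemon and then the brick and client daemons it spawned; a sketch (service and process names per EL6, worth double-checking against your init scripts):

```shell
service glusterd stop   # management daemon
pkill glusterfsd        # brick server processes
pkill glusterfs         # self-heal / gluster NFS daemons
# after maintenance, start glusterd and let self-heal catch the node up:
service glusterd start
gluster volume heal <volname> info
```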
2013 Dec 23
0
How to ensure new data is written to other bricks if one brick of a gluster distributed volume goes offline
Hi all, how can I ensure that new data is written to the other bricks if one brick of a gluster distributed volume goes offline? I want the client to be able to write data that would originally land on the offline brick to the other online bricks instead. The distributed volume breaks even if only one brick is offline; that is very unreliable. And when the failed brick comes back online, how does it rejoin the original distributed volume? I don't want the newly written data to be