Stefan Schloesser
2012-Jul-11 15:06 UTC
[Gluster-users] Monitor replicated Gluster setup ?
Hi,

I would like to set up an HA (fail-over) webserver consisting of apache, mysql and a filesystem. My idea is to use GlusterFS as a replicated filesystem (for apache) and built-in mysql replication for the database. In the event of a failure I need to run a script to implement an ip switch.

How should I monitor such a system? Does Gluster provide an integration with corosync as a resource? Shall I write a home-grown script to check availability? Are there other ways to monitor such a setup?

I guess it is not a good idea to keep the mysql data files on the glustered fs, or is that feasible?

Thanks,

Stefan Schlösser
On Wed, Jul 11, 2012 at 03:06:19PM +0000, Stefan Schloesser wrote:
> My idea is to use GlusterFS as a replicated filesystem (for apache) and
> built-in mysql replication for the database. In the event of a failure
> I need to run a script to implement an ip switch.

To switch IP for what? glusterfs in replicated mode, using the native
(FUSE) client, doesn't need this. The client talks to both backends, and
if either backend fails, it continues to work.

> How should I monitor such a system?

I'm not sure if there's a proper API. As a starting point, try running
'gluster volume status' as root and parsing the results. e.g. here the
bricks on one server are unavailable:

$ sudo gluster volume status
Status of volume: safe
Gluster process                                 Port    Online  Pid
------------------------------------------------------------------------------
Brick dev-storage1:/disk/storage1/safe          24009   N       N/A
Brick dev-storage2:/disk/storage2/safe          24009   Y       1710
NFS Server on localhost                         38467   Y       2034
Self-heal Daemon on localhost                   N/A     Y       1736
NFS Server on 10.0.1.1                          38467   Y       1631
Self-heal Daemon on 10.0.1.1                    N/A     Y       1637

Status of volume: fast
Gluster process                                 Port    Online  Pid
------------------------------------------------------------------------------
Brick dev-storage1:/disk/storage1/fast          24010   N       N/A
Brick dev-storage2:/disk/storage2/fast          24010   Y       1720
NFS Server on localhost                         38467   Y       2034
NFS Server on 10.0.1.1                          38467   Y       1631

Status of volume: single1
Gluster process                                 Port    Online  Pid
------------------------------------------------------------------------------
Brick dev-storage1:/disk/storage1/single1       24011   N       N/A
NFS Server on localhost                         38467   Y       2034
NFS Server on 10.0.1.1                          38467   Y       1631

Status of volume: single2
Gluster process                                 Port    Online  Pid
------------------------------------------------------------------------------
Brick dev-storage2:/disk/storage2/single2       24011   Y       2028
NFS Server on localhost                         38467   Y       2034
NFS Server on 10.0.1.1                          38467   Y       1631

It would be neat if this could be integrated into SNMP. There are some
good tools I found which provide a framework to integrate mdraid and
smartctl status:

http://www.mad-hacking.net/software/index.xml
http://downloads.mad-hacking.net/software/

It should be relatively straightforward to add gluster into this.

> Does Gluster provide an integration
> with corosync as a resource?

Not that I'm aware of, and as far as I can see it's not needed. Volume
status information is synchronised between the peers in the cluster
using some internal protocol, and I'm not exactly sure how it deals
with split-brain scenarios.

Another project which *does* use corosync is sheepdog. This is
completely different to glusterfs though - it's a distributed
block-level store for KVM.

> I guess it is not a good idea to keep the mysql data files on the
> glustered fs, or is that feasible?

I'd say that's definitely a bad idea, especially if you had two
different mysqld's talking to the same storage.
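If you want something you can drop straight into cron or Nagios, a
rough sketch (untested; it assumes the four-column layout shown above,
so adjust the awk fields if your version prints something different):

#!/bin/sh
# Exit non-zero if 'gluster volume status' reports any brick offline.
# Brick lines look like:
#   Brick dev-storage1:/disk/storage1/safe   24009   N   N/A
# i.e. the Online flag is the next-to-last field.
gluster volume status 2>/dev/null | awk '
    /^Brick/ && $(NF-1) == "N" { print "OFFLINE:", $2; bad = 1 }
    END { exit bad }
'

The exit status tells you whether any brick is down, so it slots into
whatever monitoring framework you already run.

Regards,

Brian.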
Stefan Schloesser
2012-Jul-12 13:06 UTC
[Gluster-users] Monitor replicated Gluster setup ?
> On 12 Jul 2012, Brian Candler wrote:
> > On Thu, Jul 12, 2012 at 07:38:18AM +0000, Stefan Schloesser wrote:
> > > The reason for the ip switch is the apache: if one fails the other
> > > should take over the workload and continue operation; this is done
> > > via the ip switch.
> >
> > Then that's just switching the public IP which apache listens on, and
> > is nothing to do with glusterfs. Both servers can have the glusterfs
> > volume mounted all the time.
>
> As Brian said, the shared IP here for Apache is independent of
> Gluster in this particular configuration. I would recommend something
> lightweight like UCARP so you can avoid running any of the Linux
> cluster stack pieces:
>
> http://www.pureftpd.org/project/ucarp
>
> You're already attempting to avoid the cluster stuff anyway and keep
> things simple by using Gluster. No point in adding all the HA bloat
> now just to balance an IP address. And UCARP can do that for you! And
> don't forget to give a nod to the OpenBSD developers who make such
> wonderful technology in the first place.

Yeah - exactly, "HA bloat" is my line of thinking. Though the man page
for ucarp is somewhat limited ... :-(

It seems to be the right direction: monitor an IP and trigger a script
if something happens. Though it won't notice e.g. a crash of apache,
will it? Just pinging a machine is maybe somewhat too limited ... or am
I missing something here? Which unfortunately brings me back to "HA
bloat".
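If I read the man page right, the invocation itself looks simple
enough. Something like this is what I have in mind (untested; the
interface, password and addresses are made up for illustration):

# On both servers; give the preferred master the lower --advskew.
# 10.0.0.10 is this host's own address, 10.0.0.100 the shared IP
# that apache listens on.
ucarp --interface=eth0 --srcip=10.0.0.10 --vhid=42 --pass=secret \
      --addr=10.0.0.100 --advskew=0 \
      --upscript=/etc/ucarp/vip-up.sh --downscript=/etc/ucarp/vip-down.sh

with /etc/ucarp/vip-up.sh (ucarp passes the interface name as $1):

#!/bin/sh
/sbin/ip addr add 10.0.0.100/24 dev "$1"

and /etc/ucarp/vip-down.sh:

#!/bin/sh
/sbin/ip addr del 10.0.0.100/24 dev "$1"

I suppose a small watchdog could kill ucarp whenever apache stops
answering, so the standby takes over the IP, but that is exactly the
home-grown scripting I was hoping to avoid.

Stefan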
Stefan Schloesser
2012-Jul-12 13:13 UTC
[Gluster-users] Monitor replicated Gluster setup ?
> On Thu, Jul 12, 2012 at 07:38:18AM +0000, Stefan Schloesser wrote:
> > > To switch IP for what? glusterfs in replicated mode, using the
> > > native (FUSE) client, doesn't need this. The client talks to both
> > > backends, and if either backend fails, it continues to work.
> >
> > I am slightly confused here, I don't have a client (at least in the
> > sense of a different machine), it's only 2 servers with each running
> > an apache which uses the filesystem (simply mounted).
>
> You mean Apache is reading the glusterfs bricks locally? That's wrong;
> any writes would screw up replication. You should mount the glusterfs
> volume via a FUSE mount, and have Apache access files through that
> mountpoint. That's what I mean by a "client". The fact that it happens
> to be on the same server as where one of the glusterfs storage bricks
> runs is irrelevant.

I am mounting it via

  mount -t glusterfs -o log-level=WARNING,log-file=/var/log/gluster.log cluster-1:/shared /shared

and sure, Apache will write to it ... I hope that's ok?
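For completeness, I would also put the equivalent into /etc/fstab so it
comes back after a reboot; something like this should do (untested;
_netdev is my assumption, to delay the mount until the network is up):

cluster-1:/shared  /shared  glusterfs  _netdev,log-level=WARNING,log-file=/var/log/gluster.log  0  0

Stefan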