search for: purpleidea

Displaying 20 results from an estimated 20 matches for "purpleidea".

2013 Dec 15
0
Introducing... JMWBot (the alter-ego of johnmark)
...NTY, etc... * Really? Yeah, I think so. Test it out and let me know! * Can this be done for other people/channels than johnmark/#gluster? Yes! Please feel free to run your own bot, code is "open source" [2]. * I <3 puppet-gluster [3], where can I send $$$, resources and praise? /msg purpleidea in #gluster or @purpleidea on irc! Thanks! * Yikes! This code is terrible. Well it's not that bad. But it was meant as a dirty hack. Feel free to send patches or bug reports. * I didn't find this funny/cool/amusing or even stable. Sorry! It was written with good intentions. * THE BOT HAZ...
2013 Oct 03
1
(no subject)
Hello friends, please give me the link: how can I download the Gluster storage platform ISO? And how can I get the graphical console of Gluster? -- *Thanks and Regards.* *Vishvendra Singh Chauhan* *+91-8750625343* http://linux-links.blogspot.com God First Work Hard Success is Sure...
2013 Oct 10
2
A "Wizard" for Initial Gluster Configuration
Hi, I'm writing a tool to simplify the initial configuration of a cluster, and it's now in a state that I find useful. Obviously the code is on the forge and can be found at https://forge.gluster.org/gluster-deploy If you're interested in what it does but don't have the time to look at the code, I've uploaded a video to YouTube: http://www.youtube.com/watch?v=UxyPLnlCdhA Feedback
2013 Oct 01
2
Performance
Hi, On http://www.gluster.org/community/documentation/index.php/Translators/performance/io-cache I found the following:

    volume io-cache
      type performance/io-cache
      option cache-size 64MB              # default is 32MB
      option priority *.h:3,*.html:2,*:1  # default is '*:0'
      option cache-timeout 2              # default is 1 second
      subvolumes <x>
2013 Sep 27
6
[Bug 10170] New: rsync should support reflink similar to cp --reflink
https://bugzilla.samba.org/show_bug.cgi?id=10170 Summary: rsync should support reflink similar to cp --reflink Product: rsync Version: 3.1.0 Platform: All OS/Version: All Status: NEW Severity: normal Priority: P5 Component: core AssignedTo: wayned at samba.org ReportedBy: samba at shubin.ca
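The behaviour the report asks rsync to adopt already exists in cp, so a quick way to see what is being requested is to try it there. With `--reflink=auto`, cp clones blocks where the filesystem (btrfs, XFS with reflink, ...) supports it and silently falls back to a normal copy elsewhere, so this sketch runs anywhere:

```shell
# cp's reflink behaviour, which the bug asks rsync to mirror:
# clone src.txt if the filesystem supports reflinks, otherwise copy.
echo "some data" > src.txt
cp --reflink=auto src.txt clone.txt
cmp -s src.txt clone.txt && echo "contents identical"
```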
2013 Jan 15
2
1024 char limit for auth.allow and automatically re-reading auth.allow without having to restart glusterd?
Hi, Does anyone know if the 1024-char limit for auth.allow still exists in the latest production version (it seems to be there in 3.2.5)? Also, does anyone know if the new versions check whether auth.allow has been updated, without having to restart glusterd? Is there any way to restart glusterd without killing the process and restarting it; is kill -1 (HUP) possible with it (also with the version I'm running)?
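Whether glusterd re-reads auth.allow on SIGHUP is version-dependent and worth testing rather than assuming, but the kill -1 mechanism itself is easy to try. A stand-in sketch, with a toy process in place of glusterd:

```shell
# Toy daemon standing in for glusterd: it "reloads" when it gets
# SIGHUP. Against a real glusterd the equivalent probe would be
#   kill -HUP "$(pidof glusterd)"
# but whether glusterd re-reads auth.allow on HUP is version-dependent.
( trap 'echo reloaded > reload.flag' HUP
  while :; do sleep 0.1; done ) &
pid=$!
sleep 0.2               # give the subshell time to install its trap
kill -HUP "$pid"        # ask it to reload
sleep 0.3               # give the trap time to fire
cat reload.flag
kill "$pid"             # clean up the toy daemon
```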
2013 Nov 05
1
setting up a dedicated NIC for replication
Hi all, I am evaluating GlusterFS for storing virtual machines (KVM). I am looking at how to configure a dedicated network (VLAN) for Gluster's replication. Because the configuration is based on only one DNS name, I don't know how to configure Gluster's nodes in order to: - Use the production network for hypervisor communications - Use the replicated/heartbeat
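One commonly suggested approach (an illustrative sketch, not official guidance; the hostnames and addresses below are made up) is to probe peers by names that resolve to the replication VLAN on every node, so brick traffic uses that network regardless of what production DNS says:

```
# /etc/hosts on each Gluster node: resolve the peer names used at
# probe time to the replication VLAN (10.0.10.0/24 is hypothetical).
# Then "gluster peer probe gluster1-repl" etc. pins Gluster traffic
# to that network, while hypervisors keep using production DNS names.
10.0.10.1  gluster1-repl
10.0.10.2  gluster2-repl
```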
2013 Nov 01
1
Gluster "Cheat Sheet"
Greetings, One of the best things I've seen at conferences this year has been a bookmark distributed by the RDO folks with most common and/or useful commands for OpenStack users. Some people at Red Hat were wondering about doing the same for Gluster, and I thought it would be a great idea. Paul Cuzner, the author of the gluster-deploy project, took a first cut, pasted below. What do you
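To give a flavour of what such a bookmark might carry, a few everyday commands (a hand-picked sketch, not Paul's actual list; `myvol` is an invented volume name):

```
gluster peer status                # health of the trusted pool
gluster volume info                # volumes, bricks, options
gluster volume status myvol        # per-brick process/port status
gluster volume heal myvol info     # pending self-heal entries
```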
2013 May 15
1
gluster install fails with puppet
Hey there, I'm trying to install the latest gluster server onto a Debian machine with puppet. Unfortunately this gives me:

    Starting glusterd service: glusterd.
    /usr/sbin/glusterd: option requires an argument -- 'f'
    Try `glusterd --help' or `glusterd --usage' for more information.
    invoke-rc.d: initscript glusterfs-server, action "start" failed.
    dpkg: error processing
2013 Dec 15
2
puppet-gluster from zero: hangout?
Hey James and JMW: Can/should we schedule a Google hangout where James spins up a puppet-gluster based Gluster deployment on Fedora from scratch? Would love to see it in action (and possibly steal it for our own vagrant recipes). To speed this along: assuming James is in England here, correct me if I'm wrong, but if so, let me propose a date: Tuesday at 12 EST (that's 5 PM in London - which i
2013 Dec 10
4
Structure needs cleaning on some files
Hi All, When reading some files we get this error:

    md5sum: /path/to/file.xml: Structure needs cleaning

In /var/log/glusterfs/mnt-sharedfs.log we see these errors:

    [2013-12-10 08:07:32.256910] W [client-rpc-fops.c:526:client3_3_stat_cbk] 1-testvolume-client-0: remote operation failed: No such file or directory
    [2013-12-10 08:07:32.257436] W [client-rpc-fops.c:526:client3_3_stat_cbk]
2013 Oct 07
2
GlusterFS as underlying Replicated Disk for App Server
Hi All, We have a requirement for a common replicated filesystem between our two datacentres, mostly for DR and patching purposes when running Weblogic clusters. For those that are not acquainted, Weblogic has a persistent store that it uses for global transaction logs amongst other things. This store can be hosted on shared disk (usually NFS), or in recent versions within an Oracle DB.
2013 Sep 22
2
Problem with glusterfs-server 3.4
Hi all! I'm trying to use glusterfs for the first time and have the following problem: I want to have two nodes. On node1 I have a raid1 system running in /raid/storage. Both nodes see each other, and now I try to create a volume. While creating the first volume on a fresh system (node1) for the first time, gluster said: volume create: glustervolume: failed: /raid/storage/ or a prefix of it
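The "or a prefix of it is already part of a volume" refusal usually means the brick directory still carries extended attributes from an earlier volume attempt. A commonly cited cleanup (run as root on the brick path; verify against your version's docs, and only on a brick whose state you really mean to wipe) looks like:

```
# Remove the volume markers Gluster left on the brick root,
# then retry the volume create. Destructive: clears brick state.
setfattr -x trusted.glusterfs.volume-id /raid/storage
setfattr -x trusted.gfid /raid/storage
rm -rf /raid/storage/.glusterfs
```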
2018 Feb 27
2
[Gluster-Maintainers] Release 4.0: RC1 tagged
On 02/26/2018 02:03 PM, Shyam Ranganathan wrote: > Hi, > > RC1 is tagged in the code, and the request for packaging the same is on > its way. > > We should have packages as early as today, and request the community to > test the same and return some feedback. > > We have about 3-4 days (till Thursday) for any pending fixes and the > final release to happen, so
2018 Feb 28
0
[Gluster-Maintainers] [Gluster-devel] Release 4.0: RC1 tagged
...package:

    > glusterfs-server-3.12.6-1.el7.x86_64
    > base                                  | 3.6 kB  00:00:00
    > centos-gluster312                     | 2.9 kB  00:00:00
    > extras                                | 3.4 kB  00:00:00
    > purpleidea-vagrant-libvirt            | 3.0 kB  00:00:00
    > updates                               | 3.4 kB  00:00:00
    > centos-gluster312/7/x86_64/primary_db |  87 kB  00:00:00
    > Loading mirror speeds fr...
2018 Feb 28
2
[Gluster-devel] [Gluster-Maintainers] Release 4.0: RC1 tagged
...Dependency: glusterfs = 3.12.6-1.el7 for package: glusterfs-server-3.12.6-1.el7.x86_64

    base                                  | 3.6 kB  00:00:00
    centos-gluster312                     | 2.9 kB  00:00:00
    extras                                | 3.4 kB  00:00:00
    purpleidea-vagrant-libvirt            | 3.0 kB  00:00:00
    updates                               | 3.4 kB  00:00:00
    centos-gluster312/7/x86_64/primary_db |  87 kB  00:00:00
    Loading mirror speeds from cached hostfile * base: ce...
2013 Nov 12
2
Expanding legacy gluster volumes
Hi there, This is a hypothetical problem, not one that describes specific hardware at the moment. As we all know, gluster currently usually works best when each brick is the same size, and each host has the same number of bricks. Let's call this a "homogeneous" configuration. Suppose you buy the hardware to build such a pool. Two years go by, and you want to grow the pool. Changes
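For the homogeneous case the growth mechanics are straightforward; the interesting part of the question is what happens when new hardware no longer matches. As a baseline, a sketch of the uniform-growth commands (volume, host, and brick names invented):

```
# Grow a distributed-replicated volume by one matched brick pair,
# then spread existing data onto the new bricks.
gluster volume add-brick myvol host5:/bricks/b1 host6:/bricks/b1
gluster volume rebalance myvol start
gluster volume rebalance myvol status
```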
2020 Apr 24
1
[PATCH nbdkit] golang: Pass Plugin and Connection by reference not value.
For the current set of examples this doesn't matter. This also adds another example where we use the Connection to store data, just to check that actually works. Thanks: James @ purpleidea, Dan Berrangé ---

    plugins/golang/nbdkit-golang-plugin.pod       |  10 +-
    plugins/golang/Makefile.am                    |   9 +
    plugins/golang/examples/disk/disk.go          | 168 ++++++++++++++++++
    .../golang/examples/dump-plugin/dumpplugin.go |   8 +-
    plugins/golang/examples/minimal/minimal.g...
2012 Jul 20
0
Gluster peers disconnecting
Dear Gluster, I'm running Gluster 3.3 on a four-host setup (two bricks per host). I'm attempting to use this as an rsnapshot backup system. Periodically, the rsyncs seem to fail, and I think this is due to underlying gluster failures. I notice this in /var/log/messages:

    Jul 19 23:47:15 annex1 GlusterFS[26815]: [2012-07-19 23:47:15.424909] C
2012 Jul 24
0
Gluster client disconnects
Dear gluster, I'm running gluster 3.3.0 on CentOS 6.3. I'm running a four-host (eight-brick) distribute-replicate. I'm using the first host to mount the volume, and run rsnapshot to back up data from other (non-gluster) hosts. Invariably, the mount seems to fail part way through the transfers. It seems this happens when there is higher load on the system. I've tried a lot of