similar to: Gluster and public/private LAN

Displaying 20 results from an estimated 10000 matches similar to: "Gluster and public/private LAN"

2013 Jan 30
6
New version of UFO - is there a new HOWTO?
I just installed glusterfs-swift 3.3.1 on a couple of Fedora 18 servers. This is based on swift 1.7.4 and has keystone in the config. I had experimented with the one based on swift 1.4.8 and tempauth and had some problems with it. The HOWTO I can find is still for the old one. Is there an updated one? I would also need to find some instructions on setting up keystone from scratch for
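A rough sketch of the keystone bootstrap such a setup needs, assuming the legacy python-keystoneclient CLI of that era; every name, password and ID below is a placeholder, and a service/endpoint entry for the object store would still be needed on top of this:
    # create a tenant and an admin user, then bind them with the admin role
    keystone tenant-create --name ufo-tenant
    keystone user-create --name ufo-admin --pass SECRET --tenant-id <tenant-id>
    keystone role-create --name admin
    keystone user-role-add --user-id <user-id> --role-id <role-id> --tenant-id <tenant-id>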
2013 Nov 09
2
Failed rebalance - lost files, inaccessible files, permission issues
I'm starting a new thread on this, because I have more concrete information than I did the first time around. The full rebalance log from the machine where I started the rebalance can be found at the following link. It is slightly redacted - one search/replace was made to replace an identifying word with REDACTED. https://dl.dropboxusercontent.com/u/97770508/mdfs-rebalance-redacted.zip
2012 Dec 18
1
Infiniband performance issues answered?
In IRC today, someone who was hitting that same IB performance ceiling that occasionally gets reported had this to say: [11:50] <nissim> first, I ran fedora which is not supported by Mellanox OFED distro [11:50] <nissim> so I moved to CentOS 6.3 [11:51] <nissim> next I removed all distribution related infiniband rpms and built the latest OFED package [11:52] <nissim>
2013 Mar 05
1
UFO - new version capable of S3 API?
I was running the previous version of UFO, the 3.3 one that was based on Swift 1.4.8. Now there is a 3.3.1 based on Swift 1.7.4. The config that I used last time to enable S3 isn't working with the new one, just updated yesterday using yum. I was using tempauth in the old version and I'm still using tempauth. I have a CentOS 6 system and a Fedora 18 system with UFO on them. The
2012 Nov 14
1
Howto find out volume topology
Hello, I would like to find out the topology of an existing volume. For example, if I have a distributed replicated volume, which bricks are the replication partners? Fred
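A quick way to check this (a sketch; "myvol" is a placeholder volume name): gluster volume info lists the bricks in creation order, and on a Distributed-Replicate volume consecutive groups of <replica count> bricks form one replica set.
    gluster volume info myvol
    # With "Type: Distributed-Replicate" and "Number of Bricks: 2 x 2 = 4",
    # Brick1/Brick2 are one replica pair and Brick3/Brick4 the other.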
2012 Dec 17
2
Transport endpoint
Hi, I've got a Gluster error: Transport endpoint not connected. It came up twice after trying to rsync a 2 TB filesystem over; it reached about 1.8 TB and got the error. Logs on the server side (in reverse time order): [2012-12-15 00:53:24.747934] I [server-helpers.c:629:server_connection_destroy] 0-RedhawkShared-server: destroyed connection of
2013 Aug 20
1
files got sticky permissions T--------- after gluster volume rebalance
Dear gluster experts, We're running glusterfs 3.3 and we have hit file permission problems after gluster volume rebalance. Files got sticky permissions T--------- after rebalance, which breaks our clients' normal fops unexpectedly. Has anyone seen this issue? Thank you for your help.
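For reference, a sketch of how such entries are usually identified (brick paths below are placeholders): zero-byte files whose only mode bit is the sticky bit are normally DHT link files created during rebalance, and they carry a linkto xattr naming the brick that holds the real data.
    # run against the brick directory, not the client mount
    find /export/brick1 -type f -perm 1000 -size 0 | head
    getfattr -n trusted.glusterfs.dht.linkto -e text /export/brick1/path/to/file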
2018 May 10
1
mmap failed messages in syslog
On 5/9/2018 10:15 PM, Shawn Heisey wrote: > It's been about three weeks since I asked this question. All the > information I have available says that all users have no limits on > virtual memory, so it must be dovecot itself that sets the limit. Can > that be changed in the 1.2 version? I can't upgrade to 2.x yet. > > Here's an actual syslog entry, redacted to
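A sketch of two things worth checking in a case like this (the PID is a placeholder): the address-space limit the running process actually has, and the mmap_disable setting, which Dovecot 1.x and 2.x both support for reading index files with read() instead of mmap().
    grep -i 'address space' /proc/<pid>/limits
    # in dovecot.conf: mmap_disable = yes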
2013 Dec 12
3
Is Gluster the wrong solution for us?
We are about to abandon GlusterFS as a solution for our object storage needs. I'm hoping to get some feedback to tell me whether we have missed something and are making the wrong decision. We're already a year into this project after evaluating a number of solutions. I'd like not to abandon GlusterFS if we just misunderstand how it works. Our use case is fairly straightforward.
2012 Dec 15
3
IRC channel stuck on invite only
Hi, Any ops here? The IRC channel seems broken. Ta, Andrew
2018 Oct 21
4
Disable logging for localhost
Hello Everyone, I am using Zabbix to monitor my Dovecot server, and my logs are filled with lines like this: > Oct 21 15:04:46 osaka dovecot[1256]: pop3-login: Aborted login (no auth > attempts in 0 secs): user=<>, rip=127.0.0.1, lip=127.0.0.1, secured, > session=<bWd0nr14SuF/AAAB> > Oct 21 15:05:29 osaka dovecot[1256]: imap-login: Aborted login (no auth > attempts
2012 Dec 13
1
Rebalance may never finish, Gluster 3.2.6
Hi Guys, I have a rebalance that is going so slowly it may never end. Particulars on the system: 3 nodes, 6 bricks, ~55 TB, about 10% full. The use of data is very active during the day and less so at night. All are CentOS 6.3, x86_64, Gluster 3.2.6.
[root at node01 ~]# gluster volume rebalance data01 status
rebalance step 2: data migration in progress: rebalanced 1378203 files of size 308570266988
2012 Aug 17
1
Fwd: vm pxe fail
----- Forwarded Message ----- From: "Andrew Holway" <a.holway at syseleven.de> To: "Alex Jia" <ajia at redhat.com> Cc: kvm at vger.kernel.org Sent: Friday, August 17, 2012 4:24:33 PM Subject: Re: [libvirt-users] vm pxe fail Hello, On Aug 17, 2012, at 4:34 AM, Alex Jia wrote: > Hi Andrew, > I can't confirm a root cause based on your information, perhaps
2012 Jun 28
1
Rebalance failures
I am messing around with gluster management and I've added a couple of bricks and did a rebalance, first fix-layout and then migrate-data. When I do this I seem to get a lot of failures:
gluster> volume rebalance MAIL status
Node        Rebalanced-files        size        scanned        failures        status
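The per-file reasons behind those failure counts usually end up in the rebalance log on each node; a sketch, assuming the usual log layout and the volume name from the output above:
    grep " E " /var/log/glusterfs/MAIL-rebalance.log | tail -n 20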
2013 Jan 26
4
Write failure on distributed volume with free space available
Hello, Thanks to "partner" on IRC who told me about this (quite big) problem. Apparently in a distributed setup, once a brick fills up you start getting write failures. Is there a way to work around this? I would have thought gluster would check for free space before writing to a brick. It's very easy to test: I created a distributed volume from 2 uneven bricks and started to
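One knob that covers part of this (a sketch; "myvol" is a placeholder volume name): cluster.min-free-disk tells the distribute translator to stop placing new files on bricks below the given headroom, though it does not help once an already-placed file keeps growing on a nearly full brick.
    gluster volume set myvol cluster.min-free-disk 10%
    gluster volume info myvol   # the option shows up under "Options Reconfigured"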
2018 Jul 17
2
best practices for migrating to new dovecot version
I have a machine with my mail service on it running Debian Squeeze. The version of dovecot on that server is 1.2.15. I'm building a new server with Ubuntu 18.04 (started with 16.04, upgraded in place), where the dovecot version is 2.2.33. I plan to migrate all my services to this new machine. The virtual mailboxes are on disk in maildir format, with accounts in a mysql database managed by
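A minimal migration sketch under those assumptions (Maildir on disk; hostnames and paths below are placeholders): the Maildir format itself is the same on both versions, so the mail store can be copied ahead of time and re-synced at cutover while the new server rebuilds its own indexes.
    # initial copy while the old server stays authoritative; repeat near cutover
    rsync -aHv --delete /var/vmail/ newhost:/var/vmail/
    # on the new server, dump the effective 2.2 configuration for review
    doveconf -n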
2020 Jan 19
2
solr fts and removing accounts
I use Solr FTS indexing. It works very well, but it has one database per system, not per user. Suppose I delete one or more e-mail users from the system. How do I remove them from the Solr database to reclaim space?
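A sketch of one way to do this, assuming the stock Dovecot fts-solr schema (which stores a "user" field per document) and a placeholder Solr address, core name and account:
    curl 'http://localhost:8983/solr/dovecot/update?commit=true' \
      -H 'Content-Type: text/xml' \
      --data-binary '<delete><query>user:"removed.user@example.com"</query></delete>'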
2013 May 10
2
Self-heal and high load
Hi all, I'm pretty new to Gluster, and the company I work for uses it for storage across 2 data centres. An issue has cropped up fairly recently with regards to the self-heal mechanism. Occasionally the connection between these 2 Gluster servers breaks or drops momentarily. Due to the nature of the business it's highly likely that files have been written during this time. When the
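A useful first check in that situation (a sketch; "myvol" is a placeholder volume name) is the pending self-heal backlog, which on 3.3+ can be listed per volume and compared against the load spikes:
    gluster volume heal myvol info
    gluster volume heal myvol info split-brain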
2013 Nov 01
1
Gluster "Cheat Sheet"
Greetings, One of the best things I've seen at conferences this year has been a bookmark distributed by the RDO folks with most common and/or useful commands for OpenStack users. Some people at Red Hat were wondering about doing the same for Gluster, and I thought it would be a great idea. Paul Cuzner, the author of the gluster-deploy project, took a first cut, pasted below. What do you
2013 Jul 15
4
GlusterFS 3.4.0 and 3.3.2 released!
Hi All, 3.4.0 and 3.3.2 releases of GlusterFS are now available. GlusterFS 3.4.0 can be downloaded from [1] and release notes are available at [2]. Upgrade instructions can be found at [3]. If you would like to propose bug fix candidates or minor features for inclusion in 3.4.1, please add them at [4]. 3.3.2 packages can be downloaded from [5]. A big note of thanks to everyone who helped in