Displaying 20 results from an estimated 80000 matches similar to: "Gluster 2 with stable AFR"
2009 Jun 24
0
Gluster-users Digest, Vol 14, Issue 34
Hi:
Is there a limit on the number of servers that can be used as storage in Gluster?
2009-06-24
eagleeyes
From: gluster-users-request
Sent: 2009-06-24 03:00:42
To: gluster-users
CC:
Subject: Gluster-users Digest, Vol 14, Issue 34
Send Gluster-users mailing list submissions to
gluster-users at gluster.org
To subscribe or unsubscribe via the World Wide Web, visit
2009 Apr 03
0
Having some trouble while using AFR
Hi,
Good morning.
I am having some trouble while using AFR. On the client side I have this AFR volume:
volume afr
type cluster/afr
subvolumes client
option replicate *:1
option self-heal on
end-volume
When I run the command sudo -u apache cp -p zip/* test_folder/
it shows this message:
cp: getting attribute `trusted.afr.version' of
`zip/ACS1238660426.zip': Operation not permitted
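In the AFR design of that era, replication state is tracked in extended attributes (such as trusted.afr.version) on the files themselves, and the trusted.* namespace requires CAP_SYS_ADMIN, so cp -p running as the unprivileged apache user cannot copy those attributes. A minimal sketch of the namespace behaviour, using Python's stdlib xattr calls rather than AFR itself (the attribute name user.afr.demo is invented for the demo):

```python
import os
import tempfile

# Illustrative only: AFR uses names like trusted.afr.version; the
# trusted.* namespace needs CAP_SYS_ADMIN, so unprivileged processes
# (like cp -p running as apache) get "Operation not permitted" on it.
# The user.* namespace is open to ordinary users and shows the mechanism.
fd, path = tempfile.mkstemp(dir=".")
os.close(fd)

os.setxattr(path, b"user.afr.demo", b"ok")   # allowed for any user
value = os.getxattr(path, b"user.afr.demo")

os.remove(path)
print(value.decode())
```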
2009 Jun 24
2
Limit of Glusterfs help
Hi:
Is there a limit on the number of servers that can be used as storage in Gluster?
2009-06-24
eagleeyes
From: gluster-users-request
Sent: 2009-06-24 03:00:42
To: gluster-users
CC:
Subject: Gluster-users Digest, Vol 14, Issue 34
Send Gluster-users mailing list submissions to
gluster-users at gluster.org
To subscribe or unsubscribe via the World Wide Web, visit
2009 May 11
1
Problem of afr in glusterfs 2.0.0rc1
Hello:
I have hit this problem twice when copying files into the GFS space.
I have five clients and two servers. When I copy files into /data, which is the GFS space, on client A, the problem appears:
in the same path, client A can see all the files, but B, C, and D cannot; some files seem to be missing. When I mount again, the files appear.
2018 Mar 05
1
[Gluster-devel] Removal of use-compound-fops option in afr
On Mon, Mar 5, 2018 at 9:19 AM, Amar Tumballi <atumball at redhat.com> wrote:
> Pranith,
>
>
>
>> We found that compound fops is not giving better performance in
>> replicate and I am thinking of removing that code. Sent the patch at
>> https://review.gluster.org/19655
>>
>>
> If I understand it right, as of now AFR is the only component
2017 Oct 09
0
[Gluster-devel] AFR: Fail lookups when quorum not met
On 09/22/2017 07:27 PM, Niels de Vos wrote:
> On Fri, Sep 22, 2017 at 12:27:46PM +0530, Ravishankar N wrote:
>> Hello,
>>
>> In AFR we currently allow look-ups to pass through without taking into
>> account whether the lookup is served from the good or bad brick. We always
>> serve from the good brick whenever possible, but if there is none, we just
>> serve
2008 Oct 17
6
GlusterFS compared to KosmosFS (now called cloudstore)?
Hi.
I'm evaluating GlusterFS for our DFS implementation, and wondered how it compares to KFS/CloudStore.
The features listed here look especially nice
(http://kosmosfs.sourceforge.net/features.html). Any idea which of them exist
in GlusterFS as well?
Regards.
2008 Nov 18
1
gluster, where have you been all my life?
Hi All
I've been looking for something like Gluster for a while and stumbled on it
today via the wikipedia pages on Filesystems etc.
I have a few very very simple questions that might even be too simple to be
on the FAQ, but if you think any of them are decent please add them there.
I think it might help if I start with what I want to achieve, then ask the
questions. We want to build a high
2008 Oct 02
0
FW: Why does glusterfs not automatically fix these kinds of problems?
Fwd'ing this since it seems my reply and your response didn't actually go
to the mailing list.
-----Original Message-----
From: Keith Freedman [mailto:freedman at FreeFormIT.com]
Sent: Thursday, October 02, 2008 2:02 PM
To: Will Rouesnel
Subject: RE: [Gluster-users] Why does glusterfs not automatically fix these
kinds of problems?
At 08:47 PM 10/1/2008, you wrote:
>Unison operates on
2008 Dec 10
1
df returns weird values
Hi,
I'm starting to play with glusterfs, and I'm having a problem with the df
output.
The value seems to be wrong.
(on the client)
/var/mule-client$ du -sh
584K .
/var/mule-client$ df -h /var/mule-client/
Filesystem Size Used Avail Use% Mounted on
glusterfs 254G 209G 32G 88% /var/mule-client
(on the server)
/var/mule$ du -sh
584K .
Is it a known
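The mismatch described above is expected: df asks statvfs for the capacity of the whole backing filesystem that glusterfs exports, while du only sums the files under the directory. A small stdlib-only sketch of the two views (the numbers come from a throwaway temp directory, not from gluster):

```python
import os
import tempfile

def df_bytes(path):
    # What `df` reports: size of the whole filesystem behind the path.
    st = os.statvfs(path)
    return st.f_frsize * st.f_blocks

def du_bytes(path):
    # What `du` reports: bytes in the files under the directory only.
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            total += os.path.getsize(os.path.join(root, name))
    return total

d = tempfile.mkdtemp()
with open(os.path.join(d, "small"), "wb") as fh:
    fh.write(b"x" * 584)

print(du_bytes(d))                  # just our 584-byte file
print(df_bytes(d) > du_bytes(d))    # df sees the whole filesystem
```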
2009 Mar 05
1
BDB speed benefits
Hi.
Any idea what speed benefits the BDB translator provides over standard file
storage?
Also, how reliable is it, and what's the maximum file size it can store in the
DB?
Thanks.
2017 Jun 28
0
afr-self-heald.c:479:afr_shd_index_sweep
On 06/28/2017 06:52 PM, Paolo Margara wrote:
> Hi list,
>
> yesterday I noted the following lines into the glustershd.log log file:
>
> [2017-06-28 11:53:05.000890] W [MSGID: 108034]
> [afr-self-heald.c:479:afr_shd_index_sweep]
> 0-iso-images-repo-replicate-0: unable to get index-dir on
> iso-images-repo-client-0
> [2017-06-28 11:53:05.001146] W [MSGID: 108034]
>
2008 Sep 05
8
Gluster update | need your support
Dear Members,
Even though the Gluster team is growing at a steady pace, our aggressive development
schedule outpaces our resources. We need to expand and also maintain a 1:1 developer /
QA engineer ratio. Our major development focus in the next 8 months will be towards:
* Large scale regression tests (24/7/365)
* Web based monitoring and management
* Hot upgrade/add/remove of storage nodes
2008 Dec 20
14
building 1.4.0rc6
I am trying to build the latest release candidate and have run into a
bit of a problem.
When I run ./configure, I get:
GlusterFS configure summary
===========================
FUSE client : no
Infiniband verbs : no
epoll IO multiplex : yes
Berkeley-DB : no
libglusterfsclient : yes
mod_glusterfs : no ()
argp-standalone : no
I am going to need the gluster FUSE client now
2009 Jan 14
4
locks feature not loading ? (2.0.0rc1)
Hi all,
I upgraded from 1.4.0rc3 to 2.0.0rc1 in my test environment, and while
the upgrade itself went smoothly, i appear to be having problems with
the (posix-)locks feature. :( The feature is clearly declared in the
server config file, and according to the DEBUG-level logs, it is loaded
successfully at runtime ; however, when Gluster attempts to lock an
object (for the purposes of AFR
2017 Jun 29
0
afr-self-heald.c:479:afr_shd_index_sweep
Hi all,
for the upgrade I followed this procedure:
* put node in maintenance mode (ensure no clients are active)
* yum versionlock delete glusterfs*
* service glusterd stop
* yum update
* systemctl daemon-reload
* service glusterd start
* yum versionlock add glusterfs*
* gluster volume heal vm-images-repo full
* gluster volume heal vm-images-repo info
on each server every time
2011 Jun 28
0
[Gluster-devel] volume rebalance still broken
Replying and adding gluster-users. That seems more appropriate?
________________________________________
From: gluster-devel-bounces+jwalker=gluster.com at nongnu.org [gluster-devel-bounces+jwalker=gluster.com at nongnu.org] on behalf of Emmanuel Dreyfus [manu at netbsd.org]
Sent: Tuesday, June 28, 2011 6:51 AM
To: gluster-devel at nongnu.org
Subject: [Gluster-devel] volume rebalance still broken
2012 Jan 31
0
Gluster 3.3: Unable to delete xattrs
Hi,
I'm running the latest qa build of 3.3 and having a bit of trouble with
extended attrs.
[root at compute-0-0 ~]# rpm -qa | grep gluster
glusterfs-geo-replication-3.3.0qa20-1
glusterfs-core-3.3.0qa20-1
glusterfs-rdma-3.3.0qa20-1
glusterfs-fuse-3.3.0qa20-1
Firstly, is it mandatory to mount ext3 file systems with 'user_xattr'? The
issue I'm having is that I would like to delete
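Deleting an extended attribute is a removexattr call, which is what setfattr -x issues under the hood; on ext3 the user.* namespace additionally requires the filesystem to be mounted with user_xattr, which is likely the first thing to check here. A hedged stdlib sketch of set/list/remove (the attribute name user.demo is invented for the demo):

```python
import os
import tempfile

# setfattr -n / setfattr -x map to setxattr / removexattr. On ext3 the
# user.* namespace only works when the fs is mounted with -o user_xattr;
# the trusted.* names gluster itself writes always require root.
fd, path = tempfile.mkstemp(dir=".")
os.close(fd)

os.setxattr(path, b"user.demo", b"1")         # setfattr -n user.demo -v 1
present = "user.demo" in os.listxattr(path)
os.removexattr(path, b"user.demo")            # setfattr -x user.demo
gone = "user.demo" not in os.listxattr(path)

os.remove(path)
print(present, gone)
```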
2017 Jun 29
0
afr-self-heald.c:479:afr_shd_index_sweep
Paolo,
Which document did you follow for the upgrade? We can fix the
documentation if there are any issues.
On Thu, Jun 29, 2017 at 2:07 PM, Ravishankar N <ravishankar at redhat.com>
wrote:
> On 06/29/2017 01:08 PM, Paolo Margara wrote:
>
> Hi all,
>
> for the upgrade I followed this procedure:
>
> - put node in maintenance mode (ensure no client are active)
2017 Sep 22
2
AFR: Fail lookups when quorum not met
Hello,
In AFR we currently allow look-ups to pass through without taking into
account whether the lookup is served from the good or bad brick. We
always serve from the good brick whenever possible, but if there is
none, we just serve the lookup from one of the bricks that we got a
positive reply from.
We found a bug [1] due to this behavior where the iatt values returned
in the lookup call