On the write-behind translator: is there a way to wait for just one of the
AFR replicas to return its close response, finish replicating the data in
the background (which write-behind currently does for writes), and issue
the close call to the remaining replica servers long after the application
has moved on, since at least one of the replicas is keeping up?
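
If I read Anand's reply in the digest below correctly, write-behind's
background-close option ("flush-behind") is the closest existing knob; a
rough sketch of it (volume names are made up, and the option is noted to
be unsafe):

  volume writebehind
    type performance/write-behind
    option flush-behind on   # close() returns before the data is fully written
    subvolumes replicate0
  end-volume

What I'm after is slightly stronger: block until the first replica
acknowledges the close, rather than not blocking at all.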
Thanks
On Sun, Mar 8, 2009 at 12:00 PM, <gluster-users-request at gluster.org>
wrote:
>
> Today's Topics:
>
> 1. How caches are working on AFR? (Stas Oskin)
> 2. Problems compiling Gluster Patched fuse. (Evan Hart)
> 3. Re: How caches are working on AFR? (Anand Babu Periasamy)
> 4. GlusterFS running, but not syncing is done (Stas Oskin)
> 5. Accessing the host glusterFS directory from OpenVZ virtual
> server (Stas Oskin)
> 6. mounting glusterfs on /etc/mtab read only (Enno Lange)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Sun, 8 Mar 2009 02:22:03 +0200
> From: Stas Oskin <stas.oskin at gmail.com>
> Subject: [Gluster-users] How caches are working on AFR?
> To: gluster-users <gluster-users at gluster.org>
>
> Hi.
>
> I have a question for the GlusterFS developers.
>
> If I have a pair of servers in client-server AFR (A and B), and the
> application running on A writes to disk, how soon does the application
> receive an OK and continue?
>
> After the cache on server A is filled with the data (with everything then
> synchronized in the background), or only after the cache on server B gets
> the data as well?
>
> Thanks.
>
> ------------------------------
>
> Message: 2
> Date: Sat, 7 Mar 2009 13:59:19 -0800
> From: Evan Hart <ehart at devnada.com>
> Subject: [Gluster-users] Problems compiling Gluster Patched fuse.
> To: gluster-users at gluster.org
>
> I'm having problems compiling fuse-2.7.4glfs11 on this system:
>
>   # uname -a
>   Linux cdc 2.6.27-gentoo-r8 #1 SMP Fri Mar 6 12:21:10 PST 2009 x86_64
>   Quad-Core AMD Opteron(tm) Processor 2350 AuthenticAMD GNU/Linux
>
> http://pastebin.com/m2dc978be
>
> Any help would be great.
>
> Thanks
>
> ------------------------------
>
> Message: 3
> Date: Sat, 07 Mar 2009 18:18:15 -0800
> From: Anand Babu Periasamy <ab at gluster.com>
> Subject: Re: [Gluster-users] How caches are working on AFR?
> To: Stas Oskin <stas.oskin at gmail.com>
> Cc: gluster-users <gluster-users at gluster.org>
>
> Replicate in 2.0 performs atomic writes by default. This means writes
> return control to the application only after both (or more) volumes have
> been successfully written.
>
> To mask the performance penalty of atomic writes, you should load
> write-behind on top of it. Write-behind returns control as soon as it
> receives the write call from the application, but it continues to write
> in the background. Write-behind also performs block aggregation: smaller
> writes are aggregated into fewer large writes.
>
> POSIX says an application should verify the return status of the close
> system call to ensure all writes were successfully written. If there are
> any pending writes, the close call will block until all the data is
> completely written. There is also an option in write-behind to perform
> even the close in the background; it is unsafe and turned off by default.
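>
> For illustration, a minimal sketch of such a client-side volume spec (the
> volume and subvolume names are placeholders, and the protocol/client
> volumes for the individual servers are omitted):
>
>   volume replicate0
>     type cluster/replicate
>     subvolumes client-a client-b
>   end-volume
>
>   volume writebehind
>     type performance/write-behind
>     # option flush-behind on   # the background-close option: unsafe, off by default
>     subvolumes replicate0
>   end-volume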
>
> Applications that expect every write to succeed issue synchronous writes.
>
> I hope this answers your question.
>
> Happy Hacking,
> --
> Anand Babu Periasamy
> GPG Key ID: 0x62E15A31
> Blog [http://ab.multics.org]
> GlusterFS [http://www.gluster.org]
> The GNU Operating System [http://www.gnu.org]
>
>
>
> Stas Oskin wrote:
> > Hi.
> >
> > I have a question for the GlusterFS developers.
> >
> > If I have a pair of servers in client-server AFR (A and B), and the
> > application running on A writes to disk, how soon does the application
> > receive an OK and continue?
> >
> > After the cache on server A is filled with the data (with everything
> > then synchronized in the background), or only after the cache on
> > server B gets the data as well?
> >
> > Thanks.
> >
>
>
> ------------------------------
>
> Message: 4
> Date: Sun, 8 Mar 2009 10:58:17 +0200
> From: Stas Oskin <stas.oskin at gmail.com>
> Subject: [Gluster-users] GlusterFS running, but not syncing is done
> To: gluster-users <gluster-users at gluster.org>
>
> Hi.
>
> I'm trying to run my first GlusterFS setup: basically two servers
> running in AFR mode.
>
> While the servers find and connect to each other, the files are
> unfortunately not being synchronized between them. I mean, when I place
> a file on one of the servers, the other one does not receive it.
>
> Here is what I receive on each of the servers:
> 2009-03-08 02:41:43 N [server-protocol.c:7186:mop_setvolume] server:
> accepted client from 192.168.253.41:1020
> 2009-03-08 02:41:48 D [client-protocol.c:5924:client_protocol_reconnect]
> home2: breaking reconnect chain
> 2009-03-08 02:41:48 D [client-protocol.c:5924:client_protocol_reconnect]
> home2: breaking reconnect chain
>
> and
>
> 2009-03-08 02:41:43 D [client-protocol.c:6557:notify] home2: got
> GF_EVENT_CHILD_UP
> 2009-03-08 02:41:43 D [socket.c:951:socket_connect] home2: connect () called on transport already connected
> 2009-03-08 02:41:43 N [client-protocol.c:5853:client_setvolume_cbk] home2:
> connection and handshake succeeded
> 2009-03-08 02:41:53 D [client-protocol.c:5924:client_protocol_reconnect]
> home2: breaking reconnect chain
> 2009-03-08 02:41:53 D [client-protocol.c:5924:client_protocol_reconnect]
> home2: breaking reconnect chain
>
> Any idea why the files are not synchronized, and how this can be diagnosed?
>
> Thanks.
>
> ------------------------------
>
> Message: 5
> Date: Sun, 8 Mar 2009 11:59:51 +0200
> From: Stas Oskin <stas.oskin at gmail.com>
> Subject: [Gluster-users] Accessing the host glusterFS directory from
> OpenVZ virtual server
> To: gluster-users <gluster-users at gluster.org>
>
> Hi.
>
> This might be unrelated to this list, but I'm looking for a way to
> access a GlusterFS partition from an OpenVZ virtual server.
>
> Meaning, a virtual server running on a particular host will access that
> host's GlusterFS directory.
>
> The immediate idea I had was to make the virtual server a client of
> GlusterFS, since that would basically go over same-machine networking,
> but perhaps there is a way to write the data directly to the host
> partition?
>
> Thanks.
>
> ------------------------------
>
> Message: 6
> Date: Sun, 08 Mar 2009 14:05:45 +0100
> From: Enno Lange <Enno.Lange at iem.rwth-aachen.de>
> Subject: [Gluster-users] mounting glusterfs on /etc/mtab read only
> To: gluster-users at gluster.org
>
> Hi,
>
> we are running a cluster of diskless Gentoo systems, so /etc/mtab is
> linked to /proc/mounts as usual. Trying to mount a glusterfs fails
> because mtab is not writable. Is there by any chance a way to pass '-n'
> or something equivalent to the underlying mount -t fuse process?
>
> The actual workaround we deployed is to link /etc/mtab to a local file
> on a scratch partition, which in my opinion is quite unsatisfying: the
> mount then succeeds, but the mounted fs does not appear in the linked
> /etc/mtab.
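>
> Concretely, the workaround amounts to something like the following (the
> scratch path is just an example):
>
>   ln -sf /scratch/mtab /etc/mtab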
>
> Enno Lange
>
>
>
> ------------------------------
>
>
> End of Gluster-users Digest, Vol 11, Issue 12
> *********************************************
>