Hi,

I am having trouble with the following setup and hope that somebody will be able to help.

We have a couple of workstations and two file servers. Let's keep it at two workstations for now, since I can add more clients later. The workstations are called echo and reality, and the servers are dream and neo.

I am wondering whether it is possible to have a unified filesystem between echo and reality, using the nufa scheduler to keep things local as much as possible, and at the same time to have afr running on the servers neo and dream so that each of them holds a complete copy of the workstations (echo and reality).

I can get it to work with only afr, or only unify, but combining the two somehow does not work. The best I have managed so far is empty directories on the servers.

Worst case, I drop the unified part, but it would be nice to keep it because it is so much faster and, most of the time, files are only accessed locally anyway. With client-side afr alone, nfs seemed to be faster (with varying performance translators); with server-side afr, nfs and glusterfs were about the same speed - with the big advantage of not having a single point of failure. :)

Maybe this setup just doesn't make any sense at all ... Well, I hope somebody has some suggestions!

Wolfgang
I'm not sure if anyone has responded to you. If not, it might be helpful to post your config files. My comments are inline below.

At 09:11 PM 8/4/2008, Wolfgang Pauli wrote:

>I am wondering whether it is possible to have a unified filesystem between
>echo and reality, using the nufa scheduler to keep things local as much as
>possible, and at the same time to have afr running on the servers neo and
>dream so that those guys each have a complete copy of the workstations (echo
>and reality).

When you say unified filesystem, do you mean the same filesystem available on both clients, rather than a union filesystem (combining multiple filesystems into one large virtual filesystem) via unify? If so, this is fairly standard. What I'd suggest is to look at the wiki examples for server-side AFR and then set up your clients from those. My memory escapes me, sorry, but there was an example which used IP round-robin to get a kind of automated server failover; that may not help with load balancing, though. I would also suggest setting up the local cache translator to reduce your network overhead. If you need better load balancing on the servers, you can use NUFA or something like that to help out.

>I can get it to work with only afr, or only unify, but combining the two
>somehow does not do it. The best I got so far was to have empty directories
>on the servers.

What are you trying to unify? If your servers are AFRing each other, then that is one filesystem. If you then want to unify multiple AFR bricks, you can do that, but again, I think you're better off having all the hard work done on the servers and letting the clients just mount the unified volume from them. I think it's possible to AFR the unify metadata nowadays, but I don't know for sure.

>Worst case is that I go with not doing the unified thing, but it would be nice
>because it is so much faster and most of the time, files are just accessed
>locally anyway. With only client side afr, nfs seemed to be faster (with
>varying performance translators), and with server side afr, nfs and glusterfs
>were about the same speed - with the big advantage of not having a single
>point of failure. :)

Again, I think all your problems are solved by doing the hard work on the servers and having the clients just mount the unified brick from the server.

>Maybe this setup just doesn't make any sense at all ...
>
>Well, hope somebody has some suggestions!

I'm not sure if I pointed you in the right direction or down a path of despair ... hopefully the former.

Keith
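For reference, a server-side AFR layout along the lines Keith describes might look roughly like the sketch below on dream, with a mirror-image file on neo; clients would then just mount the exported home-afr volume. The volume names, paths and port handling here are illustrative assumptions based on the hosts mentioned in this thread, not a tested configuration.

-----------
# glusterfs-server-dream.vol (sketch; neo would use the mirror image of this)

volume dream-local
  type storage/posix
  option directory /glusterfs
end-volume

# the brick exported by the other server
volume neo-local
  type protocol/client
  option transport-type tcp/client
  option remote-host neo
  option remote-subvolume neo-local
end-volume

# server-side mirror: writes arriving here land on both servers
volume home-afr
  type cluster/afr
  subvolumes dream-local neo-local
end-volume

volume server
  type protocol/server
  option transport-type tcp/server
  subvolumes dream-local home-afr
  option auth.ip.dream-local.allow *
  option auth.ip.home-afr.allow *
end-volume
-----------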
Hi,

Thanks for your reply.

Here is the part of my configuration that might explain better what I am trying to do.

-----------
# glusterfs-server-dream.vol

# dream-mirror is the directory where I would like to have a complete copy of unify0
volume dream-mirror
  type storage/posix
  option directory /glusterfs-mirror
end-volume

volume dream
  type storage/posix
  option directory /glusterfs
end-volume

# namespace for unify0
volume dream-ns
  type storage/posix
  option directory /glusterfs-ns
end-volume

# another node
volume neo
  type protocol/client
  option transport-type tcp/client
  option remote-host neo              # defined in /etc/hosts
  option remote-subvolume neo
end-volume

# another node
volume echo
  type protocol/client
  option transport-type tcp/client
  option remote-host echo
  option remote-subvolume echo
end-volume

volume unify0
  type cluster/unify
  option scheduler rr                 # round robin; going to switch to NUFA
  option namespace dream-ns
  subvolumes dream echo neo
end-volume

volume afr0
  type cluster/afr
  subvolumes unify0 dream-mirror
end-volume

volume server
  type protocol/server
  option transport-type tcp/server
  subvolumes dream dream-ns unify0
  option auth.ip.dream.allow *
  option auth.ip.dream-ns.allow *
  option auth.ip.unify0.allow *
# option auth.ip.dream-mirror.allow *
# option auth.ip.afr0.allow *
end-volume
----------

# glusterfs-client-dream.vol

volume unify0
  type protocol/client
  option transport-type tcp/client
  option remote-host 127.0.0.1        # localhost
  option remote-subvolume unify0
end-volume
----------------

The problem with our network is that it is slow (100MBit/s). So it would be great if all files (talking about /home/*) would just stay on the workstations unless they are needed somewhere else. That is why I would like to do an afr over a unify volume, but so far the dream-mirror volume remains empty.

Thanks!

Wolfgang
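For what it's worth, with the 1.3-era client a spec file like the one above would typically be mounted with something along the line below; the spec path and mount point are assumptions.

glusterfs -f /etc/glusterfs/glusterfs-client-dream.vol /mnt/glusterfs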
OK, I see what you're trying to do. I believe the afr of the unify in that config should be fine. I'm guessing what you're experiencing is that dream-mirror stays empty?

For the AFR brick, I'd set the unify volume as the local read volume, unless you want the other nodes to read from the mirror, which I think you don't.

Is anything mounting afr0? As I understand it, the afr happens when files are accessed through the AFR volume, so just defining the volume doesn't accomplish anything. When you access a file through the afr volume (or a volume below the afr volume), the AFR translator asks both bricks for the extended attributes and the file timestamp (and does the same for directories). If they're not the same, it copies the newer one over to the mirror(s). However, if no file request ever passes through the AFR translator, nothing gets replicated.

So, some node has to mount the AFR brick.

Your configuration is actually a good one for doing periodic mirroring: you mount the afr volume, run a find (there are examples in the wiki) across the new mount point, which triggers auto-healing of the entire volume, and then unmount it. That gives you effectively a point-in-time snapshot. You then mount it again and run find to auto-heal.

However, if you want "live" replication, then the AFR volume needs to be in use and active. Ideally, ALL the nodes should use the AFR config and mount the AFR volume, with the unify brick set as the read volume. This way, any time a node reads data it reads from the unify, and any time one of them writes data it gets written to the mirror via the AFR translator.

I hope I understood your intentions clearly.

Keith

At 04:09 PM 8/5/2008, Wolfgang Pauli wrote:
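The find run Keith mentions is usually given in the wiki along these lines; /mnt/afr0 is an assumed mount point for the afr0 volume, and the command simply forces every file to be opened once so the AFR translator can compare and heal the copies.

find /mnt/afr0 -type f -exec head -c1 '{}' \; > /dev/null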
Hi,

Hm ... I think I understood your email and we are on the same page. However, it seems that an afr of a unify volume and a posix volume does not work.

Files created in the unify volume never show up in the mounted afr volume. Creating a file in the afr volume works, but is followed by Input/Output errors until I do a setfattr -x trusted.glusterfs.version on the directories. (I can give a more detailed description in case this looks like a bug.)

Is it possible that an afr over a unify is just not supposed to work?

Thanks!

Wolfgang

On Tuesday 05 August 2008 20:56:10 Keith Freedman wrote:
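In case it helps others hitting the same Input/Output errors: the AFR bookkeeping lives in extended attributes on the backend directories, so you can look at what is there before removing it. getfattr and setfattr come from the standard attr package; /glusterfs-mirror/somedir below is just a placeholder path on the dream-mirror brick.

# dump the glusterfs attributes on a backend directory
getfattr -d -m trusted.glusterfs -e hex /glusterfs-mirror/somedir

# the workaround mentioned above
setfattr -x trusted.glusterfs.version /glusterfs-mirror/somedir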
AHA, yes. I now see the problem, and this is probably a bug report that needs to be sent to the devs.

If you mean that the xattrs aren't being set on the UNIFY volume, then this makes sense: most likely the extended attributes aren't being passed through the unify volume, so they never get written to disk.

I would suppose there are two options for this. One is for gluster to always use extended attributes when the filesystem supports them (this could be an option on the storage translator, possibly to turn it off [or on, depending on what they think the default should be]). The advantage of that is that AFR can be enabled on a gluster filesystem at any time without any preparation. The other is to require the intermediary translators to pass extended attributes through when necessary.

On a personal note ... all my brothers went to CU; I went to CSU :)

Keith

At 09:41 PM 8/5/2008, Wolfgang Pauli wrote:
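A quick sanity check that is easy to miss: make sure the backend filesystem of every brick accepts extended attributes at all (trusted.* attributes need root; user.* attributes on ext3 additionally need the user_xattr mount option). This is just a generic check with the attr tools, not something from the thread; /glusterfs-mirror is the brick directory from the config above.

setfattr -n trusted.test -v ok /glusterfs-mirror
getfattr -n trusted.test /glusterfs-mirror
setfattr -x trusted.test /glusterfs-mirror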
Too bad. I will just set this up as one big unify volume then. That should work well enough with nightly backups via tob.

Regarding your personal note: you are lucky that you did not have to deal with the CU network policy then. In our offices, 10MBit/s is free, 100MBit/s is 20 per jack, and 1GBit/s is 90 per jack. That's why we need to use the unify translator with the nufa scheduler, to keep things local. :( Even though client-side afr would be the way to go ... I tried all the different performance translators to speed things up, but to no avail.

Wolfgang

On Tuesday 05 August 2008 22:58:40 Keith Freedman wrote:
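For completeness, the plain unify-with-NUFA setup Wolfgang falls back to might look roughly like the sketch below on echo (with the equivalent on each other machine). The nufa.local-volume-name option follows the 1.3-era scheduler docs as far as I recall them, and the volume names and paths are assumptions, so treat this as a sketch rather than a verified config.

-----------
# sketch: unify across all bricks, with NUFA preferring the local brick

volume echo-local
  type storage/posix
  option directory /glusterfs
end-volume

# bricks exported by the other machines (reality, dream, neo) would be
# defined the same way; only one is shown here
volume reality
  type protocol/client
  option transport-type tcp/client
  option remote-host reality
  option remote-subvolume reality-local
end-volume

# namespace brick (simplified here; it normally lives in one agreed place)
volume home-ns
  type storage/posix
  option directory /glusterfs-ns
end-volume

volume unify0
  type cluster/unify
  option scheduler nufa
  option nufa.local-volume-name echo-local   # keep new files on the local brick
  option namespace home-ns
  subvolumes echo-local reality
end-volume
-----------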