Hello to everyone on the list,

I am trying to find a way to "unify" 2 or 3 servers (hopefully more in the future) to host our website's users' content (the files are between a few kB and a few MB). The servers are located in two different places (different countries) because they both host applications that run "locally".

My first idea was to simply mount a "user" directory hosted in the second country via NFS, sshfs, or some better mechanism, but that is only temporary and not very good. Then I discovered there was something called distributed file systems, and I am now experimenting with various ways to achieve my goal: have a DFS hosting all my users' content, wherever the servers are in the world, with some redundancy once we have enough servers to avoid data loss.

Additionally, I would like one of my servers to host only the files of a certain user (a specific directory; for instance, the path looks like 'users/users123'), though I'd like these files to also be stored on my local server (as a local backup). I've seen it is apparently possible using something close to this in the client config:

volume unify
  type cluster/unify
  option namespace client-1
  option scheduler switch
  option switch.case *users/users123*:remote_server local_server
  subvolumes remote_server local_server
end-volume

So, anyway, thank you for your time,

Eric
I'm currently using gluster in a webserver environment. It's mostly OK; there are some stability issues, but I think they'll be resolved when 1.4 goes stable. Comments inline below.

At 05:19 AM 9/18/2008, Monchanin Eric wrote:

> I am trying to find a way to "unify" 2 or 3 servers (hopefully more in
> the future) to host our website's users' content (the files are
> between a few kB and a few MB).
> The servers are located in 2 different places (different countries)
> because they both host applications that run "locally".

You're going to have severe performance issues with things so far apart. You're also highly at risk of network interruptions, so you really need a solution that can handle losing contact with the other server. NFS won't do this for you at all, nor will sshfs. Basically you need to replicate rather than unify (you could do both), but you will need to make sure that there are local copies of all data, or you will be down during a network interruption.

> My first idea was to simply mount via NFS, sshfs or some other better
> way a "user" directory hosted in the second country, but that is only
> temporary, and not very good.
> Then I discovered there was something called distributed file systems,
> and I am now experimenting with various ways to achieve my goal: have a
> DFS hosting all my users' content, wherever the servers are in the
> world, with some redundancy once we have enough servers to avoid
> data loss.

If the data is primarily read-only and changes infrequently, you may be able to get away with an rsync-based solution. (I use Unify to sync the non-time-sensitive files, like email accounts and password files, etc., and unison for the user home/webroot files.)
So if your data isn't time-sensitive or is updated infrequently, this is probably an ideal solution: it's low-bandwidth and not dependent on a reliable network link.

> Additionally, I would like one of my servers to host only the files
> of a certain user (a specific directory; for instance, the path looks
> like 'users/users123'), though I'd like these files to also be stored
> on my local server (as a local backup).

I believe you can do this with unify; some of the translators let you specify this, although you may just need to run multiple gluster clients/servers for each of the users. Ultimately you can accomplish this with gluster one way or another.

> I've seen it is apparently possible using something close to this in
> the client config:
>
> volume unify
>   type cluster/unify
>   option namespace client-1
>   option scheduler switch
>   option switch.case *users/users123*:remote_server local_server
>   subvolumes remote_server local_server
> end-volume

This might work, or perhaps instead of having users as your gluster bricks, you could have a separate config for each user. I'm not sure what sort of resource overhead this would create; it's probably negligible relative to the flexibility. Gluster/AFR would do what you need, though there would be some network-latency issues with the servers being in different countries: on every file access the server has to check the other one to make sure it's got the latest files (you obviously have this with NFS, etc., as well), which may make things really slow, but it's worth testing out at least. I'd wait for 1.4, which has the binary protocol and, therefore, smaller network packets.
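For reference, a minimal client-side AFR sketch in the same style as the config above. The volume names and hosts are hypothetical, and you should check the exact option names against the docs for your gluster version:

# Hypothetical bricks; replace the hostnames with your own.
volume local_server
  type protocol/client
  option transport-type tcp/client
  option remote-host 127.0.0.1
  option remote-subvolume brick
end-volume

volume remote_server
  type protocol/client
  option transport-type tcp/client
  option remote-host server2.example.com
  option remote-subvolume brick
end-volume

# AFR mirrors every file across both subvolumes, so each site keeps
# a full local copy and can keep serving during a WAN outage.
volume mirror
  type cluster/afr
  subvolumes local_server remote_server
end-volume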
The 1.4 branch also seems to have some significant AFR improvements.

> So, anyway, thank you for your time,
>
> Eric

_______________________________________________
Gluster-users mailing list
Gluster-users at gluster.org
http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users