Jonathan Endersby
2008-Nov-18 15:17 UTC
[Gluster-users] gluster, where have you been all my life?
Hi All

I've been looking for something like Gluster for a while and stumbled on it today via the Wikipedia pages on filesystems. I have a few very simple questions that might even be too simple to be on the FAQ, but if you think any of them are decent, please add them there.

I think it might help if I start with what I want to achieve, then ask the questions. We want to build a high-uptime storage solution that can scale easily, but we want to do it on the cheap, with ordinary SATA disks, "consumer" motherboards, and so on. We'll use GigE to connect the nodes. Performance is not a major criterion, since the required throughput is nothing spectacular.

Based on what I've read, I think we should use:

1. The Unify translator (to present one big FS to my system's master server)
2. AFR (to make my data redundant... I'd like to avoid using RAID on the nodes.)

So my questions are:

1. Is there anything blindingly wrong with what I'm suggesting?
2. Should I use AFR to achieve redundancy?
3. What is the minimum number of machines/bricks in the cluster that will support data redundancy with AFR?
4. The AFR docs seem to indicate that it keeps a copy of the file on *every* node... isn't that wasting a lot of space? I really just need two or three copies, so that one or two nodes can go down at a time.

I'm sure I'll have lots more questions, but for now that should point me in the right direction.

Regards
J.
Raghavendra G
2008-Nov-18 15:36 UTC
[Gluster-users] gluster, where have you been all my life?
Hi Jonathan, comments are inlined.

On Tue, Nov 18, 2008 at 7:17 PM, Jonathan Endersby <arbitraryuser at gmail.com> wrote:

> Based on what I've read, I think we should use:
>
> 1. The Unify translator (to present one big FS to my system's master server)
> 2. AFR (to make my data redundant... I'd like to avoid using RAID on the nodes.)

Yes.

> 1. Is there anything blindingly wrong with what I'm suggesting?

No.

> 2. Should I use AFR to achieve redundancy?

Yes.

> 3. What is the minimum number of machines/bricks in the cluster that will
> support data redundancy with AFR?

Two.

> 4. The AFR docs seem to indicate that it keeps a copy of the file on
> *every* node... isn't that wasting a lot of space? I really just need two
> or three copies, so that one or two nodes can go down at a time.

AFR has to be configured with as many children as the number of copies of the file you want to keep. Hence, as per your requirements, you can give AFR only 2 or 3 nodes as children and make the rest children of unify.

regards,
--
Raghavendra G
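To make that concrete, here is a minimal client-side volume spec sketch in the style of GlusterFS volfiles of that era (1.3/1.4): four servers, paired into two AFR mirrors, unified into one filesystem. The host names (server1..server4), exported volume names (brick, brick-ns), and scheduler choice are all illustrative assumptions, not taken from the thread; check the version you run for exact option names.

volume remote1
  type protocol/client
  option transport-type tcp/client
  option remote-host server1        # hypothetical storage node
  option remote-subvolume brick     # volume name exported on server1
end-volume

volume remote2
  type protocol/client
  option transport-type tcp/client
  option remote-host server2
  option remote-subvolume brick
end-volume

volume remote3
  type protocol/client
  option transport-type tcp/client
  option remote-host server3
  option remote-subvolume brick
end-volume

volume remote4
  type protocol/client
  option transport-type tcp/client
  option remote-host server4
  option remote-subvolume brick
end-volume

# Unify needs a separate namespace brick; kept on server1 here.
volume remote-ns
  type protocol/client
  option transport-type tcp/client
  option remote-host server1
  option remote-subvolume brick-ns
end-volume

# Each AFR keeps one copy on each of its children, so with two
# children per mirror, either node of a pair can go down.
volume mirror0
  type cluster/afr
  subvolumes remote1 remote2
end-volume

volume mirror1
  type cluster/afr
  subvolumes remote3 remote4
end-volume

# Unify presents the two mirrors as a single big filesystem.
volume unify0
  type cluster/unify
  option namespace remote-ns
  option scheduler rr               # round-robin file placement
  subvolumes mirror0 mirror1
end-volume

With this layout each file lives on exactly the two bricks of one mirror pair, so usable capacity is half the raw disk rather than 1/N, which answers the "copy on every node" worry: replication width is set by how many children you give each AFR, not by the total node count.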