Heyho guys,

I've been running GlusterFS for years in a small environment without any
big problems.

Now I'm going to use GlusterFS for a bigger cluster, but I have some
questions :)

Environment:
* 4 servers
* 20 x 2TB HDDs each
* RAID controller
* RAID 10
* 4 bricks => distributed-replicated volume
* Gluster 3.4

1)
I'm wondering whether I can drop the RAID 10 on each server and create a
separate brick for each HDD. The volume would then have 80 bricks (4
servers x 20 HDDs). Is there any experience with write throughput in a
production system with that many bricks? As a bonus, I'd get twice the
HDD capacity.

2)
I've heard a talk about GlusterFS and scaling out. The main point was that
with more bricks in use, the scale-out (rebalance) process takes a long
time; the problem was/is the hash algorithm. So I'm wondering: is it
faster to have one very big brick (RAID 10, 20TB on each server) or many
smaller bricks, and are there any issues either way? Any experiences?

3)
Failing over an HDD is no big deal for a RAID controller with a hot-spare
disk. GlusterFS will rebuild automatically if a brick fails and no data is
present; this will cause a lot of network traffic between the mirror
bricks, but it will handle it just like the RAID controller would, right?


Thanks and cheers
Heiko


--
Anynines.com

Avarteq GmbH
B.Sc. Informatik
Heiko Krämer
CIO
Twitter: @anynines

----
Geschäftsführer: Alexander Faißt, Dipl.-Inf.(FH) Julian Fischer
Handelsregister: AG Saarbrücken HRB 17413, Ust-IdNr.: DE262633168
Sitz: Saarbrücken
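[For reference, a distributed-replicated volume of that shape (replica 2
over 80 single-disk bricks) would be created roughly as follows. This is
only a minimal sketch: the hostnames gluster1..gluster4, the /bricks/diskNN
mount points and the pairing of replicas across servers are assumptions for
illustration, not part of Heiko's actual setup.]

  # One XFS filesystem per physical disk, each mounted as its own brick
  # (repeat for every disk; only disk01 is shown here)
  mkfs.xfs -i size=512 /dev/sdb
  mkdir -p /bricks/disk01
  mount /dev/sdb /bricks/disk01
  mkdir -p /bricks/disk01/brick

  # With "replica 2", consecutive bricks in the list form a mirror pair, so
  # order the bricks so that each pair lands on two different servers.
  # Shown for disk01 and disk02 only; disk03..disk20 follow the same
  # pattern, giving 80 bricks in total.
  gluster volume create bigvol replica 2 \
    gluster1:/bricks/disk01/brick gluster2:/bricks/disk01/brick \
    gluster3:/bricks/disk01/brick gluster4:/bricks/disk01/brick \
    gluster1:/bricks/disk02/brick gluster2:/bricks/disk02/brick \
    gluster3:/bricks/disk02/brick gluster4:/bricks/disk02/brick
  gluster volume start bigvol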
On 09.12.2013 13:18, Heiko Krämer wrote:
> 1)
> I'm wondering whether I can drop the RAID 10 on each server and create a
> separate brick for each HDD. The volume would then have 80 bricks (4
> servers x 20 HDDs). Is there any experience with write throughput in a
> production system with that many bricks? As a bonus, I'd get twice the
> HDD capacity.

I have found problems with bricks to be disruptive, whereas replacing a
RAID member is quite trivial. I would recommend against dropping RAID.

> 3)
> Failing over an HDD is no big deal for a RAID controller with a hot-spare
> disk. GlusterFS will rebuild automatically if a brick fails and no data
> is present; this will cause a lot of network traffic between the mirror
> bricks, but it will handle it just like the RAID controller would, right?

Gluster will not "rebuild automatically" a brick; you will need to
remove/add it manually. Additionally, if a brick goes bad, Gluster won't
do anything about it: the affected volumes will just slow down or stop
working altogether. Again, my advice is KEEP THE RAID and set up good
monitoring of the drives. :)

HTH
Lucian

--
Sent from the Delta quadrant using Borg technology!

Nux!
www.nux.ro
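[To illustrate the manual intervention Lucian mentions, replacing a failed
brick in a replicated volume looks roughly like this. A sketch only: the
volume name, hostnames and brick paths are invented, and the exact steps
can differ between Gluster releases.]

  # Swap the dead brick for a fresh one, then let self-heal copy the data
  # back from the surviving replica.
  gluster volume replace-brick bigvol \
    gluster2:/bricks/disk01/brick gluster2:/bricks/disk01-new/brick \
    commit force

  gluster volume heal bigvol full       # trigger a full self-heal
  gluster volume heal bigvol info       # watch progress / pending entries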
Hi Heiko,

some years ago I had to deliver a reliable storage system that should be
easy to grow in size over time. For that I was in close contact with
PrestoPRIME, who produced a lot of interesting research results accessible
to the public:

http://www.prestoprime.org/project/public.en.html

What struck me was the general concern about how, when and with which
pattern hard drives fail, and about the rebuild time when a "big" (i.e.
2TB+) drive fails. (One of the PrestoPRIME papers deals with that in
detail.)

From that background, my approach was to build relatively small RAID 6
bricks (9 x 2TB + 1 hot spare) and connect them together in a distributed
GlusterFS volume. I never experienced any problems with that and felt
quite comfortable about it. That was for a lot of big-file data exported
via Samba.

At the same time I used another, mirrored, GlusterFS volume as a storage
backend for my VM images; same there, no problems and much less hassle and
headache than DRBD and OCFS2, which I run on another system.

hth
best

Bernhard

Bernhard Glomm
IT Administration

Phone: +49 (30) 86880 134
Fax: +49 (30) 86880 100
Skype: bernhard.glomm.ecologic

Ecologic Institut gemeinnützige GmbH | Pfalzburger Str. 43/44 | 10717 Berlin | Germany
GF: R. Andreas Kraemer | AG: Charlottenburg HRB 57947 | USt/VAT-IdNr.: DE811963464
Ecologic® is a Trade Mark (TM) of Ecologic Institut gemeinnützige GmbH

On Dec 9, 2013, at 2:18 PM, Heiko Krämer <hkraemer at anynines.de> wrote:

> 1)
> I'm wondering whether I can drop the RAID 10 on each server and create a
> separate brick for each HDD. [...]
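[A rough sketch of the two volumes Bernhard describes, with invented host
and path names: in the first volume each brick sits on one small RAID 6
array and Gluster only distributes, while the second volume gets its
redundancy from Gluster itself.]

  # Plain distributed volume over RAID 6 backed bricks (capacity pool;
  # redundancy comes from RAID 6 inside each node, not from Gluster)
  gluster volume create archive \
    store1:/raid6/brick store2:/raid6/brick store3:/raid6/brick
  gluster volume start archive

  # Separate replica-2 volume as the VM image backend (redundancy handled
  # by Gluster here, so a whole node can fail)
  gluster volume create vmstore replica 2 \
    vmhost1:/raid/vmbrick vmhost2:/raid/vmbrick
  gluster volume start vmstore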
----- Original Message -----
> From: "Heiko Krämer" <hkraemer at anynines.de>
> To: "gluster-users at gluster.org List" <gluster-users at gluster.org>
> Sent: Monday, December 9, 2013 8:18:28 AM
> Subject: [Gluster-users] Gluster infrastructure question
>
> Heyho guys,
>
> I've been running GlusterFS for years in a small environment without any
> big problems.
>
> Now I'm going to use GlusterFS for a bigger cluster, but I have some
> questions :)
>
> Environment:
> * 4 servers
> * 20 x 2TB HDDs each
> * RAID controller
> * RAID 10
> * 4 bricks => distributed-replicated volume
> * Gluster 3.4
>
> 1)
> I'm wondering whether I can drop the RAID 10 on each server and create a
> separate brick for each HDD. The volume would then have 80 bricks (4
> servers x 20 HDDs). Is there any experience with write throughput in a
> production system with that many bricks? As a bonus, I'd get twice the
> HDD capacity.

Have a look at:
http://rhsummit.files.wordpress.com/2012/03/england-rhs-performance.pdf

Specifically:
* RAID arrays
* More RAID LUNs for better concurrency
* For RAID 6, 256-KB stripe size

I use a single RAID 6 that is divided into several LUNs for my bricks. For
example, on my Dell servers (with PERC6 RAID controllers) each server has
12 disks that I put into RAID 6. Then I break the RAID 6 into 6 LUNs and
create a new PV/VG/LV for each brick. From there I follow the
recommendations listed in the presentation.

HTH!
-b
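[For the LUN-per-brick layout Brian describes, the per-brick setup might
look something like this. An illustrative sketch: the device names, volume
group names and the 256 KB / 10-data-disk stripe geometry are assumptions
based on the linked presentation, not his exact configuration.]

  # One PV/VG/LV per RAID LUN, then an XFS brick aligned to the RAID 6
  # stripe (example: 256 KB stripe unit, 10 data disks in a 12-disk set)
  pvcreate /dev/sdb
  vgcreate vg_brick1 /dev/sdb
  lvcreate -n lv_brick1 -l 100%FREE vg_brick1

  mkfs.xfs -i size=512 -d su=256k,sw=10 /dev/vg_brick1/lv_brick1
  mkdir -p /bricks/brick1
  mount /dev/vg_brick1/lv_brick1 /bricks/brick1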