Arden Wiebe
2009-Jan-10 21:40 UTC
[Lustre-discuss] Optimal OSS OST drives for boxed deployment
I have two OSSes, each with six 1TB drives. sda contains the kernel and the operating system; sdb, sdc, sdd, sde and sdf are the targets and currently form only a RAID 5. Is it advisable to add another drive to each of these OSSes to facilitate RAID 6 for the targets? sda has only a / partition and occupies the entire 1TB drive.
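For a rough sense of what each RAID level costs in capacity here (not from the thread; simple arithmetic assuming six equal 1TB drives):

```shell
# Usable capacity for n equal drives of s TB each, per RAID level.
# Simple arithmetic sketch; real array sizes vary slightly.
n=6 s=1
echo "raid5:  $(( (n - 1) * s )) TB"   # one drive's worth of parity
echo "raid6:  $(( (n - 2) * s )) TB"   # two drives' worth of parity
echo "raid10: $(( n * s / 2 ))  TB"    # half the drives are mirrors
```

So moving the existing 5-drive RAID 5 to a 6-drive RAID 6 keeps the same usable space while tolerating a second drive failure.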
Arden Wiebe
2009-Jan-10 21:48 UTC
[Lustre-discuss] Optimal OSS OST drives for boxed deployment
The reason I ask is because I am at the tuning and configuration stage of my deployment, and that includes RAIDing the servers properly. So far this is what it looks like, unmounted from a client. I've had it all mounted from a client connected to an OSS before, but that is not ideal.

[root@lustreone ~]# cat /proc/fs/lustre/devices
  0 UP mgc MGC192.168.0.19@tcp 0c8fe823-9b73-df4d-b3d3-73eb8db70038 5
  1 UP ost OSS OSS_uuid 3
  2 UP obdfilter datafs-OST001e datafs-OST001e_UUID 3
  3 UP obdfilter datafs-OST001f datafs-OST001f_UUID 3
  4 UP obdfilter datafs-OST0020 datafs-OST0020_UUID 3
  5 UP obdfilter datafs-OST0021 datafs-OST0021_UUID 3
  6 UP obdfilter datafs-OST0022 datafs-OST0022_UUID 3
[root@lustreone ~]#

Ouch!

[root@lustretwo Desktop]# cat /proc/fs/lustre/devices
  0 UP mgc MGC192.168.0.19@tcp 19c4feb6-3285-f01f-b528-02dfeaef0b57 5
  1 UP ost OSS OSS_uuid 3
  2 UP obdfilter datafs-OST0023 datafs-OST0023_UUID 5
  3 UP obdfilter datafs-OST0024 datafs-OST0024_UUID 5
  4 UP obdfilter datafs-OST0025 datafs-OST0025_UUID 5
  5 UP obdfilter datafs-OST0026 datafs-OST0026_UUID 5
  6 UP obdfilter datafs-OST0027 datafs-OST0027_UUID 5
[root@lustretwo Desktop]#

[root@lustrethree Desktop]# cat /proc/fs/lustre/devices
  0 UP mgs MGS MGS 11
  1 UP mgc MGC192.168.0.19@tcp 3aba1efe-92c2-88dd-c06b-47be63d63f49 5
[root@lustrethree Desktop]#

[root@lustrefour Desktop]# cat /proc/fs/lustre/devices
  0 UP mgc MGC192.168.0.19@tcp c9c83cf8-2965-4677-5b76-404d738e15bc 5
  1 UP mdt MDS MDS_uuid 3
  2 UP lov datafs-mdtlov datafs-mdtlov_UUID 4
  3 IN osc datafs-OST0000-osc datafs-mdtlov_UUID 5
  4 IN osc datafs-OST0001-osc datafs-mdtlov_UUID 5
  5 IN osc datafs-OST0002-osc datafs-mdtlov_UUID 5
  6 IN osc datafs-OST0003-osc datafs-mdtlov_UUID 5
  7 IN osc datafs-OST0004-osc datafs-mdtlov_UUID 5
  8 IN osc datafs-OST0005-osc datafs-mdtlov_UUID 5
  9 IN osc datafs-OST0006-osc datafs-mdtlov_UUID 5
 10 IN osc datafs-OST0007-osc datafs-mdtlov_UUID 5
 11 IN osc datafs-OST0008-osc datafs-mdtlov_UUID 5
 12 IN osc datafs-OST0009-osc datafs-mdtlov_UUID 5
 13 IN osc datafs-OST000a-osc datafs-mdtlov_UUID 5
 14 IN osc datafs-OST000b-osc datafs-mdtlov_UUID 5
 15 IN osc datafs-OST000c-osc datafs-mdtlov_UUID 5
 16 IN osc datafs-OST000d-osc datafs-mdtlov_UUID 5
 17 IN osc datafs-OST000e-osc datafs-mdtlov_UUID 5
 18 IN osc datafs-OST000f-osc datafs-mdtlov_UUID 5
 19 IN osc datafs-OST0010-osc datafs-mdtlov_UUID 5
 20 IN osc datafs-OST0011-osc datafs-mdtlov_UUID 5
 21 IN osc datafs-OST0012-osc datafs-mdtlov_UUID 5
 22 IN osc datafs-OST0013-osc datafs-mdtlov_UUID 5
 23 UP mds datafs-MDT0000 datafs-MDT0000_UUID 5
 24 UP osc datafs-OST0014-osc datafs-mdtlov_UUID 5
 25 UP osc datafs-OST0015-osc datafs-mdtlov_UUID 5
 26 UP osc datafs-OST0016-osc datafs-mdtlov_UUID 5
 27 UP osc datafs-OST0017-osc datafs-mdtlov_UUID 5
 28 UP osc datafs-OST0018-osc datafs-mdtlov_UUID 5
 29 UP osc datafs-OST0019-osc datafs-mdtlov_UUID 5
 30 UP osc datafs-OST001a-osc datafs-mdtlov_UUID 5
 31 UP osc datafs-OST001b-osc datafs-mdtlov_UUID 5
 32 UP osc datafs-OST001c-osc datafs-mdtlov_UUID 5
 33 UP osc datafs-OST001d-osc datafs-mdtlov_UUID 5
 34 UP osc datafs-OST001e-osc datafs-mdtlov_UUID 5
 35 UP osc datafs-OST001f-osc datafs-mdtlov_UUID 5
 36 UP osc datafs-OST0020-osc datafs-mdtlov_UUID 5
 37 UP osc datafs-OST0021-osc datafs-mdtlov_UUID 5
 38 UP osc datafs-OST0022-osc datafs-mdtlov_UUID 5
 39 UP osc datafs-OST0023-osc datafs-mdtlov_UUID 5
 40 UP osc datafs-OST0024-osc datafs-mdtlov_UUID 5
 41 UP osc datafs-OST0025-osc datafs-mdtlov_UUID 5
 42 UP osc datafs-OST0026-osc datafs-mdtlov_UUID 5
 43 UP osc datafs-OST0027-osc datafs-mdtlov_UUID 5
[root@lustrefour Desktop]#
Arden Wiebe
2009-Jan-11 00:01 UTC
[Lustre-discuss] Optimal OSS OST drives for boxed deployment
Purchased 4 more 1TB Spinpoint drives for the OSSes. This should allow for proper RAID 6, if the boards, power supplies and backup power can handle the load.
Peter Grandi
2009-Jan-11 19:23 UTC
[Lustre-discuss] Optimal OSS OST drives for boxed deployment
>>> I have two OSS each have six 1TB drives. sda contains the
>>> kernel and the operating system. sdb,sdc,sdd,sde,sdf are the
>>> targets and make only a raid 5.

>>> Is it advisable to add another drive to each of these OSS's
>>> to facilitate raid 6 for the targets?

Why RAID6? Do you realize that it has a very different performance profile from RAID5? And all parity RAID levels have some unpleasant combination of performance and availability tradeoffs.

>>> sda has only / partition and occupies the entire 1TB drive.

Bit of a waste, and no redundancy. [ ... ]

> Purchased 4 more 1TB spinpoint drives for the OSS's.

Using the same drive type for all (or most) drives in an array is not a wonderful idea, never mind having them purchased at the same time and delivered together. Still, with 8 drives per OSS you can get pretty impressive performance if you use the right RAID setup.

> This should allow for proper raid 6 if the boards, power
> supplies and backup power can handle the load.

There is nothing "proper" about RAID6/5: http://WWW.BAARF.com/

Unless there are very good reasons (and there are AFAICS only two cases in which there are), one should nearly always use RAID10 (in some cases RAID1, and in very few cases RAID0). In your case, the sort of layout I would use is:

* Each drive should have two partitions (or more, if more than one OST is desired): one of say 10-30GB, and the other covering the rest of the disk minus the last 10GB or so.

* The first partitions of the first 2 disks should be a RAID1 holding the root filesystem. One might want to put the MDT in there too. The other first partitions could be used for swap (no RAID) or for '/var', for example (RAID10).

* The second (and further) partitions of all disks go into one RAID10 for the OSTs.

There is a case for leaving the 8th drive on each OSS as a hot spare, and keeping a 9th drive on a shelf as a cold spare.
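The layout described above could be sketched with mdadm roughly as follows. This is a sketch only, with hypothetical device names (sda through sdh) and partition sizes; it is not meant to be run verbatim on live disks, and the mgsnode address is taken from the listings earlier in the thread.

```shell
# Assumes 8 drives sda..sdh, each already partitioned as:
#   p1 = ~20GB, p2 = rest of disk minus the last ~10GB.
# Hypothetical device names; destructive commands, do not run on live disks.

# RAID1 over the first partitions of the first two disks, for /
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

# RAID10 over the second partitions of all eight disks, for the OST
mdadm --create /dev/md1 --level=10 --raid-devices=8 \
    /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2 \
    /dev/se2  /dev/sdf2 /dev/sdg2 /dev/sdh2

# Format the RAID10 array as a Lustre OST pointing at the MGS
mkfs.lustre --ost --mgsnode=192.168.0.19@tcp /dev/md1
```

With a hot-spare variant, you would use --raid-devices=7 plus --spare-devices=1 (or 6+2 and keep the 9th on the shelf, as suggested above).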
If you were really into maximum availability, you could do RAID10 across the two OSSes using DRBD; there are a few HOWTOs on how to do that (but performance will suffer, of course).
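A minimal DRBD resource definition for mirroring an OST block device between the two OSSes might look like the following (DRBD 8.x style; the hostnames match the thread, but the device paths, the port, and lustretwo's address are assumptions):

```
resource ost0 {
  protocol C;               # synchronous replication
  device    /dev/drbd0;
  disk      /dev/md1;       # local RAID array backing the OST
  meta-disk internal;
  on lustreone {
    address 192.168.0.19:7789;
  }
  on lustretwo {
    address 192.168.0.20:7789;   # hypothetical address for lustretwo
  }
}
```

The OST would then be formatted on /dev/drbd0 on the primary node instead of on the raw array, so writes are replicated to the peer before completing.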