Why will CentOS 6 not boot from an mdraid 10 partition?
On 6.3.2012 16:29, William Warren wrote:
> why will CentOS 6 not boot from an mdraid 10 partition?

Because grub can't read an mdraid 10 array. Make a /boot on mdraid 1 and put the rest on mdraid 10.

--
Kind Regards,
Markus Falb
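(A minimal sketch of that layout, assuming four disks sda-sdd, each with a small first partition for /boot and the rest in a second partition; the device names and partitioning are illustrative, not from the thread:)

    # 4-way RAID1 for /boot -- every member holds a full copy, so
    # legacy grub can read it like a plain partition.  metadata 1.0
    # keeps the md superblock at the end of each member, out of
    # grub's way.
    mdadm --create /dev/md0 --level=1 --metadata=1.0 \
        --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

    # RAID10 for everything else.
    mdadm --create /dev/md1 --level=10 \
        --raid-devices=4 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2

    mkfs.ext4 /dev/md0    # becomes /boot
    mkfs.ext4 /dev/md1    # or carve it up with LVM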
On Tue, Mar 6, 2012 at 9:29 AM, William Warren
<hescominsoon at emmanuelcomputerconsulting.com> wrote:
> why will CentOS 6 not boot from an mdraid 10 partition?

It has to load code before you have the kernel that understands RAID or how to detect it. That's why they call it booting.

--
Les Mikesell
lesmikesell at gmail.com
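(A quick way to see Les's point, assuming a RAID1 /boot built with end-of-device metadata as sketched above: a lone RAID1 member is readable as an ordinary filesystem, which is all the pre-kernel boot stages can manage, while a lone RAID10 member is not:)

    # A RAID1 member with 0.90/1.0 metadata looks like a plain
    # filesystem, so it can be read with no md code at all:
    mount -o ro /dev/sda1 /mnt && ls /mnt && umount /mnt

    # A RAID10 member carries only some of the stripes, so this
    # fails (no mountable filesystem on a lone member); the
    # kernel's md driver must assemble the array first:
    mount -o ro /dev/sda2 /mnt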
> i then have to redo my entire array...and lose space inside the
> array. Plus if i raid1 it then i only have two bootable disks..at
> least this way i have 4 bootable disks..:)

Lose space? 100 or 200MB? Why the heck wouldn't you be able to spare 100 or 200MB out of the gigantic size of today's drives?
>> Plus if i raid1 it then i only have two bootable disks..at least
>> this way i have 4 bootable disks..:)

No, you don't have 4: each member of a RAID10 array holds only part of the striped data, so no single disk is readable, let alone bootable, on its own. Please study the way a RAID10 array works.
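(For what it's worth, a RAID1 /boot need not mean only two bootable disks: mdraid 1 supports n-way mirrors, so the 4-way /boot sketched earlier plus boot code on every drive gives four disks the BIOS can start from. A sketch with legacy grub; the hd numbering assumes the BIOS enumerates the four disks in order:)

    # Put grub's stage1 in each disk's MBR so any surviving
    # member of the 4-way /boot mirror can start the system.
    grub --batch <<'EOF'
    root (hd0,0)
    setup (hd0)
    root (hd1,0)
    setup (hd1)
    root (hd2,0)
    setup (hd2)
    root (hd3,0)
    setup (hd3)
    EOF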
On Wed, Mar 7, 2012 at 2:09 PM, Reindl Harald <h.reindl at thelounge.net> wrote:
>
>> If the future continues anything like the past, you'll be able to
>> buy something new with twice the speed and 10x the space by then and
>> be better off starting over than allocating more than you need today.
>
> you are re-installing your servers permanently?

For an unfortunately large number, yes.

> i do not, i move the VMware-Images to new hosts / SAN storages
> without any interruption and if you maintain them well this
> works over many years and dist-upgrades

Our servers dealing with financial exchange data live on the bleeding edge of capacity and can't afford the overhead of virtualization. So we end up replacing the backend servers wholesale every few years, re-purposing the older generation for less demanding tasks. But within the realm of hardware with compatible drivers, there's not a whole lot of difference in the effort to copy a VM image or a hardware image with Clonezilla or a similar approach. I agree that VMware is great where it works, though.

--
Les Mikesell
lesmikesell at gmail.com
Les Mikesell wrote:
> On Wed, Mar 7, 2012 at 3:23 PM, <m.roth at 5-cent.us> wrote:
>>>> a) You think I, or a *lot* of other folks, are going to do that at
>>>> home? (Please - I'm trying to get my fiancee to at *least* go from
>>>> *shudder* Vista to Win7)
>>>
>>> If you leave them on, add up the power cost of running an old box
>>> for years.
>>
>> Sorry, that doesn't work either: *everything* new seems to need a lot
>> more power than the older stuff. Certainly, last time I upgraded my
>> own system, I had to buy one that was 150% the power of the old one.
>
> That was probably before power became a big thing for servers - in
> most cases now power and cooling are the limiting factors for
> expansion in a data center. Most of the new servers use 2.5" drives,
> and while they might still use as much power per 1U of space due to
> using more blades in a chassis or having more CPUs and RAM, we get
> much more performance from the same space and power consumption. It's
> not such a big deal for desktops, but you can get small low-power
> systems if you look around - or just use a laptop that will sleep
> when you close the lid.

Heh. Many of the new servers we are getting are on the order of 48 or 64 cores, and they eat and drink power. The same UPS that would handle six 4- or 8-core boxes can handle *three*, if we're lucky, when a clustering job's running....

mark
On Thursday, March 08, 2012 10:52:02 AM Les Mikesell wrote:
> Yes, part of the power savings are deceptive - they only kick in when
> the CPUs are idle, and your users would be one of the rare cases that
> peg them for long intervals. I think this is getting better in the
> current generation but haven't followed the latest changes.

In scientific computing there is no such thing as 'enough cores', and if three 48-core servers physically fit in the space of three older 6- or 8-core servers, then the users will want to fill that space and get three more 48-core servers, and so your power density has doubled. So the '150%' power increase is (if I'm reading Mark correctly) per *rack unit*, not per core. And, again, in this space you don't get any savings in power, since this sort of computing eats cores for breakfast. And virtualization to save power will not address this type of user's need. I live in the same sort of world, just on a smaller scale, and my biggest power consumer is storage, not compute, but I thoroughly understand Mark's points.
On Wednesday, March 07, 2012 05:06:13 PM Les Mikesell wrote:
> It's not such a big deal for desktops, but you can get small low power
> systems if you look around - or just use a laptop that will sleep when
> you close the lid.

FWIW, Aleutia (www.aleutia.com) makes some nice, really low-power units. While they come from the factory preloaded with Ubuntu by default, they would make great CentOS machines.
On Thursday, March 08, 2012 12:37:30 PM Ross Walker wrote:
> On Mar 8, 2012, at 11:06 AM, Lamar Owen <lowen at pari.edu> wrote:
>> I live in the same sort of world, just on a smaller scale, and my
>> biggest power consumer is storage, not compute, but I thoroughly
>> understand Mark's points.
>
> So, get more power and UPS.

So, can I put you down as being willing to donate the $2.5 million necessary to increase our power capacity (I'm looking out the door at two of our four 1MVA 12.4kV-to-480/277V transformers (which we, not the utility, own), and any upgrade will involve the incoming buried primary) and to get a couple or three more Mitsubishi 500kVA units? No? It's a great tax write-off, seeing that we are a 501(c)(3) public not-for-profit foundation.....we'll give you a nice tax receipt. :-) Oh, and the $1.2 million for an additional 100 tons of redundant HVAC while we're at it....

> The specs are published, so power consumption shouldn't be a "surprise".

It's not a surprise; it's just more cost than the servers themselves, and budgets are tight.
On Thursday, March 08, 2012 01:15:59 PM Les Mikesell wrote:
> Usually your whole building is designed around a certain amount of
> heat load, and data centers designed a few years back are probably
> already maxed out due to the earlier rounds of density increases. So
> you will need at least more A/C and probably real estate too.

And don't forget the floor load. Our EMC CLARiiONs are heavy enough that I can't use a tile under them with any holes of any kind in it (especially vents), or I get tile surface deflection that's out of spec. And our floor in the main data center is rated at 1,500 lbs (avoirdupois) per square foot. And the subfloor loading has to be considered, as well as how much air the underfloor will 'flow' in CFM......