I've run into a problem with grub and an installed RAID card, and the fix is temporarily eluding me. Basically, I have a machine with two 9GB drives on an aic7xxx controller and an IBM ServeRAID 3H with two logical drives at the current time. The device map (/boot/grub/device.map) is:

    (fd0)  /dev/fd0
    (hd2)  /dev/sda
    (hd3)  /dev/sdb
    (hd0)  /dev/sdc
    (hd1)  /dev/sdd

where sda1 is /boot, sda2 is /, etc.

The problem pops up when I define a new RAID drive and the device map changes. I added a new RAID0 drive today (sdd1), which bumped /dev/sda from hd1 to hd2. Now grub halts trying to find the kernel because it's trying to boot from hd1 instead of hd2. I've bypassed the problem (until the next reboot) by going into the grub command line and changing the boot device to hd2. (Thank God it's grub and not lilo; you get a chance to fix errors on the fly.)

What do I have to change in the system to write this change out to the grub boot setup permanently? I seem to recall that it's a combination of "grub-install --recheck", fixing the device mapping, making some kind of change to /etc/grub.conf, and doing a grub-install again. I had the damn thing working with RH9 -> CentOS 3.3 -> CentOS 3.4 via upgrades, but it failed doing a CentOS 4 install (instead of an upgrade). I can't remember what I did to fix it the last time this happened.
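In case it jogs anyone's memory, here's roughly the sequence I think I used last time, with the device names guessed from the map above (the grub-install target of /dev/sda is my assumption, since that's where /boot lives):

    # 1. Re-probe the BIOS drive order (this rewrites /boot/grub/device.map):
    grub-install --recheck /dev/sda

    # 2. If the probe gets the order wrong, edit /boot/grub/device.map by hand
    #    so /dev/sda ends up as (hd2), matching what worked from the grub prompt:
    #      (hd2)  /dev/sda

    # 3. Point the boot stanza in /etc/grub.conf (-> /boot/grub/grub.conf)
    #    at the new device:
    #      root (hd2,0)
    #      kernel /vmlinuz-<version> ro root=/dev/sda2

    # 4. Reinstall grub using the corrected map:
    grub-install /dev/sda

Is that the right order, or am I missing a step?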