
Displaying 20 results from an estimated 6000 matches similar to: "Kernel panic after removing SW RAID1 partitions, setting up ZFS."

2019 Apr 09
2
Kernel panic after removing SW RAID1 partitions, setting up ZFS.
> In article <6566355.ijNRhnPfCt at tesla.schoolpathways.com>,
> Benjamin Smith <lists at benjamindsmith.com> wrote:
>> System is CentOS 6 all up to date, previously had two drives in MD RAID
>> configuration.
>>
>> md0: sda1/sdb1, 20 GB, OS / Partition
>> md1: sda2/sdb2, 1 TB, data mounted as /home
>>
>> Installed kmod ZFS via yum,
2019 Apr 09
0
Kernel panic after removing SW RAID1 partitions, setting up ZFS.
In article <6566355.ijNRhnPfCt at tesla.schoolpathways.com>, Benjamin Smith <lists at benjamindsmith.com> wrote:
> System is CentOS 6 all up to date, previously had two drives in MD RAID
> configuration.
>
> md0: sda1/sdb1, 20 GB, OS / Partition
> md1: sda2/sdb2, 1 TB, data mounted as /home
>
> Installed kmod ZFS via yum, reboot, zpool works fine. Backed up
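A minimal sketch of the migration this thread describes, assuming md1 (sda2/sdb2) is the array being retired and its member partitions are reused for a ZFS mirror; the pool name "home" is an assumption, not from the thread:

# umount /home
# mdadm --stop /dev/md1
# mdadm --zero-superblock /dev/sda2 /dev/sdb2
# zpool create home mirror /dev/sda2 /dev/sdb2

Zeroing the md superblocks matters here: leftover RAID metadata is exactly the kind of thing the kernel's autodetect can trip over at boot.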
2019 Feb 25
7
Problem with mdadm, raid1 and automatically adds any disk to raid
Hi. CentOS 7.6.1810, fresh install - use this as a base to create/upgrade new/old machines. I was trying to set up two disks as a RAID1 array, using these lines:
mdadm --create --verbose /dev/md0 --level=0 --raid-devices=2 /dev/sdb1 /dev/sdc1
mdadm --create --verbose /dev/md1 --level=0 --raid-devices=2 /dev/sdb2 /dev/sdc2
mdadm --create --verbose /dev/md2 --level=0 --raid-devices=2
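Note that the quoted commands pass --level=0 (striping) even though the stated goal is RAID1. For a mirror, the create line would be expected to look like this (same device names as in the post):

# mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1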
2010 Oct 19
3
more software raid questions
Hi all! Back in Aug several of you assisted me in solving a problem where one of my drives had dropped out of (or been kicked out of) the raid1 array. Something vaguely similar appears to have happened just a few minutes ago, upon rebooting after a small update. I received four emails like this, one each for /dev/md0, /dev/md1, /dev/md125 and /dev/md126:
Subject: DegradedArray
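A typical first response to a DegradedArray mail is to see which member dropped and try re-adding it; a sketch with hypothetical device names:

# cat /proc/mdstat
# mdadm --detail /dev/md0
# mdadm /dev/md0 --re-add /dev/sdb1

If --re-add is refused, a plain --add forces a full resync instead of a quick bitmap catch-up.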
2014 Feb 07
3
Software RAID1 Failure Help
I am running software RAID1 on a somewhat critical server. Today I noticed one drive is giving errors. Good thing I had RAID. I planned on upgrading this server in the next month or so. Just wondering if there is an easy way to fix this to avoid rushing the upgrade? Having a single drive is slowing down reads as well, I think. Thanks.
Feb 7 15:28:28 server smartd[2980]: Device: /dev/sdb
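Before swapping hardware, the usual steps are to confirm the drive's SMART state and then mark the member failed so it can be pulled cleanly; a sketch, assuming the mirror is md0 and the failing member is sdb1 (the smartd line above points at /dev/sdb):

# smartctl -a /dev/sdb
# mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1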
2016 Mar 12
4
C7 + UEFI + GPT + RAID1
Hi list, I'm new to UEFI and GPT. For several years I've used MBR partition tables. I've installed my system on software raid1 (mdadm), using md0 (sda1,sdb1) for swap, md1 (sda2,sdb2) for /, and md2 (sda3,sdb3) for /home. From several how-tos concerning raid1 installation, I must put each partition on a different md device. I asked some time ago whether it's more correct to create the
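One commonly suggested approach for mirroring the EFI system partition is 1.0 metadata, which puts the md superblock at the end of the partition so the firmware still sees a plain FAT filesystem; a sketch with assumed device names, not a confirmed answer from the thread:

# mdadm --create /dev/md0 --level=1 --raid-devices=2 --metadata=1.0 /dev/sda1 /dev/sdb1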
2012 Feb 29
7
Software RAID1 with CentOS-6.2
Hello, having a problem with software RAID that is driving me crazy. Here are the details:
1. CentOS 6.2 x86_64 install from the minimal iso (via pxeboot).
2. Reasonably good PC hardware (i.e. not budget, but not server grade either) with a pair of 1TB Western Digital SATA3 drives.
3. Drives are plugged into the SATA3 ports on the mainboard (both drives and cables say they can do 6Gb/s).
4.
2006 Mar 14
2
Help. Failed event on md1
Hi all, this morning I received this notification from mdadm:
This is an automatically generated mail message from mdadm running on server-mail.mydomain.kom
A Fail event had been detected on md device /dev/md1.
Faithfully yours, etc.
In /proc/mdstat I see this:
Personalities : [raid1]
md1 : active raid1 sdb2[2](F) sda2[0]
      77842880 blocks [2/1] [U_]
md0 : active raid1 sdb1[1] sda1[0]
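The usual follow-up to a member flagged (F) is to remove it, replace or repartition the disk, then add it back so the mirror resyncs; a sketch using the device names from the mdstat output above:

# mdadm /dev/md1 --remove /dev/sdb2
(replace the disk, recreate the partition layout)
# mdadm /dev/md1 --add /dev/sdb2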
2012 Jun 07
1
mdadm: failed to write superblock to
Hello, I have a little problem: our server has a broken RAID.
# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sda1[2](F) sdb1[1]
      2096064 blocks [2/1] [_U]
md2 : active raid1 sda3[2](F) sdb3[1]
      1462516672 blocks [2/1] [_U]
md1 : active raid1 sda2[0] sdb2[1]
      524224 blocks [2/2] [UU]
unused devices: <none>
I have removed the partition:
# mdadm --remove
2015 Mar 17
3
unable to recover software raid1 install
Hello all, on a CentOS 5 system installed with software raid I'm getting:
raid1: raid set md127 active with 2 out of 2 mirrors
md: .... autorun DONE
md: Autodetecting RAID arrays
md: autorun .....
md: autorun DONE
trying to resume from /dev/md1
creating root device
mounting root device
mounting root filesystem
EXT3-fs: unable to read superblock
mount :
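From a rescue environment, a typical recovery attempt is to assemble the arrays, inspect the members, and fsck with a backup superblock; a sketch, assuming ext3 on md1. The block 32768 is only a common backup location, and "mke2fs -n /dev/md1" lists the actual ones without writing anything:

# mdadm --assemble --scan
# mdadm --examine /dev/sda2
# e2fsck -b 32768 /dev/md1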
2011 Jan 24
1
adding raid1 to running system
I have followed the procedure on the CentOS page: http://wiki.centos.org/HowTos/CentOS5ConvertToRAID My setup is slightly different. I am using two partitions, / and /home. I have set up swap as a file, /home/swapfile. My hard drives are 500 GB sda and sdb. In modifying the instructions for initializing sdb I have:
used /dev/md0 for / (sdb1)
used /dev/md1 for /home (sdb2)
In section 3.6 The
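The wiki procedure builds degraded mirrors on the second disk first, leaving one slot "missing" so sda can join after the system is migrated; adapted to the partition plan above, it would look roughly like this:

# mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1
# mdadm --create /dev/md1 --level=1 --raid-devices=2 missing /dev/sdb2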
2008 Jun 10
1
raid1 disk format?
If you have a disk with several partitions set up as members of raid1 md devices, can you make a dd image of that disk to replace its matching drive with identical partitions, or are there differences between the mirrored partitions?
--
Les Mikesell
lesmikesell at gmail.com
2014 Jul 25
2
Convert "bare partition" to RAID1 / mdadm?
I have a large disk full of data that I'd like to upgrade to SW RAID 1 with a minimum of downtime. Taking it offline for a day or more to rsync all the files over is a non-starter. Since I've mounted SW RAID1 drives directly with "mount -t ext3 /dev/sdX", it would seem possible to flip the process around: perhaps change the partition type with fdisk or parted, and remount as
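One way this is sometimes done in place (a sketch under stated assumptions, not a confirmed answer from the thread) is with 1.0 metadata, which stores the md superblock in the last ~128 KB of the partition, so the existing filesystem survives as long as it is first shrunk slightly with resize2fs to clear that space; the device name is hypothetical:

# mdadm --create /dev/md0 --level=1 --raid-devices=2 --metadata=1.0 /dev/sdX1 missing
# mount /dev/md0 /home

Back up first regardless; --create over live data leaves no room for error.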
2007 Sep 25
2
mdadm problem.
So I'm trying to RAID-1 this system, which has two identical disks installed in it, and it isn't working for some reason. I started by doing a CentOS-4 install on /dev/sda1 as root, and with /dev/sda2 as my swap. I finish the install, yum update, and then I want to make the mirrors. I copy the partition table from one disk to the other:
# sfdisk -d /dev/sda | sfdisk /dev/sdb
I create
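On a CentOS-4 era system the next steps would typically be to mark the sdb partitions as raid autodetect and build degraded mirrors; a sketch with assumed partition numbers (new sfdisk versions spell the first command --part-type instead of --change-id):

# sfdisk --change-id /dev/sdb 1 fd
# mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1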
2009 Jul 02
4
Upgrading drives in raid 1
I think I have solved my issue and would like some input from anyone who has done this, for pitfalls, errors, or if I am just wrong. CentOS 5.x, software raid, 250 GB drives: two drives in the mirror, one spare, all the same size. Two md devices on the mirror: one for boot (about 100 MB), one that fills the rest of the disk and contains LVM partitions. I was thinking of taking out the spare and adding a 500 GB drive. I
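Growing only pays off once both halves of the mirror sit on the larger drives; at that point the array, and then the LVM physical volume on it, can be expanded. A sketch, assuming the big md device is md1:

# mdadm --grow /dev/md1 --size=max
# pvresize /dev/md1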
2008 Apr 17
2
Question about RAID 5 array rebuild with mdadm
I'm using CentOS 4.5 right now, and I had a RAID 5 array stop because two drives became unavailable. After adjusting the cables on several occasions and shutting down and restarting, I was able to see the drives again. This is when I snatched defeat from the jaws of victory. Please, someone with vast knowledge of how RAID 5 with mdadm works, tell me if I have any chance at all
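When members dropped out at different times, the standard (and risky; image the drives first if at all possible) move is to compare event counts and force assembly; a sketch with hypothetical member names:

# mdadm --examine /dev/sd[bcd]1 | grep -i events
# mdadm --assemble --force /dev/md0 /dev/sd[bcd]1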
2019 Apr 11
1
Kernel panic after removing SW RAID1 partitions, setting up ZFS.
On Wed, 10 Apr 2019 08:38:04 -0700, Benjamin Smith <lists at benjamindsmith.com> wrote:
> I drove to the site, picked up the machine, and last night found that the
> problem wasn't anything to do with mdadm, but rather setting a partition to
> GPT. For some reason, you *cannot* have a partition of type GPT and expect
> Linux to boot. (WT F/H?!?)
If you want to boot a BIOS
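For reference, on GRUB2-based systems the usual fix for BIOS-booting a GPT disk is a tiny unformatted partition flagged bios_grub, giving GRUB somewhere to embed its core image; a sketch, assuming partition 1 was reserved for it (CentOS 6's legacy GRUB is a different story and may refuse GPT outright, which matches the panic in this thread):

# parted /dev/sda set 1 bios_grub on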
2010 Jan 05
4
Software RAID1 Disk I/O
I just installed the CentOS 5.4 64-bit release on a 1.9 GHz CPU with 8 GB of RAM. It has 2 Western Digital 1.5TB SATA2 drives in RAID1.
[root at server ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/md2              1.4T  1.4G  1.3T   1% /
/dev/md0               99M   19M   76M  20% /boot
tmpfs                 4.0G     0  4.0G   0% /dev/shm
[root at server ~]#
It's barebones
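To put numbers on a "RAID1 disk I/O feels slow" report, a quick sequential sample is usually the first step; a sketch (the test file path is arbitrary, and oflag=direct bypasses the page cache so the write test measures the disks rather than RAM):

# hdparm -t /dev/md2
# dd if=/dev/zero of=/test.bin bs=1M count=1024 oflag=direct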
2022 Apr 24
3
Installing mdadm and C7 on new computer
On 04/23/2022 09:19 PM, H wrote:
> On 04/19/2022 09:57 AM, Roberto Ragusa wrote:
>> On 4/18/22 1:27 PM, H wrote:
>>> I have a new computer with 2 x 2TB SSDs where I wanted to install C7, use mdadm for RAID1 configuration, and encrypt the /home partition. On the net I found https://tuxfixer.com/centos-7-installation-with-lvm-raid-1-mirroring/ which I adapted slightly with
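A minimal sketch of the /home encryption step layered on an existing md device, assuming the array is /dev/md1 (an assumption; the thread doesn't name it). C7's cryptsetup still accepts the older luksOpen spelling:

# cryptsetup luksFormat /dev/md1
# cryptsetup luksOpen /dev/md1 home
# mkfs.ext4 /dev/mapper/home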
2008 Dec 12
1
Upgrade to new drives in raid, larger
Hi all, as part of my raid experience I have yet to have to do this, but was wondering how you guys would attempt it. I have 3 drives in a raid 1, with one as a hot spare. They are 250 GB, with all space used by two raid devices: one with boot, the other with LVM volumes filling it up. Now, let's say down the road I want to put in 500 GB drives and replace them... yikes. I was thinking of taking out the
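The usual one-at-a-time swap, sketched with assumed device names; each resync must finish before the second drive is touched, and growing afterwards works as in the 2009 "Upgrading drives in raid 1" thread above:

# mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
(install the 500 GB drive, partition it at least as large)
# mdadm /dev/md0 --add /dev/sdb1
# watch cat /proc/mdstat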