similar to: ext3 on Linux software RAID1

Displaying 20 results from an estimated 5000 matches similar to: "ext3 on Linux software RAID1"

2002 Oct 03
4
Auditing filesystems for Linux?
Does anyone know of any Linux-based filesystem that does file-level auditing and logs based on username? Does ext2/3 do such auditing (stock or with patches)? I would like a filesystem that can be told to audit and log file deletions and log the username that deleted the file (similar to auditing on NTFS). I know, I should be using file permissions to prevent this type of deletion from
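For reference, on a modern Linux system the audit framework (auditd) can log deletions together with the acting user; a minimal sketch, where /srv/data is a hypothetical directory to watch:

  # watch a directory for writes/attribute changes, tagged for searching
  auditctl -w /srv/data -p wa -k file-delete
  # or match the unlink syscalls directly (x86_64)
  auditctl -a always,exit -F arch=b64 -S unlink,unlinkat -k file-delete
  # report matching events with uid/auid decoded
  ausearch -k file-delete -i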
2016 Mar 12
4
C7 + UEFI + GPT + RAID1
Hi list, I'm new with UEFI and GPT. For several years I've used MBR partition tables. I've installed my system on software RAID1 (mdadm) using md0 (sda1, sdb1) for swap, md1 (sda2, sdb2) for /, and md2 (sda3, sdb3) for /home. From several how-tos concerning RAID1 installation, I must put each partition on a different md device. I asked a while ago whether it's more correct to create the
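For what it's worth, the per-partition layout the poster describes would be created roughly like this (a sketch assuming sda/sdb carry matching partitions; on UEFI an EFI System Partition per disk is also needed and is omitted here):

  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1   # swap
  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2   # /
  mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3   # /home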
2007 Mar 29
2
EXT3 fs error on RAID1 device
Hi all. I have a Dell SC440 running CentOS 4.4. It has two 500GB disks in a RAID1 array using Linux software RAID (md1 is / and md0 is /boot). Recently the root file system was remounted read-only for some reason. The logs don't show anything unusual; presumably the file system was read-only before anything was logged. Running dmesg showed this error repeated many times: EXT3-fs error (device
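When an ext3 filesystem on an md device throws errors and gets remounted read-only, the usual first steps from a rescue environment are to check the array and then the filesystem; a sketch, assuming / is /dev/md1 as described:

  cat /proc/mdstat          # confirm both mirrors are still active
  mdadm --detail /dev/md1   # look for failed or missing members
  e2fsck -f /dev/md1        # check the filesystem while unmounted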
2023 Jan 09
2
RAID1 setup
Hi > Continuing this thread, and focusing on RAID1. > > I got an HPE Proliant gen10+ that has hardware RAID support (can turn it off if I want). What exact model of RAID controller is this? If it's a S100i SR Gen10 then it's not hardware RAID at all. > > I am planning two groupings of RAID1 (it has 4 bays). > > There is also an internal USB boot port. >
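One way to identify the controller from a running system (a hypothetical check; ssacli is HPE's Smart Array CLI and may not be installed):

  lspci | grep -iE 'raid|storage'
  ssacli ctrl all show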
2019 Feb 25
7
Problem with mdadm, raid1 and automatically adds any disk to raid
Hi. CentOS 7.6.1810, fresh install - use this as a base to create/upgrade new/old machines. I was trying to set up two disks as a RAID1 array, using these lines:

  mdadm --create --verbose /dev/md0 --level=0 --raid-devices=2 /dev/sdb1 /dev/sdc1
  mdadm --create --verbose /dev/md1 --level=0 --raid-devices=2 /dev/sdb2 /dev/sdc2
  mdadm --create --verbose /dev/md2 --level=0 --raid-devices=2
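Note that the quoted commands create RAID0 (--level=0) even though the stated goal is RAID1; the mirrored equivalent would be (the md2 member devices are an assumption, since the excerpt is cut off):

  mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
  mdadm --create --verbose /dev/md1 --level=1 --raid-devices=2 /dev/sdb2 /dev/sdc2
  mdadm --create --verbose /dev/md2 --level=1 --raid-devices=2 /dev/sdb3 /dev/sdc3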
2008 Apr 01
1
RAID1 migration - /dev/md1 is not there
I am trying to convert an existing IDE one-disk system to RAID1 using the general strategy found here: http://lists.centos.org/pipermail/centos/2005-March/003813.html But I am stuck on one thing - when I went to create the second md device with mdadm:

  # mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/hdb2 missing
  mdadm: error opening /dev/md1: No such file or directory

And indeed,
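On systems of that era the fix was usually to create the missing device node by hand (md devices are block major 9, and md1 is minor 1), or to let mdadm do it:

  mknod /dev/md1 b 9 1
  # or have mdadm create the node itself
  mdadm --create /dev/md1 --auto=yes --level=1 --raid-devices=2 /dev/hdb2 missing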
2015 Mar 17
3
unable to recover software raid1 install
Hello All, on a CentOS 5 system installed with software RAID I'm getting:

  raid1: raid set md127 active with 2 out of 2 mirrors
  md: ... autorun DONE
  md: Autodetecting RAID arrays
  md: autorun ...
  md: autorun DONE
  Trying to resume from /dev/md1
  Creating root device
  Mounting root device
  Mounting root filesystem
  ext3-fs : unable to read superblock
  mount :
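When the superblock is unreadable, a common next step from a rescue shell is to try a backup superblock; a sketch, assuming the root filesystem lives on /dev/md1 (backup locations depend on the block size, so list them first):

  mdadm --assemble --scan                  # bring the arrays up
  dumpe2fs /dev/md1 | grep -i superblock   # list backup superblock locations
  e2fsck -b 32768 /dev/md1                 # fsck against a backup superblock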
2019 Apr 09
2
Kernel panic after removing SW RAID1 partitions, setting up ZFS.
System is CentOS 6 all up to date, previously had two drives in MD RAID configuration. md0: sda1/sdb1, 20 GB, OS / partition. md1: sda2/sdb2, 1 TB, data mounted as /home. Installed kmod ZFS via yum, rebooted, zpool works fine. Backed up the /home data 2x, then stopped the sd[ab]2 array with:

  mdadm --stop /dev/md1; mdadm --zero-superblock /dev/sd[ab]1;

Removed /home in /etc/fstab. Used
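Note the mismatch in the quoted commands: /dev/md1 is built from sd[ab]2, but the superblocks zeroed are those of sd[ab]1, the members of the OS array md0, which by itself would explain a panic on the next boot. The presumably intended sequence was:

  mdadm --stop /dev/md1
  mdadm --zero-superblock /dev/sd[ab]2   # members of md1, not md0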
2008 Jan 18
1
Recover lost data from LVM RAID1
Guys, The other day while working on my old workstation it got frozen and after reboot I lost almost all data unexpectedly. I have a RAID1 configuration with LVM, 2 IDE HDDs:

  md0 .. stores /boot (100MB): /dev/hda2 /dev/hdd1
  md1 .. stores / (26GB): /dev/hda3 /dev/hdd2

The only info still left was what I restored after the fresh install. It seems that the
2008 Jan 18
1
HowTo Recover Lost Data from LVM RAID1 ?
Guys, The other day while working on my old workstation it got frozen and after reboot I lost almost all data unexpectedly. I have a RAID1 configuration with LVM, 2 IDE HDDs:

  md0 .. stores /boot (100MB): /dev/hda2 /dev/hdd1
  md1 .. stores / (26GB): /dev/hda3 /dev/hdd2

The only info still left was what I restored after the fresh
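Recovery in a layout like this usually starts by reassembling the arrays and scanning for LVM metadata before writing anything; a generic sketch:

  mdadm --assemble --scan   # bring md0/md1 back up
  pvscan                    # look for the physical volume on md1
  vgscan && vgchange -ay    # activate any volume group found
  lvs                       # list logical volumes before mounting read-only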
2010 Dec 04
2
Fiddling with software RAID1 : continue working with one of two disks failing?
Hi, I'm currently experimenting with software RAID1 on a spare PC with two 40 GB hard disks. Normally, on a desktop PC with only one hard disk, I have a very simple partitioning scheme like this:

  /dev/hda1   80 MB   /boot   ext2
  /dev/hda2    1 GB   swap
  /dev/hda3   39 GB   /       ext3

Here's what I'd like to do. Partition a second hard disk (say, /dev/hdb) with three
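The usual way to carry out that migration is to build degraded RAID1 arrays on the second disk first, copy the data over, then absorb the original disk; a sketch, assuming /dev/hdb is partitioned identically to hda:

  mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/hdb1   # /boot
  mdadm --create /dev/md1 --level=1 --raid-devices=2 missing /dev/hdb2   # swap
  mdadm --create /dev/md2 --level=1 --raid-devices=2 missing /dev/hdb3   # /
  # after copying data and switching boot to the arrays:
  mdadm /dev/md0 --add /dev/hda1   # likewise for the other arrays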
2014 Feb 07
3
Software RAID1 Failure Help
I am running software RAID1 on a somewhat critical server. Today I noticed one drive is giving errors. Good thing I had RAID. I planned on upgrading this server in the next month or so. Just wondering if there was an easy way to fix this to avoid rushing the upgrade? Having a single drive is slowing down reads as well, I think. Thanks.

  Feb 7 15:28:28 server smartd[2980]: Device: /dev/sdb
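The usual replacement sequence for a failing mirror member, assuming /dev/sdb is the bad disk and sdb1 belongs to /dev/md0 (the partition-to-array mapping here is an assumption):

  mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
  # power down, swap the disk, recreate the partition table, then:
  mdadm /dev/md0 --add /dev/sdb1
  cat /proc/mdstat   # watch the rebuild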
2007 Oct 17
1
strangest thing with Raid1 and samba
Hi everyone, I am having the strangest thing on my Debian server with a RAID1 configuration. I have it set up so that there are 2 partitions (both RAID1) on one disk. On the first partition is Debian/Linux and the second is /home (this is about 270G). I am unable to map a share to the second partition. The error is just the same as if you create a share and edit a wrong "path ="
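A couple of checks that usually narrow this kind of thing down, assuming the share points at the second array's mount point (/home and the share name are assumptions):

  testparm -s | grep -A2 '\[home\]'   # the path Samba actually sees
  mountpoint /home                    # confirm the RAID1 partition is mounted there
  ls -ld /home                        # permissions on the directory itself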
2005 Nov 28
1
centos4.2:raid1:grub
Hi! Whew, I have googled around a lot w/this but can't quite seem to come up w/the right answer. This system works beautifully w/nothing wrong w/it. But my goal is to be able to test the RAID system by just unplugging hda to mimic a faulty drive and have it just carry on and boot from hdc.

  md0 = hda1/hdc1   /boot   (primary boot partitions on both drives)
  md1 = hda2/hdc2   /

Is it possible to
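The usual answer is to install GRUB on the second disk while mapping it as the first BIOS disk, so it can boot standalone; a sketch for GRUB legacy, assuming /boot is the first partition on hdc:

  grub
  grub> device (hd0) /dev/hdc
  grub> root (hd0,0)
  grub> setup (hd0)
  grub> quit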
2012 Feb 29
7
Software RAID1 with CentOS-6.2
Hello, Having a problem with software RAID that is driving me crazy. Here are the details:

  1. CentOS 6.2 x86_64 install from the minimal iso (via pxeboot).
  2. Reasonably good PC hardware (i.e. not budget, but not server grade either) with a pair of 1TB Western Digital SATA3 drives.
  3. Drives are plugged into the SATA3 ports on the mainboard (both drives and cables say they can do 6Gb/s).
  4.
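Whatever the eventual symptom, the data points usually requested on-list first are along these lines (a generic sketch):

  cat /proc/mdstat          # array state and resync progress
  mdadm --detail /dev/md0   # per-member status, repeat per array
  smartctl -a /dev/sda      # drive health, repeat for sdb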
2002 Oct 29
1
Caveats to mounting ext3 as ext2?
I have a partition that is formatted as ext3; however, due to apparent performance issues (not sure if they are related to journaling - data=ordered, BTW) I have mounted this file system as ext2. The file system was clean when it was shut down, and has been cleanly mounted as ext2, and I currently have no problems with the partition. My question is: is there any harm in mounting this partition as
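For comparison, mounting the filesystem explicitly either way, and confirming the journal feature is still present (the device name is an assumption):

  mount -t ext2 /dev/sda2 /mnt/data      # ignore the journal
  mount -t ext3 /dev/sda2 /mnt/data      # use the journal
  tune2fs -l /dev/sda2 | grep features   # has_journal should still be listed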
2002 Nov 25
3
Ordered vs. journal real-world performance
Maybe I should've started a new thread with this question (it was in the /proc/sys/vm/bdflush thread), so I am now :) According to tests performed for this article: http://www-106.ibm.com/developerworks/linux/library/l-fs8/ "ext3's data=journal mode is incredibly well-suited to situations where data needs to be read from and written to disk at the same time." This is the
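Trying data=journal mode for a test like the article's is just a mount option change; a sketch (device and mount point are assumptions, and the option does not apply cleanly to an already-mounted root filesystem):

  mount -o data=journal /dev/sda3 /mnt/test
  # or persistently, via /etc/fstab:
  # /dev/sda3  /mnt/test  ext3  data=journal  1 2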
2007 Apr 25
2
Raid 1 newbie question
Hi I have a RAID1 CentOS 4.4 setup and now have this /proc/mdstat output:

  [root at server admin]# cat /proc/mdstat
  Personalities : [raid1]
  md2 : active raid1 hdc2[1] hda2[0]
        1052160 blocks [2/2] [UU]
  md1 : active raid1 hda3[0]
        77023552 blocks [2/1] [U_]
  md0 : active raid1 hdc1[1] hda1[0]
        104320 blocks [2/2] [UU]

What happens with md1? My dmesg output is: [root at
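[2/1] [U_] means md1 is running on one of its two mirrors. If the partner partition is healthy it can simply be re-added (hdc3 as the missing member is an assumption based on the other arrays):

  mdadm /dev/md1 --add /dev/hdc3
  cat /proc/mdstat   # watch the resync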
2010 Oct 19
3
more software raid questions
hi all! back in Aug several of you assisted me in solving a problem where one of my drives had dropped out of (or been kicked out of) the raid1 array. something vaguely similar appears to have happened just a few mins ago, upon rebooting after a small update. I received four emails like this, one for /dev/md0, one for /dev/md1, one for /dev/md125 and one for /dev/md126:

  Subject: DegradedArray
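Names like md125/md126 typically mean arrays were assembled without a matching entry in mdadm.conf; a common way to check and pin the names (a sketch for a dracut-based release such as CentOS 6):

  mdadm --detail --scan                      # how the arrays identify themselves
  mdadm --detail --scan >> /etc/mdadm.conf   # record them so the names stay stable
  dracut -f                                  # rebuild the initramfs to match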
2014 Jul 25
2
Convert "bare partition" to RAID1 / mdadm?
I have a large disk full of data that I'd like to upgrade to SW RAID 1 with a minimum of downtime. Taking it offline for a day or more to rsync all the files over is a non-starter. Since I've mounted SW RAID1 drives directly with "mount -t ext3 /dev/sdX" it would seem possible to flip the process around, perhaps change the partition type with fdisk or parted, and remount as
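That flip-around does work when the md metadata sits at the end of the partition; a hedged sketch using metadata 1.0, assuming /dev/sdb1 holds the existing ext3 filesystem, a verified backup exists, and the last few sectors of the partition are not in use by the filesystem (shrinking it slightly first is the safe route):

  mdadm --create /dev/md0 --level=1 --raid-devices=2 --metadata=1.0 /dev/sdb1 missing
  mount -t ext3 /dev/md0 /mnt/data   # the filesystem should appear intact
  # later, add the second disk and let it sync:
  mdadm /dev/md0 --add /dev/sdc1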