similar to: Mirror After?

Displaying 20 results from an estimated 9000 matches similar to: "Mirror After?"

2015 Aug 05
8
CentOS 5 grub boot problem
I am trying to upgrade my system from 500GB drives to 1TB. I was able to partition and sync the raid devices, but I cannot get the new drive to boot. This is an old system with only IDE ports. There is an added Highpoint raid card which is used only for the two extra IDE ports. I have upgraded it with a 1TB SATA drive and an IDE-SATA adapter. I did not have any problems with the system
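A common cause of this symptom is that grub was never installed to the new drive's MBR; syncing the md arrays copies the filesystems, not the boot sector. A minimal sketch for CentOS 5's grub (legacy), with hypothetical device names (the (hd1)/dev/sda mapping is an assumption; check with `grub> find /grub/stage1` first):

```shell
# Write grub legacy to the MBR of the new drive.
grub --batch <<'EOF'
device (hd1) /dev/sda
root (hd1,0)
setup (hd1)
quit
EOF
```

The `device` line forces grub to treat the new drive as hd1 regardless of BIOS ordering, which matters when that drive will later boot as the first disk.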
2011 Mar 21
4
mdraid on top of mdraid
Is it possible, or will there be any problems, with using mdraid on top of mdraid? Specifically, say mdraid 1/5 on top of mdraid multipath: e.g. 4 storage machines exporting iSCSI targets via two different physical network switches, then use multipath to create md block devices, then use mdraid on these md block devices. The purpose being the storage array surviving a physical network switch
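The layering described above can be sketched with dm-multipath rather than md's multipath personality (the latter is long deprecated; multipathd is the usual tool today). Device names, portal addresses, and the RAID level are assumptions for illustration:

```shell
# Discover and log in to each target over both switch paths (hypothetical portals).
iscsiadm -m discovery -t sendtargets -p 10.0.1.10   # path via switch A
iscsiadm -m discovery -t sendtargets -p 10.0.2.10   # path via switch B
iscsiadm -m node --login

# multipathd coalesces the duplicate paths into one device per target.
multipath -ll

# RAID5 across the four multipath devices, one per storage machine.
mdadm --create /dev/md0 --level=5 --raid-devices=4 \
    /dev/mapper/mpatha /dev/mapper/mpathb /dev/mapper/mpathc /dev/mapper/mpathd
```

With this stack a switch failure removes one path per target but keeps every md member alive, while a whole storage machine failing only degrades the RAID5.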
2015 Aug 05
1
CentOS 5 grub boot problem
On 8/5/2015 11:27 AM, m.roth at 5-cent.us wrote: > Bowie Bailey wrote: >> I am trying to upgrade my system from 500GB drives to 1TB. I was able >> to partition and sync the raid devices, but I cannot get the new drive >> to boot. >> >> This is an old system with only IDE ports. There is an added Highpoint >> raid card which is used only for the two extra
2016 Mar 12
4
C7 + UEFI + GPT + RAID1
Hi list, I'm new to UEFI and GPT. For several years I've used an MBR partition table. I've installed my system on software raid1 (mdadm) using md0 (sda1, sdb1) for swap, md1 (sda2, sdb2) for /, md2 (sda3, sdb3) for /home. According to several how-tos concerning raid1 installation, I must put each partition on a different md device. I asked some time ago if it's more correct to create the
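The three-array layout described can be sketched as follows; the device names are the poster's, but the mdadm invocations are an assumption, not taken from the thread:

```shell
# One raid1 device per mount point, as the how-tos require.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1   # swap
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2   # /
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3   # /home
```

Note that under UEFI the EFI System Partition is read by firmware that knows nothing about md; common workarounds are a separate ESP per disk, or raid1 with `--metadata=1.0` so the superblock sits at the end and the ESP still looks like a plain FAT partition.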
2015 Aug 05
1
CentOS 5 grub boot problem
m.roth at 5-cent.us wrote: > Bowie Bailey wrote: >> I am trying to upgrade my system from 500GB drives to 1TB. I was able >> to partition and sync the raid devices, but I cannot get the new drive >> to boot. >> >> This is an old system with only IDE ports. There is an added Highpoint >> raid card which is used only for the two extra IDE ports. I have
2011 Mar 29
4
VMware vSphere Hypervisor (free ESXi) and mdraid
Can I combine VMWare ESXi (free version) virtualization and CentOS mdraid level 1? Any pointers how to do it? I never used VMWare before. - Jussi -- Jussi Hirvi * Green Spot Topeliuksenkatu 15 C * 00250 Helsinki * Finland Tel. +358 9 493 981 * Mobile +358 40 771 2098 (only sms) jussi.hirvi at greenspot.fi * http://www.greenspot.fi
2017 Feb 17
3
RAID questions
On 2017-02-15, John R Pierce <pierce at hogranch.com> wrote: > On 2/14/2017 4:48 PM, tdukes at palmettoshopper.com wrote: > >> 3 - Can additional drive(s) be added later with a change in RAID level >> without current data loss? > > Only some systems support that sort of restriping, and it's a dangerous > activity (if the power fails or system crashes midway through
2019 Oct 03
2
CentOS 8 Broken Installation
On Thu, 3 Oct 2019 07:38:05 -0500, Valeri Galtsev <galtsev at kicp.uchicago.edu> wrote: > > On Oct 3, 2019, at 6:24 AM, Günther J. Niederwimmer > > <gjn at gjn.priv.at> wrote: > > > > > > 07:00.0 Serial Attached SCSI controller [0107]: Intel Corporation > > C602 chipset 4-Port SATA Storage Control Unit [8086:1d6b] (rev 06) > > > > what
2017 Feb 15
3
RAID questions
Hello, Just a couple questions regarding RAID. Here's the situation. I bought a 4TB drive before I upgraded from 6.8 to 7.3. I'm not too far into this that I can't start over. I wanted disk space to back up 3 other machines. I way overestimated what I needed for full, incremental and image backups with UrBackup. I've used less than 1TB so far. I would like to add an additional drive
2011 Jul 18
2
Kernel 2.6.32.41 raid bug on squeeze
Hi, I'm having issues on Intel and AMD: when booting it doesn't assemble the raid devices. It seems to work ok in Lenny, but when you upgrade to Squeeze it won't boot. Anyone else with the same issue? Ian _______________________________________________ Xen-users mailing list Xen-users@lists.xensource.com http://lists.xensource.com/xen-users
2010 Mar 08
11
ZFS for my home RAID? Or Linux Software RAID?
Hello All, I built a new storage server to back up my data, keep archives of client files, etc. I recently had a near loss of important items. So I built a 16-SATA-bay enclosure (16 hot-swappable + 3 internal), 2 x 3Ware 8-port RAID cards, 8GB RAM, dual AMD Opteron. I have a 1TB boot drive and I put in 8 x 1.5TB Seagate 7200 drives. In the future I want to fill the other 8 SATA bays
2009 Oct 22
4
Upgrading CentOS 5.3 from local mirror
Good afternoon folks. Earlier today, I started upgrading a few of our servers to 5.4 based on input from the list. So far, all has gone well. I have about 6 servers (not very many, but still) that need to be upgraded. Instead of taking precious bits from the mirrors for each upgrade, I was curious if I could rsync the 5.4 directory from a local mirror, and configure yum to use that repo, if
2010 Jul 21
4
Fsck on mdraid array
Something seems to be wrong with my file systems, and I want to fsck everything. But I cannot. The setup consists of 2 HDs, carrying 3 raid1 (ext3) file systems (boot, /, swap). OS is up-to-date CentOS 5. So I boot from the CentOS 5.3 DVD in rescue mode, do not mount the file systems, and try to run: fsck -y /dev/md0; fsck -y /dev/md1; fsck -y /dev/md2. For each try I get an error message:
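In rescue mode with nothing mounted, the arrays may simply not have been assembled yet, which would make fsck fail on /dev/md0..md2. A sketch of the assemble-then-check sequence (device names are from the post; that assembly is the cause of the errors is an assumption):

```shell
mdadm --assemble --scan    # bring up the arrays from their superblocks
cat /proc/mdstat           # confirm md0/md1/md2 are active

fsck -y /dev/md0
fsck -y /dev/md1
# A swap array needs no fsck; recreate it with mkswap if it is suspect.
```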
2011 Apr 13
1
Expanding RAID 10 array, WAS: 40TB File System Recommendations
On 4/13/11, Rudi Ahlers <Rudi at softdux.com> wrote: >> to expand the array :) > > I haven't had problems doing it this way yet. I finally figured out my mistake creating the raid devices and got a working RAID 0 on two RAID 1 arrays. But I wasn't able to add another RAID 1 component to the array with the error mdadm: add new device failed for /dev/md/mdr1_3 as 2:
2010 Jun 11
1
Linux software RAID 1.2 superblocks
Hi, Just to bring one more Debian concern to the Syslinux table: the default metadata format in upstream mdadm changed to 1.2, which means MD superblocks at the beginning of the partition, after a 4 KB hole. Is our favorite bootloader prepared to handle such situations? -- Thanks, Feri.
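The difference can be seen directly with mdadm; the commands below are a sketch with hypothetical device names. Metadata 1.2 puts the superblock 4 KiB from the start of the member device, while 1.0 puts it at the end, which is the layout bootloaders that read the partition start have traditionally relied on:

```shell
# Newer mdadm default: superblock near the start (bootloader sees the hole).
mdadm --create /dev/md0 --metadata=1.2 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

# Superblock at the end: the partition still looks like a plain filesystem.
mdadm --create /dev/md0 --metadata=1.0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

mdadm --examine /dev/sda1    # reports the metadata version in use
```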
2012 Jan 29
2
Advise on recovering 2TB RAID1
Hi all, I had one drive fail on a software 2TB RAID1. I have removed the failed partition from mdraid and am now ready to replace the failed drive. I want to ask for opinions on whether there is a better way to do this than: 1. Put in the new HDD. 2. Use parted to recreate the same partition scheme. 3. Use mdadm to rebuild the RAID. Especially #2 is rather tricky. I have to create an exact partition
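For the partitioning step, the table can be copied mechanically instead of being rebuilt by hand in parted; a sketch, with sda as the surviving disk and sdb as the replacement (hypothetical names):

```shell
# MBR disks: dump the surviving disk's partition table and replay it on the new one.
sfdisk -d /dev/sda | sfdisk /dev/sdb

# GPT disks instead: sgdisk /dev/sda -R /dev/sdb && sgdisk -G /dev/sdb

# Then re-add the new partition and watch the rebuild.
mdadm --manage /dev/md0 --add /dev/sdb1
cat /proc/mdstat
```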
2013 Aug 24
10
Help interpreting RAID1 space allocation
I've created a test volume and copied a bulk of data to it; however, the results of the space allocation are confusing at best. I've tried to capture the history of events leading up to the current state. This is all on a Debian Wheezy system using a 3.10.5 kernel package (linux-image-3.10-2-amd64) and btrfs tools v0.20-rc1 (Debian package 0.19+20130315-5). The host uses an
2017 Jun 30
2
mdraid doesn't allow creation: device or resource busy
Dear fellow CentOS users, I have never experienced this problem with hard disk management before and cannot explain it to myself on any rational basis. The setup: I have a workstation for testing, running the latest CentOS 7.3 AMD64. I am evaluating oVirt and a storage-ha as part of my bachelor's thesis. I have already been running a RAID1 (mdraid, lvm2) for the system and some oVirt 4.1 testing.
2015 Feb 19
2
CentOS 7: software RAID 5 array with 4 disks and no spares?
On 2/18/2015 8:20 PM, Chris Murphy wrote: > On Wed, Feb 18, 2015 at 3:37 PM, Niki Kovacs <info at microlinux.fr> wrote: >> On 18/02/2015 23:12, Chris Murphy wrote: >>> "installer is organized around mount points" is correct, and what gets >>> mounted on mount points? Volumes, not partitions. >>
2009 Jan 15
2
3Ware 9650SE tuning advice
Hello fellow sysadmins! I've assembled a whitebox system with a SuperMicro motherboard, case, 8GB of memory and a single quad core Xeon processor. I have two 9650SE-8LPML cards (8 ports each) in each server with 12 1TB SATA drives total. Three drives per "lane" on each card. CentOS 5.2 x86_64. I'm looking for advice on tuning this thing for performance. Especially for the