Displaying 19 results from an estimated 7000 matches similar to: "raid"
2007 Nov 29
1
RAID, LVM, extra disks...
Hi,
	This is my current config:
/dev/md0 -> 200 MB -> sda1 + sdd1 -> /boot
/dev/md1 ->  36 GB -> sda2 + sdd2 -> forms VolGroup00 with md2
/dev/md2 ->  18 GB -> sdb1 + sde1 -> forms VolGroup00 with md1
sda,sdd -> 36 GB 10k SCSI HDDs
sdb,sde -> 18 GB 10k SCSI HDDs
I have added two 36 GB 10k SCSI drives; they are detected as sdc and
sdf.
What should I do if I
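A minimal sketch of one way to fold the new pair into this layout, assuming sdc1 and sdf1 are partitioned to match and should join VolGroup00 as another RAID1 pair (the md3 name is an assumption; device names are from the post):
  mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdf1
  pvcreate /dev/md3               # make the new array an LVM physical volume
  vgextend VolGroup00 /dev/md3    # add it to the existing volume group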
2009 Jul 02
4
Upgrading drives in raid 1
I think I have solved my issue and would like input from anyone who has
done this: pitfalls, errors, or whether I am just wrong.
CentOS 5.x, software RAID, 250 GB drives.
Two drives in the mirror, one spare, all the same size.
Two md devices on the mirror: one for /boot (about 100 MB), one that fills the
rest of the disk and contains LVM partitions.
I was thinking of taking out the spare and adding a 500 GB drive.
I
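For reference, the usual replace-one-member-at-a-time sequence looks roughly like this, assuming md0 is the mirror and sdb is the member being swapped (device names are illustrative, not from the post):
  mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1   # retire the old member
  # ...physically swap in the 500 GB drive and partition it to match...
  mdadm /dev/md0 --add /dev/sdb1                       # resync onto the new drive
  cat /proc/mdstat                                     # wait for [UU] before touching the next disk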
2019 Feb 25
7
Problem with mdadm, raid1 and automatically adds any disk to raid
Hi.
CentOS 7.6.1810, fresh install; I use this as a base to create/upgrade new and old machines.
I was trying to set up two disks as a RAID1 array, using these lines:
  mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
  mdadm --create --verbose /dev/md1 --level=1 --raid-devices=2 /dev/sdb2 /dev/sdc2
  mdadm --create --verbose /dev/md2 --level=1 --raid-devices=2
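If the underlying complaint is mdadm/udev auto-assembling arrays from any disk it sees, one common mitigation is the AUTO keyword in /etc/mdadm.conf; a sketch (the ARRAY lines for the wanted arrays stay in place):
  # /etc/mdadm.conf: assemble only the arrays listed explicitly below
  AUTO -all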
2007 May 29
0
Re: CentOS Digest, Vol 28, Issue 28
Hi, I'm new to CentOS. Where can I get it?
Thank you.
centos-request at centos.org wrote:
  Send CentOS mailing list submissions to
centos at centos.org
To subscribe or unsubscribe via the World Wide Web, visit
http://lists.centos.org/mailman/listinfo/centos
or, via email, send a message with subject or body 'help' to
centos-request at centos.org
You can reach the person
2008 Apr 17
2
Question about RAID 5 array rebuild with mdadm
I'm using CentOS 4.5 right now, and I had a RAID 5 array stop because
two drives became unavailable.  After adjusting the cables on several  
occasions and shutting down and restarting, I was able to see the  
drives again.  This is when I snatched defeat from the jaws of  
victory.  Please, someone with vast knowledge of how RAID 5 with mdadm  
works, tell me if I have any chance at all
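The snippet ends before the details, but the standard last-resort move for a RAID5 that lost two members to cabling problems (not real media failure) is a forced assemble; a sketch, with placeholder device names:
  mdadm --stop /dev/md0
  mdadm --assemble --force /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
  # --force overrides mismatched event counts; data written after the
  # second drive dropped may be inconsistent, so fsck before trusting it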
2008 Dec 12
1
Upgrade to new drives in raid, larger
Hi all,
In my RAID experience I have yet to have to do this, but I was
wondering how you would attempt it.
I have 3 drives in a RAID1, with one as a hot spare.
They are 250 GB with all space used by two md devices, one with /boot, the
other with LVMs filling them up.
Now, let's say down the road I want to put in 500 GB drives and replace
them....yikes.
I was thinking of taking out the
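Once both larger members are in and resynced, the grow stage would look something like this (a sketch for the LVM-backed array; the md1 and VolGroup00/LogVol00 names are assumptions):
  mdadm --grow /dev/md1 --size=max             # let md use the new space
  pvresize /dev/md1                            # tell LVM the PV grew
  lvextend -L +200G /dev/VolGroup00/LogVol00   # grow a logical volume...
  resize2fs /dev/VolGroup00/LogVol00           # ...and its ext3 filesystem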
2016 Mar 12
4
C7 + UEFI + GPT + RAID1
Hi list,
I'm new with UEFI and GPT.
For several years I've used an MBR partition table. I've installed my
system on software raid1 (mdadm) using md0 (sda1, sdb1) for swap,
md1 (sda2, sdb2) for /, and md2 (sda3, sdb3) for /home. According to several
how-tos on raid1 installation, I must put each partition on a separate
md device. I asked some time ago whether it is more correct to create the
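For what it's worth, a common C7 layout keeps one EFI System Partition per disk outside of md (or mirrors it with metadata 1.0 so the firmware can still read it) and puts everything else on raid1. A sketch with sgdisk; sizes and layout are illustrative:
  sgdisk -n1:0:+200M -t1:EF00 /dev/sda   # EFI System Partition, not in the array
  sgdisk -n2:0:+1G   -t2:FD00 /dev/sda   # -> md0 for /boot
  sgdisk -n3:0:0     -t3:FD00 /dev/sda   # -> md1 for / and /home
  sgdisk -R /dev/sdb /dev/sda && sgdisk -G /dev/sdb   # copy table to sdb, randomize GUIDs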
2009 May 08
3
Software RAID resync
I have configured two 500 GB SATA HDDs as software RAID1 with three arrays,
md0, md1, and md2, with md2 at 400+ GB.
It has now been almost 36 hours and the status is:
cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 hdb1[1] hda1[0]
      104320 blocks [2/2] [UU]
        resync=DELAYED
md1 : active raid1 hdb2[1] hda2[0]
      4096448 blocks [2/2] [UU]
        resync=DELAYED
md2 : active raid1
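resync=DELAYED just means the kernel serializes resyncs that share the same physical disks, so md0 and md1 wait for md2 to finish. If md2's resync itself is crawling, the usual knobs are the md speed limits (the value below is an example):
  cat /proc/sys/dev/raid/speed_limit_min /proc/sys/dev/raid/speed_limit_max
  echo 50000 > /proc/sys/dev/raid/speed_limit_min   # KB/s floor; raise to speed up resync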
2013 Mar 03
4
Strange behavior from software RAID
Somewhere, mdadm is caching information.  Here is my /etc/mdadm.conf file:
more /etc/mdadm.conf
# mdadm.conf written out by anaconda
DEVICE partitions
MAILADDR root
ARRAY /dev/md0 level=raid1 num-devices=4 metadata=0.90 UUID=55ff58b2:0abb5bad:42911890:5950dfce
ARRAY /dev/md1 level=raid1 num-devices=2 metadata=0.90 UUID=315eaf5c:776c85bd:5fa8189c:68a99382
ARRAY /dev/md2 level=raid1 num-devices=2
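The "cached" copy is often the mdadm.conf embedded in the initramfs rather than the one on disk. A sketch of checking and refreshing both, assuming a dracut-based (CentOS 6 era) system:
  mdadm --examine --scan   # what the superblocks actually say; compare to /etc/mdadm.conf
  dracut -f                # rebuild the initramfs so it picks up the current config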
2012 Nov 13
1
mdX and mismatch_cnt when building an array
CentOS 6.3, x86_64.
I have noticed when building a new software RAID-6 array on CentOS 6.3 
that the mismatch_cnt grows monotonically while the array is building:
# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md11 : active raid6 sdg[5] sdf[4] sde[3] sdd[2] sdc[1] sdb[0]
       3904890880 blocks super 1.2 level 6, 512k chunk, algorithm 2 [6/6] [UUUUUU]
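A growing mismatch_cnt during the initial build is generally expected: the parity simply hasn't been written everywhere yet, and the counter is only meaningful after a scrub of a fully synced array. A sketch of getting a real reading once the build completes:
  echo check > /sys/block/md11/md/sync_action   # scrub the finished array
  cat /sys/block/md11/md/mismatch_cnt           # should be at or near 0 afterwards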
      
2006 Mar 02
3
Advice on setting up Raid and LVM
Hi all,
I'm setting up CentOS 4.2 on 2x 80 GB SATA drives.
The partition scheme is like this:
/boot = 300MB 
/ = 9.2GB
/home = 70GB
swap = 500MB
The RAID is RAID 1.
md0 = 300MB = /boot
md1 = 9.2GB = LVM
md2 = 70GB = LVM
md3 = 500MB = LVM
Now, the confusing part is:
1. When creating VolGroup00, should I include all PVs (md1, md2, md3) and then
create the LVs?
2. When setting up RAID 1, should I
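On question 1, one workable answer: yes, all three md devices can be PVs in a single VolGroup00, with the LVs carved out afterwards. A minimal sketch (LV names are assumptions; sizes mirror the plan above):
  pvcreate /dev/md1 /dev/md2 /dev/md3
  vgcreate VolGroup00 /dev/md1 /dev/md2 /dev/md3
  lvcreate -L 9G  -n root VolGroup00
  lvcreate -L 70G -n home VolGroup00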
2011 Feb 23
2
LVM problem after adding new (md) PV
Hello,
I have a weird problem after adding a new PV to an LVM volume group.
It seems the error comes out only during boot time. Please read the story.
I have a couple of 1U machines. They all have two, four, or more Fujitsu-Siemens
SAS 2.5" disks, which are bound in RAID1 pairs with Linux mdadm.
The first pair of disks always has two arrays (md0, md1). The small md0 is used
for booting and the rest - md1
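A boot-time-only symptom like this usually points at the initrd not knowing about the new array when LVM scans for PVs. A sketch of the usual checks once the system is up:
  mdadm --detail --scan     # confirm the new array; compare with /etc/mdadm.conf
  pvs -o pv_name,vg_name    # confirm LVM sees the PV once the array is assembled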
2007 Mar 02
3
3.0.4 ACPI support and Opteron 2210 ?
Hello,
I originally posted this to xen-users, but someone suggested I post it here.
I am having ACPI problems on a Penguin Computing Altus 1600 system.
It has 2x dual-core Opteron 2210 processors.
The system boots with a standard Debian or Ubuntu SMP kernel with ACPI
enabled. However, the Xen live CD, the binary Xen install, and my own custom
build of Xen 3.0.4 from source will not boot.
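A common first test here is booting the hypervisor with ACPI disabled to see whether the hang is ACPI-table related. A GRUB-legacy sketch; the paths, kernel version, and root device are assumptions:
  title Xen 3.0.4 (ACPI off, diagnostic)
      root (hd0,0)
      kernel /boot/xen-3.0.4.gz acpi=off
      module /boot/vmlinuz-2.6.18-xen ro root=/dev/sda1
      module /boot/initrd-2.6.18-xen.img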
2014 Jan 24
4
Booting Software RAID
I installed CentOS 6.x 64-bit with the minimal ISO and used two disks
in a RAID1 array.
Filesystem      Size  Used Avail Use% Mounted on
/dev/md2         97G  918M   91G   1% /
tmpfs            16G     0   16G   0% /dev/shm
/dev/md1        485M   54M  407M  12% /boot
/dev/md3        3.4T  198M  3.2T   1% /vz
Personalities : [raid1]
md1 : active raid1 sda1[0] sdb1[1]
      511936 blocks super 1.0
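One detail worth checking with a layout like this: the bootloader should be installed on both members so the box still boots if sda dies. Since /boot here is metadata 1.0 (superblock at the end), grub can treat each member as a plain disk; on CentOS 6 (grub 0.97) that's roughly:
  grub-install /dev/sda
  grub-install /dev/sdb   # second disk, so either member can boot alone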
2013 Oct 04
1
btrfs raid0
How can I verify the read speed of a btrfs raid0 pair in Arch Linux?
I assume raid0 means striped activity in parallel, at least
similar to raid0 in mdadm.
How can I measure the btrfs read speed, since it is copy-on-write, which
is not the norm in mdadm raid0?
Perhaps I cannot use the same approach in btrfs to determine the
performance.
Secondly, I see a methodology for raid10 using
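One distro-agnostic way to measure sequential read speed through the filesystem (copy-on-write included) is a cold-cache read of a large file; the mount point and file name below are placeholders:
  sync && echo 3 > /proc/sys/vm/drop_caches    # drop the page cache first
  dd if=/mnt/btrfs/bigfile of=/dev/null bs=1M  # dd reports MB/s on completion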
2007 Apr 25
2
Raid 1 newbie question
Hi
I have a RAID1 CentOS 4.4 setup and now have this /proc/mdstat output:
[root at server admin]# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 hdc2[1] hda2[0]
      1052160 blocks [2/2] [UU]
md1 : active raid1 hda3[0]
      77023552 blocks [2/1] [U_]
md0 : active raid1 hdc1[1] hda1[0]
      104320 blocks [2/2] [UU]
What is happening with md1?
My dmesg output is:
[root at
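[2/1] [U_] means md1 is running degraded on hda3 alone. Assuming hdc3 was the missing partner and the disk itself is healthy, re-adding it is typically:
  mdadm /dev/md1 --add /dev/hdc3   # kicks off a resync; watch /proc/mdstat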
2011 Apr 14
3
Debian Squeeze hangs with kernel 2.6.32-5-xen-686
Hi all!
After upgrading to Squeeze, I have a Xen VM host that hangs after a
while. This did not happen when I was using Xen with Debian
Lenny (in this case, as with Squeeze, the Xen components are from Debian
repositories).
In each case I connected a keyboard and monitor to the computer, and the
screen remained black, not responding to any key.
This problem seems to also affect domUs,
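With a black screen and a dead keyboard, the usual next step is capturing hypervisor and kernel output on a serial console. A sketch of the Xen command-line additions; the port and speed are assumptions:
  # appended to the xen hypervisor line in the GRUB config:
  com1=115200,8n1 console=com1,vga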
2010 Jul 21
4
Fsck on mdraid array
Something seems to be wrong with my file systems, and I want to fsck 
everything. But I cannot.
The setup consists of two hard disks carrying three raid1 (ext3) file
systems (/boot, /, swap). The OS is up-to-date CentOS 5.
So I boot from the CentOS 5.3 DVD in rescue mode, do not mount the file
systems, and try to run:
	fsck -y /dev/md0
	fsck -y /dev/md1
	fsck -y /dev/md2
For each try I get an error message:
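The snippet cuts off before the error text, but in rescue mode with nothing mounted the arrays are often simply not assembled yet, so /dev/md0 does not exist. A sketch:
  mdadm --assemble --scan    # assemble arrays from their superblocks
  cat /proc/mdstat           # confirm md0/md1/md2 are active
  fsck -y /dev/md0           # then fsck as planned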
2010 Nov 04
1
orphan inodes deleted issue
Dear All,
My servers are running CentOS 5.5 x86_64 with kernel 2.6.18.194.17.4.el,
a Gigabyte motherboard, and 2 hard disks (Seagate 500 GB).
My CentOS boxes are configured with RAID1; yesterday and today I had the same
problem on 2 servers with the same configuration. See the following error
messages for details:
EXT3-fs: INFO: recovery required on readonly filesystem.
EXT3-fs: write access will be enabled during
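The message itself is the ext3 journal replay doing its job at mount time. If the orphan-inode cleanups keep recurring on both machines, a reasonable next step on CentOS 5 is forcing a full fsck at reboot (and checking the drives with smartctl); a sketch:
  touch /forcefsck    # rc.sysinit then runs a forced fsck on the next boot
  shutdown -r now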