
Displaying 20 results from an estimated 6000 matches similar to: "RAID 10 on Install?"

2016 Jun 01
0
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
I did some additional testing - I stopped Kafka on the host and kicked off a disk check, and it ran at the expected speed overnight. I started Kafka this morning, and the RAID check's speed immediately dropped down to ~2000K/sec. I then enabled the write-back cache on the drives (hdparm -W1 /dev/sd*). The RAID check is now running between 100000K/sec and 200000K/sec, and has been for several
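For anyone hitting the same thing, the cache state can be inspected before toggling it, and the check's speed ceiling is tunable; a minimal sketch, with device names as placeholders:

    hdparm -W /dev/sdb     # report whether the drive's write-back cache is on
    hdparm -W1 /dev/sdb    # enable it, as was done above
    # md throttles checks between these two limits (KB/s per device):
    cat /proc/sys/dev/raid/speed_limit_min
    cat /proc/sys/dev/raid/speed_limit_max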
2016 May 25
6
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
I've posted this on the forums at https://www.centos.org/forums/viewtopic.php?f=47&t=57926&p=244614#p244614 - posting to the list in the hopes of getting more eyeballs on it. We have a cluster of 23 HP DL380p Gen8 hosts running Kafka. Basic specs:
2x E5-2650
128 GB RAM
12 x 4 TB 7200 RPM SATA drives connected to an HP H220 HBA
Dual port 10 GB NIC
The drives are configured as one large
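For reference, the check speed discussed in this thread can be read live, and a check can be started by hand; a sketch, assuming the array is md0:

    cat /proc/mdstat                              # shows e.g. "check = 12.3% ... speed=2000K/sec"
    echo check > /sys/block/md0/md/sync_action    # kick off a check manually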
2016 May 27
2
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
All of our Kafka clusters are fairly write-heavy. The cluster in question is our second-heaviest; we haven't yet upgraded the heaviest, due to the issues we've been experiencing in this one. Here is an iostat example from a host within the same cluster, but without the RAID check running:
[root@r2k1 ~]# iostat -xdmc 1 10
Linux 3.10.0-327.13.1.el7.x86_64 (r2k1) 05/27/16 _x86_64_ (32 CPU)
2012 May 28
1
Disk geometry problem.
Hi all. I have a CentOS server: CentOS release 5.7 (Final), 2.6.18-274.3.1.el5 x86_64. I have two SSD disks attached:
smartctl -i /dev/sdc
smartctl version 5.38 [x86_64-redhat-linux-gnu] Copyright (C) 2002-8 Bruce Allen
Home page is http://smartmontools.sourceforge.net/
=== START OF INFORMATION SECTION ===
Device Model: INTEL SSDSA2CW120G3
Serial Number: CVPR13010957120LGN
Firmware
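To compare what the kernel believes the geometry to be against the partition table, something like the following is a reasonable first look (sdc as above):

    fdisk -l /dev/sdc     # prints the geometry and partition table the kernel sees
    sfdisk -g /dev/sdc    # prints cylinders/heads/sectors only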
2007 Jun 14
0
(no subject)
I installed a fresh copy of Debian 4.0 and Xen 3.1.0 SMP PAE from the binaries. I had a few issues getting fully virtualized guests up and running, but finally managed to figure everything out. Now I'm having a problem with paravirtualized guests and hoping that someone can help. My domU config:
#
# Configuration file for the Xen instance dev.umucaoki.org, created
# by xen-tools
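For comparison, a minimal xen-tools-style PV guest config of that era usually looks roughly like this (every path and value below is a placeholder, not the poster's actual file):

    kernel  = '/boot/vmlinuz-2.6.18-4-xen-686'
    ramdisk = '/boot/initrd.img-2.6.18-4-xen-686'
    memory  = 256
    name    = 'dev.umucaoki.org'
    vif     = [ '' ]
    disk    = [ 'phy:vg0/dev-disk,sda1,w' ]
    root    = '/dev/sda1 ro'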
2013 Feb 10
1
Strange error pop up on my box
I'm getting a strange error pop up on my box:
#############################
No more mirrors are available
Required data could not be found on any of the configured software sources. There were no more download mirrors that could be tried.
More details
failure: repodata/filelists.sqlite.bz2 from elrepo: [Errno 256] No more mirrors to try.
#############################
How to figure out
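When a repo's metadata is stale or half-mirrored, the usual first steps are to flush the cache or sidestep the repo; a sketch:

    yum clean metadata                 # drop the cached repodata
    yum makecache                      # fetch it fresh
    yum --disablerepo=elrepo update    # or skip the failing repo for now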
2014 Sep 30
1
Centos 6 Software RAID 10 Setup
I am setting up a CentOS 6.5 box to host some OpenVZ containers. I have a 120 GB SSD I am going to use for boot, / and swap, which should allow for fast boots. I have a 4 TB drive I am going to mount as /backup and use to move container backups to, etc. The remaining four 3 TB drives I am putting in a software RAID 10 array and mounting as /vz, and all the containers will go there. It will have by far the
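A sketch of the array setup being described, with partition names as placeholders:

    mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[b-e]1
    mkfs.ext4 /dev/md0
    mount /dev/md0 /vz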
2008 Apr 17
2
Question about RAID 5 array rebuild with mdadm
I'm using CentOS 4.5 right now, and I had a RAID 5 array stop because two drives became unavailable. After adjusting the cables on several occasions and shutting down and restarting, I was able to see the drives again. This is when I snatched defeat from the jaws of victory. Please, someone with vast knowledge of how RAID 5 with mdadm works, tell me if I have any chance at all
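When members drop out due to cabling rather than real media failure, the usual (risky, last-resort) sequence is to compare event counts and force reassembly; a sketch with placeholder device names:

    mdadm --examine /dev/sd[b-f]1 | grep -E '/dev/sd|Event'   # compare per-member event counts
    mdadm --assemble --force /dev/md0 /dev/sd[b-f]1           # force the array back together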
2011 Mar 09
0
Re: "open_ctree failed", unable to mount the fs
Hi, I've got a similar problem - after a HW failure, two of three disks became unavailable and now I'm unable to btrfsck. The btrfs lays on /dev/sdb1, /dev/sdc1, /dev/sdd1 - sdc+sdd became unavailable. Here is the kernel log:
============================================
Mar 9 23:56:28 ftp2 kernel: [121492.593338] ata7.00: exception Emask 0x60 SAct 0x1 SErr 0x800 action 0x6 frozen
Mar 9
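With multi-device btrfs, the usual first attempt after losing members is a read-only degraded mount; only viable if the surviving devices still hold a complete copy of the data. A sketch:

    mount -o degraded,ro /dev/sdb1 /mnt    # degraded skips missing devices; ro avoids further damage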
2014 Aug 29
3
*very* ugly mdadm issue
We have a machine that's a distro mirror - a *lot* of data, not just CentOS. We had the data on /dev/sdc. I added another drive, /dev/sdd, and created that as /dev/md4, with --missing, made an ext4 filesystem on it, and rsync'd everything from /dev/sdc. Note that we did this on *raw*, unpartitioned drives (not my idea). I then umounted /dev/sdc, and mounted /dev/md4, and it looked fine; I
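For reference, the degraded-mirror creation being described looks roughly like this (a sketch; note the md metadata lands directly on the bare disk, which is part of what makes whole-disk arrays confusing later):

    mdadm --create /dev/md4 --level=1 --raid-devices=2 missing /dev/sdd
    mkfs.ext4 /dev/md4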
2013 Mar 01
1
Reorg of a RAID/LVM system
I have a system with 4 disk drives, two 512 GB and two 1 TB. It looks like this:
CentOS release 5.9 (Final)
Disk /dev/sda: 500.1 GB, 500107862016 bytes
Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
Disk /dev/sdc: 500.1 GB, 500107862016 bytes
Disk /dev/sdd: 1000.2 GB, 1000204886016 bytes
=================================================================
Disk /dev/sda: 500.1 GB, 500107862016 bytes
2011 Dec 29
2
CentOS 6 x86_64 can't detect raid 10
Dear All, I just got a new server with the following specifications:
motherboard: Intel S5500BC
CPU: Xeon Quad Core 2.6GHz
RAM: 8GB
HDD: 4 x 2TB SATA, configured as RAID 10 using the server's embedded RAID
The problem is that the CentOS installer can't detect the RAID virtual disk. I can't find any log errors beyond the following error messages during the installation process:
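If the "embedded RAID" here is firmware (fake) RAID, the installer relies on dmraid to see it; checking from a rescue shell can tell you whether the metadata is even visible. A sketch:

    dmraid -r    # list disks carrying firmware-RAID metadata
    dmraid -s    # show the RAID sets dmraid can assemble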
2010 Oct 15
2
puppet-lvm and volume group issues
Trying to set up a volume group with puppet-lvm and this:
volume_group { "my_vg":
  ensure           => present,
  physical_volumes => "/dev/sdb /dev/sdc /dev/sdd",
  require          => [ Physical_volume["/dev/sdb"],
                        Physical_volume["/dev/sdc"],
                        Physical_volume["/dev/sdd"] ]
}
Fails with this in the debug
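One thing worth trying (an unverified sketch, not a confirmed fix): puppet-lvm may expect physical_volumes as an array rather than a single space-separated string:

    volume_group { "my_vg":
      ensure           => present,
      physical_volumes => ["/dev/sdb", "/dev/sdc", "/dev/sdd"],
      require          => [ Physical_volume["/dev/sdb"],
                            Physical_volume["/dev/sdc"],
                            Physical_volume["/dev/sdd"] ],
    }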
2011 Feb 23
2
LVM problem after adding new (md) PV
Hello, I have a weird problem after adding a new PV to an LVM volume group. It seems the error comes out only during boot time. Please read the story. I have a couple of 1U machines. They all have two, four or more Fujitsu-Siemens SAS 2.5" disks, which are bound into RAID 1 pairs with Linux mdadm. The first pair of disks always has two arrays (md0, md1). Small md0 is used for booting and the rest - md1
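A common cause of boot-time-only LVM errors after adding an md-backed PV is that the new array isn't recorded anywhere early boot can assemble it; a sketch for el5-era systems:

    mdadm --detail --scan >> /etc/mdadm.conf                # record the new array
    mkinitrd -f /boot/initrd-$(uname -r).img $(uname -r)    # rebuild initrd so it assembles before LVM scans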
2009 Jun 16
1
Xen vs. iSCSI
[previously sent to rhelv5 list, apologies to those on both] I've got a problem I can reproduce easily enough, but really I fail to understand what's going wrong. I've got a 5.3 Dom0, which is running three guests. One is Fedora 10, that runs with local flat files, and works fine. One is Nexenta 2 (opensolaris-based), and that runs off of physical partitions, and seems to work
2011 Sep 08
1
HBA port
Hi, I have a host which is connected to SAN via a single Fibre Channel HBA (QLogic). I have several LUNs assigned to it (sdc, sdd). I added another single-port HBA to this host. I can now see two world wide names. Now the confusion is which world wide name sdc and sdd is/was using. scsi_id -g -u -s /block/sdc only gives the WWID, but I need the WWN for sdc and sdd. Thanks Paras.
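One way to get there without extra tools: map each disk to its SCSI host, then read that host's WWPN from sysfs; a sketch (the host number is a placeholder):

    lsscsi                                    # shows sdc/sdd with their [host:channel:id:lun]
    cat /sys/class/fc_host/host3/port_name    # WWPN of the HBA port behind that host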
2013 Oct 07
2
Some questions after devices addition to existing raid 1 btrfs filesystem
Hi, I have added 2 x 2 TB to my existing 2 x 2 TB RAID 1 btrfs filesystem and then ran a balance:
# btrfs filesystem show
Total devices 4 FS bytes used 1.74TB
devid 3 size 1.82TB used 0.00 path /dev/sdd
devid 4 size 1.82TB used 0.00 path /dev/sde
devid 2 size 1.82TB used 1.75TB path /dev/sdc
devid 1 size 1.82TB used 1.75TB path /dev/sdb
# btrfs
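After adding devices, a plain balance spreads existing chunks, but a convert filter is what forces everything onto the RAID 1 profile; a sketch, assuming the filesystem is mounted at /mnt:

    btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt
    btrfs filesystem df /mnt    # check whether any chunks remain in the old profile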
2012 Aug 23
1
Order of sata/sas raid cards
Hi. I bought a new Adaptec 6405 card including new (much larger) SAS drives (arrays). I need to copy the content of the current SATA drives (old Adaptec 2405) to the new SAS drives. When I put the new controller into the machine, the card is seen and I can see that the kernel loads the new drives and the old drives. The problem is that the new drives are loaded as sda and sdb, which then stops the
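Device names moving around like this is exactly what UUID-based mounts avoid; a sketch (the UUID below is a placeholder taken from blkid):

    blkid /dev/sdc1    # print the filesystem's UUID
    # /etc/fstab line keyed on UUID instead of /dev/sdX:
    UUID=0a1b2c3d-...  /data  ext3  defaults  0 2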
2023 Mar 01
1
EL9/udev generates wrong device nodes/symlinks with HPE Smart Array controller
Hi, I see some strange and dangerous things happening on an HPE server with an HPE Smart Array controller, where EL9 ends up with wrong device nodes/symlinks to the attached disks/RAID volumes (I didn't touch anything here, but at 08:09 some symlinks were changed):
/dev/disk/by-id/:
lrwxrwxrwx 1 root root 9 Mar 1 07:57 scsi-0HP_LOGICAL_VOLUME_00000000 -> ../../sdc
lrwxrwxrwx 1 root root 10
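To see which properties udev derived each symlink from, udevadm can dump them per device; a sketch:

    udevadm info /dev/sdc                     # lists ID_SERIAL, ID_WWN and the symlinks built from them
    udevadm info /dev/sdd | grep ID_SERIAL    # compare the neighbouring disk's identity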
2007 Nov 29
1
RAID, LVM, extra disks...
Hi, This is my current config:
/dev/md0 -> 200 MB -> sda1 + sdd1 -> /boot
/dev/md1 -> 36 GB -> sda2 + sdd2 -> forms VolGroup00 with md2
/dev/md2 -> 18 GB -> sdb1 + sde1 -> forms VolGroup00 with md1
sda, sdd -> 36 GB 10k SCSI HDDs
sdb, sde -> 18 GB 10k SCSI HDDs
I have added two 36 GB 10k SCSI drives; they are detected as sdc and sdf. What should I do if I
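The usual continuation of this question, sketched under the assumption that the new pair should become md3 inside VolGroup00:

    mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdf1
    pvcreate /dev/md3
    vgextend VolGroup00 /dev/md3
    lvextend -L +30G /dev/VolGroup00/somelv   # then grow whichever LV needs the space (name is a placeholder)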