similar to: CentOS 6 x86_64 can't detect raid 10

Displaying 20 results from an estimated 10000 matches similar to: "CentOS 6 x86_64 can't detect raid 10"

2016 May 25
1
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
On 2016-05-25 19:13, Kelly Lesperance wrote: > Hdparm didn't get far: > > [root@r1k1 ~] # hdparm -tT /dev/sda > > /dev/sda: > Timing cached reads: Alarm clock > [root@r1k1 ~] # Hi Kelly, Try running 'iostat -xdmc 1'. Look for a single drive that has substantially greater await than ~10msec. If all the drives except one are taking 6-8msec, but one is very
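The advice above boils down to sampling extended per-device statistics and comparing latencies. A minimal sketch, assuming the sysstat package is installed (device names will vary per host):

iostat -xdmc 1 10          # extended per-device stats in MB/s plus CPU, 1-second samples, 10 rounds
# Compare the "await" column across the drives: if most sit around 6-8 ms
# but one is consistently much higher, that drive is the likely bottleneck.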
2016 May 25
6
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
I've posted this on the forums at https://www.centos.org/forums/viewtopic.php?f=47&t=57926&p=244614#p244614 - posting to the list in the hopes of getting more eyeballs on it. We have a cluster of 23 HP DL380p Gen8 hosts running Kafka. Basic specs:
2x E5-2650
128 GB RAM
12 x 4 TB 7200 RPM SATA drives connected to an HP H220 HBA
Dual port 10 GB NIC
The drives are configured as one large
2016 May 27
2
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
All of our Kafka clusters are fairly write-heavy. The cluster in question is our second-heaviest; we haven't yet upgraded the heaviest, due to the issues we've been experiencing in this one. Here is an iostat example from a host within the same cluster, but without the RAID check running: [root@r2k1 ~] # iostat -xdmc 1 10 Linux 3.10.0-327.13.1.el7.x86_64 (r2k1) 05/27/16 _x86_64_ (32 CPU)
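For this kind of comparison it also helps to confirm whether an md check or resync is actually running on the host, and what the kernel's throttle settings are. A hedged sketch (values are whatever the host currently has, not recommendations):

cat /proc/mdstat                                            # shows any check/resync in progress and its current speed
sysctl dev.raid.speed_limit_min dev.raid.speed_limit_max    # per-device rebuild/check throttle, in KB/s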
2010 Dec 01
12
Fsck, parent transid verify failed
Hi folks! Been using btrfs for quite a while now; it worked great until now. I got a power loss on my machine and now I have the "parent transid verify failed on X wanted X found X" problem, so I can't get it to mount. My btrfs is spread over sda (2 TB), sdc (2 TB), sdd (1 TB). Is this something that an offline fsck could fix? If so, is the fsck util being developed? Is there a way to
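For a "parent transid verify failed" that prevents mounting, a common first attempt on modern btrfs-progs is a read-only mount from a backup tree root plus a non-destructive check. A hedged sketch (device and mount point are placeholders; older kernels use -o recovery instead of usebackuproot):

mount -o ro,usebackuproot /dev/sda /mnt    # try a backup copy of the tree root, read-only
btrfs check --readonly /dev/sda            # offline, non-destructive consistency check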
2008 Apr 17
2
Question about RAID 5 array rebuild with mdadm
I'm using CentOS 4.5 right now, and I had a RAID 5 array stop because two drives became unavailable. After adjusting the cables on several occasions and shutting down and restarting, I was able to see the drives again. This is when I snatched defeat from the jaws of victory. Please, someone with vast knowledge of how RAID 5 with mdadm works, tell me if I have any chance at all
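When an md RAID 5 loses members because of cabling rather than real disk failures, the usual sequence is to compare event counters and then force-assemble. A hedged sketch (device names are placeholders, and --force can lose recent writes if the event counts have diverged):

mdadm --examine /dev/sd[bcde]1             # compare the "Events" counter on each member
mdadm --stop /dev/md0
mdadm --assemble --force /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
cat /proc/mdstat                           # watch the array state and any resync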
2009 Apr 24
3
extend raid volume - new drive
Hi there, I have a system with the following:

# fdisk -l

Disk /dev/sda: 80.0 GB, 80000000000 bytes
255 heads, 63 sectors/track, 9726 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      104391   83  Linux
/dev/sda2              14        9471    75971385   83  Linux
/dev/sda3
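If the goal is to grow an existing md array onto a new drive, the general pattern is to add the new member and then reshape. A sketch under stated assumptions (an existing array /dev/md0 and a new partition /dev/sdb1; names are placeholders):

mdadm --add /dev/md0 /dev/sdb1             # add the new member as a spare
mdadm --grow /dev/md0 --raid-devices=3     # reshape onto it (2 -> 3 members in this example)
resize2fs /dev/md0                         # then grow the filesystem on top (ext3/ext4)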
2013 Oct 07
2
Some questions after devices addition to existing raid 1 btrfs filesystem
Hi, I have added 2x2TB to my existing 2x2TB raid 1 btrfs filesystem and then ran a balance:

# btrfs filesystem show
Total devices 4 FS bytes used 1.74TB
  devid    3 size 1.82TB used 0.00 path /dev/sdd
  devid    4 size 1.82TB used 0.00 path /dev/sde
  devid    2 size 1.82TB used 1.75TB path /dev/sdc
  devid    1 size 1.82TB used 1.75TB path /dev/sdb
# btrfs
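When newly added devices still show "used 0.00", it is worth confirming the balance actually ran to completion and, if needed, rebalancing again. A hedged sketch (mount point is a placeholder):

btrfs balance status /mnt                  # check whether a balance is still running or was cancelled
btrfs balance start /mnt                   # a full rebalance spreads existing chunks over all four devices
btrfs filesystem show                      # devid 3 and 4 should then report non-zero "used"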
2007 May 27
1
raid
Hi, I have 4 x 500 GB PATA hard disks and I would like to have them striped and mirrored. What is the best practice: RAID 0+1 or 1+0? And how do I go about it? Thanks
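With four drives the usual recommendation is 1+0 (a stripe over mirrors), which survives more two-disk failure combinations and rebuilds faster than 0+1; Linux md provides this directly as level 10. A minimal sketch (device names are placeholders, defaults left as-is):

mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[abcd]1
cat /proc/mdstat                           # verify all four members are active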
2014 Aug 29
3
*very* ugly mdadm issue
We have a machine that's a distro mirror - a *lot* of data, not just CentOS. We had the data on /dev/sdc. I added another drive, /dev/sdd, and created that as /dev/md4, with --missing, made an ext4 filesystem on it, and rsync'd everything from /dev/sdc. Note that we did this on *raw*, unpartitioned drives (not my idea). I then unmounted /dev/sdc, and mounted /dev/md4, and it looked fine; I
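With md metadata written to raw, unpartitioned disks, the first diagnostic step is usually to see exactly which block devices carry superblocks and what they claim to belong to. A hedged sketch using the device names from the excerpt:

mdadm --examine /dev/sdc /dev/sdd          # superblocks on the whole-disk devices
mdadm --detail /dev/md4                    # what the assembled array thinks its members are
mdadm --examine --scan                     # what auto-assembly would pick up at boot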
2020 Nov 23
3
Replacing SW RAID-1 with SSD RAID-1
On 11/23/20 10:46 AM, Simon Matter wrote: >> Hi, >> >> I want to replace my hard-drive-based SW RAID-1 with SSDs. >> >> What would be the recommended procedure? Can I just remove one drive, >> replace it with an SSD and rebuild, then repeat with the other drive? > > I suggest you "mdadm --fail" one drive, then "mdadm --remove" it.
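A sketch of the fail/remove/add cycle suggested above (md device and partition names are placeholders; let the first rebuild finish completely before touching the second drive):

mdadm /dev/md0 --fail /dev/sdb1
mdadm /dev/md0 --remove /dev/sdb1
# physically swap in the SSD, partition it at least as large as the old member, then:
mdadm /dev/md0 --add /dev/sdb1
cat /proc/mdstat                           # wait for the resync to complete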
2010 Oct 15
2
puppet-lvm and volume group issues
Trying to set up a volume group with puppet-lvm and this:

volume_group { "my_vg":
  ensure           => present,
  physical_volumes => "/dev/sdb /dev/sdc /dev/sdd",
  require          => [ Physical_volume["/dev/sdb"], Physical_volume["/dev/sdc"], Physical_volume["/dev/sdd"] ]
}

fails with this in the debug
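As a sanity check it can help to run the plain LVM equivalent of what the resource declares, using the same devices as in the excerpt; if this fails too, the problem is below Puppet. (Some versions of the puppet-lvm module also expect physical_volumes as an array rather than one space-separated string, which is worth checking.)

pvcreate /dev/sdb /dev/sdc /dev/sdd
vgcreate my_vg /dev/sdb /dev/sdc /dev/sdd
vgs my_vg                                  # confirm the VG exists and spans all three PVs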
2011 Sep 08
1
HBA port
Hi, I have a host which is connected to a SAN via a single Fibre Channel HBA (QLogic). I have several LUNs assigned to it (sdc, sdd). I added another single-port HBA to this host and can now see two world wide names. The confusion is which world wide name sdc and sdd are (or were) using. scsi_id -g -u -s /block/sdc only gives the WWID, but I need the WWN for sdc and sdd. Thanks Paras.
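One way to tie sdc/sdd back to a specific FC port is through sysfs: the block device's path names the SCSI host it sits behind, and each fc_host exposes its port WWN. A hedged sketch (host numbers depend on the system):

ls -l /sys/block/sdc/device                # the resolved path contains the hostN the LUN is attached to
cat /sys/class/fc_host/host*/port_name     # port WWN of each FC HBA port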
2009 Sep 14
8
10 Node OCFS2 Cluster - Performance
Hi, I am currently running a 10-node OCFS2 cluster (version 1.3.9-0ubuntu1) on Ubuntu Server 8.04 x86_64. Linux n1 2.6.24-24-server #1 SMP Tue Jul 7 19:39:36 UTC 2009 x86_64 GNU/Linux The cluster is connected to a 1 TB iSCSI device presented by an IBM 3300 storage system, running over a 1 Gb network. Mounted on all nodes: /dev/sdc1 on /cfs1 type ocfs2
2007 Jul 23
2
GFS/LVM/RAID1 recovery question
I have a (CentOS 4.5) cluster in which the servers mount a GFS partition which is an LVM2 logical volume created as a mirror of two iSCSI-connected drives (with a third for the log). The LV was created using a command along the lines of: lvcreate -m 1 ... /dev/sdb /dev/sdc /dev/sdd where sd[bc] are the mirrored (iSCSI) PVs in the VG and sdd is the log. I have this working and can write data
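A fuller sketch of that layout, with hypothetical VG/LV names and size, where the last PV listed becomes the mirror log:

vgcreate vg_gfs /dev/sdb /dev/sdc /dev/sdd
lvcreate -m 1 -L 100G -n lv_gfs vg_gfs /dev/sdb /dev/sdc /dev/sdd
lvs -a -o +devices vg_gfs                  # shows the mirror images plus the _mlog LV on /dev/sdd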
2018 Apr 17
5
Getting glusterfs to expand volume size to brick size
pylon:/var/lib/glusterd/vols/dev_apkmirror_data # ack shared-brick-count
dev_apkmirror_data.pylon.mnt-pylon_block3-dev_apkmirror_data.vol
3:    option shared-brick-count 3
dev_apkmirror_data.pylon.mnt-pylon_block2-dev_apkmirror_data.vol
3:    option shared-brick-count 3
dev_apkmirror_data.pylon.mnt-pylon_block1-dev_apkmirror_data.vol
3:    option shared-brick-count 3

Sincerely, Artem --
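A shared-brick-count of 3 means glusterd believes all three bricks live on the same backing filesystem and divides the reported capacity accordingly. A hedged check (brick paths are inferred from the volfile names above):

df -h /mnt/pylon_block1 /mnt/pylon_block2 /mnt/pylon_block3   # each brick should be its own filesystem/mount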
2017 Feb 18
3
USB drives & Orico 9548U3-BK
Everyone, Is there a way to manually assign USB drives to a specified device label? Is there a way to force two USB drives to be labeled as /dev/sdc and /dev/sdd? I decided to build an archive server for the purpose of backing up other Fedora/CentOS desktops at the office. I built a machine and have installed CentOS 7.3 on it with all updates current. I also purchased a USB 3.0 SATA drive
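The sdX names themselves cannot be pinned reliably, but udev can give each USB disk a stable alias keyed on its serial number. A hedged sketch (the serial string and symlink name are hypothetical):

udevadm info --query=property --name=/dev/sdc | grep ID_SERIAL
cat > /etc/udev/rules.d/99-archive-disks.rules <<'EOF'
SUBSYSTEM=="block", ENV{DEVTYPE}=="disk", ENV{ID_SERIAL}=="Example_Disk_SERIAL123", SYMLINK+="archive1"
EOF
udevadm control --reload-rules             # the disk then also appears as /dev/archive1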
2013 Jan 03
33
Option LABEL
Hello, linux-btrfs, please delete the option "-L" (for labelling) in "mkfs.btrfs"; in some configurations it doesn't work as expected. My usual way: mkfs.btrfs -d raid0 -m raid1 /dev/sdb /dev/sdc /dev/sdd ... One call for several devices. When I add the option "-L mylabel", each device gets the same label, and therefore some other programs
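Since the label on a multi-device btrfs is a filesystem-wide property, every member showing the same label is expected; if the -L behaviour at mkfs time is unwanted, one hedged workaround (mount point is a placeholder) is to create the filesystem unlabelled and set the label afterwards:

mkfs.btrfs -d raid0 -m raid1 /dev/sdb /dev/sdc /dev/sdd
mount /dev/sdb /mnt
btrfs filesystem label /mnt mylabel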
2012 Aug 23
1
Order of sata/sas raid cards
Hi. I bought a new Adaptec 6405 card along with new (much larger) SAS drives (arrays). I need to copy the contents of the current SATA (old Adaptec 2405) drives to the new SAS drives. When I put the new controller into the machine, the card is seen and I can see that the kernel loads the new drives and the old drives. The problem is that the new drives are loaded as sda and sdb, which then stops the
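Because enumeration order between controllers is not guaranteed, the usual defence is to reference filesystems by UUID or label rather than sdX names. A hedged sketch (the UUID and mount point are placeholders):

blkid                                       # list the UUIDs/labels of every filesystem
# /etc/fstab entry by UUID instead of /dev/sda1:
# UUID=<uuid-from-blkid>   /data   ext4   defaults   0 2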
2018 Apr 14
2
Getting glusterfs to expand volume size to brick size
Hi, I have a 3-brick replicate volume, but for some reason I can't get it to expand to the size of the bricks. The bricks are 25GB, but even after multiple gluster restarts and remounts, the volume is only about 8GB. I believed I could always extend the bricks (we're using Linode block storage, which allows extending block devices after they're created), and gluster would see the
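If the bricks' block devices were extended after creation, the filesystem on each brick also has to be grown before gluster can report the new size. A hedged sketch (device, brick path, and filesystem type are assumptions):

df -h /srv/brick1                           # what the brick filesystem currently reports
resize2fs /dev/sdc                          # grow an ext4 brick to fill the extended device
# xfs_growfs /srv/brick1                    # or, for an XFS brick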
2007 Sep 07
2
Installation troubles
I have a new machine I'm trying to install CentOS 5.0 on and I'm not getting very far. The system has two dual-core Xeons (5160, 3.0 GHz) with 8 GB RAM. It has two 320 GB disks on the motherboard controller (Supermicro X7DAE+), and eight 750 GB disks on a 3ware 9650SE-8ML PCIe (x4) controller card. The 8 disks are set up as two RAID 5 volumes (4 disks each). There is a SCSI card in the machine