similar to: hotplug Backup-hdd

Displaying 20 results from an estimated 8000 matches similar to: "hotplug Backup-hdd"

2008 Apr 17
2
Question about RAID 5 array rebuild with mdadm
I'm using Centos 4.5 right now, and I had a RAID 5 array stop because two drives became unavailable. After adjusting the cables on several occasions and shutting down and restarting, I was able to see the drives again. This is when I snatched defeat from the jaws of victory. Please, someone with vast knowledge of how RAID 5 with mdadm works, tell me if I have any chance at all
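A minimal sketch of the usual recovery attempt for a raid5 whose members dropped out but still hold data, assuming the array is /dev/md0 and the members are /dev/sdb1, /dev/sdc1 and /dev/sdd1 (all placeholder names); --assemble --force tells md to accept the members with the highest event counts without rewriting any data:
# mdadm --stop /dev/md0
# mdadm --examine /dev/sdb1 /dev/sdc1 /dev/sdd1 | grep Events   # compare event counts first
# mdadm --assemble --force /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1
# cat /proc/mdstat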
2014 Mar 17
1
Slow RAID resync
OK, today's problem. I have a HP N54L Microserver running CentOS 6.5. In this box I have a 3x2TB disk raid 5 array, which I am in the process of extending to a 4x2TB raid 5 array. I've added the new disk --> mdadm --add /dev/md0 /dev/sdb And grown the array --> mdadm --grow /dev/md0 --raid-devices=4 Now the problem: the resync speed is very slow, it refuses to rise above 5 MB/s, in general
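One common thing to check here is the md resync speed limits; a sketch (values in KB/s, chosen only as an illustration):
# cat /proc/sys/dev/raid/speed_limit_min /proc/sys/dev/raid/speed_limit_max
# echo 50000  > /proc/sys/dev/raid/speed_limit_min
# echo 200000 > /proc/sys/dev/raid/speed_limit_max
# watch cat /proc/mdstat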
2017 Aug 18
4
Problem with softwareraid
Hello all, I have already had a discussion on the software raid mailing list and I want to switch to this one :) I am having a really strange problem with my md0 device running CentOS 7. After a restart of my server the md0 was gone. Now, after trying to find the problem, I detected the following: booting any installed kernel gives me NO md0 device (ls /dev/md* doesn't give anything). a 'cat
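A sketch of the usual first steps when an md device disappears after a reboot, assuming the member superblocks are still intact (nothing here is specific to this report):
# mdadm --examine --scan                     # do the on-disk superblocks still describe md0?
# mdadm --assemble --scan                    # try to assemble everything the scan found
# mdadm --detail --scan >> /etc/mdadm.conf   # record the array so it is assembled by name
# dracut -f                                  # CentOS 7: rebuild the initramfs so the array is known at boot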
2007 Nov 29
1
RAID, LVM, extra disks...
Hi, This is my current config: /dev/md0 -> 200 MB -> sda1 + sdd1 -> /boot /dev/md1 -> 36 GB -> sda2 + sdd2 -> forms VolGroup00 with md2 /dev/md2 -> 18 GB -> sdb1 + sde1 -> forms VolGroup00 with md1 sda,sdd -> 36 GB 10k SCSI HDDs sdb,sde -> 18 GB 10k SCSI HDDs I have added two 36 GB 10k SCSI drives; they are detected as sdc and sdf. What should I do if I
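One possible layout for the two new disks, sketched under the assumption that they should become a third mirror feeding the existing volume group (partition and LV names are placeholders):
# mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdf1
# pvcreate /dev/md3
# vgextend VolGroup00 /dev/md3
# lvextend -L +30G /dev/VolGroup00/LogVol00   # then grow the filesystem, e.g. with resize2fs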
2012 Nov 13
1
mdX and mismatch_cnt when building an array
CentOS 6.3, x86_64. I have noticed when building a new software RAID-6 array on CentOS 6.3 that the mismatch_cnt grows monotonically while the array is building: # cat /proc/mdstat Personalities : [raid6] [raid5] [raid4] md11 : active raid6 sdg[5] sdf[4] sde[3] sdd[2] sdc[1] sdb[0] 3904890880 blocks super 1.2 level 6, 512k chunk, algorithm 2 [6/6] [UUUUUU]
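For reference, mismatch_cnt can be re-read against a clean array once the initial build has finished; a sketch using the md sysfs interface (array name as in the excerpt):
# cat /sys/block/md11/md/sync_action          # should read 'idle' when the build is done
# echo check > /sys/block/md11/md/sync_action
# cat /sys/block/md11/md/mismatch_cnt         # reset at the start of each check/repair pass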
2013 Mar 03
4
Strange behavior from software RAID
Somewhere, mdadm is caching information. Here is my /etc/mdadm.conf file: more /etc/mdadm.conf # mdadm.conf written out by anaconda DEVICE partitions MAILADDR root ARRAY /dev/md0 level=raid1 num-devices=4 metadata=0.90 UUID=55ff58b2:0abb5bad:42911890:5950dfce ARRAY /dev/md1 level=raid1 num-devices=2 metadata=0.90 UUID=315eaf5c:776c85bd:5fa8189c:68a99382 ARRAY /dev/md2 level=raid1 num-devices=2
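The UUIDs mdadm reports come from the on-disk superblocks, not from mdadm.conf, so a sketch of how the file is usually regenerated to match what the kernel actually assembles (written to a new file first so it can be compared with the stale one):
# mdadm --examine --scan            # what the superblocks say
# mdadm --detail --scan > /etc/mdadm.conf.new
# diff /etc/mdadm.conf /etc/mdadm.conf.new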
2007 May 27
1
dealing with mke2fs -T option
Hi, I'm not sure whether I'm using the mke2fs -T option the right way. I formatted two different disks, one with $ mke2fs -b 4096 -E stride=16 -m 1 -T news /dev/sdd and the other with $ mke2fs -b 4096 -E stride=16 -m 1 -T largefile4 /dev/sde sdd is supposed to get files between 8k and 16k. sde will handle files with a fixed size of 32 MB. Then I tried this : $ dd if=/dev/zero of=/mount-sdx/file bs=4k
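On current e2fsprogs the -T profiles only select values (block size, inode ratio and so on) from /etc/mke2fs.conf, so one way to see what 'news' and 'largefile4' actually did is to compare the profile with the resulting filesystem; a sketch, assuming the devices from the excerpt:
# grep -A 3 'news\|largefile4' /etc/mke2fs.conf
# tune2fs -l /dev/sdd | grep -i 'inode count\|block count'
# tune2fs -l /dev/sde | grep -i 'inode count\|block count'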
2010 Sep 13
3
Proper procedure when device names have changed
I am running zfs-fuse on an Ubuntu 10.04 box. I have a dual mirrored pool: mirror sdd sde mirror sdf sdg Recently the device names shifted on my box and the devices are now sdc sdd sde and sdf. The pool is of course very unhappy because the mirrors are no longer matched up and one device is "missing". What is the proper procedure to deal with this? -brian
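The usual zfs answer is to export the pool and re-import it using stable identifiers, so it stops depending on sdX ordering at all; a sketch with a placeholder pool name:
# zpool export tank
# zpool import -d /dev/disk/by-id tank
# zpool status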
2016 May 25
6
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
I've posted this on the forums at https://www.centos.org/forums/viewtopic.php?f=47&t=57926&p=244614#p244614 - posting to the list in the hopes of getting more eyeballs on it. We have a cluster of 23 HP DL380p Gen8 hosts running Kafka. Basic specs: 2x E5-2650 128 GB RAM 12 x 4 TB 7200 RPM SATA drives connected to an HP H220 HBA Dual port 10 GB NIC The drives are configured as one large
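On CentOS 7 the periodic check is driven by the raid-check script (configured in /etc/sysconfig/raid-check), and its impact can be bounded per array through the md sysfs limits; a sketch with a placeholder array name and value:
# cat /sys/block/md0/md/sync_speed
# echo 30000 > /sys/block/md0/md/sync_speed_max   # KB/s; 'echo system' restores the global limit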
2016 May 27
2
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
All of our Kafka clusters are fairly write-heavy. The cluster in question is our second-heaviest; we haven't yet upgraded the heaviest, due to the issues we've been experiencing in this one. Here is an iostat example from a host within the same cluster, but without the RAID check running: [root@r2k1 ~] # iostat -xdmc 1 10 Linux 3.10.0-327.13.1.el7.x86_64 (r2k1) 05/27/16 _x86_64_ (32 CPU)
2014 Sep 30
1
Centos 6 Software RAID 10 Setup
I am setting up a CentOS 6.5 box to host some Openvz containers. I have a 120gb SSD I am going to use for boot, / and swap. Should allow for fast boots. Have a 4TB drive I am going to mount as /backup and use to move container backups to, etc. The remaining four 3TB drives I am putting in a software RAID 10 array and mount as /vz and all the containers will go there. It will have by far the
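A minimal sketch of the RAID 10 part, assuming the four 3TB drives show up as sdc-sdf and each carries one full-size partition (device names and filesystem are placeholders; /vz is from the post):
# mdadm --create /dev/md10 --level=10 --raid-devices=4 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1
# mkfs.ext4 /dev/md10
# mount /dev/md10 /vz       # plus an fstab entry and 'mdadm --detail --scan >> /etc/mdadm.conf'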
2014 Dec 04
2
DegradedArray message
Thanks for all the responses. A little more digging revealed: md0 is made up of two 250G disks on which the OS and a very large /var partition reside for a number of virtual machines. md1 is made up of two 2T disks on which /home resides. The challenge is that disk 0 of md0 is the problem, and it has a 524M /boot partition outside of the raid partition. My plan is to back up /home (md1) and at a
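Because /boot on disk 0 sits outside the array, replacing that disk involves more than the md steps; a sketch, with partition numbers as placeholders:
# sfdisk -d /dev/sda > sda.layout                       # save the layout before pulling the disk
# mdadm /dev/md0 --fail /dev/sda2 --remove /dev/sda2    # drop the failing member
  (swap the disk, then:)
# sfdisk /dev/sda < sda.layout
# mdadm /dev/md0 --add /dev/sda2
  (restore /boot from backup and reinstall the bootloader on the new disk)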
2017 Feb 18
3
usb drives & Orico ORICO 9548U3-BK
Everyone, Is there a way to manually assign usb drives to a specified device label? Is there a way to force two usb drives to be labeled as /dev/sdc and /dev/sdd? I decided to build an archive server for the purpose of backing up other fedora/centos desktops at the office. I built a machine and have installed CentOS 7.3 on it with all updates current. I also purchased a 3.0 usb sata drive
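Kernel names like sdc/sdd are assigned in probe order and cannot really be pinned, but a udev rule keyed on each drive's serial number gives stable symlinks to point backup scripts at; a sketch (serials and symlink names are placeholders):
# udevadm info --query=property --name=/dev/sdc | grep ID_SERIAL
# /etc/udev/rules.d/99-backup-disks.rules:
KERNEL=="sd?", SUBSYSTEM=="block", ENV{ID_SERIAL_SHORT}=="ABC123", SYMLINK+="backup0"
KERNEL=="sd?", SUBSYSTEM=="block", ENV{ID_SERIAL_SHORT}=="XYZ789", SYMLINK+="backup1"
# udevadm control --reload; udevadm trigger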
2013 Jan 12
2
selinux + kvm virtualization + smartd problem
Hello, I'm using an HP homeserver where the host system runs CentOS 6.3 with KVM virtualization and SELinux enabled; the guests run the same OS (but without SELinux, which does not matter here). The host system is installed on mirrors based on the sda and sdb physical disks. The sd{c..f} disks are attached to a KVM guest (whole disks, not partitions; needed to take advantage of zfs (zfsonlinux) features). The problem is that the disks
2010 May 28
2
permanently add md device
Hi All Currently I'm setting up a 5.4 server and trying to create a 3rd raid device. When I run: $mdadm --create /dev/md2 -v --raid-devices=15 --chunk=32 --level=raid6 /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl /dev/sdm /dev/sdn /dev/sdo /dev/sdp /dev/sdq the device file "md2" is created and the raid is being configured. But somehow
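Without a matching entry in /etc/mdadm.conf a manually created array will typically come back after a reboot under a different name (often /dev/md127), so the usual follow-up is to record it; a sketch:
# mdadm --detail --scan | grep md2 >> /etc/mdadm.conf   # append only the ARRAY line for the new device
# cat /proc/mdstat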
2013 Mar 28
1
Glusterfs gives up with endpoint not connected
Dear all, Right out of the blue glusterfs is not working properly any more; every now and then it stops working, telling me "Endpoint not connected" and writing core files: [root@tuepdc /]# file core.15288 core.15288: ELF 64-bit LSB core file AMD x86-64, version 1 (SYSV), SVR4-style, from 'glusterfs' My Version: [root@tuepdc /]# glusterfs --version glusterfs 3.2.0 built on Apr 22 2011
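Two things that usually narrow this down are the gluster view of the cluster and a backtrace from one of the cores; a sketch using the core from the excerpt (volume names are site-specific):
# gluster peer status
# gluster volume info
# gdb $(which glusterfs) core.15288      # then 'bt' at the gdb prompt for the crash backtrace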
2007 Oct 07
1
Replacing failed software RAID drive
CentOS release 4.5 Hi All: First of all I will admit to being spoiled by my MegaRAID SCSI RAID controllers. When a drive fails on one of them I just replace the drive and carry on without having to do anything else. I now find myself in the situation where I have a failed drive on a non-MegaRAID controller, specifically an Adaptec 29160 SCSI controller. The system is an Acer G700 with 8
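With md (as opposed to a MegaRAID), the replacement is manual; a sketch, assuming the failed member is /dev/sdb1 in /dev/md0 (both placeholders):
# mdadm --detail /dev/md0                # identify the member marked 'faulty'
# mdadm /dev/md0 --remove /dev/sdb1
  (replace the drive, partition it to match the survivor, then:)
# mdadm /dev/md0 --add /dev/sdb1
# cat /proc/mdstat                       # watch the rebuild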
2013 Oct 07
2
Some questions after devices addition to existing raid 1 btrfs filesystem
Hi, I have added 2x2Tb to my existing 2x2Tb raid 1 btrfs filesystem and then ran a balance: # btrfs filesystem show Total devices 4 FS bytes used 1.74TB devid 3 size 1.82TB used 0.00 path /dev/sdd devid 4 size 1.82TB used 0.00 path /dev/sde devid 2 size 1.82TB used 1.75TB path /dev/sdc devid 1 size 1.82TB used 1.75TB path /dev/sdb # btrfs
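For reference, a sketch of running a balance and checking how the data is spread afterwards (the mount point is a placeholder):
# btrfs balance start /mnt/pool
# btrfs balance status /mnt/pool
# btrfs filesystem show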
2017 Sep 20
4
xfs not getting it right?
Hi, xfs is supposed to detect the layout of a md-RAID devices when creating the file system, but it doesn't seem to do that: # cat /proc/mdstat Personalities : [raid1] md10 : active raid1 sde[1] sdd[0] 499976512 blocks super 1.2 [2/2] [UU] bitmap: 0/4 pages [0KB], 65536KB chunk # mkfs.xfs /dev/md10p2 meta-data=/dev/md10p2 isize=512 agcount=4, agsize=30199892 blks
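mkfs.xfs only has stripe geometry to pick up on striped md levels; md10 here is raid1, so there is no sunit/swidth to detect. For a striped array the geometry can also be passed explicitly; the values below are purely an illustration for a 4+2 raid6 with a 512k chunk, not for this device:
# mkfs.xfs -d su=512k,sw=4 /dev/mdXpY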
2009 Nov 20
3
steadily increasing/high loadavg without i/o wait or cpu utilization
Hi all, I just installed the CentOS 5.4 xen-kernel on an Intel Core i5 machine as dom0. After some hours of syncing a raid10 array (8 sata disks) I noticed a steadily increasing loadavg. Without significant i/o wait or cpu utilization, I think the loadavg on this system should be much lower. If this loadavg is normal I would be grateful if someone could explain why. The screenshots below show that there is
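Load average counts tasks in uninterruptible (D) sleep as well as runnable ones, so a climbing loadavg with idle CPUs and little iowait often points at processes stuck in D state; a quick way to look for them:
# ps -eo state,pid,cmd | awk '$1=="D"'
# top                                     # compare the load line with %wa and %id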