search for: raid4

Displaying 20 results from an estimated 45 matches for "raid4".

2009 Sep 24
5
OT: What's wrong with RAID5
Hi all, sorry for the OT. I've got an IBM N3300-A10 NAS. It runs Data ONTAP 7.2.5.1. The problem is that, according to the docs, it only supports either RAID-DP or RAID4. What I want to achieve is maximum storage capacity, so I changed it from RAID-DP to RAID4, but with RAID4 the maximum number of disks in a RAID group decreases from 14 to 7. In the end, whether I use RAID-DP or RAID4, the capacity is the same. Now, why is RAID5 not supported? I believe using RAID5, I can get more...
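The capacity arithmetic behind that observation can be sketched quickly; the 14-disk and 7-disk group limits come from the post above, while the disk count below is purely hypothetical:

# hypothetical shelf of 14 equal disks
# RAID-DP: one 14-disk group with 2 parity disks
echo $(( 14 - 2 ))        # 12 data disks
# RAID4: 7-disk group limit, so two groups with 1 parity disk each
echo $(( 2 * (7 - 1) ))   # 12 data disks -- same usable capacity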
2012 Nov 13
1
mdX and mismatch_cnt when building an array
CentOS 6.3, x86_64. I have noticed when building a new software RAID-6 array on CentOS 6.3 that the mismatch_cnt grows monotonically while the array is building: # cat /proc/mdstat Personalities : [raid6] [raid5] [raid4] md11 : active raid6 sdg[5] sdf[4] sde[3] sdd[2] sdc[1] sdb[0] 3904890880 blocks super 1.2 level 6, 512k chunk, algorithm 2 [6/6] [UUUUUU] [==================>..] resync = 90.2% (880765600/976222720) finish=44.6min speed=35653K/sec # cat /sys/block/md11/md/mismatch_cnt 1439285488...
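For what it's worth, mismatch_cnt is generally only meaningful after a completed check pass rather than during the initial resync; a minimal sketch of running one on the array above (md11) might look like:

# after the initial resync finishes, request an explicit consistency check
echo check > /sys/block/md11/md/sync_action
cat /proc/mdstat                       # watch the check progress
cat /sys/block/md11/md/mismatch_cnt    # read the count once the check completes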
2019 Jan 22
2
C7 and mdadm
...bad one. why it was there in the box, rather than where I started looking...) Brought it up, RAID not working. I finally found that I had to do an mdadm --stop /dev/md0, then I could do an assemble, then I could add the new drive. But: it's now cat /proc/mdstat Personalities : [raid6] [raid5] [raid4] md0 : active (auto-read-only) raid5 sdg1[8](S) sdh1[7] sdf1[4] sde1[3] sdd1[2] sdc1[1] 23441313792 blocks super 1.2 level 5, 512k chunk, algorithm 2 [7/5] [_UUUU_U] bitmap: 0/30 pages [0KB], 65536KB chunk unused devices: <none> and I can't mount it (it's xfs, btw). *Sho...
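The (auto-read-only) state usually just means nothing has written to the array since assembly; a hedged sketch of switching it back to read-write so the rebuild onto the added disk can start, before retrying the mount (the /mnt mount point is a placeholder):

mdadm --readwrite /dev/md0    # clear the auto-read-only flag
mdadm --detail /dev/md0       # confirm the state and that recovery has begun
mount /dev/md0 /mnt           # then try the xfs mount again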
2015 Feb 16
4
Centos 7.0 and mismatched swap file
On Mon, Feb 16, 2015 at 6:47 AM, Eliezer Croitoru <eliezer at ngtech.co.il> wrote: > I am unsure I understand what you wrote. > "XFS will create multiple AG's across all of those > devices," > Are you comparing md linear/concat to md raid0? and that the upper level XFS > will run on top them? Yes to the first question, I'm not understanding the second
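As a rough illustration of the md linear + XFS arrangement under discussion (the /dev/sdb and /dev/sdc member devices are placeholders, not taken from the thread):

# concatenate two disks into a linear md device and let mkfs.xfs
# spread its allocation groups across the whole span
mdadm --create /dev/md0 --level=linear --raid-devices=2 /dev/sdb /dev/sdc
mkfs.xfs /dev/md0                       # mkfs output reports the agcount it chose
mount /dev/md0 /mnt && xfs_info /mnt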
2013 Jan 04
2
Syslinux 5.00 - Doesn't boot my system / Not passing the kernel options to the kernel?
...umber Start (sector) End (sector) Size Code Name 1 2048 264191 128.0 MiB FD00 Boot 2 264192 41943006 19.9 GiB FD00 Bank The partitions are part of a mdraid setup: testVM ~ # cat /proc/mdstat Personalities : [raid1] [raid6] [raid5] [raid4] [raid10] md1 : active raid1 sda2[0] sdb2[1] 20838311 blocks super 1.2 [2/2] [UU] md0 : active raid1 sda1[0] sdb1[1] 131060 blocks super 1.0 [2/2] [UU] sda1/sdb1 = /dev/md0 -> /boot (ext4) sda2/sdb2 = /dev/md1 -> Luks encrypted LVM2 container with / and other volumes This is...
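One detail worth noting from that layout: /boot lives on an array with superblock format 1.0, which keeps the md metadata at the end of each member so a bootloader can read sda1/sdb1 as plain filesystems. A hedged sketch of creating such an array (devices as in the post, everything else generic):

mdadm --create /dev/md0 --metadata=1.0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mkfs.ext4 /dev/md0    # /boot filesystem; the superblock-at-the-end layout is what the bootloader relies on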
2016 Nov 05
3
Avago (LSI) SAS-3 controller, poor performance on CentOS 7
...ChipRevision(0x02), BiosVersion(08.25.00.00) mpt3sas_cm0: Protocol=( scsi0 : Fusion MPT SAS Host mpt3sas_cm0: sending port enable !! mpt3sas_cm0: host_add: handle(0x0001), sas_addr(0x500605b00aee5510), phys(8) mpt3sas_cm0: port enable: SUCCESS # cat /proc/mdstat Personalities : [raid6] [raid5] [raid4] [raid1] md0 : active raid1 sda2[0] sdb2[1] 999360 blocks super 1.0 [2/2] [UU] md1 : active raid5 sdf3[6] sde3[4] sdd3[3] sdc3[2] sda3[0] sdb3[1] 19528458240 blocks super 1.2 level 5, 64k chunk, algorithm 2 [6/5] [UUUUU_] [====>................] recovery = 22.1% (86437493...
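While a recovery like the one shown is running, md's own resync throttling can also limit throughput; a cautious sketch of inspecting and temporarily raising the limits (values in KB/s, chosen arbitrarily here):

cat /proc/sys/dev/raid/speed_limit_min /proc/sys/dev/raid/speed_limit_max
echo 50000  > /proc/sys/dev/raid/speed_limit_min   # let the rebuild use more bandwidth
echo 500000 > /proc/sys/dev/raid/speed_limit_max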
2017 Aug 18
4
Problem with softwareraid
...d the following: Booting any installed kernel gives me NO md0 device. (ls /dev/md* doesn't give anything). A 'cat /proc/partitions' now shows me the /dev/sd[a-d]1 partitions. partprobe and an mdadm assemble give me "disk busy" [root at quad live]# cat mdstat Personalities : [raid6] [raid5] [raid4] [raid10] unused devices: <none> [root at quad ~]# partprobe device-mapper: remove ioctl on WDC_WD20EFRX-68AX9N0_WD-WMC301255087p1 failed: Device or resource busy Warning: parted was unable to re-read the partition table on /dev/mapper/WDC_WD20EFRX-68AX9N0_WD-WMC301255087 (Device or resource...
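The "Device or resource busy" errors suggest device-mapper maps are still holding the member disks; a cautious sketch of checking and releasing them before retrying assembly (the map name is taken from the partprobe output above, and removal is only safe if nothing is using the map):

dmsetup ls                                              # list maps holding the disks
dmsetup remove WDC_WD20EFRX-68AX9N0_WD-WMC301255087p1   # only if the map is genuinely unused
mdadm --assemble --scan                                 # retry assembly once the members are free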
2019 Jun 28
5
raid 5 install
On Jun 28, 2019, at 8:46 AM, Blake Hudson <blake at ispn.net> wrote: > > Linux software RAID has only decreased availability for me. This has been due to a combination of hardware and software issues that are generally handled well by HW RAID controllers, but are often handled poorly or unpredictably by desktop-oriented hardware and Linux software. Would you care to be more
2015 Feb 16
0
Centos 7.0 and mismatched swap file
...angement. This extends > to using raid1 + linear instead of raid10 if some redundancy is > desired. The other plus is that growing linear arrays is cake. They just get added to the end of the concat, and xfs_growfs is used. Takes less than a minute. Whereas md raid0 grow means converting to raid4, then adding the device, then converting back to raid0. And further, linear grow can be any size drive, whereas clearly with raid0 the drive sizes must all be the same. -- Chris Murphy
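A minimal sketch of that linear-grow path, assuming an XFS filesystem on /dev/md0 mounted at /srv/data (both names are placeholders):

mdadm --grow /dev/md0 --add /dev/sdx1   # append a new member to the linear array
xfs_growfs /srv/data                    # grow the mounted filesystem into the new space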
2017 Aug 19
0
Problem with softwareraid
...installed kernel gives me NO md0 device. (ls /dev/md* > doesnt give anything). a 'cat /proc/partitions show me now > /dev/sd[a-d]1 partition. partprobe and a mdadm assemble gives me "disk > busy" > > [root at quad live]# cat mdstat > Personalities : [raid6] [raid5] [raid4] [raid10] > unused devices: <none> > > [root at quad ~]# partprobe > device-mapper: remove ioctl on WDC_WD20EFRX-68AX9N0_WD-WMC301255087p1 > failed: Device or resource busy > >>>>>>>>>>>>>>>>>>>>>>>>>...
2019 Jul 01
0
raid 5 install
...port has been historically poor. My comments are limited to mdadm. I've experienced three faults when using Linux software raid (mdadm) on RH/RHEL/CentOS and I believe all of them resulted in more downtime than would have been experienced without the RAID: 1) A single drive failure in a RAID4 or 5 array (desktop IDE) caused the entire system to stop responding. The result was a degraded (from the dead drive) and dirty (from the crash) array that could not be rebuilt (either of the former conditions would have been fine, but not both due to buggy Linux software). 2) A single dri...
2004 Sep 13
1
throughput of 300MB/s
Hello, are there any experiences with samba as a _really_ fast server? Assuming the filesystem and network are fast enough, has anyone managed to get a throughput in samba of, let's say, 300 MB/s? Are there any benchmarks? regards, Martin
2015 Feb 18
5
CentOS 7: software RAID 5 array with 4 disks and no spares?
...0 1,4G 0% /sys/fs/cgroup /dev/md125 194M 80M 101M 45% /boot /dev/sde1 917G 88M 871G 1% /mnt The root partition (/dev/md127) only shows 226 G of space. So where has everything gone? [root at nestor:~] # cat /proc/mdstat Personalities : [raid1] [raid6] [raid5] [raid4] md125 : active raid1 sdc2[2] sdd2[3] sdb2[1] sda2[0] 204736 blocks super 1.0 [4/4] [UUUU] md126 : active raid1 sdd1[3] sdc1[2] sdb1[1] sda1[0] 4095936 blocks super 1.2 [4/4] [UUUU] md127 : active raid5 sdc3[2] sdb3[1] sdd3[4] sda3[0] 240087552 blocks super 1.2 level 5, 512k...
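A hedged way to track down where the rest of that RAID5 capacity went (the md names come from the mdstat above; whether LVM sits on top of md127 is an assumption here):

mdadm --detail /dev/md127   # total size the RAID5 device actually provides
lsblk /dev/md127            # what is stacked on top of it (LVM, partitions, ...)
pvs && vgs && lvs           # if it is an LVM PV, look for unallocated space in the VG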
2015 Feb 18
0
CentOS 7: software RAID 5 array with 4 disks and no spares?
...0 1,4G 0% /sys/fs/cgroup /dev/md125 194M 80M 101M 45% /boot /dev/sde1 917G 88M 871G 1% /mnt The root partition (/dev/md127) only shows 226 G of space. So where has everything gone? [root at nestor:~] # cat /proc/mdstat Personalities : [raid1] [raid6] [raid5] [raid4] md125 : active raid1 sdc2[2] sdd2[3] sdb2[1] sda2[0] 204736 blocks super 1.0 [4/4] [UUUU] md126 : active raid1 sdd1[3] sdc1[2] sdb1[1] sda1[0] 4095936 blocks super 1.2 [4/4] [UUUU] md127 : active raid5 sdc3[2] sdb3[1] sdd3[4] sda3[0] 240087552 blocks super 1.2 level 5, 512k...
2011 Jun 24
1
How long should resize2fs take?
...00G resize2fs 1.41.11 (14-Mar-2010) Resizing the filesystem on /dev/mapper/data-data to 786432000 (4k) blocks. Time passes. :D It's an LVM comprising 4x2TB disks in RAID10 and 4x500GB in RAID10. $ cat /proc/mdstat Personalities : [linear] [multipath] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] md1 : active raid10 sdi1[0] sdg1[1] sdf1[3] sdh1[2] 976767872 blocks 64K chunks 2 near-copies [4/4] [UUUU] md0 : active raid10 sda1[2] sdc1[3] sdb1[1] sdd1[0] 3907023872 blocks 64K chunks 2 near-copies [4/4] [UUUU] Disks are 7200RPM SATA disks. It's ~2TB full of data which is mo...
2012 Jul 10
1
Problem with RAID on 6.3
...00 0000 * 00001c0 0002 ffee ffff 0001 0000 88af e8e0 0000 00001d0 0000 0000 0000 0000 0000 0000 0000 0000 * 00001f0 0000 0000 0000 0000 0000 0000 0000 aa55 0000200 So far, so normal. This works fine under 2.6.32-220.23.1.el6.x86_64 Personalities : [raid1] [raid10] [raid6] [raid5] [raid4] md127 : active raid5 sdj3[2] sdi2[1] sdk4[3] sdh1[0] 5860537344 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU] However, I just patched to CentOS 6.3 and on reboot this array failed to be built. The 2.6.32-279 kernel complained that /dev/sdj was too similar to /dev/sdj3. But I reb...
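A hedged way to see what autodetection is tripping over is to compare the superblocks on the whole disk and the partition, then assemble explicitly from the intended members (device and array names taken from the mdstat output above):

mdadm --examine /dev/sdj /dev/sdj3   # look for a stale superblock on the whole disk
mdadm --assemble /dev/md127 /dev/sdh1 /dev/sdi2 /dev/sdj3 /dev/sdk4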
2019 Jul 01
5
raid 5 install
...ZoL) code has been stable for years. In recent months, the BSDs have rebased their offerings from Illumos to ZoL. The macOS port, called O3X, is also mostly based on ZoL. That leaves Solaris as the only major OS with a ZFS implementation not based on ZoL. > 1) A single drive failure in a RAID4 or 5 array (desktop IDE) Can I take by "IDE" that you mean "before SATA", so you're giving a data point something like twenty years old? > 2) A single drive failure in a RAID1 array (Supermicro SCSI) Another dated tech reference, if by "SCSI" you mean parallel SCSI, not SAS. I don't mind...
2019 Oct 28
1
NFS shutdown issue
...0 0 LABEL=rsnapshot /rsnapshot xfs defaults 0 0 /etc/exports /data 10.0.223.0/22(rw,async,no_root_squash) /rsnapshot 10.0.223.0/22(ro,sync,no_root_squash) mdraid info [root at linux-fs01 ~]# cat /proc/mdstat Personalities : [raid6] [raid5] [raid4] md125 : active raid5 sda[3] sdb[2] sdc[1] sdd[0] 134217728 blocks super external:/md127/0 level 5, 64k chunk, algorithm 0 [4/4] [UUUU] md126 : active raid5 sda[3] sdb[2] sdc[1] sdd[0] 5440012288 blocks super external:/md127/1 level 5, 64k chunk, algorithm 0 [4/4] [UUUU] md127 : inact...
2009 Apr 17
0
problem with 5.3 upgrade or just bad timing?
...see which (or is it both?) of the md devices is generating these errors. system is running CentOS 5.3 64bit: # uname -a Linux xenmaster.dimension-x.local 2.6.18-128.1.6.el5xen #1 SMP Wed Apr 1 09:53:14 EDT 2009 x86_64 x86_64 x86_64 GNU/Linux # cat /proc/mdstat Personalities : [raid6] [raid5] [raid4] md1 : active raid5 sdk1[2] sdj1[4] sdi1[3] sdh1[0] sdg1[1] 976783616 blocks level 5, 64k chunk, algorithm 2 [5/5] [UUUUU] md0 : active raid5 sdf1[3] sde1[1] sdd1[4](S) sdc1[0] sdb1[2] 2197715712 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU] unused devices: <none> here is...
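One hedged way to map such errors back to a specific array is to list each array's members and compare against the failing sdX device reported in the kernel log:

mdadm --detail /dev/md0 /dev/md1                            # shows which sdX members belong to which array
cat /sys/block/md0/md/degraded /sys/block/md1/md/degraded   # non-zero if an array has lost members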
2012 Dec 16
1
Upgraded to Syslinux 5.00 - Failed to load ldlinux.c32
...da1 and sdb1 are MD raid 1 (md127) and contain /boot (ext4) - sda2 and sdb2 are MD raid 1 (md126) and contain an LVM volume holding the rootfs (ext4) and other logical volumes root at sysresccd /root % cat /proc/mdstat Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] md126 : active raid1 sda2[0] sdb2[1] 20662200 blocks super 1.2 [2/2] [UU] md127 : active raid1 sda1[0] sdb1[1] 307188 blocks super 1.0 [2/2] [UU] Thanks. -- Regards, Igor
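For context, Syslinux 5.00 split much of its core out into ldlinux.c32, which has to sit next to the installed loader and match its version; a hedged sketch of the usual fix (the paths below are typical examples, not taken from the thread):

cp /usr/share/syslinux/ldlinux.c32 /boot/extlinux/   # copy the ldlinux.c32 from the same 5.00 build
extlinux --install /boot/extlinux                    # reinstall so the loader and ldlinux.c32 agree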