similar to: mdadm quandary

Displaying 20 results from an estimated 1000 matches similar to: "mdadm quandary"

2015 Feb 18
5
CentOS 7: software RAID 5 array with 4 disks and no spares?
Hi, I just replaced Slackware64 14.1 running on my office's HP Proliant Microserver with a fresh installation of CentOS 7. The server has 4 x 250 GB disks. Every disk is configured like this: * 200 MB /dev/sdX1 for /boot * 4 GB /dev/sdX2 for swap * 248 GB /dev/sdX3 for / There are supposed to be no spare devices. /boot and swap are all supposed to be assembled in RAID level 1 across
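For reference, a minimal sketch of how such a layout could be built by hand with mdadm (the md numbers, metadata version and device letters are assumptions, not taken from the post):

    # RAID 1 across the four /boot partitions (metadata 1.0 so the bootloader can read it)
    mdadm --create /dev/md0 --level=1 --raid-devices=4 --metadata=1.0 /dev/sd[abcd]1
    # RAID 5 across the four root partitions, no spares
    mdadm --create /dev/md1 --level=5 --raid-devices=4 /dev/sd[abcd]3
    # verify: 4 active devices, 0 spares
    mdadm --detail /dev/md1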
2002 May 29
3
rsync 2.5.5, HPUX, getting unexplained error at main.c(578)
I compiled rsync-2.5.5 on HPUX 11.11, using the +DA2.0W and +O3 options. Invoking a simple rsync to transfer a file works (I ran a diff on the file, no changes), e.g.: sdx1 214: ./rsync --rsh='/usr/bin/ssh -x' --rsync-path=/usr/local/src/rsync-2.5.5/rsync /scratch/chuck/tmp.test sdx2:/scratch/chuck However, adding the -a option yields an unexplained error: (In all of the following cases
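A sketch of the failing invocation with -a plus extra verbosity, the usual first step for narrowing down this kind of main.c error (the added -vvv flag is my suggestion, not part of the original report):

    ./rsync -a -vvv --rsh='/usr/bin/ssh -x' \
        --rsync-path=/usr/local/src/rsync-2.5.5/rsync \
        /scratch/chuck/tmp.test sdx2:/scratch/chuck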
2019 Jan 29
2
C7, mdadm issues
Alessandro Baggi wrote: > On 29/01/19 15:03, mark wrote: > >> I've no idea what happened, but the box I was working on last week has >> a *second* bad drive. Actually, I'm starting to wonder about that >> particular hot-swap bay. >> >> Anyway, mdadm --detail shows /dev/sdb1 removed. I've added /dev/sdi1... >> but see both /dev/sdh1 and
2019 Jan 29
2
C7, mdadm issues
Alessandro Baggi wrote: > On 29/01/19 18:47, mark wrote: >> Alessandro Baggi wrote: >>> On 29/01/19 15:03, mark wrote: >>> >>>> I've no idea what happened, but the box I was working on last week >>>> has a *second* bad drive. Actually, I'm starting to wonder about >>>> that particular hot-swap bay. >>>>
2017 Apr 08
2
lvm cache + qemu-kvm stops working after about 20GB of writes
Hello, I would really appreciate some help/guidance with this problem. First of all, sorry for the long message. I would file a bug, but do not know if it is my fault, dm-cache, qemu or (probably) a combination of both. And I can imagine some of you have this setup up and running without problems (or maybe you think it works, just like I did, but it does not): PROBLEM LVM cache writeback
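For context, a writeback LVM cache of the kind described is typically set up roughly like this (the volume group, LV names and sizes are placeholders, not the poster's actual configuration):

    # cache data and metadata LVs on the fast device
    lvcreate -L 20G -n vmcache      vg0 /dev/ssd
    lvcreate -L 64M -n vmcache_meta vg0 /dev/ssd
    # combine them into a cache pool
    lvconvert --type cache-pool --poolmetadata vg0/vmcache_meta vg0/vmcache
    # attach the pool to the VM data volume in writeback mode
    lvconvert --type cache --cachepool vg0/vmcache --cachemode writeback vg0/vmdata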
2019 Jan 22
2
C7 and mdadm
A user's system had a hard drive failure over the weekend. Linux RAID 6. I identified the drive, brought the system down (8 drives, and I didn't know the s/n of the bad one; why it was there in the box, rather than where I started looking...). Brought it up, RAID not working. I finally found that I had to do an mdadm --stop /dev/md0, then I could do an assemble, then I could add the new
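The recovery sequence described above, written out as commands (the array name and partitions are assumptions based on the post):

    mdadm --stop /dev/md0
    mdadm --assemble /dev/md0 /dev/sd[a-h]1    # or: mdadm --assemble --scan
    mdadm --manage /dev/md0 --add /dev/sdi1    # add the replacement drive
    cat /proc/mdstat                           # watch the rebuild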
2019 Jan 30
2
C7, mdadm issues
Alessandro Baggi wrote: > On 30/01/19 14:02, mark wrote: >> On 01/30/19 03:45, Alessandro Baggi wrote: >>> On 29/01/19 20:42, mark wrote: >>>> Alessandro Baggi wrote: >>>>> On 29/01/19 18:47, mark wrote: >>>>>> Alessandro Baggi wrote: >>>>>>> On 29/01/19 15:03, mark wrote:
2019 Jan 30
1
C7, mdadm issues
Alessandro Baggi wrote: > On 30/01/19 16:33, mark wrote: > >> Alessandro Baggi wrote: >> >>> On 30/01/19 14:02, mark wrote: >>> >>>> On 01/30/19 03:45, Alessandro Baggi wrote: >>>> >>>>> On 29/01/19 20:42, mark wrote: >>>>> >>>>>> Alessandro Baggi wrote:
2018 Jun 01
1
[PATCH v2] daemon: inspect: better handling windows drive mapping.
I saw several Windows disk images which contain a strange registry entry for mapped drives: "\\DosDevices\\Y:"=hex(3):00,00,00,00,00,00,00,00,00,00,00,00 This decodes to something like diskID = 0x0, with the partition starting at a 0-byte offset from the start of the disk. In addition to a Windows disk image, I have attached a dummy disk and made an xfs file system on the whole device without
2019 Jan 30
4
C7, mdadm issues
On 01/30/19 03:45, Alessandro Baggi wrote: > On 29/01/19 20:42, mark wrote: >> Alessandro Baggi wrote: >>> On 29/01/19 18:47, mark wrote: >>>> Alessandro Baggi wrote: >>>>> On 29/01/19 15:03, mark wrote: >>>>> >>>>>> I've no idea what happened, but the box I was working on last week
2019 Jan 29
2
C7, mdadm issues
I've no idea what happened, but the box I was working on last week has a *second* bad drive. Actually, I'm starting to wonder about that particular hot-swap bay. Anyway, mdadm --detail shows /dev/sdb1 removed. I've added /dev/sdi1... but see both /dev/sdh1 and /dev/sdi1 as spare, and have yet to find a reliable way to make either one active. Actually, I would have expected the Linux
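One common way to untangle a stuck spare, sketched with the device names from the post (the md device name is a placeholder, and whether --re-add works depends on the array's state, so treat this as a starting point only):

    mdadm --detail /dev/md0                     # state of every member
    mdadm --manage /dev/md0 --remove /dev/sdb1  # clear the failed/removed slot
    mdadm --manage /dev/md0 --re-add /dev/sdh1  # try to promote one spare
    cat /proc/mdstat                            # check whether a rebuild starts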
2019 Jan 30
3
C7, mdadm issues
On 30/01/19 16:49, Simon Matter wrote: >> On 01/30/19 03:45, Alessandro Baggi wrote: >>> On 29/01/19 20:42, mark wrote: >>>> Alessandro Baggi wrote: >>>>> On 29/01/19 18:47, mark wrote: >>>>>> Alessandro Baggi wrote: >>>>>>> On 29/01/19 15:03, mark wrote: >>>>>>>
2009 Sep 24
4
mdadm size issues
Hi, I am trying to create a 10-drive RAID 6 array. OS is CentOS 5.3 (64-bit). All 10 drives are 2T in size. Devices sd{a,b,c,d,e,f} are on my motherboard; devices sd{i,j,k,l} are on a PCI Express Areca card (relevant lspci info below). #lspci 06:0e.0 RAID bus controller: Areca Technology Corp. ARC-1210 4-Port PCI-Express to SATA RAID Controller The controller is set to JBOD the drives. All
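For context, creating such an array and checking the member sizes the kernel reports usually looks like this (the partition numbers and md name are assumptions):

    # size of each member as the kernel sees it, in bytes
    blockdev --getsize64 /dev/sd[abcdefijkl]
    mdadm --create /dev/md0 --level=6 --raid-devices=10 /dev/sd[abcdef]1 /dev/sd[ijkl]1
    mdadm --detail /dev/md0    # compare "Used Dev Size" with the raw disk size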
2015 Jun 10
2
[PATCH] New API: btrfs_replace_start
Signed-off-by: Pino Tsao <caoj.fnst@cn.fujitsu.com> --- daemon/btrfs.c | 40 +++++++++++++++++++++++++++++++++++++++ generator/actions.ml | 19 +++++++++++++++++++ tests/btrfs/test-btrfs-devices.sh | 8 ++++++++ 3 files changed, 67 insertions(+) diff --git a/daemon/btrfs.c b/daemon/btrfs.c index 39392f7..acc300d 100644 --- a/daemon/btrfs.c +++
2015 Feb 18
0
CentOS 7: software RAID 5 array with 4 disks and no spares?
Hi Niki, md127 apparently only uses 81.95GB per disk. Maybe one of the partitions has the wrong size. What's the output of lsblk? Regards Michael ----- Original Message ----- From: "Niki Kovacs" <info at microlinux.fr> To: "CentOS mailing list" <CentOS at centos.org> Sent: Wednesday, 18 February 2015 08:09:13 Subject: [CentOS] CentOS 7: software RAID 5
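The check being asked for, roughly (the exact lsblk columns are just one reasonable choice):

    lsblk -b -o NAME,SIZE,TYPE,MOUNTPOINT
    cat /proc/mdstat
    mdadm --detail /dev/md127 | grep -i 'dev size'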
2015 Jun 12
2
Re: [PATCH] New API: btrfs_replace_start
On 2015-06-12 17:12, Pino Toscano wrote: > On Friday 12 June 2015 10:58:34 Pino Tsao wrote: >> Hi, >> >> On 2015-06-11 17:43, Pino Toscano wrote: >>> Hi, >>> >>> On Wednesday 10 June 2015 17:54:18 Pino Tsao wrote: >>>> Signed-off-by: Pino Tsao <caoj.fnst@cn.fujitsu.com> >>>> --- >>>> daemon/btrfs.c
2015 Jun 12
2
Re: [PATCH] New API: btrfs_replace_start
Hi, On 2015-06-11 17:43, Pino Toscano wrote: > Hi, > > On Wednesday 10 June 2015 17:54:18 Pino Tsao wrote: >> Signed-off-by: Pino Tsao <caoj.fnst@cn.fujitsu.com> >> --- >> daemon/btrfs.c | 40 +++++++++++++++++++++++++++++++++++++++ >> generator/actions.ml | 19 +++++++++++++++++++ >> tests/btrfs/test-btrfs-devices.sh |
2014 Oct 07
2
umount problem
I've got a USB HD mounted, and it has been mounted since the weekend, and has been kept busy during that period. Now I'm done with it and want to umount it, but neither umount nor the on-screen icon (when right-clicked) will let me do it. It is /dev/sdd1, mounted as /media/seagateusb. When root tries to umount it we get this: # umount /media/seagateusb umount: /media/seagateusb: device
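The usual way to find out what is keeping such a mount busy (the lazy unmount at the end is my addition, not something from the thread, and only a last resort):

    fuser -vm /media/seagateusb    # processes using the mount point
    lsof +f -- /media/seagateusb   # alternative view of open files
    umount -l /media/seagateusb    # lazy unmount, last resort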
2015 Mar 17
2
Fail to set up UEFI syslinux on ArchLinux USB Flash Drive
On Tue, Mar 17, 2015 at 4:16 AM, Ferenc Wagner <wferi at niif.hu> wrote: > alex lupu via Syslinux <syslinux at zytor.com> writes: > > > Obviously it would work IF I moved the vmlinuz > > and initramfs files from /dev/sdd2 to /dev/sdd1. > > > > I figured that would probably be considered non-standard Arch > ... > > The standard
2015 Mar 18
0
Fail to set up UEFI syslinux on ArchLinux USB Flash Drive
alex lupu <alupu01 at gmail.com> writes: > On Tue, Mar 17, 2015 at 4:16 AM, Ferenc Wagner <wferi at niif.hu> wrote: > >> alex lupu via Syslinux <syslinux at zytor.com> writes: >> >>> Obviously it would work IF I moved the vmlinuz >>> and initramfs files from /dev/sdd2 to /dev/sdd1. >>> >>> I figured that would probably be