search for: sdi1

Displaying 20 results from an estimated 40 matches for "sdi1".

2019 Jan 29
2
C7, mdadm issues
I've no idea what happened, but the box I was working on last week has a *second* bad drive. Actually, I'm starting to wonder about that particulare hot-swap bay. Anyway, mdadm --detail shows /dev/sdb1 remove. I've added /dev/sdi1... but see both /dev/sdh1 and /dev/sdi1 as spare, and have yet to find a reliable way to make either one active. Actually, I would have expected the linux RAID to replace a failed one with a spare.... Clues for the poor? I *really* don't want to freak out the user by taking it down, and build...
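The situation in this thread (a replacement member stuck as "spare" instead of rebuilding) can usually be inspected and nudged along with a few mdadm commands. A minimal sketch, assuming the array is /dev/md0 and the new disk is /dev/sdi1 (both hypothetical names here, substitute your own):

```shell
# Hypothetical names: adjust /dev/md0 and /dev/sdi1 to your setup.
# First, look at the array and at the kernel's view of it:
mdadm --detail /dev/md0
cat /proc/mdstat

# A spare is only promoted when the array is degraded. Clear out any
# members mdadm still tracks as failed or physically detached
# ("failed" and "detached" are keywords mdadm accepts with --remove):
mdadm /dev/md0 --remove failed
mdadm /dev/md0 --remove detached

# If the disk still shows as spare, drop it and add it back so a
# fresh rebuild can start:
mdadm /dev/md0 --remove /dev/sdi1
mdadm /dev/md0 --add /dev/sdi1
```

Whether the re-added disk actually starts rebuilding still depends on the array being degraded; if two slots are marked removed, both spares should be picked up one after the other.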
2019 Jan 29
2
C7, mdadm issues
...appened, but the box I was working on last week >>>> has a *second* bad drive. Actually, I'm starting to wonder about >>>> that particulare hot-swap bay. >>>> >>>> Anyway, mdadm --detail shows /dev/sdb1 remove. I've added >>>> /dev/sdi1... >>>> but see both /dev/sdh1 and /dev/sdi1 as spare, and have yet to find >>>> a reliable way to make either one active. >>>> >>>> Actually, I would have expected the linux RAID to replace a failed >>>> one with a spare.... >>>...
2019 Jan 30
4
C7, mdadm issues
...st week >>>>>> has a *second* bad drive. Actually, I'm starting to wonder about >>>>>> that particulare hot-swap bay. >>>>>> >>>>>> Anyway, mdadm --detail shows /dev/sdb1 remove. I've added >>>>>> /dev/sdi1... >>>>>> but see both /dev/sdh1 and /dev/sdi1 as spare, and have yet to find >>>>>> a reliable way to make either one active. >>>>>> >>>>>> Actually, I would have expected the linux RAID to replace a failed ...
2006 Nov 16
1
Regarding debugocfs
Hi experts, My customer issued debugocfs to check for file_size and extent info but values such as file_size, alloc_size, next_free_ext were 0. (/dev/sdi1 contains datafiles and arc files) # debugocfs -a 0 /dev/sdi1 debugocfs 1.0.10-PROD1 Fri Mar 5 14:35:29 PST 2004 (build fcb0206676afe0fcac47a99c90de0e7b) file_extent_0: file_number = 128 disk_offset = 1433600 curr_master = 0 file_lock = OCFS_DLM_NO_LOCK oin_node_map = 000000000000000000000000000000...
2019 Jan 29
2
C7, mdadm issues
...wrote: > >> I've no idea what happened, but the box I was working on last week has >> a *second* bad drive. Actually, I'm starting to wonder about that >> particulare hot-swap bay. >> >> Anyway, mdadm --detail shows /dev/sdb1 remove. I've added /dev/sdi1... >> but see both /dev/sdh1 and /dev/sdi1 as spare, and have yet to find a >> reliable way to make either one active. >> >> Actually, I would have expected the linux RAID to replace a failed one >> with a spare.... >> >> Clues for the poor? I *really* don'...
2019 Jan 30
2
C7, mdadm issues
...has a *second* bad drive. Actually, I'm starting >>>>>>>> to wonder about that particulare hot-swap bay. >>>>>>>> >>>>>>>> Anyway, mdadm --detail shows /dev/sdb1 remove. I've added >>>>>>>> /dev/sdi1... >>>>>>>> but see both /dev/sdh1 and /dev/sdi1 as spare, and have yet >>>>>>>> to find a reliable way to make either one active. >>>>>>>> >>>>>>>> Actually, I would have expected the linux RAID to repl...
2015 Aug 25
0
CentOS 6.6 - reshape of RAID 6 is stucked
Hello I have a CentOS 6.6 Server with 13 disks in a RAID 6. Some weeks ago, i upgraded it to 17 disks, two of them configured as spare. The reshape worked like normal in the beginning. But at 69% it stopped. md2 : active raid6 sdj1[0] sdg1[18](S) sdh1[2] sdi1[5] sdm1[15] sds1[12] sdr1[14] sdk1[9] sdo1[6] sdn1[13] sdl1[8] sdd1[20] sdf1[19] sdq1[16] sdb1[10] sde1[17](S) sdc1[21] 19533803520 blocks super 1.2 level 6, 1024k chunk, algorithm 2 [15/15] [UUUUUUUUUUUUUUU] [=============>.......] reshape = 69.0% (1347861324/1953380352) finish=461...
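When a reshape freezes at a fixed percentage like this, two md sysfs knobs are worth checking before anything more drastic. A sketch against the md2 array from the post (paths assume a reasonably modern kernel, and writing to them requires root):

```shell
# Array name (md2) taken from the /proc/mdstat excerpt above.
# If sync_max holds a sector number rather than "max", the reshape
# will deliberately pause when it reaches that point:
cat /sys/block/md2/md/sync_max
echo max > /sys/block/md2/md/sync_max

# RAID5/6 reshapes can also stall on an undersized stripe cache:
cat /sys/block/md2/md/stripe_cache_size
echo 8192 > /sys/block/md2/md/stripe_cache_size

# Then watch whether the percentage starts moving again:
watch -n 5 cat /proc/mdstat
```

If neither helps, check dmesg for a member disk throwing read errors, since a single slow or failing drive can also pin the reshape in place.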
2019 Jan 30
1
C7, mdadm issues
...lly, I'm >>>>>>>>>> starting to wonder about that particulare hot-swap bay. >>>>>>>>>> >>>>>>>>>> Anyway, mdadm --detail shows /dev/sdb1 remove. I've >>>>>>>>>> added /dev/sdi1... >>>>>>>>>> but see both /dev/sdh1 and /dev/sdi1 as spare, and have >>>>>>>>>> yet to find a reliable way to make either one active. >>>>>>>>>> >>>>>>>>>> Actually, I would have...
2019 Jan 30
0
C7, mdadm issues
...I was working on last week >>>>> has a *second* bad drive. Actually, I'm starting to wonder about >>>>> that particulare hot-swap bay. >>>>> >>>>> Anyway, mdadm --detail shows /dev/sdb1 remove. I've added >>>>> /dev/sdi1... >>>>> but see both /dev/sdh1 and /dev/sdi1 as spare, and have yet to find >>>>> a reliable way to make either one active. >>>>> >>>>> Actually, I would have expected the linux RAID to replace a failed >>>>> one with a spa...
2019 Jan 30
3
C7, mdadm issues
...has a *second* bad drive. Actually, I'm starting to wonder about >>>>>>>> that particulare hot-swap bay. >>>>>>>> >>>>>>>> Anyway, mdadm --detail shows /dev/sdb1 remove. I've added >>>>>>>> /dev/sdi1... >>>>>>>> but see both /dev/sdh1 and /dev/sdi1 as spare, and have yet to find >>>>>>>> a reliable way to make either one active. >>>>>>>> >>>>>>>> Actually, I would have expected the linux RAID to repl...
2019 Jan 30
0
C7, mdadm issues
...>>>> has a *second* bad drive. Actually, I'm starting to wonder about >>>>>>> that particulare hot-swap bay. >>>>>>> >>>>>>> Anyway, mdadm --detail shows /dev/sdb1 remove. I've added >>>>>>> /dev/sdi1... >>>>>>> but see both /dev/sdh1 and /dev/sdi1 as spare, and have yet to find >>>>>>> a reliable way to make either one active. >>>>>>> >>>>>>> Actually, I would have expected the linux RAID to replace a failed ...
2019 Jan 30
0
C7, mdadm issues
...bad drive. Actually, I'm starting >>>>>>>>> to wonder about that particulare hot-swap bay. >>>>>>>>> >>>>>>>>> Anyway, mdadm --detail shows /dev/sdb1 remove. I've added >>>>>>>>> /dev/sdi1... >>>>>>>>> but see both /dev/sdh1 and /dev/sdi1 as spare, and have yet >>>>>>>>> to find a reliable way to make either one active. >>>>>>>>> >>>>>>>>> Actually, I would have expected the li...
2019 Jan 30
0
C7, mdadm issues
...>>>> has a *second* bad drive. Actually, I'm starting to wonder about >>>>>>> that particulare hot-swap bay. >>>>>>> >>>>>>> Anyway, mdadm --detail shows /dev/sdb1 remove. I've added >>>>>>> /dev/sdi1... >>>>>>> but see both /dev/sdh1 and /dev/sdi1 as spare, and have yet to find >>>>>>> a reliable way to make either one active. >>>>>>> >>>>>>> Actually, I would have expected the linux RAID to replace a failed ...
2009 Sep 24
4
mdadm size issues
...evice Boot Start End Blocks Id System /dev/sda1 1 243201 1953512001 83 Linux .... I go about creating the array as follows # mdadm --create --verbose /dev/md3 --level=6 --raid-devices=10 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdi1 /dev/sdj1 /dev/sdk1 /dev/sdl1 mdadm: layout defaults to left-symmetric mdadm: chunk size defaults to 64K mdadm: size set to 1953511936K Continue creating array? As you can see mdadm sets the size to 1.9T. Looking around there was this limitation on older versions of mdadm if they are the 32 bit v...
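Note that the size mdadm prints here is per component, not the array total, so the expected usable capacity is easy to sanity-check with shell arithmetic. A quick check using the numbers from the post (RAID 6 loses two devices' worth of space to parity):

```shell
# Per-component size in KiB, as printed by mdadm above:
size_kib=1953511936
devices=10

# RAID 6 usable capacity = (devices - 2) * component size:
usable_kib=$(( (devices - 2) * size_kib ))

# Convert KiB to TiB (1 TiB = 1024^3 KiB):
awk -v k="$usable_kib" 'BEGIN { printf "%.2f TiB\n", k / (1024 ^ 3) }'
# prints "14.55 TiB"
```

So the ~1.9T figure is the per-disk size, and the relevant question is whether the finished array reports roughly 14.5 TiB, which is where the old 32-bit mdadm limitation would show up.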
2015 Jul 13
0
Re: Migrate Win2k3 to KVM
-----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 On 07/13/2015 09:58 AM, Ruzsinszky Attila wrote: >> Here is the first problem. I can't mount sdi1! sdi1 not >> existent. >>> :-( >> >> What if you partprobe /dev/sdi? >> > Nothing. Empty output. When partprobe properly works (ie encounters no error), it doesn't print anything to console, it just adds the appropriate entries in /dev. (I guess some answ...
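As the reply notes, partprobe is silent on success, so its exit status and the kernel's own partition table are the things to check. A sketch (device name /dev/sdi taken from the thread; run as root):

```shell
# /dev/sdi is the device from the thread; substitute your own.
partprobe /dev/sdi && echo "partprobe succeeded (silently)"

# Check the kernel's partition table rather than trusting /dev:
grep sdi /proc/partitions

# If sdi1 is absent here, the problem is the partition table itself
# (or how it was copied over), not missing device nodes:
fdisk -l /dev/sdi
```

If /proc/partitions lists sdi but not sdi1, no amount of re-probing will create the node, because the kernel genuinely sees no partition there.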
2019 Jan 29
0
C7, mdadm issues
On 29/01/19 15:03, mark wrote: > I've no idea what happened, but the box I was working on last week has a > *second* bad drive. Actually, I'm starting to wonder about that > particulare hot-swap bay. > > Anyway, mdadm --detail shows /dev/sdb1 remove. I've added /dev/sdi1... but > see both /dev/sdh1 and /dev/sdi1 as spare, and have yet to find a reliable > way to make either one active. > > Actually, I would have expected the linux RAID to replace a failed one > with a spare.... > > Clues for the poor? I *really* don't want to freak out th...
2019 Jan 29
0
C7, mdadm issues
...>>> I've no idea what happened, but the box I was working on last week has >>> a *second* bad drive. Actually, I'm starting to wonder about that >>> particulare hot-swap bay. >>> >>> Anyway, mdadm --detail shows /dev/sdb1 remove. I've added /dev/sdi1... >>> but see both /dev/sdh1 and /dev/sdi1 as spare, and have yet to find a >>> reliable way to make either one active. >>> >>> Actually, I would have expected the linux RAID to replace a failed one >>> with a spare.... >>> >>> Clues f...
2019 Jan 31
0
C7, mdadm issues
...bad drive. Actually, I'm starting to wonder about >>>>>>>>> that particulare hot-swap bay. >>>>>>>>> >>>>>>>>> Anyway, mdadm --detail shows /dev/sdb1 remove. I've added >>>>>>>>> /dev/sdi1... >>>>>>>>> but see both /dev/sdh1 and /dev/sdi1 as spare, and have yet to >>>>>>>>> find >>>>>>>>> a reliable way to make either one active. >>>>>>>>> >>>>>>>>> A...
2023 Mar 30
1
Performance: lots of small files, hdd, nvme etc.
Well, you have *way* more files than we do... :) On 30/03/2023 11:26, Hu Bert wrote: > Just an observation: is there a performance difference between a sw > raid10 (10 disks -> one brick) or 5x raid1 (each raid1 a brick) Err... RAID10 is not 10 disks unless you stripe 5 mirrors of 2 disks. > with > the same disks (10TB hdd)? The heal processes on the 5xraid1-scenario >
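The distinction drawn above matters because Linux md "raid10" is its own driver, not a literal RAID1+0 nest, though with the default near-2 layout over ten disks it behaves like five mirrored pairs striped together. A sketch of the two layouts being compared, with hypothetical device names sdb1..sdk1:

```shell
# Option A: one md raid10 over ten disks (near-2 layout keeps two
# copies of each block on adjacent devices, i.e. 5 striped pairs):
mdadm --create /dev/md0 --level=10 --layout=n2 --raid-devices=10 \
      /dev/sd[b-k]1

# Option B (the "5x raid1" variant): five independent mirrors, each
# later exported as its own brick:
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdd1 /dev/sde1
# ...and so on for md3..md5 with the remaining pairs.
```

Raw capacity is the same either way; the trade-off is one big filesystem and rebuild domain versus five small ones, which is exactly what drives the heal-time question in the thread.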
2007 Aug 23
1
Transport endpoint not connected after crash of one node
...ocfs2 ppsdb102 /dev/sdb1 ocfs2 ppsdb102 /dev/sdc1 ocfs2 ppsdb102 /dev/sdd1 ocfs2 ppsdb102 /dev/sde1 ocfs2 ppsdb102 /dev/sdf1 ocfs2 ppsdb102 /dev/sdg1 ocfs2 ppsdb102 /dev/sdh1 ocfs2 ppsdb102 /dev/sdi1 ocfs2 ppsdb102 /dev/sdj1 ocfs2 ppsdb102 /dev/sdk1 ocfs2 ppsdb102 /dev/sdl1 ocfs2 ppsdb102, ppsdb101 /dev/sdm1 ocfs2 ppsdb102 /dev/sdn1 ocfs2 ppsdb102 /dev/sdo1 ocfs2 ppsdb102 /dev/sdp1 ocfs2 ppsd...