search for: 65536kb

Displaying 20 results from an estimated 32 matches for "65536kb".

2014 Jan 24
4
Booting Software RAID
.../dev/md1 485M 54M 407M 12% /boot /dev/md3 3.4T 198M 3.2T 1% /vz Personalities : [raid1] md1 : active raid1 sda1[0] sdb1[1] 511936 blocks super 1.0 [2/2] [UU] md3 : active raid1 sda4[0] sdb4[1] 3672901440 blocks super 1.1 [2/2] [UU] bitmap: 0/28 pages [0KB], 65536KB chunk md2 : active raid1 sdb3[1] sda3[0] 102334336 blocks super 1.1 [2/2] [UU] bitmap: 0/1 pages [0KB], 65536KB chunk md0 : active raid1 sdb2[1] sda2[0] 131006336 blocks super 1.1 [2/2] [UU] My question is: if sda fails, will it still boot from sdb? Did the install process write...
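Whether the machine still boots from sdb depends on whether the boot loader was written to both disks; the CentOS 6 installer did not always do that for a RAID1 /boot. A minimal sketch of putting GRUB legacy onto the second disk by hand, assuming the layout shown above (md1 on sda1/sdb1 holding /boot); device and partition numbers are assumptions and need checking on the real system:

# grub
grub> device (hd0) /dev/sdb
grub> root (hd0,0)
grub> setup (hd0)
grub> quit

Here device (hd0) maps sdb in as the first BIOS disk, root (hd0,0) points at the /boot partition on it, and setup (hd0) writes stage1 into sdb's MBR, so the box can still find a boot loader if sda disappears.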
2014 Dec 03
7
DegradedArray message
...dadm running on desk4 A DegradedArray event had been detected on md device /dev/md0. Faithfully yours, etc. P.S. The /proc/mdstat file currently contains the following: Personalities : [raid1] md0 : active raid1 dm-2[1] 243682172 blocks super 1.1 [2/1] [_U] bitmap: 2/2 pages [8KB], 65536KB chunk md1 : active raid1 dm-3[0] dm-0[1] 1953510268 blocks super 1.1 [2/2] [UU] bitmap: 3/15 pages [12KB], 65536KB chunk unused devices: <none> & q Held 314 messages in /var/spool/mail/root You have mail in /var/spool/mail/root Ran an mdadm query against both raid partition...
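A short sketch of the usual first checks for a DegradedArray event, using the md0 device named above; the log path assumes a stock CentOS rsyslog setup:

# cat /proc/mdstat              # the [_U] above means one of the two mirror slots is empty
# mdadm --detail /dev/md0       # lists which member is missing or faulty
# grep md0 /var/log/messages    # shows why the kernel kicked the member out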
2018 Dec 05
3
Accidentally nuked my system - any suggestions ?
...'m making big progress. The system booted again, though it feels a bit sluggish. Here's the current state of things. [root at alphamule:~] # cat /proc/mdstat Personalities : [raid1] md125 : active raid1 sdb2[1] sda2[0] 512960 blocks super 1.0 [2/2] [UU] bitmap: 0/1 pages [0KB], 65536KB chunk md126 : inactive sda1[0](S) 16777216 blocks super 1.2 md127 : active raid1 sda3[0] 959323136 blocks super 1.2 [2/1] [U_] bitmap: 8/8 pages [32KB], 65536KB chunk unused devices: <none> Now how can I make my RAID array whole again? For the record, /dev/sda is intact,...
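Assuming /dev/sdb is the disk that was wiped and its partition table has been recreated to match /dev/sda, a rough sketch of making the arrays whole again; the sdb3 name is only inferred from the sda3 member shown above and must be verified before running anything:

# sfdisk -d /dev/sda | sfdisk /dev/sdb   # copy the partition table (MBR layouts; use sgdisk for GPT)
# mdadm /dev/md127 --add /dev/sdb3       # re-add the missing mirror half of the data array
# cat /proc/mdstat                       # watch the resync until [U_] becomes [UU]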
2017 Sep 20
4
xfs not getting it right?
Hi, xfs is supposed to detect the layout of an md-RAID device when creating the file system, but it doesn't seem to do that: # cat /proc/mdstat Personalities : [raid1] md10 : active raid1 sde[1] sdd[0] 499976512 blocks super 1.2 [2/2] [UU] bitmap: 0/4 pages [0KB], 65536KB chunk # mkfs.xfs /dev/md10p2 meta-data=/dev/md10p2 isize=512 agcount=4, agsize=30199892 blks = sectsz=512 attr=2, projid32bit=1 = crc=1 finobt=0, sparse=0 data = bsize=4096 blocks=12...
2014 Dec 03
0
DegradedArray message
...had been detected on md device /dev/md0. > > Faithfully yours, etc. > > P.S. The /proc/mdstat file currently contains the following: > > Personalities : [raid1] > md0 : active raid1 dm-2[1] > 243682172 blocks super 1.1 [2/1] [_U] > bitmap: 2/2 pages [8KB], 65536KB chunk > > md1 : active raid1 dm-3[0] dm-0[1] > 1953510268 blocks super 1.1 [2/2] [UU] > bitmap: 3/15 pages [12KB], 65536KB chunk > > unused devices: <none> Could be a bad drive, as digimer alludes in his reply. OTOH, I had a perfectly good drive get kicked ou...
2014 Dec 03
0
DegradedArray message
...t had been detected on md device /dev/md0. > > Faithfully yours, etc. > > P.S. The /proc/mdstat file currently contains the following: > > Personalities : [raid1] > md0 : active raid1 dm-2[1] > 243682172 blocks super 1.1 [2/1] [_U] > bitmap: 2/2 pages [8KB], 65536KB chunk > > md1 : active raid1 dm-3[0] dm-0[1] > 1953510268 blocks super 1.1 [2/2] [UU] > bitmap: 3/15 pages [12KB], 65536KB chunk the reason why one drive was kicked out (above [_U] ) will be in /var/log/messages. If it is also part of md1 then it should be manually remove...
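A sketch of the manual removal and re-add suggested above; /dev/sdX1 is a placeholder for whichever underlying device the logs identify, not a name from the post:

# mdadm /dev/md1 --fail /dev/sdX1 --remove /dev/sdX1   # drop the suspect member from md1
# mdadm /dev/md1 --add /dev/sdX1                       # re-add it (or a replacement) and let it resync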
2018 Dec 05
0
Accidentally nuked my system - any suggestions ?
...m booted again, > though it feels a bit sluggish. Here's the current state of things. > > [root at alphamule:~] # cat /proc/mdstat > Personalities : [raid1] > md125 : active raid1 sdb2[1] sda2[0] > 512960 blocks super 1.0 [2/2] [UU] > bitmap: 0/1 pages [0KB], 65536KB chunk > > md126 : inactive sda1[0](S) > 16777216 blocks super 1.2 > > md127 : active raid1 sda3[0] > 959323136 blocks super 1.2 [2/1] [U_] > bitmap: 8/8 pages [32KB], 65536KB chunk > > unused devices: <none> > > Now how can I make my R...
2018 Dec 04
3
Accidentally nuked my system - any suggestions ?
On 04/12/2018 at 23:10, Gordon Messmer wrote: > The system should boot normally if you disconnect sdb. Have you > tried that? Unfortunately that didn't work. The boot process stops here: [OK] Reached target Basic System. Now what? -- Microlinux - Sustainable IT solutions 7, place de l'Église - 30730 Montpezat Site : https://www.microlinux.fr Blog :
2020 Sep 18
4
Drive failed in 4-drive md RAID 10
....x -all ARRAY /dev/md/root level=raid10 num-devices=4 UUID=942f512e:2db8dc6c:71667abc:daf408c3 /proc/mdstat: Personalities : [raid10] md127 : active raid10 sdf1[2](F) sdg1[3] sde1[1] sdd1[0] 1949480960 blocks super 1.2 512K chunks 2 near-copies [4/3] [UU_U] bitmap: 15/15 pages [60KB], 65536KB chunk smartctl reports this for sdf: 197 Current_Pending_Sector 0x0012 200 200 000 Old_age Always - 1 198 Offline_Uncorrectable 0x0010 200 200 000 Old_age Offline - 6 So it's got 6 bad blocks, 1 pending for remapping. Can I clear the error and rebuild?...
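If the drive is judged still trustworthy, one way to clear the failure and rebuild on the same disk is to remove and re-add the member, which forces a full resync and may let the drive remap the pending sector. This is a sketch only; with Offline_Uncorrectable already at 6, swapping in a replacement disk is the safer route:

# mdadm /dev/md127 --remove /dev/sdf1   # take out the member marked (F)
# mdadm /dev/md127 --add /dev/sdf1      # re-add it; the RAID 10 rebuilds onto it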
2014 Dec 04
2
DegradedArray message
...t; > > > Faithfully yours, etc. > > > > P.S. The /proc/mdstat file currently contains the following: > > > > Personalities : [raid1] > > md0 : active raid1 dm-2[1] > > 243682172 blocks super 1.1 [2/1] [_U] > > bitmap: 2/2 pages [8KB], 65536KB chunk > > > > md1 : active raid1 dm-3[0] dm-0[1] > > 1953510268 blocks super 1.1 [2/2] [UU] > > bitmap: 3/15 pages [12KB], 65536KB chunk > > > the reason why one drive was kicked out (above [_U] ) will > be in /var/log/messages. If it is also part o...
2019 Jul 08
2
Server fails to boot
First some history. This is an Intel MB and processor some 6 years old, initially running CentOS 6. It has 4 x 1TB sata drives set up in two mdraid 1 mirrors. It has performed really well in a rural setting with frequent power cuts which the UPS has dealt with and auto shuts down the server after a few minutes and then auto restarts when power is restored. The clients needed a Windoze server
2017 Sep 20
3
xfs not getting it right?
...RAID devices when creating the >> file system, but it doesn't seem to do that: >> >> >> # cat /proc/mdstat >> Personalities : [raid1] >> md10 : active raid1 sde[1] sdd[0] >> 499976512 blocks super 1.2 [2/2] [UU] >> bitmap: 0/4 pages [0KB], 65536KB chunk > > RAID 1 has no "layout" (for RAID, that usually refers to striping in > RAID levels 0/5/6), so there's nothing for a filesystem to detect or > optimize for. Are you saying there is no difference between a RAID1 and a non-raid device as far as xfs is concerned? W...
2016 Nov 05
3
Avago (LSI) SAS-3 controller, poor performance on CentOS 7
...[UU] md1 : active raid5 sdf3[6] sde3[4] sdd3[3] sdc3[2] sda3[0] sdb3[1] 19528458240 blocks super 1.2 level 5, 64k chunk, algorithm 2 [6/5] [UUUUU_] [====>................] recovery = 22.1% (864374932/3905691648) finish=417.0min speed=121538K/sec bitmap: 1/30 pages [4KB], 65536KB chunk unused devices: <none> CentOS 7: # dmesg | grep -i mpt mpt3sas version 04.100.00.00 loaded mpt3sas0: 64 BIT PCI BUS DMA ADDRESSING SUPPORTED, total mem (32674176 kB) mpt3sas0: MSI-X vectors supported: 8, no of cores: 12, max_msix_vectors: 8 mpt3sas 0000:01:00.0: irq 45 for MSI/MSI-X...
2017 Sep 20
0
xfs not getting it right?
...ed to detect the layout of an md-RAID device when creating the > file system, but it doesn't seem to do that: > > > # cat /proc/mdstat > Personalities : [raid1] > md10 : active raid1 sde[1] sdd[0] > 499976512 blocks super 1.2 [2/2] [UU] > bitmap: 0/4 pages [0KB], 65536KB chunk > > > # mkfs.xfs /dev/md10p2 > meta-data=/dev/md10p2 isize=512 agcount=4, agsize=30199892 > blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 > data =...
2017 Sep 20
1
xfs not getting it right?
...RAID devices when creating the >> file system, but it doesn't seem to do that: >> >> >> # cat /proc/mdstat >> Personalities : [raid1] >> md10 : active raid1 sde[1] sdd[0] >> 499976512 blocks super 1.2 [2/2] [UU] >> bitmap: 0/4 pages [0KB], 65536KB chunk >> >> >> # mkfs.xfs /dev/md10p2 >> meta-data=/dev/md10p2 isize=512 agcount=4, agsize=30199892 >> blks >> = sectsz=512 attr=2, projid32bit=1 >> = crc=1 finobt=0, spars...
2017 Sep 20
0
xfs not getting it right?
...to detect the layout of an md-RAID device when creating the > file system, but it doesn't seem to do that: > > > # cat /proc/mdstat > Personalities : [raid1] > md10 : active raid1 sde[1] sdd[0] > 499976512 blocks super 1.2 [2/2] [UU] > bitmap: 0/4 pages [0KB], 65536KB chunk RAID 1 has no "layout" (for RAID, that usually refers to striping in RAID levels 0/5/6), so there's nothing for a filesystem to detect or optimize for. The chunk size above is for the md-RAID write-intent bitmap; that's not exposed information (for any RAID system that I...
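For the striped levels (RAID 0/5/6/10) mkfs.xfs does read, or can be given, real stripe geometry; the 65536KB figure in /proc/mdstat is only the write-intent bitmap chunk and is not part of that geometry. A hedged example of setting it explicitly for a hypothetical 4-disk RAID 5 with a 512k chunk (su is the chunk size, sw the number of data disks):

# mkfs.xfs -d su=512k,sw=3 /dev/md0   # 4-disk RAID 5: 3 data disks per stripe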
2019 Jan 22
2
C7 and mdadm
...d add the new drive. But: it's now cat /proc/mdstat Personalities : [raid6] [raid5] [raid4] md0 : active (auto-read-only) raid5 sdg1[8](S) sdh1[7] sdf1[4] sde1[3] sdd1[2] sdc1[1] 23441313792 blocks super 1.2 level 5, 512k chunk, algorithm 2 [7/5] [_UUUU_U] bitmap: 0/30 pages [0KB], 65536KB chunk unused devices: <none> and I can't mount it (it's xfs, btw). *Should* I make it readwrite, or is there something else I should do? mark
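The (auto-read-only) state just means nothing has written to the array since it was assembled; it can be flipped to read-write by hand. A minimal sketch, though the [_UUUU_U] above shows two slots empty, which is worth understanding before writing anything:

# mdadm --detail /dev/md0      # confirm which members are missing and why
# mdadm --readwrite /dev/md0   # clear the auto-read-only flag
# mount /dev/md0 /mnt          # then retry the xfs mount (mount point here is arbitrary)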
2020 Sep 16
0
storage for mailserver
...ind of stumbled across this setup by accident when I added an NVMe SSD to an existing RAID1 array consisting of 2 HDDs. # cat /proc/mdstat Personalities : [raid1] md127 : active raid1 sda1[2](W) sdb1[4](W) nvme0n1p1[3] 485495616 blocks super 1.0 [3/3] [UUU] bitmap: 3/4 pages [12KB], 65536KB chunk See how we have 3 devices in the above RAID1 array, 2 x HDDs, marked with a (W) indicating they are in --write-mostly mode, and one SSD (NVMe) device. I just went for 3 devices in the array as it started life as a 2 x HDD array and I added the third SSD device, but you can mix and match...
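A sketch of how a third, write-mostly HDD member might be added to a RAID1 like this; the device name is a placeholder, not taken from the post:

# mdadm /dev/md127 --add --write-mostly /dev/sdX1   # joins as a spare with the write-mostly flag set
# mdadm --grow /dev/md127 --raid-devices=3          # promote it to an active third mirror

The (W) flag means reads are steered away from the HDDs and served from the NVMe device whenever possible, while writes still go to all members.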
2020 Sep 18
0
Drive failed in 4-drive md RAID 10
...raid10 num-devices=4 > UUID=942f512e:2db8dc6c:71667abc:daf408c3 > > /proc/mdstat: > Personalities : [raid10] > md127 : active raid10 sdf1[2](F) sdg1[3] sde1[1] sdd1[0] > 1949480960 blocks super 1.2 512K chunks 2 near-copies [4/3] [UU_U] > bitmap: 15/15 pages [60KB], 65536KB chunk > > smartctl reports this for sdf: > 197 Current_Pending_Sector 0x0012 200 200 000 Old_age Always > - 1 > 198 Offline_Uncorrectable 0x0010 200 200 000 Old_age Offline > - 6 > > So it's got 6 bad blocks, 1 pending for remapping....
2013 Nov 14
4
First Time Setting up RAID
Arch = x86_64 CentOS-6.4 We have a cold server with 32Gb RAM and 8 x 3TB SATA drives mounted in hotswap cells. The intended purpose of this system is as an ERP application and DBMS host. The ERP application will likely eventually have web access but at the moment only dedicated client applications can connect to it. I am researching how to best set this system up for use as a production host
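As a rough illustration only, not a recommendation from the thread, a common md starting point for a database host on eight identical disks is RAID 10 across all of them; the partition names below are assumptions:

# mdadm --create /dev/md0 --level=10 --raid-devices=8 /dev/sd[a-h]1
# mkfs.xfs /dev/md0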