search for: md3

Displaying 20 results from an estimated 113 matches for "md3".

2007 Dec 01
2
Looking for Insights
...age array acting as the shared storage between the two nodes in the cluster. This cluster is configured for reliability and not load balancing. All work is handled by one node or the other, not both. There are two 100GB RAID 5 logical drives in the MSA500. Linux sees them as /dev/md2 and /dev/md3 respectively. Running cat /proc/mdstat shows them as "active multipath" and otherwise healthy. There is a nightly shell script that runs and backs up information via tar to a USB external drive. The last thing the script does before unmounting the USB drive is to run the sync comman...
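
A minimal sketch of such a nightly backup script, assuming hypothetical paths (/data and /mnt/usb are placeholders, not named in the thread):

#!/bin/sh
# nightly backup sketch; /data and /mnt/usb are hypothetical placeholders
tar -czf /mnt/usb/backup-$(date +%F).tar.gz /data
sync                # flush buffered writes before the drive is detached
umount /mnt/usb
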
2008 Aug 17
0
Confusing output from pvdisplay
/dev/md3 is a raid5 array consisting of 4 x 500GB disks. pvs and pvscan both display good info: % pvscan | grep /dev/md3 PV /dev/md3 VG RaidDisk lvm2 [1.36 TB / 0 free] % pvs /dev/md3 PV VG Fmt Attr PSize PFree /dev/md3 RaidDisk lvm2 a- 1.36T 0 But pvdisplay......
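
For cross-checking, pvs can be asked for the same fields pvdisplay reports; a sketch using standard LVM report columns:

# cross-check the PV with explicit report columns
pvs -o pv_name,vg_name,pv_size,pv_free,pv_uuid /dev/md3
# the verbose per-PV stanza for comparison
pvdisplay /dev/md3
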
2007 Aug 27
3
mdadm --create on Centos5?
Is there some new trick to making raid devices on Centos5? # mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdc1 mdadm: error opening /dev/md3: No such file or directory I thought that worked on earlier versions. Do I have to do something udev related first? -- Les Mikesell lesmikesell at gmail.com
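
A common workaround for that exact error on CentOS 5 (hedged; the snippet is cut off before any reply) is to let mdadm create the missing device node itself, or to make it by hand. The second member is written as /dev/sdd1 below purely for illustration:

# have mdadm create /dev/md3 itself if udev hasn't
mdadm --create /dev/md3 --auto=yes --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
# or create the block device node manually (md major is 9)
mknod /dev/md3 b 9 3
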
2007 Mar 20
1
centos raid 1 question
Hi, I'm seeing this on my screen and in dmesg; I'm not sure if it's an error message. BTW, I'm using CentOS 4.4 with 2 x 200GB PATA drives. md: md0: sync done. RAID1 conf printout: --- wd:2 rd:2 disk 0, wo:0, o:1, dev:hda2 disk 1, wo:0, o:1, dev:hdc2 md: delaying resync of md5 until md3 has finished resync (they share one or more physical units) md: syncing RAID array md5 md: minimum _guaranteed_ reconstruction speed: 1000 KB/sec/disc. md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for reconstruction. md: using 128k window, over a total of 2048192 b...
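
The delay message is normal behaviour: md serializes resyncs of arrays that share physical disks. The two speed figures in the log correspond to tunables that can be inspected or raised, e.g.:

# watch the queued resyncs progress
cat /proc/mdstat
# the 1000 KB/sec floor and 200000 KB/sec ceiling quoted in the log
sysctl dev.raid.speed_limit_min
sysctl -w dev.raid.speed_limit_max=200000
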
2009 Sep 19
3
How does LVM decide which Physical Volume to write to?
Hi everyone. This isn't specifically a CentOS question, since it could apply to any distro, but I hope someone can answer it anyway. I took the following steps but was puzzled by the outcome of the test at the end: 1. Create a RAID1 array called md3 with two 750GB drives 2. Create a RAID1 array called md9 with two 500GB drives 3. Initialise md3 then md9 as physical volumes (pvcreate) 4. Create a new volume group called "3ware" with md3 (helps me remember what controller the disks are on) 5. Use vgextend and add md9 to the 3ware vol...
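
Translated into commands, steps 1-5 look roughly like this (a sketch; the member partitions and the final test LV are assumptions, not from the thread):

# steps 1-2: the two RAID1 arrays (member partitions assumed)
mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm --create /dev/md9 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
# step 3: initialise both as physical volumes
pvcreate /dev/md3 /dev/md9
# steps 4-5: create the VG on md3, then extend it with md9
vgcreate 3ware /dev/md3
vgextend 3ware /dev/md9
# hypothetical LV for the write test the thread leads up to
lvcreate -L 100G -n testlv 3ware

With the default (normal) allocation policy, a linear LV is allocated from the first PV in the VG that still has free extents, so writes initially land on md3 — which is presumably what the test at the end exposed.
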
2012 Jul 22
1
btrfs-convert complains that fs is mounted even if it isn't
Hi, I'm trying to run btrfs-convert on a system that has three raid partitions (boot/md1, swap/md2 and root/md3). When I boot a rescue system from md1 and try to run "btrfs-convert /dev/md3", it complains that /dev/md3 is already mounted, although it definitely is not. The only partition mounted is /dev/md1 because of the rescue system. When I replicate the setup in a local VM, booting the res...
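
Before blaming btrfs-convert, it's worth confirming nothing is actually holding the device; a quick sketch:

# the kernel's own view of what is mounted
grep md3 /proc/mounts
# make sure md3 isn't active as swap
swapon -s
# any process with the device open?
lsof /dev/md3
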
2008 Feb 25
2
ext3 errors
...ectors initially but I think that is fixed now and I can't see any hardware errors being logged (the system/log files are on different drives). About once a week, I get an error like this, and the partition switches to read-only. --- Feb 24 04:48:20 linbackup1 kernel: EXT3-fs error (device md3): htree_dirblock_to_tree: bad entry in directory #869973: directory entry across blocks - offset=0, inode=3915132787, rec_len=42464, name_len=11 Feb 24 04:48:20 linbackup1 kernel: Aborting journal on device md3. Feb 24 04:48:20 linbackup1 kernel: ext3_abort called. Feb 24 04:48:20 linbackup1 ker...
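
After a journal abort like this the filesystem stays read-only until it is checked; the usual recovery (a hedged sketch, not from the thread) is an offline fsck:

# unmount the damaged filesystem first
umount /dev/md3
# forced full check; should repair the bad directory entry
fsck.ext3 -f /dev/md3
# remount (assumes an fstab entry for /dev/md3)
mount /dev/md3
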
2007 Oct 17
2
Hosed my software RAID/LVM setup somehow
CentOS 5, original kernel (xen and normal) and everything, Linux RAID 1. I rebooted one of my machines after doing some changes to RAID/LVM and now the two RAID partitions that I made changes to are "gone". I cannot boot into the system. On bootup it tells me that the devices md2 and md3 are busy or mounted and drops me to the repair shell. When I run fs check manually it just tells me the same. mdadm --misc --detail tells me that md2 and md3 are active and fine. I wanted to comment out the md2 and md3 devices in fstab (and hoped to then be able to boot) but I get a "read-onl...
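
The truncated complaint at the end is the classic read-only repair shell: /etc/fstab can't be edited until the root filesystem is remounted writable, roughly:

# make the root fs writable inside the repair shell
mount -o remount,rw /
# now comment out the md2/md3 lines in /etc/fstab, then
mount -o remount,ro /
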
2014 Jan 24
4
Booting Software RAID
I installed Centos 6.x 64 bit with the minimal ISO and used two disks in RAID 1 array. Filesystem Size Used Avail Use% Mounted on /dev/md2 97G 918M 91G 1% / tmpfs 16G 0 16G 0% /dev/shm /dev/md1 485M 54M 407M 12% /boot /dev/md3 3.4T 198M 3.2T 1% /vz Personalities : [raid1] md1 : active raid1 sda1[0] sdb1[1] 511936 blocks super 1.0 [2/2] [UU] md3 : active raid1 sda4[0] sdb4[1] 3672901440 blocks super 1.1 [2/2] [UU] bitmap: 0/28 pages [0KB], 65536KB chunk md2 : active raid1 sdb3[1] sda3[0]...
2008 Aug 22
1
Growing RAID5 on CentOS 4.6
I have 4 disks in a RAID5 array. I want to add a 5th. So I did mdadm --add /dev/md3 /dev/sde1 This worked but, as expected, the disk isn't being used in the raid5 array. md3 : active raid5 sde1[4] sdd4[3] sdc3[2] sdb2[1] sda1[0] 2930279808 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU] So then I tried the next step: mdadm --grow --raid-devices=5 /dev/md3 But no...
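
For the record, the usual continuation is the grow plus a filesystem resize once the reshape finishes (a sketch; whether a CentOS 4.6 kernel supports raid5 reshape at all is likely what the cut-off question is about):

# start the reshape onto the fifth disk (needs md reshape support in the kernel)
mdadm --grow --raid-devices=5 /dev/md3
# watch until the reshape completes
cat /proc/mdstat
# then grow the filesystem into the new space (pvresize instead if LVM sits on top)
resize2fs /dev/md3
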
2006 Dec 27
1
Software RAID1 issue
When a new system CentOS-4.4 is built the swap partition is always reversed... Note md3 below, the raidtab is OK, I have tried various raid commands to correct it. swapoff -a raidstop /dev/md3 mkraid /dev/md3 --really-force swapon -a And then I get a proper output for /proc/mdstat, but when I reboot /proc/mdstat again reads as below, with md3 [0] [1] reversed. [root]# cat /proc/...
2018 Apr 30
1
Gluster rebalance taking many years
...8192893 435 8192458 1% /dev tmpfs 8199799 8029 8191770 1% /dev/shm tmpfs 8199799 1415 8198384 1% /run tmpfs 8199799 16 8199783 1% /sys/fs/cgroup /dev/md3 110067712 29199861 80867851 27% /home /dev/md1 131072 363 130709 1% /boot gluster1:/web 2559860992 63694442 2496166550 3% /web tmpfs 8199799 1 8199798 1%...
2009 Sep 24
4
mdadm size issues
...heads, 63 sectors/track, 243201 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System /dev/sda1 1 243201 1953512001 83 Linux .... I go about creating the array as follows # mdadm --create --verbose /dev/md3 --level=6 --raid-devices=10 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdi1 /dev/sdj1 /dev/sdk1 /dev/sdl1 mdadm: layout defaults to left-symmetric mdadm: chunk size defaults to 64K mdadm: size set to 1953511936K Continue creating array? As you can see mdadm sets the size to...
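
The figure mdadm prints there is the per-member size, not the array size; for a 10-disk RAID6 the usable capacity works out as follows (arithmetic only, assuming all ten members match):

usable RAID6 capacity = (N - 2) x per-member size
(10 - 2) x 1953511936 KiB = 15,628,095,488 KiB, i.e. roughly 14.55 TiB
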
2006 Mar 02
3
Advice on setting up Raid and LVM
Hi all, I'm setting up Centos4.2 on 2x80GB SATA drives. The partition scheme is like this: /boot = 300MB / = 9.2GB /home = 70GB swap = 500MB The RAID is RAID 1. md0 = 300MB = /boot md1 = 9.2GB = LVM md2 = 70GB = LVM md3 = 500MB = LVM Now, the confusing part is: 1. When creating VolGroup00, should I include all PV (md1, md2, md3)? Then create the LV. 2. When setting up RAID 1, should I make those separated partitions for /, /home, and swap? Or, should I just make one big RAID device? The future purpose of using...
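
For the first question, a sketch of the single-VG variant (LV names and exact sizes are illustrative, not from the thread):

# pool all three RAID1 PVs into one volume group
pvcreate /dev/md1 /dev/md2 /dev/md3
vgcreate VolGroup00 /dev/md1 /dev/md2 /dev/md3
# carve the logical volumes out of the pooled space
lvcreate -L 9G -n root VolGroup00
lvcreate -L 69G -n home VolGroup00
lvcreate -L 500M -n swap VolGroup00
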
2023 Mar 30
2
Performance: lots of small files, hdd, nvme etc.
...sted a separate thread might be better. current state: - servers with 10TB hdds - 2 HDDs form a sw raid1 - each raid1 is a brick - so 5 bricks per server - Volume info (complete below): Volume Name: workdata Type: Distributed-Replicate Number of Bricks: 5 x 3 = 15 Bricks: Brick1: gls1:/gluster/md3/workdata Brick2: gls2:/gluster/md3/workdata Brick3: gls3:/gluster/md3/workdata Brick4: gls1:/gluster/md4/workdata Brick5: gls2:/gluster/md4/workdata Brick6: gls3:/gluster/md4/workdata etc. - workload: the (un)famous "lots of small files" setting - currently 70% of the volume is us...
2018 Apr 30
0
Gluster rebalance taking many years
...and [2018-04-30 04:20:55.193615] > > the gluster info > Volume Name: web > Type: Distribute > Volume ID: bdef10eb-1c83-410c-8ad3-fe286450004b > Status: Started > Snapshot Count: 0 > Number of Bricks: 3 > Transport-type: tcp > Bricks: > Brick1: gluster1:/home/export/md3/brick > Brick2: gluster1:/export/md2/brick > Brick3: gluster2:/home/export/md3/brick > Options Reconfigured: > nfs.trusted-sync: on > nfs.trusted-write: on > cluster.rebal-throttle: aggressive > features.inode-quota: off > features.quota: off > cluster.shd-wait-qlength: 1...
2023 Mar 26
1
hardware issues and new server advice
...if you decide to go with more disks for the raids, use several (not the built-in ones) controllers. Well, we have to take what our provider (Hetzner) offers - SATA HDDs or SATA/NVMe SSDs. Volume Name: workdata Type: Distributed-Replicate Number of Bricks: 5 x 3 = 15 Bricks: Brick1: gls1:/gluster/md3/workdata Brick2: gls2:/gluster/md3/workdata Brick3: gls3:/gluster/md3/workdata Brick4: gls1:/gluster/md4/workdata Brick5: gls2:/gluster/md4/workdata Brick6: gls3:/gluster/md4/workdata etc. Below are the volume settings. Each brick is a sw raid1 (made out of 10TB hdds). file access to the backends...
2018 Apr 30
2
Gluster rebalance taking many years
2007 Dec 18
1
How can I extract the AIC score from a mixed model object produced using lmer?
...to compile a list of the AIC scores for those models so that I can quickly summarize and rank the models by AIC. When I do logistic regression, I can easily generate this kind of list by creating the model objects using glm, and doing: > md <- c("md1.lr", "md2.lr", "md3.lr") > aic <- c(md1.lr$aic, md2.lr$aic, md3.lr$aic) > aic2 <- cbind(md, aic) but when I try to extract the AIC score from the model object produced by lmer I get: > md1.lme$aic NULL Warning message: In md1.lme$aic : $ operator not defined for this S4 class, returning NULL So....
2008 Jul 17
2
lvm errors after replacing drive in raid 10 array
...'t find all physical volumes for volume group vg0. Volume group for uuid not found: I4Gf5TUB1M1TfHxZNg9cCkM1SbRo8cthCTTjVHBEHeCniUIQ03Ov4V1iOy2ciJwm Aborting. Failed to activate snapshot exception store. So then I try # pvdisplay --- Physical volume --- PV Name /dev/md3 VG Name vg0 PV Size 903.97 GB / not usable 3.00 MB Allocatable yes PE Size (KByte) 4096 Total PE 231416 Free PE 44536 Allocated PE 186880 PV UUID yIIGF9-9f61-QPk8-q6q1-wn4D-iE1x-MJI...