Displaying 20 results from an estimated 2000 matches similar to: "How does LVM decide which Physical Volume to write to?"
2007 Nov 29
1
RAID, LVM, extra disks...
Hi,
This is my current config:
/dev/md0 -> 200 MB -> sda1 + sdd1 -> /boot
/dev/md1 -> 36 GB -> sda2 + sdd2 -> form VolGroup00 with md2
/dev/md2 -> 18 GB -> sdb1 + sde1 -> form VolGroup00 with md1
sda,sdd -> 36 GB 10k SCSI HDDs
sdb,sde -> 18 GB 10k SCSI HDDs
I have added two more 36 GB 10K SCSI drives; they are detected as sdc
and sdf.
What should I do if I
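A plausible sketch for this layout: mirror the new pair and hand it to
LVM. This assumes one "Linux raid autodetect" partition per new disk;
the LV name LogVol00 is invented.
mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdf1
pvcreate /dev/md3
vgextend VolGroup00 /dev/md3
# then grow whichever LV needs the space, e.g.:
lvextend -L +30G /dev/VolGroup00/LogVol00
resize2fs /dev/VolGroup00/LogVol00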
2011 Apr 02
3
Best way to extend pv partition for LVM
I've replaced disks in a hardware RAID 1 with larger disks and enlarged
the array. Now I have to find a way to tell LVM about the extra space.
It seems there are two ways:
1. delete the partition with fdisk and recreate a larger one. This is
obviously a bit tricky if you do not want to lose data; I haven't
investigated further yet.
2. create another partition on the disk, pvcreate another
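Either way, LVM learns about the new space via pvcreate/vgextend or
pvresize. A rough sketch (device and VG names are placeholders):
# option 2: turn the new partition into a second PV
pvcreate /dev/sda3
vgextend vg00 /dev/sda3
# option 1 variant: if the existing partition itself was enlarged,
# pvresize grows the PV in place instead
partprobe /dev/sda
pvresize /dev/sda2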
2012 Jan 17
2
Transition to CentOS - RAID HELP!
Hi Folks,
I've inherited an old RH7 system that I'd like to upgrade to
CentOS 6.1 by means of wiping it clean and doing a fresh install.
However, the system has a software raid setup that I wish to keep
untouched, as it has data on it that I must keep. Or at the very least, TRY
to keep. If all else fails, then so be it and I'll just recreate the
thing. I do plan on backing up
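If the arrays survive the reinstall untouched, reassembling them from
the fresh system is usually just this (a generic sketch, nothing
specific to this machine):
mdadm --assemble --scan                    # find and start every array with a superblock
mdadm --detail --scan >> /etc/mdadm.conf   # make the assembly persistent
cat /proc/mdstat                           # verify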
2013 Feb 04
3
Questions about software RAID, LVM.
I am planning to increase the disk space on my desktop system. It is
running CentOS 5.9 w/Xen. I have two 160 GB 2.5" laptop SATA drives
in two slots of a 4-slot hot swap bay configured like this:
Disk /dev/sda: 160.0 GB, 160041885696 bytes
255 heads, 63 sectors/track, 19457 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End
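The usual in-place upgrade for a mirror is to swap one member at a time
and then grow the array. A sketch, assuming a raid1 /dev/md0 built from
sda1 and sdb1 (illustrative names only):
mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
# physically replace sdb with the larger disk, partition it, then:
mdadm /dev/md0 --add /dev/sdb1
# wait for the resync, repeat for the other member, then:
mdadm --grow /dev/md0 --size=max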
2016 May 18
4
enlarging partition and its filesystem
Hi all!
I've got a VM at work running C6 on Hyper-V (no, it's not my fault,
that's what the company uses. I'd rather gag myself than own one
of those things.)
I ran out of disk space in the VM, so the admin enlarged the virtual disk.
But now I realize I don't know how to enlarge the partition and its
filesystem.
I'll be googling, but in case I miss it, it'd be great if
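The usual C6 sequence, assuming the root filesystem is an LV on a PV at
/dev/sda2 (the names are guesses):
# 1. delete and recreate sda2 in fdisk with the SAME start sector,
#    then reboot (or partprobe) so the kernel re-reads the table
# 2. grow the PV, the LV, and the filesystem:
pvresize /dev/sda2
lvextend -l +100%FREE /dev/vg_c6/lv_root
resize2fs /dev/vg_c6/lv_root      # ext3/ext4 can be grown online on C6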
2007 Dec 01
2
Looking for Insights
Hi Guys,
I had a strange problem yesterday and I'm curious as to what everyone
thinks.
I have a client with a Red Hat Enterprise 2.1 cluster. All quality HP
equipment with an MSA 500 storage array acting as the shared storage
between the two nodes in the cluster.
This cluster is configured for reliability and not load balancing. All
work is handled by one node or the other not both.
2007 Mar 20
1
centos raid 1 question
Hi,
I'm seeing this on my screen and in dmesg, and I'm not sure if it is an
error message. BTW, I'm using CentOS 4.4 with 2 x 200GB PATA drives.
md: md0: sync done.
RAID1 conf printout:
--- wd:2 rd:2
disk 0, wo:0, o:1, dev:hda2
disk 1, wo:0, o:1, dev:hdc2
md: delaying resync of md5 until md3 has finished resync (they share one or
more physical units)
md: syncing RAID array md5
md: minimum _guaranteed_
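Those md: lines look like normal resync progress chatter rather than
errors; the delayed-resync message just means two arrays share the same
disks. The state can be confirmed with generic commands:
cat /proc/mdstat                  # shows resync percentage per array
mdadm --detail /dev/md0           # "State : clean" once the sync is done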
2008 Jul 17
2
lvm errors after replacing drive in raid 10 array
I thought I'd test replacing a failed drive in a 4 drive raid 10 array on
a CentOS 5.2 box before it goes online and before a drive really fails.
I ran 'mdadm --fail' and 'mdadm --remove', powered off, replaced the
drive, partitioned with sfdisk -d /dev/sda | sfdisk /dev/sdb, and finally
ran 'mdadm --add'.
Everything seems fine until I try to create a snapshot lv. (Creating a
snapshot lv
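Spelled out, the replacement steps were presumably something like this
sketch (device names illustrative):
mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
# power off, swap the disk, boot, clone the partition table:
sfdisk -d /dev/sda | sfdisk /dev/sdb
mdadm /dev/md0 --add /dev/sdb1
# then check that LVM sees a healthy PV before trying the snapshot:
pvs; vgs; lvs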
2012 Jul 22
1
btrfs-convert complains that fs is mounted even if it isn't
Hi,
I'm trying to run btrfs-convert on a system that has three raid
partitions (boot/md1, swap/md2 and root/md3). When I boot a rescue
system from md1, and try to run "btrfs-convert /dev/md3", it complains
that /dev/md3 is already mounted, although it definitely is not. The
only partition mounted is /dev/md1 because of the rescue system. When I
replicate the setup in a
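Before blaming the tool, it is worth checking what could make the device
look busy (generic commands, nothing assumed about this system):
grep md3 /proc/mounts             # is md3 really absent from the mount table?
swapon -s                         # make sure md3 is not active as swap
lsof /dev/md3                     # anything holding the device open?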
2007 Aug 27
3
mdadm --create on Centos5?
Is there some new trick to making raid devices on Centos5?
# mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdc1
mdadm: error opening /dev/md3: No such file or directory
I thought that worked on earlier versions. Do I have to do something
udev related first?
--
Les Mikesell
lesmikesell at gmail.com
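One likely culprit: on CentOS 5 the /dev/md3 node may not exist until
mdadm is told to create it. (Listing sdc1 twice above is presumably a
typo for two different partitions.) A sketch:
mdadm --create /dev/md3 --auto=yes --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
# or create the device node by hand first (9 is the md major number):
mknod /dev/md3 b 9 3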
2008 Feb 25
2
ext3 errors
I recently set up a new system to run BackupPC on CentOS 5, with the
archive stored on a raid1 of 750 gig SATA drives created with 3 members,
one specified as "missing". Once a week I add the 3rd partition,
let it sync, then remove it. I've had a similar system working for a
long time using a firewire drive as the 3rd member, so I don't think the
raid setup is the cause
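The weekly rotation presumably looks like this sketch (device names
invented):
mdadm /dev/md0 --add /dev/sdc1    # attach the third member
cat /proc/mdstat                  # wait until the resync completes
mdadm /dev/md0 --fail /dev/sdc1 --remove /dev/sdc1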
2007 Oct 17
2
Hosed my software RAID/LVM setup somehow
CentOS 5, original kernel (xen and normal) and everything, Linux RAID 1.
I rebooted one of my machines after doing some changes to RAID/LVM and now
the two RAID partitions that I made changes to are "gone". I cannot boot
into the system.
On bootup it tells me that the devices md2 and md3 are busy or mounted and
drops me to the repair shell. When I run fs check manually it just tells
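From a repair shell the usual recovery order is arrays first, then LVM,
then a read-only fsck (generic sketch; the LV name is a guess):
mdadm --assemble --scan
vgscan
vgchange -ay                      # activate any LVs found on the arrays
fsck -n /dev/VolGroup00/LogVol00  # look before touching anything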
2018 Apr 30
1
Gluster rebalance taking many years
I cannot count the number of files the normal way.
Through df -i I got an approximate number of files: 63694442
[root@CentOS-73-64-minimal ~]# df -i
Filesystem       Inodes    IUsed     IFree IUse% Mounted on
/dev/md2      131981312 30901030 101080282   24% /
devtmpfs        8192893      435   8192458    1% /dev
tmpfs
2008 Aug 22
1
Growing RAID5 on CentOS 4.6
I have 4 disks in a RAID5 array. I want to add a 5th. So I
did
mdadm --add /dev/md3 /dev/sde1
This worked but, as expected, the disk isn't being used in the raid5 array.
md3 : active raid5 sde1[4] sdd4[3] sdc3[2] sdb2[1] sda1[0]
2930279808 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
So then I tried the next step:
mdadm --grow --raid-devices=5 /dev/md3
But now I have
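Note that reshaping raid5 onto more devices needs kernel support that the
CentOS 4 kernel (2.6.9) may well lack, which could be the problem here.
Where it does work, the full sequence is roughly:
mdadm --add /dev/md3 /dev/sde1
mdadm --grow --raid-devices=5 /dev/md3
cat /proc/mdstat                  # wait for the reshape to finish
resize2fs /dev/md3                # then grow the filesystem (assumes ext3 on md3)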
2009 Sep 24
4
mdadm size issues
Hi,
I am trying to create a 10 drive raid6 array. OS is Centos 5.3 (64 Bit)
All 10 drives are 2T in size.
Devices sd{a,b,c,d,e,f} are on my motherboard.
Devices sd{i,j,k,l} are on a PCI Express Areca card (relevant lspci info below).
#lspci
06:0e.0 RAID bus controller: Areca Technology Corp. ARC-1210 4-Port PCI-Express to SATA RAID Controller
The controller is set to present the drives as JBOD.
All
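One classic gotcha at this size: the old 0.90 superblock caps each raid
member at 2TB, so a 1.x metadata version may be needed here. A sketch
using the drive names above:
mdadm --create /dev/md0 --metadata=1.2 --level=6 --raid-devices=10 \
      /dev/sd[a-f]1 /dev/sd[i-l]1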
2009 Sep 23
1
steps to add a new physical disk to existing LVM setup in new centos box?
Not a CentOS-specific question here, but if anyone can save me from
shooting myself in the foot, I would appreciate any pointers.... I have an
older centos 4.7 box that I recently replaced with a newer one running
centos 5.3. I'd like to add the hard disk from the older one to the
newer one before I scrap it for good. I don't care about the data on
it, would just like the extra drive
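The short version, assuming the old drive shows up as /dev/sdb and its
contents are expendable (all names are guesses):
fdisk /dev/sdb                    # one partition, type 8e (Linux LVM)
pvcreate /dev/sdb1
vgextend VolGroup00 /dev/sdb1     # real VG name from 'vgs'
lvextend -l +100%FREE /dev/VolGroup00/LogVol00
resize2fs /dev/VolGroup00/LogVol00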
2014 Jan 24
4
Booting Software RAID
I installed Centos 6.x 64 bit with the minimal ISO and used two disks
in RAID 1 array.
Filesystem      Size  Used Avail Use% Mounted on
/dev/md2         97G  918M   91G   1% /
tmpfs            16G     0   16G   0% /dev/shm
/dev/md1        485M   54M  407M  12% /boot
/dev/md3        3.4T  198M  3.2T   1% /vz
Personalities : [raid1]
md1 : active raid1 sda1[0] sdb1[1]
511936 blocks super 1.0
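With /boot on a raid1 like md1, the piece people usually forget is
installing the boot loader on both disks so either one can boot alone.
A grub 0.97 (CentOS 6) sketch:
grub
grub> root (hd0,0)
grub> setup (hd0)
grub> root (hd1,0)
grub> setup (hd1)
grub> quit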
2013 Mar 03
4
Strange behavior from software RAID
Somewhere, mdadm is caching information. Here is my /etc/mdadm.conf file:
more /etc/mdadm.conf
# mdadm.conf written out by anaconda
DEVICE partitions
MAILADDR root
ARRAY /dev/md0 level=raid1 num-devices=4 metadata=0.90 UUID=55ff58b2:0abb5bad:42911890:5950dfce
ARRAY /dev/md1 level=raid1 num-devices=2 metadata=0.90 UUID=315eaf5c:776c85bd:5fa8189c:68a99382
ARRAY /dev/md2 level=raid1 num-devices=2
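The copy of the array information that matters early in boot lives inside
the initramfs, which is a plausible source of the "caching". A CentOS 6
style sketch (on CentOS 5 it would be mkinitrd rather than dracut):
mdadm --detail --scan             # compare against /etc/mdadm.conf
dracut --force /boot/initramfs-$(uname -r).img $(uname -r)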
2006 Dec 27
1
Software RAID1 issue
When a new CentOS-4.4 system is built, the swap partition is always
reversed...
Note md3 below; the raidtab is OK. I have tried various raid commands to
correct it:
swapoff -a
raidstop /dev/md3
mkraid /dev/md3 --really-force
swapon -a
And then I get a proper output for /proc/mdstat,
but when I reboot /proc/mdstat again reads as below, with md3 [0] [1]
reversed.
[root]# cat /proc/mdstat
2023 Mar 26
1
hardware issues and new server advice
Hi,
Sorry if I hijack this, but maybe it's helpful for other Gluster users...
> A pure NVMe-based volume will be a waste of money. Gluster excels when you have more servers and clients to consume that data.
> I would choose LVM cache (NVMe) + HW RAID10 of SAS 15K disks to cope with the load. At least if you decide to go with more disks for the raids, use several (not the built-in ones)
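For reference, attaching an NVMe cache to a logical volume on such a
RAID10 looks roughly like this lvmcache sketch (all names invented):
pvcreate /dev/nvme0n1
vgextend vg_bricks /dev/nvme0n1
lvcreate --type cache-pool -L 400G -n brick_cache vg_bricks /dev/nvme0n1
lvconvert --type cache --cachepool vg_bricks/brick_cache vg_bricks/brick1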