similar to: OT: clear error from premature disk removal from LVM

Displaying 20 results from an estimated 4000 matches similar to: "OT: clear error from premature disk removal from LVM"

2008 Aug 31
2
LVM and hotswap (USB/iSCSI) devices?
Hi list, I'm having one of those 'I'm stupid' problems with LVM on CentOS 5.2. I've been working with traditional partitions until now, and I've finally been sold on the theoretical benefits of using LVM, yet for now I only have a huge pile of broken filesystems to show for my efforts. My scenario: I attach a disk, either over USB or iSCSI. I create a PV on this
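A minimal sketch of cleanly detaching a hotswap PV before unplugging it, assuming the removable disk carries a VG named usbvg (name is hypothetical):

    umount /mnt/usbdisk       # unmount any filesystems on the VG first
    vgchange -an usbvg        # deactivate all LVs in the VG
    vgexport usbvg            # optionally mark the VG as foreign
    # the disk can now be detached safely; after re-attaching:
    vgimport usbvg
    vgchange -ay usbvg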
2014 Oct 27
3
"No free sectors available" while try to extend logical volumen in a virtual machine running CentOS 6.5
I'm trying to extend a logical volume and I'm doing as follows: 1- Run the `fdisk -l` command; this is the output:
Disk /dev/sda: 85.9 GB, 85899345920 bytes
255 heads, 63 sectors/track, 10443 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier:
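A sketch of the usual way out when fdisk shows no free cylinders: grow the virtual disk in the hypervisor first, then add the new space as a fresh PV (partition, VG and LV names are hypothetical):

    fdisk /dev/sda                    # create /dev/sda3, type 8e (Linux LVM)
    partprobe /dev/sda                # re-read the partition table
    pvcreate /dev/sda3
    vgextend vg_root /dev/sda3
    lvextend -l +100%FREE /dev/vg_root/lv_root
    resize2fs /dev/vg_root/lv_root    # grow the ext4 filesystem online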
2007 Jun 14
0
(no subject)
I installed a fresh copy of Debian 4.0 and Xen 3.1.0 SMP PAE from the binaries. I had a few issues getting fully virtualized guests up and running, but finally managed to figure everything out. Now I'm having a problem with paravirtualized guests and hoping that someone can help. My domU config:
#
# Configuration file for the Xen instance dev.umucaoki.org, created
# by xen-tools
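A few commands for poking at a domU like this, assuming Xen 3.1's xm toolstack; the config path is a guess:

    xm create -c /etc/xen/dev.umucaoki.org.cfg   # boot the domU with the console attached
    xm list                                      # check the guest's state
    xm console dev.umucaoki.org                  # reattach to its console later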
2017 Apr 10
0
lvm cache + qemu-kvm stops working after about 20GB of writes
Adding Paolo and Miroslav. On Sat, Apr 8, 2017 at 4:49 PM, Richard Landsman - Rimote <richard at rimote.nl> wrote: > Hello, > > I would really appreciate some help/guidance with this problem. First of > all, sorry for the long message. I would file a bug, but do not know if it > is my fault, dm-cache, qemu or (probably) a combination of both. And I can > imagine some of
2013 Mar 01
1
Reorg of a RAID/LVM system
I have a system with 4 disk drives, two 512 Gb and two 1 Tb. It looks like this:
CentOS release 5.9 (Final)
Disk /dev/sda: 500.1 GB, 500107862016 bytes
Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
Disk /dev/sdc: 500.1 GB, 500107862016 bytes
Disk /dev/sdd: 1000.2 GB, 1000204886016 bytes
=================================================================
Disk /dev/sda: 500.1 GB, 500107862016 bytes
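A sketch of the usual LVM-level moves for a reorg like this, assuming data should be evacuated from one PV before repurposing its disk (VG and PV names are illustrative):

    pvs                              # map PVs to VGs and free space
    pvmove /dev/sda2                 # migrate all extents off the old PV
    vgreduce VolGroup00 /dev/sda2    # drop the PV from the VG
    pvremove /dev/sda2               # wipe the LVM label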
2011 Feb 23
2
LVM problem after adding new (md) PV
Hello, I have a weird problem after adding a new PV to an LVM volume group. It seems the error comes out only during boot time. Please read the story. I have a couple of 1U machines. They all have two, four or more Fujitsu-Siemens SAS 2,5" disks, which are bound in RAID1 pairs with Linux mdadm. The first pair of disks always has two arrays (md0, md1). Small md0 is used for booting and the rest - md1
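A sketch of adding an md pair as a PV so it is assembled before LVM runs at boot, assuming the new array is md2 and the VG is vg_data (both names hypothetical):

    mdadm --detail --scan >> /etc/mdadm.conf   # record the array for early assembly
    pvcreate /dev/md2
    vgextend vg_data /dev/md2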
2016 May 06
2
resize lvm
> > From: Scott Robbins <scottro11 at gmail.com> > Date: May 06, 2016 12:32:55 PM > To: CentOS mailing list <centos at centos.org> > Subject: Re: [CentOS] resize lvm > > On Fri, May 06, 2016 at 06:19:35PM +0000, Wes James wrote: > > I have a laptop that I put centos 7 on and I started out with a 30gig partition. I resized the other part of the disk to allow
2012 May 23
1
pvcreate limitations on big disks?
OK folks, I'm back at it again. Instead of taking my J4400 (24 x 1T disks) and making a big RAID60 out of it, which Linux cannot make a filesystem on, I created 4 x RAID6 arrays which are 3.64T each. I then do:
sfdisk /dev/sd{b,c,d,e} <<EOF
,,8e
EOF
to make a big LVM partition on each one. But then when I do pvcreate /dev/sd{b,c,d,e}1 and then pvdisplay, it shows each one as
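If pvdisplay is showing a truncated size, one likely culprit is that an MBR/DOS label (what sfdisk writes here) tops out at 2 TiB. A sketch using GPT instead, or no partition at all:

    parted -s /dev/sdb mklabel gpt
    parted -s /dev/sdb mkpart primary 0% 100%
    parted -s /dev/sdb set 1 lvm on
    pvcreate /dev/sdb1
    # alternatively, skip partitioning and use the whole device:
    pvcreate /dev/sdb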
2017 Apr 20
2
lvm cache + qemu-kvm stops working after about 20GB of writes
Hello everyone, Anybody had the chance to test out this setup and reproduce the problem? I assumed it would be something that's used often these days and a solution would benefit a lot of users. If I can be of any assistance please contact me. -- Kind regards, Richard Landsman http://rimote.nl T: +31 (0)50 - 763 04 07 (Mon-Fri 9:00 to 18:00) 24/7 for outages: +31 (0)6 - 4388
2018 Mar 14
2
ISCSI target + LVM Problem
Hello, thanks for the help. Yes, the iSCSI discovery commands complete OK, such as iscsiadm --mode discovery --type sendtargets and iscsiadm -m node -T, and the disks appear in pvdisplay. 2018-03-14 16:23 GMT-03:00 Marcelo Roccasalva < marcelo-centos at irrigacion.gov.ar>: > On Wed, Mar 14, 2018 at 4:08 PM, marcos sr <msr.mailing at gmail.com> wrote: > > > >
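For completeness, a sketch of the full discovery-plus-login sequence (the portal address and IQN here are hypothetical):

    iscsiadm --mode discovery --type sendtargets --portal 192.168.1.10
    iscsiadm -m node -T iqn.2018-03.example:target1 -p 192.168.1.10 --login
    pvscan                           # rescan for PVs on the newly attached disks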
2008 Jul 17
2
lvm errors after replacing drive in raid 10 array
I thought I'd test replacing a failed drive in a 4 drive raid 10 array on a CentOS 5.2 box before it goes online and before a drive really fails. I 'mdadm failed, removed', powered off, replaced the drive, partitioned with sfdisk -d /dev/sda | sfdisk /dev/sdb, and finally 'mdadm added'. Everything seems fine until I try to create a snapshot lv. (Creating a snapshot lv
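The replacement sequence described above, spelled out as commands (sda/sdb are from the post; the md device name is an assumption):

    mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
    # power off, swap the drive, power on, then clone the partition table:
    sfdisk -d /dev/sda | sfdisk /dev/sdb
    mdadm /dev/md0 --add /dev/sdb1
    cat /proc/mdstat                 # wait for the resync before creating snapshots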
2009 Mar 09
2
LSI Logic MegaRAID 8480 Storage controller
I have an LSI Logic MegaRAID 8480 storage controller and I am having trouble reconfiguring one of the hardware RAID devices. It is configured with 4 hardware RAID logical volumes on /dev/sda /dev/sdb /dev/sdc /dev/sdd. I am in the middle of rebuilding the system, and at this point I am only using one of the volumes, /dev/sda. /dev/sda, /dev/sdb, and /dev/sdd are all 2-drive RAID1 mirrors. I will be
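Assuming the MegaCli utility is available for this controller, a couple of read-only queries help map logical drives to /dev/sdX before reconfiguring anything:

    MegaCli -LDInfo -Lall -aALL      # list logical drives and their RAID levels
    MegaCli -PDList -aALL            # list the physical drives behind them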
2018 Mar 14
2
ISCSI target + LVM Problem
Hello, I have an LVM volume group with 2 iSCSI disks in it. The mounted partition started presenting problems such as "I/O error". I unmounted the device and restarted the iSCSI target server, which was having some problems. After that I mapped the iSCSI disks and tried to mount the partition again, but when I run pvdisplay I get the following error: read failed after 0 of 4096 at 0: Input/output error. And I cannot mount
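A sketch of recovering from stale iSCSI sessions like this, assuming the VG is named vg_iscsi (hypothetical):

    iscsiadm -m node -u              # log out of the stale sessions
    iscsiadm -m node -l              # log back in to the restarted target
    pvscan                           # refresh LVM's view of the disks
    vgchange -ay vg_iscsi            # reactivate the LVs before mounting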
2012 Nov 13
1
mdX and mismatch_cnt when building an array
CentOS 6.3, x86_64. I have noticed when building a new software RAID-6 array on CentOS 6.3 that the mismatch_cnt grows monotonically while the array is building:
# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md11 : active raid6 sdg[5] sdf[4] sde[3] sdd[2] sdc[1] sdb[0]
      3904890880 blocks super 1.2 level 6, 512k chunk, algorithm 2 [6/6] [UUUUUU]
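Worth noting: mismatch_cnt is only meaningful after an explicit check pass, not during the initial build. A sketch of verifying once the resync completes:

    echo check > /sys/block/md11/md/sync_action   # start a read-verify pass
    cat /proc/mdstat                              # wait for the check to finish
    cat /sys/block/md11/md/mismatch_cnt           # expect 0 on a healthy array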
2016 May 07
0
resize lvm
On 05/06/2016 02:15 PM, Wes James wrote: > I found this: > > # lvextend -l +100%FREE /dev/myvg/testlv > > doing a search. What's the difference between 100%VG and 100%FREE? For the special case of "100%" there is no difference. For values less than 100% with a non-empty VG, the two are quite different, e.g., (50% of VG) != (50% of the free space in VG). -- Bob
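A toy illustration of Bob's point, assuming a VG of 100 extents with 40 of them free:

    lvextend -l +50%FREE /dev/myvg/testlv   # adds 20 extents (half the free space)
    lvextend -l +50%VG /dev/myvg/testlv     # asks for 50 extents and fails, only 40 free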
2010 Oct 25
2
interesting kvm lvm collision issue
I've been running into a reproducible problem when using default LVM volume group names to present block devices for virtual machines in KVM, and I'm wondering why it is happening. On dom0 I make a default VolGroup00 for the operating system. I make a second VolGroup01 for logical volumes that will be block devices for virtual systems. In VolGroup01, I make two lv's for one system:
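One common mitigation, sketched here: keep the host's LVM from scanning inside guest-owned LVs with a reject rule in /etc/lvm/lvm.conf (the VG name is from the post; the exact pattern is an assumption):

    # /etc/lvm/lvm.conf on the host
    filter = [ "r|/dev/VolGroup01/.*|", "a|.*|" ]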
2017 Apr 08
2
lvm cache + qemu-kvm stops working after about 20GB of writes
Hello, I would really appreciate some help/guidance with this problem. First of all, sorry for the long message. I would file a bug, but do not know if it is my fault, dm-cache, qemu or (probably) a combination of both. And I can imagine some of you have this setup up and running without problems (or maybe you think it works, just like I did, but it does not): PROBLEM: LVM cache writeback
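A sketch of the kind of cache stack being described, assuming VG vg0, a slow LV named data, and an SSD PV /dev/sdb1 (all names hypothetical):

    lvcreate -L 20G -n cache0 vg0 /dev/sdb1        # cache data LV on the SSD
    lvcreate -L 100M -n cache0meta vg0 /dev/sdb1   # cache metadata LV
    lvconvert --type cache-pool --poolmetadata vg0/cache0meta vg0/cache0
    lvconvert --type cache --cachemode writeback --cachepool vg0/cache0 vg0/data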
2020 Mar 24
2
Building a NFS server with a mix of HDD and SSD (for caching)
Hi list, I'm building an NFS server on top of CentOS 8. It has 8 x 8 TB HDDs and 2 x 500GB SSDs. The spinning drives are in a RAID-6 array with a 4K sector size. The SSDs are in a RAID-1 array with a 512-byte sector size. I want to use the SSDs as a cache using dm-cache. So here's what I've done so far:
/dev/sdb ==> SSD raid1 array
/dev/sdd ==> spinning raid6 array
I've
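Before stacking dm-cache on top, it may be worth confirming the two arrays' sector sizes, since mixing 512-byte and 4K devices in one stack can surface alignment surprises later; a quick check:

    blockdev --getss --getpbsz /dev/sdb   # SSD raid1: logical, then physical sector size
    blockdev --getss --getpbsz /dev/sdd   # spinning raid6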
2014 Jul 16
1
anaconda, kickstart, lvm over raid, logvol --grow, centos7 mystery
I am testing some kickstarts on an ESXi virtual machine with a pair of 16GB disks. Partitioning is LVM over RAID. If I use "logvol --grow" I get "ValueError: not enough free space in volume group". The only workaround I can find is to add --maxsize=XXX where XXX is at least 640MB less than available (10 extents, or 320MB, per created logical volume). The following snippet fails with
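A sketch of the --maxsize workaround in kickstart form (disk names and sizes are illustrative, not the poster's snippet):

    part raid.01 --size=1 --grow --ondisk=sda
    part raid.02 --size=1 --grow --ondisk=sdb
    raid pv.01 --level=1 --device=md0 raid.01 raid.02
    volgroup vg0 pv.01
    logvol / --vgname=vg0 --name=root --fstype=xfs --grow --maxsize=15040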
2020 Mar 24
0
Building a NFS server with a mix of HDD and SSD (for caching)
Hi, > Hi list, > > I'm building an NFS server on top of CentOS 8. > It has 8 x 8 TB HDDs and 2 x 500GB SSDs. > The spinning drives are in a RAID-6 array with a 4K sector size. > The SSDs are in a RAID-1 array with a 512-byte sector size. > > > I want to use the SSDs as a cache using dm-cache. So here's what I've done > so far: > /dev/sdb ==> SSD raid1