Displaying 20 results from an estimated 1000 matches similar to: "Shrinking a RAID array"
2010 Jun 29
2
fresh install of CentOS looking for non-existent /dev/hda : /dev/hda: open failed: No medium found
# lvm pvs
/dev/hda: open failed: No medium found
Couldn't find device with uuid r5HNPO-l18V-XfJ7-9RXY-AaWC-a4YY-3oL5h7.
PV VG Fmt Attr PSize PFree
/dev/sda2 VolGroup01 lvm2 a- 232.72G 0
/dev/sdb1 VolGroup00 lvm2 a- 232.81G 32.00M
unknown device VolGroup00 lvm2 a- 232.72G 32.00M
I just installed the OS, did some tweaks, but did nothing to
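If the "unknown device" entry is just stale metadata left over from the install (and no logical volume actually lives on that missing PV), something along these lines should clear it; this is an untested sketch, so check with pvs/lvs first:
# pvs -o +pv_uuid                       (confirm which PV is reported missing and note its UUID)
# lvs -a -o +devices VolGroup00         (make sure no LV maps onto the missing PV)
# vgreduce --removemissing VolGroup00   (drop the stale PV reference from the VG metadata)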
2017 Feb 22
2
how to resize a partition of a disk defined as a physical volume
I should have added the output of pvs:
[root ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/xvda2 cl_vm731611 lvm2 a-- 9.00g 0
PFree still shows 0. It should show 5G.
Also:
[root ~]# pvdisplay /dev/xvda2
--- Physical volume ---
PV Name /dev/xvda2
VG Name cl_vm731611
PV Size 9.00 GiB / not usable 2.00 MiB
Allocatable
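PFree staying at 0 usually means the underlying partition was never actually grown, only the virtual disk. A quick, untested check, assuming the device names above:
# lsblk /dev/xvda                     (compare the size of xvda2 with the 9 GiB the PV reports)
# parted /dev/xvda unit GiB print     (see whether the extra 5 GiB is still unpartitioned)
# pvresize /dev/xvda2                 (only helps once the partition itself has been enlarged)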
2012 Apr 22
1
problems with PV snapshots
Hello,
I have a CentOS 6.2 cluster with a CLVM partition on which I have a GFS2
file system.
The problem arises when I make a snapshot from my FC NetApp FAS2020.
After I make the snapshot (it is a rw snapshot) of my LUN, I am not able
to mount it from any of my cluster nodes, because the Physical Volume is
seen twice: once on the standard LVM partition and once on the snapshot
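Since the snapshot LUN carries the same PV UUID as the original, LVM sees the same PV twice. One possible way around this (a sketch; the snapshot device name here is made up) is to rewrite the clone's identity with vgimportclone before activating it:
# vgimportclone --basevgname vg_snap /dev/mapper/netapp_snap_lun   (give the cloned PV/VG new UUIDs and a new name)
# vgchange -ay vg_snap                                             (activate the renamed clone so it can be mounted)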
2012 Jun 01
2
installation and configuration documentation for XCP
I've installed XCP 1.5-beta. I'm a little confused as to what has
happened. Everything so far seems to work; however, I need more
information on what was done to my hard disk during the installation
and how the file system was set up.
In particular, I was investigating how to create a new logical volume
to place my ISO files in, to use as my ISO storage (SR). I notice (see
below with
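To see what the XCP installer actually did with the disk, the standard LVM listing commands should be enough (nothing XCP-specific assumed here):
# pvs                     (which partitions were turned into physical volumes)
# vgs                     (the volume group(s) the installer created and their free space)
# lvs -a -o +devices      (existing logical volumes and the devices they sit on)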
2008 Jul 24
1
Help recovering from an LVM issue
Hi People
I just updated a CentOS 5.2 server that is a guest inside VMware ESX
3.50 using "yum update". As far as I can tell, only three
packages were updated:
Jul 24 16:37:49 Updated: php-common - 5.1.6-20.el5_2.1.i386
Jul 24 16:37:50 Updated: php-cli - 5.1.6-20.el5_2.1.i386
Jul 24 16:37:50 Updated: php - 5.1.6-20.el5_2.1.i386
But when I rebooted the Server one of my
2007 Apr 01
2
CentOS 5 Dual Drive Confusion
I performed a test install of the CentOS 5 beta on a system with two ~60
GB drives. When installing CentOS 4 on this system, I normally work
through setting up software RAID with identically sized partitions on
each drive.
For my test, the CentOS installer only presented a single drive. I
took the default of letting it do what it wanted. After looking over
the system, I have verified that /boot is
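To verify whether the installer actually built any software RAID and where /boot and the LVM physical volumes ended up, something like this should tell the story (plain inspection commands, nothing destructive):
# cat /proc/mdstat        (lists any md arrays and their member partitions)
# fdisk -l                (shows the partition tables of both ~60 GB drives)
# pvs                     (shows which partitions or md devices back the volume group)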
2017 Feb 22
4
how to resize a partition of a disk defined as a physical volume
How do you resize the partition without losing data?
gparted does not support LVM.
On Wed, Feb 22, 2017 at 8:37 AM, SysAdmin <admin at s-s.network> wrote:
> Hi,
>
> you need to resize partition /dev/xvda2, afterwards resize pv.
>
> Regards,
> Holger
>
> > -----Original Message-----
> > From: CentOS [mailto:centos-bounces at centos.org] On behalf of
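If growpart (from the cloud-utils-growpart package) is available, the partition can usually be grown in place; otherwise the classic approach is to delete and recreate it in fdisk with the exact same starting sector. A rough, untested outline:
# growpart /dev/xvda 2     (grow partition 2 into the new space; data survives as long as the start sector is unchanged)
# partprobe /dev/xvda      (tell the kernel about the new partition size; a reboot also works)
# pvresize /dev/xvda2      (finally let LVM pick up the extra space)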
2010 Jul 20
2
LVM issue
Hi. We use AoE disks for some of our systems. Currently a 15.65TB filesystem we have is full, so I extended the LV by a further 4TB, but resize4fs could not handle a filesystem over 16TB (CentOS 5.5). I then reduced the LV by the same amount and attempted to create a new LV, but I get this error message in the process:
lvcreate -v -ndata2 -L2T -t aoe
Test mode: Metadata will NOT be updated.
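For what it's worth, the -t flag only runs lvcreate in test mode, so nothing is written. A sketch of the real run, assuming "aoe" is the volume group name as in the command above:
# vgs aoe                      (check there are at least 2T of free extents after the reduce)
# lvcreate -n data2 -L 2T aoe  (actually create the LV; drop -t so the metadata is written)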
2008 Jun 12
3
Detach specific partition LVM of XEN
Hi...
I have had a problem when trying to detach one specific LVM partition
from Xen. I have been trying xm destroy <domain>, lvchange -an
<lvm_partition>, lvremove -f..., but I have not had success. I even restarted the
server with init 1 and nothing... I have seen two specific processes
started, xenwatch and xenbus, but I am not sure whether these processes have
some action over
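If the LV is still attached to a domU, LVM will refuse to deactivate or remove it. A possible sequence to check and release it (domain and device names are placeholders, not taken from the post):
# dmsetup info -c | grep <lvm_partition>   (the Open count shows whether something still holds the LV)
# xm block-list <domain>                   (list the virtual block devices attached to the domU)
# xm block-detach <domain> <DevId>         (detach the LV-backed disk from the domU)
# lvchange -an /dev/<vg>/<lvm_partition>   (deactivate; after that lvremove should work)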
2010 May 31
1
Working example of logical storage pool and volume creation?
Hi all,
Does anyone have a working example of creation of a logical storage pool
and volume?
I'm hitting a wall getting logical volumes to work on RHEL 6 beta.
There's a single drive I'm trying to set up (sdc) as a libvirt-managed
logical storage pool, but all volume creation on it fails.
Here's what I'm finding so far:
Prior to any storage pool work, only the host
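For comparison, a minimal sequence that should define a logical (LVM) pool on a raw disk; the pool and VG names here are made up:
# virsh pool-define-as lvmpool logical --source-dev /dev/sdc --source-name lvmvg --target /dev/lvmvg
# virsh pool-build lvmpool                  (should initialize /dev/sdc as a PV and create the VG)
# virsh pool-start lvmpool
# virsh vol-create-as lvmpool testvol 10G   (creates a 10G LV inside the pool)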
2015 Feb 19
3
iostat a partition
Hey guys,
I need to use iostat to diagnose a disk latency problem we think we may be
having.
So if I have this disk partition:
[root at uszmpdblp010la mysql]# df -h /mysql
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/MysqlVG-MysqlVol
9.9G 1.1G 8.4G 11% /mysql
And I want to correlate that to the output of fdisk -l, so that I can feed
the disk
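Rather than going through fdisk -l, LVM can report directly which physical device the LV sits on, and that device name can then be handed to iostat (a sketch; the sda shown is just an example):
# lvs -o +devices MysqlVG          (the Devices column shows the backing PV, e.g. /dev/sda2)
# iostat -dxk sda 5                (extended per-device statistics every 5 seconds for that disk)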
2017 Feb 22
0
how to resize a partition of a disk defined as a physical volume
Hi,
you need to resize partition /dev/xvda2, afterwards resize pv.
Regards,
Holger
> -----Original Message-----
> From: CentOS [mailto:centos-bounces at centos.org] On behalf of Bernard
> Fay
> Sent: Wednesday, 22 February 2017 14:18
> To: CentOS mailing list
> Subject: Re: [CentOS] how to resize a partition of a disk defined as a
> physical volume
>
> I
2017 Feb 22
2
how to resize a partition of a disk defined as a physical volume
Hello,
I have a CentOS VM with only one disk on a XenServer.
The disk has 2 partitions:
/dev/xvda1 -> /boot
/dev/xvda2 -> a physical volume for LVM
I added 5GB to this disk via XenCenter to extend /dev/xvda2. Usually I
just have to do "pvresize /dev/xvda" to have the additional space added to
the disk. But for some reason it does not work for this disk.
[root ~]# pvresize
2024 Nov 11
1
[PATCH 2/2] nouveau/dp: handle retries for AUX CH transfers with GSP.
From: Dave Airlie <airlied at redhat.com>
eb284f4b3781 drm/nouveau/dp: Honor GSP link training retry timeouts
tried to fix a problem with panel retries; however, it appears
the auxch also needs the same treatment, so add the same retry
wrapper around it.
This fixes some eDP panels after a suspend/resume cycle.
Fixes: eb284f4b3781 ("drm/nouveau/dp: Honor GSP link training retry
2024 Jan 27
1
Upgrade 10.4 -> 11.1 making problems
You don't need to mount it.
Like this:
# getfattr -d -e hex -m. /path/to/brick/.glusterfs/00/46/00462be8-3e61-4931-8bda-dae1645c639e
# file: 00/46/00462be8-3e61-4931-8bda-dae1645c639e
trusted.gfid=0x00462be83e6149318bdadae1645c639e
trusted.gfid2path.05fcbdafdeea18ab=0x30326333373930632d386637622d346436652d393464362d3936393132313930643131312f66696c656c6f636b696e672e7079
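As a side note, the gfid2path value is stored as a plain string, so asking getfattr for text output should avoid decoding the hex by hand (same brick path as above):
# getfattr -d -e text -m trusted.gfid2path /path/to/brick/.glusterfs/00/46/00462be8-3e61-4931-8bda-dae1645c639e
  (prints the value as <parent-gfid>/<filename> instead of the hex blob)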
2024 Jan 25
1
Upgrade 10.4 -> 11.1 making problems
Good morning,
hope I got it right... using:
https://access.redhat.com/documentation/de-de/red_hat_gluster_storage/3.1/html/administration_guide/ch27s02
mount -t glusterfs -o aux-gfid-mount glusterpub1:/workdata /mnt/workdata
gfid 1:
getfattr -n trusted.glusterfs.pathinfo -e text
/mnt/workdata/.gfid/faf59566-10f5-4ddd-8b0c-a87bc6a334fb
getfattr: Removing leading '/' from absolute path
2008 Aug 17
0
Confusing output from pvdisplay
/dev/md3 is a RAID5 array consisting of 4x500GB disks
pvs and pvscan both display good info:
% pvscan | grep /dev/md3
PV /dev/md3 VG RaidDisk lvm2 [1.36 TB / 0 free]
% pvs /dev/md3
PV VG Fmt Attr PSize PFree
/dev/md3 RaidDisk lvm2 a- 1.36T 0
But pvdisplay...
% pvdisplay /dev/md3
--- Physical volume ---
PV Name /dev/md3
2014 Oct 05
0
lvcreate error
Hello
I am unable to create a new logical volume; I receive the following error
when using lvcreate:
# lvcreate -L 1g -n system3_root hm
device-mapper: resume ioctl on failed: Invalid argument
Unable to resume hm-system3_root (253:7)
Failed to activate new LV.
# pvs
PV VG Fmt Attr PSize PFree
/dev/sda4 hm lvm2 a-- 998.00g 864.75g
# vgs
VG #PV #LV #SN Attr VSize
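That resume ioctl error often means a half-created device-mapper entry was left behind by a previous failed activation. A possible way to check and clean up before retrying (names taken from the error above; untested):
# dmsetup info -c | grep system3_root      (look for a stale or suspended hm-system3_root mapping)
# dmsetup remove hm-system3_root           (remove the leftover mapping if nothing holds it open)
# lvcreate -L 1g -n system3_root hm        (then retry the original command)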
2013 Dec 16
2
LVM recovery after pvcreate
Hi all,
I had CentOS 5.9 installed with one of its volumes (non-root) on LVM:
...
/dev/vgapps/lvapps /opt/apps ext3 defaults 1 2
...
Then I installed CentOS 6.4 on this server, but without exporting this volume (I wanted to reuse it).
After that, instead of importing it, I did:
# pvcreate /dev/sddlmac
# vgcreate vgapps /dev/sddlmac
And then realised that I should have
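Recovery of this kind normally goes through the LVM metadata backups, but those live under /etc/lvm/backup and /etc/lvm/archive of the old 5.9 root filesystem, so this only works if a copy of that file is still around. A rough, hedged outline (paths and UUID are placeholders):
# pvcreate --uuid <old-PV-UUID> --restorefile /path/to/old/etc/lvm/backup/vgapps /dev/sddlmac
  (recreate the PV label with its original UUID, taken from the old backup file)
# vgcfgrestore -f /path/to/old/etc/lvm/backup/vgapps vgapps
  (restore the old VG metadata over the freshly recreated PV)
# vgchange -ay vgapps && fsck.ext3 -n /dev/vgapps/lvapps
  (activate and check read-only before trusting the data)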
2018 Dec 05
6
LVM failure after CentOS 7.6 upgrade -- possible corruption
I've started updating systems to CentOS 7.6, and so far I have one failure.
This system has two peculiarities which might have triggered the
problem. The first is that one of the software RAID arrays on this
system is degraded. While troubleshooting the problem, I saw similar
error messages mentioned in bug reports indicating that GNU/Linux
systems would not boot with degraded software