Displaying 20 results from an estimated 8000 matches similar to: "can NOT delete LV (in use) problem..."
2008 Jun 12
3
Detach specific LVM partition of Xen
Hi...
I have a problem when I try to detach one specific LVM partition
from Xen. I have tried xm destroy <domain>, lvchange -an
<lvm_partition>, lvremove -f..., but I haven't had success. I even restarted the
server with init 1, and nothing. I have seen two specific processes
running, xenwatch and xenbus, but I am not sure whether these processes have
some action over
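A minimal sketch of the usual teardown order for this situation, with a hypothetical domain guest1 and volume /dev/vg0/guest1-disk (stop the guest, check for a leftover device-mapper entry, then deactivate and remove):
xm destroy guest1                      # force-stop the Xen domain
dmsetup info -c | grep guest1          # look for a stale device-mapper entry holding the LV
lvchange -an /dev/vg0/guest1-disk      # deactivate the LV
lvremove /dev/vg0/guest1-disk          # removal should now succeed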
2017 Jul 06
0
logical volume is unreadable
On 07/06/2017 10:47 AM, Volker wrote:
> On 06.07.2017 15:35, Robert Nichols wrote:
>> That looks like a snapshot volume that became invalid because it was
>> filled to capacity. Such a snapshot is lost forever. It is your
>> responsibility to monitor snapshot usage to make sure it does not run
>> out of space. The base volume, lv-vm-tviewer_vorigin, should still have
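A minimal way to watch snapshot fill before it becomes invalid, assuming a hypothetical group vg00 (older LVM2 reports the column as snap_percent, newer releases as data_percent):
lvs -o lv_name,origin,snap_percent vg00    # older LVM2 column name
lvs -o lv_name,origin,data_percent vg00    # newer LVM2 releases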
2017 Jul 06
2
logical volume is unreadable
On 06.07.2017 15:35, Robert Nichols wrote:
> On 07/06/2017 04:43 AM, Volker wrote:
>> Hi all,
>>
>> one of my LVs has become completely inaccessible. Every read access
>> results in a buffer I/O error:
>>
>> Buffer I/O error on dev dm-13, logical block 0, async page read
>>
>> This goes for every block in the LV. A ddrescue failed on every single
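For reference, a sketch of imaging such a device with ddrescue, assuming the dm-13 node from the error message and a hypothetical destination on another disk (the map file lets ddrescue resume and retry):
ddrescue -d /dev/dm-13 /mnt/rescue/lv.img /mnt/rescue/lv.map      # first pass, direct I/O
ddrescue -d -r3 /dev/dm-13 /mnt/rescue/lv.img /mnt/rescue/lv.map  # retry bad sectors 3 times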
2005 Oct 14
4
HowTo copy a Logical Volume to another LV
Hello all,
I'm hoping for some help on copying Logical Volumes.
I would like to copy an existing LV to a newly formed LV.
I don't want to do a snapshot of an existing LV.
The only way I've seen is to mount the two LVs and:
mount /dev/vg00/lv00 /mnt/orig
mount /dev/vg00/lv01 /mnt/copy
cd /mnt/orig
tar cf - ./ |(cd /mnt/copy; tar xf - )
Is there an LV tool to do this?
Or an option used with
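One common alternative to the tar pipeline is a raw block copy with dd, sketched here with a hypothetical size; the target LV must be at least as large as the source, and both should be unmounted during the copy:
lvcreate -L 10G -n lv01 vg00          # create the target, same size as the source or larger
dd if=/dev/vg00/lv00 of=/dev/vg00/lv01 bs=4M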
2010 Sep 11
5
vgrename, lvrename
Hi,
I want to rename some volume groups and logical volumes.
I was not surprised when it would not let me rename active volumes.
So I booted up the system using the CentOS 5.5 LiveCD,
but the LiveCD makes the logical volumes browsable using Nautilus,
so they are still active and I can't rename them.
Tried:
/usr/sbin/lvchange -a n VolGroup00/LogVol00
but it still says:
LV
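A sketch of the usual sequence from a rescue environment, assuming nothing in the group is mounted (deactivate the whole group first so the rename is allowed; new names are hypothetical):
umount /dev/VolGroup00/LogVol00        # unmount anything the LiveCD auto-mounted
vgchange -an VolGroup00                # deactivate every LV in the group
lvrename VolGroup00 LogVol00 newlv
vgrename VolGroup00 newvg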
2005 Jun 24
4
File System Size Limits?
Is there some limit on the size of a file system which can
be shared via samba? I'm trying to set up a file server
with a 100GB shared partition and it doesn't want to work.
I'm running Fedora Core 4, and Samba Version 3.0.14a-2.
The output from testparm looks like this:
[root@stitch samba]# testparm
Load smb config files from
2020 Jan 21
2
qemu hook: event for source host too
Hello, this is my first time posting on this mailing list.
I wanted to suggest an addition to the qemu hook. I will explain it
through my own use case.
I use a shared LVM storage as a volume pool between my nodes. I use
lvmlockd in sanlock mode to protect against both LVM metadata corruption and
concurrent volume mounting.
When I run a VM on a node, I activate the desired LV with an exclusive lock
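For illustration, the exclusive and shared activation calls that lvmlockd arbitrates, with a hypothetical vg_shared/vm-disk volume (sanlock refuses the lock if another node already holds it):
lvchange -aey vg_shared/vm-disk    # activate with an exclusive lock on this node
lvchange -an  vg_shared/vm-disk    # deactivate, releasing the lock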
2020 Jan 22
2
Re: qemu hook: event for source host too
I could launch `lvchange -asy` on the source host manually, but the aim of hooks is to automatically execute such commands and avoid human errors.
On 22 January 2020 09:18:54 GMT+01:00, Michal Privoznik <mprivozn@redhat.com> wrote:
>On 1/21/20 9:10 AM, Guy Godfroy wrote:
>> Hello, this is my first time posting on this mailing list.
>>
>> I wanted to suggest a
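A minimal sketch of what such a qemu hook could automate, reusing the hypothetical volume naming above; libvirt invokes /etc/libvirt/hooks/qemu with the guest name and an operation such as 'prepare' or 'release':
#!/bin/sh
# /etc/libvirt/hooks/qemu -- sketch only; VG and LV names are hypothetical
guest="$1"; op="$2"
case "$op" in
  prepare) lvchange -aey "vg_shared/${guest}-disk" ;;  # take the lock before the VM starts
  release) lvchange -an  "vg_shared/${guest}-disk" ;;  # drop it after shutdown
esac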
2020 Jan 22
0
Re: qemu hook: event for source host too
On 1/21/20 9:10 AM, Guy Godfroy wrote:
> Hello, this is my first time posting on this mailing list.
>
>> I wanted to suggest an addition to the qemu hook. I will explain it
> through my own use case.
>
> I use a shared LVM storage as a volume pool between my nodes. I use
> lvmlockd in sanlock mode to protect against both LVM metadata corruption and
> concurrent volume mounting.
2008 Mar 05
1
LVM: how do I change the UUID of an LV?
I know how to change the UUID of Physical Volumes and Volume Groups, but
when I try to do the same for a Logical Volume, lvchange complains that
"--uuid" is not an option. Here is how I've been changing the others
(note that "--uuid" does not appear in the man pages for pvchange and
vgchange for lvm2-2.02.26-3.el5):
pvchange --uuid {pv dev}
vgchange --uuid {vg name}
Any
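Since lvchange has no --uuid, a commonly cited workaround is editing the LV's id in a metadata backup and restoring it; a sketch with a hypothetical group name (keep a spare copy of the file first):
vgcfgbackup -f /tmp/vg00.conf vg00     # dump the VG metadata to a text file
# edit the id = "..." line of the target LV in /tmp/vg00.conf
vgcfgrestore -f /tmp/vg00.conf vg00    # write the edited metadata back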
2006 Jul 27
1
Bug: lvchange delayed until reboot. System lock-up experienced.
Did a search for LVM at the CentOS bugzilla. Nothing seems to match this
scenario. If no one contradicts me, I'll also post this in the bug
reporting system. Wanted to a) get confirmation, if possible, before
filing a bug, and b) warn other souls that may be adventurous too!
Summary: failings in LVM and the kernel(?) seem to make a "freeze" possible.
1) lvchange --permission=r seems to
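For reference, the documented syntax of the call in question, shown with a hypothetical volume; note that a permission change on an in-use LV may not apply until the LV is reactivated, which could relate to the delay described:
lvchange --permission r  vg00/lv00    # mark read-only
lvchange --permission rw vg00/lv00    # back to read-write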
2012 Jun 28
2
Strange du/df behaviour.
Hi all.
I currently have a server:
cat /etc/redhat-release
CentOS release 5.7 (Final)
uname -a
Linux host.domain.com 2.6.18-274.18.1.el5 #1 SMP Thu Feb 9 12:45:44
EST 2012 x86_64 x86_64 x86_64 GNU/Linux
I have a filesystem mounted there:
/dev/vg0/paczki /home/paczki-workdir ext4
defaults,noatime 0 0
on which df gives strange output:
LANG=C df -h
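One frequent cause of du/df disagreement is space held by deleted-but-open files; a quick check, using the mount point from the message:
lsof +L1 /home/paczki-workdir    # open files with link count 0 on that filesystem
du -sh /home/paczki-workdir      # space visible to du
df -h  /home/paczki-workdir      # space the filesystem itself reports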
2008 Aug 17
2
mirroring with LVM?
I'm pulling my hair out trying to set up a mirrored logical volume.
lvconvert tells me I don't have enough free space, even though I have
hundreds of gigabytes free on both physical volumes.
Command: lvconvert -m1 /dev/vg1/iscsi_deeds_data
Insufficient suitable allocatable extents for logical volume : 10240
more required
Any ideas?
Thanks!,
Gordon
Here's the output from the
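Two things worth checking here, sketched with assumptions: per-PV free extents (a mirror log normally needs space on a third device), and the in-memory log variant that avoids that requirement; the volume name is from the message, everything else is hypothetical:
pvs -o pv_name,pv_size,pv_free                            # free extents per PV
lvconvert -m1 --mirrorlog core /dev/vg1/iscsi_deeds_data  # keep the mirror log in memory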
2011 Feb 23
2
LVM problem after adding new (md) PV
Hello,
I have a weird problem after adding a new PV to an LVM volume group.
It seems the error comes out only at boot time. Please read the story.
I have a couple of 1U machines. They all have two, four or more Fujitsu-Siemens
SAS 2.5" disks, which are bound in RAID1 pairs with Linux mdadm.
The first pair of disks always has two arrays (md0, md1). The small md0 is used
for booting and the rest - md1
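For context, a minimal sketch of the setup being described, with hypothetical device names (build the mirror pair, then hand the array to LVM):
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
pvcreate /dev/md2                 # turn the new array into a PV
vgextend vg0 /dev/md2             # add it to the existing volume group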
2017 Dec 11
0
active/active failover
Hi Stefan,
I think what you propose will work, though you should test it thoroughly.
I think more generally, "the GlusterFS way" would be to use 2-way
replication instead of a distributed volume; then you can lose one of your
servers without outage. And re-synchronize when it comes back up.
Chances are, if you weren't using the SAN volumes, you could have purchased
two servers
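A sketch of the 2-way replication being suggested, with hypothetical hosts and brick paths (each file then lives on both servers, so one can go down without an outage):
gluster volume create gv0 replica 2 server1:/bricks/b1 server2:/bricks/b1
gluster volume start gv0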
2017 Dec 11
2
active/active failover
Dear all,
I'm rather new to glusterfs but have some experience running larger Lustre and BeeGFS installations. These filesystems provide active/active failover. Now, I discovered that I can also do this in glusterfs, although I didn't find detailed documentation about it. (I'm using glusterfs 3.10.8)
So my question is: can I really use glusterfs to do failover in the way described
2017 Dec 12
1
active/active failover
Hi Alex,
Thank you for the quick reply!
Yes, I'm aware that using "plain" hardware with replication is more what GlusterFS is for. I cannot talk about prices here in detail, but for me, it more or less evens out. Moreover, I have more SAN that I'd rather re-use (because of Lustre) than buy new hardware. I'll test more to understand what precisely "replace-brick"
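For reference, the replace-brick form current in the 3.10 series, with hypothetical brick paths (the data is then healed onto the new brick):
gluster volume replace-brick gv0 oldhost:/bricks/b1 newhost:/bricks/b1 commit force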
2018 Jan 12
5
[PATCH 1/1] appliance: init: Avoid running degraded md devices
The '--no-degraded' flag in the first mdadm call inhibits the startup of an array unless all expected drives are present.
This will prevent starting arrays in a degraded state.
The second mdadm call (after LVM is scanned) will scan the devices not yet used and attempt to run all arrays found, even if they are in a degraded state.
Two new tests are added.
This fixes rhbz1527852.
Here is boot-benchmark
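The two-pass assembly described above amounts to something like this sketch (the exact appliance invocations are not shown in this excerpt):
mdadm --assemble --scan --no-degraded   # first pass: start only complete arrays
# ... LVM scan runs here ...
mdadm --assemble --scan --run           # second pass: start what remains, degraded if necessary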
2017 Dec 14
2
Accessing crashed disk
On 13/12/17 21:42, Leon Fauster wrote:
> On 13.12.2017 at 22:31, martin.wagner at mailbit.io wrote:
>
>> I have a CentOS server that crashed; it would no longer boot. I thought it was the disk with the OS that was the problem, so I bought a new one and did a fresh install, and now the computer is again up and running. But I'm having problems with accessing the old failed disk. I
2008 Aug 22
1
LVM not removing LV
I am using RHEL 5.1 with a custom kernel.
I have an LV I am trying to remove, and it keeps complaining it's open. I
have unmounted the filesystem; lsof shows nothing, fuser shows
nothing. I am certain a reboot will fix it, but I don't know why this
occurs. Can anyone shed some light on this?
Are there some other LVM hacks I can use for this?
TIA
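A few checks for an LV that stays open with no visible users, assuming a hypothetical vg00/stuck_lv (device-mapper holders often point at the culprit, e.g. a stale dm or loop device):
lvdisplay vg00/stuck_lv | grep 'open'   # LVM's open count
dmsetup info -c | grep stuck_lv         # device-mapper's view, including the dm-N name
ls /sys/block/dm-5/holders/             # hypothetical dm-N; kernel-level holders
lvchange -an vg00/stuck_lv              # retry deactivation once holders are gone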