Displaying 20 results from an estimated 100 matches similar to: "Bug: lvchange delayed until re-boot. System lock up experienced."
2017 Nov 07
0
Re: using LVM thin pool LVs as a storage for libvirt guest
Please don't use LVM thin for VMs. In our hosting business in Russia we have
100-150 VPSes on each node with an LVM thin pool on SSD, and we see locks,
slowdowns and other bad things because of COW. After we switched to
qcow2 files on a plain SSD ext4 filesystem, we are happy =).
2017-11-04 23:21 GMT+03:00 Jan Hutař <jhutar@redhat.com>:
> Hello,
> as usual, I'm a few years behind the trends, so I have only recently learned
2017 Nov 04
3
using LVM thin pool LVs as a storage for libvirt guest
Hello,
as usual, I'm a few years behind the trends, so I have only recently learned
about LVM thin volumes, and I especially like that volumes can be "sparse"
- that you can have a 1 TB thin volume on a 250 GB VG/thin pool.
Is it somehow possible to use that with libvirt?
I have found this post from 2014:
https://www.redhat.com/archives/libvirt-users/2014-August/msg00010.html
which says
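(A minimal sketch of what such a setup can look like, assuming a VG named vg0 and a guest disk named guest1-disk; all names here are placeholders, not taken from the thread:)
# create a 250G thin pool and a 1T "sparse" thin volume inside it
lvcreate --type thin-pool -L 250G -n pool0 vg0
lvcreate --thin -V 1T -n guest1-disk vg0/pool0
# the thin LV is then an ordinary block device that the domain XML can reference:
#   <disk type='block' device='disk'>
#     <driver name='qemu' type='raw'/>
#     <source dev='/dev/vg0/guest1-disk'/>
#     <target dev='vda' bus='virtio'/>
#   </disk>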
2017 Nov 07
1
Re: using LVM thin pool LVs as a storage for libvirt guest
Do you have any comparison of I/O performance on a thin pool vs. a qcow2
file on a filesystem?
In my case each VM would have its own thin volume. I just want to
overcommit disk space.
Regards,
Jan
On 2017-11-07 13:16 +0300, Vasiliy Tolstov wrote:
>Please don't use LVM thin for VMs. In our hosting business in Russia we have
>100-150 VPSes on each node with an LVM thin pool on SSD, and we see locks,
>slowdowns
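(No numbers were posted in this thread. Anyone wanting to measure it can run something like the following from inside a test guest, once with the virtual disk backed by a thin LV and once by a qcow2 file; /dev/vdb is a placeholder for the guest's test disk and the test is destructive to it:)
# 4k random writes, direct I/O, 60 seconds - run against the same guest disk
# with both backends and compare IOPS and latency
fio --name=randwrite --filename=/dev/vdb --direct=1 --ioengine=libaio \
    --rw=randwrite --bs=4k --iodepth=32 --numjobs=1 --runtime=60 --time_based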
2014 Apr 15
0
Botched kernel update
I have a 6.5 box that failed to boot after a kernel update. The reason is the
first arg on the kernel line:
title CentOS (2.6.32-431.11.2.el6.x86_64)
root (hd0,0)
kernel /tboot.gz ro root=/dev/mapper/vol0-lvol1 intel_iommu=on rd_NO_LUKS LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 crashkernel=auto rd_NO_DM KEYBOARDTYPE=pc KEYTABLE=us rd_LVM_LV=vol0/lvol3
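(The stanza above passes the Linux arguments to tboot.gz itself; with tboot the Linux kernel and initrd normally go in as multiboot modules instead. Roughly like the following - the vmlinuz/initramfs file names are inferred from the version in the title, not taken from the post:)
title CentOS (2.6.32-431.11.2.el6.x86_64) with tboot
        root (hd0,0)
        kernel /tboot.gz logging=serial,memory,vga
        module /vmlinuz-2.6.32-431.11.2.el6.x86_64 ro root=/dev/mapper/vol0-lvol1 intel_iommu=on rd_NO_LUKS LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 crashkernel=auto rd_NO_DM KEYBOARDTYPE=pc KEYTABLE=us rd_LVM_LV=vol0/lvol3
        module /initramfs-2.6.32-431.11.2.el6.x86_64.img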
2011 Feb 17
0
Can't create mirrored LVM: Insufficient suitable allocatable extents for logical volume : 2560 more required
I'm trying to set up an LVM mirror on 2 iSCSI targets, but can't.
I have added both /dev/sda and /dev/sdb as PVs to the LVM-RAID VG, and both
have about 500 GB of space.
[root@HP-DL360 by-path]# pvscan
PV /dev/cciss/c0d0p2 VG LVM lvm2 [136.59 GB / 2.69 GB free]
PV /dev/sda VG LVM-RAID lvm2 [500.00 GB / 490.00 GB free]
PV /dev/sdb VG LVM-RAID lvm2 [502.70 GB /
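(A classic "mirror" LV also needs extents for its mirror log, and by default the log cannot share a PV with the mirror legs. A sketch - the 450G size is made up; only the LVM-RAID VG name comes from the pvscan above:)
# leave headroom for the mirror log...
lvcreate -m1 -L 450G -n lv_mirror LVM-RAID
# ...or keep the log in memory:
lvcreate -m1 --mirrorlog core -L 450G -n lv_mirror LVM-RAID
# ...or relax the allocation policy so log and data may share a PV:
lvcreate -m1 --alloc anywhere -L 450G -n lv_mirror LVM-RAID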
2012 Apr 12
1
CentOS 6.2 anaconda bug?
I have a kickstart file with the following partitioning directives:
part /boot --fstype ext3 --onpart=sda1
part pv.100000 --onpart=sda2 --noformat
volgroup vol0 pv.100000 --noformat
logvol / --vgname=vol0 --name=lvol1 --useexisting --fstype=ext4
logvol /tmp --vgname=vol0 --name=lvol2 --useexisting --fstype=ext4
logvol swap --vgname=vol0 --name=lvol3 --useexisting
logvol /data --vgname=vol0
2003 Jan 06
0
smbd using a lot of CPU
I have an smbd process that is using a lot of CPU on HP-UX 11.11 with
Samba 2.2.3a. It looks like the application on the client side is trying
to open and close a ton of files, many of which do not exist. When I do
a trace on the smbd process, I see repeated calls to lstat64 to what
looks like all of the device files on my unix server. Here is a little
bit of the trace. It goes through all of the
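(A lower-impact way to see what the client is asking for is to raise the debug level on just that smbd; a sketch, assuming the busy smbd has PID 1234, that smbcontrol in this build supports the debug message, and that the log path matches your smb.conf "log file" setting:)
smbcontrol 1234 debug 10            # raise the debug level on that one smbd
tail -f /usr/local/samba/var/log.smbd   # placeholder path; check "log file" in smb.conf
smbcontrol 1234 debug 1             # drop it back down afterwards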
2008 Jan 07
3
Strange Problem with dm-0
I began an update of one of our servers via yum and, coincidentally or
not, I have been getting the following logged in the messages file ever since:
messages:Jan 7 15:55:51 inet07 kernel: post_create: setxattr failed,
rc=28 (dev=dm-0 ino=280175)
Now, this tells me that device dm-0 is out of space, but what is dm-0?
So, can anyone tell me what is happening and why?
--
*** E-Mail is
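(dm-0 is simply the first device-mapper node, which on a stock install is almost always an LVM logical volume; mapping it back to a name is quick - a sketch, with no output taken from the thread:)
dmsetup ls                                        # mapper names with their major:minor numbers
lvdisplay | grep -e 'LV Name' -e 'Block device'   # match the dm minor (e.g. 253:0) to an LV
df -h                                             # then see which mounted filesystem is full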
2020 Jan 21
2
qemu hook: event for source host too
Hello, this is my first time posting on this mailing list.
I wanted to suggest an addition to the qemu hook. I will explain it
through my own use case.
I use shared LVM storage as a volume pool between my nodes. I use
lvmlockd in sanlock mode to protect against both LVM metadata corruption
and concurrent volume mounting.
When I run a VM on a node, I activate the desired LV with an exclusive lock
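(Roughly what such a hook can look like for the start/stop side - a sketch that assumes one LV per guest in a VG called vg0, named after the guest; that naming convention is invented for the example:)
#!/bin/sh
# /etc/libvirt/hooks/qemu is called as: qemu <guest_name> <operation> <sub-operation> <extra>
GUEST="$1"; OP="$2"
case "$OP" in
    prepare) lvchange -aey "vg0/$GUEST" ;;   # take the exclusive lock before the guest starts
    release) lvchange -an  "vg0/$GUEST" ;;   # drop it once the guest has fully stopped
esac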
2008 Jun 12
3
Detach specific partition LVM of XEN
Hi...
I have a problem when I try to detach one specific LVM partition
from Xen. I have tried xm destroy <domain>, lvchange -an
<lvm_partition>, and lvremove -f..., but I haven't had success. I even restarted the
server with init 1 and nothing changed... I have seen two specific processes
started, xenwatch and xenbus, but I am not sure if these processes have
some effect on
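(When lvchange -an and lvremove refuse like this, the device is usually still open somewhere; finding the holder first tends to help - a sketch using the same placeholders as above:)
dmsetup info -c                        # "Open" column > 0 means something still holds the mapping
fuser -vm /dev/<vg>/<lvm_partition>    # which processes are using the device
xm block-list <domain>                 # is it still attached to a domU?
xm block-detach <domain> <DevId>       # detach, then retry lvchange -an and lvremove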
2020 Jan 22
2
Re: qemu hook: event for source host too
I could launch `lvchange -asy` on the source host manually, but the aim of hooks is to automatically execute such commands and avoid human errors.
On 22 January 2020 at 09:18:54 GMT+01:00, Michal Privoznik <mprivozn@redhat.com> wrote:
>On 1/21/20 9:10 AM, Guy Godfroy wrote:
>> Hello, this is my first time posting on this mailing list.
>>
>> I wanted to suggest a
2020 Jan 22
0
Re: qemu hook: event for source host too
On 1/21/20 9:10 AM, Guy Godfroy wrote:
> Hello, this is my first time posting on this mailing list.
>
> I wanted to suggest an addition to the qemu hook. I will explain it
> through my own use case.
>
> I use shared LVM storage as a volume pool between my nodes. I use
> lvmlockd in sanlock mode to protect against both LVM metadata corruption
> and concurrent volume mounting.
2017 Oct 17
1
lvconvert(split) - raid10 => raid0
hi guys, gals
do you know if conversion from LVM's raid10 to raid0 is
possible?
I'm fiddling with --splitmirrors but it gets me nowhere.
On the subject of "takeover", the man page says: "..between
striped/raid0 and raid10.", but it gives no details; I could not
find any documentation or a howto anywhere.
many thanks, L.
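(For the record, the "takeover" the man page mentions is driven by lvconvert --type rather than --splitmirrors. A sketch only, with placeholder names, assuming an lvm2 recent enough to support raid takeover; check lvmraid(7) for your version, since some conversions need an intermediate step:)
lvs -o lv_name,segtype vg0           # confirm the LV really is raid10
lvconvert --type raid0 vg0/lv_r10    # attempt the raid10 -> raid0 takeover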
2010 Sep 11
5
vgrename, lvrename
Hi,
I want to rename some volume groups and logical volumes.
I was not surprised when it would not let me rename active volumes.
So I booted up the system using the CentOS 5.5 LiveCD,
but the LiveCD makes the logical volumes browsable using Nautilus,
so they are still active and I can't rename them.
Tried:
/usr/sbin/lvchange -a n VolGroup00/LogVol00
but it still says:
LV
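(From the live environment the usual order is: unmount whatever the desktop auto-mounted, deactivate, rename, reactivate. A sketch with the names from the post; "vg_new" and "lv_root" are example target names:)
umount /dev/VolGroup00/LogVol00   # or the mount point Nautilus created for it
vgchange -an VolGroup00           # deactivation should now succeed
vgrename VolGroup00 vg_new
lvrename vg_new LogVol00 lv_root
vgchange -ay vg_new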
2017 Dec 11
2
active/active failover
Dear all,
I'm rather new to GlusterFS but have some experience running larger Lustre and BeeGFS installations. These filesystems provide active/active failover. Now I have discovered that I can also do this with GlusterFS, although I didn't find detailed documentation about it. (I'm using GlusterFS 3.10.8.)
So my question is: can I really use GlusterFS to do failover in the way described
2014 Jan 09
1
LVM thinpool snapshots broken in 6.5?
Hi,
I just installed a CentOS 6.5 system with the intention of using thinly
provisioned snapshots. I created the volume group, a thin pool and then a
logical volume. All of that works fine, but when I create a snapshot
"mysnap", the snapshot volume is displayed in the "lvs" output
with the correct information, yet apparently no device nodes are created
under
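(This is expected with thin snapshots: they are created with the "activation skip" flag set, so they get no device nodes until activated with -K. A sketch, with "myvg" and "mythinlv" as placeholder names and "mysnap" from the post:)
lvs -o lv_name,lv_attr myvg                # the snapshot carries the 'k' (activation skip) attribute
lvchange -ay -K myvg/mysnap                # -K/--ignoreactivationskip activates it anyway
lvcreate -s -kn -n mysnap2 myvg/mythinlv   # or create future snapshots without the skip flag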
2011 Oct 27
1
delete lvm problem: exited with non-zero status 5 and signal 0
hi,
I use libvirt-python to manage my virtual machines. When I delete a
volume using vol.delete(0), sometimes it fails with the following error:
libvirtError: internal error '/sbin/lvremove
-f /dev/vg.vmms/lvm-v097222.sqa.cm4' exited with
non-zero status 5 and signal 0: Can't remove open
logical volume
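(The usual cause is that the volume is still open, often by a leftover qemu process; a sketch that keeps the path from the error message:)
dmsetup info -c | grep lvm--v097222              # "Open" > 0 means the mapping is still in use
fuser -vm /dev/vg.vmms/lvm-v097222.sqa.cm4       # which process still holds it
lvchange -an /dev/vg.vmms/lvm-v097222.sqa.cm4    # deactivate once the holder is gone
lvremove -f /dev/vg.vmms/lvm-v097222.sqa.cm4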
2017 Jul 06
2
logical volume is unreadable
On 06.07.2017 15:35, Robert Nichols wrote:
> On 07/06/2017 04:43 AM, Volker wrote:
>> Hi all,
>>
>> one of my LVs has become completely inaccessible. Every read access
>> results in a buffer I/O error:
>>
>> Buffer I/O error on dev dm-13, logical block 0, async page read
>>
>> This goes for every block in the LV. A ddrescue failed on every single
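(To tell whether the fault is in the device-mapper layer or in the disk underneath, it helps to check both; a sketch - /dev/sdX stands in for whatever PV the broken LV turns out to sit on:)
ls -l /dev/mapper/            # map dm-13 (minor 13) back to a VG/LV name
lvs -a -o +devices            # which PV(s) that LV is allocated on
dmesg | grep -i 'i/o error'   # are there errors from the physical device too, or only from dm-13?
smartctl -a /dev/sdX          # then check the health of the underlying disk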
2017 Dec 11
0
active/active failover
Hi Stefan,
I think what you propose will work, though you should test it thoroughly.
More generally, "the GlusterFS way" would be to use 2-way
replication instead of a distributed volume; then you can lose one of your
servers without an outage and re-synchronize when it comes back up.
Chances are that if you weren't using the SAN volumes, you could have purchased
two servers
2017 Dec 12
1
active/active failover
Hi Alex,
Thank you for the quick reply!
Yes, I'm aware that using ?plain? hardware with replication is more what GlusterFS is for. I cannot talk about prices where in detail, but for me, it evens more or less out. Moreover, I have more SAN that I'd rather re-use (because of Lustre) than buy new hardware. I'll test more to understand what precisely "replace-brick"