Displaying 20 results from an estimated 1000 matches similar to: "lvm over nbd?"
2008 Jul 24
1
Help recovering from an LVM issue
Hi People
I just updated a CentOS 5.2 server that is a guest inside a VMware ESX
3.50 server using "yum update". As far as I can tell, only three
packages were updated:
Jul 24 16:37:49 Updated: php-common - 5.1.6-20.el5_2.1.i386
Jul 24 16:37:50 Updated: php-cli - 5.1.6-20.el5_2.1.i386
Jul 24 16:37:50 Updated: php - 5.1.6-20.el5_2.1.i386
But when I rebooted the server, one of my
2008 May 13
1
getting multiple network interfaces to work in 3.2
I have a couple of Xen 3.1 servers running on Ubuntu 7.10 AMD64 Server machines.
I decided to upgrade these machines to the latest release of
Ubuntu (8.04 LTS),
which upgrades Xen to version 3.2.
I upgraded the first machine and installed the Xen components.
These machines all have two network interfaces, one connected to the
WAN and the other
to the local LAN. In a tutorial I read a
2015 Nov 07
2
mkfs.ext2 succeeds despite nbd write errors?
Hi,
So I've been hacking together an nbdkit plugin (similar to the "file"
plugin, but it splits the file up into chunks):
https://github.com/pepaslabs/nbdkit-chunks-plugin
I got it to the point of being a working prototype. Then I threw it
onto a Raspberry Pi, which it turns out only has a 50/50 shot of
fallocate() working correctly.
I'm checking the return code of
2009 Feb 25
2
1/2 OFF-TOPIC: How to use CLVM (on top of AoE vblades) instead of just plain LVM for Xen-based VMs on Debian 5.0?
Guys,
I have setup my hard disc with 3 partitions:
1- 256MB on /boot;
2- 2GB on / for my dom0 (Debian 5.0) (eth0 default bridge for guests LAN);
3- 498GB exported with vblade-persist to my network (eth1 for the AoE
protocol).
On dom0 hypervisor01:
vblade-persist setup 0 0 eth1 /dev/sda3
vblade-persist start all
How do I create a CLVM VG with /dev/etherd/e0.0 on each of my dom0s?
Including the
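A minimal sketch of the VG-creation step being asked about, assuming clvmd and the cluster stack are already running on every dom0 and that the AoE export appears as /dev/etherd/e0.0 as in the thread (the VG name vg_guests is illustrative, and the script only prints the command plan rather than executing it):

```shell
#!/bin/sh
# Hypothetical sketch: create a clustered VG on the AoE-exported device.
# Assumes cman/clvmd are already up on every dom0; nothing is executed,
# the plan is only printed for review.
set -eu

DEV="/dev/etherd/e0.0"

PLAN=$(
    # Run ONCE, on any single dom0:
    echo "pvcreate $DEV"
    echo "vgcreate --clustered y vg_guests $DEV"
    # Then on every OTHER dom0, just pick up the shared metadata:
    echo "vgscan"
)

echo "$PLAN"
```

`vgcreate --clustered y` is the CLVM-era flag; on modern LVM the same role is played by lvmlockd and shared VGs.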
2004 Dec 05
1
Hardware PSTN Gateways?
I am thinking about setting up an Asterisk PBX system for my
company. But since I can't be at all the locations all the time, I am
setting up an automatic backup system where, if the backup detects that
the primary is down, it takes over the IP so calls can be made once
more. For this reason I want to set up a separate HARDWARE PSTN
gateway.
Is there any equipment that can be plugged into
2010 Mar 08
4
Error with clvm
Hi,
I get this error when I try to start clvm (Debian Lenny):
This is a clvm version with openais
# /etc/init.d/clvm restart
Deactivating VG ::.
Stopping Cluster LVM Daemon: clvm.
Starting Cluster LVM Daemon: clvmCLVMD[86475770]: Mar 8 11:25:27 CLVMD
started
CLVMD[86475770]: Mar 8 11:25:27 Our local node id is -1062730132
CLVMD[86475770]: Mar 8 11:25:27 Add_internal_client, fd = 7
2012 Feb 23
2
lockmanager for use with clvm
Hi,
I am setting up a cluster of KVM hypervisors managed with libvirt.
The storage pool is on iSCSI with clvm. To prevent a VM being
started on more than one hypervisor, I want to use a lock manager
with libvirt.
I could only find sanlock as a lock manager, but AFAIK sanlock will not
work in my setup as I don't have a shared filesystem. I have dlm running
for clvm. Are there lock manager
2017 Jul 27
2
Re: performance between guestfish and qemu-nbd
2017-07-27 20:18 GMT+08:00 Richard W.M. Jones <rjones@redhat.com>:
> On Thu, Jul 27, 2017 at 06:34:13PM +0800, lampahome wrote:
> > I can mount qcow2 img to nbd devices through guestfish or qemu-nbd
> >
> > I'm curious about which performance is better?
>
> They do quite different things, they're not comparable.
>
> Can you specifically give the
2011 Oct 13
1
pvresize on a cLVM
Hi,
I need to expand a LUN on my EMC CX4-120 SAN (well, I have already done it).
On this LUN I had a PV of a cLVM VG. Now I need to run pvresize on it.
Has anybody done this on a cLVM PV?
I'm trying to rescan the devices, but I can't "see" the new size. And,
googling on it, I can only find RHEL 5.2 responses.
Thanks in advance,
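One common way to make the kernel notice a grown SAN LUN before running pvresize is a SCSI device rescan. A hedged sketch, assuming the LUN backs /dev/sdb (substitute the real device; the script only prints the plan, it does not execute anything):

```shell
#!/bin/sh
# Hypothetical sketch: rescan a grown SAN LUN and grow the PV sitting
# on it. /dev/sdb is an assumption. The plan is only printed.
set -eu

DEV="sdb"

PLAN=$(
    # Ask the SCSI layer to re-read the device capacity (on EVERY node):
    echo "echo 1 > /sys/block/$DEV/device/rescan"
    # Grow the PV to the new device size (on ONE node, with clvmd running):
    echo "pvresize /dev/$DEV"
    # Confirm the extra space shows up in the VG:
    echo "vgs"
)

echo "$PLAN"
```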
2012 Nov 04
3
Problem with CLVM (really openais)
I'm desperately looking for more ideas on how to debug what's going on
with our CLVM cluster.
Background:
4 node "cluster"-- machines are Dell blades with Dell M6220/M6348 switches.
Sole purpose of Cluster Suite tools is to use CLVM against an iSCSI storage
array.
Machines are running CentOS 5.8 with the Xen kernels. These blades host
various VMs for a project. The iSCSI
2019 Jun 27
2
mkfs fails on qemu-nbd device
Hi All,
I am unable to figure out the issue here when I try to create a filesystem
(ext4) on a virtual disk using qemu-nbd. It happens intermittently.
Following is the sequence of commands:
$> qemu-img create -f qcow2 test.qcow2 30G
$> qemu-nbd --connect=/dev/nbd0 test.qcow2
$> mkfs.ext4 /dev/nbd0
mkfs.ext4: Device size reported to be zero. Invalid partition specified,
or
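One plausible cause of the intermittent failure is a race: qemu-nbd --connect can return before the kernel has picked up the image size, so mkfs briefly sees a zero-length device. A sketch that polls the size first; wait_for_size and get_size are hypothetical helper names, and the real device sequence is gated behind RUN_FOR_REAL so the polling logic can be exercised without an nbd device:

```shell
#!/bin/sh
# Hypothetical sketch: wait for /dev/nbd0 to report a non-zero size
# before running mkfs, to dodge a connect/size race.
set -eu

get_size() {
    # Real size query in bytes; can be stubbed where no device exists.
    blockdev --getsize64 "$1"
}

wait_for_size() {
    dev=$1
    tries=0
    while [ "$tries" -lt 20 ]; do
        size=$(get_size "$dev") || size=0
        [ "$size" -gt 0 ] && return 0
        tries=$((tries + 1))
        sleep 1
    done
    echo "timed out waiting for $dev to report a size" >&2
    return 1
}

# Gated: only run the real sequence when explicitly requested.
if [ "${RUN_FOR_REAL:-0}" = "1" ]; then
    qemu-img create -f qcow2 test.qcow2 30G
    qemu-nbd --connect=/dev/nbd0 test.qcow2
    wait_for_size /dev/nbd0
    mkfs.ext4 /dev/nbd0
fi
```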
2020 May 28
2
Re: Provide NBD via Browser over Websockets
On Mon, 15 Oct 2018, Nir Soffer wrote:
> On Sat, Oct 13, 2018 at 9:45 PM Eric Wheeler <nbd@lists.ewheeler.net> wrote:
> Hello all,
>
> It might be neat to attach ISOs to KVM guests via websockets. Basically
> the browser would be the NBD "server" and an NBD client would run on the
> hypervisor, then use `virsh change-media vm1 hdc
2008 Aug 21
1
Shared Storage Options
Hello all.
I would like to canvass some opinions on options for shared storage in a Xen cluster. So
far I've experimented with using iSCSI and clvm, with mixed success.
The primary concern I have with both of these options is that there seems to be no obvious
way to ensure exclusive access to the LUN/device for the VM I want to run. On a couple of
occasions during my playing I've
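With clvmd in place, one answer to the exclusive-access concern is exclusive LV activation: `lvchange -aey` takes a cluster-wide exclusive lock on the LV, so activating the same LV from a second node fails. A sketch under that assumption (the VG/LV names are illustrative, and the plan is only printed):

```shell
#!/bin/sh
# Hypothetical sketch: exclusively activate a guest's LV on the node
# that will run it, so no other node can activate it concurrently.
# Names are illustrative; nothing is executed, the plan is printed.
set -eu

VG="vg_guests"   # illustrative VG name
LV="vm1-disk"    # illustrative LV name

PLAN=$(
    # On the node that should run the VM (clvmd required):
    echo "lvchange -aey $VG/$LV"
    # The same command on another node now fails with a lock error.
    # Deactivate before migrating the VM elsewhere:
    echo "lvchange -an $VG/$LV"
)

echo "$PLAN"
```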
2009 Apr 29
3
GFS and Small Files
Hi all,
We are running CentOS 5.2 64-bit as our file server.
Currently, we use GFS (with CLVM underneath it) as our filesystem
(for our multiple 2TB SAN volume exports) since we plan to add more
file servers (serving the same contents) later on.
The issue we are facing at the moment is that commands
such as 'ls' give a very slow response (e.g. 3-4 minutes for the
output of ls
2017 Jul 28
1
Re: performance between guestfish and qemu-nbd
2017-07-28 0:31 GMT+08:00 Richard W.M. Jones <rjones@redhat.com>:
> On Fri, Jul 28, 2017 at 12:23:04AM +0800, lampahome wrote:
> > 2017-07-27 20:18 GMT+08:00 Richard W.M. Jones <rjones@redhat.com>:
> >
> > > On Thu, Jul 27, 2017 at 06:34:13PM +0800, lampahome wrote:
> > > > I can mount qcow2 img to nbd devices through guestfish or qemu-nbd
> >
2012 Aug 02
1
XEN HA Cluster with LVM fencing and live migration? The right way?
Hi,
I am trying to build a rock-solid Xen high-availability cluster. The
platform is SLES 11 SP1 running on 2 HP DL585s, both connected through HBA
fibre channel to the SAN (HP EVA).
Xen is running smoothly and I'm even amazed by the live migration
performance (this is the first time I have the chance to try it in such a
nice environment).
Xen apart, the SLES heartbeat cluster is
2007 Aug 17
1
swap partition and live migration.
When performing a live migration I know the contents of the RAM
are copied over to the new Xen server, but what about the contents
of the swap partition?
Currently I have the swap partition in a separate loopback file
from the root partition. If I want to do a live migration, do I have to
give the new server access to the swap partition file along
with the root partition file? Or can I just
2010 Feb 27
17
XEN and clustering?
Hi.
I'm using Xen on a RHEL cluster, and I have strange problems. I gave raw
volumes from storage to Xen virtual machines. With Windows, I have a
problem that nodes don't see the volume as the same one... for example:
clusternode1# clusvcadm -d vm:winxp
clusternode1# dd if=/dev/mapper/winxp of=/node1winxp
clusternode2# dd if=/dev/mapper/winxp of=/node2winxp
clusternode3# dd
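Rather than dd'ing whole images to files as above, a quicker way to check whether the nodes see the same bytes is to hash a bounded prefix of the volume on each node and compare. A sketch with a hypothetical hash_head helper, demonstrated against throwaway files so it can be tried anywhere (on a real node you would point it at /dev/mapper/winxp):

```shell
#!/bin/sh
# Hypothetical sketch: hash the first 64 MiB of a block device (or file)
# so per-node outputs can be compared at a glance.
set -eu

hash_head() {
    # Bounded read; on a real node: hash_head /dev/mapper/winxp
    dd if="$1" bs=1M count=64 2>/dev/null | md5sum | awk '{print $1}'
}

# Demo against two throwaway files with identical content:
printf 'same bytes' > /tmp/vol_a
printf 'same bytes' > /tmp/vol_b
A=$(hash_head /tmp/vol_a)
B=$(hash_head /tmp/vol_b)
echo "node1: $A"
echo "node2: $B"
```

If the hashes differ across nodes for the same volume, the nodes are genuinely not seeing the same device contents.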
2007 Sep 08
1
Xen VMs on GFS
Hello list
I've installed Cluster Suite with 8 physical nodes; these are connected to a
SAN using CLVM and the AoE protocol.
The Cluster Suite runs on the physical nodes/servers, in dom0. If I have to
use GFS, where do I install it? What is the right approach to using GFS
with Xen? I see 2 options:
1. I install GFS inside each domU (unprivileged domain, the actual VM).
2. I install GFS on
2011 Sep 09
17
High Number of VMs
Hi, I'm curious about how you guys deal with big virtualization
installations. To date we have only dealt with a small number of VMs
(~10) on not-too-big hardware (2x quad Xeons + 16GB RAM). As I'm the
"storage guy" I find it quite convenient to present to the dom0s one
LUN per VM, which makes live migration possible but without the cluster
file system or cLVM