Displaying 20 results from an estimated 800 matches similar to: "Problem booting Microsoft Windows KVM virtual machine"
2010 Oct 04
1
Xen domU crashes accessing drbd disk if using maxmem.
Hello all,
I've just installed a new dom0 with openSUSE 11.3 (x86_64)
and I'm seeing domU crashes when reading from disks.
The problem occurs when the domU configuration includes
memory=1024
maxmem=2048
My setup is DRBD on LVM on Software RAID 10 and drbd
devices are used as disks for domUs, using
phy:/dev/drbd0,hda,w
phy:/dev/drbd1,hdb,w
The domU in test is HVM, I'm
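For context, the relevant fragment of such an xm-style HVM guest config would look roughly as follows; the memory values and drbd disks come from the excerpt above, everything else is illustrative. The excerpt suggests the crashes appear only when maxmem is set higher than memory.
# illustrative HVM domU fragment; crash reported only with maxmem > memory
builder = "hvm"
kernel  = "/usr/lib/xen/boot/hvmloader"
memory  = 1024
maxmem  = 2048
disk    = [ 'phy:/dev/drbd0,hda,w',
            'phy:/dev/drbd1,hdb,w' ]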
2014 Oct 12
2
drbd
So I've had a drbd replica of a 16TB RAID, used as a BackupPC repository,
running for a while.
When I have rebooted the BackupPC server, the replica doesn't seem to
auto-restart until I do it manually, and the BackupPC /data file system on
this 16TB LUN doesn't seem to automount, either.
I've rebooted this thing a few times in the 18 months or so it's been
running... not
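For reference, with the stock drbd init scripts the replica can usually be brought back up at boot along these lines; the resource name is illustrative, the /data mount point comes from the excerpt above.
chkconfig drbd on            # start DRBD at boot (SysV init systems)
service drbd start
drbdadm primary r0           # promote the resource, or let the cluster manager do it
mount /dev/drbd0 /data       # or a noauto fstab entry mounted by heartbeat/pacemaker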
2006 Jun 12
1
kernel BUG at /usr/src/ocfs2-1.2.1/fs/ocfs2/file.c:494!
Hi,
First of all, I'm new to ocfs2 and drbd.
I set up two identical servers (Athlon64, 1GB RAM, GB-Ethernet) with Debian Etch, compiled my own kernel (2.6.16.20),
then compiled the drbd-modules and ocfs (modules and tools) from source.
The process of getting everything up and running was very easy.
I have one big 140GB partition that is synced with drbd (in c-mode) and has an ocfs2
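A minimal sketch of that kind of drbd-in-protocol-C plus ocfs2 setup, assuming illustrative host names, disks and addresses, might be:
resource r0 {
  protocol C;                  # synchronous replication ("c-mode")
  on node1 {
    device    /dev/drbd0;
    disk      /dev/sda5;
    address   10.0.0.1:7788;
    meta-disk internal;
  }
  on node2 {
    device    /dev/drbd0;
    disk      /dev/sda5;
    address   10.0.0.2:7788;
    meta-disk internal;
  }
}
# then, once the resource is up-to-date on both nodes:
mkfs.ocfs2 -N 2 -L shared /dev/drbd0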
2011 Apr 01
1
Node Recovery locks I/O in two-node OCFS2 cluster (DRBD 8.3.8 / Ubuntu 10.10)
I am running a two-node web cluster on OCFS2 via DRBD Primary/Primary
(v8.3.8) and Pacemaker. Everything seems to be working great, except during
testing of hard-boot scenarios.
Whenever I hard-boot one of the nodes, the other node is successfully fenced
and marked "Outdated"
* <resource minor="0" cs="WFConnection" ro1="Primary" ro2="Unknown"
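For reference, the DRBD state quoted here can be checked from either node with something like the following (the resource name r0 is illustrative):
cat /proc/drbd        # e.g. cs:WFConnection ro:Primary/Unknown ds:UpToDate/Outdated
drbdadm cstate r0     # connection state
drbdadm dstate r0     # disk state
drbdadm role r0       # Primary/Secondary role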
2012 Jul 04
0
kernel panic on Red Hat 5.7 x64
Hi all,
I am using OCFS2 1.4.7 on 2 servers running Red Hat Enterprise Linux 5.7,
kernel 2.6.18-274.el5.
I use OCFS2 on top of drbd for master-master replication. My 2 servers
have HA-Proxy installed.
Yesterday, server web1 went down with a kernel panic in the log, and today
web2 went down too. I then traced the log files on these servers and found
that the cause was ocfs2.
The log
2012 May 06
1
Ext3 and drbd read-only remount problem.
Hi all.
I have two hosts with drbd:
kmod-drbd83-8.3.8-1.el5.centos
drbd83-8.3.8-1.el5.centos
and kernel (CentOS 5.7):
2.6.18-308.4.1.el5
After a recent kernel upgrade I have had two situations where my ext3
filesystem on /dev/drbd0 became read-only. I've checked the disks with smartctl
-t long and they are OK. There are no messages about disk problems in
/var/log/messages or dmesg. I've made
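When ext3 goes read-only like this, the configured error behaviour and the triggering event can usually be checked with something like the following (the mount point is illustrative):
tune2fs -l /dev/drbd0 | grep -i error    # errors= behaviour set on the filesystem
dmesg | grep -iE 'ext3|drbd|i/o error'   # look for the event that caused the remount
# after an fsck, the filesystem can be remounted read-write:
mount -o remount,rw /dev/drbd0 /srv/data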
2011 May 30
0
Forward routed network bridge on system's vlan
Hi all,
I created a two-node cluster that manages virtual machines, with the two
servers connected via a crossover cable on the network 10.0.0.0/24. I want
machines that run on different servers in the network
172.16.0.0/24 to be able to see all the others.
To make this possible I've configured a vlan on each server:
...
...
5: eth1.111 at eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
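One common alternative to routing here is to bridge the guests directly onto the vlan on each host, roughly like this (vlan id 111 is taken from the interface name above, the bridge name is illustrative):
ip link add link eth1 name eth1.111 type vlan id 111
ip link set eth1.111 up
brctl addbr br111
brctl addif br111 eth1.111
ip link set br111 up
# attach the guests' interfaces to br111 so both hosts share 172.16.0.0/24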
2012 Mar 08
1
Setting the default Hypervisor
Hi all,
I'm using libvirt with qemu-kvm and virtualbox on the same system.
Everything is working, but I want to change the default URI for virsh.
At the moment, if i run:
# virsh uri
vbox:///system
and because of this, if I try to list my vm(s) in this way:
# virsh list --all
Id Name State
----------------------------------
the output is empty. I always need to pass the
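The default connection can be changed either per user or for the libvirt client as a whole, for example:
# per shell/user:
export LIBVIRT_DEFAULT_URI=qemu:///system
# or system-wide in /etc/libvirt/libvirt.conf:
uri_default = "qemu:///system"
# afterwards:
virsh uri          # should now print qemu:///system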
2012 Apr 04
0
Bare libvirt web interface
Hi,
I set up a virtual environment based upon KVM/libvirt/pacemaker. When
managing this I cannot use virt-manager or virsh, since everything is
managed by the cluster. So when I need to migrate a vm I must drive the
operation from pacemaker (via the crm shell program).
I'm OK with this kind of management, but what I need now is just a bare
web interface to let the users see just the position (=
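For reference, a cluster-driven migration with the crm shell looks roughly like this (the resource and node names are illustrative):
crm resource migrate vm_web01 node2   # move the VM resource to node2
crm resource unmigrate vm_web01       # drop the location constraint afterwards
crm_mon -1                            # one-shot overview of where resources run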
2013 Feb 18
2
Kernel Error with Debian Squeeze, DRBD, 3.2.0-0.bpo.4-amd64 and Xen4.0
Hello List,
I am running Debian Squeeze and installed DRBD, kernel 3.2.0-0.bpo.4-amd64
and Xen 4.0 from Backports.
Sometimes I get an ugly kernel message like this:
[257318.441757] BUG: unable to handle kernel paging request at ffff880025f19000
Log:
[256820.643918] xen-blkback:ring-ref 772, event-channel 16, protocol 1
(x86_64-abi)
[256830.802492] vif86.0: no IPv6 routers present
[256835.674481]
2003 Aug 23
0
RE: Re: That movie
An e-mail you sent to the following recipients was infected with a virus and was not delivered:
wdarrah@lynksystems.com
MessageID: T6434583d4c0a08027417c
Subject: Re: That movie
Attachment:
SMTP Messages: The operation completed successfully.
Scenarios/Incoming/Sophos Antivirus Content Scanner: Scanned by 'Sophos AV Interface for MIMEsweeper'.
SMTP Messages: The operation completed
2012 Apr 12
2
No way to obtain guest's cpu and mem usage?
Hi everybody,
I'm using the PHP API to make a web interface interact with the virtual
machines installed on some hypervisor.
Everything is fine, but I would like to find a way to get each guest's
CPU and memory usage, so that it would be possible to build some rrd
graphs. I haven't found anything, and from looking around it seems that
there is no way to obtain that data.
What is strange
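The basic numbers are exposed by virDomainGetInfo(), which the PHP bindings should also wrap; from the command line the equivalent is roughly the following (the domain name is illustrative):
virsh dominfo guest01       # CPU time used, current and max memory
virsh dommemstat guest01    # memory/balloon statistics, if the guest reports them
virsh cpu-stats guest01     # per-CPU time on newer libvirt versions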
2007 Jun 25
1
I/O errors in domU with LVM on DRBD
Hi,
Sorry for the long-winded email; I'm looking for some answers
to the following.
I am setting up a Xen PV domU on top of an LVM-partitioned DRBD
device. Everything was going just fine until I tried to test the
filesystems in the domU.
Here is my setup;
Dom0 OS: CentOS release 5 (Final)
Kernel: 2.6.18-8.1.4.el5.centos.plusxen
Xen: xen-3.0.3-25.0.3.el5
DRBD:
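A minimal sketch of the LVM-on-DRBD layering described here, with illustrative volume and guest names, would be:
pvcreate /dev/drbd0
vgcreate vg_guests /dev/drbd0
lvcreate -L 10G -n domu1-root vg_guests
# and in the PV domU config:
disk = [ 'phy:/dev/vg_guests/domu1-root,xvda1,w' ]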
2011 Apr 15
0
ocfs2 1.6 2.6.38-2-amd64 kernel panic when unmount
Hello
We have an ocfs2 1.6 filesystem on a drbd dual-primary setup:
drbd0 -> sda7 (node1)
drbd0 -> sda7 (node2)
ocfs2 1.6 on kernel 2.6.38-2-amd64 panics when unmounting.
When we unmount drbd0 on both nodes around the same time using dsh:
umount -v /dev/drbd0
the umount process hangs for a while, 30 mins or so:
pts/0 D+ 20:50 0:00 umount /dev/drbd0 -vvvvv
Then, one of the nodes kernel panics:
Message from
2011 May 30
0
Quota Problem with Samba 3.5.8
Hello,
For some strange reason I cannot get quota to work with Samba 3.5.8.
The quota system itself works fine (using "repquota /mountpoint") and via
NFS, but Samba does not report the correct free space (df command in
smbclient).
Instead, the real free space on the disk volume is shown to smb clients
(tested from Windows and smbclient).
The quota system in use is the new quota
2009 Jan 13
0
Some questions for understanding ocfs2 & drbd
Hello list,
If I take a drbd device over two hosts configured as dual primary, I can
access files via ocfs2 from both sides.
For this, on both sides I would have to mount the ocfs2 partition locally,
and both sides have their own ocfs2 DLM, as far as I understood?
So in detail:
1. /dev/drbd0 configured in dual primary, taking one partition from each
host
2. drbd0 is ocfs2 formatted
3. ocfs2-tools are
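Yes: both nodes mount the same ocfs2 volume locally, and the o2cb DLM coordinates locking between them over the network. Both nodes need the same /etc/ocfs2/cluster.conf, roughly like this (names and addresses are illustrative):
cluster:
        node_count = 2
        name = ocfs2

node:
        ip_port = 7777
        ip_address = 10.0.0.1
        number = 0
        name = node1
        cluster = ocfs2

node:
        ip_port = 7777
        ip_address = 10.0.0.2
        number = 1
        name = node2
        cluster = ocfs2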
2016 May 10
1
weird network error
A previously rock-solid, reliable server of mine crashed last night; the
server was still running but eth0, an Intel 82574L using the e1000e
driver, went down. The server has a Supermicro X8DTE-F (dual Xeon
X5650, yada yada). The server is a drbd master, so that was the first
thing to notice the network issues. Just a couple of days ago I ran yum
update to the latest; I do this about once a month.
2006 Jun 11
2
Centos 4.3 & drbd
Hiya,
I'm new to CentOS but learning rapidly; I have been using FreeBSD. I'm
trying to set up an HA NFS server using 2 machines. Both machines are
running 4.3, updated to the latest via yum.
I did yum groupinstall drbd-heartbeat and
yum install kernel-module-drbd-2.6.9-34.EL to match my
kernel.
The problem I have is that on 1 machine drbd works fine, but when I start
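For a heartbeat-managed HA NFS setup of this kind, the classic haresources line looks roughly like this (hostname, virtual IP, drbd resource and mount point are illustrative):
# /etc/ha.d/haresources
node1 IPaddr::192.168.1.100 drbddisk::r0 Filesystem::/dev/drbd0::/export::ext3 nfs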
2009 Jun 24
3
Unexplained reboots in DRBD82 + OCFS2 setup
We're trying to set up a dual-primary DRBD environment, with a shared
disk running either OCFS2 or GFS. The environment is CentOS 5.3 with
DRBD82 (but we also tried DRBD83 from testing).
Setting up a single primary disk and running bonnie++ on it works.
Setting up a dual-primary disk, only mounting it on one node (ext3) and
running bonnie++ works.
When setting up ocfs2 on the /dev/drbd0
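Dual-primary operation has to be switched on explicitly in the DRBD resource, together with sane split-brain policies; a fragment along these lines is typical (DRBD 8.3 syntax, resource name illustrative):
resource r0 {
  net {
    allow-two-primaries;
    after-sb-0pri discard-zero-changes;
    after-sb-1pri discard-secondary;
    after-sb-2pri disconnect;
  }
  startup {
    become-primary-on both;
  }
}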