Displaying 20 results from an estimated 300 matches similar to: "ocfs2 1.6 2.6.38-2-amd64 kernel panic when unmount"
2011 Apr 01
1
Node Recovery locks I/O in two-node OCFS2 cluster (DRBD 8.3.8 / Ubuntu 10.10)
I am running a two-node web cluster on OCFS2 via DRBD Primary/Primary
(v8.3.8) and Pacemaker. Everything seems to be working great, except during
testing of hard-boot scenarios.
Whenever I hard-boot one of the nodes, the other node is successfully fenced
and marked ?Outdated?
* <resource minor="0" cs="WFConnection" ro1="Primary" ro2="Unknown"
2006 Jun 12
1
kernel BUG at /usr/src/ocfs2-1.2.1/fs/ocfs2/file.c:494!
Hi,
First of all, I'm new to ocfs2 and drbd.
I set up two identical servers (Athlon64, 1GB RAM, GB-Ethernet) with Debian Etch, compiled my own kernel (2.6.16.20),
then compiled the drbd modules and ocfs2 (modules and tools) from source.
The process of getting everything up and running was very easy.
I have one big 140GB partition that is synced with drbd (in c-mode) and has an ocfs2
2014 Oct 12
2
drbd
so I've had a drbd replica of a 16TB RAID that's used as a backuppc
repository running for a while.
When I have rebooted the backuppc server, the replica doesn't seem to
auto-restart until I do it manually, and the backuppc /data file system on
this 16TB LUN doesn't seem to automount, either.
I've rebooted this thing a few times in the 18 months or so it's been
running... not
2010 Jun 03
2
Tracking down hangs
We're using a storage solution involving two SunFire X4500 servers using
DRBD to replicate a 15TB partition across the network with ocfs2 on top.
We're sharing the partition from one server over NFS and the other is
mounted read-only at present. The DRBD backing store is software RAID 60
on 40 disks.
We've been seeing periodic issues whereby our NFS clients (Debian Lenny)
are very
2012 Jul 04
0
kernel panic on redhat 5-7 x64
Hi all,
I am using OCFS2-1.4.7 on two servers running Red Hat Enterprise Linux 5.7,
kernel 2.6.18-274.el5.
I use OCFS2 on DRBD for master-master replication. HAProxy is installed
on both servers.
Yesterday, server web1 went down with a kernel panic in the log, and today
web2 went down too. After that, I traced the log files on these servers and
found that the cause was ocfs2.
The log
2012 May 06
1
Ext3 and drbd read-only remount problem.
Hi all.
I have two hosts with drbd:
kmod-drbd83-8.3.8-1.el5.centos
drbd83-8.3.8-1.el5.centos
and kernel (CentOS 5.7):
2.6.18-308.4.1.el5
After a recent kernel upgrade I have had two situations where my ext3
filesystem on /dev/drbd0 became read-only. I've checked the disks with smartctl
-t long; they are OK. There are no disk-problem messages in
/var/log/messages | dmesg. I've made
2013 Feb 18
2
Kernel Error with Debian Squeeze, DRBD, 3.2.0-0.bpo.4-amd64 and Xen4.0
Hello List,
I am running Debian Squeeze, and I installed DRBD, kernel 3.2.0-0.bpo.4-amd64,
and Xen 4.0 from Backports.
Sometimes I get an ugly kernel message like this:
[257318.441757] BUG: unable to handle kernel paging request at ffff880025f19000
Log:
[256820.643918] xen-blkback:ring-ref 772, event-channel 16, protocol 1
(x86_64-abi)
[256830.802492] vif86.0: no IPv6 routers present
[256835.674481]
2007 Jun 25
1
I/O errors in domU with LVM on DRBD
Hi,
Sorry for the long-winded email; I'm looking for answers
to the following.
I am setting up a xen PV domU on top of a LVM partitioned DRBD
device. Everything was going just fine until I tried to test the
filesystems in the domU.
Here is my setup;
Dom0 OS: CentOS release 5 (Final)
Kernel: 2.6.18-8.1.4.el5.centos.plusxen
Xen: xen-3.0.3-25.0.3.el5
DRBD:
2011 May 30
0
Quota Problem with Samba 3.5.8
Hello,
for some strange reason I can not get quota to work with Samba 3.5.8.
The quota system itself works fine (using "repquota /mountpoint") and via
NFS, but Samba does not report the correct free space (df command in
smbclient).
Instead the real free space on the disk volume is shown to smb clients
(tested from Windows and smbclient).
The quota system in use is the new quota
2009 Jan 13
0
Some questions for understanding ocfs2 & drbd
Hello list,
If I run DRBD across two hosts configured as dual-primary, I can access
files via OCFS2 from both sides.
For this, I'd have to mount the OCFS2 partition locally on both sides,
and each side has its own OCFS2 DLM, as far as I understand?
So in Detail:
1. /dev/drbd0 configured in dual primary, taking one partition from each
host
2. drbd0 is ocfs2 formatted
3. ocfs2-tools are
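The dual-primary arrangement this poster describes hinges on two DRBD settings: synchronous replication (protocol C) and permission for both nodes to be Primary at once. A minimal sketch of a DRBD 8.x resource section for this, where the hostnames, backing disks, and IP addresses are placeholders, not taken from the thread:

```
# Sketch of a dual-primary DRBD 8.x resource for OCFS2.
# Hostnames, disks, and addresses below are hypothetical.
resource r0 {
  protocol C;                 # synchronous replication, required for dual-primary
  net {
    allow-two-primaries;      # permit Primary/Primary operation
  }
  startup {
    become-primary-on both;   # promote both nodes at boot
  }
  on node-a {
    device    /dev/drbd0;
    disk      /dev/sda3;
    address   192.168.0.1:7788;
    meta-disk internal;
  }
  on node-b {
    device    /dev/drbd0;
    disk      /dev/sda3;
    address   192.168.0.2:7788;
    meta-disk internal;
  }
}
```

With that in place, each node mounts the OCFS2 filesystem on /dev/drbd0 locally, and the O2CB/DLM layer coordinates concurrent access between the two nodes.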
2016 May 10
1
weird network error
a previously rock-solid, reliable server of mine crashed last night. The
server was still running, but eth0, an Intel 82574L using the e1000e
driver, went down. The server has a Supermicro X8DTE-F (dual Xeon
X5650, yada yada). The server is a drbd master, so that was the first
place to notice network issues. Just a couple of days ago I ran yum
update to the latest; I do this about once a month.
2006 Jun 11
2
Centos 4.3 & drbd
Hiya,
I'm new to CentOS but learning rapidly; I have been using FreeBSD. I'm
trying to set up an HA NFS server using two machines. Both machines are
running 4.3, updated to the latest via yum.
I did yum groupinstall drbd-heartbeat and
yum install kernel-module-drbd-2.6.9-34.EL to match my
kernel.
The problem I have is that on 1 machine drbd works fine, but when I start
2009 Jun 24
3
Unexplained reboots in DRBD82 + OCFS2 setup
We're trying to setup a dual-primary DRBD environment, with a shared
disk with either OCFS2 or GFS. The environment is a Centos 5.3 with
DRBD82 (but also tried with DRBD83 from testing) .
Setting up a single primary disk and running bonnie++ on it works.
Setting up a dual-primary disk, only mounting it on one node (ext3) and
running bonnie++ works
When setting up ocfs2 on the /dev/drbd0
2009 Jun 05
1
DRBD+GFS - Logical Volume problem
Hi list.
I am working with DRBD (+GFS with its DLM). The GFS configuration needs a
CLVMD configuration. So, after synchronizing my (two) /dev/drbd0 block
devices, I start the clvmd service and try to create a clustered
logical volume. I get this:
On "alice":
[root@alice ~]# pvcreate /dev/drbd0
Physical volume "/dev/drbd0" successfully created
[root@alice ~]# vgcreate
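For clustered LVM on top of DRBD as attempted above, clvmd must own the LVM locking, and LVM should scan the drbd device rather than its backing disk. A sketch of the relevant /etc/lvm/lvm.conf settings; the filter pattern is illustrative for a setup like this one, not quoted from the thread:

```
# /etc/lvm/lvm.conf fragment (sketch)
devices {
    # accept drbd devices, reject everything else, so LVM does not
    # also see the raw backing partition with the same PV label
    filter = [ "a|/dev/drbd.*|", "r|.*|" ]
}
global {
    # 3 = clustered locking via the clvmd locking library
    locking_type = 3
}
```

After clvmd is running on both nodes with this in place, `vgcreate -cy` creates the volume group with the clustered flag set, so logical volumes on it are coordinated across the cluster.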
2005 Apr 19
2
xenU and drbd
Hi,
I've a problem with the drbd 0.7.10 module on a xenU OS (testing).
I've compiled drbd with "make clean all" then "make install" without error.
modprobe drbd is OK with no error too, and now I'm trying to start drbd:
/etc/init.d/drbd start
Starting DRBD resources: can not open /dev/drbd0: No such device or
address
[ d0 can not open /dev/drbd0: No such device
2010 Nov 04
0
Problem booting Microsoft Windows KVM virtual machine
Hi all,
I'm having problems with a VM's startup. I cloned the entire disk of a
Windows 2000 machine with dd onto a drbd device; that disk was configured
with two partitions. I'm able to see all the partitions' contents by using
kpartx and mount them:
# kpartx -l /dev/drbd0
drbd0p1 : 0 202751488 /dev/drbd0 32
drbd0p2 : 0 285567360 /dev/drbd0 202751520
The problem is that when I try to start it up
2010 Oct 04
1
Xen domU crashes accessing to drbd disk if using maxmem.
Hello all,
I've just installed a new dom0 with openSUSE 11.3 (x86_64)
and I'm seeing domU crashes when reading from disks.
The problem occurs when in the domU configuration I use
memory=1024
maxmem=2048
My setup is DRBD on LVM on software RAID 10, and drbd
devices are used as disks for domUs, using
phy:/dev/drbd0,hda,w
phy:/dev/drbd1,hdb,w
The domU under test is HVM. I'm
2010 Mar 27
1
DRBD,GFS2 and GNBD without all clustered cman stuff
Hi all,
What I want to achieve:
1) having two storage servers replicating a partition with DRBD
2) exporting the drbd device with GFS2 via GNBD from the primary server
3) importing the GNBD on some nodes and mounting it with GFS2
Assuming no logical error is made in the points above, this is the
situation:
Server 1: LogVol09, DRBD configured as /dev/drbd0, replicated to Server 2.
DRBD seems to work
2009 Feb 09
1
Problema to Mount drbd0
Hi!
I found a similar problem in a user mailing list, but the fix didn't
work here.
I'm using CentOS 5.2, with a Xen 2.6.18 kernel and a compiled DRBD 8.
So, I load the ocfs2 module and use mkfs.ocfs2 successfully with
/dev/drbd0.
But when I try to mount it, I get this log:
# mount.ocfs2 /dev/drbd0 /mnt
mount.ocfs2: I/O error on channel while trying to join the group
# tail
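The "I/O error on channel while trying to join the group" from mount.ocfs2 typically points at the O2CB cluster layer rather than the filesystem itself: /etc/ocfs2/cluster.conf must be identical on all nodes, each node name must match that host's `uname -n`, and the o2cb service must be online everywhere before mounting. A sketch of a two-node cluster.conf, with hypothetical node names and addresses:

```
# /etc/ocfs2/cluster.conf sketch: names and IPs are placeholders.
# Node names must match `uname -n` on each host; copy the file
# verbatim to both nodes, then run "service o2cb online" on each.
cluster:
	node_count = 2
	name = ocfs2

node:
	ip_port = 7777
	ip_address = 192.168.0.1
	number = 0
	name = node-a
	cluster = ocfs2

node:
	ip_port = 7777
	ip_address = 192.168.0.2
	number = 1
	name = node-b
	cluster = ocfs2
```

When DRBD sits underneath, the drbd resource must also be Primary on the mounting node before mount.ocfs2 can join the group.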