similar to: 'modprobe -d ocfs'

Displaying 20 results from an estimated 700 matches similar to: "'modprobe -d ocfs'"

2003 Oct 03
1
starting asterisk?
I'm trying to figure out how to start *... RH 7.3, CVS, TE410P, TA750. If I just try it the way the docs spell it out, "/usr/sbin/asterisk -vvvc", it fails... /var/log/asterisk/messages: Oct 3 22:23:34 WARNING[1024]: File chan_zap.c, Line 610 (zt_open): Unable to open '/dev/zap/channel': No such device Oct 3 22:23:34 ERROR[1024]: File chan_zap.c, Line 4930 (mkintf): Unable
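The '/dev/zap/channel' error typically means no Zaptel driver is loaded for the card. A minimal pre-flight sketch, assuming the Zaptel tools are installed and that wct4xxp is the right driver module for the TE410P (an assumption for this hardware):

    # lsmod | grep zaptel          # is the Zaptel framework loaded?
    # modprobe zaptel              # load the framework
    # modprobe wct4xxp             # assumed driver module for the TE410P
    # ls /dev/zap/                 # the channel device nodes should appear here
    # ztcfg -vv                    # apply /etc/zaptel.conf before starting Asterisk
    # /usr/sbin/asterisk -vvvc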
2003 Feb 11
0
samba 2.2.7a and multiple logins...
Hello, I use a new Samba server with LDAP support as PDC for some Windows 2000 and Windows XP machines. All is working nicely. The users can log in with a user/password in the domain, we have remote profiles, the users can access the Samba shares and print, etc. But after some time (< 1 day) we get smbstatus output like the attached file. Users are connected multiple times with the same share
2009 Nov 04
4
[PATCH server] Update daemons to use new QMF.
This patch updates dbomatic, taskomatic and host-register to use the new C++-wrapped Ruby QMF bindings. It also fixes a couple of bugs along the way, including the 0-CPU bug for host-register. This is a compilation of work done by myself and Arjun Roy. Signed-off-by: Ian Main <imain at redhat.com> --- src/db-omatic/db_omatic.rb | 111 ++++++-------
2005 Aug 04
2
Can't load ocfs on ia64 EL3u3
# uname -r 2.4.21-20.ELsmp [root@hp2620-2 root]# rpm -qa | grep ocfs ocfs-tools-1.0.10-1 ocfs-support-1.0.10-1 ocfs-2.4.21-EL-1.0.13-1 [root@hp2620-2 root]# /sbin/load_ocfs /sbin/insmod ocfs node_name=hp2620-2-eth1 ip_address=10.1.2.101 cs=1913 guid=8D65C7CEECF9EBE638BD0013215BD6E9 comm_voting=1 ip_port=7000 insmod: ocfs: no module by that name found load_ocfs: insmod failed [root@hp2620-2 root]#
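insmod can only find ocfs.o if a build for the running kernel exists under /lib/modules; here the package is built against 2.4.21-EL while the running kernel is 2.4.21-20.ELsmp, which may be the mismatch. A quick check, as a sketch (package name taken from the output above):

    # uname -r                                            # running kernel
    # rpm -ql ocfs-2.4.21-EL-1.0.13-1 | grep 'ocfs\.o'    # where the RPM installed the module
    # find /lib/modules/$(uname -r) -name 'ocfs*'         # is anything present for this kernel?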
2012 Sep 04
1
virt-sparsify broken after recent changes
With 1.9.37 the following script worked fine, with 1.9.39 it fails. Any idea what the issue is? Olaf ... mount -o /dev/vda1 /sysroot/ [ 1.396411] EXT4-fs (vda1): mounting ext2 file system using the ext4 subsystem [ 1.533090] EXT4-fs (vda1): mounted filesystem without journal. Opts: (null) guestfsd: main_loop: proc 1 (mount) took 0.45 seconds libguestfs: recv_from_daemon: 40 bytes: 20 00
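For reference, the usual virt-sparsify invocation plus the debug flags common to the libguestfs tools, which help when reporting a regression like this (image names are placeholders):

    # virt-sparsify disk.img disk-sparse.img
    # virt-sparsify -v -x disk.img disk-sparse.img 2> sparsify.log   # verbose output + libguestfs call trace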
2009 Oct 20
1
Fencing OCFS
Hi people, I installed OCFS on my virtual machines, with CentOS 5.3 and Xen. But when I turn off machine1, OCFS starts fencing machine2. I read the docs on oracle.com, but could not solve the problem. Can someone help? My conf and my package versions: cluster.conf node: ip_port = 7777 ip_address = 192.168.1.35 number = 0 name = x1 cluster
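For comparison, a two-node cluster.conf in its usual indented layout; only the x1 entry is taken from the snippet above, the second node's name and address are placeholders:

    node:
            ip_port = 7777
            ip_address = 192.168.1.35
            number = 0
            name = x1
            cluster = ocfs2
    node:
            ip_port = 7777
            ip_address = 192.168.1.36
            number = 1
            name = x2
            cluster = ocfs2
    cluster:
            node_count = 2
            name = ocfs2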
2003 Jul 03
1
load_ocfs problem
Hello, I'm trying to implement Oracle RAC on a FireWire shared disk. I've patched the kernel (for multiple-login FireWire support), and both nodes can see the shared disk. Then I've installed the files necessary for ocfs: ocfs-2.4.20-smp-1.0.8-4.i386.rpm ocfs-support-1.0.8-4.i386.rpm ocfs-tools-1.0.8-4.i386.rpm However, when I do a '/sbin/load_ocfs' this is what I get:
2005 Nov 17
1
Startup error- new install
Looking for any ideas where I need to look to fix this: I'm installing RHEL3 AS (update 4) on Dell PowerEdge 6850s. I've installed the hugemem kernels on these boxes and need to install and run ocfs. Kernel: ------- 2.4.21-27.0.4.ELhugemem Loaded the ocfs RPMs --------------------- # rpm -qa | grep ocfs ocfs-2.4.21-EL-smp-1.0.14-1 ocfs-support-1.1.5-1 ocfs-2.4.21-EL-1.0.14-1
2008 Sep 18
0
Ocfs2-users Digest, Vol 57, Issue 14
I think I might have misunderstood where it is failing; has this file been added to the DB on the web site, or does it fail when you try to configure this? Carle Simmonds Infrastructure Consultant Technology Services Experian UK Ltd __________________________________________________ Tel: +44 (0)115 941 0888 (main switchboard) Mobile: +44 (0)7813 854834 E-Mail: carle.simmonds at uk.experian.com
2008 Sep 18
2
o2hb_do_disk_heartbeat:982:ERROR
Hi everyone; I have a problem on my 10-node cluster with ocfs2 1.2.9, and the OS is RHEL 4.7 AS. 9 nodes can start the o2cb service and mount the SAN disks on startup; however, one node cannot. My cluster configuration is: node: ip_port = 7777 ip_address = 192.168.5.1 number = 0 name = fa01 cluster = ocfs2 node: ip_port =
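When a single node fails to start o2cb and mount while the others come up, the usual suspects are a stale or differing cluster.conf, o2cb settings, or storage visibility. A sketch of what to compare against a working node (paths are the EL defaults):

    # /etc/init.d/o2cb status              # is the stack loaded and the cluster online?
    # md5sum /etc/ocfs2/cluster.conf       # compare the checksum with a healthy node
    # grep O2CB /etc/sysconfig/o2cb        # cluster name and heartbeat threshold
    # mounted.ocfs2 -d                     # does this node see the same SAN devices?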
2012 Aug 06
0
Problem with mdadm + lvm + drbd + ocfs ( sounds obvious, eh ? :) )
Hi there. First of all, apologies for the lengthy message, but it's been a long weekend. I'm trying to set up a two-node cluster with the following configuration: OS: Debian 6.0 amd64 ocfs: 1.4.4-3 (debian package) drbd: 8.3.7-2.1 lvm2: 2.02.66-5 kernel: 2.6.32-45 mdadm: 3.1.4-1+8efb9d1+squeeze1 layout: 0 - 2 36GB SCSI disks in a RAID1 array, with mdadm. 1 - 1 LVM2 VG above the RAID1,
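To make the layering explicit, a compressed sketch of that stack with placeholder device and resource names (RAID1 at the bottom, LVM above it, DRBD in dual-primary mode, OCFS2 on top):

    # mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
    # pvcreate /dev/md0
    # vgcreate vg0 /dev/md0
    # lvcreate -L 30G -n drbd0 vg0         # backing device for the DRBD resource
    # drbdadm create-md r0                 # r0 assumed to point at /dev/vg0/drbd0 in drbd.conf
    # drbdadm up r0
    # mkfs.ocfs2 /dev/drbd0                # OCFS2 needs DRBD running dual-primary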
2007 Sep 06
1
60% full and writes fail..
I have a setup with lots of small files (Maildir), in 4 different volumes, and for some reason the volumes are full when they reach 60% usage (as reported by df). This was of course a bit of a surprise for me .. lots of failed writes, bounced messages and very angry customers. Has anybody on this list seen this before (not the angry customers ;-) )? Regards, =paulv # echo "ls
2011 Feb 28
2
ocfs2 crash with bugs reports (dlmmaster.c)
Hi, After the problem described in http://oss.oracle.com/pipermail/ocfs2-users/2010-December/004854.html we've upgraded the kernels and ocfs2-tools on every node. The present versions are: kernel 2.6.32-bpo.5-amd64 (from debian lenny-backports) ocfs2-tools 1.4.4-3 (from debian squeeze) We didn't notice any problems in the logs until last Friday, when the whole ocfs2 cluster crashed. We know
2007 Jul 07
2
Adding new nodes to OCFS2?
I looked around and found an older post which seems not applicable anymore. I have a cluster of 2 nodes right now, which has 3 OCFS2 file systems. All the file systems were formatted with 4 node slots. I added the two new nodes (by hand, by ocfs2console and o2cb_ctl), so my /etc/ocfs2/cluster.conf looks right: node: ip_port = 7777 ip_address = 192.168.201.1 number = 0
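If the volumes really were formatted with only 4 node slots and more than 4 nodes will mount them, the slot count can be raised with tunefs.ocfs2 while the file system is unmounted (device name is a placeholder; check the man page for the ocfs2-tools version in use):

    # umount /dev/sdX1                     # unmount on every node first
    # tunefs.ocfs2 -N 6 /dev/sdX1          # raise the number of node slots to 6
    # fsck.ocfs2 -n /dev/sdX1              # optional read-only check afterwards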
2008 Oct 03
2
OCFS2 with Loop device
Hi there, I'm trying to set up OCFS2 with the loop device /dev/loop0. I've got 4 servers running SLES10 SP2. Internal IPs: 192.168.55.1, .2, .3 and .6. My cluster.conf: -------------------------------------------- node: ip_port = 7777 ip_address = 192.168.55.1 number = 0 name = www cluster = cawww node: ip_port = 7777 ip_address = 192.168.55.2
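As a sketch, a single-node loop-device setup might look like the commands below (file name, size and label are placeholders). Note that a loop device backed by a local file is only visible to that one host, so the four servers would each be mounting different, unrelated devices unless the backing file itself sits on shared storage; the loop device also has to be re-attached after every reboot.

    # dd if=/dev/zero of=/srv/ocfs2.img bs=1M count=2048   # local backing file
    # losetup /dev/loop0 /srv/ocfs2.img
    # mkfs.ocfs2 -L cawww /dev/loop0
    # mount -t ocfs2 /dev/loop0 /mnt/www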
2004 Feb 09
0
RES: RES: RES: ocfs installation error on RHAS 2.1
Well, I've compiled and installed kernel 2.4.9-37 uniprocessor. I've set up ocfs using version 1.0.9-9 uniprocessor. I've generated ocfs.conf using ocfstool. When I start ocfs using /etc/init.d/ocfs start, I receive the following errors: [root@RAC1 root]# /etc/init.d/ocfs start Loading OCFS: /sbin/insmod ocfs node_name=RAC1.localdomain ip_address=192.168.61 2 ip_port=7000
2011 Mar 03
1
OCFS2 1.4 + DRBD + iSCSI problem with DLM
An HTML attachment was scrubbed... URL: http://oss.oracle.com/pipermail/ocfs2-users/attachments/20110303/0fbefee6/attachment.html
2008 Apr 23
2
Transport endpoint is not connected
Hi, I want to share a partition of an external disk between 3 machines. It is located on a Sun StoreTek 3510 and bound (via multipathing) to all 3 machines. One machine (say mach1) is the node I used to create the filesystem and setup /etc/ocfs2/cluster.conf. I copied the file to the other 2 machines. All use the same /etc/multipath.conf[1], too. All start /etc/init.d/ocfs2 at boot, all are
2005 Jul 26
1
Linux in-kernel keys support
Hi all, I recently made a patch to openssh 4.1p1 to allow it to use the in-kernel key management provided by 2.6.12 or later Linux kernels. I've attached the patch (which is still only a proof of concept; for instance, it's very verbose right now) to this mail. Now, my question is, is this a completely insane idea, and would (a later version of) the patch have a chance of making it into the
2010 Sep 03
1
Servers reboot - may be OCFS2 related
Hello. What we have: 2x Debian 5.0 x64 - 2.6.32-20~bpo50+1 from backports, DRBD + OCFS2 1.4.1-1. Both nodes reboot every day in my tests. Under heavy load it takes 1-3 hours before a reboot, when idle about 20 hours. I'm not sure whether it is OCFS2-related, but if I don't mount the OCFS2 partition I don't get this reboot. Attached is a screenshot of the system console showing the error. Nothing special in the logs.