similar to: OCFS2 and iSCSI

Displaying 20 results from an estimated 10000 matches similar to: "OCFS2 and iSCSI"

2013 Feb 27
2
ocfs2 bug reports, any advice? thanks
Hi, I set up two nodes, 192.168.20.20 and 192.168.20.21. The OS is Ubuntu 12.04 with kernel version 3.2: root at Server21:~# uname -a Linux Server21 3.2.0-23-generic #36-Ubuntu SMP Tue Apr 10 20:39:51 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux Server20 rebooted after losing its connection to the iSCSI SAN, then recovered resource locks for Server21. Server20: Feb 27 09:29:31 Server20 kernel:
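For a two-node setup like the one described (Server20/Server21 at 192.168.20.20/.21), the o2cb cluster membership lives in /etc/ocfs2/cluster.conf on every node. A minimal sketch, assuming the default cluster name "ocfs2" and the default port 7777 (both are assumptions; the post does not show its actual file):

```
cluster:
	node_count = 2
	name = ocfs2

node:
	ip_port = 7777
	ip_address = 192.168.20.20
	number = 0
	name = Server20
	cluster = ocfs2

node:
	ip_port = 7777
	ip_address = 192.168.20.21
	number = 1
	name = Server21
	cluster = ocfs2
```

The file must be identical on all nodes, and the `name` fields must match each node's hostname for o2cb to identify the local node.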
2011 Mar 03
1
OCFS2 1.4 + DRBD + iSCSI problem with DLM
An HTML attachment was scrubbed... URL: http://oss.oracle.com/pipermail/ocfs2-users/attachments/20110303/0fbefee6/attachment.html
2010 Oct 20
1
OCFS2 + iscsi: another node is heartbeating in our slot (over scst)
Hi, I'm building a cluster containing two nodes with a separate common storage server. On the storage server I have a volume with an ocfs2 fs, and the server shares this volume via an iSCSI target. When a node connects to the target I can mount the volume locally on the node and use it. Unfortunately, on the storage server ocfs2 logged to dmesg: Oct 19 22:21:02 storage kernel: [ 1510.424144]
2009 Jun 19
1
[PATCH] ocfs2: Provide the ocfs2_dlm_lvb_valid() stack API.
The Lock Value Block (LVB) of a DLM lock can be lost when nodes die and the DLM cannot reconstruct its state. Clients of the DLM need to know this. ocfs2's internal DLM, o2dlm, explicitly zeroes out the LVB when it loses track of the state. This is not a standard behavior, but ocfs2 has always relied on it. Thus, an o2dlm LVB is always "valid". ocfs2 now supports both o2dlm and
2011 Jul 06
2
Slow umounts on SLES10 patchlevel 3 ocfs2
Hi, we are using SLES10 Patchlevel 3 with 12 nodes hosting Tomcat application servers. The cluster had been running for some time (about 200 days) without problems. Recently we needed to shut down the cluster for maintenance and experienced very long times for the umount of the filesystem. It took something like 45 minutes per node and filesystem (12 x 45 minutes shutdown time). As a result the planned
2010 Apr 14
2
[PATCH 1/2] ocfs2/dlm: Make o2dlm domain join/leave messages KERN_NOTICE
o2dlm join and leave messages are more than informational, as they are required in debugging locking issues. This patch changes them from KERN_INFO to KERN_NOTICE. Signed-off-by: Sunil Mushran <sunil.mushran at oracle.com> --- fs/ocfs2/dlm/dlmdomain.c | 6 +++--- 1 files changed, 3 insertions(+), 3 deletions(-) diff --git a/fs/ocfs2/dlm/dlmdomain.c b/fs/ocfs2/dlm/dlmdomain.c index
2009 Feb 21
1
GFS2/OCFS2 scalability
Andreas Dilger wrote: > On Feb 20, 2009 20:23 +0300, Kirill Kuvaldin wrote: >> I'm evaluating different cluster file systems that can work with large >> clustered environment, e.g. hundreds of nodes connected to a SAN over >> FC. >> >> So far I looked at OCFS2 and GFS2, they both worked nearly the same >> in terms of performance, but since I ran my
2011 Mar 01
1
OCFS2 shared volume getting slow when you add more nodes
Hello, I have a cluster with two nodes, with SLES10 as the base system. First I powered on one node, and the system worked just fine. Then, when a second node was added, performance degraded badly. Any hints or ideas about this behaviour? TIA, M -- Saludos, Mauro Parra-Miranda Consultor Senior Novell - mparra at novell.com openSUSE Developer - mauro at openSUSE.org BB PIN - 22600AE9
2007 Feb 06
1
ocfs2-tools-1.2.2 compile.
Hi, The ocfs2 package compiled perfectly, but the tools did not. The test setup is using openSUSE 10.1 with updates applied. For "ocfs2-tools-1.2.2": In file included from include/ocfs2.h:60, from alloc.c:32: include/ocfs2_fs.h: In function 'ocfs2_fast_symlink_chars': include/ocfs2_fs.h:566: warning: implicit declaration of function 'offsetof' include/ocfs2_fs.h:566: error: expected
2007 Nov 06
1
Issues with iSCSI, Hosts Crashing
Two questions (and then a bonus one), kind of interrelated, but, first, some basic info. I'm using OCFS2 on OpenSUSE 10.2, kernel 2.6.18.8(-0.7). There are three nodes in the OCFS2 cluster, backed by Openfiler iSCSI storage. First, I'm using Openfiler and iSCSI volumes to back my OCFS2 file system. The nodes that are part of the OCFS2 cluster use the file system as a shared storage
2014 Aug 22
2
ocfs2 problem on ctdb cluster
Ubuntu 14.04, DRBD. Hi, On a DRBD primary node, when attempting to mount our cluster partition: sudo mount -t ocfs2 /dev/drbd1 /cluster we get: mount.ocfs2: Unable to access cluster service while trying to join the group We then call: sudo dpkg-reconfigure ocfs2-tools Setting cluster stack "o2cb": OK Starting O2CB cluster ocfs2: OK And all is well: Aug 22 13:48:23 uc1 kernel: [
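On Debian/Ubuntu, `dpkg-reconfigure ocfs2-tools` rewrites /etc/default/o2cb, which controls whether the o2cb init script brings the cluster up at boot. A sketch of the relevant settings, using what I believe are the packaged defaults (values are assumptions; verify against your own file):

```
# /etc/default/o2cb -- consumed by the o2cb init script
O2CB_ENABLED=true            # start the cluster stack at boot
O2CB_BOOTCLUSTER=ocfs2       # cluster name to bring online (must match cluster.conf)
O2CB_HEARTBEAT_THRESHOLD=31  # missed disk heartbeats before a node is declared dead
O2CB_IDLE_TIMEOUT_MS=30000   # network idle timeout before fencing
O2CB_KEEPALIVE_DELAY_MS=2000
O2CB_RECONNECT_DELAY_MS=2000
```

The "Unable to access cluster service" error typically means the cluster was not yet online when mount ran, which is why reconfiguring (and thereby starting o2cb) made the mount succeed.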
2011 Oct 18
12
Unable to stop cluster as heartbeat region still active
Hi, I have a 2-node ocfs2 cluster running UEK 2.6.32-100.0.19.el5, ocfs2console-1.6.3-2.el5, ocfs2-tools-1.6.3-2.el5. My problem is that every time I try to run /etc/init.d/o2cb stop it fails with this error: Stopping O2CB cluster CLUSTER: Failed Unable to stop cluster as heartbeat region still active There is no active mount point. I tried to manually stop the heartbeat with
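o2cb exposes its state through configfs: each still-active heartbeat region appears as a subdirectory (named after the filesystem UUID) under /sys/kernel/config/cluster/&lt;name&gt;/heartbeat, and `o2cb stop` refuses to proceed while any remain. A small sketch for listing them; the configfs path and cluster name "ocfs2" are the usual defaults, not something this post confirms:

```python
import os

def active_heartbeat_regions(configfs_cluster_dir):
    """Return the names of active o2cb heartbeat regions.

    Each region is a subdirectory of <cluster>/heartbeat named after a
    filesystem UUID; plain attribute files (e.g. dead_threshold) are skipped.
    """
    hb_dir = os.path.join(configfs_cluster_dir, "heartbeat")
    if not os.path.isdir(hb_dir):
        return []
    return sorted(
        entry for entry in os.listdir(hb_dir)
        if os.path.isdir(os.path.join(hb_dir, entry))
    )

if __name__ == "__main__":
    # On a real node the cluster directory is usually
    # /sys/kernel/config/cluster/ocfs2 (name taken from cluster.conf).
    regions = active_heartbeat_regions("/sys/kernel/config/cluster/ocfs2")
    if regions:
        print("still-active heartbeat regions:", regions)
    else:
        print("no active heartbeat regions found")
```

If a region lingers with no mount point in /proc/mounts, a common culprit is a device still opened by something else (multipath, a stale mount namespace, or a crashed mount.ocfs2).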
2013 Apr 28
2
Is it one issue. Do you have some good ideas, thanks a lot.
Hi everyone, I have some questions about OCFS2 when using it as a VM store. With Ubuntu 12.04, the kernel version is 3.2.40 and the ocfs2-tools version is 1.6.4. After a network configuration change, there are some issues, shown in the log below. Why is there the message "Node 255 (he) is the Recovery Master for the dead node 255" in the syslog? Why is the host ZHJD-VM6 blocked until it reboots
2009 Jun 05
1
ocfs2 in sles11 vs. sles10
I'm trying to mount an ocfs2 volume (created on SLES11) on my SLES10 SP2 server. I created the volume with these options: "mkfs.ocfs2 -C 128k -L CLUFS -M cluster -N 16 /dev/sdc" (/dev/sdc is an iSCSI device). It works to mount the volume with "mount.ocfs2 /dev/sdc /ocfs2" on my boxes (with o2cb configured for both nodes). When creating files on the ocfs2 volume I can't see
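For an iSCSI-backed ocfs2 volume like the /dev/sdc described above, a persistent mount usually goes in /etc/fstab with `_netdev`, so the mount waits until networking (and therefore the iSCSI session) is up. A hypothetical entry matching the paths in the post; the device name can change between boots, so mounting by the label set with `-L CLUFS` is the safer variant:

```
# /etc/fstab -- sketch, paths taken from the post above
/dev/sdc        /ocfs2  ocfs2  _netdev,defaults  0 0

# safer alternative: mount by the filesystem label instead of the device path
LABEL=CLUFS     /ocfs2  ocfs2  _netdev,defaults  0 0
```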
2011 Feb 02
2
OCFS2 Questions!
Hello, First of all, I am new to the list and I have several questions about ocfs2 performance. Where I work I am having huge performance problems with ocfs2. Let me describe my environment: 3 Xen virtual machines with ocfs2 mounting a LUN exported over iSCSI (actually 3 LUNs, 3 ocfs2 clusters). I am not the one who configured the environment, but it is making the performance of my MAIL
2008 Mar 04
1
OCFS2 strange freezes
Good day, everyone. I have a SAN server built with the Openfiler OS, with iSCSI mode turned on. I have two nodes, which connect to that server via iSCSI, using one of two active iSCSI partitions. I've installed ocfs2 1.3.3 with kernel 2.6.23.1, configured it, made an ocfs2 partition and was successful in mounting it on both nodes. Everything works just fine, I can upload a file from one node and
2010 Jun 16
1
Why OCFS2 with RAC
I have been a user of OCFS2 for quite some time now (2 years or so) and a user of Oracle RAC for several years as well. My usage of these two is completely independent though, the cluster filesystem is for application level usage (web servers, etc.) only because RAC can manage its own shared storage directly. And that brings me to my observation/question... I keep seeing a lot of messages
2012 Sep 14
2
HA-OCFS2?
Is it possible to create a highly-available OCFS2 cluster (i.e., a storage cluster that mitigates the single point of failure [SPoF] created by storing an OCFS2 volume on a single LUN)? The OCFS2 Project Page makes this claim... > OCFS2 is a general-purpose shared-disk cluster file system for Linux capable of providing both high performance and high availability. ...but without backing-up