
Displaying 20 results from an estimated 10000 matches similar to: "how can I get the 64bit JBD patch?"

2006 Aug 01
1
AW: ocfs2_search_chain: Group Descriptor has bad signature
I'm using ocfs2 and all modules from SuSE (SLES9), nothing self-compiled. Here are the details:
* 32-bit machine (writing to the ocfs2 partition/LUN, and where the corruption was reported):
  Kernel: 2.6.5-7.257-bigsmp #1 SMP i686 i386 GNU/Linux
  OCFS2 rpms: ocfs2console-1.2.1-4.2 ocfs2-tools-1.2.1-4.2
  o2cb_ctl -V: o2cb_ctl version 1.2.1
  /etc/init.d/o2cb status: Module "configfs":
2010 Jan 18
1
Getting Closer (was: Fencing options)
One more follow-on: the combination of kernel.panic=60 and kernel.printk=7 4 1 7 seems to have netted the culprit:
E01-netconsole.log:Jan 18 09:45:10 E01 (10,0):o2hb_write_timeout:137 ERROR: Heartbeat write timeout to device dm-12 after 60000 milliseconds
E01-netconsole.log:Jan 18 09:45:10 E01 (10,0):o2hb_stop_all_regions:1517 ERROR: stopping heartbeat on all active regions.
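[Editor's note: a minimal sketch of how those two settings might be made persistent; the file placement is an assumption, the values come from the report above.]

    # /etc/sysctl.conf (hypothetical placement)
    kernel.panic = 60          # reboot 60 seconds after a panic instead of hanging forever
    kernel.printk = 7 4 1 7    # console loglevel 7 (debug), so the full trace reaches netconsole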
2006 Jun 25
1
Error while Mounting
I am attempting to set up a two-node ocfs2 cluster. At this point, I have the latest 1.2.1 version of the tools on both nodes. They are not running identical kernels (one is 2.6.16.18, the other 2.6.17.1); both are using the kernels' built-in OCFS2 modules, not modules built from source. I can mount my iSCSI volume on either node individually, but when I attempt to mount on both nodes, I get the following
2005 Apr 22
2
[2.6 patch] fs/jbd/: possible cleanups
This patch contains the following possible cleanups:
- make needlessly global functions static
- #if 0 the following unused global functions:
  - journal.c: __journal_internal_check
  - journal.c: journal_ack_err
- remove the following write-only global variable:
  - journal.c: current_journal
- remove the following unneeded EXPORT_SYMBOLs:
  - journal.c: journal_check_used_features
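[Editor's note: for readers unfamiliar with these cleanup patterns, a minimal self-contained C sketch; the function names are invented for illustration and are not the actual journal.c symbols.]

    #include <stdio.h>

    /* "make needlessly global functions static": without 'static' this
     * helper would be visible to every translation unit; with it, the
     * symbol stays file-local and the compiler may inline it freely. */
    static int check_used_features(void)
    {
        return 1;
    }

    #if 0
    /* "#if 0 unused global functions": the code stays in the tree for
     * reference but is compiled out, so no dead code reaches the binary. */
    static int internal_check(void)
    {
        return 0;
    }
    #endif

    int main(void)
    {
        printf("features ok: %d\n", check_used_features());
        return 0;
    }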
2008 Sep 04
4
[PATCH 0/3] ocfs2: Switch over to JBD2.
ocfs2 currently uses the Journaled Block Device (JBD) for its journaling. This is a very stable and tested codebase. However, JBD is limited by architecture to 32-bit block numbers. This means an ocfs2 filesystem is limited to 2^32 blocks. With a 4K blocksize, that's 16TB. People want larger volumes. Fortunately, there is now JBD2. JBD2 adds 64-bit block number support and some other
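[Editor's note: the arithmetic behind that 16TB figure, as a standalone C sketch; nothing here is ocfs2 code, it only restates the limits named above.]

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint64_t max_blocks = 1ULL << 32;   /* JBD: 32-bit block numbers */
        uint64_t block_size = 4096;         /* 4K blocksize */
        uint64_t max_bytes  = max_blocks * block_size;

        /* 2^32 blocks * 2^12 bytes per block = 2^44 bytes = 16 TiB */
        printf("max volume size: %llu TiB\n",
               (unsigned long long)(max_bytes >> 40));
        return 0;
    }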
2001 Oct 09
2
Assert in jbd-kernel.c
Hello. I have installed the ext3 file system on a test system, and sometimes I have a problem: I get an assert from within jbd-kernel.c, and whatever program was writing to the disk when this happens is unable to continue. The system is a server I built, which I named "dax". It is running Debian unstable, and I updated it to all the latest packages in Debian unstable as of today.
2006 Jun 30
1
Unable to mount node2 mount.ocfs2: Transport endpoint is not connected while mounting /dev/sdb1 on /u02/oradata/orcl
I currently have the setup below; both nodes can see the shared drive (confirmed with fdisk -l). However, I am unable to mount the shared device from node 2 after I mounted it from node 1. I get the following error: mount.ocfs2: Transport endpoint is not connected while mounting /dev/sdb1 on /u02/oradata/orcl
OS: Red Hat
uname -r: 2.6.9-22.ELsmp
OCFS version
2005 Oct 12
2
Unable to access cluster service
hello, I'm running Ubuntu Breezy with the OCFS2 modules in the standard kernel, and I installed ocfs2console and ocfs2-tools. I've formatted a partition with ocfs2, but I can't add any node or mount the device (with ocfs2console), because I get an "Unable to access cluster service" error. I can find neither the cause nor the solution to this.
root@lenaeja:~# /etc/init.d/o2cb status
2023 May 04
1
[PATCH 2/2] ocfs2: add error handling path when jbd2 enter ABORT status
On Thu, May 04, 2023 at 05:41:29PM +0800, Joseph Qi wrote:
> On 5/4/23 4:02 PM, Heming Zhao wrote:
> > On Thu, May 04, 2023 at 03:34:49PM +0800, Joseph Qi wrote:
> >> On 5/4/23 2:21 PM, Heming Zhao wrote:
> >>> On Thu, May 04, 2023 at 10:27:46AM +0800, Joseph Qi wrote:
2023 May 05
1
[PATCH 2/2] ocfs2: add error handling path when jbd2 enter ABORT status
On 5/5/23 12:20 AM, Heming Zhao wrote:
> On Thu, May 04, 2023 at 05:41:29PM +0800, Joseph Qi wrote:
>> On 5/4/23 4:02 PM, Heming Zhao wrote:
>>> On Thu, May 04, 2023 at 03:34:49PM +0800, Joseph Qi wrote:
>>>> On 5/4/23 2:21 PM, Heming Zhao wrote:
>>>>> On Thu, May 04, 2023 at 10:27:46AM +0800, Joseph
2010 Mar 18
1
OCFS2 works like standalone
I have installed OCFS2 on two SuSE 10 nodes. At first sight everything seems to work superbly. But /dev/sda (ocfs2) on rac1 is not sharing over the network (port 7777) with rac0. On both nodes I have 500MB /dev/sda disks that are mounted (and are ocfs2), but they do not share their content (files and folders) with each other. So when I create a file on one node I am expecting to
2011 Jul 06
2
Slow umounts on SLES10 patchlevel 3 ocfs2
Hi, we are using a SLES10 Patchlevel 3 cluster with 12 nodes hosting Tomcat application servers. The cluster had been running for some time (about 200 days) without problems. Recently we needed to shut down the cluster for maintenance and experienced very long times for the umount of the filesystem. It took something like 45 minutes per node and filesystem (12 x 45 minutes shutdown time). As a result the planned
2023 May 08
1
[PATCH 2/2] ocfs2: add error handling path when jbd2 enter ABORT status
Sorry for the late reply, I have been a little busy recently.
On Fri, May 05, 2023 at 11:42:51AM +0800, Joseph Qi wrote:
> On 5/5/23 12:20 AM, Heming Zhao wrote:
> > On Thu, May 04, 2023 at 05:41:29PM +0800, Joseph Qi wrote:
> >> On 5/4/23 4:02 PM, Heming Zhao wrote:
> >>> On Thu, May 04, 2023 at 03:34:49PM +0800, Joseph Qi wrote:
2023 May 09
1
[PATCH 2/2] ocfs2: add error handling path when jbd2 enter ABORT status
On 5/9/23 12:40 AM, Heming Zhao wrote:
> Sorry for the late reply, I have been a little busy recently.
> On Fri, May 05, 2023 at 11:42:51AM +0800, Joseph Qi wrote:
>> On 5/5/23 12:20 AM, Heming Zhao wrote:
>>> On Thu, May 04, 2023 at 05:41:29PM +0800, Joseph Qi wrote:
>>>> On 5/4/23 4:02 PM, Heming Zhao wrote:
2017 Jun 19
1
core dump on ocfs2
Hi everybody, I'm finding a lot of errors like this on my server:
Jun 19 14:22:45 posta2 kernel: [885017.412902] BUG: soft lockup - CPU#2 stuck for 22s! [dovecot-lda:11955]
Jun 19 14:22:45 posta2 kernel: [885017.412906] Modules linked in: ocfs2(E) jbd2 quota_tree dm_service_time dm_multipath ocfs2_dlmfs(E) ocfs2_stack_o2cb(E) ocfs2_dlm(E) ocfs2_nodemanager(E)
2009 Feb 04
1
Strange dmesg messages
Hi list, something went wrong this morning and we had a node (#0) reboot. Something blocked NFS access from both nodes; one rebooted, and on the other we restarted nfsd, which brought it back. Looking at the logs of node #0 - the one that rebooted - everything seems normal, but looking at the other node's dmesg we saw these messages. First, o2net detected that node #0 was dead: (It
2005 Aug 23
1
Minor fs/Kconfig cleanup
Hello, the current ocfs2 code nicely adds "select JBD" for EXT3_FS, but doesn't remove the following line, which is obsolete after that. Here's a patch:
Index: linux-2.6.12/fs/Kconfig
===================================================================
--- linux-2.6.12.orig/fs/Kconfig
+++ linux-2.6.12/fs/Kconfig
@@ -140,7 +140,6 @@ config EXT3_FS_SECURITY
 config JBD
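[Editor's note: a hypothetical illustration of the "select" relationship the patch relies on; this is not the actual hunk, which is truncated above.]

    config EXT3_FS
            tristate "Ext3 journalling file system support"
            select JBD      # enabling EXT3_FS automatically enables JBD

    config JBD
            tristate        # pulled in via select; no EXT3_FS-specific line needed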
2009 Sep 24
1
strange fencing behavior
I have 10 servers in a cluster running Debian Etch with 2.6.26-bpo.2 and a backport of ocfs2-tools-1.4.1-1. I'm using AoE to export the drives from a Debian Lenny server in the cluster. My problem is that if I mount the ocfs2 partition on the server that is exporting it via AoE, it fences the entire cluster. Looking at the logs, exporting the ocfs2 partition doesn't give much information...
2005 Jul 19
1
[2.6 patch] fs/jbd/: cleanups
This patch contains the following cleanups:
- make needlessly global functions static
- journal.c: remove the unused global function __journal_internal_check and move the check to journal_init
- remove the following write-only global variable:
  - journal.c: current_journal
- remove the following unneeded EXPORT_SYMBOL:
  - journal.c: journal_recover
Signed-off-by: Adrian Bunk
2006 Aug 16
2
RedHat Node Panic Weekly
See the earlier post from May 10th, "Node Panic". Can anyone tell me what might be happening here? I have a 3-node cluster running under RH AS 4 (2.6.9-34.ELsmp) with ocfs2 v1.2.1. I've upgraded to 1.2.1 as suggested in the previous post, but one or more of my nodes continues to panic weekly:
Aug 16 15:29:02 linux96 kernel: (6670,2):ocfs2_extend_file:787 ERROR: bug expression: