Displaying 20 results from an estimated 10000 matches similar to: "ocfs2 very slow"
2023 Apr 21
1
[PATCH] ocfs2: fix missing reset j_num_trans for sync
fstest generic cases 266, 272 and 281 trigger a hang on umount.
I use 266 to describe the root cause.
```
49 _dmerror_unmount
50 _dmerror_mount
51
52 echo "Compare files"
53 md5sum $testdir/file1 | _filter_scratch
54 md5sum $testdir/file2 | _filter_scratch
55
56 echo "CoW and unmount"
57 sync
58 _dmerror_load_error_table
59 urk=$($XFS_IO_PROG -f -c "pwrite
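# (Not part of the generic/266 excerpt above.) A minimal sketch of how the
# failing cases can be run, assuming an fstests checkout with hypothetical
# ocfs2 devices configured in local.config:
#   FSTYP=ocfs2
#   TEST_DEV=/dev/vdb1     TEST_DIR=/mnt/test
#   SCRATCH_DEV=/dev/vdc1  SCRATCH_MNT=/mnt/scratch
# and then:
#   ./check generic/266 generic/272 generic/281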
2023 Apr 22
1
[PATCH] ocfs2: fix missing reset j_num_trans for sync
Sorry, please pause this patch review.
While investigating the fstest generic failures in cases 347 and 361, I found
that the wake_up() call should be moved out of the 'if()' block. The correct
way is to call wake_up() unconditionally.
Thanks,
Heming
On 4/21/23 4:36 PM, Heming Zhao wrote:
> fstest generic cases 266, 272 and 281 trigger a hang on umount.
>
> I use 266 to describe the root cause.
2023 May 05
1
[PATCH 2/2] ocfs2: add error handling path when jbd2 enter ABORT status
On 5/5/23 12:20 AM, Heming Zhao wrote:
> On Thu, May 04, 2023 at 05:41:29PM +0800, Joseph Qi wrote:
>>
>>
>> On 5/4/23 4:02 PM, Heming Zhao wrote:
>>> On Thu, May 04, 2023 at 03:34:49PM +0800, Joseph Qi wrote:
>>>>
>>>>
>>>> On 5/4/23 2:21 PM, Heming Zhao wrote:
>>>>> On Thu, May 04, 2023 at 10:27:46AM +0800, Joseph
2023 Apr 30
2
[PATCH 1/2] ocfs2: fix missing reset j_num_trans for sync
fstest generic cases 266, 272 and 281 trigger a hang on umount.
I use 266 to describe the root cause.
```
49 _dmerror_unmount
50 _dmerror_mount
51
52 echo "Compare files"
53 md5sum $testdir/file1 | _filter_scratch
54 md5sum $testdir/file2 | _filter_scratch
55
56 echo "CoW and unmount"
57 sync
58 _dmerror_load_error_table
59 urk=$($XFS_IO_PROG -f -c "pwrite
2023 May 08
1
[PATCH 2/2] ocfs2: add error handling path when jbd2 enter ABORT status
Sorry for the late reply, I have been a bit busy recently.
On Fri, May 05, 2023 at 11:42:51AM +0800, Joseph Qi wrote:
>
>
> On 5/5/23 12:20 AM, Heming Zhao wrote:
> > On Thu, May 04, 2023 at 05:41:29PM +0800, Joseph Qi wrote:
> >>
> >>
> >> On 5/4/23 4:02 PM, Heming Zhao wrote:
> >>> On Thu, May 04, 2023 at 03:34:49PM +0800, Joseph Qi wrote:
2023 May 09
1
[PATCH 2/2] ocfs2: add error handling path when jbd2 enter ABORT status
On 5/9/23 12:40 AM, Heming Zhao wrote:
> Sorry for the late reply, I have been a bit busy recently.
>
> On Fri, May 05, 2023 at 11:42:51AM +0800, Joseph Qi wrote:
>>
>>
>> On 5/5/23 12:20 AM, Heming Zhao wrote:
>>> On Thu, May 04, 2023 at 05:41:29PM +0800, Joseph Qi wrote:
>>>>
>>>>
>>>> On 5/4/23 4:02 PM, Heming Zhao wrote:
2006 Jun 12
1
kernel BUG at /usr/src/ocfs2-1.2.1/fs/ocfs2/file.c:494!
Hi,
First of all, I'm new to ocfs2 and drbd.
I set up two identical servers (Athlon64, 1GB RAM, GB-Ethernet) with Debian Etch, compiled my own kernel (2.6.16.20),
then compiled the drbd modules and ocfs2 (modules and tools) from source.
The process of getting everything up and running was very easy.
I have one big 140GB partition that is synced with drbd (protocol C) and has an ocfs2
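For context, a drbd resource replicated synchronously (protocol C) between two such boxes might be declared roughly like this; the resource name, hostnames, backing partition and addresses are hypothetical, and the syntax shown is drbd 8.x style:
```
# Hypothetical fragment appended to /etc/drbd.conf on both nodes.
cat >> /etc/drbd.conf <<'EOF'
resource r0 {
    protocol C;                      # synchronous replication
    on etch1 {
        device    /dev/drbd0;
        disk      /dev/sda3;         # the 140GB backing partition
        address   192.168.0.1:7788;
        meta-disk internal;
    }
    on etch2 {
        device    /dev/drbd0;
        disk      /dev/sda3;
        address   192.168.0.2:7788;
        meta-disk internal;
    }
}
EOF
```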
2011 Dec 06
0
script for check drbd at mount ocfs2 at boot in debian
Hi all, I am building a simple cluster to run
mail/virtualbox/samba on two servers in active/active mode (not for
virtualbox).
In my tests drbd sometimes needs to wait a while before it can
start (sync protocols), for example when a node has been down for many
days.
I wrote a script that waits for the sync, checks that the "IP of
the cluster interconnect" is up and running, and then brings up
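A minimal sketch of such a boot-time wait script, assuming a drbd resource named r0, a hypothetical interconnect address, and an ocfs2 mount point (all of these are placeholders):
```
#!/bin/bash
# Wait until drbd r0 is connected and the cluster interconnect answers pings,
# then promote the resource and mount the ocfs2 volume.
RES=r0
PEER_IP=10.0.0.2          # interconnect address of the other node
MOUNTPOINT=/cluster
TIMEOUT=300               # seconds before giving up

for ((waited = 0; waited < TIMEOUT; waited += 5)); do
    if [ "$(drbdadm cstate "$RES" 2>/dev/null)" = "Connected" ] \
       && ping -c 1 -W 2 "$PEER_IP" >/dev/null 2>&1; then
        drbdadm primary "$RES"
        exec mount -t ocfs2 /dev/drbd0 "$MOUNTPOINT"
    fi
    sleep 5
done
echo "drbd $RES or peer $PEER_IP not ready after ${TIMEOUT}s" >&2
exit 1
```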
2007 Oct 05
4
(no subject)
Good day,
I've got a question regarding the usage of rsync that I just cannot
figure out. I've done a fair hunt for the answer, but I'm stumped.
Here is the situation.
I have two PCs running Linux and use rsync to perform a backup from
server1 to server2. For example: rsync -avzr -e 'ssh
-i/root/.ssh/id_rsa' --delete /home/samba/admin/software
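The command above is cut off in the excerpt; a complete invocation of that shape might look like the following (the destination host and path are hypothetical, and -a already implies -r so the extra r is harmless):
```
# Push the software tree to server2 over ssh with a dedicated key, deleting
# files on the destination that no longer exist on the source.
rsync -avzr -e 'ssh -i /root/.ssh/id_rsa' --delete \
      /home/samba/admin/software \
      root@server2:/home/samba/admin/
```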
2013 Mar 21
0
filesystem is going read-only
I have a cluster with 2 SLES 11 SP1 servers and I'm running ocfs2 in
order to keep a disk mounted on both servers. It had been working
perfectly for a long time, but last Friday the ocfs2 filesystem became
read-only. I unmounted it and ran fsck.ocfs2; the problem was solved for a
few hours and then it happened again.
The errors found in the log are:
Mar 15 14:12:01 server2 kernel:
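The recovery described above (unmount, fsck, remount) would look roughly like this; the device and mount point are placeholders, and fsck.ocfs2 must only run while the volume is unmounted on every node:
```
umount /data                      # on both cluster nodes
fsck.ocfs2 -fy /dev/sdb1          # -f: force a full check, -y: apply fixes
mount -t ocfs2 /dev/sdb1 /data
dmesg | tail                      # watch for the error that forced read-only mode
```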
2013 Jan 18
1
unable to unmount drdb+ocfs2 with bind-mount active
Hi all,
I'm not sure if my problem is related to ocfs2 or to drbd, so I am
cross-posting to both lists.
I have a drbd volume [v 8.3.9] (dual-primary) with ocfs2 [v 1.6.3] as the
filesystem.
If I add a "bind mount" like
/var/log/ispconfig/httpd/blog.schaal-24.de
/srv/www/clients/client2/web323/log none bind,nobootwait 0 0
to /etc/fstab, I'm unable to run umount /srv/www (which is
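A hedged sketch of the usual workaround: unmount everything bind-mounted below /srv/www first, then the ocfs2 volume itself (the paths follow the fstab line above):
```
# Unmount bind mounts nested under /srv/www, deepest paths first, then the
# parent ocfs2 filesystem.
awk '$2 ~ "^/srv/www/" {print $2}' /proc/mounts | sort -r | while read -r m; do
    umount "$m"
done
umount /srv/www
```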
2011 Dec 29
0
ocfs2 with RHCS and GNBD on RHEL?
Does anyone have OCFS2 running with the "Red Hat Cluster Suite" on RHEL?
I'm trying to create a more or less completely fault tolerant solution with two storage servers syncing storage with dual-primary DRBD and offering it up via multipath to nodes for OCFS2.
I was able to successfully multipath a dual-primary DRBD based GFS2 volume in this manner using RHCS and GNBD. But switched
2009 Jul 15
1
CentOS-5.3 + DRBD-8.2 + OCFS2-1.4
I've run into a problem mounting an OCFS2 filesystem on a DRBD device. I think it's the same one discussed at http://lists.linbit.com/pipermail/drbd-user/2007-April/006681.html
When I try to mount the filesystem I get an ocfs2_hb_ctl I/O error:
[root@node-6A ~]# mount -t ocfs2 /dev/drbd2 /cshare
ocfs2_hb_ctl: I/O error on channel while starting heartbeat
mount.ocfs2: Error when
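mount.ocfs2 starts the disk heartbeat through ocfs2_hb_ctl, so the drbd device must be Primary (and the o2cb stack online) on the node doing the mount. A hedged checklist, assuming the drbd resource is named r0:
```
drbdadm role r0              # should report Primary on this node
drbdadm dstate r0            # expect UpToDate/UpToDate
service o2cb status          # the o2cb cluster must be online
drbdadm primary r0           # promote the node if it is still Secondary
mount -t ocfs2 /dev/drbd2 /cshare
```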
2014 Aug 22
2
ocfs2 problem on ctdb cluster
Ubuntu 14.04, drbd
Hi
On a drbd Primary node, when attempting to mount our cluster partition:
sudo mount -t ocfs2 /dev/drbd1 /cluster
we get:
mount.ocfs2: Unable to access cluster service while trying to join the
group
We then call:
sudo dpkg-reconfigure ocfs2-tools
Setting cluster stack "o2cb": OK
Starting O2CB cluster ocfs2: OK
And all is well:
Aug 22 13:48:23 uc1 kernel: [
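If dpkg-reconfigure fixes it every time, the o2cb stack is most likely not being enabled at boot, so the cluster service is missing when the mount is attempted. A hedged sketch for Debian/Ubuntu; the cluster name "ocfs2" matches the output above:
```
# /etc/default/o2cb drives the o2cb init script on Debian/Ubuntu.
sudo sed -i -e 's/^O2CB_ENABLED=.*/O2CB_ENABLED=true/' \
            -e 's/^O2CB_BOOTCLUSTER=.*/O2CB_BOOTCLUSTER=ocfs2/' /etc/default/o2cb
sudo service o2cb restart        # loads the stack and brings cluster "ocfs2" online
sudo mount -t ocfs2 /dev/drbd1 /cluster
```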
2009 Jun 24
3
Unexplained reboots in DRBD82 + OCFS2 setup
We're trying to set up a dual-primary DRBD environment with a shared
disk running either OCFS2 or GFS. The environment is CentOS 5.3 with
DRBD82 (we also tried DRBD83 from testing).
Setting up a single-primary disk and running bonnie++ on it works.
Setting up a dual-primary disk, mounting it on only one node (ext3) and
running bonnie++ works.
When setting up ocfs2 on the /dev/drbd0
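Such sudden "reboots" are usually o2cb self-fencing: if a node cannot write its disk heartbeat in time (for example while drbd resynchronizes under bonnie++ load), it panics on purpose. A hedged mitigation sketch is to raise the threshold on every node and restart the stack; the exact default varies by version:
```
# /etc/sysconfig/o2cb on CentOS; the node fences itself after roughly
# (O2CB_HEARTBEAT_THRESHOLD - 1) * 2 seconds without a successful heartbeat write.
grep O2CB_HEARTBEAT_THRESHOLD /etc/sysconfig/o2cb
sed -i 's/^O2CB_HEARTBEAT_THRESHOLD=.*/O2CB_HEARTBEAT_THRESHOLD=61/' /etc/sysconfig/o2cb
service o2cb restart              # only with the ocfs2 volume unmounted
```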
2009 Jan 26
1
ocfs2 + drbd primary/primary "No space left on device"
Hello.
I'm having issues using ocfs2 and drbd in dual-primary mode. After running some filesystem tests that create a lot of small files I very quickly run into "No space left on device".
The non-failing node is able to read from and write to the filesystem, and the failing node is still able to read from and delete files on it.
Ubuntu custom kernel 2.6.27.2
o2cb_ctl version 1.3.9
drbd
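A hedged first triage for this symptom, to separate genuine block exhaustion from inode/allocator exhaustion (device and mount point are placeholders):
```
df -h /mnt/shared                          # free blocks
df -i /mnt/shared                          # free inodes
# Superblock and allocator details; small-file workloads in a cluster can
# exhaust a node's allocators while the global bitmap still shows free space.
debugfs.ocfs2 -R "stats" /dev/drbd0
debugfs.ocfs2 -R "stat //global_bitmap" /dev/drbd0
```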
2009 Jan 13
0
Some questions for understanding ocfs2 & drbd
Hello list,
If I run drbd across two hosts configured as dual-primary, I can access
files via ocfs2 from both sides.
For this, I would have to mount the ocfs2 partition locally on both sides,
and each side runs its own ocfs2 DLM, as far as I understand?
So in detail:
1. /dev/drbd0 configured in dual primary, taking one partition from each
host
2. drbd0 is ocfs2 formatted
3. ocfs2-tools are
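For context, with the o2cb stack each node mounts the volume locally and participates in the shared DLM domain over the interconnect defined in /etc/ocfs2/cluster.conf, which must be identical on both nodes. A minimal two-node sketch (node names and addresses are hypothetical; the real file is tab-indented):
```
# Written identically on both hosts.
cat > /etc/ocfs2/cluster.conf <<'EOF'
cluster:
	node_count = 2
	name = ocfs2

node:
	ip_port = 7777
	ip_address = 10.0.0.1
	number = 0
	name = nodea
	cluster = ocfs2

node:
	ip_port = 7777
	ip_address = 10.0.0.2
	number = 1
	name = nodeb
	cluster = ocfs2
EOF
```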
2007 Sep 04
3
Ocfs2 and debian
Hi.
I'm pretty new to ocfs2 and clusters.
I'm trying to get ocfs2 running on top of a drbd device.
I know it's not the best solution, but for now I must deal with this.
I set up drbd and it works perfectly.
I set up ocfs2 but I'm not able to make it work.
/etc/init.d/o2cb status:
Module "configfs": Loaded
Filesystem "configfs": Mounted
Module
2011 Mar 03
1
OCFS2 1.4 + DRBD + iSCSI problem with DLM
An HTML attachment was scrubbed...
URL: http://oss.oracle.com/pipermail/ocfs2-users/attachments/20110303/0fbefee6/attachment.html