Displaying 20 results from an estimated 1000 matches similar to: "Node hangs when trying to create/delete file"
2004 Feb 20
1
ocfs hung
Having a problem with OCFS.
Device /dev/sdd is mounted on two nodes, node 0 and node 1.
Tried to create file /u01/oracle/prod/proddata/temp01.dbf from node 1
(ALTER TABLESPACE TEMP ADD TEMPFILE...); this caused the Oracle server process to
hang in a "D" state, apparently while trying to create the file. The file has
not been created yet. If I type "ls" from node 2 in directory
/u01/oracle/prod
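A quick way to confirm the hang described above, assuming a standard Linux `ps`, is to list processes stuck in uninterruptible sleep ("D" state):

```shell
# List processes in uninterruptible sleep ("D" state); a server process
# hung on an OCFS lock while creating a file would show up here.
ps -eo pid,stat,comm | awk '$2 ~ /^D/'
```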
2006 Nov 16
1
Regarding debugocfs
Hi experts,
My customer ran debugocfs to check file_size and extent info,
but values such as file_size, alloc_size, and next_free_ext were 0.
(/dev/sdi1 contains datafiles and arc files)
# debugocfs -a 0 /dev/sdi1
debugocfs 1.0.10-PROD1 Fri Mar 5 14:35:29 PST 2004
(build fcb0206676afe0fcac47a99c90de0e7b)
file_extent_0:
file_number = 128
disk_offset = 1433600
curr_master = 0
file_lock =
2004 Mar 10
9
Lock contention issue with ocfs
I am still having this weird problem with nodes hanging while running
OCFS. I'm using OCFS 1.0.9-12 and RHAS 2.1.
I've been working on tracking it down, and here's what I've got so far:
1. I create a file from node 0. This succeeds; I can /bin/cat the
file, append, edit, or whatever.
2. From node 1, I do an operation that accesses the DirNode (e.g.
/bin/ls)
3. Node 0
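The first two steps above can be sketched in shell. MNT is a placeholder for the shared OCFS mount point (the thread uses paths under /u01); the mktemp default just keeps the sketch runnable anywhere:

```shell
# MNT is a placeholder: substitute the shared OCFS mount point.
MNT=${MNT:-$(mktemp -d)}

# Step 1, on node 0: create a file and verify it reads back.
echo "hello" > "$MNT/testfile"
cat "$MNT/testfile"

# Step 2, on node 1: any operation that touches the DirNode, e.g. a listing.
ls "$MNT"
```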
2004 Apr 21
1
Fwd: RE: OCFS Hang
Oh yeah - easy way to check, Randy:
Next time your node hangs, get on the OTHER NODE and go into each
directory where files are being opened (datafiles, archive logs,
control files, redo logs, etc.) and delete a file (you can create one
first and then delete it). If this causes the hung node to recover, then
you're having the same problem I was having.
Jeremy
>>> "Jeremy
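Jeremy's check can be sketched as a small shell probe run from the healthy node. DIR is a placeholder for any directory on the hung volume that holds datafiles, archive logs, control files, or redo logs; the mktemp default just keeps the sketch runnable anywhere:

```shell
# DIR is a placeholder: point it at a directory on the hung OCFS volume.
DIR=${DIR:-$(mktemp -d)}

# Create a scratch file, then delete it; per the post, this create/delete
# from the healthy node recovers the hung node if it's the same problem.
touch "$DIR/ocfs_probe.tmp"
rm "$DIR/ocfs_probe.tmp"
```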
2004 May 27
1
Follow up on async I/O question
Sunil Mushran wrote:
> And as that gets into reading debugocfs outputs, the user has to make
> the determination whether the effort is worth the gain in performance.
> Why only logfiles? Well, because Oracle performs large I/Os only to the
> logfiles. The I/Os to the datafiles are in smaller chunks. Hope this
> helps. Sunil
I'd like to point out a small mistake in this post. Oracle most certainly
2003 Dec 10
1
[BUG] node 0 hangs until disk unmounted on node 1
I'm currently part of a project implementing Oracle eBusiness Suite 11i
on RAC. We're using a two-node cluster with shared storage; both nodes
are configured identically. The kernel is 2.4.9-e.27enterprise and OCFS is
1.0.9-11. I have checked, and the shared storage can be accessed
directly without any problems from both nodes (/dev/sdx).
Curious if anyone has any suggestions or comments
2018 Dec 04
0
[2.3.4] Segmentation faults
> On 04 December 2018 at 16:46 Joan Moreau via dovecot <dovecot at dovecot.org> wrote:
>
>
> Hi
>
> How to solve this ?
>
> So many similar segfaults
>
> Thank you
>
> On 2018-11-30 06:11, Joan Moreau wrote:
>
> > Another (very, very long) example:
> >
> > # gdb /usr/libexec/dovecot/indexer-worker
2004 Apr 22
1
A couple more minor questions about OCFS and RHEL 3
Sort of a followup...
We've been running OCFS in sync mode for a little over a month now,
and it has worked reasonably well. Performance is still a bit spotty, but
we're told that the next kernel update for RHEL3 should improve the
situation. We might eventually move to Polyserve's cluster filesystem for
its multipathing capability and potentially better performance, but at least
we
2018 Dec 04
0
[2.3.4] Segmentation faults
We don't consider it "very early beta"; we consider it production ready. It is a bit more work to set up, though.
Aki
> On 04 December 2018 at 17:16 Joan Moreau <jom at grosjo.net> wrote:
>
>
> Thanks for mySql
>
> For Squat vs. Solr: Solr falls far short of Squat in terms of
> results. If I set up Solr and search (via the search in Roundcube
2018 Dec 04
0
Squat
Yes, but the bottom line is that Squat does the job needed for end
users; Solr does not.
On 2018-12-04 16:53, Michael Slusarz wrote:
>> On December 4, 2018 at 8:18 AM Aki Tuomi <aki.tuomi at open-xchange.com> wrote:
>>
>> We don't consider it "very early beta"; we consider it production ready. It is a bit more work to set up, though.
>
> FWIW, Squat
2018 Nov 30
0
[2.3.4] Segmentation faults
Another (very, very long) example:
# gdb /usr/libexec/dovecot/indexer-worker
core.indexer-worker.0.3a33f56105e043de802a7dfcee265a07.21017.1543533424000000
GNU gdb (GDB) 8.2
Copyright (C) 2018 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later
<http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to
2018 Dec 04
2
[2.3.4] Segmentation faults
> On December 4, 2018 at 8:18 AM Aki Tuomi <aki.tuomi at open-xchange.com> wrote:
>
> We don't consider it "very early beta"; we consider it production ready. It is a bit more work to set up, though.
FWIW, Squat has been deprecated since 2.0, so none of this should come as a surprise.
https://wiki.dovecot.org/Plugins/FTS
> Aki
>
> > On 04 December
2018 Dec 04
2
[2.3.4] Segmentation faults
Hi
How to solve this ?
So many similar segfaults
Thank you
On 2018-11-30 06:11, Joan Moreau wrote:
> Another (very, very long) example:
>
> # gdb /usr/libexec/dovecot/indexer-worker core.indexer-worker.0.3a33f56105e043de802a7dfcee265a07.21017.1543533424000000
> GNU gdb (GDB) 8.2
> Copyright (C) 2018 Free Software Foundation, Inc.
> License GPLv3+: GNU GPL version 3 or
2018 Dec 04
3
[2.3.4] Segmentation faults
Thanks for mySql
For Squat vs. Solr: Solr falls far short of Squat in terms of
results. If I set up Solr and search (via the search in Roundcube or
Evolution) for a keyword or part of a keyword, the results are
complete nonsense. The difference between "search in full body" and
"search in fields" does not even work.
Solr with Dovecot seems very early beta
2003 Jun 20
0
Problems Compiling ocfs - RedHat8
Hi,
First of all, great project you are developing, really useful, but...
Compiling OCFS under a clean Red Hat 8 installation produces an error during compilation:
make -C format
make[2]: Entering directory `/root/ocfs-1.0.8/tools/format'
gcc -g -O2 -pipe -I../../ocfs2/Common/inc -Iinc -I../../tools/debugocfs -I../../ocfs2/Linux/inc -DLINUX -DUSERSPACE_TOOL -o format.o -c
2004 Mar 30
1
RHEL 3 and OCFS 1.0.9-12 / 1.0.11-1
Is the following statement still valid for either OCFS 1.0.9-12 or OCFS
1.0.11-1?
The following is from one of the questions put forward by Derek Suzuki on
Ocfs-users
"A couple more minor questions about OCFS and RHEL3"
> Next, I saw a Metalink thread which suggests that async I/O is not
> supported on OCFS with RHAS 2.1. It doesn't say anything about RHEL3.
> We've
2004 Jun 04
1
RHEL 3 -- OCFS 1.0.9-12 and 1.0.12
Running the database in async mode on RHEL 3 carries a potential risk of redo log failure due to short I/Os.
A note from
http://oss.oracle.com/projects/ocfs/dist/files/RedHat/RHEL3/i386/README.txt
says that the above mentioned problem is fixed in OCFS 1.0.9-12
"RELEASE 1.0.9-12
Fixes a potential corruption with large, aligned, direct I/Os, for
example Oracle redo logs or direct path SQL
2004 Oct 20
1
i-node showing 100% used whereas the partitions are empty
Hi Sunil,
I had filed a bug and saw your response stating that it would be fixed in version 14.
In the meantime, what we want to know is whether this is a minor bug that can be ignored for now.
Does reporting 100% inode usage cause any problem for the OCFS filesystem, or can we ignore this bug and go into production?
Also, can you tell us when version 14 will be released?
R'gds
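The symptom in the subject can be observed with df's inode view; the mount point below is a placeholder for the OCFS partition:

```shell
# Show inode usage (IUse%); the bug above makes OCFS report 100% of
# inodes used even when the partition is nearly empty. Substitute the
# OCFS mount point for "/".
df -i /
```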