similar to: That darned orphaned socket hang

Displaying 20 results from an estimated 1000 matches similar to: "That darned orphaned socket hang"

2019 Dec 11
2
[PATCH 1/2] podcheck: __INCLUDE:file.pod__ and __VERBATIM:file.txt__ in POD files.
Make sure the pod checker script can deal with the newer additions of podwrapper.pl. Followup of commit 46e59e9535c2fcd1c188464b5249a249f22af1a0. --- podcheck.pl | 36 ++++++++++++++++++++++++++++++++++++ 1 file changed, 36 insertions(+) diff --git a/podcheck.pl b/podcheck.pl index 527a2e47d..795fe0e9b 100755 --- a/podcheck.pl +++ b/podcheck.pl @@ -83,6 +83,15 @@ used where the POD includes
2016 Jun 02
2
[PATCH] Link count attribute extension
Hello, This patch adds client and server support for transmitting the st_nlink field across SSH2_FXP_NAME and SSH2_FXP_ATTRS responses. Please let me know if there is anything I can do to improve this patch. I am not subscribed to the list, so please CC me. Index: sftp-common.c =================================================================== RCS file: /cvs/src/usr.bin/ssh/sftp-common.c,v retrieving
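
The underlying idea is simple: the server fills st_nlink via lstat() and packs it as a 32-bit value alongside the attributes it already sends. Where exactly it goes in the SFTP messages is the business of the patch itself; the following is only a toy, self-contained sketch of the read-and-pack step, with an invented put_u32() helper rather than OpenSSH's real buffer routines.

/* Toy sketch (not the OpenSSH patch): read st_nlink for a path and
 * pack it as a 32-bit big-endian value, the way a protocol field
 * would be serialized.  put_u32() is invented for illustration. */
#include <stdint.h>
#include <stdio.h>
#include <sys/stat.h>

static void put_u32(uint8_t out[4], uint32_t v)
{
    out[0] = (uint8_t)(v >> 24);
    out[1] = (uint8_t)(v >> 16);
    out[2] = (uint8_t)(v >> 8);
    out[3] = (uint8_t)v;
}

int main(int argc, char **argv)
{
    struct stat st;
    uint8_t wire[4];

    if (argc < 2) {
        fprintf(stderr, "usage: %s <path>\n", argv[0]);
        return 1;
    }
    if (lstat(argv[1], &st) != 0) {
        perror("lstat");
        return 1;
    }
    put_u32(wire, (uint32_t)st.st_nlink);
    printf("%s: nlink=%lu (wire: %02x %02x %02x %02x)\n", argv[1],
           (unsigned long)st.st_nlink, wire[0], wire[1], wire[2], wire[3]);
    return 0;
}
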
2010 Jul 12
3
deliver and root user
On RHEL5, dovecot 1.0.7, I have set up sendmail to use `deliver` as my local MDA. It keeps giving me this error for the root user though: Jul 12 12:51:29 mail sendmail[4105]: o699225f001348: to=<root at localhost.localdomain>, ctladdr=<root at localhost.localdomain> (0/0), delay=3+08:49:26, xdelay=00:00:00, mailer=local, pri=7502879, dsn=4.0.0, stat=Deferred: local mailer
2008 Sep 22
1
[PATCH 1/1] OCFS2: add nlink check in ocfs2_inode_revalidate()
nlink should also be checked in ocfs2_inode_revalidate(). Before the OCFS2_INODE_DELETED flag is set in ip_flags (between unlink and delete vote), nlink may be 0. The patch is against 1.4 git; 1.2 svn should have the same patch with different line numbers. Signed-off-by: Wengang wang <wen.gang.wang at oracle.com> -- diff --git a/fs/ocfs2/inode.c b/fs/ocfs2/inode.c index 591e693..6b5a83e
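
Stated generically, the check says: if the link count is already zero, treat the inode as deleted even when the OCFS2_INODE_DELETED flag has not yet been set. A simplified, stand-alone sketch of that shape (the struct, flag and function below are stand-ins, not the real OCFS2 code):

/* Simplified sketch of the check described above, not the real OCFS2
 * code: a revalidate-style helper that treats nlink == 0 as "this
 * inode has been unlinked, report it as gone". */
#include <errno.h>

struct fake_inode {
    unsigned int i_nlink;   /* link count */
    unsigned long i_flags;  /* per-fs status flags */
};

#define FAKE_INODE_DELETED 0x1UL

static int fake_inode_revalidate(struct fake_inode *inode)
{
    /* Already marked deleted (e.g. between unlink and delete vote). */
    if (inode->i_flags & FAKE_INODE_DELETED)
        return -ENOENT;

    /* The addition the patch describes: a zero link count also means
     * the inode is gone, even if the deleted flag is not yet set. */
    if (inode->i_nlink == 0)
        return -ENOENT;

    return 0;
}

int main(void)
{
    struct fake_inode ino = { .i_nlink = 0, .i_flags = 0 };
    return fake_inode_revalidate(&ino) == -ENOENT ? 0 : 1;
}
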
2004 Jun 19
2
DU and Hard Links?
Hi, I'm doing a 30 day rotational backup using rsync. If I go to the root of the backup directory and use: du --max-depth=1 -h, it gives me the actual space being taken up by each incremental directory, the space being taken by the current directory, and then the total of all. For example: 44G /Current 1G /06-20-2004 750M /06-19-2004 Etc... Etc.. .. .. 70G Total But what I would like to
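
What makes the incremental directories so small is that rsync's rotational backups hard-link unchanged files to the previous snapshot, and du charges each inode only once per run, so only /Current carries the full weight of the data. A small stand-alone POSIX example of the mechanism (two names, one inode):

/* Illustration of why hard-linked backup trees stay small: two names,
 * one inode, so tools like du only charge the data blocks once. */
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    struct stat a, b;

    FILE *f = fopen("original.dat", "w");
    if (!f)
        return 1;
    fputs("some backed-up data\n", f);
    fclose(f);

    unlink("hardlink.dat");                 /* ignore errors on rerun */
    if (link("original.dat", "hardlink.dat") != 0)
        return 1;

    stat("original.dat", &a);
    stat("hardlink.dat", &b);
    printf("same inode: %s, link count: %lu\n",
           a.st_ino == b.st_ino ? "yes" : "no",
           (unsigned long)a.st_nlink);      /* prints 2 */
    return 0;
}
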
2012 May 09
6
"file not found" under high-contention
Hello, For several years I've been experiencing an intermittent Samba error when running a very intense, highly parallel build/compile jobset. A file is reported as "not found" even though it most certainly exists and re-running the compile jobset always succeeds. Samba version is 3.6.4 running on CentOS 5.8 with 64-bit kernel 2.6.18-308.4.1.el5. Windows side is 64-bit Window
2020 Apr 23
2
CIFS VFS: in dmesg when Linux accesses eComStation's (OS/2) FAT filesystem shares
Items in dmesg when FAT shares are accessed from a web browser: CIFS VFS: bogus file nlink value 0 When accessed from FC/L (an orthodox file manager, OFM): CIFS VFS: illegal date, month 0 day: 0 When the share is initially mounted: CIFS: Attempting to mount //hostname/E Use of the less secure dialect vers=1.0 is not recommended unless required for access to very old servers CIFS VFS: Send error
2004 Nov 24
3
Jail fails
Hi, We are trying to create a jail with FreeBSD 5.3 but it fails with this error: cc -O -pipe -I/usr/obj/usr/src/i386/legacy/usr/include -c /usr/src/games/fortune/strfile/strfile.c make: don't know how to make /j/usr/lib/libc.a. Stop *** Error code 2 We are executing these commands in /usr/src: export D=/j make world DESTDIR=$D Are there any problems with FreeBSD 5.3? We have ever
2023 Jul 24
1
[Bug 3591] New: Introduction of "users-groups-by-id@openssh.com" causes nlink to be lost with long view
https://bugzilla.mindrot.org/show_bug.cgi?id=3591 Bug ID: 3591 Summary: Introduction of "users-groups-by-id at openssh.com" causes nlink to be lost with long view Product: Portable OpenSSH Version: 9.1p1 Hardware: All OS: All Status: NEW Severity: normal Priority:
2006 Feb 07
1
orphaned sip channels?
My sip show channels shows some channels active that I can not make sense out of, and they have been that way for days, so I am pretty sure they are orphans. Is there a way to show active CALLS (instead of channels) to try and determine the source? Does the output below provide any clues as to why these channels might show active? Anyone aware of related bugs? The #'s indicate original
2010 Aug 02
1
kernel panic not syncing fatal exception due to reiserfs -- rebooted properly on ext3
Hi, we had one of our mail servers go into kernel panic mode (not syncing: fatal exception) because /var/queue/postfix was on a reiserfs partition. It had not been giving us any issue for quite some time, but yesterday and today it went into kernel panic mode; when we hashed out (commented out) the reiserfs partition it booted properly. We reformatted the partition as ext3 later and things are working fine now. Could anyone
2019 Jun 21
2
LLD handling of orphaned sections
On Fri, 21 Jun 2019 at 13:05, Rui Ueyama via llvm-dev <llvm-dev at lists.llvm.org> wrote: > > I think George (cc'ed) knows better than me in that area. > > lld is underdocumented, and in particular there is virtually no documentation about its linker script support. Our basic strategy is to follow the GNU documentation and the implementations unless it is too hard or
2010 Jul 19
1
btrfs: unlinked X orphans messages
Hi, I am using btrfs for remote backups (via rsync), with daily and weekly snapshots. I see these messages in kern.log: Jul 18 07:09:43 backup1 kernel: [3437126.458374] btrfs: unlinked 9 orphans Jul 18 12:01:01 backup1 kernel: [3454604.905856] btrfs: unlinked 1 orphans Jul 18 13:01:51 backup1 kernel: [3458254.990199] btrfs: unlinked 1 orphans Jul 19 04:01:41 backup1 kernel: [3512244.236347]
2009 Feb 19
2
Patch to recover orphans in offline slots
This patch is against ocfs2-1.4 and also applies to ocfs2-1.2. ocfs2 mainline requires only the first portion of the patch, so a separate patch will be made for that.
2009 Feb 28
1
[PATCH 1/1] Patch to recover orphans from the slot during mount
Currently we only queue recovery during mount if the journal is dirty. If the last node holding orphans in other nodes' orphan directories dies and is the first one to mount, then it only recovers its own orphan directory, which leaves the orphans in the other nodes' slots. Since the other nodes' journals are clean, they will not queue to recover their orphan directories. This patch queues to recover orphans
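
The gist of the fix, stated generically: at recovery/mount time, walk every slot and queue an orphan scan for any slot that no live node currently owns, rather than scanning only your own slot and the dead nodes'. A rough, hypothetical sketch of that loop (none of the identifiers below are real OCFS2 symbols):

/* Hypothetical sketch only: walk all slots and queue an orphan scan
 * for each one that is not currently owned by a live node. */
#include <stdio.h>

#define MAX_SLOTS 8

struct slot_info {
    int in_use;  /* 1 if a live node currently owns this slot */
};

/* Stand-in for "queue orphan recovery work for this slot". */
static void queue_orphan_scan(int slot)
{
    printf("queueing orphan scan for slot %d\n", slot);
}

static void recover_offline_slot_orphans(const struct slot_info *slots, int my_slot)
{
    for (int slot = 0; slot < MAX_SLOTS; slot++) {
        if (slot == my_slot)
            continue;             /* our own slot is handled normally */
        if (!slots[slot].in_use)  /* offline slot: may still hold orphans */
            queue_orphan_scan(slot);
    }
}

int main(void)
{
    struct slot_info slots[MAX_SLOTS] = { { .in_use = 1 } };  /* only slot 0 live */
    recover_offline_slot_orphans(slots, 0);
    return 0;
}
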
2003 Dec 17
2
TODO hardlink performance optimizations
On Mon, 15 Dec 2003, jw schultz <jw@pegasys.ws> wrote: > OK, first pass on TODO complete. .... > PERFORMANCE ---------------------------------------------------------- .... > Traverse just one directory at a time > > Traverse just one directory at a time. Tridge says it's possible. > > At the moment rsync reads the whole file list into memory at the >
2009 Apr 07
1
Backport to 1.4 of patch that recovers orphans from offline slots
The following patch is a backport of the patch that recovers orphans from offline slots. It is being backported from mainline to 1.4. Mainline patch: 0001-Patch-to-recover-orphans-in-offline-slots-during-rec.patch Thanks, --Srini
2009 Mar 06
1
[PATCH 1/1] Patch to recover orphans in offline slots during recovery and mount (revised)
During recovery, a node recovers orphans in its slot and the dead node(s). But if the dead nodes were holding orphans in offline slots, they will be left unrecovered. If the dead node is the last one to die and is holding orphans in other slots and is the first one to mount, then it only recovers its own slot, which leaves orphans in offline slots. This patch queues complete_recovery
2009 Mar 04
2
[PATCH 1/1] Patch to recover orphans in offline slots during recovery and mount
During recovery, a node recovers orphans in its slot and the dead node(s). But if the dead nodes were holding orphans in offline slots, they will be left unrecovered. If the dead node is the last one to die and is holding orphans in other slots and is the first one to mount, then it only recovers its own slot, which leaves orphans in offline slots. This patch queues complete_recovery
2010 Nov 09
1
btrfs: unlinked 34 orphans
Hi, I received the message: btrfs: unlinked 34 orphans Just out of curiosity: what does it mean? Thanks in advance Bye, David Arendt
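
As background for this and several of the orphan-related threads above: an orphan inode is one whose last directory entry was removed while it was still open, so the filesystem has to finish deleting it later, typically on the next mount, and that cleanup is what "unlinked N orphans" is reporting. A minimal POSIX demonstration of how such an inode arises:

/* Minimal demonstration of how an orphan inode arises: unlink the last
 * name of a file while a descriptor is still open.  The inode survives
 * (and can still be written) until the descriptor is closed; if the
 * machine crashed at that point, the filesystem would clean it up
 * later, which is what "btrfs: unlinked N orphans" reports. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = open("scratch.tmp", O_CREAT | O_RDWR, 0600);
    if (fd < 0)
        return 1;

    unlink("scratch.tmp");           /* last name gone: inode is now orphaned */
    ssize_t n = write(fd, "still usable\n", 13);  /* data still reaches the inode */
    (void)n;
    printf("file unlinked but fd %d still works\n", fd);

    close(fd);                       /* now the inode is actually freed */
    return 0;
}
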