similar to: Need your inputs on issue: stale sessions in openssh

Displaying 20 results from an estimated 40000 matches similar to: "Need your inputs on issue: stale sessions in openssh"

2014 Jul 25
2
Does openssh support multi-channeling?
Hi All, does openssh support multi-channeling? That is, rather than opening a new TCP socket for each SSH connection, all the SSH connections are multiplexed into one TCP connection. If so, from which version does openssh support it? Thanks, Ravi Pratap
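(For reference: OpenSSH has supported this since version 3.9 via connection multiplexing. A minimal sketch, with the host name and control-socket path as placeholders:

    # First connection becomes the master and owns the TCP socket
    ssh -o ControlMaster=auto \
        -o ControlPath=~/.ssh/mux-%r@%h:%p \
        -o ControlPersist=600 \
        user@example.com

    # Later sessions to the same host reuse the master's TCP connection
    ssh -o ControlPath=~/.ssh/mux-%r@%h:%p user@example.com

    # Check on, or shut down, the shared master connection
    ssh -O check -o ControlPath=~/.ssh/mux-%r@%h:%p user@example.com
    ssh -O exit  -o ControlPath=~/.ssh/mux-%r@%h:%p user@example.com

The same options are usually set once in ~/.ssh/config rather than on the command line.)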
2010 Jun 17
0
delete stale sessions in ror web application
Hi, in my web application a user logs into the application. I want to delete the stale sessions, and before deleting a stale session automatically I need to perform some operations on the database. How do I do this? Regards, Rajkumar -- Posted via http://www.ruby-forum.com/.
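(A common pattern, sketched under the assumption of a standard ActiveRecord sessions table with an updated_at column; the audit table, database name, and one-day cutoff are placeholders:

    #!/bin/sh
    # Run the pre-delete database work first, then the purge, in one job
    mysql myapp_production <<'SQL'
    INSERT INTO session_audit
      SELECT * FROM sessions WHERE updated_at < NOW() - INTERVAL 1 DAY;
    DELETE FROM sessions
      WHERE updated_at < NOW() - INTERVAL 1 DAY;
    SQL

Scheduled from cron, this removes stale sessions while giving you a hook to act on each batch before it is deleted.)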
2006 Jul 04
0
removing stale active record sessions
I have an application that is used by only a small number of people. This app stores its sessions in the database. When the user logs out, the session is removed, but of course there is always the possibility that the user will forget to log out, and the session will go stale. In other web apps, I have used a sweeper Ruby script that I run as a cron job to periodically clean these out. But I
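(The cron-sweeper approach looks roughly like this; the schedule, database name, and 24-hour cutoff are assumptions:

    # crontab entry: purge sessions idle for more than a day, nightly at 03:00
    0 3 * * * mysql myapp_production -e "DELETE FROM sessions WHERE updated_at < NOW() - INTERVAL 1 DAY"
)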
1999 Nov 10
0
Script for removing stale sessions: version for RH6.0? [Offtopic?]
Hello, people at the SAMBA list. I'm going through the same problem that Nicholas Williams described on the samba list first (see references below), and I'm implementing the solution you've described in the list (SO_KEEPALIVE, etc). But I'm stuck with the script issue, as I run RedHat Linux 6.0, which doesn't use ksh but pdksh instead. The ksh script doesn't run in pdksh
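(The SO_KEEPALIVE side of that fix is normally set in smb.conf together with Samba's own idle-session settings; a sketch, with the values as assumptions:

    [global]
        socket options = SO_KEEPALIVE
        keepalive = 300    ; probe idle clients every 300 seconds
        deadtime = 15      ; drop sessions with no open files after 15 idle minutes

With these set, stale sessions are reaped by smbd itself, independent of any external ksh/pdksh script.)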
2017 Jul 31
0
Possible stale .glusterfs/indices/xattrop file?
On 07/30/2017 02:24 PM, mabi wrote: > Hi Ravi, > > Thanks for your hints. Below you will find the answers to your questions. > > First I tried to start the healing process by running: > > gluster volume heal myvolume > > and then, as you suggested, watched the output of the glustershd.log file > but nothing appeared in that log file after running the above command. >
2017 Jul 31
0
Possible stale .glusterfs/indices/xattrop file?
On 07/31/2017 02:00 PM, mabi wrote: > To quickly summarize my current situation: > > on node2 I have found the following xattrop/indices file which > matches the GFID from the "heal info" command (below is the output of > "ls -lai"): > > 2798404 ---------- 2 root root 0 Apr 28 22:51 >
2017 Jul 31
0
Possible stale .glusterfs/indices/xattrop file?
On 07/31/2017 12:20 PM, mabi wrote: > I did a find on this inode number and I could find the file, but only > on node1 (nothing on node2 or the new arbiter node). Here is an ls > -lai of the file itself on node1: Sorry, I don't understand: isn't that (XFS) inode number specific to node2's brick? If you want to use the same command, maybe you should try `find
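(The command being suggested appears in full later in this thread: list every hard link of the file's GFID entry on a brick, e.g.

    sudo find /data/myvolume/brick \
        -samefile /data/myvolume/brick/.glusterfs/29/e0/29e0d13e-1217-41cc-9bda-1fbbf781c397 \
        -ls

run separately on each node, since inode numbers and hard links are local to each brick's filesystem.)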
2017 Jul 31
0
Possible stale .glusterfs/indices/xattrop file?
On 07/31/2017 02:33 PM, mabi wrote: > Now I understand what you mean with the "-samefile" parameter of > "find". As requested I have now run the following command on all 3 > nodes, with the output of all 3 nodes below: > > sudo find /data/myvolume/brick -samefile > /data/myvolume/brick/.glusterfs/29/e0/29e0d13e-1217-41cc-9bda-1fbbf781c397 > -ls > >
2017 Jul 30
2
Possible stale .glusterfs/indices/xattrop file?
Hi Ravi, Thanks for your hints. Below you will find the answers to your questions. First I tried to start the healing process by running: gluster volume heal myvolume and then, as you suggested, watched the output of the glustershd.log file, but nothing appeared in that log file after running the above command. I checked the files which need healing using the "heal <volume> info"
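(For reference, the standard commands involved here, using the volume name from this thread:

    # Kick off a self-heal of everything flagged as needing heal
    gluster volume heal myvolume

    # List the entries each brick still reports as pending heal
    gluster volume heal myvolume info

    # Self-heal daemon activity is logged per node in glustershd.log
    tail -f /var/log/glusterfs/glustershd.log
)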
2017 Jul 30
0
Possible stale .glusterfs/indices/xattrop file?
On 07/29/2017 04:36 PM, mabi wrote: > Hi, > > Sorry for mailing again, but as mentioned in my previous mail, I have > added an arbiter node to my replica 2 volume and it seems to have gone > fine except for the fact that there is one single file which needs > healing and does not get healed, as you can see here from the output of > a "heal info": > > Brick
2017 Jul 31
2
Possible stale .glusterfs/indices/xattrop file?
I did a find on this inode number and I could find the file, but only on node1 (nothing on node2 or the new arbiter node). Here is an ls -lai of the file itself on node1: -rw-r--r-- 1 www-data www-data 32 Jun 19 17:42 fileKey As you can see it is a 32-byte file, and as you suggested I ran a "stat" on this very same file through a glusterfs mount (using fuse), but unfortunately nothing
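(Statting the file through the FUSE mount point is the usual way to trigger a lookup-driven self-heal; a sketch, with the mount point and the file's path inside the volume as assumptions:

    # Mount the volume over FUSE and look the file up to trigger a heal
    mount -t glusterfs node1.domain.tld:/myvolume /mnt/myvolume
    stat /mnt/myvolume/path/to/fileKey
)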
2017 Jul 31
2
Possible stale .glusterfs/indices/xattrop file?
To quickly summarize my current situation: on node2 I have found the following xattrop/indices file which matches the GFID from the "heal info" command (below is the output of "ls -lai"): 2798404 ---------- 2 root root 0 Apr 28 22:51 /data/myvolume/brick/.glusterfs/indices/xattrop/29e0d13e-1217-41cc-9bda-1fbbf781c397 As you can see this file has inode number 2798404, so I ran
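(Given the link count of 2 in that listing, a second hard link to the same inode should exist somewhere on that brick; a sketch of locating it by inode number, which is local to node2's brick filesystem:

    sudo find /data/myvolume/brick -inum 2798404 -ls
)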
2017 Jul 31
2
Possible stale .glusterfs/indices/xattrop file?
Now I understand what you mean with the "-samefile" parameter of "find". As requested I have now run the following command on all 3 nodes, with the output of all 3 nodes below: sudo find /data/myvolume/brick -samefile /data/myvolume/brick/.glusterfs/29/e0/29e0d13e-1217-41cc-9bda-1fbbf781c397 -ls node1: 8404683 0 lrwxrwxrwx 1 root root 66 Jul 27 15:43
2017 Jul 29
2
Possible stale .glusterfs/indices/xattrop file?
Hi, sorry for mailing again, but as mentioned in my previous mail, I have added an arbiter node to my replica 2 volume and it seems to have gone fine except for the fact that there is one single file which needs healing and does not get healed, as you can see here from the output of a "heal info": Brick node1.domain.tld:/data/myvolume/brick Status: Connected Number of entries: 0 Brick
2016 Oct 14
0
Data Stale at random intervals
Just an update: these messages are in the syslog when NUT is no longer able to communicate with the server: Oct 14 01:41:59 golgotha upsmon[1300]: Poll UPS [nailbunny@localhost] failed - Data stale Oct 14 01:42:04 golgotha upsmon[1300]: Poll UPS [nailbunny@localhost] failed - Data stale Oct 14 01:42:09 golgotha upsmon[1300]: Poll UPS [nailbunny@localhost] failed - Data stale Oct 14
2016 Jun 27
2
query "Data stale" from the cmdline
Hi Charles, On Monday, 27. June 2016 08:59:02 Charles Lepple wrote: > > is there a way to query from the cmdline if the UPS data is stale? > > The "data stale" state applies to the entire set of variables for a UPS, > so if the exit code of `upsc` is zero, the data set is not stale. > > "upsc" outputs a lot of information, but not whether the data is stale.
2006 May 07
0
updateinfo and stale data
Hi, FreeBSD 5.4, NUT from ports (2.0.3), serial on /dev/cuaa1, Ellipse 1200 USBS. I set the ups.conf up as: [ellipse] driver=mge-utalk port=/dev/cuaa1 desc="Ellipse" When I go to start it I get: Network UPS Tools - UPS driver controller 2.0.3 Network UPS Tools - MGE UPS SYSTEMS/U-Talk driver 0.87 (2.0.3) Could not get multiplier table: using raw
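(The ups.conf stanza from this message, reflowed on its own lines:

    [ellipse]
        driver = mge-utalk
        port = /dev/cuaa1
        desc = "Ellipse"
)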
2016 Jun 27
0
query "Data stale" from the cmdline
On Jun 27, 2016, at 4:46 AM, Thomas Jarosch wrote: > > Hello, > > is there a way to query from the cmdline if the UPS data is stale? The "data stale" state applies to the entire set of variables for a UPS, so if the exit code of `upsc` is zero, the data set is not stale. > > "upsc" outputs a lot of information, but not whether the data is stale. > Example
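(Based on that exit-code behavior, a minimal shell check; the UPS name is the one used elsewhere in these threads:

    # upsc exits 0 only when the data set is not stale
    if upsc nailbunny@localhost > /dev/null 2>&1; then
        echo "UPS data is fresh"
    else
        echo "UPS data is stale or the UPS is unreachable"
    fi
)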
2007 Jan 27
2
"no longer stale" when disconnected with 2.0.5 newhidups
Hello, I'm using the driver newhidups with an APC Back-UPS CS 500. Most things work fine except the following: after I disconnect the UPS, upsd writes "Data for UPS [apc] is stale - check driver" in /var/log/messages. In the same second it reports "UPS [apc] data is no longer stale". This repeats the whole time the UPS is disconnected: Jan 25 14:45:03 degpn026w226 kernel: usb
2015 Sep 14
2
stale/dead ups logic
Hi: when testing NUT in our environment, we found something that NUT could perhaps tune for the "stale/dead UPS" situation. Currently a "dead" UPS is assumed to be alive (e.g., a host shutdown is deemed unnecessary), unless it was in the "OB" state before going stale. Our environment (ServerA + ServerB form a cluster): ServerA -> USB to UPSA -> two power supplies fed by UPSA and UPSB
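(A dual-feed setup like this is normally described to upsmon with one MONITOR line per feed plus MINSUPPLIES; a sketch, with UPS names, hosts, and credentials as placeholders:

    # upsmon.conf: each server has two power supplies, one per UPS;
    # MINSUPPLIES 1 means the host can keep running on a single live feed
    MONITOR upsa@serverA 1 monuser secret slave
    MONITOR upsb@serverB 1 monuser secret slave
    MINSUPPLIES 1
)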