search for: alrt

Displaying 20 results from an estimated 23 matches for "alrt".

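Nearly every match below uses `alrt` as the `ls` flag combination `-alrt` rather than a misspelling of "alert". A minimal sketch of what that combination does (the temp directory and filenames here are illustrative, not from any of the threads):

```shell
# `ls -alrt` combines four flags:
#   -a  include entries whose names begin with a dot
#   -l  long listing (permissions, owner, size, mtime)
#   -r  reverse the sort order
#   -t  sort by modification time
# Net effect: oldest entries first, most recently modified last --
# handy when chasing the newest files in a directory.
demo=$(mktemp -d)
touch "$demo/older"
sleep 1
touch "$demo/newer"
ls -alrt "$demo"   # "older" is listed before "newer"
```

Note that `-t` alone sorts newest-first; it is the added `-r` that puts the most recent entry at the bottom of the listing, next to the prompt.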
2008 Apr 28
6
Strange behaviour of winbind on solaris 8
...ng problems in the username resolution etc. When I turn it off I can log in (e.g. via ssh) as an AD user, but when I start a command like "ls" it gets put in the background immediately. When "nscd" is turned on and I log in again, I can issue commands with no problems, but doing an ls -alrt on a directory gets stuck if a file is owned by a user that is not an AD user. my /etc/nsswitch.conf # # /etc/nsswitch.dns: # # An example file that could be copied over to /etc/nsswitch.conf; it uses # DNS for hosts lookups, otherwise it does not use any other naming service. # # "hosts:"...
2011 Mar 25
1
Magic Number Error Message
...has worked for him, so I don't think that there is something wrong with the script itself -- it seems like there is something wrong on my end. Thank you for your help. Billy > load("H:\\Restoration Center\\Climate Change and > Restoration\\MidAtlFloodRisk\\discharge data\\R files\\ALRT.txt") Error: bad restore file magic number (file may be corrupted) -- no data loaded In addition: Warning message: file 'ALRT.txt' has magic number '# Coh' Use of save versions prior to 2 is deprecated -- View this message in context: http://r.789695.n4.nabble.com/Magic-N...
2010 Apr 15
2
Could not create PID file error when .svn directory exists.
...417 13135 0 15:09 pts/0 00:00:00 grep puppet [root@puppethc01 files]# puppetmasterd --verbose --no-daemonize notice: Starting Puppet server version 0.25.4 Could not run: Could not create PID file: /var/run/puppet/ puppetmasterd.pid Yup there's a PID file [root@puppethc01 files]# ls -alrt /var/run/puppet/puppetmasterd.pid -rw-r--r-- 1 puppet puppet 0 Apr 15 15:10 /var/run/puppet/ puppetmasterd.pid Still no process though. [root@puppethc01 files]# ps -ef | grep puppet root 16464 13135 0 15:10 pts/0 00:00:00 grep puppet Better remove the PID file [root@puppethc01 files]# rm...
2004 Nov 09
1
redoing error causes backup file failure on target
...4/AccChkpt.gz.r_bck redoing /pattern/acc.4/AccChkpt.gz(0) /pattern/acc.4/AccChkpt.gz backed up /pattern/acc.10/AccChkpt.gz to /pattern/acc.10/AccChkpt.gz.r_bck redoing /pattern/acc.10/AccChkpt.gz(0) backed up /pattern/acc.2/AccChkpt.gz to /pattern/acc.2/AccChkpt.gz.r_bck target file was OK: $ ls -alrt acc.*/AccCh* | grep -v r_bc -rw-r----- 1 pattern pattappl 1064087737 Nov 7 01:59 acc.9/AccChkpt.gz -rw-r----- 1 pattern pattappl 1066818822 Nov 7 01:59 acc.5/AccChkpt.gz -rw-r----- 1 pattern pattappl 1068174949 Nov 7 01:59 acc.10/AccChkpt.gz -rw-r----- 1 pattern pattappl 1062181981...
2014 Aug 25
2
Call for testing: OpenSSH 6.7
...g directory `/usr/src/openssh/regress' > > make: *** [tests] Error 2 > > any clues in regress/failed-*? > > Brought that VM back up (admittedly I didn't look too deep at this one - was trying to get through the test suite first), looking at those files I see this: # ls -alrt failed-* -rw-r--r-- 1 root root 308 Aug 25 09:05 failed-ssh.log -rw-r--r-- 1 root root 236 Aug 25 09:05 failed-sshd.log -rw-r--r-- 1 root root 89 Aug 25 09:05 failed-regress.log [root at buildhost regress]# cat failed-regress.log trace: wait for sshd...
2008 Jul 09
0
Samba winbind under Solaris 8 and Bash shell
...[ID 129890 auth.error] pam_winbind(sshd): reque st failed: No such user, PAM error was No account present for user (13), NT erro r was NT_STATUS_NO_SUCH_USER Strange is that the NT_STATUS_NO_SUCH_USER appears after i successfully logged in via ssh and logged out. The Pam module is in place: ls -alrt /usr/lib/security/pam_winbind* -rw-r--r-- 1 root other 102364 Jul 8 14:53 /usr/lib/security/pam_winbind.so.1 and also the nss module: bash-2.03# ls -alrt /usr/lib/nss_* -rwxr-xr-x 1 root bin 14564 Jan 5 2000 /usr/lib/nss_xfn.so.1 -rwxr-xr-x 1 root bin 13476...
2007 Mar 05
1
extra-sounds 1.4.5 timestamped newer than 1.4.6 ???
...risk-extra-sounds-en-alaw-1.4.5.tar.gz -rw-r--r-- 1 0 0 2017747 Feb 22 00:32 asterisk-extra-sounds-en-g729-1.4.5.tar.gz drwxr-xr-x 4 0 0 4096 Feb 22 00:40 .. drwxr-xr-x 6 0 0 4096 Mar 06 00:50 .svn ncftp ...ephony/sounds/releases > dir -alrt -- +-----------------------------------------------------------------+ | James W. Laferriere | System Techniques | Give me VMS | | Network Engineer | 663 Beaumont Blvd | Give me Linux | | babydr@baby-dragons.com | Pacifica, CA. 94044 | only on AXP | +---------------------...
2005 Jan 31
3
[Fwd: IPAPPEND on http://syslinux.zytor.com/faq.php#config]
FYI -------- Original Message -------- Subject: IPAPPEND on http://syslinux.zytor.com/faq.php#config Date: Mon, 31 Jan 2005 15:20:00 +0100 From: Pascal Terjan <pterjan at mandrakesoft.com> Organization: Mandrakesoft To: david at weekly.org Hi, I needed to get the MAC from which we booted using pxelinux (in order to know which interface we used to boot). I found reading the source that
2014 Aug 26
1
Call for testing: OpenSSH 6.7
...) > > > > any clues in regress/failed-*? > > > > > > > > Brought that VM back up (admittedly I didn't look too deep at this one - > > was trying to get through the test suite first), looking at those files I > > see this: > > > > # ls -alrt failed-* > > -rw-r--r-- 1 root root 308 Aug 25 09:05 failed-ssh.log > > -rw-r--r-- 1 root root 236 Aug 25 09:05 failed-sshd.log > > -rw-r--r-- 1 root root 89 Aug 25 09:05 > failed-regress.log > > [root at buildhost regress]...
2018 Apr 18
3
performance.cache-size for high-RAM clients/servers, other tweaks for performance, and improvements to Gluster docs
...one of the 4 replicate gluster servers for maintenance today. There are 2 gluster volumes totaling about 600GB. Not that much data. After the server comes back online, it starts auto healing and pretty much all operations on gluster freeze for many minutes. For example, I was trying to run an ls -alrt in a folder with 7300 files, and it took a good 15-20 minutes before returning. During this time, I can see iostat show 100% utilization on the brick, heal status takes many minutes to return, glusterfsd uses up tons of CPU (I saw it spike to 600%). gluster already has massive performance issues f...
2018 Apr 18
0
performance.cache-size for high-RAM clients/servers, other tweaks for performance, and improvements to Gluster docs
...ster servers for maintenance > today. There are 2 gluster volumes totaling about 600GB. Not that much > data. After the server comes back online, it starts auto healing and > pretty much all operations on gluster freeze for many minutes. > > For example, I was trying to run an ls -alrt in a folder with 7300 > files, and it took a good 15-20 minutes before returning. > > During this time, I can see iostat show 100% utilization on the brick, > heal status takes many minutes to return, glusterfsd uses up tons of > CPU (I saw it spike to 600%). gluster already has m...
2018 Apr 18
2
performance.cache-size for high-RAM clients/servers, other tweaks for performance, and improvements to Gluster docs
...gluster servers for maintenance today. > There are 2 gluster volumes totaling about 600GB. Not that much data. After > the server comes back online, it starts auto healing and pretty much all > operations on gluster freeze for many minutes. > > For example, I was trying to run an ls -alrt in a folder with 7300 files, > and it took a good 15-20 minutes before returning. > > During this time, I can see iostat show 100% utilization on the brick, > heal status takes many minutes to return, glusterfsd uses up tons of CPU (I > saw it spike to 600%). gluster already has mass...
2018 Apr 18
2
performance.cache-size for high-RAM clients/servers, other tweaks for performance, and improvements to Gluster docs
...maintenance today. >> There are 2 gluster volumes totaling about 600GB. Not that much data. After >> the server comes back online, it starts auto healing and pretty much all >> operations on gluster freeze for many minutes. >> >> For example, I was trying to run an ls -alrt in a folder with 7300 files, >> and it took a good 15-20 minutes before returning. >> >> During this time, I can see iostat show 100% utilization on the brick, >> heal status takes many minutes to return, glusterfsd uses up tons of CPU (I >> saw it spike to 600%). glus...
2018 Apr 18
0
performance.cache-size for high-RAM clients/servers, other tweaks for performance, and improvements to Gluster docs
...y. There are 2 gluster volumes totaling about >> 600GB. Not that much data. After the server comes back online, it >> starts auto healing and pretty much all operations on gluster >> freeze for many minutes. >> >> For example, I was trying to run an ls -alrt in a folder with >> 7300 files, and it took a good 15-20 minutes before returning. >> >> During this time, I can see iostat show 100% utilization on the >> brick, heal status takes many minutes to return, glusterfsd uses >> up tons of CPU (I saw it spik...
2018 Apr 18
0
performance.cache-size for high-RAM clients/servers, other tweaks for performance, and improvements to Gluster docs
...> today. There are 2 gluster volumes totaling about 600GB. Not that much >>> data. After the server comes back online, it starts auto healing and pretty >>> much all operations on gluster freeze for many minutes. >>> >>> For example, I was trying to run an ls -alrt in a folder with 7300 >>> files, and it took a good 15-20 minutes before returning. >>> >>> During this time, I can see iostat show 100% utilization on the brick, >>> heal status takes many minutes to return, glusterfsd uses up tons of CPU (I >>> saw it...
2018 Apr 18
1
performance.cache-size for high-RAM clients/servers, other tweaks for performance, and improvements to Gluster docs
...about 600GB. Not that much data. After the server comes >>> back online, it starts auto healing and pretty much all >>> operations on gluster freeze for many minutes. >>> >>> For example, I was trying to run an ls -alrt in a folder >>> with 7300 files, and it took a good 15-20 minutes before >>> returning. >>> >>> During this time, I can see iostat show 100% utilization >>> on the brick, heal status takes many minutes to r...
2008 Feb 12
0
Lustre-discuss Digest, Vol 25, Issue 17
...Content-Type: text/plain; charset=us-ascii On Feb 11, 2008 17:04 +0100, Per Lundqvist wrote: > I got this error today when testing a newly set up 1.6 filesystem: > > n50 1% cd /mnt/test > n50 2% ls > ls: reading directory .: Identifier removed > > n50 3% ls -alrt > total 8 > ?--------- ? ? ? ? ? dir1 > ?--------- ? ? ? ? ? dir2 > drwxr-xr-x 4 root root 4096 Feb 8 15:46 ../ > drwxr-xr-x 4 root root 4096 Feb 11 15:11 ./ > > n50 4% stat . > File: `.' >...
2011 Mar 05
19
[RFC apcsmart V3 00/18] apcsmart driver updates
Sorry for a slightly longer delay than I anticipated; I was swamped with work. This is the next iteration of the patch series adding some functionality to the apcsmart driver, relying on the recently added 'ignorelb'. Follow-up from the previous thread: http://www.mail-archive.com/nut-upsdev at lists.alioth.debian.org/msg02331.html The main difference is that V3 is split into many small patches, so the
2018 Apr 10
0
performance.cache-size for high-RAM clients/servers, other tweaks for performance, and improvements to Gluster docs
Hi Vlad, I actually saw that post already and even asked a question 4 days ago ( https://serverfault.com/questions/517775/glusterfs-direct-i-o-mode#comment1172497_540917). The accepted answer also seems to go against your suggestion to enable direct-io-mode as it says it should be disabled for better performance when used just for file accesses. It'd be great if someone from the Gluster team
2011 Jan 25
1
[RFC] Updates to APC smart driver
This patch introduces a handful of new options I mentioned earlier in: http://www.mail-archive.com/nut-upsdev at lists.alioth.debian.org/msg02088.html See the large commit message in the follow-up for the details and rationale. I realize it's a fairly large diff, so if required I can split it into a few smaller ones. Michal Soltys (1): APC smart driver update and new features.