Displaying 20 results from an estimated 20000 matches similar to: "using rsync on files that are being written to"
2005 Jul 20
1
Corrupted indices (and accidental checkin)
I've seen a number of corrupted-index problems pop up on our test-dovecot
servers (used by a few people at the office), and I'm wondering why corrupt
indices don't just automatically get deleted? Something like:
if (!MAIL_INDEX_IS_IN_MEMORY(index)) {
        unlink(index->filepath);
}
in src/lib-index/mail-index.c:mail_index_set_error() comes to mind. It
2010 Nov 30
1
OT: for those wondering on the stability
root@pbx:~# uptime
23:10:15 up 606 days, 9:38, 1 user, load average: 0.31, 0.08, 0.02
Customer called: they are having a scheduled power outage for most of
the day because of construction, and asked if I could shut down the machine
gracefully. So I decided to run uptime first.
Enjoy
2008 Apr 02
1
show uptime and last reload
Hi,
I just upgraded from 1.2 to 1.4.
In 1.2, when I did a "show uptime" I used to see a
second line telling me the time since the last reload.
Has this been removed in 1.4?
The following is the output of my two test boxes:
Connected to Asterisk 1.4.18.1 currently running on
voip2 (pid = 10605)
Verbosity is at least 3
voip2*CLI> show uptime
System uptime: 15 hours, 55 seconds
2011 Feb 15
5
uptime
Now this is what I call uptime...
minipbx*CLI> show uptime
System uptime: 41 years, 7 weeks, 6 days, 3 hours, 26 minutes, 46 seconds
Last reload: 8 hours, 3 minutes, 51 seconds
Bizarre bug?
root@minipbx:~# asterisk -V
Asterisk 1.4.37
root@minipbx:~# uname -a
Linux minipbx 2.6.32-dockstar #2 Thu Nov 25 18:03:25 UTC 2010 armv5tel
GNU/Linux
root@minipbx:~# uptime
03:29:27 up 5 days,
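A plausible explanation (my assumption, not confirmed in the thread): the stored
start timestamp was zero, i.e. the Unix epoch, so the "uptime" is really seconds
since 1970-01-01. The arithmetic matches the CLI output exactly, assuming a year
is counted as 365 days:

from datetime import date

days = (date(2011, 2, 15) - date(1970, 1, 1)).days   # 15020 days to the date of this post
years, rem = divmod(days, 365)                        # 41 years, 55 days left over
weeks, days_left = divmod(rem, 7)                     # 7 weeks, 6 days
print(f"{years} years, {weeks} weeks, {days_left} days")   # -> 41 years, 7 weeks, 6 days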
2011 Dec 02
12
puppet master under passenger locks up completely
I came in this morning to find all the servers locked up solid:
# passenger-status
----------- General information -----------
max = 20
count = 20
active = 20
inactive = 0
Waiting on global queue: 236
----------- Domains -----------
/etc/puppet/rack:
PID: 2720 Sessions: 1 Processed: 939 Uptime: 9h 22m 18s
PID: 1615 Sessions: 1 Processed: 947 Uptime: 9h 23m
2012 May 04
3
[BUG 2.6.32.y] Broken PV migration between hosts with different uptime, non-monotonic time?
Hello,
I encountered the following bug when migrating a Linux-2.6.32.54 PV domain on
Xen-3.4.3 between different hosts, whose uptime differs by several minutes (3
hosts, each ~5 minutes apart): When migrating from a host with lower uptime
to a host with higher uptime, the VM loses its network connection for some
time and then continues after some minutes (roughly equivalent to the
2008 Aug 06
10
[BUG 1282] time jump on live migrate root cause & proposed fixes
Hi,
I have done some debugging to find out the root cause of bug 1282, which
has the following symptoms with paravirtualized guests:
- after a live migrate, the time on the guest can jump
- after a live migrate, the guest "forgets" to wake up processes
- after a domU save, dom0 reboot and domU restore, the time is
correct but processes are not woken up from sys_nanosleep
The problem
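The symptom list above amounts to the guest's wall clock jumping while its
timers keep an older reference. A minimal, generic sketch (not the Xen-specific
fix discussed in the bug) that detects such a jump by comparing wall-clock time
against a monotonic clock:

import time

# A wall-clock jump (e.g. after a live migrate) shows up as drift between
# time.time() and time.monotonic(), which keeps ticking steadily regardless
# of what the host-derived system time does.
mono0, wall0 = time.monotonic(), time.time()
while True:
    time.sleep(1)
    drift = (time.time() - wall0) - (time.monotonic() - mono0)
    if abs(drift) > 2.0:  # more than 2 s of unexplained wall-clock movement
        print(f"wall clock jumped by ~{drift:+.1f}s relative to monotonic time")
        mono0, wall0 = time.monotonic(), time.time()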
2011 Jul 02
1
Bug#632397: xen: /proc/uptime show idle bigger than uptime
Package: xen
Version: 4.0.1-2
Severity: normal
/proc/uptime shows idle bigger than uptime:
dom0:
% cat /proc/uptime
518389.91 944378.70
%
one domU:
% cat /proc/uptime
417536.22 764826.15
%
another domU:
% cat /proc/uptime
426960.17 795800.89
%
This would be normal on a multicore / HT CPU, but this is an old AMD:
% lscpu
Architecture: i686
CPU(s): 1
Thread(s) per core: 1
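For reference, /proc/uptime holds two numbers: seconds of uptime and idle time
summed over all CPUs, so idle can only legitimately exceed uptime on a machine
with more than one CPU. A small check along those lines (a sketch, not part of
the bug report):

import os

# /proc/uptime: "<uptime seconds> <idle seconds, summed over all CPUs>"
with open("/proc/uptime") as f:
    uptime_s, idle_s = (float(x) for x in f.read().split())

cpus = os.cpu_count() or 1
print(f"idle per CPU / uptime = {idle_s / cpus / uptime_s:.2f}")  # > 1.0 reproduces the anomaly above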
2018 Mar 28
1
The 'not-always-on' infrastructure at home and Samba4 AD DC's..
Hi everyone,
Apologies in advance, this will be a bit long but I'm hoping to get some
guidance and hints on usual practices for using a Samba4 AD DC as an IdM for
W10 laptops that might be on the road elsewhere..
As much as I have been using samba for file serving, a Samba AD DC is
something new to me.
I built a small Samba AD DC infrastructure to serve UIDs and Passwords (4
VMs on 4 KVM
2006 Apr 29
2
How many asterisk processes are "normal"?
Hello all,
I have two test beds running the exact same version of asterisk 1.2.7.1,
latest of zaptel, libpri, etc..
Test bed #1 (Solaris 9,sparc ultra 5):
This one is closer to a "production" machine, in that it is connected to a
SIP provider through an IAX2 connection and has an incoming DID configured. I
can send and receive calls.
Test bed #2 (Slackware Linux 10.2, AMD XP chip):
1999 Oct 15
1
99.9% uptime
Sorry, forgot to change the subject on my first posting.
I was reading a comment this morning about something Microsoft had published
to the effect that there were vendors guaranteeing 99.9% uptime for NT. The
guy who wrote the reply did the math for what that means, and the results
are very interesting.
Quote below:
OK, now what does a 99.9% uptime guarantee mean? Well, it means that
at
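The quoted reply is cut off above; the arithmetic it presumably walked through
is along these lines (a sketch, not the original figures):

# 0.1% downtime of a year, i.e. what a "99.9% uptime" guarantee actually allows.
HOURS_PER_YEAR = 365.25 * 24                   # ~8766 hours
downtime_h = HOURS_PER_YEAR * 0.001            # ~8.8 hours of downtime per year
print(f"{downtime_h:.1f} h/year")
print(f"{downtime_h * 60 / 52:.1f} min/week")  # roughly 10 minutes per week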
2010 Oct 04
1
MySQL on BTRFS Experiences
Hi,
My shop is a long time (~2006) ZFS user, considering moving off OpenSolaris
due to a number of non-technical issues. We use ZFS for all of our MySQL
databases. Its cheap/fast snapshots are critical to our backup strategy,
and we rely on the L2ARC and dedicated log device (i.e. NVRAM) features to
accelerate our pools with SSDs.
I've read through the archives for comments on
2012 Apr 16
1
A way to determine when a guest domain was launched?
I've looked around at the API documents (I'm using python) and I know how to find the cpu active uptime of the domain, but that only increases if the domain is running something. Is there a way to determine when the domain was first launched, either as a date or uptime with idle uptime included?
/Amy
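As far as I know the libvirt python bindings of that era do not expose a
"launched at" timestamp directly, so one workaround (an OS-level approach, not
a libvirt API) is to read the start time of the qemu process backing the domain
from /proc. The pid-file path and domain name below are assumptions for
illustration:

import os
import time

def process_start_time(pid):
    # Wall-clock time a process started, from /proc/<pid>/stat and /proc/stat.
    with open(f"/proc/{pid}/stat") as f:
        # Split after the last ')' so spaces in the comm field don't shift columns;
        # field 22 of the file (index 19 after the split) is start time in clock
        # ticks since boot.
        after_comm = f.read().rsplit(")", 1)[1].split()
        starttime_ticks = float(after_comm[19])
    with open("/proc/stat") as f:
        btime = next(int(line.split()[1]) for line in f if line.startswith("btime"))
    return btime + starttime_ticks / os.sysconf("SC_CLK_TCK")

# Hypothetical: the qemu driver keeps a pid file per guest; "guest1" is made up.
with open("/var/run/libvirt/qemu/guest1.pid") as f:
    pid = int(f.read().strip())
print("domain launched around:", time.ctime(process_start_time(pid)))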
2011 Feb 25
2
1.8.2.4: SIP dialogs not killed?
Hi,
I'm wondering if this is normal asterisk behaviour:
asterisk*CLI> sip show channels
Peer User/ANR Call ID Format Hold Last Message Expiry Peer
10.12.0.2 (None) 3c2f7ff2975e-wp 0x0 (nothing) No Rx: PUBLISH <guest>
10.12.0.2 (None) 3c2f7f21b71b-9q 0x0 (nothing) No
2015 Jun 15
1
Logwatch and System uptime
On Mon, June 15, 2015 11:16 am, Pete Geenhuizen wrote:
> Enable it in /usr/share/logwatch/default.conf/services/zz-runtime.conf
Thanks a lot! It helps you be aware that you definitely missed something
important if you haven't rebooted the box in more than 45-60 days...
Valeri
>
> Pete
>
> On 06/15/15 09:58, James B. Byrne wrote:
>> CentOS-6.6
>>
>> Can
2008 Sep 09
3
not being released
I've noticed a bug with even recent OpenSSH products, where if the host disconnects during a certain period of time, the connection becomes frozen, causing possible exploit problems.
For example
[root@portal ~] users
root
[root@portal ~] uptime -u (used to show how many users the box believes are logged on)
2 Users
[root@portal ~]
In theory this trapped connection can and has
2019 Sep 30
1
Re: [PATCH nbdkit v2 4/4] info: Add tests for time, uptime and conntime modes.
On 9/28/19 3:02 PM, Richard W.M. Jones wrote:
> ---
> tests/Makefile.am | 6 ++++
> tests/test-info-conntime.sh | 65 +++++++++++++++++++++++++++++++++++
> tests/test-info-time.sh | 68 +++++++++++++++++++++++++++++++++++++
> tests/test-info-uptime.sh | 65 +++++++++++++++++++++++++++++++++++
> 4 files changed, 204 insertions(+)
>
> +# Test the info
2007 Feb 13
5
reboot long uptimes?
Hi,
I was just wondering if I should reboot some servers that have been running for
over 180 days?
They are still stable and have no problems, and top shows no zombie
processes or such, but maybe it's better for the hardware (like ext3 disk
checks, for example) to reboot every six months...
btw this uptime really confirms to me how stable CentOS 4.x really is, and so
I wonder how long some people's
2016 Apr 30
3
tune2fs: Filesystem has unsupported feature(s) while trying to open
On Sat, April 30, 2016 8:54 am, William Warren wrote:
> uptime=insecurity.
This sounds like an MS Windows admin's statement. Are there any Unix admins
still left around who remember systems with a kernel that didn't need
[security] patching for a few years? And a libc that did not need security
patches often? I almost said glibc, but on those Unixes it was libc;
glibc, however, wasn't
2008 Aug 13
1
Cannot start xen domUs anymore, domUs hang on kernel startup, happens after a long dom0 uptime
Hello!
I've noticed this problem twice now; last time I fixed it by rebooting
the (CentOS 5.1 x86 32-bit) Xen host/dom0.
Symptoms:
- Already running domUs (debian 2.6.18-6-xen-686 32b PAE) continue running
and working OK
- Cannot start new domUs (debian 2.6.18-6-xen-686).. kernel bootup just
hangs before running initrd. Same domU with the exact same xen domU cfgfile worked
earlier.