Displaying 20 results from an estimated 800 matches similar to: "Odd behavior"
2010 Apr 16
1
Random Timeouts?
Hi All,
I was hoping someone could help me figure out what's going on here...
I have a server that I'm using to backup a lot of files to, and I'm
using rsync to back them up.
The backup server runs CentOS 5.4 Linux:
# uname -a
Linux slurp.kilokluster.ucsc.edu 2.6.18-164.15.1.el5 #1 SMP Wed Mar 17
11:30:06 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux
and so do the servers the rsync
2014 Jun 10
2
Overload Point for OPUS
Hello,
Can you please tell me what the overload point is for OPUS in dBm0 or dBV?
As an example, the G.711 A law codec has an overload point of +3.14 dBm0
across a 600 ohm circuit.
In ITU-T G.100.1 section 5.8 you can read more about the relationship
between dBm0 and dBov. In that same section you can see that the G.711 A
law, u law, and G.722 codec overload points are also defined.
2010 May 19
2
Compiz 8.6 on openSuSE 11.0 -- Success! But need help with a few issues.
Dominique, cc: compiz,
Dominique, you wanted the feedback, and List, I need your help. I have
installed and fully configured (kicked the tires on) compiz 8.6 on openSuSE 11.0,
and on balance it is great. There are a few weird things going on, most notably
the number of options in ccsm that now uncheck themselves. In the past (compiz
0.5.6 - 0.8.2) there may have been one or two that would stick
2006 Jun 21
1
Is there a way to cache a file system listing?
I have a filesystem with roughly ~600,000 files on it, and every time a
new client rsyncs this tree (after 10-20 minutes, once the cache has
expired) it takes 5-10 minutes to re-traverse the tree during a new
rsync. Is there a way, other than running find /path every minute or so,
to keep the listing in memory so the rsyncs would run much faster?
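A common workaround (my suggestion, not something from the thread; the paths are placeholders) is simply to keep re-walking the tree from cron so the dentry/inode cache never goes cold, optionally combined with the Linux vm.vfs_cache_pressure knob:

```shell
# Sketch of the cache-warming idea; the path is a placeholder.
# Walking the tree re-stats every entry, which keeps dentries/inodes hot.
TREE="${TREE:-/tmp/backup-tree}"
mkdir -p "$TREE"
find "$TREE" > /dev/null && echo "tree walked"
# On Linux, the kernel can also be biased toward keeping metadata cached
# (requires root; 100 is the default, lower values retain more):
#   sysctl -w vm.vfs_cache_pressure=50
# and the walk can run periodically from cron:
#   */5 * * * * find /backup/tree > /dev/null
```

Whether this beats a simple periodic find depends on memory pressure; the metadata for 600,000 files is small, so it usually stays resident unless something evicts it.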
2001 Sep 12
6
Yet another backtrace
Another one at block.c:176:
---
Title: We The People
Artist: DJ Lithium Presents
Bitstream is 2 channel, 44100Hz
Time: 58:29.07, Bitrate: 100.1
Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 1024 (LWP 27207)]
_vds_shared_init (v=0xbffff73c, vi=0x4024efe0, encp=0) at block.c:176
176 b->modebits=ilog2(ci->modes);
(gdb) bt
#0 _vds_shared_init
2011 Nov 18
2
round() ignores missing arguments if it is used inside another function where some arguments are missing.
I have stumbled across some behaviour in R that I really can't place,
and that makes coding a bit tricky. I know that I can work around it
when explicitly checking for missing arguments, but still...
I have two functions. I have a first function based on paste
    fun1 <- function(x,y){
      print(missing(y))
      paste(x,'X',sep=y)
    }
If I try this function without
2007 Jun 21
1
Rsync to remote host much faster than 10k/sec speeds on local rsync
I run several remote rsyncs off a CentOS 4 server and also back up between
two drives on the same server.
Performance is far better on the remote rsyncs and the local rsync only
manages 10k/sec despite being on modern hardware.
Here are the stats from a local backup:
Number of files: 979548
Number of files transferred: 2289
Total file size: 24703651523 bytes
Total transferred file size:
2007 Nov 14
1
reading tables from url
I'm trying to read some web tables directly into R. These are both
genome sequencing projects (eukaryotes and metagenomes) from NCBI and
look very similar; however, only the first one works.
http://www.ncbi.nlm.nih.gov/genomes/leuks.cgi
http://www.ncbi.nlm.nih.gov/genomes/lenvs.cgi
I added ?dump=selected to the end of the url string to get a tab-
delimited file (which is what happens
2015 Aug 07
3
download.file() on ftp URL fails in windows with default download method
Hi,
> url <- "ftp://ftp.ncbi.nlm.nih.gov/genomes/ASSEMBLY_REPORTS/All/GCF_000001405.13.assembly.txt"
> download.file(url, tempfile())
trying URL 'ftp://ftp.ncbi.nlm.nih.gov/genomes/ASSEMBLY_REPORTS/All/GCF_000001405.13.assembly.txt'
Error in download.file(url, tempfile()) :
cannot open URL
2001 Oct 31
2
Multiples rsyncs with multiple sshs...CPU overload
Hello Folks,
I am using rsync 2.4.6 over ssh on Solaris 2.6 machines.
It's been working great for months keeping three DMZ ftp servers in
sync...now, though, I am trying to implement a new solution with DMZ and
"inside" ftp servers.
Basically, I want to sync files being ftp'ed to the DMZ server over to an
"inside" machine, and since some processing (decryption) then
2004 May 28
2
Keeping Multiple Rsyncs Separate
I have noticed that if you run two rsyncs at once, they get confused and
copy files from the wrong rsync thread. Apparently this is because of
the "build list" that is made in RAM: two build lists stepping on each
other. Does anyone know how to change the source so that each build list
in RAM is kept separate?
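As far as I know, each rsync process builds its file list in its own private memory, so the interference is more likely two jobs writing into the same destination. A defensive sketch (hypothetical, not from the thread) is to serialize the runs with flock(1):

```shell
# Hypothetical sketch: use flock(1) so only one backup job runs at a time.
# The lock file path and the rsync command are placeholders.
LOCK=/tmp/backup.lock
flock -n "$LOCK" -c 'echo "would run: rsync -a /src/ /dst/"' \
  || echo "another backup already holds the lock"
```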
2011 Apr 03
1
R-project: plot 2 zoo objects (price series) that have some date mis-matches
I have 2 zoo objects -
1) Interest rate spread between 10-YR-US-Treasury and 2-YR-US-Treasury
(object name = sprd)
2) S&P 500 index (object name = spy)
> str(spy)
'zoo' series from 1976-06-01 to 2011-03-31
Data: num [1:8791] 99.8 100.2 100.1 99.2 98.6 ...
Index: Class 'Date' num [1:8791] 2343 2344 2345 2346 2349 ...
> str(sprd)
'zoo' series from 1976-06-01 to 2011-03-31
2010 Mar 18
2
Rsync behaviour on harddisc crash
I was just pondering:
Let's look at Server A with one extra hard disc hdb, where hdb1 is mounted at /mnt/folder.
/mnt/folder is the folder which should be mirrored on Server B.
So Server B rsyncs
rsync -a --delete server::folder /folder
from Server A and gets all the new files and deletes all the files which don't exist on Server A's /mnt/folder anymore.
What would happen,
2009 Nov 12
2
Turning off "Fixed Duplicates" feature
Is there a way to disable this feature? It seems to be causing more harm than
good right now. Without getting into too much detail, it is fixing the
duplicates, but it does not remove the old file, and the new file it creates
is not marked as read like the old one was. I understand that this problem
can be created by multiple rsyncs and files changing in between, but is
there any way that I can
2006 Feb 09
1
Problems with high load
Hi,
Specs: rsync is running as root with prio: 0
Linux: Debian Kernel: 2.4.30
Rsync: rsync version 2.6.4 protocol version 29
We are having problems with a very high load while an rsync is running.
Sometimes, if 2 or 3 rsyncs are running simultaneously on the same machine, the load
on that machine goes up to 10 or more. The data will be synced in about 2 or 3 minutes with about:
sent
2008 Jun 11
1
how to save an updated dataset
I wrote a package which includes a number of genome sequencing project
statistics on the web like http://www.ncbi.nlm.nih.gov/genomes/lproks.cgi. I
included some generic functions to summarize, plot, and update the tables
with the most recent version
data(lproks)
update(lproks)
[1] "lproks successfully updated, 7 new genomes added"
I usually save the dataset back to my package
2015 Aug 08
2
download.file() on ftp URL fails in windows with default download method
----- Original Message -----
> From: "Uwe Ligges" <ligges at statistik.tu-dortmund.de>
> To: "Dan Tenenbaum" <dtenenba at fredhutch.org>, "R-devel at r-project.org" <r-devel at r-project.org>
> Sent: Saturday, August 8, 2015 3:57:34 PM
> Subject: Re: [Rd] download.file() on ftp URL fails in windows with default download method
>
>
2012 Mar 07
1
Long delays in rsync manifested by repeated entries, CentOS, rsync v2.6.8.
Hello, rsync list folks,
Recently, rsyncs abort during busier times of the day, although they
run error-free during non-busy hours.
Problem 1 - Many rsyncs abort with these errors:
Read from remote host www.xxx.yyy.zzz: Connection reset by peer
rsync: writefd_unbuffered failed to write 4 bytes [sender]: Broken pipe (32)
rsync: connection unexpectedly closed (3929920 bytes received so far)
2004 Sep 19
1
Multiple concurrent rsyncs: an idea...
Yesterday, as I was still waiting for a large rsync mirror to finish, I
was thinking that it would be interesting if you could run multiple
rsyncs and have them cooperate to mirror a repository from several
different sources. I think a close approximation should be fairly
easy to do, but I just won't have any time to do it.
My thought is that it could be implemented fairly inexpensively by
2008 Jul 29
1
securing rsync over ssh
I want to secure some remote rsyncs over ssh by using the command= option
in authorized_keys.
As I understand it, I can only use the full command there, as it is not a list
of "allowed commands" but the command that will be executed when logging
in with this key.
Now, I'm running several rsync commands on individual directories in the
root, not just one command. I do that to pull
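One way to avoid baking a single full command into the key (an assumption on my part; the excerpt is truncated here) is the rrsync wrapper shipped in rsync's support/ directory, which restricts a key to rsync invocations confined to one subtree:

```shell
# Hypothetical authorized_keys entry (one line; key, path, and install
# location are examples). rrsync validates the rsync --server command the
# client sends and confines it to /data, read-only here because of -ro:
#
#   command="/usr/local/bin/rrsync -ro /data",no-pty,no-agent-forwarding,no-port-forwarding ssh-rsa AAAA... backup-key
```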