Displaying 20 results from an estimated 200 matches similar to: "rsync freezes when copying several million files"
2014 Apr 23
0
Asterisk 11.9.0 Now Available
The Asterisk Development Team has announced the release of Asterisk 11.9.0.
This release is available for immediate download at
http://downloads.asterisk.org/pub/telephony/asterisk
The release of Asterisk 11.9.0 resolves several issues reported by the
community and would not have been possible without your participation.
Thank you!
The following are the issues resolved in this release:
Bugs
2013 Nov 30
1
bnlearn and very large datasets (> 1 million observations)
Hi
Anyone have experience with very large datasets and the Bayesian Network
package, bnlearn? In my experience R doesn't react well to very large
datasets.
Is there a way to divide up the dataset into pieces and incrementally learn
the network with the pieces? This would also be helpful in case R crashes,
because I could save the network after learning each piece.
Thank you.
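One possible shape for this (a rough sketch, not a tested recipe: bnlearn's hc() does accept a previously learned network through its start argument, but learning chunk by chunk only approximates learning on the full data, and the data frame name and chunk size here are made up):

library(bnlearn)

# 'big.df' stands in for the full data frame of discrete variables
chunk.size <- 100000
n.chunks <- ceiling(nrow(big.df) / chunk.size)

net <- NULL
for (i in seq_len(n.chunks)) {
  rows <- ((i - 1) * chunk.size + 1):min(i * chunk.size, nrow(big.df))
  # use the network learned so far as the starting point for this chunk
  net <- hc(big.df[rows, ], start = net)
  # save after every piece so a crash does not lose the work so far
  saveRDS(net, sprintf("net_after_chunk_%03d.rds", i))
}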
2011 Mar 31
0
Xapian Index: 607GB = 219 million unique documents
It took approximately five days, with a single process using one CPU
core and 6 GB of memory, to build this giant 607 GB single Xapian
index containing 219 million unique documents (web sites). So far I
have not found any other implementation that would enable me to build
a single index containing over 200 million documents; I tested Lucene,
Solr, MySQL, Hadoop and Oracle. Probably
2016 Apr 09
0
fast way to search for a pattern in a data frame with a few million entries
Hi there,
I have a data frame DF with 40 million strings and their frequency. I
am searching for strings with a given pattern and I am trying to speed
up this part of my code. I have tried many options but so far I am not
satisfied. I tried:
- grepl and subset are equivalent in terms of processing time
grepl(paste0("^", pattern), df$Strings)
subset(df,
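Since the pattern is anchored at the start of the string, one option worth timing (a sketch that assumes the column is character rather than factor; the column names are taken from the post and the toy data is made up) is base R's startsWith(), which avoids the regex engine entirely:

# toy stand-in for DF: one million random 8-letter strings with frequencies
strs <- apply(matrix(sample(letters, 8e6, TRUE), ncol = 8), 1, paste, collapse = "")
df <- data.frame(Strings = strs, Freq = runif(1e6), stringsAsFactors = FALSE)
pattern <- "abc"

# anchored regex scan of the whole column
system.time(hits.regex <- df[grepl(paste0("^", pattern), df$Strings), ])
# plain prefix comparison, no regex engine (base R >= 3.3.0)
system.time(hits.prefix <- df[startsWith(df$Strings, pattern), ])
identical(hits.regex, hits.prefix)   # should be TRUE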
2011 May 13
0
Xapian Index 253 million documents = 704G
I just built my largest single Xapian index: 253 million unique
documents on a single server with a single hard disk, less than 8 GB
of RAM, and a single 2.0 GHz processor. I see no decrease in search
performance between my 100 million and 250 million document indexes,
which indicates good scalability in Xapian, and it looks like I can
push it easily
2004 Sep 03
0
I forgot to add my email; please contact me offline. We have around 300,000 to 1/2 million minutes per month for India and Pakistan. Can ztdummy help with trunk mode?
Hi all, I did not find much info on the lists about the subject.
I have ztdummy working properly because I can use conferences without
any errors.
But when I try to use trunk=yes, I get the following:
Sep 2 21:20:51 WARNING[1137720112]: chan_iax2.c:6422 build_user:
Unable to support trunking on user 'home' without zaptel timing
Sep 2 21:20:51 WARNING[1137720112]: chan_iax2.c:6246 build_peer:
Unable to
2007 Dec 20
0
[VOIP-Users-Conference] Re: Digium: as of this a.m., one million Asterisk downloads this year
lol - yep, when news of this first broke I thought that's actually a very
good idea to have implemented, though it sounds like the way Trixbox
implemented it may have been insecure.
Maybe someone else can come up with a better way of implementing this.
If the data was all randomised there's no harm in doing this;
some basic information like:
Hours of uptime
Reboots
Number of extensions
Number of
2017 Feb 23
3
Scaling to 10 Million IMAP sessions on a single server
Comparison of Dovecot, Uwash, Courier, Cyrus and M-Box:
http://www.isode.com/whitepapers/mbox-benchmark.html
2017 Feb 23
0
Scaling to 10 Million IMAP sessions on a single server
Quoting Ruga <ruga at protonmail.com>:
> Comparison of Dovecot, Uwash, Courier, Cyrus and M-Box:
> http://www.isode.com/whitepapers/mbox-benchmark.html
Wow. That comparison is only 11.5 years old.
The "default" file system of reiserfs and gcc-3.3 were dead giveaways.
I suspect Dovecot's changed a tad since that test.
=R=
2008 Feb 12
1
rsync 2 million files
I'm trying to use rsync to get a live backup of 2 million files, about 50
GB, with a max depth of 5 directory levels.
I'm on a gigabit LAN so I'm passing -W, but it's still incredibly slow.
What else can I do to speed things up?
Perhaps there's a good way to filter out files older than X so only newer
files are checked?
Will rsync 3.0.0 make a big difference for large trees?
2013 Jul 19
0
if i want to delete a million files, can rsync be more faster than rm?
I found two articles saying that rsync -a --delete empty/ a1/ will be much
faster than rm when deleting millions of files:
http://linuxnote.net/jianingy/en/linux/a-fast-way-to-remove-huge-number-of-files.html
http://www.quora.com/File-Systems/How-can-someone-rapidly-delete-400-000-files
but my test gets the opposite result, and I think it's impossible in theory.
I used the command 'for i in
2009 Oct 27
0
2010 Fukuoka Ruby Award Competition – Enter Now to Win 1 Million Yen
*Fukuoka Ruby Award*
http://www.f-rubyaward.com/index_en.html
The Government of Fukuoka Japan, together with the Fukuoka Ruby Award
Selection Committee, is excited to announce the opening of the 2010
Fukuoka Ruby Award Competition. The competition is free to enter. The
grand prize is 1 million yen (approximately $10,000). Applications may
be submitted online at
2004 Jul 02
1
Half-Million Feature Selection (Random Forest)
Hi,
I have about half a million binary features and would like to find a model to estimate the continuous response. Based on the inference, I can express predictors and response with a linear model (i.e. design matrix: a large sparse matrix of 0/1; response: a continuous number). Since it is not a classification problem, someone suggested I try random forest in R. However, in the randomForest help
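For what it's worth, a minimal regression-forest sketch (toy sizes and made-up variable names; note the randomForest package wants a dense x, so with roughly half a million binary features you would first have to prune, chunk, or otherwise reduce the feature set):

library(randomForest)

set.seed(42)
n <- 500; p <- 1000                              # toy sizes, not 500k features
X <- matrix(rbinom(n * p, 1, 0.05), nrow = n)    # sparse-ish 0/1 design matrix
colnames(X) <- paste0("f", seq_len(p))
y <- 2 * X[, 1] + X[, 2] - X[, 3] + rnorm(n)     # continuous response

# regression forest; importance = TRUE records %IncMSE per feature
fit <- randomForest(x = X, y = y, ntree = 200, importance = TRUE)
imp <- importance(fit)
head(imp[order(imp[, "%IncMSE"], decreasing = TRUE), ], 10)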
2009 Aug 05
2
Million linux virtual machines
If someone posted this already, forgive me; I get the digest.
http://www.tgdaily.com/content/view/43480/108/
Scientists get a million Linux kernels to run at once
Scientists at Sandia National Laboratories in Livermore have run more
than a million Linux kernels as virtual machines.
(how long before shared hosts use this....lol)
The technique will allow them to effectively observe behaviour found
2008 Aug 12
2
memory usage in rsync 3.0.3 -- how much RAM should I have to transfer 13 million files?
Hi. I am trying to recursively rsync a directory containing 13 million files.
Right now this is killing my server in terms of memory usage.
I've upgraded from rsync 2.6.9 to 3.0.3 on both ends, but memory usage
is still too high. I killed the rsync process when it reached 256 MB
in size.
I only have 1 GB of RAM in this server.
We've planned an outage to upgrade it to 3 GB, but
2011 Sep 02
3
Can't get mail via Outlook for an account with half a million mails
Hi,
I used postfix always_bcc to back up mail, and by now the backup account
has half a million mails in cur/. The first time I tried to receive the
mail with Outlook, it failed with no response.
Does anyone have a good idea how to deal with this problem?
Thanks
2016 Apr 10
0
What is the fastest way to search for a pattern in a data frame with a few million entries?
On 10/04/2016 2:03 PM, Fabien Tarrade wrote:
> Hi there,
>
> I have a data frame DF with 40 million strings and their frequency. I
> am searching for strings with a given pattern and I am trying to speed
> up this part of my code. I have tried many options but so far I am not
> satisfied. I tried:
> - grepl and subset are equivalent in terms of processing time
>
2016 Apr 10
0
What is the fastest way to search for a pattern in a data frame with a few million entries?
Hi Fabien,
I was going to send this last night, but I thought it was too simple.
It runs in about one millisecond.
# toy data: 1000 random 10-letter strings plus a random frequency column
df <- data.frame(freq = runif(1000),
                 strings = apply(matrix(sample(LETTERS, 10000, TRUE), ncol = 10),
                                 1, paste, collapse = ""))
# indices of the strings that contain the pattern "DF"
match.ind <- grep("DF", df$strings)
match.ind
[1]   2  11  91 133 169 444 547 605 734 943
Jim
On Mon, Apr 11, 2016 at 5:27 AM, Fabien Tarrade
2017 Feb 22
0
Scaling to 10 Million IMAP sessions on a single server
On Tue, 21 Feb 2017 09:49:39 -0500 KT Walrus wrote:
> I just read this blog: https://mrotaru.wordpress.com/2013/10/10/scaling-to-12-million-concurrent-connections-how-migratorydata-did-it/ about scaling to 12 Million Concurrent Connections on a single server and it got me