search for: efficient

Displaying 20 results from an estimated 2241 matches for "efficient".

2011 Mar 29
Most efficient way of pxe booting windows pe
Hi guys, We have been using syslinux (memdisk) and gpxe for a while. We also have a working WinPE boot method via PXE, which is basically a dd of a 'recovery partition' that winpe.wim sits on. That method isn't really the most efficient, because the image is loaded into memory twice. We are going to do a major Windows 7 deployment soon and I'm looking for the most efficient way of booting a small WinPE. Can anyone tell me the options, and what is the best and most efficient method today for booting a winpe.wim? Thanks in a...
2010 Aug 09
efficient matrix element comparison
This is a simple problem: I want to convert the for loop to a more efficient method. It loops through a large vector, checking whether each numeric element is not equal to the index of that element. The following simplified example demonstrates it:
> rows <- 10
> collusionM <- Matrix(0,10,10,sparse=TRUE)
> matchM <- matrix(c(1,2,3,4,4,6,7,9,9,10)...
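The loop in this thread can be replaced by a single vectorised comparison. A hypothetical NumPy sketch of the same idea (the thread itself is about R, where `which(v != seq_along(v))` does this; the data below is the example vector from the post):

```python
import numpy as np

# Find positions whose value differs from their 1-based index,
# with no explicit loop (vectorised comparison + nonzero).
match = np.array([1, 2, 3, 4, 4, 6, 7, 9, 9, 10])
mismatch = np.nonzero(match != np.arange(1, len(match) + 1))[0] + 1
```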
2007 Aug 21
Is ZFS efficient for large collections of small files?
Is ZFS efficient at handling huge populations of tiny-to-small files - for example, 20 million TIFF images in a collection, each between 5 and 500k in size? I am asking because I could have sworn that I read somewhere that it isn't, but I can't find the reference. Thanks, Brian -- - Brian Gupta...
2010 Mar 26
More efficient alternative to combn()?
...combn(), e.g.:
> vector <- 1:6
> combn(x = vector, m = 3, FUN = function(y) prod(y))
In my case the vector has 2000 elements and I need to compute the values specified above for m = 32. Using combn() I encounter problems for m >= 4. Is there any alternative to combn() that works more efficiently? Also, my vector contains many duplicates: there are actually only about 300 distinct values in the vector, which reduces the number of possible combinations significantly. Is there any way to use this fact to reduce the computational cost? Thanks in advance, El
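The duplicate values can indeed be exploited: enumerate multisets of distinct values instead of all index combinations, and count how many index combinations map to each product. A hedged Python sketch of that idea (the thread is about R's combn; the function name here is hypothetical):

```python
from collections import Counter
from itertools import combinations
from math import comb, prod

def combo_products_by_multiset(values, m):
    """Map each achievable product over m-subsets to the number of
    index combinations producing it, recursing over distinct values."""
    counts = Counter(values)
    distinct = sorted(counts)
    out = {}

    def rec(i, remaining, chosen):
        if remaining == 0:
            p = prod(v ** k for v, k in chosen)
            # weight: ways to pick k copies of each value from its count
            weight = prod(comb(counts[v], k) for v, k in chosen)
            out[p] = out.get(p, 0) + weight
            return
        if i == len(distinct):
            return
        v = distinct[i]
        for k in range(min(counts[v], remaining) + 1):
            rec(i + 1, remaining - k, chosen + ([(v, k)] if k else []))

    rec(0, m, [])
    return out
```

With 300 distinct values among 2000 elements, the recursion visits multisets of distinct values rather than all C(2000, 32) index subsets.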
2009 Aug 10
strsplit a matrix
Dear all, I am trying to split a matrix into 2 as efficiently as possible. It is a character matrix:
    1       2       3
1   "2-271" "2-367" "1-79"
2   "2-282" "2-378" "1-90"
3   "2-281" "2-377" "1-89"
I want to make 2 matrices from this, as succinctly and efficientl...
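A plain-Python sketch of the split being asked about (the thread concerns R's strsplit; here each "a-b" cell is broken into two parallel matrices):

```python
# The example matrix from the post, as nested lists.
cells = [["2-271", "2-367", "1-79"],
         ["2-282", "2-378", "1-90"],
         ["2-281", "2-377", "1-89"]]

# Split every cell on "-" once; collect the two halves separately.
left  = [[c.split("-")[0] for c in row] for row in cells]
right = [[c.split("-")[1] for c in row] for row in cells]
```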
2007 Oct 05
Limit Rates in more scalable and efficient way
Hello, I'm looking for a more efficient way to limit rates for different clients. Right now, as I understand it, I have to make a class for every customer/IP address whose bandwidth I'd like to limit. This means lots of configuration if I had many customers to set up traffic shaping for. I can filter for IP ranges, but then all ip&...
2016 Apr 05
Is there an efficient way to find the overlapped, upstream and downstream ranges for a bunch of ranges
...romosome, start and end, like this:
seqnames start end width strand
gene1 chr1 1 5 5 +
gene2 chr1 10 15 6 +
gene3 chr1 12 17 6 +
gene4 chr1 20 25 6 +
gene5 chr1 30 40 11 +
I am just wondering whether there is an efficient way to find the overlapped, upstream and downstream genes for each gene in the GRanges. For example, assuming all_genes_gr is a genomic range of ~50000 genes, the result I want looks like this:
gene_name upstream_gene downstream_gene overlapped_gene
gene1 NA gene2 NA
gene2 gene1 gene4 gene3
gene3 gene1 gene4 gene2
gene4...
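The relations in the example table can be computed with interval comparisons. A plain-Python sketch under stated assumptions (the thread is about Bioconductor GRanges; the O(n^2) scan and names here are illustrative, not the GRanges API): overlapped = intersecting intervals, upstream = nearest gene ending before the start, downstream = nearest gene starting after the end.

```python
# The example genes from the post: (name, start, end), one chromosome.
genes = [("gene1", 1, 5), ("gene2", 10, 15), ("gene3", 12, 17),
         ("gene4", 20, 25), ("gene5", 30, 40)]

def relations(genes):
    out = {}
    for name, s, e in genes:
        overlapped = [n for n, s2, e2 in genes
                      if n != name and s2 <= e and e2 >= s]
        before = [(e2, n) for n, s2, e2 in genes if e2 < s]  # end before start
        after = [(s2, n) for n, s2, e2 in genes if s2 > e]   # start after end
        out[name] = (max(before)[1] if before else None,     # nearest upstream
                     min(after)[1] if after else None,       # nearest downstream
                     overlapped)
    return out
```

For ~50000 genes one would sort by start and binary-search the neighbours instead of rescanning, but the relations are the same.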
2009 May 19
ext3 efficiency, larger vs smaller file system, lots of inodes...
(... to Nabble Ext3:Users - reposted by me after I joined the ext3-users mailing list - sorry for the dup...) A bit of a rambling subject there but I am trying to figure out if it is more efficient at runtime to have few very large file systems (8 TB) vs a larger number of smaller file systems. The file systems will hold many small files. My preference is to have a larger number of smaller file systems for faster recovery and less impact if a problem does occur, but I was wondering if anybo...
2009 Sep 28
What is the most efficient way to split a table into 2 groups?
..."lots.part_id =" WHERE clause), but didn''t get anywhere with that. I will end up leaving this the way it is (most likely), especially since this isn''t the right phase of development to be worrying about optimization, but I am curious how one might do this most efficiently. Is it most efficient to grab 1 record where = blah and then all the (rest of the) records where <> blah? Is it more efficient to grab all the records at once and write some ruby code to select the one record from the rest? If so, what would that code look like? I do...
2010 Nov 13
Efficient marginal sums?
...I have a function f(x,y) which computes a value for scalar x and y; or, if either x=X or y=Y is a vector, a corresponding vector of values f(X,y) or f(x,Y) (with the usual built-in vectorisation of operations). Now I have X=(x.1,x.2,...,x.m) and Y=(y.1,y.2,...,y.n). I'm seeking a fast and efficient method to compute (say) sum[over elements of Y](f(X,Y)), returning an m-vector in which, for each x.i in X, I have sum(f(x.i,Y)). I know I can do this by constructing matrices, say M.X and M.Y:
M.X <- matrix(rep(X,length(Y)),nrow=length(Y),byrow=TRUE)
M.Y <- matrix(rep(Y,length(X)),n...
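The rep()/matrix() construction can be avoided by evaluating f on an outer grid and summing one axis. A NumPy sketch of the same computation (the thread is about R; the f below is an arbitrary stand-in, since the post's f is unspecified):

```python
import numpy as np

def f(x, y):
    # Example vectorised f; any elementwise function works the same way.
    return x * y + 1.0

X = np.array([1.0, 2.0, 3.0])
Y = np.array([10.0, 20.0])

# Broadcasting builds the m-by-n grid implicitly; sum over Y per x_i.
s = f(X[:, None], Y[None, :]).sum(axis=1)   # m-vector of sum(f(x_i, Y))
```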
2007 Jan 05
Efficient multinom probs
Dear R-helpers, I need to compute probabilities of multinomial observations, e.g. by doing the following:
y=sample(1:3,15,1)
prob=matrix(runif(45),15)
prob=prob/rowSums(prob)
diag(prob[,y])
However, my question is whether this is the most efficient way to do this. In the call prob[,y] a whole matrix is computed, which seems a bit of a waste. Is there maybe a vectorized version of dmultinom which does this? Best, Ingmar
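Picking one entry per row directly avoids materialising the full prob[,y] matrix. A NumPy sketch of that idea (the thread is about R; category labels are 1-based as in the post):

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.integers(1, 4, size=15)             # observed categories 1..3
prob = rng.random((15, 3))
prob /= prob.sum(axis=1, keepdims=True)     # normalise rows to sum to 1

# One probability per observation: row i, column y[i]-1.
p_obs = prob[np.arange(15), y - 1]
```

This fancy-indexing form computes 15 values instead of the 15x15 intermediate that the diag(prob[,y]) idiom builds.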
2012 Nov 18
subtract multiple columns from single column for Nash Sutcliffe efficiency
Hi everyone, I am having trouble using my own data in the Nash-Sutcliffe efficiency (NSE) function. In R, this is what I have done:
Vobsr <- read.csv("Observed_Flow.csv", header = TRUE, sep =",") # see data below
Vsimr <- read.csv("1000Samples_Vsim.csv", header = TRUE, sep =",") # see data below
Vobsr <- as.matrix(Vobsr[,-1]) # remove column 1
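For reference, the Nash-Sutcliffe efficiency itself is NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2). A minimal Python sketch of that formula (the thread's data files are not reproduced here):

```python
def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 means a perfect fit, 0 means the
    simulation is no better than predicting the observed mean."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    ssd = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / ssd
```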
2017 Jul 14
Efficient Binning
Hi all, I have a situation where I have 16 bins. I generate a random number and then want to know which bin number the random number falls in. Right now, I am using a series of 16 if() else {} statements, which get very complicated with the embedded curly braces. Is there a more efficient (i.e., easier) way to go about this?
boundaries<-(0:16)/16
rand<-runif(1)
Which bin number (1:16) does rand fall in? Thanks, Dan
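Two loop-free alternatives to the 16-branch if/else chain, sketched in Python (the thread is about R, where findInterval() plays the role of bisect): with equal 1/16-wide bins the index is simply floor(r * 16), and bisection handles arbitrary sorted boundaries.

```python
import random
from bisect import bisect_right

n_bins = 16
boundaries = [i / n_bins for i in range(n_bins + 1)]  # 0, 1/16, ..., 1

r = random.random()
bin_eq = int(r * n_bins) + 1            # 1-based bin, equal-width shortcut
bin_gen = bisect_right(boundaries, r)   # same answer; works for any boundaries
```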
2011 May 21
[cryptography] rolling hashes, EDC/ECC vs MAC/MIC, etc.
...e secure hash Merkle Tree could be minimal. Then, the filesystem should make this Merkle Tree available to applications through a simple query. This would enable applications, without needing any further in-filesystem code, to perform a Merkle Tree sync, which would range from "noticeably more efficient" to "dramatically more efficient" than rsync or zfs send. :-) Of course it is only more efficient because we're treating the maintenance of the secure-hash Merkle Tree as free. There are two senses in which this is legitimate and it is almost free: 1. Since the values get...
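A minimal sketch of the Merkle tree being discussed (illustrative only, not any filesystem's actual layout): the root hash summarises all blocks, and changing one leaf changes the root, which is what makes tree-walking sync cheap.

```python
import hashlib

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def merkle_root(leaves):
    """Fold leaf hashes pairwise up to a single root hash."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])     # duplicate last node on odd levels
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]
```

Two sides comparing roots descend only into subtrees whose hashes differ, so the work scales with the size of the change, not the filesystem.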
2006 Mar 23
conservative robust estimation in (nonlinear) mixed models
...mixture of normals. I have tested this in a simple linear mixed model using 5% contamination with a normal with 3 times the standard deviation, which seems to be a common assumption. Simulation results indicate that when the random effects are normally distributed this estimator is about 3% less efficient, while when the random effects are contaminated with 5% outliers the estimator is about 23% more efficient, where by 23% more efficient I mean that one would have to use a sample size about 23% larger to obtain the same size confidence limits for the parameters. Question: I wonder if there are o...
2005 Dec 15
efficient INSERTS
...utes(package, :sent_on => end end my table 'companies_packages' therefore lists which company was sent which package at what time. If I'm sending 20 packages to 20 companies, though, I generate 400 INSERT statements. Is there a Rails way of doing this more efficiently? i.e. generating an SQL statement like INSERT INTO `companies_packages` (`company_id`, `package_id`, `sent_on`) VALUES (1, 101, 2005-12-15), (2, 102, 2005-12-15), (3, 103, 2005-12-15), (4, 104, 2005-12-15), (5, 105, 2005-12-15), (6, 106, 2005-12-15), (7, 107, 2005-12-15...
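Building one parameterised multi-row INSERT is straightforward outside of any ORM. A hedged Python/sqlite3 sketch of the statement shape asked for (the helper name is hypothetical, not a Rails API; table and column names follow the post):

```python
import sqlite3

def bulk_insert_sql(table, columns, n_rows):
    """One INSERT with n_rows placeholder tuples instead of n_rows statements."""
    row = "(" + ", ".join(["?"] * len(columns)) + ")"
    return (f"INSERT INTO {table} ({', '.join(columns)}) VALUES "
            + ", ".join([row] * n_rows))

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE companies_packages (company_id, package_id, sent_on)")
rows = [(1, 101, "2005-12-15"), (2, 102, "2005-12-15"), (3, 103, "2005-12-15")]
con.execute(bulk_insert_sql("companies_packages",
                            ["company_id", "package_id", "sent_on"], len(rows)),
            [v for r in rows for v in r])   # flatten rows into the placeholders
```

Placeholders keep the values parameterised rather than interpolated into the SQL string.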
2012 Oct 02
Efficient Way to gather data from various files
..."folder1/1AC/folder2/blah.dbf" This dbf looks like:
run Stat
1 10
2 10
3 999999
4 100000000000
5 100000000
6 9999999999
7 100000000
8 10
9 10
10 10
11 1000000
I know I could do this with a loop, but I can't see the efficient, R way. I was hoping that you experienced R programmers could give me some pointers on the most efficient way to achieve this result. Sam
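A sketch of the gather step in Python (the thread is about R; the post's files are .dbf, which would need a real dbf reader, so this assumes the two-column listing shown above has been read as text):

```python
import glob

def parse_stats(text):
    """Parse the two-column 'run Stat' listing into (run, stat) pairs."""
    lines = text.strip().splitlines()[1:]   # skip the header row
    return [tuple(int(tok) for tok in ln.split()) for ln in lines]

def gather(pattern):
    """Collect parsed stats from every file matching a glob pattern."""
    out = {}
    for path in sorted(glob.glob(pattern)):
        with open(path) as fh:
            out[path] = parse_stats(fh.read())
    return out
```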
2012 Jun 07
Making touch more efficient
I'm currently working on a Rails 2.3 project, but I think the code around touching hasn't changed much in the latest release, so here goes: Touching seems to be less efficient than it could be. Currently it changes the updated_at (or other) field, saves the whole object, triggers all callbacks on that object (some of which might touch other associated objects), and so on. It seems to me that there could be performance gains made (especially when it comes to...
2006 Feb 12
Re: sending personalized emails efficiently
Hi Tom, I noticed this message from you saying that you can use ActionMailer to efficiently build high-volume mailing systems. I need to create something that mails status updates out once a day to all my users. But I must admit I don't see how you can make ActionMailer deliver all these efficiencies you state, since its API is so limited. Specifically, I don't see h...
2010 Oct 01
What is the most efficient way to access an HVM guest's filesystem from dom0?
Hi, What is the most efficient way to get a file from a Windows HVM guest from dom0 while the guest is running? No need to change the file, just read it. Hu Shaolong
Xen-devel mailing list