search for: efficiences

Displaying 20 results from an estimated 11367 matches for "efficiences".

Did you mean: efficiencies
2005 Oct 23
0
brewing stats
I guess this isn't so much a help request as a show-and-tell from a non-statistician homebrewer who has been fumbling around with R. If nothing else it provides yet another data set. I hope it is not out of line. Anyway, the plots I have produced are at http://brewiki.org/BatchSparge#poll The polling method is somewhat simple; it's just one of those multiple choice style polls you
2011 Apr 13
1
RFC: adding new data "ups.efficiency"
Guys, we (Eaton) have created a new data item to expose the efficiency of the UPS (basically it is the ratio of the output current to the input current). I know that at least APC should also provide it on some units, since I've seen evidence in an EPA presentation [1]. So I'd like to create the following new data item: - Name: ups.efficiency - Description: Efficiency of the UPS (ratio of the output
2007 Nov 11
4
Largest N Values Efficiently?
What is the most efficient alternative to x[order(x)][1:n] where length(x) >> n? I also need the positions of the mins/maxs, perhaps by preserving names. Thanks for any suggestions.
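A sketch of one standard idiom (not necessarily the thread's answer), assuming x is a named numeric vector and n << length(x):

    idx <- order(x, decreasing = TRUE)[1:n]   # positions of the n largest
    x[idx]                                    # values, names preserved

    # Values only, via a partial sort, which is cheaper than a full
    # sort for large x (note: sort(partial=) drops names):
    N <- length(x)
    sort(x, partial = (N - n + 1):N)[(N - n + 1):N]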
2011 Oct 24
1
using lme and the 'by' function to extract coefficients by individual
Hi all, I'm trying to use the 'by' function to extract the coefficients from a mixed model which is performed on multiple individuals. I basically have a group of individuals, and for each individual I want the coefficient for their change in 'pots_hauled' in response to a change in 'vpue', with my random variable being 'ns_a_vpue'. The problem I am having is
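A minimal sketch of one way to do this with nlme; the data frame dat and the grouping column individual are assumed names, and the model structure is guessed from the post:

    library(nlme)
    # Fit one mixed model per individual and pull out the 'vpue' slope:
    slopes <- by(dat, dat$individual, function(d) {
      fit <- lme(pots_hauled ~ vpue, random = ~ 1 | ns_a_vpue, data = d)
      fixef(fit)["vpue"]              # fixed-effect coefficient for vpue
    })
    unlist(slopes)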
2013 Apr 25
1
Stochastic Frontier: Finding the optimal scale/scale efficiency by "frontier" package
Hi, I am trying to find the scale efficiency and optimal scale of banks by stochastic frontier analysis, given panel data on the banks. I am free to choose any model of stochastic frontier analysis. The only approach I know that works with R is to estimate a translog production function by sfa() or another related function in the frontier package, and then use the Ray 1998 formula to find the scale
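For reference, a minimal translog fit with sfa() from the frontier package; the data frame bankData and the variables output, labor, capital are assumed names, not from the post:

    library(frontier)
    fit <- sfa(log(output) ~ log(labor) + log(capital)
                 + I(0.5 * log(labor)^2) + I(0.5 * log(capital)^2)
                 + I(log(labor) * log(capital)),
               data = bankData)
    summary(fit)
    # The scale elasticity (and from it, scale efficiency via the
    # Ray 1998 formula) is then computed from the fitted coefficients.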
2007 Apr 02
3
Efficiency
Hi list users, Is there a comparison somewhere of the efficiency of decoding FLAC files, with respect to some benchmark of CPU processing? As compared to, say, APE files? I ask because I have recently switched my entire archive from APE to FLAC. I have an old 400 MHz laptop in my office running Xubuntu, which I run into a receiver. Works great. Since switching to FLAC, I notice the
2008 Jan 07
2
Efficient way to substract all entries in two vectors from each other
Hi all, I'm too inexperienced to come up with the matrix solution elusively appearing on the horizon for the following problem and would appreciate it if you could give me a nudge ... I have two vectors a and b, and need to find the closest match for each value of a in b. How can I do that efficiently? Thanks, Joh
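One vectorized possibility (a sketch, not necessarily what the thread settled on): sort b once, locate each value of a with findInterval(), and compare the two neighbouring candidates:

    bs <- sort(b)
    i  <- findInterval(a, bs)
    lo <- pmax(i, 1)                  # clamp below (i can be 0)
    hi <- pmin(i + 1, length(bs))     # clamp above
    nearest <- ifelse(abs(a - bs[lo]) <= abs(a - bs[hi]), bs[lo], bs[hi])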
2009 Feb 17
2
Efficient matrix computations
Hi, I am looking for two ways to speed up my computations: 1. Is there a function that efficiently computes the 'sandwich product' of three matrices, say ZPZ'? 2. Is there a function that efficiently computes the determinant of a positive definite symmetric matrix? Thanks, S.A.
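Both have standard base-R answers; a sketch, assuming Z and a positive definite symmetric P of conforming dimensions:

    # 1. Sandwich product Z P Z' without forming explicit transposes:
    S <- tcrossprod(Z %*% P, Z)       # = Z %*% P %*% t(Z)

    # 2. Determinant of a positive definite symmetric matrix via its
    #    Cholesky factor; the log scale avoids overflow:
    logdetP <- 2 * sum(log(diag(chol(P))))
    detP    <- exp(logdetP)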
2011 May 30
1
Most efficient update of already existing document?
Hello, What is the most efficient way to update some content of a document with new info gathered after it is first indexed? For example, I first index a lot of document text (let's say it's a mailbox), and after all documents are indexed I determine each document's unique ID (which I wasn't able to determine at initial indexing) and I want to update all documents with this ID. As I
2011 Mar 29
1
Most efficient way of PXE booting Windows PE
Hi guys, We have been using syslinux (memdisk) and gPXE for a while already. We also have a working WinPE boot method via PXE, which is basically a dd of a 'recovery partition' that winpe.wim sits on. That method isn't really the most efficient, because it's loaded into memory twice. We are going to do a major Windows 7 deployment soon and I'm looking for the most efficient way of
2008 Aug 29
1
more efficient double summation...
Dear R users... I wrote R code for this double-summation computation http://www.nabble.com/file/p19213599/doublesum.jpg ------------------------------------------------- Here is my code: sum(sapply(1:m, function(k){sum(sapply(1:m, function(j){x[k]*x[j]*dnorm((mu[j]+mu[k])/sqrt(sig[k]+sig[j]))/sqrt(sig[k]+sig[j])}))})) ------------------------------------------------- In fact, this is
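The nested sapply() calls can be replaced by one expression over the full k-by-j grid; a sketch believed equivalent to the loop above, with x, mu, sig of length m as in the post:

    S <- outer(sig, sig, "+")                  # sig[k] + sig[j]
    M <- outer(mu,  mu,  "+")                  # mu[k] + mu[j]
    total <- sum(outer(x, x) * dnorm(M / sqrt(S)) / sqrt(S))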
2010 Dec 07
2
Efficient way to use data frame of indices to initialize matrix
I have a data frame with three columns, x, y, and a. I want to create a matrix m from these values such that m[x,y] == a. Obviously, I can go row by row through the data frame and insert the value a at the correct x,y location in the matrix. I can make that slightly more efficient (perhaps) by doing something like this: > for (each.x in unique(df$x)) m[each.x, df$y[df$x ==
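Base R supports indexing a matrix by a two-column matrix of (row, column) pairs, which replaces the loop with a single vectorized assignment; a sketch assuming df$x and df$y hold positive integer coordinates:

    m <- matrix(NA, nrow = max(df$x), ncol = max(df$y))
    m[cbind(df$x, df$y)] <- df$a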
2011 Nov 28
2
efficient way to fill up matrix (and evaluate function)
Hi All, I want to do something along the lines of: for (i in 1:n){ for (j in 1:n){ A[i,j] <- myfunc(x[i], x[j]) } } The question is what would be the most efficient way of doing this. Would using functions such as sapply be more efficient than using a for loop? Note that n can be a few thousand, thus at least a 1000x1000 matrix. Thanks, Sachin
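If myfunc() is vectorized in both arguments, outer() builds the whole matrix in one call; if it only accepts scalars, Vectorize() makes it usable with outer() (tidier, though not inherently faster than the loop). A sketch using the post's names:

    A <- outer(x, x, myfunc)                # myfunc vectorized
    A <- outer(x, x, Vectorize(myfunc))     # myfunc scalar-only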
2011 Aug 24
3
Efficient way to Calculate the squared distances for a set of vectors to a fixed vector
I am pretty new to R, so this may be an easy question for most of you. I would like to calculate the squared distances of a large set (let's say 20000) of vectors (let's say of dimension 5) to a fixed vector. Say I have a data frame MY_VECTORS with 20000 rows and 5 columns, and one 5x1 vector y. I would like to efficiently calculate the squared distances between each of the 20000
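A minimal sketch with the poster's (assumed) names MY_VECTORS and y: subtract y from every row, square, and sum across columns:

    M  <- as.matrix(MY_VECTORS)
    d2 <- rowSums(sweep(M, 2, y)^2)   # squared distance of each row to y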
2016 Jan 14
3
High memory use and LVI/Correlated Value Propagation
On Wed, Jan 13, 2016 at 03:38:24PM -0800, Philip Reames wrote: > I don't think that arbitrary limiting the complexity of the search is the > right approach. There are numerous ways the LVI infrastructure could be > made more memory efficient. Fixing the existing code to be memory efficient > is the right approach. Only once there's no more low hanging fruit should > we
2011 Dec 10
2
efficiently finding the integrals of a sequence of functions
Hi folks, I have a question about efficiently finding the integrals of a list of functions. To be specific, here is a simple example showing my question. Suppose we have a function f defined by f <- function(x,y,z) c(x,y^2,z^3). Thus f actually corresponds to three one-dimensional functions f_1(x)=x, f_2(y)=y^2 and f_3(z)=z^3. What I am looking for are the integrals of these three
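One possibility (a sketch, not from the thread): wrap component i as a univariate function and hand it to integrate():

    f <- function(x, y, z) c(x, y^2, z^3)

    int_component <- function(i, lower, upper) {
      g <- function(t) sapply(t, function(v) {
        args <- numeric(3); args[i] <- v
        f(args[1], args[2], args[3])[i]
      })
      integrate(g, lower, upper)$value
    }

    sapply(1:3, int_component, lower = 0, upper = 1)   # 1/2, 1/3, 1/4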
2010 Aug 09
2
efficient matrix element comparison
It is a simple problem: I want to convert the for loop to a more efficient method. It loops through a large vector, checking whether each numeric element is not equal to the index of that element. The following simplified example demonstrates it: > rows <- 10 > collusionM <- Matrix(0,10,10,sparse=TRUE) > matchM <- matrix(c(1,2,3,4,4,6,7,9,9,10))
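The loop reduces to one vectorized comparison against seq_along(); a sketch with the vector from the post:

    matchV <- c(1, 2, 3, 4, 4, 6, 7, 9, 9, 10)
    mismatch <- which(matchV != seq_along(matchV))   # 5 and 8 here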
2007 Feb 01
3
Help with efficient double sum of max(X_i, Y_j) (X & Y vectors)
Greetings. For R gurus this may be a no-brainer, but I could not find pointers to efficient computation of this beast in past help files. Background: I wish to implement a Cramer-von Mises type test statistic which involves double sums of max(X_i, Y_j), where X and Y are vectors of differing length. I am currently using ifelse pointwise in a vector, but have a nagging suspicion that there is a
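outer() with pmax() vectorizes the double sum directly (a standard idiom, though memory grows as length(X) * length(Y)):

    total <- sum(outer(X, Y, pmax))   # sum over all i,j of max(X[i], Y[j])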
2007 Aug 28
2
Efficient way to parse string and construct data.frame
Hi all, I have this list of strings: [1] "1 ,2 ,3" "4 ,5 ,6" Is there an efficient way to convert it to a data.frame: V1 V2 V3 1 1 2 3 2 4 5 6 I can use strsplit to get a list of split strings and then do, say, a = strsplit(mylist, ",") data.frame(V1 = lapply(a, function(x){x[1]}), V2 = lapply(a, function(x){x[2]}),.....) but I'm
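One compact alternative (a sketch): read.table() on a text connection does the splitting and type conversion in a single step:

    s  <- c("1 ,2 ,3", "4 ,5 ,6")
    df <- read.table(text = s, sep = ",")
    df
    #   V1 V2 V3
    # 1  1  2  3
    # 2  4  5  6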
2009 Mar 31
1
Efficient calculation of partial correlations in R
Hello, I'm looking for an efficient function for calculating partial correlations. I'm currently using the pcor.test() function, which is equivalent to the cor.test() function and can receive only single vectors as input. I'm looking for something equivalent to the cor() function, which can receive matrices as input (which should make the calculations much more efficient).
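For a whole matrix at once, partial correlations can be derived from the inverse of the correlation matrix; a minimal sketch of the standard formula (not a particular package's API):

    pcor_matrix <- function(m) {
      p  <- solve(cor(m))             # precision matrix
      d  <- sqrt(diag(p))
      pc <- -p / outer(d, d)          # -p_ij / sqrt(p_ii * p_jj)
      diag(pc) <- 1
      pc
    }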