similar to: robust method to obtain a correlation coeff?

Displaying 20 results from an estimated 2000 matches similar to: "robust method to obtain a correlation coeff?"

2007 Apr 24
1
help interpreting the output of functions - any sources of information
Hi, I am looking for documentation, reference guides, etc. that explain the output of functions. For example, using cor.test(..., method="pearson") for Pearson's correlation coefficient, the output is: Pearson's product-moment correlation data: a and b t = 0.2878, df = 14, p-value = 0.7777 alternative hypothesis: true correlation is not equal to 0 95 percent confidence interval:
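
For reference, a minimal sketch of where those numbers live in the object cor.test() returns; the vectors a and b below are made-up stand-ins for the poster's data (n = 16, so df = 14 as in the quoted output):

    # Made-up stand-ins for the poster's vectors a and b (n = 16, so df = 14)
    set.seed(1)
    a <- rnorm(16)
    b <- rnorm(16)

    ct <- cor.test(a, b, method = "pearson")
    ct            # prints t, df, p-value, confidence interval and the estimate
    ct$estimate   # the Pearson correlation coefficient r
    ct$p.value    # p-value for H0: true correlation equals 0
    ct$conf.int   # 95 percent confidence interval for the correlation
    str(ct)       # lists every component of the returned "htest" object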
2017 Aug 09
2
generating cran package list matching R minor version
Dear All, It is a common problem to update R for distributors. My challenge is to maintain R module files for a cluster, using easybuild. My question is: Is there a way to derive a list of CRAN packages and their versions for a given version of R? In case the R-help list is the wrong list to address, any pointer is appreciated. Best regards, Christian Meesters
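
As far as I know there is no single built-in query for "all of CRAN as of R version X.Y", but two related calls give a partial answer: installed.packages() records the R version each installed package was built under, and available.packages() (with its default filters) lists what a mirror offers for the running R version. A sketch:

    # Partial answer 1: packages in the current library, their versions, and
    # the R version each was built under
    inst <- installed.packages()[, c("Package", "Version", "Built")]
    head(inst)

    # Partial answer 2: what a CRAN mirror offers for the *running* R version;
    # the "R_version" filter drops packages this R cannot install
    avail <- available.packages(filters = c("R_version", "duplicates"))
    head(avail[, c("Package", "Version")])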
2017 Aug 09
0
generating cran package list matching R minor version
> On Aug 9, 2017, at 5:38 AM, Christian Meesters <meesters at uni-mainz.de> wrote: > > Dear All, > > It is a common problem to update R for distributors. My challenge is to maintain R module files for a cluster, using easybuild. > > My question is: Is there a way to derive a list of cran packages and their version for a given version of R? I'm wondering whether
2015 Apr 18
2
"keep qlp coeff precision such that only 32-bit math is required"
stream_encoder.c has the following code: /* try to keep qlp coeff precision such that only 32-bit math is required for decode of <=16bps streams */ if(subframe_bps <= 16) { ... But FLAC can convert 16-bit input to 17-bit if mid-side coding is used. So, does it make sense to compare subframe_bps with 17? (The patch is attached. What do you think about it?)
2015 Apr 22
2
"keep qlp coeff precision such that only 32-bit math is required"
Martijn van Beurden wrote: > Yes, but that MAX value is used to loop over the > qlp_coeff_precision values between MIN and MAX. So, if the > qlp_coeff_precision value is limited in the loop but MAX is not > limited, the loop does the exact same thing multiple times: a > waste of time. Therefore, only the MAX should be limited. > > I don't think the logic is needed
2015 Apr 18
0
"keep qlp coeff precision such that only 32-bit math is required"
Brian Willoughby wrote: > Ok, I just did a comparison of 1.2.1 with 1.3.2, and the change you're > suggesting was already there before. So, now the question becomes: why > was the code changed in the first place? There should be some indication of why in the git history. Erik
2015 Apr 19
2
"keep qlp coeff precision such that only 32-bit math is required"
Martijn van Beurden wrote: > Yes, indeed. I removed the 17-bits part because I just matched > the code in evaluate_lpc_subframe_ with the process_subframe_ > code. It seems it only makes sense that those two pieces of code > are the same. A bit of history: 1) The conditional "if(subframe_bps <= 16)" was added to evaluate_lpc_subframe_() in the commit
2015 Apr 18
2
"keep qlp coeff precision such that only 32-bit math is required"
Ok, I just did a comparison of 1.2.1 with 1.3.2, and the change you're suggesting was already there before. So, now the question becomes: why was the code changed in the first place? Was there a bug that was fixed by changing 17 to 16, or did someone just get overzealous in a code review and thought that 17 was a bad choice? Perhaps 32 bits isn't actually large enough to handle the
2015 Apr 18
2
"keep qlp coeff precision such that only 32-bit math is required"
Erik de Castro Lopo wrote: > There should be some indication of why in the git history. http://git.xiph.org/?p=flac.git;a=commitdiff;h=27846708fe6271e5e3965a4bbad99baa1ca24c49 Now I remember a discussion about a bug in the -p switch: the old code subtracts lpc_order instead of FLAC__bitmath_ilog2(lpc_order), and this commit fixes this. It seems that the logic in process_subframe_() and in
2015 Apr 20
2
"keep qlp coeff precision such that only 32-bit math is required"
Martijn van Beurden wrote: > Or maybe the question is: why is this code in evaluate_lpc_subframe anyway, > i.e, why is this code duplicated? If process_subframe_ sets the > qlp_precision for evaluate_lpc_subframe, why should the latter ignore that? > > We can only speculate about this, but I think this code has been duplicated > by accident, and therefore it wasn't changed
2011 Jun 27
1
group interaction in a varying coeff. model (mgcv)
Dear UseRs, I built varying coefficient models (in mgcv) for two groups separately, with one explanatory and one moderator variable (see the example below). # ------- # Example: # ------ # generate moderator variable (can be the same for both groups) modvar <- c(1:1000) # generate group1 values x1 <- rnorm(1000) y1 <- scale(cbind(1,poly(modvar,2))%*%c(1,2,1)*x1 + rnorm(1000,0,0.3)) #
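
One way to fold both groups into a single mgcv fit is the usual varying-coefficient trick: give each group its own zero-masked copy of the explanatory variable and attach a separate s(modvar, by = ...) term to each. A sketch along the lines of the poster's simulation; all names and parameter values are illustrative:

    library(mgcv)

    # One data frame holding both groups (names and values are illustrative)
    modvar <- rep(1:1000, 2)
    group  <- factor(rep(c("g1", "g2"), each = 1000))
    x      <- rnorm(2000)
    y      <- ifelse(group == "g1", (1 + 0.002 * modvar) * x,
                     (2 - 0.001 * modvar) * x) + rnorm(2000, 0, 0.3)
    dat <- data.frame(y, x, modvar, group)

    # Zero-masked copies of x: each s(modvar, by = ...) term then estimates the
    # coefficient curve of x for one group only
    dat$x_g1 <- dat$x * (dat$group == "g1")
    dat$x_g2 <- dat$x * (dat$group == "g2")

    fit <- gam(y ~ group + s(modvar, by = x_g1) + s(modvar, by = x_g2), data = dat)
    summary(fit)   # compare the two smooths to judge the group difference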
2007 Mar 07
2
Power calculation for detecting linear trend
Dear people, I have a problem doing a power calculation. In Fryer and Nicholson (1993), ICES J. mar. Sci. 50: 161-168, page 164, an example is given with the following characteristics: T=5, points in time; R=5, replicates; Var.within=0.1; q=10, a 10% increase per year. The degrees of freedom for the test are calculated as Vl=T*R-2=23 and the non-centrality parameter Dl=4.54. Using this they get a
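
Taking the quoted quantities at face value (an F test of the trend with 1 and Vl = 23 degrees of freedom and non-centrality Dl = 4.54), the corresponding power at the 5% level could be computed from the non-central F distribution; whether this matches Fryer and Nicholson's exact test setup is an assumption:

    Vl    <- 5 * 5 - 2        # T*R - 2 = 23 degrees of freedom, as in the post
    Dl    <- 4.54             # non-centrality parameter quoted from the example
    alpha <- 0.05
    crit  <- qf(1 - alpha, df1 = 1, df2 = Vl)          # critical value of the test
    power <- 1 - pf(crit, df1 = 1, df2 = Vl, ncp = Dl) # power from non-central F
    power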
2005 Jun 15
4
Multiple line plots
Greetings, I would like to plot three lines on the same figure, and I am lost. There is an answer to a similar thread… but I tried matplot and it is beyond me. An example of the data follows: Year EM IM BM 1983 9.1 16.8 -7.7 1984 12.0 18.0 -6.0 1985 13.6 19.1 -5.5 1986 12.4 17.3 -4.9 1987 14.6 20.3 -5.7 1988 20.6 23.3 -2.6 1989 25.0 27.2 -2.2 1990 28.4 30.2 -1.8 1991 33.3 31.2 2.1 1992 40.6
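
A sketch of the matplot() route, using the first few rows of the posted data; the colours and legend position are arbitrary choices:

    # First six rows of the posted data
    d <- data.frame(Year = 1983:1988,
                    EM   = c(9.1, 12.0, 13.6, 12.4, 14.6, 20.6),
                    IM   = c(16.8, 18.0, 19.1, 17.3, 20.3, 23.3),
                    BM   = c(-7.7, -6.0, -5.5, -4.9, -5.7, -2.6))

    # matplot draws one line per column of its second argument
    matplot(d$Year, d[, c("EM", "IM", "BM")], type = "l", lty = 1,
            col = c("black", "red", "blue"), xlab = "Year", ylab = "Value")
    legend("topleft", legend = c("EM", "IM", "BM"), lty = 1,
           col = c("black", "red", "blue"))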
2011 Jul 11
2
problem with 'predict'
Hi, I would like to tabulate the likelihood of affection (case) status. For this, I retrieve indices of affected people and controls from my data set and proceed as follows: flags <- c(rep(1, length(patient_indices)), rep(0, length(control_indices))) # dataset is a data.frame and param the parameter to be analysed: data1 <- dataset[,param][c(patient_indices, control_indices)] fit1 <- glm(flags ~
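
A guess at how the truncated call might continue: a binomial glm of the affection flag on the chosen parameter, with predicted probabilities pulled out via predict(type = "response"). The simulated flags and data1 below only stand in for the poster's vectors:

    # Simulated stand-ins for the poster's data1 and flags vectors
    set.seed(42)
    data1 <- c(rnorm(50, mean = 1), rnorm(50, mean = 0))  # parameter values
    flags <- c(rep(1, 50), rep(0, 50))                    # 1 = patient, 0 = control

    # A plausible continuation of the truncated call: logistic regression
    fit1  <- glm(flags ~ data1, family = binomial)

    # Predicted probability of affection on the response (0-1) scale, tabulated
    p_hat <- predict(fit1, type = "response")
    table(cut(p_hat, breaks = seq(0, 1, by = 0.2)), flags)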
2012 Jul 09
3
Predicted values for zero-inflated Poisson
Hi all- I fit a zero-inflated Poisson model to model bycatch rates, using an offset term for effort. I need to apply the fitted model to datasets with varying levels of effort to predict the associated levels of bycatch. I am seeking assistance as to the correct way to code this. Thanks in advance! Laura
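
One common route, sketched below with made-up data and variable names, is pscl::zeroinfl() with log(effort) supplied as an offset; predictions for new effort levels then come from predict(type = "response") on a new data frame that includes the effort column:

    library(pscl)

    # Invented bycatch-style data: counts y, covariate x, and effort
    set.seed(1)
    n      <- 200
    effort <- runif(n, 1, 10)
    x      <- rnorm(n)
    mu     <- exp(-1 + 0.5 * x + log(effort))
    y      <- ifelse(runif(n) < 0.3, 0, rpois(n, mu))   # crude zero inflation
    dat    <- data.frame(y, x, effort)

    # Count model with a log(effort) offset; "| 1" keeps the zero part intercept-only
    fit <- zeroinfl(y ~ x + offset(log(effort)) | 1, data = dat)

    # Expected bycatch at new effort levels; type = "response" gives (1 - p0) * mu
    newdat <- data.frame(x = 0, effort = c(1, 5, 10))
    predict(fit, newdata = newdat, type = "response")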
2013 Jun 27
1
corrgram with two datasets
Hi, I would like to display inter-parameter scatter plots like those with the corrgram package (see upper triangle here: http://www.statmethods.net/advgraphs/images/corrgram2.png ), just that I would like to plot two datasets instead of one. Say one with black and one with red dots. Or a merged dataset where an indicator column is used to assign different colors to particular dots - with still
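
corrgram() itself is organised around a single correlation matrix, so a plain base-R alternative that matches the described picture (one scatter-plot matrix, dots coloured by an indicator column of a merged data set) might look like this; the data and column names are invented:

    # Two invented data sets with the same columns, merged with an indicator
    set.seed(1)
    d1 <- data.frame(a = rnorm(50),    b = rnorm(50),    c = rnorm(50))
    d2 <- data.frame(a = rnorm(50, 1), b = rnorm(50, 1), c = rnorm(50, 1))
    merged <- rbind(cbind(d1, set = "one"), cbind(d2, set = "two"))

    # One scatter-plot matrix, dots coloured black/red by the indicator column
    pairs(merged[, c("a", "b", "c")],
          col = ifelse(merged$set == "one", "black", "red"),
          pch = 19, upper.panel = NULL)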
2011 Dec 13
1
k-means cluster and plot labels
Hi, For my data, I followed the example of http://en.wikibooks.org/wiki/Data_Mining_Algorithms_In_R/Clustering/K-Means#Execution and got some very nice results. Despite the fact that I want to achieve a bit more by clustering my data (stratification beyond case-control), the actual data frame contains a column labeled "C" which holds a case-control indicator (here either "Z"
2011 Dec 16
1
kmeans and plot labels
Hi, Thanks Sarah. Unfortunately I have not gotten any further. My question, perhaps a bit clearer, is how to display the case-control status (or any other arbitrary point label) after clustering in a plot. In a bit of pseudo code, where dataset is a data.frame, parameters are the column names holding numerical values (no NAs), and nclasses is the desired number of classes: fit <-
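
Continuing the pseudo code with invented data, one way to show the case-control status after clustering is to colour points by fit$cluster and overlay the indicator column with text():

    # Invented data following the pseudo code: 'dataset' with numeric columns
    # named in 'parameters', a case-control column "C", and 'nclasses' clusters
    set.seed(1)
    dataset <- data.frame(p1 = rnorm(100), p2 = rnorm(100),
                          C  = sample(c("Z", "K"), 100, replace = TRUE))
    parameters <- c("p1", "p2")
    nclasses   <- 3

    fit <- kmeans(dataset[, parameters], centers = nclasses)

    # Colour points by cluster, then overlay the case-control label on each point
    plot(dataset$p1, dataset$p2, col = fit$cluster, pch = 19,
         xlab = "p1", ylab = "p2")
    text(dataset$p1, dataset$p2, labels = dataset$C, pos = 3, cex = 0.7)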
2012 Jul 03
0
need help EM algorithm to find MLE of coeff in mixed effects model
Dear All, I have a general question about coefficient estimation in a mixed model. I simulated a very basic model: Y | b = X*beta + Z*b + e, with e ~ N(0, sigma^2 * diag(ni)) and b ~ N(0, psi), i.e. bivariate normal, where b is the latent variable, Z and X are ni*2 design matrices, sigma^2 is the error variance, and Y are longitudinal data, i.e. there are ni
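
Not an EM implementation, but a sketch that simulates the stated model (random intercept and slope, b ~ N(0, psi), error variance sigma^2) and fits it with lme4::lmer() as a reference against which an EM routine could be checked; all dimensions and parameter values are made up:

    library(MASS)   # mvrnorm, for drawing the bivariate random effects
    library(lme4)

    set.seed(1)
    N     <- 100                                   # subjects
    ni    <- 5                                     # observations per subject
    beta  <- c(2, -1)                              # fixed effects
    psi   <- matrix(c(1, 0.3, 0.3, 0.5), 2, 2)     # covariance of b_i
    sigma <- 0.5                                   # error standard deviation

    time <- rep(1:ni, times = N)                   # within-subject time
    id   <- rep(1:N, each = ni)
    b    <- mvrnorm(N, mu = c(0, 0), Sigma = psi)  # random intercept and slope
    X    <- cbind(1, time)                         # ni*2 design, shared by X and Z
    y    <- as.numeric(X %*% beta) + b[id, 1] + b[id, 2] * time +
            rnorm(N * ni, 0, sigma)

    dat <- data.frame(y, time, id = factor(id))
    fit <- lmer(y ~ time + (1 + time | id), data = dat)
    summary(fit)   # estimates of beta, psi and sigma^2 to check an EM routine against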
2010 May 24
1
retrieve path analysis coefficients (package agricolae)
Dear list, I'd like to use path.analysis in the package agricolae in batch mode on many files, retrieving the path coefficients for each run and appending them to a table. I don't see any posts in the help archives about this package or the path.analysis function. I've tried creating an object out of the call to path.analysis, but no matter what I try, the function automatically prints
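
A sketch under two assumptions: that path.analysis() takes the correlation matrix among the predictors (corr.x) and their correlations with the response (corr.y), and that, depending on the package version, the coefficients may only be printed rather than returned, in which case capture.output() is the fallback. The matrices below are invented:

    library(agricolae)

    # Invented correlation matrices for two predictors X1, X2 and a response Y
    corr.x <- matrix(c(1, 0.5, 0.5, 1), 2, 2,
                     dimnames = list(c("X1", "X2"), c("X1", "X2")))
    corr.y <- matrix(c(0.6, 0.7), 2, 1,
                     dimnames = list(c("X1", "X2"), "Y"))

    res <- path.analysis(corr.x, corr.y)
    str(res)   # inspect whatever the function returns, if anything

    # Fallback if the coefficients are only printed: capture the printed text
    out <- capture.output(path.analysis(corr.x, corr.y))
    head(out)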