similar to: formula used by R to compute the t-values in a linear regression

Displaying 20 results from an estimated 2000 matches similar to: "formula used by R to compute the t-values in a linear regression"

2004 Jun 07
2
MCLUST Covariance Parameterization.
Hello all (especially MCLUST users). I'm trying to make use of the MCLUST package by C. Fraley and A. Raftery. My problem is trying to figure out how the model identifier (e.g., EII, VII, VVI, etc.) relates to the covariance matrix. The parameterization of the covariance matrix uses the decomposition of Banfield and Raftery (1993) and Fraley and Raftery (2002) where
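For readers with the same question: in the Banfield and Raftery decomposition, each component covariance is Sigma_k = lambda_k * D_k %*% A_k %*% t(D_k), where lambda_k sets the volume, the diagonal matrix A_k the shape, and the orthogonal matrix D_k the orientation; the three letters of the identifier say whether each of these is Equal across components, Variable, or trivially the Identity (spherical/diagonal). A minimal sketch in plain R (not using mclust internals; all numbers are made up):

    ## Component covariance from the Banfield & Raftery (1993) decomposition
    ## Sigma_k = lambda_k * D_k %*% A_k %*% t(D_k)
    ## lambda: scalar (volume), A: diagonal (shape), D: orthogonal (orientation)
    make_sigma <- function(lambda, A, D) lambda * D %*% A %*% t(D)

    d <- 2                 # dimension (illustrative)
    I <- diag(d)

    ## EII: spherical, equal volume      -> Sigma_k = lambda * I for every k
    sigma_EII <- make_sigma(lambda = 1.5, A = I, D = I)

    ## VII: spherical, varying volume    -> Sigma_k = lambda_k * I
    sigma_VII_k1 <- make_sigma(lambda = 0.5, A = I, D = I)
    sigma_VII_k2 <- make_sigma(lambda = 3.0, A = I, D = I)

    ## VVI: diagonal (D = I), varying volume and shape; det(A) = 1 by convention
    sigma_VVI_k1 <- make_sigma(lambda = 2, A = diag(c(4, 0.25)), D = I)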
2002 May 06
2
A logit question?
Hello dear r-gurus! I have a question about the logit model. I think I have misunderstood something and I'm trying to find a bug in my code, or even better in my head. Any help is appreciated. In short, the question is: why am I not getting the same coefficients from the logit regression when using a link function versus an explicit transformation of the dependent variable? Some details below. I'm
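A common source of this discrepancy is that glm() with a logit link fits by maximum likelihood on the original response, while regressing an explicitly logit-transformed response with lm() is least squares on a different scale, so the two sets of coefficients generally differ. A small sketch of the comparison, with simulated data (all names and numbers illustrative):

    set.seed(1)
    n <- 200
    x <- rnorm(n)
    m <- 50                                  # trials per observation
    p <- plogis(0.5 + 1.2 * x)               # true success probabilities
    y <- rbinom(n, size = m, prob = p) / m   # observed proportions

    ## 1) Logit link: maximum likelihood on the binomial response
    fit_glm <- glm(y ~ x, family = binomial(link = "logit"), weights = rep(m, n))

    ## 2) Explicit transformation: least squares on an empirical logit of y
    ##    (the +0.5 adjustment guards against y == 0 or 1); a different estimator
    elogit <- qlogis((y * m + 0.5) / (m + 1))
    fit_lm <- lm(elogit ~ x)

    coef(fit_glm)
    coef(fit_lm)    # similar but not identical to the glm coefficients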
2000 Oct 23
4
More mdct questions
Sorry for starting another topic, this is actually a reply to Segher's post on Sun Oct 22 on the 'mdct question' topic. I wasn't subscribed properly and so I didn't get email confirmation and thus can't add to that thread. So Segher, if the equation is indeed what you say it is, then replacing mdct_backward with this version should work, but it doesn't. Am I applying
2001 Jan 02
0
mdct explanation
...as promised. This describes the mdct used in my d.m.l patch. I think it is the same as the Lee fast DCT. I typed it in a kind of pseudo-TeX, 'cause the ASCII art would kill me. Hope you can read TeX source; if not, ask someone who can to make a .ps/.gif/.whatever of the TeX output and put it on a webpage or something. I'm too lazy to do it (and besides, I don't have access to TeX,
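For readers following the two MDCT threads above, a naive direct-evaluation version of the forward and backward transforms under the common textbook convention may help make the formula concrete; it is not necessarily the exact convention or normalization used in the patch being discussed, and it is O(N^2), so it is only a reference, written here in R:

    ## Naive MDCT/IMDCT, textbook convention: 2N samples -> N coefficients.
    ## X[k] = sum_{n=0}^{2N-1} x[n] * cos( (pi/N) * (n + 1/2 + N/2) * (k + 1/2) )
    ## Normalization differs between implementations, and perfect reconstruction
    ## additionally requires windowing plus 50% overlap-add of adjacent blocks.
    mdct_forward_naive <- function(x) {
      N <- length(x) / 2
      n <- seq_along(x) - 1
      sapply(0:(N - 1), function(k)
        sum(x * cos(pi / N * (n + 0.5 + N / 2) * (k + 0.5))))
    }

    mdct_backward_naive <- function(X) {
      N <- length(X)
      k <- seq_along(X) - 1
      sapply(0:(2 * N - 1), function(n)
        (1 / N) * sum(X * cos(pi / N * (n + 0.5 + N / 2) * (k + 0.5))))
    }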
2003 Nov 15
2
Using the rsync checksums for handling large logfiles.
Dear all, I've only just joined this list, but I can't find any mention of this idea anywhere else, so I thought I'd just post here before getting too deep into programming and possibly reinventing the wheel. Here at Aber, we have around 30 unix and linux servers doing core services. Each one is maintaining its own logfiles and, for various reasons, we want to keep these on the
2002 Feb 06
4
Weighted median
Is there a weighted median function out there, similar to weighted.mean() but for medians? If not, I'll try to implement or port one myself. The need for a weighted median came from the following optimization problem: x* = arg min_x ( a|x| + sum_{k=1}^n |x - b_k| ), where a is a *positive* real scalar, x is a real scalar, n is an integer, and the b_k are negative and positive scalars
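For later readers: a weighted median can be computed by sorting the values and taking the first one at which the cumulative weight reaches half of the total weight. A minimal sketch (illustrative only; packaged implementations exist nowadays, e.g. weightedMedian() in matrixStats):

    ## Weighted median: smallest x[k] whose cumulative weight reaches half of
    ## the total weight.  Tie-breaking/interpolation conventions vary.
    weighted_median <- function(x, w = rep(1, length(x))) {
      stopifnot(length(x) == length(w), all(w >= 0))
      o <- order(x)
      x <- x[o]; w <- w[o]
      cw <- cumsum(w) / sum(w)
      x[which(cw >= 0.5)[1]]
    }

    weighted_median(c(1, 2, 3, 10), w = c(1, 1, 1, 5))   # returns 10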
2000 Apr 04
0
stochastic process transition probabilities estimation
Hi all, I'm new to R (and S), and relatively new to statistics (I'm a computer scientist), so I apologize in advance if my question is silly. My problem is this: I have a (sample of a) discrete-time stochastic process {X_t} and I want to estimate Pr{ X_t | X_{t-l_1}, X_{t-l_2}, ..., X_{t-l_k} } where l_1, l_2, ..., l_k are some fixed time lags. It will be enough for me to compute
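If the state space is discrete, one simple approach is to build the lagged covariates with embed() and cross-tabulate. A minimal sketch assuming lags 1 and 2 and a small state space (data and names are illustrative):

    set.seed(42)
    x <- sample(1:3, 500, replace = TRUE)   # toy discrete-time, discrete-state series

    ## embed(x, 3) gives rows (x_t, x_{t-1}, x_{t-2})
    E <- embed(x, 3)
    colnames(E) <- c("x_t", "x_tm1", "x_tm2")

    ## Empirical Pr(X_t | X_{t-1}, X_{t-2}): counts normalized within each
    ## (x_{t-1}, x_{t-2}) cell
    counts <- table(as.data.frame(E))
    phat   <- prop.table(counts, margin = c(2, 3))
    phat["2", "1", "3"]   # estimate of Pr(X_t = 2 | X_{t-1} = 1, X_{t-2} = 3)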
2015 Feb 03
2
Seed in 'parallel' vignette
Hi, This is most likely only a minor technicality, but I saw the following: On page 6 of the 'parallel' vignette (http://stat.ethz.ch/R-manual/R-devel/library/parallel/doc/parallel.pdf), the random-number generator "L'Ecuyer-CMRG" is said to have seed "(x_n, x_{n-1}, x_{n-2}, y_n, y_{n-1}, y_{n-2})". However, in L'Ecuyer et al. (2002), the seed is given with
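For anyone wanting to look at what R actually stores: after switching to "L'Ecuyer-CMRG", .Random.seed is an integer vector of length seven, the first element encoding the RNG kind and the remaining six the generator's state; the ordering of those six values relative to L'Ecuyer et al. (2002) is exactly the point raised above, which this sketch does not settle, it only shows where to look:

    RNGkind("L'Ecuyer-CMRG")
    set.seed(123)

    s <- .Random.seed
    length(s)    # 7: kind code followed by the six-element CMRG state
    s[-1]        # the six state values discussed in the vignette

    ## parallel::nextRNGStream() advances to the next independent stream
    s2 <- parallel::nextRNGStream(s)
    s2[-1]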
2006 Aug 24
1
lmer(): specifying i.i.d random slopes for multiple covariates
Dear readers, Is it possible to specify a model y = X %*% beta + Z %*% b; b = (b_1,..,b_k) and b_i ~ N(0, v^2) for i = 1,..,k, that is, a model where the random slopes for different covariates are i.i.d., in lmer() and how? In lme() one needs a constant grouping factor (e.g.: all=rep(1,n)) and would then specify: lme(fixed= y~X, random= list(all=pdIdent(~Z-1)) ); that's how it's done in the
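For completeness, here is the nlme route mentioned in the post written out as a runnable sketch with simulated data (the covariate names z1..z3 and all numbers are illustrative): a constant grouping factor together with pdIdent() constrains the random slopes for the Z columns to be i.i.d. N(0, v^2).

    library(nlme)

    set.seed(1)
    n   <- 200
    dat <- data.frame(x = rnorm(n), z1 = rnorm(n), z2 = rnorm(n), z3 = rnorm(n))
    b   <- rnorm(3, sd = 0.5)                    # i.i.d. random slopes, shared variance
    dat$y   <- 1 + 2 * dat$x +
               drop(as.matrix(dat[, c("z1", "z2", "z3")]) %*% b) + rnorm(n)
    dat$all <- factor(rep(1, n))                 # constant grouping factor

    ## pdIdent: one common variance v^2 for all random slopes, zero correlations
    fit <- lme(fixed  = y ~ x,
               random = list(all = pdIdent(~ z1 + z2 + z3 - 1)),
               data   = dat)
    VarCorr(fit)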
2007 Sep 12
1
Verifying understanding of backup-dir vs compare-dest
Hello, Say one starts with creating an archive rsync work -> archive and periodically (below, i = 1 to N) does rsync --backup-dir=a_<i> work -> archive and rsync --compare-dest=archive work -> b_<i> Then suppose one wants to recover the work directory as it was at time k. Using the b_<i> directories, one would merely merge
2006 May 20
1
(PR#8877) predict.lm does not have a weights argument for newdata
Dear R developers, I am a little disappointed that my bug report only made it to the wishlist, with the argument: "Well, it does not say it has. Only relevant to prediction intervals." predict.lm does calculate prediction intervals for linear models from weighted regression, so they should be correct, right? As far as I can see they are bound to be wrong in almost all cases, if no weights
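For later readers: recent versions of predict.lm do accept a weights argument (a numeric vector, or a one-sided formula evaluated in newdata) that feeds into the prediction-interval variance for weighted fits. A small sketch of the intended usage, with simulated data (illustrative only):

    set.seed(1)
    d <- data.frame(x = 1:50)
    d$w <- d$x                      # error variance proportional to 1/w (illustrative)
    d$y <- 2 + 0.5 * d$x + rnorm(50, sd = 1 / sqrt(d$w))

    fit <- lm(y ~ x, data = d, weights = w)

    nd <- data.frame(x = c(10, 40), w = c(10, 40))

    ## Prediction intervals that use the weights supplied for newdata
    predict(fit, newdata = nd, interval = "prediction", weights = ~ w)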
2008 Apr 05
2
Adding a Matrix Exponentiation Operator
Hi all I recently started to write a matrix exponentiation operator for R (by adding a new operator definition to names.c, and adding the following code to arrays.c). It is not finished yet, but I would like to solicit some comments, as there are a few areas of R's internals that I am still feeling my way around. Firstly: 1) Would there be interest in adding a new operator %^% that performs
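While the C-level operator is under discussion, a pure-R version by repeated squaring may be useful for testing or comparison; this is only a sketch (non-negative integer exponents, square numeric matrices), not the proposed names.c/arrays.c implementation:

    ## Matrix power by repeated squaring: O(log n) matrix multiplications
    "%^%" <- function(A, n) {
      stopifnot(is.matrix(A), nrow(A) == ncol(A), n >= 0, n == round(n))
      result <- diag(nrow(A))          # A %^% 0 is the identity
      while (n > 0) {
        if (n %% 2 == 1) result <- result %*% A
        A <- A %*% A
        n <- n %/% 2
      }
      result
    }

    A <- matrix(c(2, 0, 1, 3), 2, 2)
    A %^% 3                            # same as A %*% A %*% A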
2006 Sep 14
1
Rv generation
Hi, Can someone tell me how to generate random variates from the CDF below using the inverse-transform technique? Thanks for your help and time. My CDF is as follows: \[ F(x) = 0 \quad \text{if } x < 0 \] \[ F(x) = \frac{x - x_i}{x_{i+1} - x_i}\,(p_{i+1} - p_i) + p_i \quad \text{for } x_i \le x < x_{i+1} \] \[ F(x) = 1 \quad \text{if } x > x_{i+1} \] Regards Murthy
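Since this CDF is piecewise linear between the knots (x_i, p_i), its inverse is also piecewise linear, so inverse-transform sampling reduces to linear interpolation of uniform draws. A minimal sketch with made-up knots:

    ## Knots of the piecewise-linear CDF: F(x_i) = p_i (illustrative values);
    ## p must start at 0, end at 1, and be non-decreasing
    x <- c(0, 1, 3, 6, 10)
    p <- c(0, 0.2, 0.55, 0.9, 1)

    ## Inverse CDF by linear interpolation, then inverse-transform sampling
    Finv  <- approxfun(p, x)          # maps u in [0, 1] to x
    u     <- runif(1e4)
    draws <- Finv(u)

    ## Quick check: the empirical CDF of the draws should track the knots
    plot(ecdf(draws)); lines(x, p, col = 2)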
2012 Jun 25
4
do.call or something instead of for
Dear R users, I'd like to compute X as below: X_{i,t} = 0.1*t + 2*X_{i,t-1} + W_{i,t}, where the W_{i,t} are from Uniform(0,2) and X_{i,0} = 1 + 5*W_{i,0}. Of course, I can do this with a "for" statement, but I don't think that's a good idea because the ranges of "i" and "t" are too big. So my question is: is there any better way to avoid the "for" statement
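Because each X_{i,t} depends on X_{i,t-1}, the recursion over t cannot be removed entirely, but it can be vectorized across i so that the explicit loop runs only over t (and stats::filter(..., method = "recursive") can replace even that, one series at a time). A minimal sketch with illustrative sizes:

    set.seed(1)
    N  <- 1000   # number of series i
    TT <- 500    # number of time points t

    W <- matrix(runif(N * (TT + 1), 0, 2), nrow = N, ncol = TT + 1)  # W_{i,0..T}
    X <- matrix(NA_real_, nrow = N, ncol = TT + 1)
    X[, 1] <- 1 + 5 * W[, 1]                                         # X_{i,0}

    ## Loop only over t; each step is vectorized over all i at once
    for (t in 1:TT) {
      X[, t + 1] <- 0.1 * t + 2 * X[, t] + W[, t + 1]
    }

    ## Per-series alternative without the explicit loop:
    ## stats::filter(0.1 * (1:TT) + W[i, -1], filter = 2,
    ##               method = "recursive", init = X[i, 1])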
2013 Mar 11
3
How to obtain the original indices of elements after sorting
Dear All, Suppose I have a vector X = (x_1, x_2, ...., x_n) and X_sort = sort(X) = (x_(1), x_(2), ... , x_(n)), and I would like to know the original positions of these ordered x_(i) in X; how can I do it? Case 1: all values are unique. x <- c( 3, 5, 4, 6); x.sort <- sort(x) # I would like to obtain a vector (1, 3, 2, 4) which indicates that 3 in x is still the 1st element in x.sort, 5 is at
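The base functions order(), rank(), and sort(..., index.return = TRUE) cover both directions of this question; a short sketch with the example vector:

    x <- c(3, 5, 4, 6)

    rank(x)
    ## 1 3 2 4   -- position each x[i] ends up at in sort(x) (the vector asked for)

    order(x)
    ## 1 3 2 4   -- which element of x supplies each position of sort(x)
    ##            (identical here only because this permutation is its own inverse)

    sort(x, index.return = TRUE)$ix   # same information as order(x)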
2012 Jul 28
4
quantreg Wald-Test
Dear all, I know that my question is somewhat special but I tried several times to solve the problems on my own but I am unfortunately not able to compute the following test statistic using the quantreg package. Well, here we go, I appreciate every little comment or help as I really do not know how to tell R what I want it to do^^ My situation is as follows: I have a data set containing a
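Without seeing the exact statistic intended, one common case, a Wald test that slope coefficients are equal across quantiles, is available through anova() on rq fits; a minimal sketch under that assumption, with simulated data:

    library(quantreg)

    set.seed(1)
    n <- 500
    x <- runif(n)
    y <- 1 + 2 * x + (1 + x) * rnorm(n)   # heteroscedastic, so slopes differ by tau

    f25 <- rq(y ~ x, tau = 0.25)
    f50 <- rq(y ~ x, tau = 0.50)
    f75 <- rq(y ~ x, tau = 0.75)

    ## Joint Wald-type test that the slope on x is the same at all three quantiles
    anova(f25, f50, f75)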
2015 Jun 01
2
sum(..., na.rm=FALSE): Summing over NA_real_ values much more expensive than non-NAs for na.rm=FALSE? Hmm...
I'm observing that base::sum(x, na.rm=FALSE) for typeof(x) == "double" is much more time consuming when there are missing values than when there are not. I'm observing this on both Windows and Linux, and it's quite surprising to me. Currently, my main suspect is how R was built. The second suspect is my brain. I hope that someone can clarify the below
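To make the observation reproducible, a small timing sketch (exact timings depend on the platform and on how R was built, e.g. whether long-double accumulation is used):

    x <- rnorm(1e8)
    y <- x
    y[1] <- NA_real_        # a single NA; na.rm = FALSE in both calls below

    system.time(sum(x))     # no NAs
    system.time(sum(y))     # with an NA: reportedly much slower on some builds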
2006 Apr 08
1
cross product
Hi, there. How do I calculate the cross product of the form \sum_{i=1}^{n} X_{i}^{t} \Sigma X_{i} in R without using a loop? X_{i} is the covariate matrix for subject i, and \Sigma is the covariance matrix. Thanks for your help. Yulei
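If the X_i are held in a list (or an array split into a list), the sum can be written without an explicit for loop using lapply() and Reduce(); a minimal sketch with illustrative dimensions:

    set.seed(1)
    p <- 4; q <- 3; n <- 100
    Sigma <- crossprod(matrix(rnorm(p * p), p))                            # p x p
    Xs    <- replicate(n, matrix(rnorm(p * q), p, q), simplify = FALSE)    # X_i: p x q

    ## sum_i t(X_i) %*% Sigma %*% X_i, with no explicit for loop
    S <- Reduce(`+`, lapply(Xs, function(Xi) crossprod(Xi, Sigma %*% Xi)))
    dim(S)   # q x q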
2002 Sep 04
3
strange things with eval and parent frames
Dear mailing list, I have found some strange behaviour which I think relates to parent frames and eval. Can anyone explain what's going on here? First example: > test.parent.funcs <- function() { outer.var <- 5; subfunc1 <- function() substitute( outer.var, envir=parent.frame()); print( subfunc1()); subfunc2b <- function() eval( quote( outer.var), envir=parent.frame()); print(
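A stripped-down sketch of the eval()-in-the-caller's-frame idiom, the better-understood half of the question, may help make it concrete (the behaviour of substitute() with an environment argument is the subtler part the post asks about):

    outer.fn <- function() {
      outer.var <- 5
      ## quote() defers evaluation; eval() then looks the symbol up in the
      ## caller's frame, i.e. inside outer.fn(), where outer.var exists
      get.from.caller <- function() eval(quote(outer.var), envir = parent.frame())
      get.from.caller()
    }
    outer.fn()    # 5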
2005 Mar 03
2
regression on a matrix
Hi - I am doing a Monte Carlo experiment that requires a linear regression of a matrix of vectors of dependent variables on a fixed set of covariates (one regression per vector). I am wondering if anyone has any idea how to speed up the computations in R. The code follows: # regression function # Linear regression code qreg <- function(y,x) { X = cbind(1,x); m <- lm.fit(y=y, x=X)
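One relevant detail: lm.fit() (and lm()) accept a matrix response and fit every column against the same design in a single decomposition, which is usually much faster than looping over the columns. A minimal sketch with illustrative sizes:

    set.seed(1)
    n <- 1000     # observations
    m <- 500      # number of dependent-variable vectors (Monte Carlo replicates)
    x <- rnorm(n)
    Y <- matrix(rnorm(n * m), n, m)   # each column is one regression's y

    X   <- cbind(1, x)                # common design matrix
    fit <- lm.fit(x = X, y = Y)       # one QR decomposition for all columns
    dim(fit$coefficients)             # 2 x m: intercept and slope per column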