similar to: coxph covariance matrix

Displaying 19 results from an estimated 9000 matches similar to: "coxph covariance matrix"

2007 Mar 01
1
covariance question which has nothing to do with R
This is a covariance calculation question, so it has nothing to do with R, but maybe someone could help me anyway. Suppose I have two random variables X and Y whose means are both known to be zero, and I want to get an estimate of their covariance. I have n sample pairs (X_1,Y_1), (X_2,Y_2), ..., (X_n,Y_n), so that the covariance estimate is clearly (1/n) * sum_{i=1}^{n} X_i*Y_i. But,
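A quick numerical check of the difference between this known-zero-mean estimator and R's cov(), which centres by the sample means and divides by n - 1 (a minimal sketch with simulated data):

    set.seed(1)
    n <- 1000
    X <- rnorm(n)
    Y <- 0.5 * X + rnorm(n)    # true cov(X, Y) = 0.5
    sum(X * Y) / n             # estimator that uses the known zero means
    cov(X, Y)                  # centres by sample means, divides by n - 1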
2003 Oct 23
1
Variance-covariance matrix for beta hat and b hat from lme
Dear all, Given an LME model (following the notation of Pinheiro and Bates 2000) y_i = X_i*beta + Z_i*b_i + e_i, is it possible to extract the variance-covariance matrix of the estimated beta hat and b_i hat from the lme fitted object? The reason for needing this is that I want interval prediction on the predicted values (at level = 0:1). The "predict.lme" seems to
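For the fixed effects, vcov() on an lme fit returns the estimated variance-covariance matrix of beta hat; a minimal sketch with nlme's built-in Orthodont data (the covariance of the predicted b_i hat is not exposed this directly and needs extra work):

    library(nlme)
    fit <- lme(distance ~ age, random = ~ age | Subject, data = Orthodont)
    vcov(fit)   # variance-covariance matrix of the fixed effects (beta hat)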
2002 Aug 20
0
Re: SVM questions
> So I guess from your prev. email the svmModel$coefs correspond to the
> "Alpha".
yes (times the sign of y!).
> Why do I see three columns in the coefs? (Is this the number of classes - 1
> = number of hyperplanes?)
yes, but in a packed format which is not trivial. I attach some explanation I sent to R-help some time ago (the guy wanted to write his own
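For a concrete look at that layout, a minimal sketch with e1071 on a three-class problem (iris), where coefs has one row per support vector and k - 1 columns:

    library(e1071)
    fit <- svm(Species ~ ., data = iris)   # 3 classes, so k - 1 = 2 columns
    dim(fit$coefs)                         # rows = number of support vectors
    head(fit$coefs)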
2010 Feb 06
1
Canberra distance
Hi the list, According to what I know, the Canberra distance between X and Y is: sum[ |x_i - y_i| / (|x_i| + |y_i|) ] (with | | denoting the absolute value). In the source code of the Canberra distance in the file distance.c, we find: sum = fabs(x[i1] + x[i2]); diff = fabs(x[i1] - x[i2]); dev = diff/sum; which corresponds to the formula: sum[ |x_i - y_i| /
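The two denominators agree whenever x_i and y_i have the same sign, in particular for non-negative data; a quick check against dist():

    x <- c(1, 2, 3)
    y <- c(2, 0, 5)
    dist(rbind(x, y), method = "canberra")   # R's implementation
    sum(abs(x - y) / (abs(x) + abs(y)))      # |x|+|y| denominator: same here,
                                             # since all values are non-negative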
2018 Jan 17
1
mgcv::gam is it possible to have a 'simple' product of 1-d smooths?
I am trying to test out several mgcv::gam models in a scalar-on-function regression analysis. The following is the 'hierarchy' of models I would like to test:
(1) Y_i = a + integral[ X_i(t)*Beta(t) dt ]
(2) Y_i = a + integral[ F{X_i(t)}*Beta(t) dt ]
(3) Y_i = a + integral[ F{X_i(t),t} dt ]
Equivalents for discrete data might be:
(1) Y_i = a + sum_t[ L_t * X_it * Beta_t ]
(2) Y_i
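Model (1) is a linear functional term, which mgcv supports through its summation convention: when the argument of s() is a matrix, the smooth is summed over the columns, weighted by a matching `by` matrix (see ?linear.functional.terms). A minimal sketch on simulated data:

    library(mgcv)
    set.seed(1)
    n <- 100; nt <- 20
    tg <- seq(0, 1, length.out = nt)
    X  <- matrix(rnorm(n * nt), n, nt)        # X_i(t) observed on a grid
    Tm <- matrix(tg, n, nt, byrow = TRUE)     # matching matrix of t values
    beta_t <- sin(2 * pi * tg)                # true coefficient function
    y <- 1 + drop(X %*% beta_t) / nt + rnorm(n, sd = 0.1)
    # summation convention: the term is sum_j f(Tm[i, j]) * X[i, j]
    fit <- gam(y ~ s(Tm, by = X))
    plot(fit)   # estimate of Beta(t), up to the grid-spacing factor 1/nt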
2009 Oct 01
1
Help for 3D Plotting Data on 'Irregular' Grid
Dear All, Here is what I am trying to achieve: I would like to plot some data in 3D. Usually, one has a matrix of the kind
y_1(x_1), y_1(x_2), ..., y_1(x_i)
y_2(x_1), y_2(x_2), ..., y_2(x_i)
...
y_n(x_1), y_n(x_2), ..., y_n(x_i)
where e.g. y_2(x_1) is the value of y at time 2 at point x_1 (note that the grid in x is the same for the y values at all times).
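When the points do not lie on a common grid, one common approach is to interpolate onto a regular grid first; a hedged sketch assuming the akima package is available (interp() grids scattered points so image()/contour() can draw them):

    library(akima)   # assumption: akima installed; interp() grids scattered data
    set.seed(1)
    x <- runif(200); y <- runif(200)
    z <- sin(pi * x) * cos(pi * y)
    g <- interp(x, y, z)          # interpolate onto a regular 40 x 40 grid
    image(g$x, g$y, g$z)          # cells outside the convex hull stay blank (NA)
    contour(g$x, g$y, g$z, add = TRUE)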
2007 Apr 12
1
LME: internal workings of QR factorization
Hi: I've been reading "Computational Methods for Multilevel Modeling" by Pinheiro and Bates, with the idea of embedding the technique in my own C-level code. The basic idea is to rewrite the joint density in a form that mimics a single least-squares problem, conditional upon the variance parameters. The paper is fairly clear except that some important level of detail is missing. For
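For what it's worth, a hedged sketch of that augmented least-squares idea (my own reading of Pinheiro & Bates, not code from the paper): conditional on the relative precision factor Delta, the group-level problem is an ordinary least-squares fit with q pseudo-observations appended:

    # minimise ||y - X*beta - Z*b||^2 + ||Delta %*% b||^2 via one QR decomposition
    augmented_ls <- function(y, X, Z, Delta) {
      q  <- ncol(Z)
      ya <- c(y, rep(0, q))                             # append zero responses
      Xa <- rbind(cbind(Z, X),
                  cbind(Delta, matrix(0, q, ncol(X))))  # append Delta rows
      qr.coef(qr(Xa), ya)   # first q entries estimate b, the rest beta
    }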
2001 Mar 05
1
Canberra dist and double zeros
Canberra distance is defined in function `dist' (standard library `mva') as sum(|x_i - y_i| / |x_i + y_i|). Obviously this is undefined when both x_i and y_i are zero. Since double zeros are common in many data sets, this is a nuisance. In our field (from which the distance comes), it is customary to remove double zeros: the contribution to the distance is zero when both x_i
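A minimal sketch of the customary fix, dropping double-zero terms by hand (current versions of dist() also omit 0/0 terms, though behaviour at the time of this post may have differed):

    x <- c(0, 1, 2)
    y <- c(0, 1, 4)
    num  <- abs(x - y)
    den  <- abs(x + y)
    keep <- den > 0                          # drop the double-zero coordinate
    sum(num[keep] / den[keep])
    dist(rbind(x, y), method = "canberra")   # compare with R's implementation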
2004 Apr 18
2
lm with data=(means,sds,ns)
Hi Folks, I am dealing with data which have been presented as, at each x_i:
the mean m_i of the y-values at x_i,
the sd s_i of the y-values at x_i,
the number n_i of the y-values at x_i,
and I want to linearly regress y on x. There does not seem to be an option to 'lm' which can deal with such data directly, though the regression problem could be algebraically
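One standard route (a sketch, not necessarily the only answer): regress the means with weights n_i / s_i^2, since the mean at x_i has variance s_i^2 / n_i; note the residual-variance estimate from such a fit is only meaningful up to the proportionality of the weights:

    # hypothetical summary data: at each x, the mean m, sd s, and count n of y
    x <- c(1, 2, 3, 4)
    m <- c(2.1, 3.9, 6.2, 8.0)
    s <- c(0.5, 0.4, 0.6, 0.5)
    n <- c(10, 12, 9, 11)
    fit <- lm(m ~ x, weights = n / s^2)   # var(mean_i) = s_i^2 / n_i
    summary(fit)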
2010 Nov 03
1
Orthogonalization with different inner products
Suppose one wanted to consider random variables X_1, ..., X_n and from each subtract off the piece which is correlated with the previous variables in the list, i.e. make new variables Z_i so that Z_1 = X_1 and
Z_i = X_i - cov(X_i,Z_1)Z_1/var(Z_1) - ... - cov(X_i,Z_{i-1})Z_{i-1}/var(Z_{i-1})
I have code to do this but I keep getting a "non-conformable array" error in the line with the covariance.
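Without the original code this is a guess, but one common cause: cov() applied to matrix slices returns a matrix rather than a scalar, and elementwise arithmetic on two arrays with different dimensions throws exactly this error; drop() fixes it. A minimal Gram-Schmidt sketch:

    set.seed(1)
    X <- matrix(rnorm(200 * 5), ncol = 5)   # columns play the role of X_1..X_5
    Z <- X
    for (i in 2:ncol(X)) {
      for (j in 1:(i - 1)) {
        # drop() reduces any 1x1 covariance matrix to a scalar before multiplying
        Z[, i] <- Z[, i] - drop(cov(Z[, i], Z[, j])) / var(Z[, j]) * Z[, j]
      }
    }
    round(cov(Z), 10)   # off-diagonal entries are (numerically) zero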
2010 Apr 25
1
function pointer question
Hello, I have the following function that receives a "function pointer" formal parameter named "fnc":
    loocv <- function(data, fnc) {
      n <- length(data.x)
      score <- 0
      for (i in 1:n) {
        x_i <- data.x[-i]
        y_i <- data.y[-i]
        yhat <- fnc(x=x_i, y=y_i)
        score <- score + (y_i - yhat)^2
      }
      score <- score/n
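As written this has problems beyond the function-pointer question: `data.x` is just an ordinary variable name (the `data` argument is never used), and `y_i` is a vector, so the squared error mixes in the training responses. A hedged, runnable rewrite assuming `data` is a list with components x and y:

    loocv <- function(data, fnc) {
      n <- length(data$x)
      score <- 0
      for (i in 1:n) {
        x_i <- data$x[-i]                       # training data, point i held out
        y_i <- data$y[-i]
        yhat <- fnc(x = x_i, y = y_i)           # fnc predicts for point i
        score <- score + (data$y[i] - yhat)^2   # error at the held-out point
      }
      score / n
    }
    # usage sketch: "predict" with the training mean
    loocv(list(x = 1:10, y = rnorm(10)), function(x, y) mean(y))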
2004 Mar 03
1
Confusion about coxph and Helmert contrasts
Hi, perhaps this is a stupid question, but I need some help with Helmert contrasts in the Cox model. I have a survival data frame with an unordered factor `group' with levels 0 ... 5. Calculating the Cox model with Helmert contrasts, I expected that the first coefficient would be the same as if I had used treatment contrasts, but this is not true. Is this an error in reasoning, or is it
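It is an error in reasoning: changing the contrasts changes what every coefficient means (Helmert contrasts compare each level with the mean of the preceding ones), while the fitted model itself is unchanged. A minimal sketch with the survival package's lung data standing in for the poster's data:

    library(survival)
    lung2 <- na.omit(lung[, c("time", "status", "ph.ecog")])
    lung2$g <- factor(lung2$ph.ecog)
    fit_t <- coxph(Surv(time, status) ~ g, data = lung2)  # treatment contrasts
    contrasts(lung2$g) <- contr.helmert(nlevels(lung2$g))
    fit_h <- coxph(Surv(time, status) ~ g, data = lung2)  # Helmert contrasts
    coef(fit_t)
    coef(fit_h)                           # different coefficients ...
    c(fit_t$loglik[2], fit_h$loglik[2])   # ... but an identical fitted model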
2007 Apr 12
0
LME: internal workings of QR factorization --repost
Hi: I've been reading "Computational Methods for Multilevel Modeling" by Pinheiro and Bates, with the idea of embedding the technique in my own C-level code. The basic idea is to rewrite the joint density in a form that mimics a single least-squares problem, conditional upon the variance parameters. The paper is fairly clear except that some important level of detail is missing. For
2008 Dec 01
1
linear functional relationships with heteroscedastic & non-Gaussian errors - any packages around?
Hi, I have a situation where I have a set of pairs of X & Y variables, for each of which I have a (fairly) well-defined PDF. The PDF(x_i)'s and PDF(y_i)'s are unfortunately often rather non-Gaussian, although most of the time not multi-modal. For these data (estimates of gas content in galaxies), I need to quantify a linear functional relationship, and I am trying to do this as
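For reference, one common first approximation when both variables carry errors (a sketch only; it assumes Gaussian per-point sigmas, which the poster explicitly does not have): minimise the effective-variance chi-square for y = a + b*x:

    # errors-in-variables straight-line fit by effective-variance chi-square;
    # sx, sy are hypothetical per-point standard deviations
    fit_xy <- function(x, y, sx, sy) {
      chisq <- function(par) {
        a <- par[1]; b <- par[2]
        sum((y - a - b * x)^2 / (sy^2 + b^2 * sx^2))   # both error sources
      }
      optim(c(0, 1), chisq)$par   # crude starting values; returns c(a, b)
    }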
2018 Mar 15
0
stats 'dist' euclidean distance calculation
> 3x3 subset used
>        Locus1 Locus2 Locus3
> Samp1  GG     <NA>   GG
> Samp2  AG     CA     GA
> Samp3  AG     CA     GG
>
> The euclidean distance function is defined as: sqrt(sum((x_i - y_i)^2)). My
> assumption was that the difference between
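One detail that often surprises with dist() and missing values: NA pairs are excluded and the remaining sum is scaled up proportionally to the number of columns used. A minimal numeric sketch (genotypes would first need a numeric coding):

    x <- c(1, 2, NA)
    y <- c(2, 4, 1)
    dist(rbind(x, y))                           # NA pair dropped, sum rescaled
    sqrt(sum((x - y)^2, na.rm = TRUE) * 3 / 2)  # same: scaled by n / n_used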
2010 Dec 15
4
Generating correlated binomials
Good afternoon, I am interested in generating observations from a bivariate binomial distribution in which there is _some_ degree of correlation (call it rho). Could someone please tell me how to do this in R? This is the context. Suppose there are two experiments in which the response variable _follows_ a binomial distribution, i.e., X_i ~ Binomial(n_i, p_i), i = 1, 2, and that, for now,
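One standard construction (a sketch, not from the thread): a Gaussian copula, which preserves the binomial marginals and induces a correlation close to, but not exactly equal to, the latent rho:

    library(MASS)                                   # for mvrnorm
    rho <- 0.5; n1 <- 10; p1 <- 0.3; n2 <- 15; p2 <- 0.6
    Z <- mvrnorm(10000, mu = c(0, 0),
                 Sigma = matrix(c(1, rho, rho, 1), 2))
    X1 <- qbinom(pnorm(Z[, 1]), n1, p1)             # marginal Binomial(n1, p1)
    X2 <- qbinom(pnorm(Z[, 2]), n2, p2)             # marginal Binomial(n2, p2)
    cor(X1, X2)                                     # near, not exactly, rho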
2009 Sep 24
0
basic cubic spline smoothing (resending because not sure about pending)
Hello, I come from a non-statistics background, but R is available to me, and I needed to test an implementation of a smoothing spline that I have written in C++, so I would like to match the results with R (for my unit tests). I am following "Smoothing Splines", D.G. Pollock (available online), where we have a list of points (x_i, y_i); the y_i points are random such that: y_i = f(x_i) + e_i
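R's reference implementation for this model is smooth.spline(), handy for unit-test comparisons (note it parameterises the penalty via spar/df, so matching a raw lambda from the paper needs care):

    set.seed(1)
    x <- seq(0, 1, length.out = 100)
    y <- sin(2 * pi * x) + rnorm(100, sd = 0.2)   # y_i = f(x_i) + e_i
    fit <- smooth.spline(x, y)                    # GCV-chosen smoothness by default
    plot(x, y)
    lines(fit, col = "red")                       # fitted f(x_i)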
2019 May 16
3
nrow(rbind(character(), character())) returns 2 (as documented but very unintuitive, IMHO)
Hi Hadley, Thanks for the counterpoint. Response below. On Thu, May 16, 2019 at 1:59 PM Hadley Wickham <h.wickham at gmail.com> wrote:
> The existing behaviour seems intuitive to me. I would consider these
> invariants for n vector x_i's each with size m:
>
> * nrow(rbind(x_1, x_2, ..., x_n)) equals n
Personally, no I wouldn't. I would consider m==0 a degenerate
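The behaviour under discussion, for anyone following along:

    nrow(rbind(character(), character()))   # 2: one row per argument, even when
                                            # each argument has length zero
    dim(rbind(character(), character()))    # 2 0 -- a 2 x 0 character matrix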
2007 Feb 01
3
Help with efficient double sum of max (X_i, Y_i) (X & Y vectors)
Greetings. For R gurus this may be a no-brainer, but I could not find pointers to efficient computation of this beast in past help files. Background: I wish to implement a Cramer-von Mises type test statistic which involves double sums of max(X_i, Y_j) where X and Y are vectors of differing length. I am currently using ifelse pointwise in a vector, but have a nagging suspicion that there is a
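A vectorised alternative to the pointwise ifelse (a sketch; outer() materialises the full length(X) x length(Y) grid, so memory grows accordingly):

    set.seed(1)
    X <- rnorm(1000)
    Y <- rnorm(500)
    sum(outer(X, Y, pmax))   # double sum of max(X_i, Y_j) over all i, j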