2006 Nov 21
3
Fitting mixed-effects models with lme with fixed error term variances
Dear R users,
I am writing to you because I have a few questions on how to fix
the error term variances in lme, in the hope that you can help me. To
my knowledge, the closest possibility is to fix the var-cov structure,
but not the whole var-cov matrix. I found an old thread (a few years
ago) about this, and it seems that the only alternative is to write the
likelihood down and use optim or a
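A minimal sketch of that closest possibility (the names y, x, g, and v are placeholders, not from the thread): nlme's varFixed() fixes the variance structure through a covariate of known relative variances, while the overall scale sigma^2 is still estimated, which is exactly the limitation described above.
library(nlme)
# 'dat' is assumed to hold a response y, covariate x, grouping factor g,
# and a column v of known *relative* error variances, one per row
fm <- lme(y ~ x, random = ~ 1 | g, data = dat,
          weights = varFixed(~ v))
summary(fm)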
2005 Jun 15
2
need help on computing double summation
Dear helpers in this forum,
This is a clarified version of my previous
questions in this forum. I really need your generous
help on this issue.
> Suppose I have the following data set:
>
> id x y
> 023 1 2
> 023 2 5
> 023 4 6
> 023 5 7
> 412 2 5
> 412 3 4
> 412 4 6
> 412 7 9
> 220 5 7
> 220 4 8
> 220 9 8
> ......
>
Now I want to compute the
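The excerpt cuts off before the target expression, but as a generic, hedged sketch (the kernel f below is a placeholder for the actual summand), a double sum of the form sum_i sum_j f(y_i, y_j) within each id group vectorizes with outer():
d <- data.frame(id = c("023", "023", "412", "412", "220"),
                y  = c(2, 5, 5, 4, 7))
f <- function(a, b) abs(a - b)   # placeholder summand
sapply(split(d$y, d$id), function(v) sum(outer(v, v, f)))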
2010 Dec 15
4
Generating correlated binomials
Good afternoon,
I am interested in generating observations from a bivariate binomial
distribution in which there is _some_ degree of correlation (call it rho).
Could someone please tell me how to do this in R?
Here is the context. Suppose we have two experiments in which the
response variable _follows_ a binomial distribution, i.e., X_i
~ Binomial(n_i, p_i), i=1,2, and that, for now,
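A hedged sketch of one standard construction (the simulator and its name are mine, not from the thread): correlated Bernoulli indicators drawn through a Gaussian copula are summed into the two binomial counts. Note that rho below is the latent normal correlation, not the resulting binomial correlation.
library(MASS)
rbibinom <- function(nsim, n1, n2, p1, p2, rho) {
  S <- matrix(c(1, rho, rho, 1), 2, 2)
  m <- max(n1, n2)
  out <- matrix(0, nsim, 2)
  for (k in seq_len(nsim)) {
    z <- mvrnorm(m, mu = c(0, 0), Sigma = S)  # shared latent normals
    out[k, 1] <- sum(z[1:n1, 1] < qnorm(p1))
    out[k, 2] <- sum(z[1:n2, 2] < qnorm(p2))
  }
  out
}
cor(rbibinom(2000, 10, 15, 0.3, 0.6, 0.8))  # induced correlation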
2007 Apr 12
1
LME: internal workings of QR factorization
Hi:
I've been reading "Computational Methods for Multilevel Modeling" by Pinheiro and Bates, with the idea of embedding the technique in my own C-level code. The basic idea is to rewrite the joint density in a form that mimics a single least-squares problem, conditional on the variance parameters. The paper is fairly clear, except that some important level of detail is missing. For
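A toy, hedged sketch of the augmented QR step the paper describes (all names below are mine; Delta denotes the relative precision factor of the random effects): for one group, [Z_i X_i y_i] is stacked over [Delta 0 0], and a single QR decomposition eliminates the random effects conditionally on the variance parameters.
set.seed(1)
ni <- 5; q <- 2; p <- 3
Zi <- matrix(rnorm(ni * q), ni, q)   # random-effects design, group i
Xi <- matrix(rnorm(ni * p), ni, p)   # fixed-effects design, group i
yi <- rnorm(ni)
Delta <- diag(q)                     # relative precision factor
aug <- rbind(cbind(Zi, Xi, yi),
             cbind(Delta, matrix(0, q, p + 1)))
R <- qr.R(qr(aug))  # leading q columns carry the conditional solve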
2010 Feb 05
3
metafor package: effect sizes are not fully independent
In a classical meta analysis model y_i = X_i * beta_i + e_i, data
{y_i} are assumed to be independent effect sizes. However, I'm
encountering the following two scenarios:
(1) Each source has multiple effect sizes, so the {y_i} are not fully
independent of each other.
(2) Each source has multiple effect sizes, and each effect size from a
source can be categorized under one of several factor levels
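A hedged sketch of how metafor's rma.mv() can encode both scenarios (the column names yi, vi, source, es_id, and level are assumptions about the data layout): nested random effects absorb the within-source dependence, and the factor enters as a moderator.
library(metafor)
res <- rma.mv(yi, vi, mods = ~ factor(level),
              random = ~ 1 | source/es_id, data = dat)
summary(res)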
2001 Oct 09
1
PROC MIXED user trying to use (n)lme...
Dear R-users
Coming from a proc mixed (SAS) background I am trying to get into
the use of (n)lme.
In this connection, I have some (presumably stupid) questions
which I am sure someone out there can answer:
1) With proc mixed it is easy to get hold of the estimated variance
parameters, as they can be output into a SAS data set.
How do I do the same with lme-objects? For example, I can see the
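A hedged sketch of the lme side, using the Orthodont data shipped with nlme: VarCorr() returns the estimated variance parameters, which can then be coerced into an ordinary numeric vector.
library(nlme)
fm <- lme(distance ~ age, random = ~ 1 | Subject, data = Orthodont)
vc <- VarCorr(fm)             # variance components, as a character matrix
as.numeric(vc[, "Variance"])  # the estimates as plain numbers
fm$sigma                      # residual standard deviation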
2011 Sep 14
1
Hints for Data Mining
Dear All,
I am recycling a previous email of mine where I asked some questions
about clustering mixed numerical/categorical data. This time I am more
into data mining. I am given a set of known statistical indexes {s_i},
i=1,2,...,N for N countries. These indexes are in general a mix of
numerical and categorical variables. For each country, I also have a
property x_i whose value is known, but
2006 Dec 08
1
MAXIMIZATION WITH CONSTRAINTS
Dear R users,
I'm a graduate student, and in my master's thesis I must
obtain the values of the parameters x_i which maximize this
multinomial log-likelihood function:
log(n!) - sum_{i=1}^4 log(n_i!) + sum_{i=1}^4 n_i log(x_i)
under the following constraints:
a) sum_i x_i = 1, x_i >= 0,
b) x_1 <= x_2 + x_3 + x_4,
c) x_2 <= x_3 + x_4.
I have been using the
'constrOptim' R function with the instructions
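A hedged sketch with made-up counts n_i: constrOptim() handles only inequality constraints, so the equality sum_i x_i = 1 is removed by substituting x_4 = 1 - x_1 - x_2 - x_3, which turns constraints (b) and (c) into x_1 <= 1/2 and x_1 + 2*x_2 <= 1.
n <- c(10, 20, 30, 40)       # hypothetical cell counts
negll <- function(p) {       # maximize by minimizing the negative loglik
  x <- c(p, 1 - sum(p))
  -sum(n * log(x))
}
ui <- rbind(diag(3),         # x_1, x_2, x_3 >= 0
            c(-1, -1, -1),   # x_4 = 1 - sum >= 0
            c(-1,  0,  0),   # (b): x_1 <= 1/2
            c(-1, -2,  0))   # (c): x_1 + 2*x_2 <= 1
ci <- c(0, 0, 0, -1, -0.5, -1)
fit <- constrOptim(c(0.1, 0.2, 0.3), negll, grad = NULL,
                   ui = ui, ci = ci)
c(fit$par, 1 - sum(fit$par)) # recovered x_1, ..., x_4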
2011 Oct 31
1
Question on estimating standard errors with noisy signals using the quantreg package
Dear all,
My question might be more of a statistics question than a question on R,
although it's on how to apply the 'quantreg' package. Please accept my
apologies if you believe I am strongly misusing this list.
To be very brief, the problem is that I have data on only a random draw
of doctors' patients, not all of them. I am interested in, say, the
median number of patients of
2010 Nov 08
1
try(nls) stops unexpectedly because of chol2inv error
Hi,
I am running simulations that do multiple comparisons to a control.
For each simulation, I need to fit 7 nls models. I loop over the 7
fits, calling nls inside try;
if try fails, I break out of that loop and go to the next simulation.
I get warnings on nls failures, but the simulation continues to run,
except when the internal call (internal to nls) to chol2inv fails.
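One hedged way to make the guard airtight (the model formula and data names are placeholders): wrapping the entire fit in tryCatch() traps any error, including one raised by chol2inv deep inside nls, and lets the outer simulation move on.
fit_one <- function(d) {
  tryCatch(nls(y ~ SSlogis(x, Asym, xmid, scal), data = d),
           error = function(e) NULL)   # any error becomes a NULL fit
}
fits <- lapply(dat_list, fit_one)      # the 7 fits for one simulation
if (any(vapply(fits, is.null, logical(1)))) {
  # at least one fit failed: record it and skip to the next simulation
}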
2010 Feb 06
1
Canberra distance
Hi the list,
According to what I know, the Canberra distance between X and Y is:
sum[ |x_i - y_i| / (|x_i| + |y_i|) ] (with | | denoting the absolute
value function).
In the source code of the Canberra distance in the file distance.c, we
find:
sum = fabs(x[i1] + x[i2]);
diff = fabs(x[i1] - x[i2]);
dev = diff/sum;
which corresponds to the formula: sum[ |x_i - y_i| /
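A quick hedged check of the discrepancy (the values are invented): the two denominators agree whenever x_i and y_i share a sign, and diverge otherwise.
x <- c(1, -2, 3); y <- c(2, 1, -1)
sum(abs(x - y) / (abs(x) + abs(y)))  # formula with |x_i| + |y_i|
sum(abs(x - y) / abs(x + y))         # what the quoted C code computes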
2018 Jan 17
1
mgcv::gam is it possible to have a 'simple' product of 1-d smooths?
I am trying to test out several mgcv::gam models in a scalar-on-function regression analysis.
The following is the 'hierarchy' of models I would like to test:
(1) Y_i = a + integral[ X_i(t)*Beta(t) dt ]
(2) Y_i = a + integral[ F{X_i(t)}*Beta(t) dt ]
(3) Y_i = a + integral[ F{X_i(t),t} dt ]
equivalents for discrete data might be:
(1) Y_i = a + sum_t[ L_t * X_it * Beta_t ]
(2) Y_i
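A hedged sketch following the conventions of mgcv's ?linear.functional.terms (all data below are simulated stand-ins): when s() receives matrix arguments, the smooth is summed over columns, which yields models (1) and (3) directly; model (2) has no equally direct formulation, since F{} is applied to X before the weighting.
library(mgcv)
set.seed(1)
n <- 200; nt <- 50
tm <- matrix(seq(0, 1, length.out = nt), n, nt, byrow = TRUE)  # times
X  <- matrix(rnorm(n * nt), n, nt)   # toy functional covariate
L  <- matrix(1 / nt, n, nt)          # quadrature weights
y  <- rowMeans(X * sin(pi * tm)) + rnorm(n, sd = 0.1)
LX <- L * X
b1 <- gam(y ~ s(tm, by = LX))   # (1): integral of X_i(t) * Beta(t)
b3 <- gam(y ~ s(X, tm, by = L)) # (3): integral of F{X_i(t), t}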
2001 Mar 05
1
Canberra dist and double zeros
Canberra distance is defined in function `dist' (standard library `mva') as
sum(|x_i - y_i| / |x_i + y_i|)
Obviously this is undefined in cases where both x_i and y_i are zero. Since
double zeros are common in many data sets, this is a nuisance. In our field
(from which the distance is coming), it is customary to remove double zeros:
contribution to distance is zero when both x_i
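A hedged sketch of that customary fix (the function name is mine), using the mva-era formula quoted above:
canberra_dz <- function(x, y) {
  keep <- !(x == 0 & y == 0)   # drop double-zero coordinates
  sum(abs(x[keep] - y[keep]) / abs(x[keep] + y[keep]))
}
canberra_dz(c(0, 1, 2), c(0, 3, 0))  # the (0, 0) pair contributes nothing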
2007 Feb 17
1
Constraint maximum (likelihood) using nlm
Hi,
I'm trying to find the maximum of a likelihood function. Therefore,
I'm trying to minimize the negative log-likelihood function:
# params: vector containing values of mu and sigma
#   params[1] - mu, params[2] - sigma
# dat: matrix of data pairs y_i and s_i
#   dat[,1] - column of y_i, dat[,2] - column of s_i
negll <- function(params, dat, constant = 0)
{
  for (i in 1:length(dat[,1]))
  {
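A hedged completion under an assumed model (the thread's own likelihood is cut off above): if y_i ~ N(mu, sigma^2 + s_i^2) with known s_i, the positivity constraint on sigma can be folded into an unconstrained nlm() run by optimizing log(sigma), and the loop can be vectorized away.
negll <- function(params, dat) {
  mu <- params[1]
  sigma <- exp(params[2])    # log-scale parameter: sigma > 0 for free
  v <- sigma^2 + dat[, 2]^2  # total variance per observation
  0.5 * sum(log(2 * pi * v) + (dat[, 1] - mu)^2 / v)
}
dat <- cbind(y = rnorm(50, 1, 1.5), s = rep(0.5, 50))  # toy data
fit <- nlm(negll, p = c(0, 0), dat = dat)
c(mu = fit$estimate[1], sigma = exp(fit$estimate[2]))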
2007 Feb 01
3
Help with efficient double sum of max (X_i, Y_i) (X & Y vectors)
Greetings.
For R gurus this may be a no-brainer, but I could not find pointers to
efficient computation of this beast in past help files.
Background - I wish to implement a Cramer-von Mises type test statistic
which involves double sums of max(X_i,Y_j) where X and Y are vectors of
differing length.
I am currently using ifelse pointwise in a vector, but have a nagging
suspicion that there is a
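The usual vectorization, as a hedged sketch with simulated inputs: outer() materializes the full grid of pairwise maxima in one call, replacing the pointwise ifelse().
set.seed(1)
X <- runif(100); Y <- runif(150)  # vectors of differing length
dsum <- sum(outer(X, Y, pmax))    # sum_i sum_j max(X_i, Y_j)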
2010 Apr 25
1
function pointer question
Hello,
I have the following function that receives a "function pointer" formal parameter named "fnc":
loocv <- function(data, fnc) {
  n <- length(data$x)
  score <- 0
  for (i in 1:n) {
    x_i <- data$x[-i]              # training set with point i held out
    y_i <- data$y[-i]
    yhat <- fnc(x = x_i, y = y_i)  # fit/predict via the passed function
    score <- score + (data$y[i] - yhat)^2  # error at the held-out point
  }
  score / n
}
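A hedged usage sketch (mean_fit is a made-up predictor; any function with the same signature can be passed, since functions are first-class values in R):
mean_fit <- function(x, y) mean(y)  # ignores x, predicts the training mean
d <- list(x = runif(20), y = rnorm(20))
loocv(d, mean_fit)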
2007 Mar 01
1
covariance question which has nothing to do with R
This is a covariance calculation question, so it has nothing to do with
R, but maybe someone can help me anyway.
Suppose, I have two random variables X and Y whose means are both known
to be zero and I want to get an estimate of their covariance.
I have n sample pairs
(X1,Y1)
(X2,Y2)
...
(Xn,Yn)
so that the covariance estimate is clearly (1/n) * sum_{i=1}^n X_i*Y_i.
But,
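A quick numerical check of that estimator on simulated data: with the means known to be zero, mean(x * y) is the natural plug-in, while cov() re-centers and divides by n - 1.
set.seed(1)
x <- rnorm(1e4); y <- 0.5 * x + rnorm(1e4)
mean(x * y)  # (1/n) * sum(X_i * Y_i), valid since E[X] = E[Y] = 0
cov(x, y)    # subtracts sample means and divides by n - 1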
2007 May 08
5
Weighted least squares
Dear all,
I'm struggling with weighted least squares, where something that I had
assumed to be true appears not to be the case. Take the following
data set as an example:
df <- data.frame(x = runif(100, 0, 100))
df$y <- df$x + 1 + rnorm(100, sd=15)
I had expected that:
summary(lm(y ~ x, data=df, weights=rep(2, 100)))
summary(lm(y ~ x, data=rbind(df,df)))
would be equivalent, but
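A hedged check of where the two fits diverge: lm() treats weights as precision (inverse-variance) weights rather than replication counts, so the point estimates coincide but the residual degrees of freedom, and hence the standard errors, do not.
f1 <- lm(y ~ x, data = df, weights = rep(2, 100))
f2 <- lm(y ~ x, data = rbind(df, df))
all.equal(coef(f1), coef(f2))        # identical coefficients
c(df.residual(f1), df.residual(f2))  # 98 vs. 198 residual df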
2008 Dec 01
1
linear functional relationships with heteroscedastic & non-Gaussian errors - any packages around?
Hi,
I have a situation where I have a set of pairs of X & Y variables for
each of which I have a (fairly) well-defined PDF. The PDF(x_i)'s and
PDF(y_i)'s are unfortunately often rather non-Gaussian, although most
of the time not multimodal.
For these data (estimates of gas content in galaxies), I need to
quantify a linear functional relationship and I am trying to do this
as