Displaying 20 results from an estimated 111 matches for "y_y".
2018 Jan 17
1
mgcv::gam is it possible to have a 'simple' product of 1-d smooths?
I am trying to test out several mgcv::gam models in a scalar-on-function regression analysis.
The following is the 'hierarchy' of models I would like to test:
(1) Y_i = a + integral[ X_i(t)*Beta(t) dt ]
(2) Y_i = a + integral[ F{X_i(t)}*Beta(t) dt ]
(3) Y_i = a + integral[ F{X_i(t),t} dt ]
Equivalents for discrete data might be:
(1) Y_i = a + sum_t[ L_t * X_it * Beta_t ]
(2) Y_i
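A minimal sketch of model (1) using mgcv's linear functional terms (see ?linear.functional.terms); the matrices Tmat, Xmat and Lmat and the simulated response here are made up purely for illustration:
library(mgcv)
set.seed(1)
n <- 100; nt <- 50
tg   <- seq(0, 1, length.out = nt)         # common evaluation grid
Tmat <- matrix(tg, n, nt, byrow = TRUE)    # Tmat[i, j] = t_j
Xmat <- matrix(rnorm(n * nt), n, nt)       # Xmat[i, j] = X_i(t_j)
Lmat <- matrix(1 / nt, n, nt)              # quadrature weights L_t
LX   <- Lmat * Xmat
y    <- rnorm(n)                           # placeholder response
# Y_i = a + sum_j Lmat[i, j] * Xmat[i, j] * Beta(Tmat[i, j])  -- model (1)
m1 <- gam(y ~ s(Tmat, by = LX))
summary(m1)
# Model (3), F{X_i(t), t}, could be attempted with a tensor product term:
# m3 <- gam(y ~ te(Xmat, Tmat, by = Lmat))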
2010 Feb 06
1
Canberra distance
Hi the list,
According to what I know, the Canberra distance between X and Y is: sum[
(|x_i - y_i|) / (|x_i| + |y_i|) ] (with | | denoting the absolute value
function)
In the source code of the canberra distance in the file distance.c, we
find :
sum = fabs(x[i1] + x[i2]);
diff = fabs(x[i1] - x[i2]);
dev = diff/sum;
which correspond to the formula : sum[ (|x_i - y_i|) /
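A quick way to check which denominator your installed R actually uses is to compare dist() against the textbook formula on a small made-up example containing a sign change; the two agree when the denominator is |x_i| + |y_i| and differ when it is |x_i + y_i|:
x <- c(1, -2, 3)
y <- c(2,  1, 5)
sum(abs(x - y) / (abs(x) + abs(y)))          # textbook definition
dist(rbind(x, y), method = "canberra")       # what dist() computes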
2008 May 23
1
maximizing the gamma likelihood
For learning purposes, and also to help someone, I used Roger Peng's
document to get the MLEs of the gamma, where the gamma is defined as
f(y_i) = (1/gammafunction(shape)) * (scale^shape) * (y_i^(shape-1)) *
exp(-scale*y_i)
(I'm defining the scale as lambda rather than 1/lambda; various books
define it differently).
I found the log-likelihood to be n*shape*log(scale) +
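A minimal sketch of obtaining the MLEs by minimising the negative log-likelihood with optim(); the data y are simulated here purely for illustration, and "scale" below is the rate-style parameter used above:
set.seed(42)
y <- rgamma(1000, shape = 2, rate = 3)
n <- length(y)
negll <- function(par) {
  shape <- par[1]; scale <- par[2]           # 'scale' is really a rate here
  -(n * shape * log(scale) - n * lgamma(shape) +
      (shape - 1) * sum(log(y)) - scale * sum(y))
}
fit <- optim(c(1, 1), negll, method = "L-BFGS-B", lower = c(1e-6, 1e-6))
fit$par                                      # should be close to (2, 3)
# MASS::fitdistr(y, "gamma") returns shape and rate directly, for comparison.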
2013 Jun 23
1
2SLS / TSLS / SEM non-linear
Dear all, I am trying to conduct an SEM / two-stage least squares regression
with the following equations:
First: X ~ IV1 + IV2 * Y
Second: Y ~ a + b X
Here, IV1 and IV2 are the two instruments I would like to use. I would like
to maintain this structure, as the model is derived from economic theory. My
problem is that I have trouble solving the equations to get the reduced form
so I can run
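For the purely linear case, a sketch with systemfit's 2SLS method might look like the following (the data frame and its columns are made up, and this does not address the nonlinearity introduced by the IV2 * Y term in the first equation):
library(systemfit)
eqFirst  <- X ~ IV1 + IV2
eqSecond <- Y ~ X
fit <- systemfit(list(first = eqFirst, second = eqSecond),
                 method = "2SLS", inst = ~ IV1 + IV2, data = mydata)
summary(fit)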
2001 Mar 05
1
Canberra dist and double zeros
Canberra distance is defined in function `dist' (standard library `mva') as
sum(|x_i - y_i| / |x_i + y_i|)
Obviously this is undefined for cases where both x_i and y_i are zeros. Since
double zeros are common in many data sets, this is a nuisance. In our field
(from which the distance is coming), it is customary to remove double zeros:
contribution to distance is zero when both x_i
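A small sketch of a Canberra distance that simply drops double zeros, as suggested above (this is not necessarily what stats::dist() does in any given R version):
canberra_dz <- function(x, y) {
  keep <- !(x == 0 & y == 0)                 # drop positions where both are zero
  sum(abs(x[keep] - y[keep]) / (abs(x[keep]) + abs(y[keep])))
}
canberra_dz(c(0, 1, 2, 0), c(0, 3, 0, 4))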
2007 Feb 01
3
Help with efficient double sum of max (X_i, Y_i) (X & Y vectors)
Greetings.
For R gurus this may be a no brainer, but I could not find pointers to
efficient computation of this beast in past help files.
Background - I wish to implement a Cramer-von Mises type test statistic
which involves double sums of max(X_i,Y_j) where X and Y are vectors of
differing length.
I am currently using ifelse pointwise in a vector, but have a nagging
suspicion that there is a
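One vectorised way to compute the double sum sum_i sum_j max(X_i, Y_j) is outer() with pmax(); X and Y below are made-up vectors of different length. For very long vectors, a sort-based approach avoids building the full length(X) x length(Y) matrix.
X <- rnorm(500)
Y <- rnorm(300)
sum(outer(X, Y, pmax))                       # full matrix of pairwise maxima, then sum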
2013 Oct 19
2
ivreg with fixed effect in R?
I want to estimate the following fixed effect model:
y_it = alpha_i + beta_1 x1_t + beta_2 x2_it
x2_it = gamma_i + gamma_1 x1_t + gamma_2 Z1_i + gamma_3 Z2_i
I can use ivreg from AER to do the iv regression.
fm <- ivreg(y_it ~ x1_t + x2_it | x1_t + Z1_i + Z2_i,
data = DataSet)
But, I'm not sure how can I add the fixed effects.
Thanks!
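plm accepts instruments in a two-part formula, so a sketch of an IV "within" fit might look like the following (column names are made up; note also that time-invariant instruments such as Z1_i and Z2_i are swept out by the within transformation, which may make this exact specification infeasible):
library(plm)
pdat <- pdata.frame(DataSet, index = c("id", "time"))
fe_iv <- plm(y ~ x1 + x2 | x1 + z1 + z2,
             data = pdat, model = "within", effect = "individual")
summary(fe_iv)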
2003 Oct 23
1
Variance-covariance matrix for beta hat and b hat from lme
Dear all,
Given an LME model (following the notation of Pinheiro and Bates 2000) y_i
= X_i*beta + Z_i*b_i + e_i, is it possible to extract the
variance-covariance matrices for the estimated beta hat and b_i hat from the
lme fitted object?
The reason for needing this is because I want to have interval prediction on
the predicted values (at level = 0:1). The "predict.lme" seems to
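As a starting point, vcov() on an lme fit returns the variance-covariance matrix of beta hat, and getVarCov() the estimated random-effects covariance; the conditional covariance of the b_i hat themselves is not directly exposed. A sketch using the Orthodont example shipped with nlme:
library(nlme)
fit <- lme(distance ~ age, random = ~ age | Subject, data = Orthodont)
vcov(fit)                                    # Var(beta hat)
getVarCov(fit, type = "random.effects")      # Psi, the random-effects covariance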
2010 Apr 25
1
function pointer question
Hello,
I have the following function that receives a "function pointer" as a formal parameter named "fnc":
loocv <- function(data, fnc) {
  n <- length(data$x)
  score <- 0
  for (i in 1:n) {
    x_i <- data$x[-i]                  # training data with observation i held out
    y_i <- data$y[-i]
    yhat <- fnc(x = x_i, y = y_i)      # fit/predict with the supplied function
    score <- score + (data$y[i] - yhat)^2   # squared error on the held-out point
  }
  score <- score / n
  return(score)
}
2010 Feb 05
3
metafor package: effect sizes are not fully independent
In a classical meta-analysis model y_i = X_i * beta_i + e_i, the data
{y_i} are assumed to be independent effect sizes. However, I'm
encountering the following two scenarios:
(1) Each source has multiple effect sizes, thus the {y_i} are not fully
independent of each other.
(2) Each source has multiple effect sizes, and each effect size
from a source can be categorized as one of several factor levels
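Both scenarios are often handled with metafor's rma.mv(); a sketch with made-up column names, where effect sizes are nested within sources and the factor enters as a moderator:
library(metafor)
res <- rma.mv(yi, vi, random = ~ 1 | source/es_id, data = dat)
summary(res)
# Scenario (2): add the factor as a moderator, e.g.
# rma.mv(yi, vi, mods = ~ factor(level), random = ~ 1 | source/es_id, data = dat)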
2012 Mar 20
1
MA process in panels
Dear R users,
I have an unbalanced panel with an average of I=100 individuals and a total
of T=1370 time intervals, i.e. T>>I. So far, I have been using the plm
package.
I wish to estimate a FE model like:
res<-plm(x~c+v, data=pdata_frame, effect="twoways", model="within",
na.action=na.omit)
where c varies over i and t, and v represents an exogenous impact on x
2005 Nov 09
5
How to find statistics like that.
Hi there,
Suppose mu is constant, and the error is normally distributed with mean 0 and
fixed variance s. I need to find a statistic such that, for
Y_i = mu + beta1*I1_i + beta2*I2_i + beta3*I1_i*I2_i + error, where the
indicator I_i is 1 if Y_i is from group A, and 0 if Y_i is from group B:
It is large when beta1 = beta2 = 0.
It is small when beta1 and/or beta2 is not equal to 0.
How can I find it with R? Thank you very much
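One common candidate is the p-value of the F-test comparing the full model with the model that drops beta1 and beta2: it tends to be small when beta1 and/or beta2 are nonzero (under the null it is uniform rather than uniformly large). A sketch with simulated data:
set.seed(1)
n  <- 100
I1 <- rbinom(n, 1, 0.5)
I2 <- rbinom(n, 1, 0.5)
Y  <- 5 + rnorm(n)                           # true beta1 = beta2 = 0 here
full    <- lm(Y ~ I1 * I2)                   # mu + beta1*I1 + beta2*I2 + beta3*I1*I2
reduced <- lm(Y ~ I(I1 * I2))                # keeps only the interaction term
anova(reduced, full)                         # F-test of beta1 = beta2 = 0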
2011 Mar 16
2
Re; Fitting a Beta distribution
I want to fit some p-values to a beta distribution. The problem is that some
of the values are 0s and 1s. I am getting an error if I use the MASS
function to do this. Is there any way to get around this?
--
Thanks,
Jim.
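One common workaround is to squeeze the values off the boundary with the Smithson & Verkuilen (2006) transformation before fitting; a sketch with a made-up vector p of p-values containing exact 0s and 1s:
library(MASS)
p    <- c(0, 0.02, 0.4, 0.77, 1, runif(95))
n    <- length(p)
p_sq <- (p * (n - 1) + 0.5) / n              # pulls 0 and 1 strictly inside (0, 1)
fitdistr(p_sq, "beta", start = list(shape1 = 1, shape2 = 1))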
2004 Apr 18
2
lm with data=(means,sds,ns)
Hi Folks,
I am dealing with data which have been presented as:
at each x_i, the mean m_i of the y-values at x_i,
the sd s_i of the y-values at x_i,
and the number n_i of the y-values at x_i;
and I want to linearly regress y on x.
There does not seem to be an option to 'lm' which can
deal with such data directly, though the regression
problem could be algebraically
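Part of the problem can be handled directly: regressing the means on x with weights = n_i reproduces the coefficient estimates you would get from the raw data, although the s_i are still needed to recover the full residual variance. A sketch with made-up summary vectors:
x <- c(1, 2, 3, 4)
m <- c(2.1, 3.9, 6.2, 8.0)                   # mean of y at each x
s <- c(0.3, 0.4, 0.3, 0.5)                   # sd of y at each x (not used by lm here)
n <- c(10, 12,  9, 11)                       # number of y-values at each x
fit <- lm(m ~ x, weights = n)
summary(fit)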
2013 Jan 11
0
Manual two-way demeaning of unbalanced panel data (Wansbeek/Kapteyn transformation)
Dear R users,
I wish to manually demean a panel over time and entities. I tried to code
the Wansbeek and Kapteyn (1989) transformation (from Baltagi's book Ch. 9).
As a benchmark I use both the pmodel.response() and model.matrix() functions
in package plm and the results from using dummy variables. As far as I
understood the transformation (Ch.3), Q%*%y (with y being the dependent
variable)
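For small data sets, a brute-force benchmark is the projection onto the orthogonal complement of the individual and time dummies (pdat and its columns id, time and y are made up here):
library(MASS)                                # for ginv()
D <- model.matrix(~ factor(id) + factor(time), data = pdat)
Q <- diag(nrow(pdat)) - D %*% ginv(crossprod(D)) %*% t(D)
y_dm <- as.vector(Q %*% pdat$y)              # should match pmodel.response() from plm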
2017 Dec 03
1
Discourage the weights= option of lm with summarized data
Peter,
This is a highly structured text. Just for the discussion, I separate
the building blocks, where (D) and (E) and (F) are new:
BEGIN OF TEXT --------------------
(A)
Non-'NULL' 'weights' can be used to indicate that different
observations have different variances (with the values in 'weights'
being inversely proportional to the variances);
(B)
or equivalently, when the elements of
2009 Nov 12
1
naive "collinear" weighted linear regression
Hi there
Sorry for what may be a naive or dumb question.
I have the following data:
> x <- c(1,2,3,4) # predictor vector
> y <- c(2,4,6,8) # response vector. Notice that it is an exact,
perfect straight line through the origin and slope equal to 2
> error <- c(0.3,0.3,0.3,0.3) # I have (equal) ``errors'', for
instance, in the measured responses
Of course the
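The usual weighting for known measurement errors is weights inversely proportional to the error variances, e.g.:
x     <- c(1, 2, 3, 4)
y     <- c(2, 4, 6, 8)
error <- c(0.3, 0.3, 0.3, 0.3)
fit <- lm(y ~ x, weights = 1 / error^2)
coef(fit)                                    # slope 2 and a (numerically) zero intercept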
2018 Mar 15
3
stats 'dist' euclidean distance calculation
Hello,
I am working with a matrix of multilocus genotypes for ~180 individual snail samples, with substantial missing data. I am trying to calculate the pairwise genetic distance between individuals with the 'dist' function from the stats package, using Euclidean distance. I took a subset of this dataset (3 samples x 3 loci) to test how the Euclidean distance is calculated:
3x3 subset used
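For what it is worth, ?dist says that with missing values the pairwise-complete columns are used and the sum of squares is scaled up by ncol/ncols_used before the square root is taken; a small made-up check:
m <- rbind(a = c(1, 2, NA),
           b = c(2, 4,  6))
dist(m)                                      # built-in euclidean with one NA
sqrt((3 / 2) * sum((c(1, 2) - c(2, 4))^2))   # manual: 2 of 3 columns usable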
2007 Mar 09
1
help with zicounts
Dear UseRs:
I have simulated data from a zero-inflated Poisson model, and would like
to use a package like zicounts to test my code of fitting the model.
My question is: can I use zicounts directly with the following simulated
data?
Create a sample of n=1000 observations from a ZIP model with no intercept
and a single covariate x_{i} which is N(0,1). The logit part is
logit(p_{i})=x_{i}*beta
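A sketch of simulating such data, with made-up true parameter values; the fit could then be cross-checked against pscl::zeroinfl() as well as zicounts:
set.seed(123)
n <- 1000
x <- rnorm(n)
b_logit <- 0.5                               # made-up true beta for the logit part
b_count <- 1.0                               # made-up true coefficient for log(lambda)
p      <- plogis(x * b_logit)                # logit(p_i) = x_i * beta, no intercept
lambda <- exp(x * b_count)                   # log(lambda_i) = x_i * gamma, no intercept
y      <- ifelse(rbinom(n, 1, p) == 1, 0, rpois(n, lambda))
# library(pscl); zeroinfl(y ~ 0 + x | 0 + x)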