Displaying 12 results from an estimated 12 matches for "x_new".
2012 Feb 21
0
BHHH algorithm on duration time models for stock prices
...hat.
The log-likelihood:
LogLik <- function(param) {
  beta_1 <- param[1]
  beta_2 <- param[2]
  beta_3 <- param[3]
  beta_4 <- param[4]
  lambda.plus  <- beta_1*Iplusless1 + beta_2*Iminusless1
  lambda.minus <- beta_3*Iplusless1 + beta_4*Iminusless1
  sum(Iplus_new*log(lambda.plus)) - sum(log(lambda.plus)*x_new) +
    sum(Iminus_new*log(lambda.minus)) - sum(log(lambda.minus)*x_new)
}
The gradients:
gradient <- function(param) {
  beta_1 <- param[1]
  beta_2 <- param[2]
  beta_3 <- param[3]
  beta_4 <- param[4]
  lambda.plus  <- beta_1*Iplusless1 + beta_2*Iminusless1
  lambda.minus <- beta_3*Iplusless1 + beta_4*Iminu...
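Not part of the original post: a minimal sketch of running BHHH through the maxLik package on a toy exponential duration model, since BHHH needs the log-likelihood (or gradient) returned per observation rather than summed. The durations x_new here are simulated and the single rate parameter is an assumption for illustration only.
library(maxLik)

set.seed(1)
x_new <- rexp(200, rate = 2)          # simulated durations (assumption)

loglik_i <- function(param) {
  lambda <- param[1]
  log(lambda) - lambda * x_new        # one log-density contribution per observation
}

fit <- maxLik(logLik = loglik_i, start = c(lambda = 1), method = "BHHH")
summary(fit)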
2011 Mar 12
2
Identifying unique pairs
...
     x   y
 4   2   8
 5   2   8
 6   2   8
 7   2   7
 8   2   7
 9   5   2
10   5   2
11   6   4
unique(mydat$x) will give me 1, 2, 5, 6 i.e. 4 values and
unique(mydat$y) will give me 10, 8, 7, 2, 4.
What I need is a data frame where I get a vector, say x_new, equal to (1, 2, 2, 5, 6) and a corresponding y_new equal to (10, 8, 7, 2, 4). I need to use these two vectors, x_new and y_new, separately for further processing. They may live in the same data frame, say mydat_new, but I should be able to access them as mydat_new$x_new and similarly for y.
I tried following way....
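Not the poster's attempt: a minimal sketch using unique() on the two columns together, which keeps each distinct (x, y) pair rather than deduplicating x and y separately. The first three rows of mydat are an assumption, since the excerpt starts at row 4.
mydat <- data.frame(x = c(1, 2, 2, 2, 2, 2, 2, 2, 5, 5, 6),
                    y = c(10, 8, 8, 8, 8, 8, 7, 7, 2, 2, 4))

mydat_new <- unique(mydat)                 # unique rows, i.e. unique (x, y) pairs
names(mydat_new) <- c("x_new", "y_new")
rownames(mydat_new) <- NULL

mydat_new$x_new                            # 1 2 2 5 6
mydat_new$y_new                            # 10 8 7 2 4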
2016 Jul 09
2
Neural network with complicated categories
...--
V1Binario <- model.matrix(~ factor(x$V1) - 1)
V2Binario <- model.matrix(~ factor(x$V2) - 1)
V3Binario <- model.matrix(~ factor(x$V3) - 1)
V4Binario <- model.matrix(~ factor(x$V4) - 1)
V5Binario <- model.matrix(~ factor(x$V5) - 1)
V6Binario <- model.matrix(~ factor(x$V6) - 1)
x_new <- cbind(V1Binario,V2Binario)
x_new <- cbind(x_new,V3Binario)
x_new <- cbind(x_new,V4Binario)
x_new <- cbind(x_new,V5Binario)
nam_ori <- colnames(x_new)
col_nam <- paste("V", 1:ncol(x_new), sep = "")
colnames(x_new) <- col_nam
library(RSNNS)
xValues <-...
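Not from the thread: a more compact sketch of the same dummy-coding step, assuming x is a data frame whose columns V1..V6 are categorical. Each column gets a full set of indicator columns (the "- 1" drops the intercept), and the pieces are bound in one call instead of repeated cbind()s; the toy data frame below is a stand-in for the poster's x.
x <- data.frame(V1 = sample(c("a", "b", "c"), 20, TRUE),
                V2 = sample(c("p", "q"),      20, TRUE),
                V3 = sample(c("u", "v", "w"), 20, TRUE),
                V4 = sample(c("s", "t"),      20, TRUE),
                V5 = sample(c("m", "n"),      20, TRUE),
                V6 = sample(c("g", "h"),      20, TRUE))

dummies <- lapply(x, function(col) model.matrix(~ factor(col) - 1))
x_new   <- do.call(cbind, dummies)
colnames(x_new) <- paste0("V", seq_len(ncol(x_new)))
head(x_new)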
2012 Dec 24
1
How to do it through 1 step?
A data set (dat) has 2 variables, x and a, and 100 rows.
I want to add 2 variables and call the new data set dat1:
var1: f = a/median(a)
var2: x_new = x*f
My solution:
dat1 <- transform(dat, f = a/median(a), x_new = x*f)
But I get an error saying that "f" does not exist, since dat has no variable called "f".
So I have to do it in 2 steps:
dat0 <- transform(dat, f = a/median(a))
dat1 <- transform(dat0, x_new = x*f)
How...
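Not from the thread: a sketch of one-step alternatives, assuming a data frame dat with columns x and a. within() evaluates its expressions sequentially, so a column defined on one line can be used on the next; dplyr::mutate() behaves the same way.
dat <- data.frame(x = rnorm(100), a = runif(100))   # stand-in data (assumption)

dat1 <- within(dat, {
  f     <- a / median(a)
  x_new <- x * f
})

# or, if the dplyr package is acceptable:
# library(dplyr)
# dat1 <- mutate(dat, f = a / median(a), x_new = x * f)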
2010 Apr 22
1
Convert character string to top levels + NAN
Dear all,
I have several character strings with a high number of different levels.
unique(x) gives me values in the range of 100-200.
This creates problems as I would like to use them as predictors in a coxph
model.
I therefore would like to convert each of these strings to a new string
(x_new).
x_new should be equal to x for the top n categories (i.e. the top n levels
with the highest occurrence) and NAN elsewhere.
For example, for n=3 x_new would have three levels: The three most common
levels of x + NAN.
Is there some convenient way of doing this?
Thanks in advance,
Michael
Micha...
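Not from the thread: a minimal sketch of one way to do this, keeping the n most frequent values of a character vector x and replacing everything else with NA (reading the post's "NAN" as a missing-value marker). The toy vector is an assumption.
top_n_levels <- function(x, n = 3) {
  top <- names(sort(table(x), decreasing = TRUE))[seq_len(n)]
  ifelse(x %in% top, x, NA)                 # keep the top-n values, NA elsewhere
}

x <- sample(letters[1:6], 50, replace = TRUE, prob = c(5, 4, 3, 1, 1, 1))
x_new <- top_n_levels(x, n = 3)
table(x_new, useNA = "ifany")
The forcats package offers fct_lump() for a similar lump-the-rare-levels operation, though it collapses them into an "Other" level rather than NA.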
2016 Jul 07
2
Neural network with complicated categories
Dear all,
I am asking about neural networks. There are several articles such as the following (the last one currently has an error), but my question goes in a somewhat different direction.
http://www.r-bloggers.com/build-your-own-neural-network-classifier-in-r/
http://www.r-bloggers.com/classification-using-neural-net-in-r/
Basically you can compute a value, for example turn 2.4 degrees to the right, then 1
2012 Dec 18
2
Set a zero at minimum row by group
...<- c(1,2,3,1,4,3,5,6,8)
x <- rep(1,9)
df <- data.frame(ID,T,x)
> df
  ID  T  x
   1  1  1
   1  2  1
   1  3  1
   2  1  1
   2  4  1
   3  3  1
   3  5  1
   3  6  1
   3  8  1
I want to manipulate the x column in a way that for each customer (ID) at
the minimum of T the x value is set to zero. The result should look like
this:
  ID  T  x  x_new
   1  1  1      0
   1  2  1      1
   1  3  1      1
   2  1  1      0
   2  4  1      1
   3  3  1      0
   3  5  1      1
   3  6  1      1
   3  8  1      1
I already tried the aggregate() and apply() function, but I don't get the
result I'm looking for. I would be glad if you could help me out.
Best regards,
Carlos
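Not from the thread: a sketch using ave(), rebuilding the data from the printed table (the ID vector is reconstructed, since its definition is cut off in the excerpt above).
ID <- c(1, 1, 1, 2, 2, 3, 3, 3, 3)
T  <- c(1, 2, 3, 1, 4, 3, 5, 6, 8)
x  <- rep(1, 9)
df <- data.frame(ID, T, x)

# within each ID, multiply x by 0 where T equals the group minimum, by 1 elsewhere
df$x_new <- df$x * ave(df$T, df$ID, FUN = function(t) as.numeric(t != min(t)))
df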
2009 Aug 12
3
Obtaining the value of x at a given value of y in a smooth.spline object
I have some data fit to a smooth.spline object as follows: (x=vector of data
for the predictor variable, y=vector of data for the response variable)
fit <- smooth.spline(x,y)
Now, given a fitted value y_new, I want to find the value x_new that yields this fitted value. How can I do so?
(This problem is the inverse of the predict.smooth.spline function, which
takes x_new as input and yields the corresponding y_new fit value)
Any insight is much appreciated!
Thanks,
Kavitha
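Not from the thread: a sketch that inverts the fit numerically with uniroot(), searching for the x at which the spline prediction equals y_new. It assumes the spline is monotone over the search interval; otherwise there may be several solutions and the interval has to be narrowed by hand. The data are simulated.
set.seed(1)
x <- sort(runif(100, 0, 4.5))
y <- sin(x / 3) + rnorm(100, sd = 0.05)     # simulated, monotone trend (assumption)
fit <- smooth.spline(x, y)

y_new <- 0.5
x_new <- uniroot(function(z) predict(fit, z)$y - y_new,
                 interval = range(x))$root
x_new
predict(fit, x_new)$y                       # should be close to y_new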
2010 Aug 07
3
plot the dependent variable against one of the predictors with other predictors as constant
...ne of the predictors, with the other predictors held constant. Not for the original data, but
after prediction, i.e. y is the predicted value of the dependent variable.
The constant value of the other predictors may be the average or some fixed value.
#######
y=1:10
x=10:1
z=2:11
lin_model=lm(z~x+y)
x_new=11:2
#######
How can I plot the predicted value of z from the regression model, with x taking the values in x_new and y held constant (let's say y = 1)?
I am thinking about using the 'predict' command to generate the prediction of z with a new data.frame, but there may be a better way.
Thanks all.
Yi
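Not from the thread: a sketch of the predict()-with-newdata approach, using simulated data in place of the post's toy vectors (which are perfectly collinear, so lm() would drop one term). x varies over a grid while y is pinned at 1.
set.seed(1)
x <- rnorm(50); y <- rnorm(50)
z <- 1 + 2 * x - y + rnorm(50, sd = 0.2)
lin_model <- lm(z ~ x + y)

x_new  <- seq(min(x), max(x), length.out = 100)
newdat <- data.frame(x = x_new, y = 1)          # y held constant at 1
z_hat  <- predict(lin_model, newdata = newdat)

plot(x_new, z_hat, type = "l",
     xlab = "x (y fixed at 1)", ylab = "predicted z")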
2012 Apr 11
1
inference for customized regression in R?
...elp/2011-July/285022.html
The question is: how to do the statistical inference on GMR results?
More specifically, we are looking for the prediction interval:
Let's say we regress y1, y2, ..., yn onto x1, x2, ..., xn.
We would like to know the prediction interval for a new data point:
x_new=x1+x2+x3
(i.e. the new data point is the sum of the existing first three data points)
In ordinary linear regression, we can derive the prediction interval for an
in-sample data point as well as for a new data point...
For our x_new=x1+x2+x3, we can derive formulas for the prediction interval.
But for...
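Not from the thread: for the ordinary-least-squares baseline the post compares against, the prediction interval at x_new = x1 + x2 + x3 comes directly from predict() with interval = "prediction". This does not cover the GMR case itself; the simulated data are an assumption.
set.seed(1)
x <- rnorm(30)
y <- 2 + 3 * x + rnorm(30)
fit <- lm(y ~ x)

x_new <- x[1] + x[2] + x[3]               # new point: sum of the first three x's
predict(fit, newdata = data.frame(x = x_new),
        interval = "prediction", level = 0.95)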
2009 Jul 12
0
Plotting problem [lars()/elasticnet()]
Dear all,
I am using a modified LARS algorithm (ref: "The Adaptive Lasso and Its Oracle
Properties", Zou 2006) for adaptive-lasso-penalized linear regression.
1. w(j) <- |beta_ols(j)|^(-gamma) gamma>0 and j = 1,...,p
2. define x_new(j) <- x(j)*w(j)
3. apply LARS to solve modified lasso problem
out.adalasso <- lars(X_new, y, type = "lasso")   or   enet(X_new, y, lambda = 0)
suppose, the estimated coefficients are beta_new(j), j=1,...,p
4. beta_adalasso <- beta_new(j)*w(j)
everything went fine except...
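Not from the thread: a sketch of the rescaling trick on simulated data, following the computational recipe in Zou (2006). There the design columns are divided by the weights w(j) = |beta_ols(j)|^(-gamma) (equivalently, multiplied by |beta_ols(j)|^gamma), and the lasso coefficients are divided by w(j) afterwards, so the two scalings cancel on the original scale.
library(lars)

set.seed(1)
n <- 100; p <- 5
X <- matrix(rnorm(n * p), n, p)
y <- drop(X %*% c(3, -2, 0, 0, 1) + rnorm(n))

gamma    <- 1
beta_ols <- coef(lm(y ~ X))[-1]                  # OLS estimates, intercept dropped
w        <- abs(beta_ols)^(-gamma)               # adaptive weights
X_new    <- scale(X, center = FALSE, scale = w)  # column j divided by w[j]

out.adalasso  <- lars(X_new, y, type = "lasso")
beta_new      <- coef(out.adalasso)              # one row per step on the lasso path
beta_adalasso <- sweep(beta_new, 2, w, "/")      # back to the original scale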
2009 Feb 15
0
feature extraction on time series data
........ X100. I am trying to reduce the
dimension of this input because the data at the end of each row does not
have significant meaning to the project I am doing.
I used cubic splines on each row -- ns(row, df = 10) -- and decided to reduce
the dimension to 10. Then I generated the new features by X_new = W*X, where
W is obtained from ns(row, df = 10). Now everything seemed perfect and I
was able to fit the logistic regression on the new inputs (500 rows, each row
has only 10 X's). After I computed the coefficients for this reduced model,
I want to map these coefficients back to the original...
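Not from the thread: a sketch of the projection step as described, with simulated stand-ins for the 500 x 100 input matrix X and the binary response. If X_new = X %*% B, with B the spline basis over the column index, and the fitted linear predictor is X_new %*% theta, then the implied coefficients on the original 100 columns are B %*% theta.
library(splines)

set.seed(1)
X  <- matrix(rnorm(500 * 100), nrow = 500)   # simulated inputs (assumption)
yb <- rbinom(500, 1, 0.5)                    # simulated binary response (assumption)

B     <- ns(1:100, df = 10)                  # 100 x 10 natural-spline basis
X_new <- X %*% B                             # 500 x 10 reduced features

fit   <- glm(yb ~ X_new, family = binomial)
theta <- coef(fit)[-1]                       # drop the intercept
beta_original <- B %*% theta                 # length-100 coefficients on the raw columns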