similar to: y ~ X -1 , X a matrix

Displaying 20 results from an estimated 90 matches similar to: "y ~ X -1 , X a matrix"

2007 Dec 31
2
help on ROC analysis
Dear all, some functions, like ROC() from the Epi package, can be used to perform ROC analysis, but they require you to specify the fitting model in an argument. Now I have the predicted p-values in (0,1) for the 0/1 response variable, obtained with some other approach; see the following example dataset:
 id mark predict.pvalue
  1    1          0.927
  2    0          0.928
  3    1          0.928
..................
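A sketch of one approach, assuming the pROC package (not mentioned by the poster): roc() takes the observed response and the predicted values directly, so no fitting model has to be specified.

library(pROC)
dat <- data.frame(mark    = c(1, 0, 1),              # observed 0/1 response
                  predict = c(0.927, 0.928, 0.928))  # predicted probabilities
r <- roc(dat$mark, dat$predict)  # response first, predictor second
plot(r)   # ROC curve
auc(r)    # area under the curve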
2013 Mar 12
5
extract values
Hello all! I have a problem extracting values greater than, for example, 1820. I tried this code:
x1 <- x[x[, 1] > 1820, ]
Please help me! Thank you! The data structure is:
structure(c(2.576, 1.728, 3.434, 2.187, 1.928, 1.886, 1.2425, 1.23, 1.075, 1.1785, 1.186, 1.165, 1.732, 1.517, 1.4095, 1.074, 1.618, 1.677, 1.845, 1.594, 1.6655, 1.1605, 1.425, 1.099, 1.007, 1.1795, 1.3855, 1.4065, 1.138, 1.514,
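The structure() dump is truncated, so it is unclear whether x is a matrix or a plain vector; a sketch covering both cases (the values below are hypothetical):

x <- matrix(c(1815, 1821, 1830, 2.576, 1.728, 3.434), ncol = 2)
x1 <- x[x[, 1] > 1820, ]   # matrix: keep rows whose first column exceeds 1820
v <- c(1815, 1821, 1830)
v1 <- v[v > 1820]          # plain vector: element-wise filter, no comma needed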
2012 Aug 11
1
unused argument
It is a complex function; the functions call one another frequently, so it may be easiest to read from the bottom up. The independent variable for the final fit is q.
# Rg0 is a function of L and b
Rg0sq <- function(L, b) L*b/6 * (1 - 3/2*b/L + 3/2*(b/L)^2 - 3/4*(b/L)^3 * (1 - exp(-2*L/b)))
# alpha is a defined function
alpha <- function(x) (1 + (x/3.12)^2 + (x/8.67)^3)^(0.176/3)
# w is a defined function
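Note that %% is the modulo operator in R, not a comment marker (comments start with #). As for the "unused argument" error in the subject, it arises when a function is called with an argument it does not accept; a minimal illustration:

f <- function(L, b) L + b
f(L = 1, b = 2, q = 3)
# Error in f(L = 1, b = 2, q = 3) : unused argument (q = 3)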
2020 Aug 23
2
sum() vs cumsum() implicit type coercion
Hi, I noticed a small inconsistency when using sum() vs cumsum(). I have a character-based series:
> tryjpy$long
 [1] "0.0022"  "-0.0002" "-0.0149" "-0.0023" "-0.0342" "-0.0245" "-0.0022"
 [8] "0.0003"  "-0.0001" "-0.0004" "-0.0036" "-0.001"  "-0.0011"
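Whatever the exact coercion difference between the two functions, the usual workaround is to convert explicitly before summing; a sketch using the series from the post:

x <- as.numeric(tryjpy$long)  # convert the character series once
sum(x)
cumsum(x)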
2011 May 31
2
Forcing a negative slope in linear regression?
Dear forum members, how can I force a negative slope in a linear regression even though the fitted slope might be positive? I need it to estimate the trend due to reasons other than biological ones, because the biological (genetic) trend is not positive for these data. Thanks. Julia. Example of the data:
[1] 1.254 1.235 1.261 0.952 1.202 1.152 0.801 0.424 0.330 0.251 0.229
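One way to impose the constraint is a bounded fit rather than lm(); a sketch using nls() with the "port" algorithm and an upper bound of zero on the slope (the x values are hypothetical, as the post gives only y):

y <- c(1.254, 1.235, 1.261, 0.952, 1.202, 1.152, 0.801, 0.424, 0.330, 0.251, 0.229)
x <- seq_along(y)  # hypothetical time index
fit <- nls(y ~ a + b * x, start = list(a = 1.3, b = -0.05),
           algorithm = "port", upper = c(a = Inf, b = 0))  # slope constrained <= 0
coef(fit)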
2011 Apr 12
2
font and size Times New Roman
Hello, I wonder how to change the font of a chart to Times New Roman at size 9.
plot(c(0,100,20), c(0,600,50), xlab = 'Idade(meses)', ylab = "Peso(kg)", type = "n", axes = FALSE)
axis(1, pos = 0, at = seq(0, 100, 20))
axis(2, pos = 0, at = seq(0, 600, 100))
t <- seq(0, 100, 1)
TA <- 543.56*(1 - 0.8976*exp(-0.0522*t))
NI <- 498.97*(1 - 0.9259*exp(-0.0494*t))
RC <- 514.57*(1 - 0.9112*exp(-0.0499*t))
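A sketch for base graphics on Windows (an assumption; the font must be registered before it can be selected by family name). On other platforms, family = "serif" usually maps to a Times-like face.

windowsFonts(Times = windowsFont("Times New Roman"))
par(family = "Times", ps = 9)  # ps sets the base point size for text
# ... then run the plot()/axis() calls above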
2017 Oct 10
1
Unbalanced data in split-plot analysis with aov()
Dear all, I'm analysing a split-plot experiment where there are sometimes one or two values missing. I realized that if the data are slightly unbalanced, the effect of the subplot treatment will also appear in, and be tested against, the main-plot error term. I replicated this with the Oats dataset from Yates (1935), contained in the nlme package, where Variety is on the main plot and nitro on
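A sketch of the replication described above; which row is dropped to unbalance the design is a hypothetical choice:

library(nlme)
data(Oats)
oats <- as.data.frame(Oats)
oats$nitro <- factor(oats$nitro)
oats_mis <- oats[-1, ]  # drop one observation to unbalance the design
fit <- aov(yield ~ Variety * nitro + Error(Block/Variety), data = oats_mis)
summary(fit)  # nitro terms now also appear in the Block:Variety stratum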
2012 Dec 14
1
format.pval () and printCoefmat ()
Hi List, my goal is to force R not to print scientific notation in the sixth column (rel_diff, the p-value) of my data frame (not a matrix). I have used the format.pval() and printCoefmat() functions on the data frame; the R script is appended below. The issue is that using format.pval() and printCoefmat() on the data frame gives me the desired results, but coerces
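A sketch of one way to avoid the coercion: format only the p-value column, so the rest of the data frame keeps its types (the data frame here is hypothetical):

df <- data.frame(est = c(1.2, 3.4), rel_diff = c(1.2e-7, 3.4e-2))
df$rel_diff <- format.pval(df$rel_diff, digits = 4, eps = 1e-10)
df  # rel_diff is now character, but no other column is touched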
2010 Mar 01
1
Random Forest prediction questions
Hi, I need help with randomForest prediction. I run the following code:
> iris.rf <- randomForest(Species ~ ., data = iris,
>                         importance = TRUE, keep.forest = TRUE, proximity = TRUE)
> pr <- predict(iris.rf, iris, predict.all = TRUE)
> iris.rf$votes[53, ]
    setosa versicolor  virginica
 0.0000000  0.8074866  0.1925134
> table(pr$individual[53, ]) / 500
versicolor  virginica
     0.928
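The discrepancy is likely because the votes component is computed from out-of-bag predictions only, while predict() on the training data runs every tree; a sketch of the comparison:

iris.rf$votes[53, ]                          # OOB vote fractions
pr <- predict(iris.rf, iris, predict.all = TRUE)
table(pr$individual[53, ]) / iris.rf$ntree   # vote fractions over all trees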
2005 Jun 08
1
logistic regression (glm binary)
Hi, I am looking for a couple of pointers on using glm (family = binomial). 1. I want to add all the pairwise products of my predictive features as additional features (and I have 23 of them). Is there some easy way to add them? 2. I want to drop each feature in turn and find the most significant model, then drop two and find the next most significant, and so on. Is there some function that allows me to do this?
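A sketch for both points, assuming a hypothetical data frame dat whose column y is the 0/1 response and whose remaining 23 columns are the features; for numeric features, pairwise interaction terms in the formula are exactly the pairwise products:

fit <- glm(y ~ .^2, family = binomial, data = dat)  # main effects + all pairwise products
drop1(fit, test = "Chisq")                          # test dropping each term in turn
step(fit, direction = "backward")                   # repeated dropping, AIC-based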
2020 Aug 25
1
sum() vs cumsum() implicit type coercion
>>>>> Tomas Kalibera on Tue, 25 Aug 2020 09:29:05 +0200 writes:
> On 8/23/20 5:02 PM, Rory Winston wrote:
>> Hi
>> I noticed a small inconsistency when using sum() vs cumsum()
>> I have a char-based series
>> > tryjpy$long
>> [1]
2012 Nov 23
2
[LLVMdev] [cfe-dev] costing optimisations
On 23.11.2012, at 15:12, john skaller <skaller at users.sourceforge.net> wrote:
> On 23/11/2012, at 5:46 PM, Sean Silva wrote:
>> Adding LLVMdev, since this is intimately related to the optimization passes.
>>> I think this is roughly because some function level optimisations are
>>> worse than O(N) in the number of instructions.
2012 Jul 23
3
3D scatterplot, using size of symbols for the fourth variable
Dear R fans, I would like to create a scatterplot showing the relationship between 4 continuous variables. I thought of using the package "scatterplot3d" to get a 3-dimensional plot and then using the size of the symbols to represent the 4th variable. Does anybody know how to do this? I already tried to create this graph using the colour of the symbols, but I was unable to generate
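A sketch with hypothetical data, assuming cex.symbols accepts a vector (worth verifying against the scatterplot3d documentation):

library(scatterplot3d)
n <- 50
x <- rnorm(n); y <- rnorm(n); z <- rnorm(n); w <- runif(n, 1, 5)
sizes <- 0.5 + 2 * (w - min(w)) / diff(range(w))  # rescale 4th variable to a cex range
scatterplot3d(x, y, z, pch = 16, cex.symbols = sizes)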
2011 Nov 11
1
Fwd: Use of R for VECM
----- Forwarded Message -----
From: vramaiah at neo.tamu.edu
To: "bernhard pfaff" <bernhard.pfaff at pfaffikus.de>
Sent: Friday, November 11, 2011 9:03:11 AM GMT -06:00 US/Canada Central
Subject: Use of R for VECM
Hello fellow R users, I am a new user of R and I am applying it to solve a bivariate (consumption and output) VECM with cointegration (I(1)) with three lags on
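The recipient, Bernhard Pfaff, is the author of the urca and vars packages, so a sketch with urca (the data matrix dat, with columns for consumption and output, is hypothetical):

library(urca)
cj <- ca.jo(dat, type = "trace", ecdet = "const", K = 3)  # Johansen test with 3 lags
summary(cj)
vecm <- cajorls(cj, r = 1)  # restricted VECM with one cointegrating relation
vecm$rlm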
2013 Apr 17
1
mgcv: how select significant predictor vars when using gam(...select=TRUE) using automatic optimization
I have 11 possible predictor variables and use them to model quite a few target variables. In search of a consistent and possibly non-manual way to identify the significant predictor variables out of the eleven, I thought the option "select=T" might do. Example (here only 4 predictors); the first is vanilla with "select=F": >
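A sketch of the select = TRUE route with a hypothetical response y and predictors x1..x4: the extra shrinkage penalty can take an unimportant smooth to effectively zero degrees of freedom, which acts as automatic term selection:

library(mgcv)
fit <- gam(y ~ s(x1) + s(x2) + s(x3) + s(x4),
           select = TRUE, method = "REML", data = dat)
summary(fit)  # smooths with edf near 0 have been penalized out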
2001 Feb 08
2
Test for multiple contrasts?
Hello, I've fitted a parametric survival model by
> survreg(Surv(Week, Cens) ~ C(Treatment, srmod.contr),
>         data = poll.surv.wo3)
where srmod.contr is the following matrix of contrasts:
     prep auto       poll self home
[1,]    1    1  1.0000000  0.0    0
[2,]   -1    0  0.0000000  0.0    0
[3,]    0   -1  0.0000000  0.0    0
[4,]    0    0 -0.3333333  1.0    0
[5,]    0    0
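A generic Wald test of several contrasts at once (a sketch; fit is the survreg object, and K is a hypothetical matrix with one row per tested contrast and one column per coefficient):

b <- coef(fit)
V <- vcov(fit)[names(b), names(b)]  # drop the Log(scale) row/column
K <- rbind(c(0, 1, -1, 0, 0))       # hypothetical single contrast
est <- K %*% b
W <- t(est) %*% solve(K %*% V %*% t(K)) %*% est
pchisq(drop(W), df = nrow(K), lower.tail = FALSE)  # Wald chi-square p-value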
2012 Oct 09
1
car::linearHypothesis Sum of Squares Error?
I am working with an RCB 2x2x3 ANCOVA, and I have noticed a difference in the calculation of sums of squares in a Type III calculation. Anova output is as follows:
> Anova(aov(MSOIL ~ Forest + Burn*Thin*Moisture + ROCK, data = env3l), type = 3)
Anova Table (Type III tests)
Response: MSOIL
             Sum Sq Df F value    Pr(>F)
(Intercept) 22.3682  1 53.2141 3.499e-07 ***
Forest
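One common source of such differences: Type III sums of squares are only meaningful under sum-to-zero contrasts, and with R's default treatment contrasts car::Anova(type = 3) can disagree with other software. A sketch:

options(contrasts = c("contr.sum", "contr.poly"))
library(car)
Anova(aov(MSOIL ~ Forest + Burn*Thin*Moisture + ROCK, data = env3l), type = 3)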
2008 Aug 01
5
drop1() seems to give unexpected results compare to anova()
Dear all, I have been trying to investigate the behaviour of different weights in weighted regression for a dataset with lots of missing data. As a start I simulated some data using the following:
library(MASS)
N <- 200
sigma <- matrix(c(1, .5, .5, 1), nrow = 2)
sim.set <- as.data.frame(mvrnorm(N, c(0, 0), sigma))
colnames(sim.set) <- c('x1', 'x2')
# x1 & x2 are
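Part of the difference is expected by design: anova() on a fitted lm gives sequential (Type I) tests, while drop1() tests each term against the full model, so the two differ whenever predictors are correlated. A sketch continuing the simulation with a hypothetical response:

sim.set$y <- with(sim.set, x1 + x2 + rnorm(N))  # hypothetical response
fit <- lm(y ~ x1 + x2, data = sim.set)
anova(fit)              # sequential tests: order of terms matters
drop1(fit, test = "F")  # marginal tests: each term dropped from the full model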
2003 Jul 23
6
Condition indexes and variance inflation factors
Has anyone programmed condition indexes in R? I know that there is a function for variance inflation factors available in the car package; however, Belsley (1991) Conditioning Diagnostics (Wiley) notes that there are several weaknesses of VIFs: e.g. 1) High VIFs are sufficient but not necessary conditions for collinearity 2) VIFs don't diagnose the number of collinearities and 3) No one has
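A sketch of Belsley-style condition indexes (the perturb package's colldiag() may also implement this, if available): columns of the model matrix are scaled to unit length but not centered, following Belsley (1991):

condition_indexes <- function(fit) {
  X <- model.matrix(fit)
  Xs <- sweep(X, 2, sqrt(colSums(X^2)), "/")  # unit-length columns
  d <- svd(Xs, nu = 0, nv = 0)$d              # singular values
  max(d) / d                                  # condition indexes
}
condition_indexes(lm(mpg ~ disp + hp + wt, data = mtcars))  # example fit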
2010 Aug 16
2
When to use bootstrap confidence intervals?
Hello, I have a question regarding bootstrap confidence intervals. Suppose we have a data set consisting of single measurements, and that the measurements are independent but the distribution is unknown. If we want a confidence interval for the population mean, when should a bootstrap confidence interval be preferred over the elementary t interval? I was hoping the answer would be
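A sketch comparing the two intervals on a hypothetical skewed sample, using the boot package:

library(boot)
set.seed(1)
x <- rexp(30)                                    # skewed, non-normal data
t.test(x)$conf.int                               # elementary t interval
b <- boot(x, function(d, i) mean(d[i]), R = 9999)
boot.ci(b, type = c("perc", "bca"))              # bootstrap intervals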