search for: 0.644

Displaying 20 results from an estimated 23 matches for "0.644".

2012 Mar 20
2
Constraint Linear regression
Hi there, I am trying to use linear regression to solve the following equation:

y <- c(0.2525, 0.3448, 0.2358, 0.3696, 0.2708, 0.1667, 0.2941, 0.2333, 0.1500, 0.3077, 0.3462, 0.1667, 0.2500, 0.3214, 0.1364)
x2 <- c(0.368, 0.537, 0.379, 0.472, 0.401, 0.361, 0.644, 0.444, 0.440, 0.676, 0.679, 0.622, 0.450, 0.379, 0.620)
x1 <- 1 - x2

# equation
lmFit <- lm(y ~ x1 + x2)
lmFit

Call:
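Because x1 and x2 sum to one by construction, the design above is collinear with the intercept and lm() will drop one term. A minimal sketch of one common workaround, assuming the goal is a weighted-mix model with both coefficients estimated, is to drop the intercept; this may not be the exact constraint the poster had in mind.

# Sketch: same data as the post, intercept removed so both slopes are identifiable
y  <- c(0.2525, 0.3448, 0.2358, 0.3696, 0.2708, 0.1667, 0.2941, 0.2333,
        0.1500, 0.3077, 0.3462, 0.1667, 0.2500, 0.3214, 0.1364)
x2 <- c(0.368, 0.537, 0.379, 0.472, 0.401, 0.361, 0.644, 0.444,
        0.440, 0.676, 0.679, 0.622, 0.450, 0.379, 0.620)
x1 <- 1 - x2

lmFit <- lm(y ~ 0 + x1 + x2)   # y modelled as a weighted mix of x1 and x2
summary(lmFit)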
2011 Jul 13
1
max possible rsquare
Dear all, I have a question regarding the output of the coxph function. What exactly does the 'max possible' mean in the output below? Many thanks.

                     coef exp(coef) se(coef) robust se      z Pr(>|z|)
smocc_zyban       -0.4384    0.6451   0.8667    0.9473 -0.463    0.644
self               1.1857    3.2728   0.1405    0.1443  8.216 2.22e-16 ***
smocc_zyban:self
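The 'max possible' figure that older survival output prints next to the generalized R-squared is the upper bound that R-squared can reach given the null model's log-likelihood. A hedged sketch of how both numbers can be reproduced from a fitted coxph object, assuming the usual Cox-Snell-style definition and using the lung data shipped with survival rather than the poster's smoking-cessation model:

library(survival)

fit <- coxph(Surv(time, status) ~ age + sex, data = lung)
n   <- fit$n
ll0 <- fit$loglik[1]                      # log-likelihood of the null model
ll1 <- fit$loglik[2]                      # log-likelihood of the fitted model

rsq    <- 1 - exp(-2 * (ll1 - ll0) / n)   # generalized (Cox-Snell) R-squared
maxrsq <- 1 - exp( 2 * ll0 / n)           # its upper bound: the "max possible"
c(rsq = rsq, max_possible = maxrsq)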
2004 Jan 19
1
qda problem
Hi, the following strange error appears when I use qda:

> qda1 <- qda(as.data.frame(mfilters[cvtrain,]), as.factor(traingroups))
Error: function is not a closure

That's also strange:

> qda1 <- qda(mfilters[cvtrain,], as.factor(traingroups))
Error in qda.default(mfilters[cvtrain, ], as.factor(traingroups)) :
  length of dimnames must match that of dims

Some background:
>
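For comparison, a minimal qda() call that does run, showing the shapes MASS::qda expects (a numeric matrix or data frame plus a grouping factor of matching length). The data here are simulated stand-ins, not the poster's mfilters/traingroups objects.

library(MASS)

set.seed(1)
X   <- matrix(rnorm(200), nrow = 50, ncol = 4,
              dimnames = list(NULL, paste0("f", 1:4)))
grp <- factor(rep(c("a", "b"), each = 25))

fit_default <- qda(X, grouping = grp)                    # default (matrix) method
fit_formula <- qda(grp ~ ., data = data.frame(grp, X))   # formula method
predict(fit_default, X)$class[1:5]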
2012 May 08
2
mgcv: inclusion of random intercept in model - based on p-value of smooth or anova?
Dear useRs, I am using mgcv version 1.7-16. When I create a model with a few non-linear terms and a random intercept for (in my case) country using s(Country,bs="re"), the representative line in my model (i.e. approximate significance of smooth terms) for the random intercept reads:

           edf Ref.df F p-value
s(Country) 36.127 58.551 0.644
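A hedged sketch of the two approaches the question contrasts, on simulated data rather than the poster's: reading the approximate p-value for the s(Country, bs = "re") term from summary(), versus comparing ML fits with and without the random intercept via anova().

library(mgcv)

set.seed(1)
d <- gamSim(1, n = 400)                      # simulated smooth-term data
d$Country <- factor(sample(letters[1:10], 400, replace = TRUE))

m1 <- gam(y ~ s(x1) + s(x2) + s(Country, bs = "re"), data = d, method = "ML")
m0 <- gam(y ~ s(x1) + s(x2), data = d, method = "ML")

summary(m1)                     # approximate p-value for s(Country)
anova(m0, m1, test = "Chisq")   # nested-model comparison (ML, not REML)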
2006 Mar 08
1
RES: survival
Dear Thomas, The head of my dataset:

> head(wsuv)
  parcel                                  sp time censo treatment species
1     S8 Poecilanthe effusa ( Hub. ) Ducke.    1     1         1       1
2     S8 Poecilanthe effusa ( Hub. ) Ducke.    1     1         1       1
3     S8 Poecilanthe effusa ( Hub. ) Ducke.    1     1         1       1
4     S8 Poecilanthe effusa ( Hub. ) Ducke.    1     1         1
1999 Oct 25
2
leaps: XHAUST returned error code -999
Hi there, This problem has been dogging me for a bit, and I'm trying to figure out why. When running the subsets function in the leaps library, R gives me the following error message:

> lvodsub <- subsets(pred, resp$LVOD)
Warning message:
XHAUST returned error code -999 in: leaps.exhaustive(a, really.big = really.big)

but this still happens if I add the really.big option:
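For comparison, a hedged sketch of the current leaps interface: regsubsets() with nvmax and really.big set explicitly is the usual route for larger predictor sets. mtcars stands in for the poster's pred and resp$LVOD objects.

library(leaps)

fit <- regsubsets(mpg ~ ., data = mtcars, nvmax = 10, really.big = FALSE)
summary(fit)$which   # which predictors enter the best subset of each size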
2008 Aug 25
3
lmer4 and variable selection
Dear list, I am currently working with a rather large data set on body temperature regulation in wintering birds. My original model contains quite a few explanatory variables, but I do not (of course) wish to keep them all in my final model. I've fitted the following model to the data: >
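A hedged sketch of one standard selection workflow in lme4, with the sleepstudy data shipped with the package standing in for the poster's bird data: fit nested models with ML (REML = FALSE) and compare them with anova().

library(lme4)

m_full <- lmer(Reaction ~ Days + (1 | Subject), sleepstudy, REML = FALSE)
m_red  <- lmer(Reaction ~ 1 + (1 | Subject), sleepstudy, REML = FALSE)
anova(m_red, m_full)   # likelihood-ratio test for dropping Days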
2007 Jul 13
2
nearest correlation to polychoric
Dear all, Has anyone implemented in R (or any other language) Knol DL, ten Berge JMF. Least-squares approximation of an improper correlation matrix by a proper one. Psychometrika, 1989, 54, 53-61. or any other similar algorithm? Best regards Jens Oehlschlägel Background: I want to factanal() matrices of polychoric correlations which have negative eigenvalues. I coded Higham 2002
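One readily available option along these lines is Matrix::nearPD(), which implements a Higham-style nearest positive semi-definite approximation and can constrain the result to be a correlation matrix. A minimal sketch with an illustrative indefinite matrix, not the poster's polychoric one:

library(Matrix)

C <- matrix(c(1.0, 0.9, 0.7,
              0.9, 1.0, 0.3,
              0.7, 0.3, 1.0), nrow = 3)   # indefinite: one negative eigenvalue
eigen(C)$values

C_fixed <- nearPD(C, corr = TRUE)$mat     # nearest proper correlation matrix
eigen(as.matrix(C_fixed))$values          # non-negative (up to tolerance)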
2009 Oct 07
0
how to extract the second table from the factanal function's loadings output?
Hi All, Can someone help me? The way to do this may be very easy but I do not know.

Question 1: the factanal() function produces its results in this way:

> fact1 <- factanal(data_withNA, factors = 1, rotation = "none")
> fact1$loadings

Loadings:
    Factor1
i1  0.784
i2  0.874
i3  0.786
i4  0.839
i5  0.778
i6  0.859
i7  0.850
i8  0.763
i9  0.810
i10 0.575
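The "second table" that print() shows below the loadings (SS loadings, Proportion Var, Cumulative Var) is not stored as a separate component, but it is easy to rebuild from the loadings matrix. A hedged sketch using the ability.cov example from ?factanal in place of the poster's fact1 object:

f    <- factanal(factors = 2, covmat = ability.cov)   # stand-in for fact1
L    <- unclass(f$loadings)        # plain numeric matrix of loadings
ss   <- colSums(L^2)               # SS loadings per factor
prop <- ss / nrow(L)               # Proportion Var
rbind(`SS loadings` = ss,
      `Proportion Var` = prop,
      `Cumulative Var` = cumsum(prop))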
2009 Jan 29
0
lmer for a binary dependent variable
Hi, I am trying to use the lmer function from the lme4 package in R 2.8.0 to fit a generalized mixed-effects model for a dependent variable with a binomial distribution (for more info on my experiment, look below). However, I encounter a major problem: How is it possible to find the general test statistic and see the relative importance of the predictors? The methods which I found described in
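A hedged sketch of a binomial GLMM in lme4, using the cbpp data shipped with the package rather than the poster's experiment, with drop1() as one way to get a likelihood-ratio statistic for each fixed-effect term:

library(lme4)

gm <- glmer(cbind(incidence, size - incidence) ~ period + (1 | herd),
            data = cbpp, family = binomial)
summary(gm)                 # Wald z statistics per coefficient
drop1(gm, test = "Chisq")   # likelihood-ratio test for dropping 'period'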
2013 Jan 12
4
nesting in CoxPH with survival package
Hello all, I am trying to understand how to specify nested factors when using coxph(), and if it is appropriate to nest these factors in my situation. In the simplest form, I am testing two different temperatures, with each temperature being performed twice in different experimental periods (e.g. Temp5 performed in Period A and C, Temp4 performed in Period B and D) I am trying to see if survival
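A hedged sketch of one way to encode the design described above, with hypothetical column names (time, status, Temp, Period) and simulated survival times: stratifying on Period lets the baseline hazard differ by experimental period while Temp stays a fixed effect. Whether nesting is appropriate for this experiment is a design question the sketch does not settle.

library(survival)

set.seed(1)
d <- data.frame(
  time   = rexp(80),
  status = rbinom(80, 1, 0.8),
  Temp   = factor(rep(c("Temp4", "Temp5"), each = 40)),
  Period = factor(c(rep(c("B", "D"), 20), rep(c("A", "C"), 20)))  # periods nested in Temp
)

fit <- coxph(Surv(time, status) ~ Temp + strata(Period), data = d)
summary(fit)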
2004 Jan 30
0
GLMM (lme4) vs. glmmPQL output (summary with lme4 revised)
This is a summary and extension of the thread "GLMM (lme4) vs. glmmPQL output" http://maths.newcastle.edu.au/~rking/R/help/04/01/0180.html In the new revision (#Version: 0.4-7) of lme4 the standard errors are close to those of the 4 other methods. Thanks to Douglas Bates, Saikat DebRoy for the revision, and to Göran Broström who ran a simulation. In response to my first posting, Prof.
2010 Mar 26
1
a vectorized solution to some simple dataframe math?
I have a data frame containing the results of time measurements taken from several cells. Each cell was measured in conditions A and B, and there are an arbitrary number of measurements in each condition. I am trying to calculate the difference of each measurement from the mean of a given cell in a given condition without relying on loops.

> my.df
  id cond time
1
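A hedged sketch of the loop-free calculation with ave(), which returns the group mean aligned row-by-row with the original data; the toy data reuse the poster's column names (id, cond, time).

set.seed(1)
my.df <- data.frame(
  id   = rep(1:3, each = 4),
  cond = rep(c("A", "B"), times = 6),
  time = rnorm(12)
)
my.df$dev <- my.df$time - ave(my.df$time, my.df$id, my.df$cond)   # deviation from each cell/condition mean
head(my.df)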
2008 Jun 28
2
Parallel R
Hello, The problem I'm working on now requires operating on big matrices. I've noticed that there are some packages that allow running some commands in parallel. I've tried snow and NetWorkSpaces, without much success (they are far slower than the normal functions). My problem is very simple: it doesn't require any communication between parallel tasks; only that it divides
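A hedged sketch of an embarrassingly parallel setup with the base 'parallel' package (which post-dates this 2008 thread); the worker function here is a placeholder matrix computation, not the poster's actual task.

library(parallel)

heavy_op <- function(i) {
  m <- matrix(rnorm(500 * 500), 500, 500)   # placeholder for the real work
  sum(crossprod(m))
}

cl  <- makeCluster(max(1, detectCores() - 1))
res <- parLapply(cl, 1:8, heavy_op)         # no communication between tasks
stopCluster(cl)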
2010 Sep 28
1
Very slow plot rendering with X11 on CentOS 5.5
I am connecting from a PC to a Linux system running CentOS release 5.5 (Final) and it is extremely slow to render plots to the X11 device. This is not R's fault but I wonder if anyone can offer guidance so I can help the system administrators address the problem. I can connect to the Linux server using a NoMachine NX client for Windows or using X-Win32. I also have access to R running
2007 Jul 14
0
ts model challenge (transfer function)
Dear useRs, I am trying to model a time series with a transfer function. I think it can be put into the ARMA framework, and estimated with the 'arima' function (and others have made similar comments on this list). I have tried to do that, but the results have so far been disappointing. Maybe I am trying to make 'arima' do something it can't... The data are time series of
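A hedged sketch of the closest thing base arima() offers: regression on an input series with ARMA errors via the xreg argument. This covers only a simple special case of a transfer-function model (no rational lag polynomial), and the series below are simulated, not the poster's.

set.seed(1)
x <- arima.sim(list(ar = 0.7), n = 200)                # input series
y <- 2 + 0.5 * x + arima.sim(list(ar = 0.4), n = 200)  # output with AR(1) noise

fit <- arima(y, order = c(1, 0, 0), xreg = x)          # regression with AR(1) errors
fit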
2006 Jun 23
1
looping through a data frame
Hi- I am having trouble with the syntax of looping through the rows and columns of a data frame. I have a table with 17 observations for 84 lines at n=5-10 per line. So the table is ~700x17. I want to pull out the median and stdev for each line and put it in a dataframe with rowname = linename. So I have tried the following:

# read in the table
input.table <- read.table(file =
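A hedged sketch that avoids the explicit loop by using aggregate(); 'line' and the two measurement columns are stand-ins for the poster's ~700x17 table.

set.seed(1)
dat <- data.frame(
  line = rep(paste0("line", 1:4), each = 5),
  m1   = rnorm(20),
  m2   = rnorm(20)
)
med <- aggregate(cbind(m1, m2) ~ line, data = dat, FUN = median)   # per-line medians
sds <- aggregate(cbind(m1, m2) ~ line, data = dat, FUN = sd)       # per-line standard deviations
med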
2018 May 15
0
Systemfit
... and the mailing list is picky about attachments... whatever you attached did not conform to the stringent requirements mentioned in the Posting Guide. Pasting the code right into the email is usually safest, though you DO have to post using plain text (as the Posting Guide indicates) or your code may get mangled by the automatic html format removal. On May 15, 2018 7:04:31 AM PDT, Bert Gunter
2018 May 15
2
Systemfit
OK, let's try this again! Here is the reproducible script; it is long because I had to copy the panel dataset here. My question is related to systemfit; I don't know how to get the result for the entire panel.

# Reproducible script
Empdata <- read.csv("/Users/ngwinuiazenui/Documents/UPLOADemp.csv")
View(Empdata)
install.packages("systemfit")
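A hedged sketch of the basic systemfit call structure, using the Kmenta data shipped with the package rather than the poster's panel dataset; for genuine panel estimation systemfit expects the data prepared as a plm::pdata.frame, which this sketch does not cover.

library(systemfit)
data("Kmenta")

eqDemand <- consump ~ price + income
eqSupply <- consump ~ price + farmPrice + trend
fit <- systemfit(list(demand = eqDemand, supply = eqSupply),
                 method = "SUR", data = Kmenta)
summary(fit)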
2018 May 15
1
Systemfit
Unless there is good reason not to, always cc the list -- there are lots of smarter folks than I on it who can help. I may or may not have time to look at this. Hopefully someone else will. -- Bert Bert Gunter "The trouble with having an open mind is that people keep coming along and sticking things into it." -- Opus (aka Berkeley Breathed in his "Bloom County" comic strip