Displaying 20 results from an estimated 6000 matches similar to: "Interpretation of VarCorr results"
2010 Sep 28
2
Table with different digit number
Hi!
I have a table representing both absolute and relative frequency, for
example (code to get example data under the signature):
         Italy  Germany
absolute   100    105
relative  40.51  41.18
How can I print a different number of decimal digits in each row? I tried
transforming the values with as.character, but then the cells end up
left-aligned, and I don't like this solution. At the
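A minimal sketch of one way to do this, assuming the table is stored as a small matrix (the name freq and the reconstructed values are hypothetical): format each row as character with its own number of decimals, then print right-aligned with noquote() so the left-alignment problem of plain as.character() goes away.
freq <- rbind(absolute = c(Italy = 100, Germany = 105),
              relative = c(Italy = 40.51, Germany = 41.18))
# format row by row: integers for the counts, two decimals for the percentages
fmt <- rbind(absolute = formatC(freq["absolute", ], format = "d"),
             relative = formatC(freq["relative", ], format = "f", digits = 2))
noquote(format(fmt, justify = "right"))   # print without quotes, right-justified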
2012 May 01
1
VarCorr procedure from lme4
Folks
In trying to use lmer for a hierarchical model, I encountered the
following message:
Error in UseMethod("VarCorr") :
no applicable method for 'VarCorr' applied to an object of class "mer"
foo.mer <- lmer(y ~ TP + (TP | M), data = joe.q)
> head(joe.q[, 1:5])
  TP   M AB Trt        y
1  1 Jan  A  NN 19.20002
2  1 Jan  A  NN 19.06378
3  1 Jan  A  NN
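The usual cause of that message is that the VarCorr() found on the search path is nlme's S3 generic, which has no method for lme4's fitted objects. A hedged sketch of the workaround (joe.q and the model formula are the poster's):
library(lme4)                                    # lme4 supplies the VarCorr method for lmer fits
foo.mer <- lmer(y ~ TP + (TP | M), data = joe.q)
lme4::VarCorr(foo.mer)                           # call lme4's VarCorr explicitly in case nlme masks it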
2006 May 08
1
Repeatability and lme
Dear R-help list members
I gathered longitudinal data on fish behaviour which I am trying to analyse
using a multi-level model for change. Mostly, I am following Singer & Willett
(2003), who also provide the S/R code for their examples in the book (e.g.
http://www.ats.ucla.edu/stat/Splus/examples/alda/ch4.htm). Of course I am
interested in change over time, but I am also very much interested in
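For the repeatability side of such an analysis, one common sketch (variable and data-frame names here are hypothetical) fits a random-intercept model and computes the intraclass correlation from the variance components:
library(nlme)
fit <- lme(behaviour ~ time, random = ~ 1 | fish_id, data = fishdata, method = "ML")
vc  <- as.numeric(VarCorr(fit)[, "Variance"])  # between-fish variance, then residual variance
vc[1] / sum(vc)                                # repeatability (ICC) = var(fish) / total variance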
2011 Jan 19
2
VarCorr
I have a loop that I would like to use to extract the "stddev" for
each iteration so I can average the "stddev" over all the runs. It
would be helpful to know how to extract the "stddev" for each run from
the VarCorr output. Thanks
MCruns <- 1000
sighatlvec <- rep(NA, MCruns)
sighatbvec <- rep(NA, MCruns)
sighatevec <- rep(NA, MCruns)
for (mc in 1:MCruns)
{
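A minimal sketch of how the standard deviations might be pulled out of VarCorr() inside such a loop, assuming an lme4 fit with one random intercept for a grouping factor grp (model, data and names are hypothetical):
fit <- lmer(y ~ x + (1 | grp), data = simdata[[mc]])  # hypothetical simulated data for run mc
vc  <- VarCorr(fit)
sighatbvec[mc] <- attr(vc$grp, "stddev")              # SD of the random intercept
sighatevec[mc] <- attr(vc, "sc")                      # residual SD (same as sigma(fit))
After the loop, mean(sighatbvec) and mean(sighatevec) give the averages over the runs.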
2010 Mar 25
1
Read SAS data
Hi!
I need to import some SAS datasets (sas7bdat) into R. I found two functions to
do it:
"read.ssd" from the package "foreign" and "sas.get" from "Hmisc".
df <- read.ssd(libname = path2data, sectionnames = "sasSmallDataset",
               tmpXport = path2data, tmpProgLoc = path2data, sascmd = path2sas)
sas.get(libraryName = path2data, member =
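Both functions drive a local SAS installation, so a hedged sketch might look like this (directory and executable paths are hypothetical, and SAS is assumed to be installed):
library(foreign)
library(Hmisc)
path2data <- "C:/mydata"       # hypothetical: folder containing sasSmallDataset.sas7bdat
path2sas  <- "C:/SAS/sas.exe"  # hypothetical: SAS executable
# foreign::read.ssd converts the data set via a temporary SAS xport file
df1 <- read.ssd(libname = path2data, sectionnames = "sasSmallDataset",
                tmpXport = tempfile(), tmpProgLoc = tempfile(), sascmd = path2sas)
# Hmisc::sas.get also calls SAS (here assumed to be on the PATH) and keeps labels/formats
df2 <- sas.get(libraryName = path2data, member = "sasSmallDataset")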
2009 Jan 23
3
Table Modification
I am trying to construct a two-way table where, instead of printing the
two-way frequencies in the table, I would like to print the values of a
third variable that correspond to the frequencies.
For example, the following is easily constructed in R
> fact1 <- factor(sample(LETTERS[1:3],10,replace=TRUE))
> fact2 <- factor(sample(LETTERS[25:26],10,replace=TRUE))
> fact3
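One way such a table is often built, sketched here under the assumption that fact3 is numeric (or can be reduced to a single value per cell), is with tapply():
fact1 <- factor(sample(LETTERS[1:3], 10, replace = TRUE))
fact2 <- factor(sample(LETTERS[25:26], 10, replace = TRUE))
fact3 <- round(runif(10), 2)                   # hypothetical third variable
tapply(fact3, list(fact1, fact2), FUN = mean)  # one summary value of fact3 per fact1 x fact2 cell
If each fact1/fact2 combination occurs at most once, FUN = mean simply returns that single value.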
2005 Sep 01
2
VarCorr function for assigning random effects: was Question
If you are indeed using lme and not lmer then the needed function is
VarCorr(). However, two recommendations. First, this is a busy list, and
better email subject headers get better attention. Second, I would
recommend using lmer as it is much faster. However, VarCorr seems to be
incompatible with lmer and I do not know of another function to work
with lmer.
Hence, a better email subject header
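For reference, a small self-contained sketch of VarCorr() on an nlme::lme fit, using the Orthodont data shipped with nlme:
library(nlme)
fm <- lme(distance ~ age, random = ~ 1 | Subject, data = Orthodont)
VarCorr(fm)  # variance and standard deviation of the random intercept and of the residual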
2010 Nov 02
5
density() function: differences with S-PLUS
Hello!
Does someone know what the differences are between R and S-PLUS in the density()
function?
For example, I would like to replicate this simple S-PLUS code in R, but I don't
understand which parameter I should modify to get the same results.
S-PLUS CODE:
density(1:1000, width = 4)
R-CODE:
density(1:1000, bw = 4, window = "g", n = 50, cut = 0.75)
I obtain the same x values, but
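One detail that may help: R's density() still accepts the S-PLUS-style width argument for compatibility and converts it internally to a kernel-dependent bw (width/4 for the Gaussian kernel), so a closer match might be sketched as:
# S-PLUS: density(1:1000, width = 4)
d <- density(1:1000, width = 4, window = "g", n = 50, cut = 0.75)
d$bw  # the bandwidth R derived from width = 4 (here 1, since bw = width / 4 for "g")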
2009 Jan 16
5
Value Lookup from File without Slurping
Dear all,
I have a repository file (let's call it repo.txt)
that contains two columns like this:
# tag value
AAA 0.2
AAT 0.3
AAC 0.02
AAG 0.02
ATA 0.3
ATT 0.7
Given another query vector
> qr <- c("AAC", "ATT")
I would like to find the corresponding value for each query above,
yielding:
0.02
0.7
However, I want to avoid slurping the whole repo.txt
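A sketch of one chunk-wise approach that avoids reading all of repo.txt at once (the two-column layout is assumed to be exactly as shown above):
qr  <- c("AAC", "ATT")
con <- file("repo.txt", open = "r")
found <- setNames(rep(NA_real_, length(qr)), qr)
while (length(lines <- readLines(con, n = 1000)) > 0) {  # read 1000 lines at a time
  lines <- lines[!grepl("^#", lines)]                    # skip the "# tag value" header
  parts <- strsplit(lines, "[[:space:]]+")
  tags  <- vapply(parts, `[`, "", 1)
  vals  <- as.numeric(vapply(parts, `[`, "", 2))
  hit   <- tags %in% qr
  found[tags[hit]] <- vals[hit]
  if (!anyNA(found)) break                               # stop early once every query is resolved
}
close(con)
found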
2009 May 05
8
limits
Hey,
what is the R function for the mathematical limit?
e.g. to calculate and return the value that the expression
X^2 + X + 2
approaches
as X approaches 2
(X -> 2)
thanks
hassan
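Base R has no symbolic limit function; for a continuous expression like this one a numeric sketch is usually enough (a computer-algebra package would be needed for symbolic limits):
f <- function(x) x^2 + x + 2
f(2)                                                      # continuous, so the limit is simply f(2) = 8
sapply(2 + c(-0.1, -0.01, -0.001, 0.001, 0.01, 0.1), f)   # values approaching 2 from both sides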
2004 Jul 06
2
lme: extract variance estimate
For a Monte Carlo study I need to extract from an lme model
the estimated standard deviation of a random effect
and store it in a vector. If I do a print() or summary()
on the model, the number I need is displayed in the Console
[it's the 0.1590195 in the output below]
> print(fit)
Linear mixed-effects model fit by maximum likelihood
  Data: datag2
  Log-likelihood:
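A sketch of how that number is usually extracted programmatically (the Orthodont data from nlme stands in for datag2 here):
library(nlme)
fit <- lme(distance ~ age, random = ~ 1 | Subject, data = Orthodont, method = "ML")
vc  <- VarCorr(fit)                                    # character matrix with "Variance" and "StdDev" columns
sd.random <- as.numeric(vc["(Intercept)", "StdDev"])   # SD of the random intercept, as a plain number
sd.resid  <- fit$sigma                                 # residual SD
c(sd.random, sd.resid)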
2012 Nov 16
1
Interpretation of davies.test() in segmented package
My data:
I have raw data points that form a logit-style curve, as if they were a time
series; that is, they form 3 distinct lines with 3 distinct slopes in a
backwards-z pattern. A certain class of my data looks essentially flat
to the eye, with marginal oscillation. What is important to me is the x
value at which the state change occurs, in other words, the break
point
Use of
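For reference, a hedged sketch of how davies.test() and segmented() are often combined to locate a single break point in a predictor x (data and names hypothetical):
library(segmented)
fit.lm  <- lm(y ~ x, data = dat)                                # plain linear fit to the series
davies.test(fit.lm, seg.Z = ~ x)                                # tests for a change in the slope of x
fit.seg <- segmented(fit.lm, seg.Z = ~ x, psi = median(dat$x))  # psi is only a rough starting guess
fit.seg$psi                                                     # estimated break point(s) on the x scale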
2011 Dec 05
1
about interpretation of anova results...
The quantreg package is used.
The fit1 results are:
Call:
rq(formula = op ~ inp1 + inp2 + inp3 + inp4 + inp5 + inp6 + inp7 +
inp8 + inp9, tau = 0.15, data = wbc)
Coefficients:
 (Intercept)         inp1         inp2         inp3         inp4         inp5
-0.191528450  0.005276347  0.021414032  0.016034803  0.007510343  0.005276347
        inp6         inp7         inp8         inp9
 0.058708544
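For context, anova.rq() is what produces such comparisons; a self-contained sketch with the engel data that ships with quantreg:
library(quantreg)
data(engel)
fit1 <- rq(foodexp ~ income, tau = 0.15, data = engel)
fit0 <- rq(foodexp ~ 1,      tau = 0.15, data = engel)
anova(fit1, fit0)  # tests whether the income coefficient is zero at this quantile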
2011 Jun 21
1
Help interpreting ANCOVA results
Please help me interpret the following results.
The full model (Schwa~Dialect*Prediction*Reduction) was reduced via both
update() and step().
The minimal adequate model is:
ancova <- lm(Schwa ~ Dialect + Prediction + Reduction + Dialect:Prediction)
Schwa is the response variable.
Dialect is a factor with two levels ("QF", "SF").
Prediction is a factor with two levels ("High", "Low")
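A small simulated sketch of the same model structure, which can make the coefficient layout easier to read (all data and effect sizes are hypothetical):
set.seed(1)
d <- data.frame(Dialect    = factor(rep(c("QF", "SF"), each = 40)),
                Prediction = factor(rep(c("High", "Low"), times = 40)),
                Reduction  = rnorm(80))
d$Schwa <- 0.5 + 0.3 * (d$Dialect == "SF") + 0.2 * d$Reduction + rnorm(80, sd = 0.1)
ancova <- lm(Schwa ~ Dialect + Prediction + Reduction + Dialect:Prediction, data = d)
summary(ancova)  # DialectSF and PredictionLow rows are offsets from the reference levels "QF" / "High"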
2011 Sep 09
3
split variable / create categories
Hi,
is there a function or an easy way to convert a variable with continuous values into a categorical variable (with x levels)?
Here is what I mean: I want to transform x:
x <- c(3.2, 1.5, 6.8, 6.9, 8.5, 9.6, 1.1, 0.6)
into a 'categorical' variable with four levels so that I get:
[1] 2 2 3 3 4 4 1 1
so each element is converted into its rank value / categorical value
(in
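A sketch that appears to reproduce the requested output uses quantile-based breaks with cut():
x <- c(3.2, 1.5, 6.8, 6.9, 8.5, 9.6, 1.1, 0.6)
cut(x, breaks = quantile(x, probs = seq(0, 1, 0.25)),
    labels = FALSE, include.lowest = TRUE)
# [1] 2 2 3 3 4 4 1 1
For equal-width rather than equal-count bins, cut(x, 4, labels = FALSE) would be used instead.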
2011 Oct 22
5
interpreting bootstrap corrected slope [rms package]
Dear List:
Below is the validation output of a fitted ordinal logistic model
using the bootstrap in the rms package. My interpretation is that
most of the corrected indices indicate little overfitting; however, the
slope seems to indicate that the model is too optimistic. Given that
most of the corrected indices seem reasonable, would it be appropriate
to use this model on future data if the
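For reference, the kind of call that produces such output in rms (dataset and formula are hypothetical; lrm fits binary or ordinal proportional-odds models):
library(rms)
dd <- datadist(mydata); options(datadist = "dd")            # hypothetical data frame mydata
fit <- lrm(y ~ x1 + x2, data = mydata, x = TRUE, y = TRUE)  # x, y needed for resampling
validate(fit, B = 200)  # bootstrap-corrected indices, including the calibration slope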
2006 Apr 25
5
Heteroskedasticity in Tobit models
Hello,
I've had no luck finding an R package that has the ability to estimate a
Tobit model allowing for heteroskedasticity (multiplicative, for example).
Am I missing something in survreg? Is there another package that I'm
unaware of? Is there an add-on package that will test for
heteroskedasticity?
Thanks for your help.
Cheers,
Alan Spearot
--
Alan Spearot
Department of Economics
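For the basic, homoskedastic Tobit the usual sketch goes through survival::survreg with left-censoring at zero (shown here with the tobin data shipped with survival); the heteroskedastic extension is exactly the part this does not cover:
library(survival)
fit <- survreg(Surv(durable, durable > 0, type = "left") ~ age + quant,
               data = tobin, dist = "gaussian")
summary(fit)  # a single Log(scale) term, i.e. the error variance is assumed constant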
2010 Apr 12
2
Interpreting factor*numeric interaction coefficients
Dear all,
I am a relative novice with R, so please forgive any terrible errors...
I am working with a GLM that describes a response variable as a function of
a categorical variable with three levels and a continuous variable. These
two predictor variables are believed to interact.
An example of such a model follows at the bottom of this message, but here
is a section of its summary table:
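A compact simulated sketch of the same structure, showing how the interaction rows map onto per-level slopes (all data are hypothetical):
set.seed(42)
grp <- factor(rep(c("A", "B", "C"), each = 30))  # categorical predictor, three levels
x   <- rnorm(90)                                 # continuous predictor
y   <- 1 + 2 * x + 1.5 * x * (grp == "B") - 1 * x * (grp == "C") + rnorm(90, sd = 0.2)
fit <- glm(y ~ grp * x)                          # gaussian GLM, equivalent to lm here
coef(fit)
# slope for level A = "x"; slope for B = "x" + "grpB:x"; slope for C = "x" + "grpC:x"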
2008 Dec 09
3
Significance of slopes
Hello R community,
I have a question regarding correlation and regression analysis. I have
two variables, x and y. Both have a standard deviation of 1; thus,
correlation and slope from the linear regression (which also must have
an intercept of zero) are equal.
I want to probe two particular questions:
1) Is the slope significantly different from zero? This should be easy
with the lm
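For question 1, a sketch of the standard test (x and y here are hypothetical, each scaled to mean 0 and SD 1):
set.seed(7)
x <- c(scale(rnorm(100)))
y <- c(scale(0.4 * x + rnorm(100)))
fit <- lm(y ~ x)
summary(fit)$coefficients["x", ]  # the t and p values in this row test H0: slope = 0
cor(x, y)                         # equals the fitted slope, since both SDs are 1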
2010 Sep 13
2
Homogeneity of regression slopes
Hello,
We've got a dataset with several variables, one of which we're using
to split the data into 3 smaller subsets (as the variable takes 1 of
3 possible values).
There are several more variables too, many of which we're using to fit
regression models using lm. So I have 3 models fitted (one for each
subset of course), each having slope estimates for the predictor
variables.
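One standard way to test homogeneity of the slopes across the 3 subsets is to fit the pooled data once with and once without a group-by-predictor interaction and compare the two fits (names hypothetical; group is the 3-level splitting variable):
fit.common   <- lm(y ~ x + group, data = dat)  # one common slope, subset-specific intercepts
fit.separate <- lm(y ~ x * group, data = dat)  # a separate slope for each subset
anova(fit.common, fit.separate)                # a significant F test means the slopes differ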