similar to: How to calculate the robust standard error of the dependent variable

Displaying 20 results from an estimated 9000 matches similar to: "How to calculate the robust standard error of the dependent variable"

2010 Jun 24
2
count data with a specific range
I would like to prepare the data for a barplot, but I only have the data frame now. x1=rnorm(10,mean=2) x2=rnorm(20,mean=-1) x3=rnorm(15,mean=3) data=data.frame(x1,x2,x3) Is there a way to put the data within a specific range? The expected result is as follows: range x1 x2 x3 -10-0 2 5 1 (# points in this
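A minimal sketch of one way to get those counts (my own, not from the thread): bin each vector with cut() and count with table(). The break points below are assumed for illustration, and a list is used because the three vectors have different lengths.

x1 <- rnorm(10, mean = 2)
x2 <- rnorm(20, mean = -1)
x3 <- rnorm(15, mean = 3)
breaks <- c(-10, 0, 2, 10)                     # assumed bin edges
counts <- sapply(list(x1 = x1, x2 = x2, x3 = x3),
                 function(v) table(cut(v, breaks)))
counts                                         # rows = ranges, cols = x1..x3
barplot(counts, beside = TRUE, legend.text = TRUE)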
2010 Jun 21
2
How to predict the mean and variance of the dependent variable after regression
Hi, folks, As seen in the following code: x1=rlnorm(10) x2=rlnorm(10,mean=2) y=rlnorm(10,mean=10) ### Fake dataset linmod=lm(log(y)~log(x1)+log(x2)) After the regression, I would like to know the mean of y. Since log(y) is normal and y is lognormal, I need to know the mean and variance of log(y) first. I tried mean(y) and mean(linmod), but neither one is what I want. Any tips? Thanks in
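One way to answer this (a sketch of mine, not the thread's accepted solution): if log(y) is Normal(mu, s^2), then E[y] = exp(mu + s^2/2) and Var[y] = (exp(s^2) - 1) * exp(2*mu + s^2), with mu and s^2 taken from the fitted model.

set.seed(1)
x1 <- rlnorm(10); x2 <- rlnorm(10, mean = 2); y <- rlnorm(10, mean = 10)
linmod <- lm(log(y) ~ log(x1) + log(x2))
mu <- fitted(linmod)                 # fitted mean of log(y), per observation
s2 <- summary(linmod)$sigma^2        # residual variance of log(y)
Ey   <- exp(mu + s2 / 2)             # implied mean of y on the original scale
VarY <- (exp(s2) - 1) * exp(2 * mu + s2)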
2010 Jun 22
2
Verify the linear regression model used in R (fundamental theory)
Hi, folks, As I understand it, the least-squares estimate (second-moment assumption) and the method of maximum likelihood (full distribution assumption) are both used for linear regression. I ran ?lm, but the help file does not tell me which model R employs. But in the book 'Introductory Statistics with R', it indicates that R estimates the parameters using the method of least squares. However it
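A quick self-contained check (my own sketch): the coefficients lm() returns match the ordinary least-squares solution of the normal equations (X'X)b = X'y, which for Gaussian errors coincides with the maximum-likelihood estimate.

set.seed(1)
x <- rnorm(20); y <- 1 + 2 * x + rnorm(20)
fit <- lm(y ~ x)
X <- model.matrix(fit)
b_ols <- solve(crossprod(X), crossprod(X, y))   # (X'X)^(-1) X'y
cbind(coef(fit), b_ols)                         # identical up to rounding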
2010 Jun 23
1
How to 'understand' R functions besides reading R codes
Apologies for not being clearer earlier. I would like to ask again. Thanks to Joris and Markleeds for the responses. Two examples: 1. Function 'var'. In R, it is the sum of squares divided by (n-1), not by n. (I know this from R class) 2. Function 'lm'. In R, it is the residual sum of squares divided by (n-2), not by n, the same as in the least-squares estimate. But the assumption following
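A small sketch (mine) showing the two denominators just mentioned:

set.seed(1)
z <- rnorm(10)
c(var(z), sum((z - mean(z))^2) / (length(z) - 1))            # var() uses n - 1

x <- rnorm(10); y <- 2 * x + rnorm(10)
fit <- lm(y ~ x)
c(summary(fit)$sigma^2, sum(resid(fit)^2) / (length(y) - 2)) # lm() uses n - 2 here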
2010 Jun 26
1
Add a column to a data frame with a specific condition
Hi, folks, Please first look at the codes: plan_a=c('apple','orange','apple','apple','pear','bread') plan_b=c('bread','bread','orange','bread','bread','yogurt') value=1:6 data=data.frame(plan_a,plan_b,value) library(plyr) library(reshape) mm=melt(data, id=c('plan_a','plan_b'))
2018 Aug 30
3
ROBUSTNESS: x || y and x && y to give warning/error if length(x) != 1 or length(y) != 1
On 08/30/2018 01:56 PM, Joris Meys wrote: > I have to agree with Emil here. && and || are short circuited like in C and > C++. That means that > > TRUE || c(TRUE, FALSE) > FALSE && c(TRUE, FALSE) > > cannot give an error because the second part is never evaluated. Throwing a > warning or error for > > c(TRUE, FALSE) || TRUE > > would mean
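A two-line illustration (mine) of the short-circuiting described above: the right-hand side is never evaluated once the left-hand side settles the answer.

TRUE  || stop("never reached")   # TRUE,  right-hand side not evaluated
FALSE && stop("never reached")   # FALSE, right-hand side not evaluated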
2010 Dec 21
2
Warning message when items of Hmisc are masked by loading a package.
I've noticed that I get a warning message every time a package masks some functions from Hmisc. The warning message says : Warning message: In identical(get(., i), get(., lib.pos)) : ignoring non-pairlist attributes This happens with eg: library(plyr) library(xtable) I think I've seen this passing by before, but I'm not sure any more. Just thought I'd mention it. Cheers Joris
2012 Aug 29
3
Help on calculating spearman rank correlation for a data frame with conditions
Dear all, Suppose my data frame is as follows: id price distance 1 2 4 1 3 5 ... 2 4 8 2 5 9 ... n 3 7 n 8 9 I would like to calculate the rank-order correlation between price and distance for each id. cor(price, distance, method = "spearman") calculates a single correlation over all rows. Then I tried to use apply(data,list='id',cor(price , distance , method =
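A base-R sketch of one way to do this (mine; the data frame below is illustrative, mirroring the id/price/distance columns in the post): split by id and apply cor() within each group.

dat <- data.frame(id       = c(1, 1, 1, 2, 2, 2),
                  price    = c(2, 3, 5, 4, 5, 7),
                  distance = c(4, 5, 6, 8, 9, 11))
sapply(split(dat[c("price", "distance")], dat$id),
       function(d) cor(d$price, d$distance, method = "spearman"))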
2010 Jun 18
3
Non-procedural access to columns of a matrix
Hi, I would like to have an index for a column in a matrix encoded in a cell of the same matrix. For example: x = matrix(c(11,12,13,1, 21,22,23,3, 31,32,33,2),byrow=T,ncol=4) In this case, column 4 is the index. I then access the column specified in the index by: > for (i in 1:3) print(x[i,x[i,4]]) [1] 11 [1] 23 [1] 32 > > for (i in 1:3) {x[i,x[i,4]] <- x[i,x[i,4]] + 5} > x
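A vectorised alternative to the loop (my sketch): index with a two-column (row, column) matrix built from the index column, which avoids the explicit for loop.

x <- matrix(c(11, 12, 13, 1,
              21, 22, 23, 3,
              31, 32, 33, 2), byrow = TRUE, ncol = 4)
idx <- cbind(seq_len(nrow(x)), x[, 4])   # (row, column) pairs
x[idx]                                   # 11 23 32
x[idx] <- x[idx] + 5                     # updates the same elements in place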
2017 Mar 28
2
`[` not recognized as a primitive in certain cases.
typeof() is your friend here: > typeof(`[`) [1] "special" > typeof(mc[[1]]) [1] "symbol" > typeof(mc2[[1]]) [1] "special" so mc[[1]] is a symbol, and thus not a primitive. - Lukas > On 28 Mar 2017, at 14:46, Michael Lawrence <lawrence.michael at gene.com> wrote: > > There is a difference between the symbol and the function (primitive >
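A small reconstruction of the distinction being described (mine; the thread's mc and mc2 objects are not shown here, so a fresh call object is used instead):

e <- quote(x[1])            # a call whose first element is the name `[`
typeof(e[[1]])              # "symbol"  - the name, not the function
typeof(eval(e[[1]]))        # "special" - the primitive the name resolves to
is.primitive(eval(e[[1]]))  # TRUE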
2018 Jan 31
3
Best practices in developing package: From a single file
On 31/01/2018 6:33 AM, Joris Meys wrote: > 3. given your criticism, I'd like your opinion on where I can improve > the documentation of https://github.com/CenterForStatistics-UGent/pim. > I'm currently busy updating the help files for a next release on CRAN, > so your input is more than welcome. After this invitation I sent some private comments to Joris. I would say his
2016 Sep 06
2
The use of match.fun
Dear gurus, I was utterly surprised to learn that one of my examples illustrating the need for match.fun() doesn't give me the expected result. center <- function(x,FUN) FUN(x) center(1:10, mean) mean <- 4 center(1:10, mean) This used to give me the error message "could not find function FUN". Now it just works, even though I didn't expect it to. I believe this is at least
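For reference, the usual pattern this example is meant to motivate (a sketch of mine): resolving the argument with match.fun() so that a non-function binding called mean cannot shadow the function.

center <- function(x, FUN) {
  FUN <- match.fun(FUN)   # look up a *function* with that name
  FUN(x)
}
mean <- 4                 # a numeric masking the function, as in the post
center(1:10, mean)        # 5.5: match.fun() still finds base::mean
rm(mean)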
2010 Jun 25
1
Different standard errors from R and other software
Hi all, Sorry to bother you. I'm estimating a discrete choice model in R using the maxBFGS command. Since I wrote the log-likelihood myself, in order to double check, I run the same model in Limdep. It turns out that the coefficient estimates are quite close; however, the standard errors are very different. I also computed the hessian and outer product of the gradients in R using the
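A self-contained sketch (mine, using optim() with hessian = TRUE as a stand-in for maxBFGS, and a simple logit likelihood as a placeholder for the poster's model) of getting standard errors from the Hessian of a hand-written log-likelihood:

set.seed(1)
x <- rnorm(200); y <- rbinom(200, 1, plogis(-0.5 + x))
negll <- function(b) -sum(dbinom(y, 1, plogis(b[1] + b[2] * x), log = TRUE))
fit <- optim(c(0, 0), negll, method = "BFGS", hessian = TRUE)
se  <- sqrt(diag(solve(fit$hessian)))   # sqrt(diag) of inverse Hessian of the neg. log-likelihood
cbind(estimate = fit$par, se = se)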
2010 Jun 08
2
Please help me
Dear Mr. or Ms., I used the R software to run the zero-inflated negative binomial model (zeroinfl()). First, I introduced one dummy variable into the model as an independent variable, and I got the parameter estimates. But the results were not satisfactory to me, so I introduced three dummy variables into the model, but I could not get the results. And the error message is
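For what it's worth, a hedged sketch (mine) of the pscl::zeroinfl() call for a zero-inflated negative binomial with several dummy regressors; the data and variable names are invented for illustration and assume the pscl package is installed.

library(pscl)
set.seed(1)
d <- data.frame(y  = rnbinom(200, size = 1, mu = 2) * rbinom(200, 1, 0.7),
                d1 = rbinom(200, 1, 0.5),
                d2 = rbinom(200, 1, 0.5),
                d3 = rbinom(200, 1, 0.5))
fit <- zeroinfl(y ~ d1 + d2 + d3 | 1, data = d, dist = "negbin")
summary(fit)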
2010 May 25
2
Calculation time of isoMDS and the optimal number of dimensions
Dear all, I'm running a set of nonparametric MDS analyses, using a wrapper for isoMDS, on an 800x800 distance matrix. I noticed that setting the parameter k to larger numbers seriously increases the calculation time. Actually, with k=10 it already takes longer than k=2 and k=5 together. It has now been calculating for 6 hours, and counting... There is quite a difference between the
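A small timing sketch (mine): the 800x800 matrix is replaced by a 100-point toy example so it runs in seconds, but the same pattern can be used to compare dimensions on the real data.

library(MASS)
set.seed(1)
d <- dist(matrix(rnorm(100 * 20), ncol = 20))   # toy 100 x 100 distance matrix
for (k in c(2, 5, 10))
  print(system.time(isoMDS(d, k = k, trace = FALSE)))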
2015 Apr 01
4
evaluation in transform versus within
On 01/04/2015 1:35 PM, Gabriel Becker wrote: > Joris, > > > The second argument to evalq is envir, so that line says, roughly, "call > environment() to generate me a new environment within the environment > defined by data". I think that's not quite right. environment() returns the current environment, it doesn't create a new one. It is evalq() that created
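A tiny illustration of that point (mine): evalq() builds a new environment from the data argument, and environment() merely returns it.

d <- list(a = 1, b = 2)
e <- evalq(environment(), d)   # environment() evaluated "inside" d
ls(e)                          # "a" "b": the environment evalq() built from d
identical(e, globalenv())      # FALSE: a new environment, not the current one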
2010 May 25
2
summary of arima model in R
Hi, I want to get a summary or anova for an "arima" model in R, like "summary" and "anova" for "lm". Since I include various intervention factors via the xreg argument of arima(), I want to assess the significance of these factors. I can do it using interrupted time-series analysis with linear regression, but I first want to see whether an arima model works for the data.
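A hedged sketch (mine) of the usual workaround: approximate z-tests for arima() coefficients from the estimated covariance matrix in fit$var.coef; the simulated series and single regressor below are illustrative only.

set.seed(1)
y   <- arima.sim(list(ar = 0.5), n = 200)
x   <- rnorm(200)                            # a stand-in intervention regressor
fit <- arima(y, order = c(1, 0, 0), xreg = x)
z   <- fit$coef / sqrt(diag(fit$var.coef))   # coefficient / standard error
p   <- 2 * pnorm(-abs(z))                    # two-sided normal p-values
cbind(estimate = fit$coef, z = z, p.value = p)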
2017 May 31
4
stats::line() does not produce correct Tukey line when n mod 6 is 2 or 3
Seriously, if a method gives a wrong result, it's wrong. line() does NOT implement Tukey's algorithm, not even after the patch. We're not discussing Excel here, are we? Tukey's method is rather clear, and it does NOT use the default quantile definition from the quantile function. Actually, it doesn't even use quantiles to define the groups. It just says that the groups
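A deliberately simplified sketch (mine) of the median-of-thirds idea the poster is referring to; it ignores the tie-handling and exact group-size rules the thread is actually about, so it is an illustration rather than a reference implementation.

tukey_line_sketch <- function(x, y) {
  o <- order(x); x <- x[o]; y <- y[o]
  n <- length(x)
  third <- floor(n / 3)
  left  <- 1:third                      # lowest third of the sorted x values
  right <- (n - third + 1):n            # highest third of the sorted x values
  slope <- (median(y[right]) - median(y[left])) /
           (median(x[right]) - median(x[left]))
  c(intercept = median(y - slope * x), slope = slope)
}
tukey_line_sketch(cars$speed, cars$dist)   # example with a built-in data set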
2011 Feb 04
2
terribly annoying bug with POSIXlt : one o'clock is midnight?
Apparently, as.POSIXlt takes one o'clock as the start of the day : > as.POSIXlt(0,origin="1970-01-01") [1] "1970-01-01 01:00:00 CET" > as.POSIXlt(0,origin="1970-01-01 00:00:00") [1] "1970-01-01 01:00:00 CET" > as.POSIXlt(0,origin="1970-01-01 23:59:59") [1] "1970-01-02 00:59:59 CET" Cheers -- Joris Meys Statistical
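A small check (my sketch, not the poster's conclusion): the one-hour offset matches the CET timezone used for printing; asking for UTC explicitly shows midnight.

as.POSIXct(0, origin = "1970-01-01", tz = "UTC")   # "1970-01-01 00:00:00 UTC"
as.POSIXlt(0, origin = "1970-01-01", tz = "UTC")   # same instant, shown in UTC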
2010 May 07
2
help on hmisc
Does anyone know where I can find information on compiling Hmisc on Windows, especially 64-bit Windows? Thanks.