similar to: DescTools::Quantile

Displaying 20 results from an estimated 100 matches similar to: "DescTools::Quantile"

2024 Jan 29
0
DescTools::Quantile
It looks like a homework assignment. It also looks like you didn't read the documentation carefully enough. The 'length.out' argument in seq is solely for specifying the length of a sequence. The 'quantile' function computes the empirical quantile of the raw data in the vector 'x' at the cumulative probability(ies) given in the 'weights' argument, with interpolation I'm
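A minimal sketch of the two arguments in question; the vector x is invented for illustration:

x <- c(2, 5, 7, 1, 9, 4)
seq(0, 1, length.out = 5)                  # five evenly spaced values: 0, 0.25, 0.5, 0.75, 1
quantile(x, probs = c(0.25, 0.5, 0.75))    # empirical quantiles of x, with interpolation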
2023 Dec 11
1
Base R wilcox.test gives incorrect answers, has been fixed in DescTools, solution can likely be ported to Base R
While using the Hodges-Lehmann mean in DescTools (DescTools::HodgesLehmann), I found that it generated incorrect answers (see https://github.com/AndriSignorell/DescTools/issues/97). The error is driven by the existence of tied values forcing wilcox.test in Base R to switch to an approximate algorithm that returns incorrect results - see
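A rough sketch of the behaviour being compared, using an invented vector with ties; DescTools::HodgesLehmann and the base-R pseudomedian from wilcox.test are the two quantities discussed above:

library(DescTools)
x <- c(1, 2, 2, 2, 3, 5, 8, 8)              # tied values force wilcox.test onto the approximate path
HodgesLehmann(x)                            # one-sample Hodges-Lehmann estimator
wilcox.test(x, conf.int = TRUE)$estimate    # (pseudo)median reported by base R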
2009 Feb 08
0
Initial values of the parameters of a garch-Model
Dear all, I'm using R 2.8.1 under Windows Vista on a dual-core 2.4 GHz machine with 4 GB of RAM. I'm trying to reproduce a result from "Analysis of Financial Time Series" by Ruey Tsay. In R I'm using the fGarch library. After fitting an AR(3)-GARCH(1,1) model > model <- garchFit(~arma(3,0)+garch(1,1), analyse) I'm saving the results via > result <- model
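A self-contained version of the quoted call, assuming the fGarch package; the series 'analyse' is simulated here because the original data are not shown:

library(fGarch)
set.seed(1)
spec    <- garchSpec(model = list(ar = c(0.5, -0.2, 0.1), alpha = 0.1, beta = 0.8))
analyse <- garchSim(spec, n = 1000)            # stand-in for the original series
model   <- garchFit(~ arma(3, 0) + garch(1, 1), data = analyse, trace = FALSE)
result  <- coef(model)                         # fitted AR and GARCH parameters
round(result, 4)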
2014 Apr 08
2
Test de Moses
Does anyone know whether Moses' test of extreme reactions is implemented in any R package? Thanks in advance.
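DescTools appears to provide a MosesTest function for this; treat the call below as an assumption to verify against the package documentation, and note the two samples are invented:

library(DescTools)
x <- c(0.80, 0.83, 1.89, 1.04, 1.45, 1.38, 1.91, 1.64, 0.73, 1.46)
y <- c(1.15, 0.88, 0.90, 0.74, 1.21)
MosesTest(x, y)    # Moses test of extreme reactions (see ?MosesTest for details)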
2023 Apr 09
1
simultaneous confidence intervals for multinomial proportions: sample size
Hello! I want to calculate simultaneous confidence intervals for a nominal variable with three categories: "yes", "no", "partially" and I expect that far more than 5 samples fall into each category. I have read that Glaz & Sison's method is only appropriate for variables with 7 or more categories. Therefore, the Goodman method seems like a good idea. I have
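A short sketch with invented counts for the three categories, using DescTools::MultinomCI, which implements both methods mentioned above:

library(DescTools)
counts <- c(yes = 40, no = 25, partially = 15)
MultinomCI(counts, conf.level = 0.95, method = "goodman")    # Goodman simultaneous CIs
MultinomCI(counts, conf.level = 0.95, method = "sisonglaz")  # Sison-Glaz, for comparison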
2023 Mar 22
1
How to test the difference between paired correlations?
Hello, I have three numerical variables and I would like to test if their correlation is significantly different. I have seen that there is a package that "Test the difference between two (paired or unpaired) correlations". [https://www.personality-project.org/r/html/paired.r.html] However, there is the need to convert the correlations to "z scores using the Fisher r-z
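A hedged sketch using the psych function linked above, which does the Fisher r-to-z step internally; the data are invented:

library(psych)
set.seed(1)
d <- data.frame(x = rnorm(100))
d$y <- 0.6 * d$x + rnorm(100)
d$z <- 0.3 * d$x + rnorm(100)
r <- cor(d)
# Test whether cor(x, y) differs from cor(x, z), accounting for cor(y, z):
paired.r(xy = r["x", "y"], xz = r["x", "z"], yz = r["y", "z"], n = nrow(d))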
2016 Apr 08
3
Generating Hotelling's T squared statistic with hclust
I am doing a cluster analysis with hclust. I want to get hclust to output Hotelling's T-squared statistic for each cluster so I can evaluate whether data points should be in a cluster or not. My research to answer this question has been unsuccessful. Does anyone know how to get hclust to output Hotelling's T-squared statistic for each cluster? Mike
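hclust itself does not report this statistic; a rough sketch of computing Hotelling's T-squared by hand for each cluster against the overall mean (assuming every cluster has more rows than columns so its covariance matrix is invertible), shown here on the built-in iris measurements:

X  <- as.matrix(iris[, 1:4])
hc <- hclust(dist(X), method = "ward.D2")
cl <- cutree(hc, k = 3)
mu <- colMeans(X)                                   # overall mean vector
T2 <- sapply(split(as.data.frame(X), cl), function(g) {
  g <- as.matrix(g)
  d <- colMeans(g) - mu
  nrow(g) * drop(t(d) %*% solve(cov(g)) %*% d)      # n * (xbar - mu)' S^-1 (xbar - mu)
})
T2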
2023 Mar 23
1
How to test the difference between paired correlations?
Thank you, but this now sounds more difficult: what would be the point in having these ready-made functions if I have to do it manually? Anyway, how would I implement the last part? On Thu, Mar 23, 2023 at 1:23 AM Ebert, Timothy Aaron <tebert at ufl.edu> wrote: > > If you are open to other options: > The null hypothesis is that there is no difference. > If I have two equations
2010 Jun 18
1
12th Root of a Square (Transition) Matrix
Dear R-tisans, I am trying to calculate the 12th root of a transition (square) matrix, but can't seem to obtain an accurate result. I realize that this post is laced with intimations of quantitative finance, but the question is both R-related and broadly mathematical. That said, I'm happy to post this to R-SIG-Finance if I've erred in posting this to the general list. I've
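One common approach is to take a matrix logarithm, scale it by 1/12, and exponentiate (expm package); the 2-state matrix below is invented, and note that the result is not guaranteed to be a valid transition matrix (rows can pick up small negative entries), which may be the accuracy issue described above:

library(expm)
P   <- matrix(c(0.95, 0.05,
                0.10, 0.90), nrow = 2, byrow = TRUE)
P12 <- expm(logm(P) / 12)      # candidate "monthly" matrix
P12 %^% 12                     # should reproduce P up to numerical error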
2007 Mar 03
2
format of summary.lm for 2-way ANOVA
Hi, I am performing a two-way ANOVA (2 factors with 4 and 5 levels, respectively). If I'm interpreting the output of summary correctly, then the interaction between both factors is significant: ,---- | ## Two-way ANOVA with possible interaction: | > model1 <- aov(log(y) ~ xForce*xVel, data=mydataset) | | > summary(model1) | Df Sum Sq Mean Sq F value Pr(>F) |
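A self-contained sketch with a built-in data set (ToothGrowth) standing in for the original data, showing where the interaction term appears in both summary forms:

tg <- ToothGrowth
tg$dose <- factor(tg$dose)
model1 <- aov(log(len) ~ supp * dose, data = tg)
summary(model1)       # ANOVA table; the supp:dose line is the interaction
summary.lm(model1)    # the same fit in regression (dummy-coded) form, as in the subject line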
2024 Feb 12
0
Errors in wilcox family functions
Hi Everyone, Following the previous discussion on optimizing *wilcox functions, Andreas Loeffler brought to my attention a few other bugs in `wilcox` family functions. It seems like these issues have been discussed online in the past few months, but I haven't seen discussion on R-devel... unless I missed an email, it seems like discussion never made it to the mailing list. I haven't seen any bug
2011 Sep 02
1
Parameters in Gamma Frailty model
Dear all, I'm new to frailty models. I have a question on the output from the 'survival' package. Below is the output. What do gamma1, 2, 3 refer to? How do I calculate the joint hazard function or marginal hazard function using the info below? Many thanks! Call: coxph(formula = surv ~ as.factor(tibia) + frailty(as.factor(bdcat)), data = try) n=877 (1 observation deleted due to missingness)
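A hedged sketch with the survival package's kidney data, just to show where per-group frailty terms live in the fitted object (possibly what the gamma1, 2, 3 entries correspond to):

library(survival)
fit <- coxph(Surv(time, status) ~ age + frailty(id), data = kidney)   # gamma frailty is the default
fit
fit$frail    # estimated per-group frailty terms, one per level of the grouping factor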
2007 Sep 18
0
[LLVMdev] 2.1 Pre-Release Available (testers needed)
On Fri, Sep 14, 2007 at 11:42:18PM -0700, Tanya Lattner wrote: > The 2.1 pre-release (version 1) is available for testing: > http://llvm.org/prereleases/2.1/version1/ > > [...] > > 2) Download llvm-2.1, llvm-test-2.1, and the llvm-gcc4.0 source. > Compile everything. Run "make check" and the full llvm-test suite > (make TEST=nightly report). > > Send
2010 Dec 15
1
lmList and lapply(... lm) different std. errors
I am trying to perform multiple linear regressions, one for each level of 'VARIABLE2'. I figured out that there are different ways, using the following code (the data are given at the end of this message): reg <- lapply(split(TRY, VARIABLE2), function(X){lm(X2 ~ X3, data=X)}) lapply(reg, summary) This produces the following: $`1` Call: lm(formula = X2 ~ X3, data = X) Residuals: Min
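A hedged sketch of the comparison, with an invented data frame standing in for TRY; the key point is that summary() on an lmList fit pools the residual variance across groups by default, while separate lm() fits each use their own:

library(nlme)
set.seed(1)
TRY <- data.frame(VARIABLE2 = rep(1:3, each = 20), X3 = rnorm(60))
TRY$X2 <- 1 + 0.5 * TRY$X3 + rnorm(60, sd = rep(c(0.5, 1, 2), each = 20))

reg <- lapply(split(TRY, TRY$VARIABLE2), function(X) lm(X2 ~ X3, data = X))
lapply(reg, summary)                       # per-group residual standard errors

fits <- lmList(X2 ~ X3 | VARIABLE2, data = TRY)
summary(fits, pool = TRUE)                 # pooled residual variance -> different std. errors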
2007 Aug 31
3
Choosing the optimum lag order of ARIMA model
Dear all R users, I am really struggling to determine the most appropriate lag order of an ARIMA model. My understanding is that, since for an MA[q] model the autocorrelation coefficient vanishes after q lags, this indicates the MA order of an ARIMA model, and since for an AR[p] model the partial autocorrelation vanishes after p lags, this helps to determine the AR lag. And the most appropriate model chosen by this argument gives
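A minimal sketch of that identification step on simulated data, plus an information-criterion check as an alternative to reading the plots:

set.seed(1)
x <- arima.sim(model = list(ar = 0.6), n = 500)   # AR(1) series
acf(x)     # for an MA(q) process this cuts off after lag q
pacf(x)    # for an AR(p) process this cuts off after lag p
AIC(arima(x, order = c(1, 0, 0)))
AIC(arima(x, order = c(2, 0, 0)))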
2008 Sep 16
1
Using quasibinomial family in lmer
Dear R-Users, I can't understand the behaviour of quasibinomial in lmer. It doesn't appear to be calculating a scaling parameter, and looks to be reducing the standard errors of fixed effects estimates when overdispersion is present (and when it is not present also)! A simple demo of what I'm seeing is given below. Comments appreciated? Thanks, Russell Millar Dept of Stat U.
2010 Oct 22
2
Random Forest AUC
Guys, I used Random Forest with a couple of data sets I had to predict for binary response. In all the cases, the AUC of the training set is coming to be 1. Is this always the case with random forests? Can someone please clarify this? I have given a simple example, first using logistic regression and then using random forests to explain the problem. AUC of the random forest is coming out to be
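A hedged sketch of the distinction that usually explains a training AUC of 1: in-sample predictions from a random forest are nearly perfect, so the out-of-bag (OOB) votes give a more honest number; the response below is invented and the pROC package is assumed for the AUC:

library(randomForest)
library(pROC)
set.seed(1)
d <- data.frame(x1 = rnorm(300), x2 = rnorm(300))
d$y <- factor(rbinom(300, 1, plogis(d$x1 - d$x2)))
rf <- randomForest(y ~ x1 + x2, data = d)
auc(d$y, predict(rf, d, type = "prob")[, 2])   # in-sample: close to 1
auc(d$y, rf$votes[, 2])                        # OOB votes: more realistic estimate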
2009 Sep 24
2
aggregate() - error message
Dear list, would anybody be able to tell me why the statement Tripstatistics=aggregate(TripsData[2:3],by=list(Trip=Tripmatch),FUN="mean") seems to work well with TripsData 1 but not with TripsData 2? With TripsData 2 it yields Error in FUN(X[[1L]], ...) : arguments must have same length I can't see a difference in the two data sets. Could someone shed light on the error
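One common cause of that error, sketched with invented data: the grouping vector passed to by= must be exactly as long as the number of rows being aggregated, which is worth checking first:

TripsData <- data.frame(id = 1:6, dist = c(2, 4, 6, 8, 10, 12), dur = 1:6)
Tripmatch <- c("A", "A", "B", "B", "C", "C")
length(Tripmatch) == nrow(TripsData)   # should be TRUE
aggregate(TripsData[2:3], by = list(Trip = Tripmatch), FUN = mean)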
2007 Jun 05
1
logit model interpretation
Hello everyone, I apologize for my lack of experience in statistical methods. I am a beginner R user and I am running a logit model using the "zelig" and "pcse" packages. I will get to the point: I'm having problems interpreting the results of my models. It is really simple (I guess, for the most advanced scholars), however I really don't understand how to interpret
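A small illustration with base glm() rather than the zelig/pcse workflow, just to show the usual interpretation step for logit coefficients; the data are invented:

set.seed(1)
d <- data.frame(x = rnorm(200))
d$y <- rbinom(200, 1, plogis(-0.5 + 0.8 * d$x))
m <- glm(y ~ x, data = d, family = binomial(link = "logit"))
coef(m)          # coefficients are on the log-odds scale
exp(coef(m))     # exponentiate to read them as odds ratios
exp(confint(m))  # approximate confidence intervals for the odds ratios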
2010 Jun 23
1
Probabilities from survfit.coxph:
Hello: In the example below (or for censored data) using survfit.coxph, can anyone point me to a link or a PDF explaining how the probabilities appearing in bold under "summary(pred$surv)" are calculated? Do these represent a cumulative probability distribution in time (not including censored time)? Thanks very much, parmee *fit <- coxph(Surv(futime, fustat) ~ age, data = ovarian)*
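Continuing the quoted example: the probabilities come from the estimated baseline hazard combined with each subject's risk score, S(t | x) = S0(t)^exp(lp); a short sketch of where to look:

library(survival)
fit  <- coxph(Surv(futime, fustat) ~ age, data = ovarian)
pred <- survfit(fit, newdata = data.frame(age = 60))   # curve for a hypothetical 60-year-old
summary(pred)                    # survival probabilities at the observed event times
basehaz(fit, centered = FALSE)   # cumulative baseline hazard underlying those probabilities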