search for: 1.000

Displaying 20 results from an estimated 365 matches for "1.000".

2012 Mar 26
0
Different result with "kruskal.test" and post-hoc analysis with Nemenyi-Damico-Wolfe-Dunn test implemented in the help page for oneway_test in the coin package that uses multcomp
Dear Researchers, sorry for this email, but I am not a statistician, which is why I am struggling to understand this problem. Thanks in advance for any help and suggestions. Gianni. I have 21 classes (00, 01, 02, 04, ...., 020) of different lengths. I ran a Kruskal-Wallis test in R with the following code: kruskal.test(m.class.l, m.class.length.lf) Kruskal-Wallis rank sum test data: m.class.l and
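A minimal sketch of the setup being described, with made-up data standing in for the poster's m.class.l and m.class.length.lf; the post-hoc shown is pairwise Wilcoxon with Holm correction from base R, a simpler substitute rather than the Nemenyi-Damico-Wolfe-Dunn test itself:

    set.seed(1)
    m.class.l <- rnorm(60)                                             # fake responses
    m.class.length.lf <- factor(rep(sprintf("%02d", 0:5), each = 10))  # fake classes
    kruskal.test(m.class.l, m.class.length.lf)
    ## simpler post-hoc substitute: pairwise Wilcoxon tests, Holm-corrected
    pairwise.wilcox.test(m.class.l, m.class.length.lf, p.adjust.method = "holm")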
2003 Apr 24
1
write.table problem
Dear R helpers, I have been using the loadings function from the multiv library and I get the typical output (see below). When I try to export these results to a file using write.table() I get the following error message: "Error in as.data.frame.default(x[[i]], optional = TRUE) : can't coerce loadings into a data.frame". Any idea why write.table is doing that and any
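A hedged sketch of the usual workaround: a "loadings" object is not a data frame, so strip its class (or coerce to a matrix) before writing. The princomp example here stands in for multiv's object, assuming it behaves the same way.

    ld <- loadings(princomp(USArrests))            # any "loadings" object
    write.table(unclass(ld), file = "loadings.txt", quote = FALSE)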
2004 Nov 06
3
how to read this matrix into R
The following is the lower.tri matrix, in a file named luxry.car, and I want to read it into R as a lower.tri matrix. How can I do that? I have tried help.search("read"), but nothing matched what I want.
1.000
0.591 1.000
0.356 0.350 1.000
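A minimal sketch, assuming the file holds only the six numbers shown: read them with scan(), then fill a symmetric matrix; the lower triangle read row by row is exactly the upper triangle in R's column-major order.

    vals <- scan("luxry.car")                    # 1.000 0.591 1.000 0.356 0.350 1.000
    n <- (sqrt(8 * length(vals) + 1) - 1) / 2    # solve n*(n+1)/2 = length(vals)
    m <- matrix(0, n, n)
    m[upper.tri(m, diag = TRUE)] <- vals         # filled column by column
    m <- m + t(m) - diag(diag(m))                # mirror into the lower triangle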
2009 Sep 29
1
Summary
My data is called xc and has more than 15 variables. When I used summary(xc) it gave me a detailed description of each variable:
summary(xc)
     Y1               x1               x2               x3        ..
 Min.   :0.0000   Min.   : 1.000   Min.   : 1.000   Min.   : 1.000
 1st Qu.:0.0000   1st Qu.: 1.000   1st Qu.: 1.000   1st Qu.: 2.000
 Median :1.0000   Median : 1.000
2012 May 03
0
problem with running probit
Hi, I am having problems running a probit regression and don't understand where the problem comes from, since with the original data set I was able to get correct estimates. To that data set I have added extra variables, and upon running the regression I now get multiple estimates of the same predictor: Deviance Residuals: Min 1Q Median 3Q Max -1.17741
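A minimal probit sketch under assumed names (y, x1, x2, df are hypothetical); repeated estimates of the same predictor after merging in new variables often trace back to duplicated or linearly dependent columns, which the last two lines help detect.

    fit <- glm(y ~ x1 + x2, family = binomial(link = "probit"), data = df)
    summary(fit)
    anyDuplicated(names(df))        # duplicated column names after the merge?
    qr(model.matrix(fit))$rank      # below ncol(model.matrix(fit)) means dependence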
1998 Apr 27
1
R-beta: vectors in dataframe?
I have a file:
x     y     z
0.025 0.025 1.65775
0.025 0.050 1.62602
0.025 0.075 1.63683
0.025 0.100 1.91847
0.025 0.125 2.00913
0.025 0.150 1.82222
0.025 0.175 1.70901
0.025 0.200 1.39759
0.025 0.225 1.39089
0.025 0.250 1.04762
If I read the file like this: data <- read.table("file.dat") How do I access the vectors x, y, z that are inside the data frame data? I studied Venables and
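A minimal sketch: because the file's first line holds the column names, pass header = TRUE, after which the columns can be pulled out with $, [[, or with() (all base R).

    data <- read.table("file.dat", header = TRUE)  # header = TRUE keeps the names x, y, z
    data$x         # the x column as a numeric vector
    data[["y"]]    # the same idea, indexing by name
    with(data, z)  # evaluate an expression inside the data frame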
2011 Apr 21
1
Rearranging PCA results from R
Hi! I'm having trouble selecting 10 out of 41 attributes of the KDD data set. In order to identify the components with the highest variance I'm using princomp. The result I get for summary(pca1) is: Comp.1 Comp.2 Comp.3 Comp.4 Comp.5 Comp.6 Comp.7 Comp.8 Comp.9
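A hedged sketch of one common approach, with kdd as a hypothetical data frame of the 41 attributes: fit princomp, check the variance explained, then rank attributes by their absolute loadings on the leading component.

    pca1 <- princomp(kdd, cor = TRUE)    # cor = TRUE rescales unlike-unit attributes
    summary(pca1)                        # proportion of variance per component
    head(sort(abs(pca1$loadings[, 1]), decreasing = TRUE), 10)  # top 10 on Comp.1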
2012 Jun 04
0
Negative variance with lavaan in a multigroup analysis.
Hi list members, I saw a couple of lavaan posts here, so I think I'm sending this to the correct list. I am trying to run a multigroup analysis with lavaan in order to compare behavioural correlations across two populations. I'm following the method suggested in the paper by Dingemanse et al. (2010) in Behavioural Ecology. In one of the groups, lavaan returns a negative variance for one path and I'm
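A minimal multigroup sketch under assumed names (the model, the data frame dat, and the grouping column population are all hypothetical); a negative variance estimate, a Heywood case, is normally handled by constraining or respecifying the model rather than by trusting the estimate.

    library(lavaan)
    model <- ' f1 =~ x1 + x2 + x3
               f2 =~ x4 + x5 + x6 '
    fit <- cfa(model, data = dat, group = "population")
    summary(fit, standardized = TRUE)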
2020 Sep 21
2
Help with the Error Message in R "Error in 1:nchid : result would be too long a vector"
Hello everyone, I am using mlogit to analyse my choice experiment data. I have 3 alternatives for each individual, and for each individual I have 9 questions. I have responses from 516 individuals, so it is a panel of 9*516 observations. I have arranged the data in long format (it contains 100 columns indicating different variables and identifiers). In mlogit I tried the following
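A hedged sketch of the usual mlogit setup for such a panel; every name here (mydata, chosen, alternative, question_id, individual_id, price) is an assumption. The "1:nchid" error typically means the choice-situation index does not line up with rows = situations x alternatives (here 516 * 9 * 3 rows).

    library(mlogit)
    d <- mlogit.data(mydata, choice = "chosen", shape = "long",
                     alt.var = "alternative", chid.var = "question_id",
                     id.var = "individual_id")
    m <- mlogit(chosen ~ price, data = d)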
1999 May 06
0
matrix weirdness
I am using R version 63.0 on Unix. I am doing an image plot of the following data file:
================================
lag1  lag2  cif2d
0.000 0.000 NaN
0.000 1.000 0.500000
0.000 2.000 0.489831
0.000 3.000 0.492986
0.000 4.000 0.493409
0.000 5.000 0.492727
0.000 6.000 0.494485
1.000 0.000 0.500000
1.000 1.000 NaN
1.000 2.000 0.495098
1.000 3.000 0.489831
1.000 4.000 0.492986
1.000 5.000
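A minimal sketch of one way to get from this long (lag1, lag2, cif2d) layout to the grid matrix that image() expects; the file name is hypothetical.

    d <- read.table("cif.dat", header = TRUE)
    z <- tapply(d$cif2d, list(d$lag1, d$lag2), mean)  # lag1-by-lag2 matrix, NaN kept
    image(x = sort(unique(d$lag1)), y = sort(unique(d$lag2)), z = z)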
2011 Jul 25
1
lme convergence error
Hello, I am working from a 64-bit Linux machine on a server with R-2.12 (I can't update to 2.13). I am iterating through many linear mixed models for longitudinal data and I occasionally receive the following convergence error: > BI.lme <- lme(cd4 ~ time + genBI + genBI:time + C1 + C2 + C11 + C12, random = ~ 1 + time | IID, data = d) Error in lme.formula(cd4 ~ time + genBI + genBI:time +
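A hedged sketch of two common workarounds for intermittent lme convergence failures: raise the iteration limits or switch optimizers via lmeControl, and wrap the call in try() so a loop over many models keeps running.

    library(nlme)
    ctrl <- lmeControl(maxIter = 100, msMaxIter = 100, opt = "optim")
    BI.lme <- try(lme(cd4 ~ time + genBI + genBI:time + C1 + C2 + C11 + C12,
                      random = ~ 1 + time | IID, data = d, control = ctrl))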
2005 May 31
1
apply the function "factor" to multiple columns
I have a case where I would like to change multiple columns containing numbers to factors. I can change each column one at a time, as in: TEMP.FACT$EXPOS01 <- factor(TEMP.FACT$EXPOS01, levels=c(1,2,3), labels=c("None","Low Impact","MedHigh Imp")) TEMP.FACT$EXPOS02 <- factor(TEMP.FACT$EXPOS02, levels=c(1,2,3), labels=c("None","Low
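A minimal sketch that applies the same factor() conversion to all such columns at once; the grep pattern assumes the columns share the EXPOS prefix.

    cols <- grep("^EXPOS", names(TEMP.FACT), value = TRUE)
    TEMP.FACT[cols] <- lapply(TEMP.FACT[cols], factor,
                              levels = c(1, 2, 3),
                              labels = c("None", "Low Impact", "MedHigh Imp"))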
2009 Mar 30
1
Possible bug in summary.survfit - 'scale' argument ignored?
Hi all, using R version 2.8.1 Patched (2009-03-07 r48068) on OS X (10.5.6) with survival version 2.35-3 (2009-02-10), I get the following using the first example in ?summary.survfit: > summary(survfit(Surv(futime, fustat) ~ 1, data = ovarian)) Call: survfit(formula = Surv(futime, fustat) ~ 1, data = ovarian) time n.risk n.event survival
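For reference, a sketch of the call whose scale argument is reported as ignored; ?summary.survfit documents scale as a rescaling of the time axis (e.g. 365.25 to report days as years), so the time column below should come out divided.

    library(survival)
    fit <- survfit(Surv(futime, fustat) ~ 1, data = ovarian)
    summary(fit, scale = 365.25)   # expected: 'time' reported in years, not days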
2006 Jun 25
1
Puzzled with contour()
Folks, the contour() function wants x and y to be in increasing order. I have a situation where I have a grid in x and y, and associated z values, which looks like this:
        x    y     z
[1,] 0.00   20 1.000
[2,] 0.00   30 1.000
[3,] 0.00   40 1.000
[4,] 0.00   50 1.000
[5,] 0.00   60 1.000
[6,] 0.00   70 1.000
[7,] 0.00   80 0.000
[8,] 0.00   90
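A minimal sketch of reshaping such a long grid into the matrix form contour() needs, with x and y sorted into increasing order; g is a hypothetical three-column structure like the one shown.

    xs <- sort(unique(g[, "x"]))
    ys <- sort(unique(g[, "y"]))
    z  <- matrix(NA, length(xs), length(ys))
    z[cbind(match(g[, "x"], xs), match(g[, "y"], ys))] <- g[, "z"]
    contour(xs, ys, z)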
2003 Jan 17
2
read.table bug in Mac OS X (PR#2469)
Full_Name: George W. Gilchrist Version: 1.6.2 OS: OS X Submission from: (NULL) (128.239.124.126) Start with a tab-delimited or comma-delimited text file created on the Mac and use read.table("filename.txt", header=T) to read it in. When the first column of the file contains a character vector, and there is a header line, the first letter of the first column of the fifth row is appended
2013 Mar 28
0
using cvlm to do cross-validation
Hello, I did a cross-validation using cvlm from the DAAG package but wasn't sure how to assess the result. Does this result mean my model is a good model? I understand that the overall ms is the mean of the sum of squares. But is 0.0987 a good number? The response (i.e. gailRel5yr) has min, 1st quartile, median, mean, 3rd quartile, and max as follows: (0.462, 0.628, 0.806, 0.896, 1.000, 2.400)?
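One hedged way to read the 0.0987 figure: the cross-validated mean square is in the squared units of the response, so compare it against the response's own variance; a CV ms well below var(gailRel5yr) suggests the model explains a useful share of the variation. The data frame name df is an assumption.

    cv.ms <- 0.0987                  # overall ms from the DAAG cross-validation
    var(df$gailRel5yr)               # baseline spread of the raw response
    1 - cv.ms / var(df$gailRel5yr)   # rough cross-validated R^2 analogue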
2013 Dec 23
2
[PATCH net-next 3/3] net: auto-tune mergeable rx buffer size for improved performance
On Mon, Dec 16, 2013 at 04:16:29PM -0800, Michael Dalton wrote: > Commit 2613af0ed18a ("virtio_net: migrate mergeable rx buffers to page frag > allocators") changed the mergeable receive buffer size from PAGE_SIZE to > MTU-size, introducing a single-stream regression for benchmarks with large > average packet size. There is no single optimal buffer size for all >
2010 Jun 23
1
Probabilities from survfit.coxph:
Hello: In the example below (or for censored data) using survfit.coxph, can anyone point me to a link or a PDF explaining how the probabilities appearing in bold under "summary(pred$surv)" are calculated? Do these represent a cumulative probability distribution in time (not including censored times)? Thanks very much, parmee. fit <- coxph(Surv(futime, fustat) ~ age, data = ovarian)
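A sketch reproducing the poster's setup with standard survival calls; the values in summary(pred)$surv are model-based survival probabilities S(t) at the observed event times, i.e. a decreasing survival curve rather than a cumulative distribution.

    library(survival)
    fit  <- coxph(Surv(futime, fustat) ~ age, data = ovarian)
    pred <- survfit(fit)    # curve at the default (mean) covariate value
    summary(pred)$surv      # estimated S(t) at each event time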
2009 Oct 15
2
Data frame search and remove questions
Hello, I have a couple of questions about removing rows from a data frame and creating a new data frame with the removed values. I provided an example data frame (d) below. Questions: 1) How can I search for "-999.000" and remove the entire row from data frame "d"? (all -999 values will be in sd_diff) 2) Can I create a new data frame "d.new" that only contains the
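A minimal sketch answering both questions, under the stated assumption that the -999 codes occur only in sd_diff.

    bad   <- d$sd_diff == -999    # flag rows carrying the -999 code
    d.new <- d[bad, ]             # 2) new frame holding only the removed rows
    d     <- d[!bad, ]            # 1) original frame with those rows dropped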