Displaying 14 results from an estimated 14 matches similar to: "read table and import of a text file"
2009 Oct 07
1
Buglet in qbeta?
Hi,
I sometimes play around with extreme parameters for distributions and
found that qbeta is not always monotone as the following example shows.
I don't know whether this is serious enough to submit a bug report (as
this example is near the limits of floating-point arithmetic).
Josef
> x <- qbeta((0:100)/100,0.01,5)
> x
[1] 0.000000e+00 1.253990e-201 1.589622e-171
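A quick way to check the reported behaviour with the same call (qbeta should be non-decreasing in its probability argument):
x <- qbeta((0:100)/100, 0.01, 5)
any(diff(x) < 0)   # TRUE here reproduces the non-monotone dip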
2009 Jul 26
2
problems hist() and density
Hello,
I have a problem with the hist() function when showing densities: the
densities sum to 50 rather than to 1! I use R version 2.9.1 (2009-06-26) and
I load the seqinR library.
My data is the following vector:
[1] 0.1400000 0.2000000 0.2200000 0.2828283 0.1600000 0.1600000 0.3600000
[8] 0.1600000 0.2200000 0.2600000 0.2000000 0.3000000 0.2200000 0.2342342
[15] 0.1800000 0.2200000 0.1600000
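The likely explanation is that hist() reports density heights whose bar areas (height times bin width) sum to 1, not the heights themselves; with 0.02-wide bins the heights would indeed sum to 1/0.02 = 50. A minimal check, assuming the 17 values above are stored in a vector v:
v <- c(0.1400000, 0.2000000, 0.2200000, 0.2828283, 0.1600000, 0.1600000, 0.3600000,
       0.1600000, 0.2200000, 0.2600000, 0.2000000, 0.3000000, 0.2200000, 0.2342342,
       0.1800000, 0.2200000, 0.1600000)
h <- hist(v, plot = FALSE)
sum(h$density)                    # equals 1/bin-width, not 1
sum(h$density * diff(h$breaks))   # the bar areas are what sum to 1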
2010 Feb 25
2
reducing data.frame
Hi All,
Is there an easy way to reduce a data.frame to one row per 'id' while keeping
the information from the other rows for that same id, where applicable? e.g.:
# data
multi[1:15,]
id r n wi wi.tau z k alliance a.rater eml treatment outcome o.rater german
1 100 0.2800000 44 41 21.72514 0.2876821 210 <NA> <NA> <NA> <NA>
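The example is cut off above; a sketch of one possible approach, assuming the goal is one row per id that keeps the first non-missing value in every other column (first.non.na is a hypothetical helper, not from the post):
first.non.na <- function(x) { x <- x[!is.na(x)]; if (length(x)) x[1] else NA }
reduced <- aggregate(multi[, names(multi) != "id"],
                     by = list(id = multi$id), FUN = first.non.na)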
2011 Oct 24
4
Lm function: Error in model.frame.default
Hello,
I am trying to get a linear model of y ~ log(x).
> lm(y ~ log(x))
However, I always get an error report:
Error in model.frame.default(formula = y ~ log(x), drop.unused.levels = TRUE) :
  variable lengths differ (found for 'log(x)')
Here was my y:
> y
[1] 0.4500000 0.0500000 0.5000000 0.4000000 0.0000000 0.5000000 0.4000000
[8] 0.0500000
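That error means y and log(x) have different numbers of elements; log() never changes length, so the mismatch is between x and y themselves. A first check, assuming both are plain vectors in the workspace:
length(y)   # 8 in the output above
length(x)   # must also be 8 for lm(y ~ log(x)) to work
# If x is a column of a data frame, lm(y ~ log(x), data = mydata) with columns of
# matching length is the usual form ('mydata' is a hypothetical name).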
2010 Feb 20
3
aggregating using 'with' function
Hi All,
I am interested in aggregating a data frame on two
categories: the mean effect size (r) for each combination of 'id' and 'mod1'. The
'with' function works well when aggregating on one category (e.g.,
based on 'id' below), but it doesn't work if I try two categories. How can
this be accomplished?
# sample data
id<-c(1,1,1,rep(4:12))
n<-c(10,20,13,22,28,12,12,36,19,12,
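The sample data is cut off above; one hedged possibility is aggregate()'s formula interface, which accepts any number of grouping variables ('dat' is an assumed name for the assembled data frame; the column names come from the question):
aggregate(r ~ id + mod1, data = dat, FUN = mean)   # mean effect size per id/mod1 combination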
2010 Feb 11
1
Using sapply on a two argument function
Dear R users,
I have a function (simplified here) that accepts two arguments and performs
various calculations:
foo <- function(y, x) {
  a <- y * sqrt(x)
  b <- a + 2
  c <- a * b
  return(c)
}
If I call the function as follows I get the result I desire:
> foo(.1, 1:12)
[1] 0.2100000 0.3028427 0.3764102 0.4400000 0.4972136 0.5498979 0.5991503
[8] 0.6456854 0.6900000 0.7324555 0.7733250
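sapply() passes extra arguments after FUN straight through to the function, so one way to vary y while holding x fixed is:
sapply(c(0.1, 0.2, 0.3), foo, x = 1:12)   # one column of results per y value
# mapply(foo, ys, xs) would instead pair two equal-length vectors element-wise
# (ys and xs are hypothetical vectors).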
2011 Jan 25
0
win7,ruby1.9.2p136. gem install rails not working.
Hello!
I cannot install rails. `gem install rails` returns:
ERROR: While executing gem ... (TypeError)
incompatible marshal file format (can't be read)
format version 4.8 required; 0.0 given
The interpreter is installed from http://rubyinstaller.org/.
Fresh installation, `gem list --local` returns:
*** LOCAL GEMS ***
minitest (1.6.0)
rake (0.8.7)
rdoc (2.5.8)
`gem env`
2017 Jul 05
4
Help with reshape/reshape2 needed
Hi all:
I'm struggling with getting my data re-formatted using functions in
reshape/reshape2 to get from:
1957 0.862500000
1958 0.750000000
1959 0.300000000
1960 0.287500000
1963 0.675000000
1964 0.937500000
1965 0.025000000
1966 0.387500000
1969 0.087500000
1970 0.275000000
1973 0.500000000
1974 0.362500000
1976 0.925000000
1978 0.712500000
1979 0.337500000
1980 0.700000000
1981 0.425000000
2008 Jun 19
1
replacing segments of vector by their averages
Given a numeric vector of length n, I need to find the segments (runs) of values that are >= 0.2, compute the average of each segment, and replace the original values in each segment by the corresponding average.
For example, there are three segments that are >= 0.2: the average of the 1st segment is 0.3, the 2nd is 0.5, and the 3rd is 0.5333333.
>
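The example vector itself is cut off above; a sketch with rle(), using a made-up vector chosen only to match the three quoted averages (0.3, 0.5, 0.5333333):
v <- c(0.1, 0.3, 0.3, 0.05, 0.4, 0.6, 0.1, 0.5, 0.6, 0.5)
r <- rle(v >= 0.2)                            # runs of values >= 0.2 mark the segments
grp <- rep(seq_along(r$lengths), r$lengths)   # run id for every element
v2 <- ifelse(v >= 0.2, ave(v, grp), v)        # replace segment values by their run mean
v2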
2017 Jul 05
0
Help with reshape/reshape2 needed
This does not use reshape/reshape2, but it is pretty straightforward. Assuming X is your example data:
> Y <- split(X[, 2], X[, 1])
> vals <- sapply(Y, length)
> pad <- max(vals) - vals
> Y2 <- lapply(seq_along(Y), function(x) c(Y[[x]], rep(NA, pad[x])))
> names(Y2) <- names(Y)
> X2 <- do.call(cbind, Y2)
> X2[, 1:6]
1957 1958 1959
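For reference, a hedged reshape2 version of the same idea, assuming X's first column is the year and the second the value (the column names below are assigned only for clarity): number the values within each year, then cast wide.
library(reshape2)
names(X)[1:2] <- c("year", "value")             # assumed working names
X$idx <- ave(X$value, X$year, FUN = seq_along)  # position of each value within its year
wide <- dcast(X, idx ~ year, value.var = "value")  # one column per year, padded with NA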
2006 Oct 21
2
problem with mode of marginal distribution of rdirichlet{gtools}
Hi all,
I have a problem using rdirichlet{gtools}.
For Dir(a_1, a_2, ..., a_n), its mode can be found at $(a_i - 1)/(\sum_i a_i - n)$;
the means are $a_i / \sum_i a_i$.
I tried to study the above properties using rdirichlet from gtools. The code
is:
##############
library(gtools)
alpha = c(1,3,9) # total = 13
mean.expect = c(1/13, 3/13, 9/13)
mode.expect = c(0, 2/10, 8/10) #
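The code excerpt is cut off here, but a simulation check of the mean property could look like the sketch below. Note also that each marginal of a Dirichlet(a) is Beta(a_i, sum(a) - a_i), so the mode of a marginal is (a_i - 1)/(sum(a) - 2) (for a_i > 1), which differs from the joint-mode coordinates quoted above.
library(gtools)
set.seed(1)                      # arbitrary seed, for reproducibility
draws <- rdirichlet(100000, c(1, 3, 9))
colMeans(draws)                  # should be close to c(1/13, 3/13, 9/13)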
2009 Oct 14
1
change order of bar plot categories
Is this what you want?
temp<-c(rep("Low",2),rep("Medium",2),rep("High",2))
light<-rep(c("Dark","light"),3)
avg<-dat.avg2[,3] #
se<-dat.avg2[,4]
dat.avg.temp<-data.frame(cbind(avg,se))
dat.avg.temp<-data.frame(cbind(temp,light,dat.avg.temp))
dat.plot<-qplot(light,avg, fill=factor(temp),data=dat.avg.temp,
geom="bar",
2010 Jun 06
1
I need help in analyzing
I'm sorry for my weak English. I need to analyze the following data (the y column holds Polish hair-color labels: czarne = black, rude = red, braz = brown):
x1 x2 x3 x4 x5 x6 x7 x8 x9 x10 y
0 0 1 0 0 1 0 0 1 0 czarne
1 1 0 0 0 0 1 0 0 0 rude
0 0 1 0 0 1 1 0 0 0 braz
0 0 1 0 1 0 1 0 0 0 blond
1 0 0 0 0 1 0 0 0 1 rude
1 1 0 0 0 0 0 0 0 1 blond
0 0 1 1 0 0 0 0 1 0 czarne
1 0 0 1 0 0 1 0 0 0 blond
0 0 1 0 0 1 1 0 0 0 blond
1 0 0 0 0 1 1 0 0 0 czarne
0 0 1 0 0 1 0 0 0 1 czarne
1 0 1 0 0 0
2002 Feb 13
3
xtabs
Hi,
In Splus, if I call the function crosstabs(), the output is a contingency
table; in each cell of the table is printed N, N/RowTotal,
N/ColTotal, and N/Total, where N is the number of observations in that cell.
The same call to xtabs() in R produces the contingency table, but the
only entry in each cell is N.
How can I get the same relative frequencies that crosstabs() gives?
Thanks,
mike
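One hedged answer is to build the table with xtabs() and derive the relative frequencies with prop.table(); the data frame 'd' and factors 'a' and 'b' below are placeholders:
tab <- xtabs(~ a + b, data = d)
tab                   # N per cell
prop.table(tab)       # N / Total
prop.table(tab, 1)    # N / RowTotal
prop.table(tab, 2)    # N / ColTotal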