Displaying 20 results from an estimated 200 matches similar to: "relative frequencies for hist()"
2013 Apr 11
2
Read the data from a text file and reshape the data
I have a data set for different time intervals. There are three comment
lines before the data for each time interval, and each interval has 500
data points. I want to reshape the dataset into the following format:
t1 t2 t3 ................
0.00208 0.00417 0.00625 .................
a1 a2 a3 ...................
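A minimal sketch of one way to do this reshape, assuming the comment lines
start with "#" and each interval contributes one contiguous block of 500
values ("intervals.txt" is a hypothetical file name):
vals <- scan("intervals.txt", comment.char = "#")  ## skips the comment lines
m <- matrix(vals, nrow = 500)                      ## one column per interval
colnames(m) <- paste0("t", seq_len(ncol(m)))       ## t1, t2, t3, ...
write.table(m, "reshaped.txt", row.names = FALSE, quote = FALSE)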
2009 Aug 19
2
Contrasts within the ANOVA framework (Repost)
I'd like to try my luck again and see if this catches someone's eye.
I was trying to do some contrasts within ANOVA. I searched the archive and
found a clue posted by Steffen Katzner
( http://tolstoy.newcastle.edu.au/R/help/06/01/19385.html)
I have three levels for a factor named "StdLot" and want to make three
comparisons, 1 vs 2, 1 vs 3 and 2 vs 3.
First,
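The post breaks off here, but for all three pairwise comparisons of a
three-level factor, one base-R route is Tukey's HSD on an aov() fit; a
minimal sketch with made-up data (StdLot and y stand in for the real columns):
d <- data.frame(StdLot = factor(rep(1:3, each = 10)), y = rnorm(30))
fit <- aov(y ~ StdLot, data = d)
TukeyHSD(fit)   ## gives the 2-1, 3-1 and 3-2 comparisons with adjusted p-values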
2009 Apr 09
2
Failed when merging two data frames: why?
Hi, R-listers,
I failed when I tried to merge df1 and df2 by "codetot" in df1 and
"codetoto" in df2. I would like to know the reason, and how to merge them.
The data frames and code I used are listed below. Thanks a lot in advance.
df1:
popcode codetot p3need
BCPy01-01 BCPy01-01-1 100.0000
BCPy01-01 BCPy01-01-2 100.0000
BCPy01-01 BCPy01-01-3 100.0000
BCPy01-02
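The df2 listing is cut off above, but when the key columns have different
names, merge() needs by.x/by.y rather than by; a minimal sketch with made-up
stand-ins for the two data frames:
df1 <- data.frame(popcode = "BCPy01-01", codetot = "BCPy01-01-1", p3need = 100)
df2 <- data.frame(codetoto = "BCPy01-01-1", other = 1)
merged <- merge(df1, df2, by.x = "codetot", by.y = "codetoto")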
2009 Apr 10
4
split a character variable into several character variables by a character
Dear Mao Jianfeng,
"r-help-owner" is not the place for help, but:
r-help at r-project.org
(CC-ed here)
In any case, strsplit() does the job, i.e.:
> unlist(strsplit("BCPy01-01", "-"))
[1] "BCPy01" "01"
You can work with the whole variable, like:
splitpop <- strsplit(df1$popcode, "-")
then access the first part with
>
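The reply is cut off here; a sketch of the usual way to pull the pieces out
of the list that strsplit() returns:
splitpop <- strsplit(c("BCPy01-01", "BCPy01-02"), "-")
sapply(splitpop, `[`, 1)   ## first parts:  "BCPy01" "BCPy01"
sapply(splitpop, `[`, 2)   ## second parts: "01" "02"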
2011 Mar 01
1
what does the "S.D." returned by {Hmisc} rcorr.cens measure?
Dear R-help,
This is an example in the {Hmisc} manual under rcorr.cens function:
> set.seed(1)
> x <- round(rnorm(200))
> y <- rnorm(200)
> round(rcorr.cens(x, y, outx=F),4)
       C Index            Dxy           S.D.              n        missing
        0.4831        -0.0338         0.0462       200.0000
    uncensored Relevant Pairs     Concordant      Uncertain
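If "S.D." is the standard error of the Dxy estimate (which is how I read the
Hmisc documentation; treat this as an assumption), an approximate 95%
confidence interval for Dxy follows directly:
dxy <- -0.0338; se <- 0.0462   ## values from the output above
dxy + c(-1.96, 1.96) * se      ## approximate 95% CI for Dxy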
2012 Jan 11
1
R CMD check pkg and 32/64 bit.
R gurus:
I'm trying to get another round of rconifers out, and I need some advice/help ironing out differences in the examples test.
I'm trying to make sure the max sdi values are being respected.
I've added a tests/rconifers-Ex.Rout.save (from windows i386-pc-mingw32) and when I ran R CMD check (both R-2.13.0), I got the following results:
* using log directory
2009 Jan 18
1
?grep
Dear Rxperts,
I have the following data:
Study Study.Name C Category TC Time       QC QO SD        FSD Theta
    1 NONE       0 P(22)     0 0.00   7.5596  0  0 8.0361e-03    0
    1 NONE       6 G(50)     0 0.00   1.0000  0  0 0.0000e+00    0
    1 NONE       2 F(02)     0 0.00 100.0000  0  0 0.0000e+00    0
    1 NONE       3 F(03)     0 0.00  13.2280  0  0 1.6732e-02    0
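The question itself is cut off, but with data like this a typical grep() task
is selecting the rows whose Category matches a pattern; a hypothetical sketch
picking out the F(..) rows:
d <- data.frame(Category = c("P(22)", "G(50)", "F(02)", "F(03)"))
d[grep("^F\\(", d$Category), , drop = FALSE]   ## keeps rows 3 and 4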
2011 Jan 19
1
Problem using the bdh function for Govt tickers
Hi, all
I wanted to fetch data from Bloomberg for govt bonds, and analyse it
further.
I am having trouble getting the data: when I use field=PX_LAST it returns
the prices, but when I use field=CPN or ISSUE_DT it returns no results,
just <NA>.
This is the piece of code:
> library(rJava)
Warning message:
package 'rJava' was built
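A guess rather than a confirmed fix: PX_LAST is a historical field, while CPN
and ISSUE_DT are static reference fields, so they may need a reference-data
call instead of a historical one. In the Rblpapi package (a newer Bloomberg
client than the rJava-based one loaded above; the ticker here is
hypothetical) that would be bdp():
library(Rblpapi)
con <- blpConnect()                              ## uses a running Bloomberg session
bdp("US912828XX99 Govt", c("CPN", "ISSUE_DT"))   ## reference call, not historical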
2005 Apr 15
2
negative AIC values: how to compare models with negative AICs
Dear,
When fitting the following model
knots <- 5
lrm.NDWI <- lrm(m.arson ~ rcs(NDWI, knots))
I obtain the following result:
Logistic Regression Model

lrm(formula = m.arson ~ rcs(NDWI, knots))

Frequencies of Responses
  0  1
666 35

Obs Max Deriv Model L.R. d.f. P C Dxy Gamma Tau-a R2 Brier
701     5e-07      34.49
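A negative AIC is nothing to worry about: only differences in AIC matter, and
the smaller value (sign included) wins. A self-contained illustration with a
built-in dataset:
m1 <- lm(dist ~ speed, data = cars)
m2 <- lm(dist ~ poly(speed, 2), data = cars)
AIC(m1, m2)   ## compare the two rows; the lower AIC is the better fit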
2008 Jul 02
2
help appreciated to make a plot
Dear All:
I have the following data:
0 100
0.5 79.9605
0.75 84.7098
1.5 72.1793
2.5 97.4924
4.5 90.9696
8.5 59.9794
24.5 76.4859
456 100.0000
457.5 116.7381
460.5 118.7550
464.5
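The last pair is cut off above, but taking the two columns as x and y, a
basic sketch of the plot (the jump from 24.5 to 456 may read better on a log
axis, hence the second call):
x <- c(0, 0.5, 0.75, 1.5, 2.5, 4.5, 8.5, 24.5, 456, 457.5, 460.5)
y <- c(100, 79.9605, 84.7098, 72.1793, 97.4924, 90.9696, 59.9794,
       76.4859, 100.0000, 116.7381, 118.7550)
plot(x, y, type = "b")
plot(x + 1, y, type = "b", log = "x")   ## x + 1 because log(0) is undefined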
2016 Jun 14
2
Getting HTTP path-prefix to work with syslinux.efi
>> Doesn't work. Apparently the Dell UEFI PXE firmware doesn't know HTTP.
>
> There's a lot of variation. Do you have a shell option
> in your boot selections?
None of the Dell OptiPlex 990 (firmware A19), 9010 (firmware A22), or 9020
(firmware A16) seems to have a built-in EFI shell option, but in all three
cases I had success running the external EFI shell from
2011 Jul 20
1
Fwd: Help please
Hi All,
This is not really an R question but a statistical one. If someone could
either give me a brief explanation or point me to a reference that might
help, I'd appreciate it.
I want to estimate the mean of a log-normal distribution, given the (log
scale normal) parameters mu and sigma squared (sigma2). I understood this
should simply be:
exp(mu + sigma2)
... but the following code
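The code is cut off, but note that the standard mean of a log-normal is
exp(mu + sigma2/2), not exp(mu + sigma2), which is likely the discrepancy; a
quick simulation check:
mu <- 1; sigma2 <- 0.25
x <- rlnorm(1e6, meanlog = mu, sdlog = sqrt(sigma2))
mean(x)                ## simulated mean
exp(mu + sigma2 / 2)   ## theoretical mean; the two agree closely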
2008 Nov 05
1
Problems computing 2-way-mixed-model ANOVA
Dear Experts,
I am new to R and unfortunately cannot even get started with a simple
statistical analysis:
I manually determined the volume of the right and left hippocampus in
a group of meditators and in a group of controls. My data-sheet looks
as follows:
observation subject group age gender hemisphere volume
1 am04 m 25 f left 3.637
2 am04 m 25 f right 3.713
3 ao08 m 47 m left 3.715
4 ao08 m 47
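The data listing is cut off, but one conventional base-R formulation of this
design (group between subjects, hemisphere within) is aov() with an Error()
stratum; a minimal sketch with simulated volumes:
d <- data.frame(subject    = factor(rep(1:6, each = 2)),
                group      = factor(rep(c("m", "c"), each = 6)),
                hemisphere = factor(rep(c("left", "right"), 6)),
                volume     = rnorm(12, mean = 3.7, sd = 0.1))
fit <- aov(volume ~ group * hemisphere + Error(subject/hemisphere), data = d)
summary(fit)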
2008 Mar 20
1
Interpretation of Variance decomposition in VAR model
Hi all,
This question is not really R-related but a statistics question, and I did not even use R for the analysis; still, I want to post it here, because I hope to get help from the great statisticians who are active members of this group.
My problem is to interpret the variance decomposition of a VAR model in layman's terms.
Using EViews I got the following:
Variance
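The EViews table is cut off, but the vars package produces the same kind of
forecast-error variance decomposition in R; a sketch using the example data
shipped with the package:
library(vars)
data(Canada)              ## quarterly Canadian macro series
fit <- VAR(Canada, p = 2)
fevd(fit, n.ahead = 10)   ## share of each variable's forecast-error
                          ## variance attributable to each shock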
2006 Apr 06
1
interpreting anova summary tables - newbie
Hello,
Apologies if this is the wrong list; I am a first-time poster here. I
have an experiment in which an output is measured in response to 42
different categories.
I am only interested in which of the categories is significantly different
from a reference category.
Here is the summary of the results:
summary(simple.fit)
Call:
lm(formula = as.numeric(as.vector(TNFa)) ~ Mutant.ID, data =
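The summary is cut off, but with R's default treatment contrasts each
coefficient row already tests one category against the reference level, and
relevel() chooses which category that is; a sketch with made-up data ("ref"
is a hypothetical reference label):
d <- data.frame(Mutant.ID = factor(rep(c("ref", "A", "B"), each = 5)),
                TNFa = rnorm(15))
fit <- lm(TNFa ~ relevel(Mutant.ID, ref = "ref"), data = d)
summary(fit)   ## each coefficient compares one category with "ref"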
2010 Feb 26
2
dramatic speed difference in lapply
So I have a function that does lapply calls for me based on dimension. It
currently only works for length(pivotColumns)=2 because I haven't fixed the
rbinds. I have two versions, and one runs far faster than the other; I'm
not sure why.
Fast Version:
fedb.ddplyWrapper2Fast <- function(data, pivotColumns, listNameFunctions,
...){
lapplyFunctionRecurse <- function(cdata, level=1,
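The two versions are cut off, but a common cause of this kind of gap is
growing the result with rbind() inside a loop (quadratic) instead of
collecting the pieces and binding once (linear); a self-contained timing
sketch:
pieces <- lapply(1:1000, function(i) data.frame(i = i, x = rnorm(1)))
slow <- function() { out <- NULL; for (p in pieces) out <- rbind(out, p); out }
fast <- function() do.call(rbind, pieces)
system.time(slow())   ## many incremental copies
system.time(fast())   ## one bind, much faster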
2007 Jul 31
2
choosing between Poisson regression models: no interactions vs. interactions
R gurus,
I'm working on data analysis for a small project. My response
variable is total vines per tree (median = 0, mean = 1.65, min = 0,
max = 24). My predictors are two categorical variables (four sites
and four species) and one continuous (tree diameter at breast height
(DBH)). The main question I'm attempting to answer is whether or not
the species identity of a tree has
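The post is cut off, but the usual way to choose between the two Poisson
models is a likelihood-ratio test (or AIC) on the nested fits; a sketch with
simulated stand-in data:
set.seed(1)
trees <- data.frame(site    = factor(sample(1:4, 200, replace = TRUE)),
                    species = factor(sample(letters[1:4], 200, replace = TRUE)),
                    dbh     = runif(200, 5, 50))
trees$vines <- rpois(200, lambda = 1.65)
m0 <- glm(vines ~ site + species + dbh, family = poisson, data = trees)
m1 <- glm(vines ~ site * species + dbh, family = poisson, data = trees)
anova(m0, m1, test = "Chisq")   ## does the interaction improve the fit?
AIC(m0, m1)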
2012 Feb 28
1
Error in solve.default(res$hessian * n.used) : Lapack routine dgesv: system is exactly singular
Hi there!
I'm a noob when it comes to R, and I'm using it to run statistical analyses.
With the ARIMA code below I'm getting this error: Error in
solve.default(res$hessian * n.used) : Lapack routine dgesv: system is
exactly singular
The code is:
> s.ts <- ts(x[,7], start = 2004, fre=12)
> get.best.arima <- function (x.ts, maxord=c(1,1,1,1,1,1))
+ {
+ best.aic <- 1e8
+ n <-
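The function is cut off, but this looks like the get.best.arima search from
Cowpertwait & Metcalfe, and the usual guard there is to wrap arima() in
try() so that orders with a singular Hessian are skipped rather than
aborting the loop; a sketch with simulated data standing in for x[,7]:
s.ts <- ts(rnorm(120, mean = 100, sd = 10), start = 2004, frequency = 12)
fit <- try(arima(s.ts, order = c(1, 0, 1),
                 seasonal = list(order = c(1, 0, 1), period = 12)),
           silent = TRUE)
if (inherits(fit, "try-error")) {
  message("fit failed (e.g. singular system); skip this order")
} else {
  AIC(fit)
}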
2008 Aug 08
2
aggregate
Dear All-
I have a dataset that is comprised of the following:
doy       yr   mon day hr hgt1 hgt2 hgt3 co21    co22    co23    sig1   sig2   sig3  dif    flag
244.02083 2005 09  01  00 2.6  9.5  17.8 375.665 373.737 373.227 3.698  1.107  0.963 -0.509 PRE
244.0625  2005 09  01  01 2.6  9.5  17.8 393.66  384.773 379.466 15.336 11.033 5.76  -5.307 PRE
244.10417 2005 09  01  02 2.6  9.5  17.8 411.162 397.866 387.755 6.835  5.61   6.728
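The question itself is cut off, but a typical aggregate() call on data like
this averages a measurement within groups; a hypothetical sketch computing
mean co21 per day with made-up numbers:
d <- data.frame(day = rep(1:2, each = 3), co21 = rnorm(6, mean = 380, sd = 10))
aggregate(co21 ~ day, data = d, FUN = mean)   ## one mean per day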
2006 Sep 28
2
a decimal aligned column
Hello,
For numbers in the range 100 to 100,000,000 I'd like to decimal-align a
right-justified, comma-delimited column of numbers, but I haven't been able
to work out the proper format statement. format(num, justify = "right",
width = 15, big.mark = ",") gets me close, but numbers larger than
1,000,000 push a digit beyond the right edge of the column, which I really
don't
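The message is cut off, but formatC() handles this case directly, keeping
the comma-grouped integers right-aligned within the field width:
num <- c(100, 12345, 100000000)
formatC(num, format = "d", big.mark = ",", width = 15)
## "            100" "         12,345" "    100,000,000"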