Displaying 20 results from an estimated 71 matches for "0.156".
2004 Nov 06
3
how to read this matrix into R
The following is the lower.tri matrix in a file named luxry.car,
and I want to read it into R as a lower.tri matrix. How can I do this?
I have tried help.search("read"), but did not find what I want.
1.000
0.591 1.000
0.356 0.350 1.000
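One minimal way to read such a file, assuming luxry.car contains only the six
numbers shown (one row of the lower triangle per line, diagonal included) and
the matrix is 3 x 3, is to scan() the values and fill the triangle by hand:
vals <- scan("luxry.car")               # 1.000 0.591 1.000 0.356 0.350 1.000
n <- 3                                  # known dimension
m <- matrix(0, n, n)
m[upper.tri(m, diag = TRUE)] <- vals    # row-wise lower triangle = column-wise upper triangle of the transpose
m <- t(m)                               # the lower.tri matrix
m[upper.tri(m)] <- t(m)[upper.tri(m)]   # optional: mirror to a full symmetric matrix
m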
2000 Dec 20
1
Question about coplot() ...
Dear R-friends,
For the following data:
> xy
x y i
1 731 0.313 2
2 739 0.340 2
3 790 0.373 2
4 855 0.451 2
5 980 0.608 2
6 575 0.156 1
7 608 0.207 1
8 630 0.249 1
9 670 0.332 1
10 838 0.377 1
11 964 0.466 1
> coplot(y ~ x|i, data=xy)
coplot gives 3 panels rather than the 2 I expected, namely one for i=1 and
one for i=2.
Furthermore, when I extend data frame xy to have i=3 as follows:
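The message is cut off here, but the extra panel typically comes from coplot()
treating a numeric conditioning variable as an overlapping shingle; a minimal
sketch of the usual fix is to condition on a factor instead:
coplot(y ~ x | factor(i), data = xy)   # one panel per distinct value of i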
2009 Jan 12
4
fitting curve to data
I have the following data:
> y
[1] 0.000 0.004 0.008 0.016 0.024 0.032 0.044 0.064 0.072 0.088 0.108 0.140
[13] 0.156 0.180 0.208 0.236 0.264 0.296 0.320 0.360 0.408 0.444 0.472 0.524
[25] 0.576
> x
[1] 100 200 300 400 500 600 700 800 900 1000 1100 1200 1300 1400 1500
[16] 1600 1700 1800 1900 2000 2100 2200 2300 2400 2500
I'd
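The question is truncated here, but if the aim is simply to fit a smooth curve
to y against x, a hedged sketch with a quadratic polynomial (the functional
form is an assumption, not something stated in the post) would be:
fit <- lm(y ~ poly(x, 2, raw = TRUE))   # assumed model: quadratic in x
summary(fit)
plot(x, y)
lines(x, fitted(fit), col = "red")      # overlay the fitted curve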
2005 Jun 02
3
How to change all name of variables
Dear R-helpers,
First, I apologize if my question is quite simple.
I have a large dataset with more than 100 variables.
For my research I need to change the names of all variables by adding one or
more letters to each variable name.
For example,
> data(Pima.tr)
> Pima.tr[1:5,]
npreg glu bp skin bmi ped age type
1 5 86 68 28 30.2 0.364 24 No
2 7 195 70 33 25.1 0.163 55 Yes
3 5
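The message is cut off, but prefixing every variable name can be done by
assigning to names(); a sketch using the same Pima.tr example (the prefix
"pre_" is just a stand-in for the letters to add):
library(MASS)                                      # Pima.tr lives in MASS
data(Pima.tr)
names(Pima.tr) <- paste("pre_", names(Pima.tr), sep = "")
Pima.tr[1:5, ]                                     # npreg etc. are now pre_npreg etc.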
2012 Jun 04
0
Negative variance with lavaan in a multigroup analysis.
Hi list members,
I saw a couple of lavaan posts here so I think I'm sending this to the
correct list.
I am trying to run a multigroup analysis with lavaan in order to
compare behavioural correlations across two populations. I'm following
the method suggested in the paper by Dingemanse et al. (2010) in
Behavioural Ecology.
In one of the groups, lavaan returns negative variance for one path
and I'm
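The post is truncated here; for reference only, a bare-bones multigroup lavaan
call looks like the sketch below, where the model string, the data frame
mydata and the grouping variable "population" are all placeholders rather than
the poster's actual setup:
library(lavaan)
model <- '
  boldness ~~ aggressiveness    # behavioural correlation of interest (placeholder variables)
'
fit <- sem(model, data = mydata, group = "population")
summary(fit, standardized = TRUE)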
2010 Feb 04
2
help needed using t.test with factors
I am trying to use t.test on the following data:
date        type  INTERVAL  nCASES  MTF    SDF    MTO    SDO    nFST  MF
2001-06-15  avn   GE1.00    4385    0.246  0.300  1.502  0.556  1367  1.373

nOBS  MO     MB     BIASCV  BIASEV  ME      MAE    RMSE   CRCF
4385  1.502  1.471  0.285   0.164   -1.256  1.266  1.399  0.056
2001-06-15 avn
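The message is truncated, but for comparing one of the numeric columns between
the levels of a factor such as type, the formula interface of t.test() is the
usual route; a sketch assuming the table above has been read into a data frame
dat and that type has exactly two levels:
t.test(MTF ~ type, data = dat)   # two-sample t test of MTF between the two types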
2004 Jun 30
2
Question about plotting related to roll-up
Hello R'ers,
I have a large set of data which has many y samples for each unit x. The data
might look like:
Seconds Response_time
---------- ----------------
0 0.150
0 0.202
0 0.065
1 0.110
1 0.280
2 0.230
2 0.156
3 0.070
3 0.185
3 0.255
3 0.311
3 0.120
4
.... and so on
When I do a basic plot with type="l" or the default of points, it obviously
plots every
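The message is cut off, but a common way to roll the data up before plotting
is to aggregate the many responses within each second; a sketch assuming the
two columns above are in a data frame dat:
agg <- aggregate(Response_time ~ Seconds, data = dat, FUN = mean)  # mean per second
plot(agg$Seconds, agg$Response_time, type = "l",
     xlab = "Seconds", ylab = "Mean response time")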
2005 Jul 22
1
virtual routing issue
A most puzzling network conundrum has arisen while I was attempting to
create a virtual network behind a virtual router which in turn connects the
virtual network to my real network.
My machine (192.168.103.23) is on the network with my router
(192.168.103.1). The virtual router, tiara, has to connect my
192.168.103.* network with the virtual 10.0.0.* network which comprises two
other virtual
2005 Nov 24
4
Survreg Weibull lambda and p
Hi All,
I have conducted the following survival analysis which appears to be OK
(thanks BRipley for solving my earlier problem).
> surv.mod1 <- survreg( Surv(timep1, relall6)~randgrpc, data=Dataset,
dist="weibull", scale = 1)
> summary(surv.mod1)
Call:
survreg(formula = Surv(timep1, relall6) ~ randgrpc, data = Dataset,
dist = "weibull", scale = 1)
2008 Aug 22
1
problem with rbind
I am trying to use rbind to stack the two data sets on top of each other, but I
am getting an extra X in the column header and the rows are numbered. How do I
get rid of this problem? I appreciate your help.
x1<- read.table(file="data1.txt", header=T, sep="\t")
x2<-read.table(file="data2.txt", header=T, sep="\t")
y<-rbind(x1,x2)
> y
X0
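The output is truncated, but the extra X usually comes from read.table()
repairing a non-syntactic column name (for example one that starts with a
digit) via check.names, and the row numbers are just the data frame's row
names; a sketch of both fixes, with the output file name purely illustrative:
x1 <- read.table("data1.txt", header = TRUE, sep = "\t", check.names = FALSE)
x2 <- read.table("data2.txt", header = TRUE, sep = "\t", check.names = FALSE)
y  <- rbind(x1, x2)
rownames(y) <- NULL                           # renumber rows 1..n
write.table(y, "combined.txt", sep = "\t",
            row.names = FALSE, quote = FALSE) # no row numbers in the file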
2011 Sep 27
1
binomial logistic regression question
Dear subscribers,
I am looking for a function which would allow me to model the dependent
variable as the number of successes in a series of Bernoulli trials. My data
looks like this
ID TRIALS SUCCESSESS INDEP1 INDEP2 INDEP3
1 4444 0 0.273 0.055 0.156
2 98170 74 0.123 0.456 0.789
3 145486 30 0.124
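The table is truncated, but this is exactly what glm() with a two-column
(successes, failures) response does; a sketch assuming the posted columns live
in a data frame dat, keeping the column spelling from the post:
fit <- glm(cbind(SUCCESSESS, TRIALS - SUCCESSESS) ~ INDEP1 + INDEP2 + INDEP3,
           family = binomial, data = dat)
summary(fit)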
2007 Nov 27
1
Rsync stalls
I am trying to rsync a machine running CYGWIN_NT-5.2 server 1.5.24(0.156/4/2)
to another which runs FreeBSD 6.2-STABLE; both with rsync 2.6.9.
I'm trying to "pull" the files from the Cygwin machine with:
/usr/local/bin/rsync -avz --delete --delete-excluded -e "ssh"
[user]@[host]:"/cygdrive/e/Shared" /home/[host]
However, rsync stalls. This seems to be
2011 Nov 14
3
What is the CADF test criterion="BIC" report?
Hello:
I am a rookie in using R. When I used the unit root test in
"CADFtest", I got different t-test statistics when using
criterion="BIC" and when not using a criterion. When I checked the result
with EViews, I found that the result without a criterion is correct. Why do I
get a different result after using criterion="BIC"?
Paul
> data(Canada)
> ADFt
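The session is cut off; as far as I understand the package, the difference
comes from lag selection: with criterion = "BIC" CADFtest chooses the number
of lagged differences itself (up to max.lag.y), while without a criterion it
uses the fixed lag length, so the statistics need not agree. A sketch with a
placeholder series y:
library(CADFtest)
CADFtest(y, max.lag.y = 4)                     # fixed lag length
CADFtest(y, max.lag.y = 4, criterion = "BIC")  # lag length selected by BIC up to max.lag.y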
2008 Jan 15
1
flac default is -l8 (but says -l5)
$ ls -l a.wav
-rw-r--r-- 1 rootboy None 30587804 Jan 7 23:00 a.wav
$ flac -v
flac 1.2.1
$ type flac
flac is hashed (/usr/bin/flac)
$ uname -a
CYGWIN_NT-5.1 DFBJ7M51 1.5.24(0.156/4/2) 2007-01-31 10:57 i686 Cygwin
$ flac -f a.wav
a.wav: wrote 16323314 bytes, ratio=0.534
$ flac -l8 -f a.wav
a.wav: wrote 16323314 bytes, ratio=0.534
$ flac -l5 -f a.wav
a.wav: wrote 16398095 bytes, ratio=0.536
$
2013 Mar 15
1
multiple frequencies per second again
Dear R People:
I have the following situation. I have observations that are 128 samples
per second, which is fine. I want to fit them with ARIMA models, also fine.
My question is: when I do my forecasting, do I need to do anything
special with the "n.ahead" parameter? Here is the initial setup:
> xx <- ts(rnorm(128),start=0,freq=128)
> str(xx)
Time-Series
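The output is truncated; as a sketch of the point in question, n.ahead counts
observations rather than seconds, so forecasting one second ahead of a
128-samples-per-second series means n.ahead = 128 (the AR(1) order below is
only a placeholder):
xx  <- ts(rnorm(128), start = 0, freq = 128)
fit <- arima(xx, order = c(1, 0, 0))     # placeholder model order
fc  <- predict(fit, n.ahead = 128)       # the next full second = 128 steps
fc$pred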
2005 Jun 24
1
lme4 extracting individual variance components
Hi,
For further calculations I need to extract individual variances of
different random effects from a fitted model.
I found out how to extract the correlations
(VarCorr(m1)@reSumry$group1) but I was not able to find a way to
extract the other components individually.
To extract the residuals I tried (ranef(m1)@ stdErr), which
unfortunately did not work.
Thank you very much for your help!
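With the present-day lme4 interface (the 2005 slot names quoted above no
longer exist) the components can be pulled out of VarCorr() directly; a
sketch, assuming m1 is a fitted lmer model with a random effect for a grouping
factor called group1:
library(lme4)
vc <- VarCorr(m1)
as.data.frame(vc)                  # one row per variance/covariance component
attr(vc$group1, "stddev")^2        # variances of the 'group1' random effects
sigma(m1)^2                        # residual variance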
2012 Jun 06
2
Main effects and interactions in mixed linear models
Dear all,
This question may be too basic a question for this list, but if someone has
time to answer I will be happy. I have tried to find out, but haven't found
a concise answer.
As an example I use "Pinheiro, J. C. & Bates, D. M. 2000. Mixed-effects
models in S and S-PLUS. Springer, New York." page 225, where rats are fed
3 different diets over time and their body mass has
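The message is cut off here; if memory serves, the rat diet data from that
chapter ship with nlme as BodyWeight, so a sketch of fitting the interaction
model and testing the terms could look like:
library(nlme)
fm <- lme(weight ~ Time * Diet, random = ~ 1 | Rat, data = BodyWeight)
anova(fm)    # sequential tests of Time, Diet and the Time:Diet interaction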
2012 May 02
1
coxph reference hazard rate
Hi,
In the following results I interpret exp(coef) as the factor that multiplies
the base hazard rate if the corresponding variable is TRUE. For example,
when the bucket is ks008 and fidelity <= 3, then the rate, compared to the
base rate h_0(t), is h(t) = 0.200 h_0(t). My question then is: to what case
does the baseline hazard rate correspond? I would expect the reference to be
the first
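The message is truncated; as a general illustration (with the survival
package's lung data, not the poster's), the baseline hazard in coxph()
corresponds to all covariates at zero, which for a factor covariate means its
first (reference) level:
library(survival)
fit <- coxph(Surv(time, status) ~ factor(sex), data = lung)
levels(factor(lung$sex))[1]   # the reference level that the baseline hazard refers to
summary(fit)                  # exp(coef) multiplies the baseline hazard for the other level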
2007 Jun 11
1
2 iosnoop scripts: different results
I am teaching a DTrace class and a student noticed that two iosnoop scripts run in two different windows were producing different results. I was not able to answer why this is. Can anyone explain it? Here are the results from the two windows:
# io.d
...
sched 0 <none> 1024 dad1 W 0.156
bash 1998
2009 Feb 23
1
why results from regression tree (rpart) are totally inconsistent with ordinary regression
Hi,
In my analysis of impacts of insecticide-treated bednets on malaria, I
look at the relationship between malaria incidence and mosquito
behaviors. The condensed data set is copied here. Ordinary regression
(lm) shows that Incidence was negatively related to Mortality. This
makes sense because the latter reflected how effectively the insecticide-treated
nets killed mosquitoes. Since the
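The message is cut off here; for anyone wanting to reproduce the comparison,
the two fits on the same formula would look roughly like the sketch below,
where dat stands for the poster's condensed data set and Incidence and
Mortality are the columns mentioned above:
library(rpart)
lm.fit   <- lm(Incidence ~ Mortality, data = dat)      # ordinary regression
tree.fit <- rpart(Incidence ~ Mortality, data = dat)   # regression tree
summary(lm.fit)
print(tree.fit)   # compare the split structure with the sign of the linear slope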