Displaying 20 results from an estimated 10000 matches similar to: "xyplot() groups scope issue"
2007 Sep 17
1
Create correlated data with skew
Hi all,
I understand that it is simple to create data with a specific
correlation (say, .5) using mvrnorm from the MASS library:
> library(MASS)
> set.seed(1)
>
> a=mvrnorm(
+ n=10
+ ,mu=rep(0,2)
+ ,Sigma=matrix(c(1,.5,.5,1),2,2)
+ ,empirical=T
+ )
> a
           [,1]         [,2]
[1,] -1.0008380 -1.233467875
[2,] -0.1588633 -0.003410001
[3,]  1.2054727 -0.620558768
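The excerpt ends before any reply; a sketch of one standard approach (an
assumption, not necessarily the thread's answer) is a Gaussian copula:
generate correlated normals with mvrnorm(), then push each margin through
pnorm() and a skewed quantile function.
####
library(MASS)
set.seed(1)
z <- mvrnorm(n = 1000, mu = rep(0, 2),
             Sigma = matrix(c(1, .5, .5, 1), 2, 2))
# pnorm() maps each normal margin to uniforms; qgamma() then imposes a skewed
# marginal shape. Rank correlation is preserved; the Pearson correlation comes
# out somewhat below the nominal .5.
x <- qgamma(pnorm(z[, 1]), shape = 2)
y <- qgamma(pnorm(z[, 2]), shape = 2)
cor(x, y)
####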
2007 Jun 26
2
Power calculation with measurement error
Hi all,
Hopefully this will be quick: I'm looking for pointers to packages/
functions that would allow me to calculate the power of a t.test when
the DV has measurement error. That is, I understand that, ceteris
paribus, experiments using measures with more error (lower
reliability) will have lower power.
Mike
--
Mike Lawrence
Graduate Student, Department of Psychology, Dalhousie
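A minimal sketch of the attenuation logic (not from the thread): classical
measurement error inflates the observed SD by 1/sqrt(reliability), so the
standardized effect shrinks by sqrt(reliability), and power.t.test() then
gives the resulting power.
####
power.with.error <- function(n, d.true, reliability, sig.level = .05) {
  d.obs <- d.true * sqrt(reliability)  # attenuated standardized effect
  power.t.test(n = n, delta = d.obs, sd = 1, sig.level = sig.level)$power
}
power.with.error(n = 30, d.true = .5, reliability = 1)   # no measurement error
power.with.error(n = 30, d.true = .5, reliability = .7)  # noticeably lower
####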
2008 May 09
1
lme() with two random effects
Hi all,
I have collected response time data from 178 participants ('sub') for
each combination of 4 within-Ss factors ('con','int','tone','cue').
Additionally, I have recorded the gender of each participant, so this
forms a between-Ss factor ('gender'). Normally this would be analyzed
using aov:
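The excerpt cuts off before the call; a plausible form, with the factor names
taken from the post and toy data simulated here purely for illustration:
####
library(nlme)
set.seed(1)
dat <- expand.grid(sub = factor(1:10), con = factor(1:2), int = factor(1:2),
                   tone = factor(1:2), cue = factor(1:2))
dat$gender <- factor(ifelse(as.integer(dat$sub) <= 5, "f", "m"))
dat$rt <- rnorm(nrow(dat), mean = 500, sd = 50)
# classic repeated-measures aov, with subject as the error stratum:
summary(aov(rt ~ gender * con * int * tone * cue
            + Error(sub / (con * int * tone * cue)), data = dat))
# one mixed-model analogue: a by-subject random intercept via lme()
fit <- lme(rt ~ gender * con * int * tone * cue, random = ~1 | sub, data = dat)
anova(fit)
####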
2007 Sep 27
3
Aggregate factor names
Hi all,
A suggestion derived from discussions amongst a number of R users in
my research group: set the default column names produced by
aggregate() equal to the names of the objects in the list passed as
the 'by' argument.
e.g., it is annoying to type
with(
  my.data
  ,aggregate(
    my.dv
    ,list(
      one.iv = one.iv
      ,another.iv = another.iv
      ,yet.another.iv = yet.another.iv
    )
    ,mean  # FUN assumed here; the excerpt cuts off before it
  )
)
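Worth noting for later readers: the formula method that aggregate() eventually
gained sidesteps the complaint by naming result columns after the variables
automatically.
####
aggregate(breaks ~ wool + tension, data = warpbreaks, FUN = mean)
####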
2007 Jun 16
0
Fwd: How to set degrees of freedom in cor.test?
You could calculate the confidence interval of the correlation at
your desired df: http://davidmlane.com/hyperstat/B8544.html
The code below takes the observed correlation, N, and alpha as
arguments, calculates the confidence interval, and checks whether it
includes 0.
cor.test2 = function(r, n, a = .05){
  phi = function(x) log((1 + x) / (1 - x)) / 2               # Fisher z (atanh)
  inv.phi = function(x) (exp(2 * x) - 1) / (exp(2 * x) + 1)  # inverse (tanh)
  # below this point the excerpt is cut off; this is a reconstruction of the
  # standard Fisher interval described above
  ci = inv.phi(phi(r) + c(-1, 1) * qnorm(1 - a / 2) / sqrt(n - 3))
  list(ci = ci, includes.zero = ci[1] <= 0 & ci[2] >= 0)
}
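Given the reconstructed body above, a quick usage check:
####
cor.test2(r = .4, n = 30)  # Fisher CI for r = .4 at N = 30, plus a zero check
####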
2007 Jul 13
2
Suggestion to extend aggregate() to return multiple and/or named values
Hi all,
This is my first post to the developers list. As I understand it,
aggregate() currently applies a function across cells in a data frame
but can only handle functions that return a single value. aggregate()
also lacks the ability to retain the names given to the returned
value. I've created an agg() function (pasted below) that is
apparently backwards compatible (i.e.
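The agg() function itself is cut off; a base-R sketch of the same idea (an
assumption, not the posted code) returns multiple named values per cell via
split() and do.call():
####
cells <- split(warpbreaks$breaks,
               list(wool = warpbreaks$wool, tension = warpbreaks$tension))
do.call(rbind, lapply(cells, function(x) c(mean = mean(x), sd = sd(x))))
####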
2007 Aug 08
3
SWF animation method
Hi all,
Just thought I'd share something I discovered last night. I was
interested in creating animations consisting of a series of plots, and
after finding very little in the usual sources regarding animation in
R directly, and disliking the ImageMagick method described here
(http://tolstoy.newcastle.edu.au/R/help/05/10/13297.html), I
discovered that if one exports the plots to a
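The excerpt ends before the method itself; as a generic stand-in (not the SWF
trick the post goes on to describe), the usual frame-by-frame pattern writes
numbered image files for later assembly:
####
png("frame%03d.png", width = 480, height = 480)  # one numbered file per plot
for (i in 1:20) {
  curve(sin(x + i / 5), from = 0, to = 2 * pi)
}
dev.off()
# frame001.png ... frame020.png can then be stitched by an external tool
####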
2007 Oct 01
3
optimize() stuck in local plateau ?
Hi all,
Consider the following function:
####
my.func = function(x){
  y = ifelse(x > -.5, 0, ifelse(x < -.8, abs(x)/2, abs(x)))
  print(c(x, y))  # print what was tested and what the result is
  return(y)
}
curve(my.func, from = -1, to = 1)
####
When I attempt to find the maximum of this function, which is at
x = -.8, I find that optimize() gets stuck in the plateau region and
doesn't bother testing the
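The call the post implies, plus one workaround (the workaround is an
assumption, not the thread's answer): seed optimize() from a coarse grid so
the plateau cannot swallow the search.
####
optimize(my.func, interval = c(-1, 1), maximum = TRUE)  # stalls on the plateau
grid <- seq(-1, 1, by = .05)
best <- grid[which.max(sapply(grid, my.func))]          # coarse global scan
optimize(my.func, interval = c(best - .05, best + .05), maximum = TRUE)
####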
2007 May 24
2
Calculation of ratio distribution properties
Hi all,
Looking to calculate the expected mean and variance of a ratio
distribution where the source distributions are gaussian with known
parameters and sample values are correlated. I see (from wikipedia:
http://en.wikipedia.org/wiki/
Ratio_distribution#Gaussian_ratio_distribution) that this calculation
is quite involved, so I'm hoping that someone has already coded a
function to
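Not from the thread: the usual second-order (delta-method) approximations are
short enough to code directly, and are reasonable when the denominator's mass
sits well away from zero.
####
ratio.moments <- function(mu.x, mu.y, var.x, var.y, cov.xy) {
  m <- mu.x / mu.y - cov.xy / mu.y^2 + mu.x * var.y / mu.y^3
  v <- (mu.x / mu.y)^2 *
    (var.x / mu.x^2 - 2 * cov.xy / (mu.x * mu.y) + var.y / mu.y^2)
  c(mean = m, var = v)
}
ratio.moments(mu.x = 10, mu.y = 20, var.x = 1, var.y = 1, cov.xy = .5)
####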
2007 Oct 06
1
Tricky vectorization problem
Hi all,
I'm using the code below within a loop that I run thousands of times,
and even with the super-computing resources at my disposal this is
just too slow. The snippet below takes about 10 s on my machines,
which is an order of magnitude or two slower than would be
preferable; in the end I'd like to set the number of Monte Carlo
experiments to 1e4 or even 1e5 to ensure stable
2008 Jul 10
2
Lattice: merged strips?
Hi all,
By default, a call to xyplot() from the lattice package with two
conditioning factors [e.g. xyplot(dv ~ iv | XY * AB)] yields the
following strip structure:
|_A_|_A_|_B_|_B_|
|_X_|_Y_|_X_|_Y_|
However, I'm wondering if it is possible to merge the upper strips
within levels of that factor, as in:
|___A___|___B___|
|_X_|_Y_|_X_|_Y_|
Mike
--
Mike Lawrence
Graduate Student, Department of
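A true merged strip generally needs a custom strip function, but a known
near-miss (offered here as an alternative, not as the thread's resolution) is
latticeExtra::useOuterStrips(), which puts one factor's strips on top and the
other's on the left so neither row repeats:
####
library(lattice)
library(latticeExtra)
d <- expand.grid(iv = 1:10, XY = c("X", "Y"), AB = c("A", "B"))
d$dv <- rnorm(nrow(d))
useOuterStrips(xyplot(dv ~ iv | XY * AB, data = d))
####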
2005 Jul 30
1
xyplot auto.key issue
Hi all,
I'm having a problem with the auto.key argument in xyplot. I hate to
bother the list like this, and I'm positive I must be missing something
very simple, yet I've spent the last day searching for a solution to no
avail. Essentially, I want a key that contains entries in which the plot
points are superimposed on a line of the same color as the points, like
this: o--o--o
Now,
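The attempted code is cut off; one way to get the o--o--o look (an assumption,
not the thread's resolution) is an explicit key whose lines component uses
type = "b", which draws points superimposed on lines:
####
library(lattice)
sup <- trellis.par.get("superpose.symbol")
xyplot(Sepal.Length ~ Sepal.Width, groups = Species, data = iris, type = "b",
       key = list(space = "top", columns = 3,
                  text = list(levels(iris$Species)),
                  lines = list(type = "b", col = sup$col[1:3],
                               pch = sup$pch[1:3])))
####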
2008 Jul 12
2
Quick plotmath question
Hi all,
Worked & looked around for a while on this to no avail. I'm trying to
create a plotmath expression that achieves:
Δi >> 0
and while:
expression(Delta*i>0)
comes close, I'd prefer to have the >> (denoting "very much greater
than"). Maybe >> is a non-standard expression and therefore not
supported?
Mike
--
Mike Lawrence
Graduate
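Plotmath appears to lack a built-in "much greater than"; one workaround (an
assumption, and device/font dependent) embeds the Unicode glyph as a string:
####
plot(1, 1, type = "n")
text(1, 1, expression(Delta * i ~ "\u226B" ~ 0), cex = 2)  # U+226B is >>
####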
2008 Jul 15
1
aov error with large data set
I'm looking to analyze a large data set: a within-Ss 2*2*1500 design
with 20 Ss. However, aov() gives me an error, reproducible as follows:
id = factor(1:20)
a = factor(1:2)
b = factor(1:2)
d = factor(1:1500)
temp = expand.grid(id=id, a=a, b=b, d=d)
temp$y = rnorm(nrow(temp))  # generate some random DV data
this_aov = aov(
  y ~ a * b * d + Error(id / (a * b * d))
  , data = temp
)
Which yields the
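The error text is cut off, but fully crossing a 1500-level factor into Error()
makes aov() build enormous model matrices. One pragmatic reformulation (an
assumption, not the thread's advice) treats the large factor as random in a
mixed model:
####
library(lme4)
fit <- lmer(y ~ a * b + (1 | id) + (1 | d), data = temp)
summary(fit)
####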
2008 Jul 17
2
Sampling distribution (PDF & CDF) of correlation
Hi all,
I'm looking for an analytic method to obtain the PDF & CDF of the
sampling distribution of a given correlation (rho) at a given sample
size (N).
I've attached code describing a Monte Carlo method of achieving this,
and while it is relatively fast, an analytic solution would obviously
be optimal.
get.cors <- function(i, x, y, N){
  end = i * N
  cor(x[(end - N + 1):end], y[(end - N + 1):end])  # assumed continuation;
}                                                  # the excerpt cuts off here
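An analytic route (not necessarily what the thread settled on): Fisher's z
gives a closed-form approximation, atanh(r) ~ Normal(atanh(rho), 1/(N - 3)),
from which the CDF and, by change of variables, the PDF follow:
####
pcorr <- function(r, rho, N) pnorm(atanh(r), atanh(rho), 1 / sqrt(N - 3))
dcorr <- function(r, rho, N)
  dnorm(atanh(r), atanh(rho), 1 / sqrt(N - 3)) / (1 - r^2)
pcorr(.3, rho = .5, N = 30)  # approximate P(r <= .3) at rho = .5, N = 30
####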
2008 Jul 10
1
compiling pnmath on an Intel processor running Mac OS X 10.5
Has anyone successfully compiled pnmath
(http://www.stat.uiowa.edu/~luke/R/experimental) for an Intel
processor running Mac OS X 10.5? When I attempt to do so via the R
package installer (choosing "Local Source Package" and pointing to the
pnmath_0.0-2.tar.gz file), I get the following errors:
* Installing *source* package 'pnmath' ...
** libs
** arch - i386
gcc -arch i386
2006 Jun 06
2
error bars in lattice xyplot *with groups*
Hi all,
I'm trying to plot error bars in a lattice plot generated with xyplot. Deepayan
Sarkar has provided a very useful solution for simple circumstances
(https://stat.ethz.ch/pipermail/r-help/2005-October/081571.html), yet I am
having trouble getting it to work when the "groups" setting is enabled in
xyplot (i.e. multiple lines). To illustrate this, consider the singer data
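The thread's resolution isn't shown; the usual mechanism (sketched here on
assumed toy data) is that panel.superpose() forwards each group's subscripts
to panel.groups, so per-point error bounds can be indexed inside it:
####
library(lattice)
set.seed(1)
d <- expand.grid(iv = 1:5, g = c("a", "b"))
d$dv <- rnorm(10, mean = d$iv + 2 * as.integer(d$g))
d$se <- runif(10, .2, .5)
xyplot(dv ~ iv, groups = g, data = d, type = "b",
       lo = d$dv - d$se, hi = d$dv + d$se,
       panel = panel.superpose,
       panel.groups = function(x, y, subscripts, lo, hi, ...) {
         panel.xyplot(x, y, ...)
         panel.arrows(x, lo[subscripts], x, hi[subscripts],
                      angle = 90, code = 3, length = .05)
       })
####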
2007 Jun 20
2
how to create cumulative histogram from two independent variables?
Hi all,
I am a complete newbie to R. Can anybody jump-start me with any clues
as to how to get a cumulative histogram from two independent
variables, cumhist(X, Y)?
-jose
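One reading of the request (an assumption; the post never defines cumhist) is
the empirical joint CDF of X and Y evaluated on a grid:
####
cumhist <- function(x, y, nbins = 20) {  # hypothetical helper, not a real API
  gx <- seq(min(x), max(x), length.out = nbins)
  gy <- seq(min(y), max(y), length.out = nbins)
  z <- outer(gx, gy, Vectorize(function(a, b) mean(x <= a & y <= b)))
  list(x = gx, y = gy, z = z)
}
set.seed(1)
res <- cumhist(rnorm(1000), rnorm(1000))
persp(res$x, res$y, res$z, theta = 30, phi = 30)  # empirical joint CDF surface
####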
2007 Jun 22
1
connecting to running process possible?
Hello,
I'm trying to find a more modern system to reproduce the functionality
that was available through the Histoscope program (from Fermilab):
namely, the capability of connecting to a running process and having
plots update in real time in response to new data. Is this possible
with R? Thank you,
Charles Cosse
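R has no built-in equivalent of Histoscope's live attach, as far as I know; a
crude polling sketch (the file name is hypothetical) rereads a file the
process appends to and redraws once per second:
####
repeat {
  d <- read.csv("live_data.csv")  # hypothetical file written by the process
  plot(d$t, d$value, type = "l")
  Sys.sleep(1)                    # poll interval
}
####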
2005 Oct 02
0
What is Mandel's (Fitting) Test?
Hello everyone,
A little background first:
I have collected psychophysical data from 12 participants, and each
participant's data is represented as a scatter plot (Percieved roughness versus
Physical roughness). I would like to know whether, on average, this data is
best fit by a linear function or by a quadratic function. (we have a priori
reasons to expect a quadratic)
Some of my
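Mandel's fitting test is, in essence, an F-test of whether the quadratic term
significantly improves on the straight line, which anova() performs directly;
a sketch with simulated data:
####
set.seed(1)
x <- 1:20
y <- 2 + .5 * x + .05 * x^2 + rnorm(20)
lin <- lm(y ~ x)
quad <- lm(y ~ x + I(x^2))
anova(lin, quad)  # a significant F says the quadratic fits reliably better
####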