Displaying 20 results from an estimated 2000 matches similar to: "Help using substitute and expression functions"
2017 Oct 20
1
create a loop
Hi R Users,
I have very big data sets and want to run some of the analyses many
times with randomization (1000 times).
I have done the analysis using example data, but it needs to be done with
randomized data (1000 times). Doing this manually takes
so much time; I wonder whether it is possible to perform the
analysis by creating a loop over many replicated datasets?
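A loop along these lines would automate it; this is a minimal sketch, assuming the randomization is a permutation of the response, with invented data and a t-test standing in for the poster's actual analysis:

```r
# Run an analysis on 1000 randomized (permuted) copies of a dataset.
# 'dat' and the t-test below are hypothetical stand-ins.
set.seed(1)
dat <- data.frame(group = rep(c("A", "B"), each = 10), y = rnorm(20))

n_reps <- 1000
results <- numeric(n_reps)
for (i in seq_len(n_reps)) {
  shuffled <- dat
  shuffled$y <- sample(shuffled$y)           # randomize the response
  fit <- t.test(y ~ group, data = shuffled)  # the analysis to repeat
  results[i] <- fit$statistic
}
# 'results' now holds one statistic per randomized dataset
```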
2005 Mar 29
2
matching vectors against vectors
Hi all.
I have a recurring problem that I don't know how to solve
efficiently.
The situation is the following: I have a number of data-sets
(A,B,C,...) , consisting of an identifier (e.g. 11,12,13,...,20) and a
measurement (e.g. in the range 100-120). I want to compile a large
table, with all available identifiers in all data-sets in the rows, and
a column for every
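A full outer merge compiles exactly such a table; a sketch using three tiny invented data sets (identifiers and values are assumptions):

```r
# Hypothetical data sets: each has an identifier and a measurement.
A <- data.frame(id = c(11, 12, 14), A = c(101, 105, 110))
B <- data.frame(id = c(12, 13, 14), B = c(103, 108, 112))
C <- data.frame(id = c(11, 13),     C = c(100, 119))

# Full outer merge: every identifier appears as a row,
# and a measurement missing from a data set becomes NA.
tab <- Reduce(function(x, y) merge(x, y, by = "id", all = TRUE),
              list(A, B, C))
tab
```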
2007 Apr 26
2
path autocompletion in 2.5.0
Hi,
R 2.5.0 isn't auto-completing paths properly as it used to. E.g.
suppose I have:
> dir("CEL/choe")
[1] "chipC-rep1.CEL" "chipC-rep2.CEL" "chipC-rep3.CEL" "chipS-rep1.CEL"
[5] "chipS-rep2.CEL" "chipS-rep3.CEL"
Now if I do:
ReadAffy("CEL/choe/ch<tab> # => ReadAffy("CEL/choe/chip
2007 Apr 13
2
replicates in repeated ANOVA
Hi,
I have sort of a newbie question. I've put a lot of effort into figuring out how to handle simple replicates in a repeated-measures ANOVA design, but haven't had much luck.
I really liked reading "Notes on the use of R for psychology experiments and questionnaires", by Jonathan Baron and Yuelin Li ( http://www.psych.upenn.edu/~baron/rpsych/rpsych.html ) but still didn't run across
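One common approach is to average the replicates within each subject-by-condition cell and then fit the usual within-subject ANOVA; a sketch with invented data (the design and names are assumptions, not the poster's):

```r
# Hypothetical repeated-measures data: 6 subjects, 3 conditions,
# 2 replicates per subject x condition cell.
set.seed(2)
d <- expand.grid(subject = factor(1:6),
                 cond    = factor(c("a", "b", "c")),
                 rep     = 1:2)
d$y <- rnorm(nrow(d))

# Average the replicates within each cell, then fit the
# standard within-subject ANOVA with subject as the error stratum.
cell <- aggregate(y ~ subject + cond, data = d, FUN = mean)
fit <- aov(y ~ cond + Error(subject/cond), data = cell)
summary(fit)
```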
2011 Jun 20
0
Mixed model for count data?
Hi all,
I have a rather peculiar dataset that I'm not sure how to model
properly. This is data from an instrument that measures the size of
particles but instead of giving a continuous value, it generates a
"histogram" of the counts for a particular bin size. So the data looks
like this:
Condition 1
Condition 2
Dimension Rep1.A Rep1.B
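Binned counts like these are often modelled as Poisson; a hedged sketch with invented data, treating the bin as a covariate (a true mixed model would move the replicate term into a random effect, e.g. via lme4's glmer):

```r
# Hypothetical binned particle counts: one row per size bin,
# replicate, and condition.
set.seed(3)
d <- expand.grid(bin  = 1:10,
                 rep  = factor(1:3),
                 cond = factor(c("c1", "c2")))
d$count <- rpois(nrow(d), lambda = 5)

# Poisson GLM with bin as a covariate; a mixed model would
# replace the fixed 'rep' term with a random effect.
fit <- glm(count ~ bin + cond + rep, family = poisson, data = d)
coef(fit)
```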
2007 Sep 08
1
Problem with the aggregate command
Dear friends,
I have a data set with 23 columns and 38,000 rows. It is a panel running from 1991 through 2005. I want to aggregate the data and get the median of each of the 23 columns for each year. In other words, my output should look like this:
Year Median
1991 123
1992 145
1993 132
etc.
The sample lines of code for this operation are:
set1 <-
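The truncated command presumably continues along these lines; the column names here are invented stand-ins:

```r
# Hypothetical panel: one 'Year' column plus numeric measurements.
set.seed(4)
panel <- data.frame(Year = rep(1991:1993, each = 5),
                    v1 = runif(15), v2 = runif(15))

# Median of every column within each year.
set1 <- aggregate(. ~ Year, data = panel, FUN = median)
set1
```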
2012 Apr 13
5
Merging two data frames with different columns names
I am trying to merge two data frames, but one of the column headings is
different in the two frames. How can I join or rbind the two frames?
Johnny
# Generate 2 blocks by confounding on abc
d1 <- conf.design(c(1,1,1), p=2, block.name="blk", treatment.names =
c("A","B","C"))
d2 <- conf.design(c(1,1,1), p=2, block.name="blk",
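merge() can join on differently named key columns via by.x/by.y; a sketch with two tiny invented frames rather than the conf.design output above (rbind(), by contrast, requires identical column names):

```r
# Two hypothetical frames where the key column is named differently.
d1 <- data.frame(blk = 1:3, A = c("x", "y", "z"))
d2 <- data.frame(block = 2:4, B = c(10, 20, 30))

# merge() accepts different key names on each side:
m <- merge(d1, d2, by.x = "blk", by.y = "block")
m
```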
2003 Dec 23
0
Permissions Problems Problem solved!
I had forgotten to set the permissions on the upload folder on the
remote server. Once I set the perms correctly, it worked. Thanks to
everyone for the help!
-----Original Message-----
Here's my command copied from a shell script:
rsync --verbose --progress --stats --compress --rsh=/usr/bin/ssh
--recursive --times --perms --links \
/home/* trt@xxxxxx.xxxxxxx.xxx:/remotebackups/
2003 Dec 23
4
Permissions Problems
Here's my command copied from a shell script:
rsync --verbose --progress --stats --compress --rsh=/usr/bin/ssh
--recursive --times --perms --links \
/home/* trt@xxxxxx.xxxxxxx.xxx:/remotebackups/
Here's some (a small part) of the output:
jk/.recently-used
253 100% 0.00kB/s 0:00:00
rsync: recv_generator: mkdir "jk/.secpanel/.runfiles": Permission denied
(2)
stat
2013 May 01
3
grep help (character omission)
Hello,
Banging my head against a wall here ... can anyone light the way to a
pattern modification that would make the following TRUE?
identical(
grep(
"^Intensity\\s[^HL]",
c("Intensity","Intensity L", "Intensity H", "Intensity Rep1")),
as.integer(c(1,4)))
Thank you for your time.
Sincerely, Joh
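The pattern fails on the bare "Intensity" because \s[^HL] demands a character after the word. A Perl-style negative lookahead covers both cases:

```r
x <- c("Intensity", "Intensity L", "Intensity H", "Intensity Rep1")

# Match "Intensity" NOT followed by " H" or " L":
hits <- grep("^Intensity(?!\\s[HL])", x, perl = TRUE)
identical(hits, as.integer(c(1, 4)))  # TRUE
```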
2006 Aug 15
0
Help with workaround for: Function '`[`' is not in the derivatives table
Earl F. Glynn asks:
> -----Original Message-----
> From: r-help-bounces at stat.math.ethz.ch
[mailto:r-help-bounces at stat.math.ethz.ch] On Behalf Of Earl F. Glynn
> Sent: Tuesday, 15 August 2006 8:44 AM
> To: r-help at stat.math.ethz.ch
> Subject: [R] Help with workaround for: Function '`[`' is not in
the derivatives table
>
> # This works fine:
> > a <- 1
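deriv() has no differentiation rule for '[', so expressions like theta[1] * x fail; the usual workaround is to give each indexed element its own name. A small sketch (the formula is an invented example):

```r
# Instead of 'theta[1] * x + theta[2] * x^2', name the elements:
d <- deriv(~ th1 * x + th2 * x^2, c("th1", "th2"))

th1 <- 2; th2 <- 3; x <- 1
val <- eval(d)
val                       # value 5, gradient c(1, 1)
attr(val, "gradient")
```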
2004 Aug 23
1
Two factor ANOVA with lm()
The following is a data frame
> "jjd" <- structure(list(Observations = c(6.8, 6.6, 5.3, 6.1,
7.5, 7.4, 7.2, 6.5, 7.8, 9.1, 8.8, 9.1), LevelA = structure(c(1,
1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3), .Label = c("A1", "A2",
"A3"), class = "factor"), LevelB = structure(c(1, 1, 2, 2,
1, 1, 2, 2, 1, 1, 2, 2), .Label =
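Since the posting is truncated, the LevelB labels below ("B1", "B2") are assumptions; with the frame reconstructed, the two-factor ANOVA with interaction is one lm() call:

```r
# Reconstructed frame; LevelB labels are assumed.
jjd <- data.frame(
  Observations = c(6.8, 6.6, 5.3, 6.1, 7.5, 7.4,
                   7.2, 6.5, 7.8, 9.1, 8.8, 9.1),
  LevelA = factor(rep(c("A1", "A2", "A3"), each = 4)),
  LevelB = factor(rep(c("B1", "B1", "B2", "B2"), times = 3))
)

# Two-factor ANOVA with interaction via lm():
fit <- lm(Observations ~ LevelA * LevelB, data = jjd)
anova(fit)
```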
2002 Oct 17
2
data.frame bug?
I'd like to create a data frame with components
> jk$x1
[1] 2
> jk$x2
[,1] [,2]
[1,] 0 0
I used to be able to do it with
> jk <- data.frame(x1=2,x2=I(matrix(0,1,2)))
But now I get a error message.
Can I still do what I want? Thanks for any help.
Chong Gu
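In recent R versions the I() construction does produce the desired one-row frame with a matrix column; a quick check:

```r
# A data frame with a scalar column and a 1 x 2 matrix column.
jk <- data.frame(x1 = 2, x2 = I(matrix(0, 1, 2)))
jk$x1        # 2
jk$x2        # 1 x 2 matrix of zeros
dim(jk$x2)   # 1 2
```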
r-help mailing list --
2013 Apr 16
1
An error occurred while installing jk-ferret (0.11.8.3), and Bundler cannot continue.
Hi guys,
I got this type of error when I configured my Gemfile.
What should I do?
gem "thinking-sphinx"
gem 'ferret'
Updating git://github.com/mark-moseley/linecache
Fetching gem metadata from https://rubygems.org/.......
Fetching gem metadata from https://rubygems.org/..
Resolving dependencies...
Using rake (10.0.3)
Using ClothRed (0.4.1)
Using RedCloth (4.2.9)
2004 Aug 06
0
RE: Please confirm your message
> -----Original Message-----
> From: speex-dev@xiph.org [mailto:speex-dev@xiph.org]
> Sent: Sunday, May 11, 2003 11:26 PM
> To: jk@pageshare.com
> Subject: Please confirm your message
>
>
> Hello, this is the mailing list anti-spam filter at Xiph.Org.
> We need you to confirm your e-mail message with the subject of
> "Using speex.".
>
> Please
2004 Feb 13
1
How to get time differences in consistent units?
I'm still having trouble getting to grips with time classes.
I wish to calculate the difference in days between events.
Browse[1]> insp.j$First
[1] "2002-02-19 13:00:00 NZDT"
Browse[1]> spray.j$Date
[1] "2001-11-29 13:00:00 NZDT"
Browse[1]> insp.jk - spray.j$Date
Time difference of 82 days
If I save insp.jk to a vector, I get a nice useful value of 82.
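Asking difftime() for days explicitly guarantees consistent units whatever the size of the gap (the timestamps below mirror the ones quoted, with the timezone simplified to UTC):

```r
first <- as.POSIXct("2002-02-19 13:00:00", tz = "UTC")
spray <- as.POSIXct("2001-11-29 13:00:00", tz = "UTC")

# Request days explicitly rather than relying on the default unit:
d <- difftime(first, spray, units = "days")
as.numeric(d)  # 82
```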
2016 Feb 24
2
Deleting / Removing users
I have a list of users that I've removed from LDAP, and I want to delete their mail storage.
sdbox
Dovecot 2.2.15.8
I have mail messages in one location and indexes in another. Should I just
rm -rf /messages/<username>
rm -rf /indexes/<username>
Thanks,
JK
-----------------------------------------------------------------------
John "JK" Krug
System Administrator
The
2009 Feb 03
1
Poor performance in Jedi Knight: Dark Forces II
Hi,
So I'm going to be upfront about this: I don't know how to fix this problem or even who to talk to. I've tried the JK community, #winehq, #ati, and even the ATI forums, but there was no help to be offered.
Here's the problem...
Playing Jedi Knight: Dark Forces II (http://appdb.winehq.org/objectManager.php?sClass=application&iId=122) on Wine, I get such horrid framerates
2005 May 31
1
read.delim2 regarding "#"
Hello R experts:
When I tried to read my data into R, it did not handle the
"#" sign.
A subset of Exp.txt is:
Experiment name assay id Varname
(A1)DBA TPA 6h/DBA Acetone rep1(A1) #3 4090 A90C1
(A2)DBA TPA 6h/DBA Acetone rep2(A2) #3 4091 A91C1
The command is:
Exp <- read.delim2("Exp.txt",check.names=F,as.is=T)
It executed but gave me all NAs. Can anyone
drop me a hint?
Thanks
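One likely culprit is the comment character: if "#" is treated as a comment, everything after it is dropped. Setting comment.char = "" makes "#" ordinary data; a self-contained sketch (the file contents below are abbreviated stand-ins for Exp.txt):

```r
# Write a small tab-separated example containing "#" in a field.
tf <- tempfile(fileext = ".txt")
writeLines(c("Experiment name\tassay id\tVarname",
             "(A1)DBA TPA 6h rep1 #3\t4090\tA90C1",
             "(A2)DBA TPA 6h rep2 #3\t4091\tA91C1"), tf)

# comment.char = "" ensures "#" is read as ordinary data:
Exp <- read.delim2(tf, check.names = FALSE, as.is = TRUE,
                   comment.char = "")
Exp[["assay id"]]
```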
2004 May 20
1
Repeated measures ANOVA
Dear friends
I am not sure that I am conducting this analysis correctly. I would really appreciate it if someone could verify what I've done.
I conducted repeated measures ANOVA for some bugs data. These bugs were measured repeatedly over 32 weeks at the same trapping plots. I want to test a full model for the effect of time ("week") (the "within subject" variable), and the
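The classical aov() formulation puts the trapping plot in the Error stratum; a sketch with invented counts and a reduced number of weeks (the real design has 32):

```r
# Hypothetical bug counts: 8 plots each measured over 4 weeks.
set.seed(5)
d <- expand.grid(plot = factor(1:8), week = factor(1:4))
d$count <- rpois(nrow(d), lambda = 10)

# 'week' is the within-subject factor; 'plot' is the subject:
fit <- aov(count ~ week + Error(plot/week), data = d)
summary(fit)
```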