similar to: Simulate phi-coefficient

Displaying 20 results from an estimated 2000 matches similar to: "Simulate phi-coefficient"

2005 Sep 27
1
Simulate phi-coefficient (correlation between dichotomous vars)
Newsgroup members, I appreciate the help on this topic. David Duffy provided a solution (below) that was quite helpful, and came close to what I needed. It did a great job creating two vectors of dichotomous variables with a known correlation (what I referred to as a phi-coefficient). My situation is a bit more complicated and I'm not sure it is easily solved. The problem is that I must
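One way to hit a target phi with given prevalences (a minimal sketch, not from the thread; p1, p2, phi and n are illustrative values) is to build the 2x2 cell probabilities directly and sample from them:

simulate_phi <- function(n, p1, p2, phi) {
  ## phi = (p11 - p1*p2) / sqrt(p1*(1-p1)*p2*(1-p2)), solved for p11
  p11 <- phi * sqrt(p1 * (1 - p1) * p2 * (1 - p2)) + p1 * p2
  p10 <- p1 - p11
  p01 <- p2 - p11
  p00 <- 1 - p11 - p10 - p01
  probs <- c(p00, p01, p10, p11)
  if (any(probs < 0)) stop("phi not attainable with these prevalences")
  cells <- sample(0:3, n, replace = TRUE, prob = probs)
  data.frame(x = cells %/% 2, y = cells %% 2)
}
d <- simulate_phi(1000, p1 = 0.3, p2 = 0.5, phi = 0.4)
cor(d$x, d$y)   # should land near 0.4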
2006 Jun 28
1
Simulate dichotomous correlation matrix
Newsgroup members, Does anyone have a clever way to simulate a correlation matrix such that each column contains dichotomous variables (0,1) and where each column has different prevalence rates. For instance, I would like to simulate the following correlation matrix: > CORMAT[1:4,1:4] PUREPT PTCUT2 PHQCUT2T ALCCUTT2 PUREPT 1.0000000 0.5141552 0.1913139 0.1917923 PTCUT2
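A rough sketch of one approach (illustrative, not from the thread): threshold a multivariate normal draw at quantiles matching each column's prevalence. The correlations of the 0/1 columns come out attenuated relative to the latent correlation matrix, so the latent matrix would need calibrating upward to hit exact targets.

library(MASS)
prev   <- c(0.40, 0.25, 0.30, 0.15)              # target prevalence per column (made up)
latcor <- matrix(0.5, 4, 4); diag(latcor) <- 1   # assumed latent correlation matrix
z <- mvrnorm(n = 5000, mu = rep(0, 4), Sigma = latcor)
x <- sapply(seq_along(prev), function(j) as.integer(z[, j] > qnorm(1 - prev[j])))
colMeans(x)   # approximate prevalences
cor(x)        # resulting (attenuated) correlations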
2008 Nov 08
4
missing value where TRUE/FALSE needed
Hello dear R people, for my MSc thesis I need to program some functions, and some of them simply do not work. In the following example, I made sure both vectors have the same length (10), but R gives me the following error: Error in if (vector1[i] == vector2[j]) { : missing value where TRUE/FALSE needed I googled for possible solutions, but I did not find a good explanation for this...
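The usual cause (a sketch, assuming one of the vectors contains an NA): if() needs a single TRUE/FALSE, and NA == anything is NA. Guarding the comparison avoids the error.

vector1 <- c(1, NA, 3)          # illustrative data with an NA
vector2 <- c(1, 2, 3)
i <- 2; j <- 2
if (isTRUE(vector1[i] == vector2[j])) print("equal")   # NA is treated as not-TRUE
## or test for NA explicitly before comparing:
if (!is.na(vector1[i]) && !is.na(vector2[j]) && vector1[i] == vector2[j]) print("equal")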
2005 Jun 20
3
vectorisation suggestion
Hi All, I am counting the number of occurrences of the terms listed in one vector in another vector. My code runs: for( i in 1:length(vector3)){ vector3[i] = sum(1*is.element(vector2, vector1[i])) } where vector1 = vector containing the terms whose occurrences I want to count vector2 = made up of a number of repetitions of all the elements of vector1 vector3 = a vector of NAs that is
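A vectorised version of the loop (sketch, reusing the post's names with made-up data):

vector1 <- c("a", "b", "c")
vector2 <- sample(vector1, 30, replace = TRUE)
vector3 <- sapply(vector1, function(x) sum(vector2 == x))
## or in one call, keeping vector1's order:
table(factor(vector2, levels = vector1))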
2012 Jul 30
4
A "matching problem"
Dear all, I have encountered a typical matching problem and was wondering whether R can help me solve it directly. Let's say I have 2 vectors of equal length: vector1 <- LETTERS[1:6] vector2 <- letters[1:6] Now I need to match these 2 vectors in all possible ways, e.g.: (A,B,C,D,E) & (a,b,c,d,e) is 1 match. Another match can be (A,B,C,D,E) & (b,a,c,d,e), however there
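A sketch of one way to score every arrangement (assumes the combinat package; five elements used for illustration): compare vector1 position-wise with each permutation of vector2, ignoring case.

library(combinat)
vector1 <- LETTERS[1:5]
vector2 <- letters[1:5]
scores <- sapply(permn(vector2), function(p) sum(tolower(vector1) == p))
table(scores)   # distribution of position-wise match counts over all 5! arrangements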
2006 Jan 05
4
ylim problem in barplot
R Version 2.2.0 Platform: Windows When I use barplot but select a ylim value greater than zero, the graph is distorted. The bars extend below the bottom of the graph. For instance, the command barplot(c(200,300,250,350),ylim=c(150,400)) produces a problematic graph. Any help would be appreciated. Paul
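The bars are drawn from zero and spill below the axis because barplot does not clip them by default; turning clipping on is one fix (sketch):

barplot(c(200, 300, 250, 350), ylim = c(150, 400), xpd = FALSE)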
2005 May 31
3
lars / lasso with glm
We have been using Least Angle Regression (lars) to help identify predictors in models where the outcome is continuous. To do so we have been relying on the lars package. Theoretically, it should be possible to use the lars procedure within a generalized linear model (glm) framework - we are particularly interested in a logistic regression model. Does anyone have examples of using lars with logistic
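One option for the lasso side of this with a binary outcome (a sketch, assuming the glmnet package is acceptable): an L1-penalised logistic regression.

library(glmnet)
set.seed(1)
x <- matrix(rnorm(100 * 10), 100, 10)                    # illustrative predictors
y <- rbinom(100, 1, plogis(x[, 1] - x[, 2]))             # illustrative binary outcome
fit <- cv.glmnet(x, y, family = "binomial", alpha = 1)   # alpha = 1 -> lasso penalty
coef(fit, s = "lambda.min")                              # coefficients of selected predictors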
2002 Jan 16
4
faster vector subtraction??
hi is there a faster way to do this? i <- 1 for(x in vector1) for(y in vector2) { m[[i]] <- (x - y) i <- i + 1 } regards soren
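The double loop is the outer difference of the two vectors (sketch with illustrative values):

vector1 <- 1:5
vector2 <- c(10, 20, 30)
m <- outer(vector1, vector2, "-")   # all pairwise x - y
as.vector(t(m))                     # same order as the original loop (y varies fastest)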
2008 Jun 13
1
x86 SSE* Pointer Favors
Dear Statisticians--- This is not even an R question, so please forgive me. I have so much ignorance in this matter that I do not know where to begin. I hope someone can point me to documentation and/or a sample. I want to compute a covariance as quickly as non-humanly possible on an Intel core processor (up to SSE4) under linux. Alas, I have no idea how to engage CPU vectorization. Do I need
2010 Jan 30
2
parsing files for plot
Hi, I have many files containing one column of data. I'd like to use the scan function to parse the data. Next I'd like to bind it into a large vector. I try this like: count<-1 files <- list.files() # all files in the working directory for(i in files) { tmp <- scan(i) assign(files[count], tmp) count<-count+1 } This part works! Now I'd like to plot the data in a boxplot.
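A sketch of one way to get the boxplot without assign(): keep each file's data in a named list and plot the list.

files <- list.files()               # same working-directory files as above
dat <- lapply(files, scan)
names(dat) <- files
boxplot(dat)                        # one box per file
## or as one long vector with a grouping factor:
boxplot(unlist(dat) ~ rep(files, times = lengths(dat)))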
2013 Sep 04
3
Fwd: Welcome to the R-help-es distribution list
Hi Jose, if by CONCATENAR you mean stacking, that is, concatenating vertically so to speak, you could do it with rbind(): nuevovector <- rbind(vector1,vector2) If you also want each value of the original vectors to be identified in nuevovector, you can use: nuevovector <- stack(vector1,vector2) in this last case an additional factor-type column is added, with
2009 Apr 03
1
Hello! I got error in C - R
Hello, My name is Ick Hoon Jin and I am a Ph.D. student at Texas A&M Univ. When I run C code embedded in R on a Linux system, I run into the following error after 6,000 iterations. By googling I found that this error comes from a problem in the C code. *** caught segfault *** address (nil), cause 'memory not mapped' My C code is the following:
2010 Feb 02
1
Finding the difference between two vectors
Hello everyone, I have two vectors having only one element different: vector1 vector2 vector1 TWC TWC TWC VFC TWX NA VIA/B VFC VFC
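Sketch with illustrative vectors (the post's data is truncated above): setdiff() gives the elements unique to each side, and which() locates a position-wise mismatch when the vectors are aligned.

vector1 <- c("TWC", "VFC", "TWX", "VIA/B")
vector2 <- c("TWC", "VFC", "VIA/B")
setdiff(vector1, vector2)   # elements only in vector1
setdiff(vector2, vector1)   # elements only in vector2
## if the vectors are the same length and aligned:
## which(vector1 != vector2)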
2003 Aug 26
3
matching-case sensitivity
Hi All, I am trying to match two character arrays (email lists) using either pmatch(), match() or charmatch() functions. However the function is "missing" some matches due to differences in the cases of some letters between the two arrays. Is there any way to disable case sensitivity or is there an entirely better way to match two character arrays that have identical entries but written
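A sketch of the usual workaround: fold both arrays to one case before matching (addresses here are made up).

emails1 <- c("Alice@Example.com", "bob@example.com")
emails2 <- c("alice@example.com", "BOB@EXAMPLE.COM", "carol@example.com")
match(tolower(emails1), tolower(emails2))   # positions in emails2
tolower(emails1) %in% tolower(emails2)      # or just test membership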
2006 Aug 24
2
Why are lagged correlations typically negative?
Recently, I was working with some lagged designs where a vector of observations at one time was used to predict a vector of observations at another time using a lag 1 design. In the work, I noticed a lot of negative correlations, so I ran a simple simulation with 2 matched points. The crude simulation example below shows that the correlation can be -1 or +1, but interestingly if you do this
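For the two-point case this is expected: with only two matched observations the correlation is the sign of the product of the two within-pair differences, so it is always exactly -1 or +1 (sketch):

set.seed(1)
replicate(10, {x <- rnorm(2); y <- rnorm(2); cor(x, y)})   # only -1 or +1 appear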
2005 May 26
2
read.spss in R 2.1.0 & make basic dataframe
Recent changes to read.spss() in the foreign package return a dataframe containing additional attributes. For example, >TEMP<-read.spss(choose.files(), to.data.frame=T,use.value.labels=F) > str(TEMP) `data.frame': 780 obs. of 8 variables: $ EXPOS01: atomic 1 1 2 1 2 3 2 4 2 1 ... ..- attr(*, "value.labels")= Named num 5 4 3 2 1 .. ..- attr(*,
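One way to get back to a plain data frame (sketch, using TEMP as read in above): drop the per-column "value.labels" attributes.

TEMP[] <- lapply(TEMP, function(col) { attr(col, "value.labels") <- NULL; col })
str(TEMP)   # columns now print without the attr lines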
2008 Jan 08
1
plotting help request II
Dear all, meanwhile I found out how to handle the coordinate thing and plot the lines the way I like. The remaining problem is that I need something like what names.arg does in barplot, but for my plot. My plot connects several dots with several lines, and now I would like characters as the labels at the tick positions on the x-axis. vector1=c(a,b) vector2=c(c,d) # this vector makes sure the yellow line starts at
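Character tick labels can be added by suppressing the default x axis and drawing one with axis() (sketch, illustrative coordinates and labels):

x <- 1:4
y <- c(2, 5, 3, 7)
plot(x, y, type = "o", xaxt = "n", xlab = "")
axis(1, at = x, labels = c("a", "b", "c", "d"))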
2005 Jan 24
1
mcnemar.test odds ratios, CI, etc.
Does anyone know of another version of the McNemar test that provides: 1. Odds Ratios 2. 95% Confidence intervals of the Odds Ratios 3. Sample probability 4. 95% Confidence intervals of the sample probability Obviously the Odds Ratios and Sample probabilities are easy to calculate from the contingency table, but I would appreciate any help on how to calculate the confidence
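A sketch of the usual large-sample formulas (illustrative counts, not a packaged function): the paired odds ratio uses only the discordant cells b and c.

tab <- matrix(c(20, 15, 5, 60), 2, 2)    # made-up paired 2x2 table
b <- tab[1, 2]; c_ <- tab[2, 1]          # discordant cells
or <- b / c_                             # McNemar / conditional odds ratio
or_ci <- exp(log(or) + c(-1, 1) * qnorm(0.975) * sqrt(1 / b + 1 / c_))
p  <- b / (b + c_)                       # sample probability among discordant pairs
p_ci <- p + c(-1, 1) * qnorm(0.975) * sqrt(p * (1 - p) / (b + c_))
c(OR = or, lower = or_ci[1], upper = or_ci[2])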
2011 Nov 02
1
problem with merging two matrices
Dear all, I hope you can forgive my stupid questions, but I am a very new R user (; So, this is my question: I have two matrices, which are: matrix1 <- matrix(cbind(vector1, vector2), 1,2, dimnames = list(c("values"), c("T value", "p value"))) matrix2 <- matrix(dcbind,2,6,dimnames =
2005 May 31
1
apply the function "factor" to multiple columns
I have a case where I would like to change multiple columns containing numbers to factors. I can change each column one at a time as in: TEMP.FACT$EXPOS01<-factor(TEMP.FACT$EXPOS01,levels=c(1,2,3),labels=c("None","Low Impact","MedHigh Imp")) TEMP.FACT$EXPOS02<-factor(TEMP.FACT$EXPOS02,levels=c(1,2,3),labels=c("None","Low
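The per-column calls can be collapsed with lapply over the chosen columns (sketch, column names as in the post):

cols <- c("EXPOS01", "EXPOS02")
TEMP.FACT[cols] <- lapply(TEMP.FACT[cols], factor,
                          levels = c(1, 2, 3),
                          labels = c("None", "Low Impact", "MedHigh Imp"))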