similar to: A contingency table of counts by case

Displaying 20 results from an estimated 2000 matches similar to: "A contingency table of counts by case"

2006 Sep 22
3
Compiling a contingency table of counts by case
I have asked a similar question before but this time the problem is somewhat more involved. I have the following data: case;name;x 1;Joe;1 1;Mike;1 1;Zoe;1 2;Joe;1 2;Mike;0 2;Zoe;1 2;John;1 3;Mike;1 3;Zoe;0 3;Karl;0 I would like to count the number of "case" in which any two "name" a. both have "x=1", b. the first has "x=0" - the second has
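
A minimal sketch (not the thread's solution) of one way to count, for every pair of names, the cases in which both have x = 1; the other patterns (first 0, second 1, and so on) follow from the same self-merge by swapping x.1 for 1 - x.1. The object names below are made up for the example.

    dat <- data.frame(case = c(1, 1, 1, 2, 2, 2, 2, 3, 3, 3),
                      name = c("Joe", "Mike", "Zoe", "Joe", "Mike", "Zoe",
                               "John", "Mike", "Zoe", "Karl"),
                      x = c(1, 1, 1, 1, 0, 1, 1, 1, 0, 0))
    # pair up all observations that share a case, keeping each unordered pair once
    pairs <- merge(dat, dat, by = "case", suffixes = c(".1", ".2"))
    pairs <- subset(pairs, as.character(name.1) < as.character(name.2))
    # number of cases in which both members of the pair have x = 1
    pairs$both1 <- pairs$x.1 * pairs$x.2
    aggregate(both1 ~ name.1 + name.2, data = pairs, FUN = sum)
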
2006 Oct 03
1
Reshape into a contingency table/Fisher's test
Dear all, how can I "reshape"/"cast" the following matrix 00;01;10;11 John.Mike;123;313;12;31 John.Jim;54;57;39;36 John.Steve;135;47;47;74 Mike.Jim;63;37;27;16 Mike.Steve;15;15;5;61 Jim.Steve;6;10;34;35 into a set of stacked 2x2 contingency tables 0;1 John;123;12 Mike;313;31 John;54;39 Jim;57;36 John;135;47 Steve;47;16 ... so that the "fisher.test" and
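
A sketch of one way to rebuild the stacked 2x2 tables and pass them to fisher.test(); the column names (n00, ..., n11) and the mapping of the four counts onto the table cells are assumptions read off the excerpt, and fisher.test() is unaffected by transposing each table.

    counts <- data.frame(pair = c("John.Mike", "John.Jim"),
                         n00 = c(123, 54), n01 = c(313, 57),
                         n10 = c(12, 39), n11 = c(31, 36))
    tabs <- lapply(seq_len(nrow(counts)), function(i)
      matrix(unlist(counts[i, c("n00", "n01", "n10", "n11")]),
             nrow = 2, byrow = TRUE, dimnames = list(c("0", "1"), c("0", "1"))))
    names(tabs) <- counts$pair
    lapply(tabs, fisher.test)   # one test per pair of names
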
2007 Feb 06
1
Questions on counts by case
Hi all, for the data below I would like to 1. generate a dummy variable for each group "gr" of the same composition by people, then save each portion in a separate file, 2. compute the frequency of "1"'s in "x" for each person by group "gr". So, "mike" will have freq=2/3, as he has two "1" and one "0" in 3 groups.
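
For part 2, a minimal sketch assuming the data has columns gr (group id), name and x (0/1); the toy values below merely reproduce the 2/3 frequency quoted for mike.

    dat <- data.frame(gr = c(1, 1, 2, 2, 3, 3),
                      name = c("mike", "zoe", "mike", "zoe", "mike", "joe"),
                      x = c(1, 0, 1, 1, 0, 1))
    tapply(dat$x, dat$name, mean)   # mike: 2/3, i.e. two 1s and one 0 over 3 groups
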
2006 Dec 14
3
Delete all dimnames
Hello, how can I get rid of all dimnames so that: $amat Var3 Var2 Var1 8 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 7 1 1 1 0 1 0 0 0 1 0 0 0 0 0 0 6 1 1 0 1 0 1 0 0 0 1 0 0 0 0 0 5 1 1 0 0 0 0 0 0 0 0 1 0 0 0 0 4 1 0 1 1 0 0 1 0 0 0 0 1 0 0 0 3 1 0 1 0 0 0 0 0 0 0 0 0 1 0 0 2 1 0 0 1 0 0 0 0 0 0 0 0 0 1 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0
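
A minimal sketch: setting dimnames() to NULL (or calling unname()) strips all row and column names from a matrix or array at once.

    m <- matrix(1:4, nrow = 2, dimnames = list(c("a", "b"), c("x", "y")))
    dimnames(m) <- NULL   # removes row and column names in place
    m2 <- unname(m)       # same effect, returned as a copy
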
2005 Nov 23
2
vector of permutated products
Given an x-vector with, say, 3 elements, I would like to compute the following vector of permutated products (1-x1)*(1-x2)*(1-x3) (1-x1)*(1-x2)*x3 (1-x1)*x2*(1-x3) x1*(1-x2)*(1-x3) (1-x1)*x2*x3 x1*(1-x2)*x3 x1*x2*(1-x3) x1*x2*x3 Now, I already have the correctly sorted matrix of permutations! So, the input looks something like: #input x<-c(0.3,0.1,0.2) Nx<-length(x) Ncomb<-2^Nx
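
A sketch of the computation with a generated 0/1 pattern matrix; the row ordering comes from expand.grid() and may differ from the poster's own sorted matrix of permutations.

    x <- c(0.3, 0.1, 0.2)
    comb <- as.matrix(expand.grid(rep(list(0:1), length(x))))   # all 2^3 on/off patterns
    prods <- apply(comb, 1, function(s) prod(ifelse(s == 1, x, 1 - x)))
    cbind(comb, prods)
    sum(prods)   # the eight products sum to 1
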
2007 Nov 28
1
Order observations in a dataframe
Dear All, Suppose I have the following dataframe: country;weight;group bul;10;1 cze;12;1 grc;12;1 hun;12;1 prt;12;1 rom;14;1 fra;29;2 ita;29;2 gbr;29;2 aut;10;3 bel;12;3 The "group" variable denotes the id-number of a group of countries. How can I re-label the groups in the descending order of their cumulative "weight", which would be: country;weight;group fra;29;1 ita;29;1
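
A sketch of the relabelling: rank the groups by total weight in descending order and map the old group ids onto those ranks (the weights are copied from the excerpt; rank() with ties.method = "first" is one possible choice).

    df <- data.frame(country = c("bul", "cze", "grc", "hun", "prt", "rom",
                                 "fra", "ita", "gbr", "aut", "bel"),
                     weight = c(10, 12, 12, 12, 12, 14, 29, 29, 29, 10, 12),
                     group = c(1, 1, 1, 1, 1, 1, 2, 2, 2, 3, 3))
    tot <- tapply(df$weight, df$group, sum)        # cumulative weight per group
    newid <- rank(-tot, ties.method = "first")     # 1 = heaviest group
    df$group <- as.integer(newid[as.character(df$group)])
    df[order(df$group, -df$weight), ]
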
2007 Nov 28
4
Replacing values job
Hallo, I have two vectors of different lengths which contain the same set of values: X <- c(2,6,1,7,4,3,5) Y <- c(1,1,6,4,6,1,4,1,2,3,6,6,1,2,4,4,5,4,1,7,6,6,4,4,7,1,2) How can I replace the values in Y with the index (!) of the corresponding values in X? So 2 appears in X in the first coordinate, so all 2’s in Y should be replaced by 1, etc. Thank you for your help, Serguei
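
A minimal sketch: match() returns, for every element of Y, the position of that value in X, which is exactly the requested index replacement.

    X <- c(2, 6, 1, 7, 4, 3, 5)
    Y <- c(1, 1, 6, 4, 6, 1, 4, 1, 2, 3, 6, 6, 1, 2, 4, 4, 5, 4, 1, 7, 6, 6, 4, 4, 7, 1, 2)
    match(Y, X)   # every 2 becomes 1, every 6 becomes 2, every 1 becomes 3, ...
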
2007 Feb 26
1
Adding duplicates by rows
Hi, I am trying to add duplicates of matrix "mat" by row. Commands subset(mat,duplicated(rownames(mat))) or mat[which(duplicated(rownames(mat))),] return only half of the required indices. How can I find the remaining ones, ie the matches, so that I can add them up? Thanks, Serguei ___________________________________________________________________ Austrian Institute of Economic
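
A sketch using rowsum(), which sums all rows sharing a group label, so rows with duplicated rownames are added up without locating the matches by hand; the small matrix is only an illustration.

    mat <- matrix(1:8, nrow = 4, dimnames = list(c("a", "b", "a", "c"), c("v1", "v2")))
    rowsum(mat, group = rownames(mat))   # the two "a" rows are summed into one
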
2008 Dec 24
1
Implementing a linear restriction in lm()
Dear All! I want to test a coefficient restriction beta=1 in a univariate model lm(y~x). Entering lm((y-x)~1) does not help since the anova test requires the same dependent variable. What is the right way to proceed? Thank you for your help and merry Xmas, Serguei Kaniovski ________________________________________ Austrian Institute of Economic Research (WIFO)
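
A sketch of one standard approach that keeps y as the response: impose beta = 1 through offset(x) and compare the restricted and unrestricted fits with anova(); the simulated data are only for illustration.

    set.seed(1)
    x <- rnorm(50)
    y <- 2 + 1.2 * x + rnorm(50)
    unrestricted <- lm(y ~ x)
    restricted <- lm(y ~ 1 + offset(x))   # slope on x fixed at 1
    anova(restricted, unrestricted)       # F-test of the restriction beta = 1
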
2005 Dec 04
4
Construct a data.frame in a FOR-loop
Say I have a FOR-loop for computing powers (just a trivial example) for(i in 1:5) { x<-i^2 y<-i^3 } How can I create a data.frame and a 3D plot of (i,x(i),y(i)), i.e. for each iteration Thanks, Serguei Kaniovski -- ___________________________________________________________________ Österreichisches Institut für Wirtschaftsforschung (WIFO) Name: Serguei Kaniovski
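
A sketch: collect one row per iteration into a data.frame (here without an explicit loop) and plot it; scatterplot3d is one possible package for the 3D plot and is an assumption, not something named in the thread.

    i <- 1:5
    res <- data.frame(i = i, x = i^2, y = i^3)   # one row per "iteration"
    library(scatterplot3d)                       # assumed to be installed
    scatterplot3d(res$i, res$x, res$y, type = "h",
                  xlab = "i", ylab = "x = i^2", zlab = "y = i^3")
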
2006 Dec 04
1
Count cases by indicator
Hi, In the data below, "case" represents cases, "x" binary states. Each "case" has exactly 9 "x", ie is a binary vector of length 9. There are 2^9=512 possible combinations of binary states in a given "case", ie 512 possible vectors. I generate these in the order of the decimals the vectors represent, as:
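
A sketch that generates all 512 state vectors from the bits of 0:511; whether the low or the high bit should come first depends on the intended decimal ordering, so a rev() may be needed.

    n <- 9
    states <- t(sapply(0:(2^n - 1), function(k) as.integer(intToBits(k))[1:n]))
    dim(states)    # 512 x 9; row k + 1 encodes the integer k, low bit first
    head(states)
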
2007 Dec 05
1
Information criteria for kmeans
Hello, how is, for example, the Schwarz criterion defined for kmeans? It should be something like: k <- 2 vars <- 4 nobs <- 100 dat <- rbind(matrix(rnorm(nobs, sd = 0.3), ncol = vars), matrix(rnorm(nobs, mean = 1, sd = 0.3), ncol = vars)) colnames(dat) <- paste("var",1:4) (cl <- kmeans(dat, k)) schwarz <- sum(cl$withinss)+ vars*k*log(nobs) Thanks
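
A sketch that wraps the snippet's formula (total within-cluster sum of squares plus a vars * k * log(n) penalty, taken here as the assumed definition) into a comparison over several k.

    set.seed(1)
    vars <- 4; nobs <- 100
    dat <- rbind(matrix(rnorm(nobs * vars, sd = 0.3), ncol = vars),
                 matrix(rnorm(nobs * vars, mean = 1, sd = 0.3), ncol = vars))
    schwarz <- sapply(1:6, function(k) {
      cl <- kmeans(dat, centers = k, nstart = 10)
      sum(cl$withinss) + vars * k * log(nrow(dat))
    })
    schwarz
    which.min(schwarz)   # k with the smallest criterion value
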
2009 Apr 18
5
Dummy (factor) based on a pair of variables
Dear All! my data is on pairs of countries, i and j, e.g.: y,i,j 1,AUT,BEL 2,AUT,GER 3,BEL,GER I would like to create a dummy (indicator) variable for use in regression (using factor?), such that it takes the value of 1 if the country is in the pair (i.e. EITHER an i-country OR a j-country). Thank you for your help, Serguei ________________________________________ Austrian Institute of
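
A sketch: for a given country the dummy is 1 whenever that country appears as either i or j, and a sapply() over all countries builds the full set of indicators at once; the column names are made up for the example.

    dat <- data.frame(y = 1:3,
                      i = c("AUT", "AUT", "BEL"),
                      j = c("BEL", "GER", "GER"),
                      stringsAsFactors = FALSE)
    dat$d_AUT <- as.integer(dat$i == "AUT" | dat$j == "AUT")   # dummy for one country
    countries <- sort(unique(c(dat$i, dat$j)))
    dummies <- sapply(countries, function(cc) as.integer(dat$i == cc | dat$j == cc))
    cbind(dat, dummies)   # one 0/1 column per country
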
2006 Oct 04
1
Optim: Function definition
Hi all, I apply "optim" to the function "obj", which minimizes the goodness of fit statistic and obtains Pearson minimum chi-squared estimate for x[1], x[2] and x[3]. The vector "fr" contains the four observed frequencies. Since "fr[i]" appears in the denominator, I would like to substitute "0" in the sum if fr[i]=0. I tried an
2005 Dec 03
1
Correlation matrix from a vector of pairwise correlations
I have a vector of pairwise correlations, ordered so that the low-index element precedes the high-index element, say: corr(1,2)=0.1, corr(1,3)=0.2, corr(2,3)=0.3, corr(3,4)=0.4. How can I construct the corresponding correlation matrix? I tried using the "combn" function in the "combinat" package: library(combinat) combn(c(0.1,0.2,0.3,0.4),2), but to no avail... Thank you for your
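
A sketch for three variables, assuming the correlations come in combn() order (1,2), (1,3), (2,3): matrix indexing places each value in its upper-triangle cell, and the swapped index matrix mirrors it into the lower triangle.

    r <- c(0.1, 0.2, 0.3)    # corr(1,2), corr(1,3), corr(2,3)
    idx <- t(combn(3, 2))    # one (i, j) row per correlation
    C <- diag(3)
    C[idx] <- r              # upper triangle (i < j)
    C[idx[, 2:1]] <- r       # mirror into the lower triangle
    C
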
2011 Oct 19
1
Estimating bivariate normal density with constrains
Dear R-Users, I would like to estimate a constrained bivariate normal density, the constraint being that the means are of equal magnitude but of opposite sign. So I need to estimate four parameters: mu (the mean vector is (mu,-mu)), sigma_1 and sigma_2 (the two standard deviations), and rho (the correlation coefficient). I have looked at several packages, including Gaussian mixture models in Mclust, but I am not sure
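
A sketch of one way to impose the constraint directly in the likelihood (assumes the mvtnorm package; the simulated data and the exp()/tanh() parameterisation are illustrative choices, not a recommendation from the thread).

    library(mvtnorm)
    set.seed(1)
    X <- rmvnorm(200, mean = c(0.5, -0.5), sigma = matrix(c(1, 0.3, 0.3, 2), 2))
    negll <- function(p) {
      mu <- p[1]; s1 <- exp(p[2]); s2 <- exp(p[3]); rho <- tanh(p[4])
      S <- matrix(c(s1^2, rho * s1 * s2, rho * s1 * s2, s2^2), 2)
      -sum(dmvnorm(X, mean = c(mu, -mu), sigma = S, log = TRUE))  # mean vector constrained to (mu, -mu)
    }
    fit <- optim(c(0, 0, 0, 0), negll)
    c(mu = fit$par[1], sd1 = exp(fit$par[2]), sd2 = exp(fit$par[3]), rho = tanh(fit$par[4]))
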
2008 Dec 26
1
starting values update
Hi all, does anyone know how to automatically update starting values in R? I'm fitting multiple nonlinear models and would like to know how I can update the starting values without having to type them in. Thanks all
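
A sketch of one common workaround: reuse the coefficients of the previous fit as the start values of the next fit via coef(); the exponential model and the two data sets are purely illustrative.

    set.seed(1)
    dat1 <- data.frame(x = 1:20); dat1$y <- 2.0 * exp(0.15 * dat1$x) + rnorm(20, sd = 0.3)
    dat2 <- data.frame(x = 1:20); dat2$y <- 2.1 * exp(0.14 * dat2$x) + rnorm(20, sd = 0.3)
    fit1 <- nls(y ~ a * exp(b * x), data = dat1, start = list(a = 2, b = 0.1))
    fit2 <- nls(y ~ a * exp(b * x), data = dat2, start = as.list(coef(fit1)))  # no retyping
    coef(fit2)
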
2009 Jun 26
3
Compute correlation matrix for panel data with specific ordering
Hello All, I have panel data - here is a small-scale example: df <- data.frame(cbind(rep(c("AUT","BEL","DEN","GER"),4),cbind(rep(c(1999,2000,2001,2002),4)),sample(10,16,replace=T))) names(df) <- c("country","year","x") SORT <- c("GER","BEL","DEN","AUT") I need to compute the
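
A sketch: reshape() pivots the long panel into one column per country, after which the columns can be reordered as in SORT and passed to cor(); the balanced toy panel below replaces the original cbind() construction.

    set.seed(1)
    df <- data.frame(country = rep(c("AUT", "BEL", "DEN", "GER"), each = 4),
                     year = rep(1999:2002, times = 4),
                     x = rnorm(16))
    SORT <- c("GER", "BEL", "DEN", "AUT")
    wide <- reshape(df, idvar = "year", timevar = "country", direction = "wide")
    names(wide) <- sub("^x\\.", "", names(wide))   # "x.AUT" -> "AUT", etc.
    cor(wide[, SORT])                              # correlation matrix in the SORT order
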
2008 Jan 29
1
Correlation matrix for data in long format
Hello, I cannot figure out how to use "tapply" to compute the correlation matrix of the variable "x" between the states. The data is in long format, e.g.: state,year,x Alabama,2001,0.45 Alabama,2002,0.47 Alabama,2003,0.48 Alabama,2004,0.44 Arizona,2001,0.34 Arizona,2002,0.32 Arizona,2003,0.38 Arizona,2004,0.36 Thank you in advance for your help, Serguei Kaniovski
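
A sketch: tapply() does not return correlations directly, but it can pivot the long data into a years-by-states matrix whose columns cor() then correlates; only the two states shown in the excerpt are used.

    long <- data.frame(state = rep(c("Alabama", "Arizona"), each = 4),
                       year = rep(2001:2004, times = 2),
                       x = c(0.45, 0.47, 0.48, 0.44, 0.34, 0.32, 0.38, 0.36))
    wide <- tapply(long$x, list(long$year, long$state), mean)   # years in rows, states in columns
    cor(wide)                                                   # state-by-state correlation matrix
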
2009 Aug 11
1
Help on a combinatorial task (lists?)
Hello! I have the following combinatorial problem. Consider the cumulative sums of all permutations of a given weight vector 'w'. I need to know how often the weight in a certain position brings the cumulative sum to or above the given threshold 'q'. In other words, how often is each weight decisive in raising the cumulative sum above 'q'? Here is what I do: w <-
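
A sketch of one way to finish the count (assumes the combinat package for permn(); the weights and threshold are made-up values): for every permutation, record which position's weight first lifts the cumulative sum to q or above, then tabulate the shares.

    library(combinat)
    w <- c(4, 3, 2, 1)   # hypothetical weight vector
    q <- 6               # hypothetical threshold
    decisive <- sapply(permn(seq_along(w)), function(ord)
      ord[which(cumsum(w[ord]) >= q)[1]])   # index of the weight that tips the sum over q
    table(factor(decisive, levels = seq_along(w))) / length(decisive)   # share of permutations in which each weight is decisive
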