similar to: How to assign scores to rows based on column values

Displaying 20 results from an estimated 2000 matches similar to: "How to assign scores to rows based on column values"

2006 Dec 31
7
zero random effect sizes with binomial lmer
I am fitting models to the responses to a questionnaire that has seven yes/no questions (Item). For each combination of Subject and Item, the variable Response is coded as 0 or 1. I want to include random effects for both Subject and Item. While I understand that the datasets are fairly small, and there are a lot of invariant subjects, I do not understand something that is happening
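A minimal sketch of the kind of model described, assuming a data frame with columns Subject, Item and Response, and using the current lme4 interface (glmer; the original post predates it). The simulated data are purely illustrative.

library(lme4)

## Illustrative data in the shape described: 0/1 responses with
## crossed grouping factors Subject and Item.
set.seed(1)
dat <- expand.grid(Subject = factor(1:30), Item = factor(1:7))
dat$Response <- rbinom(nrow(dat), 1, 0.5)

## Random intercepts for both Subject and Item.
fit <- glmer(Response ~ 1 + (1 | Subject) + (1 | Item),
             data = dat, family = binomial)
summary(fit)  # near-zero variance components are the symptom discussed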
2010 Oct 08
4
function using values separated by a comma
Hello, I have a dataframe (tab separated file) which looks like the example below - two values separated by a comma, and tab separation between each of these. [,1] [,2] [,3] [ ,4] [1,] 0,1 1,3 40,10 0,0 [2,] 20,5 4,2 10,40 10,0 [3,] 0,11 1,2 120,10 0,0 I would like to calculate the percentage of the smallest number separated by the comma by: 1) summing the values e.g. for
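The post is truncated, so the exact percentage wanted is an assumption; the sketch below takes each "a,b" cell and returns 100 * min(a, b) / (a + b), splitting on the comma with strsplit.

## Toy version of the tab-separated data described above.
m <- matrix(c("0,1",  "1,3", "40,10",  "0,0",
              "20,5", "4,2", "10,40", "10,0",
              "0,11", "1,2", "120,10", "0,0"),
            nrow = 3, byrow = TRUE)

pct_smaller <- function(cell) {
  ab <- as.numeric(strsplit(cell, ",")[[1]])
  if (sum(ab) == 0) return(NA)   # avoid 0/0 for cells like "0,0"
  100 * min(ab) / sum(ab)
}
apply(m, c(1, 2), pct_smaller)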
2010 May 02
2
Replace query
Hi, I'm trying to replace all values equal to 1 in one file (a) with the value in the corresponding column in a separate file (b). Example below. Any help (and brief notes if poss) much appreciated. Thanks!! file a: 0,0,1,1,0 1,0,0,0,1 0,0,0,0,0 1,0,1,1,0 file b: 3,4,6,8,11 output request: 0,0,6,8,0 3,0,0,0,11 0,0,0,0,0 3,0,6,8,0 -- View this message in context:
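Because file a is a 0/1 matrix and file b supplies one value per column, the requested output is just an element-wise product with b recycled across rows. A sketch (reading the files is left out; the object names are assumptions):

a <- matrix(c(0, 0, 1, 1, 0,
              1, 0, 0, 0, 1,
              0, 0, 0, 0, 0,
              1, 0, 1, 1, 0), nrow = 4, byrow = TRUE)
b <- c(3, 4, 6, 8, 11)

## Multiply each column of a by the matching value in b:
## 1s become the column value, 0s stay 0.
sweep(a, 2, b, `*`)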
2010 Apr 09
3
How to replace all non-maximum values in a row with 0
Hi, I would like to replace all the max values per row with "1" and all other values with "0". If there are two max values, then "0" for both. Example: from: 2 3 0 0 200 30 0 0 2 50 0 0 3 0 0 0 0 8 8 0 to: 0 0 0 0 1 0 0 0 0 1 0 0 1 0 0 0 0 0 0 0 Thanks! -- View this message in context:
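One way to get exactly that output, including the rule that a tied maximum yields a row of zeros, is apply over rows; a sketch using the example values (read as a 4 x 5 matrix):

x <- matrix(c( 2, 3, 0, 0, 200,
              30, 0, 0, 2,  50,
               0, 0, 3, 0,   0,
               0, 0, 8, 8,   0), nrow = 4, byrow = TRUE)

flag_max <- function(r) {
  out <- as.integer(r == max(r))   # 1 at the maximum, 0 elsewhere
  if (sum(out) > 1) out[] <- 0L    # ties: all zeros, as requested
  out
}
t(apply(x, 1, flag_max))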
2010 Apr 27
2
Histogram not plotting correct breaks
Hi, I'm using the hist function to plot the frequency of 21 variables, but it keeps starting the x-axis from 0 and adding variables 1 and 2 together (all other variables have the correct frequencies). I suspect it adds 1 and 2 together so that 0 can fit in with demarcations at intervals of 5. Using "xlim=c(1,21)" to specify that I don't want to include 0 and using the
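With integer-valued variables like these, a common fix is to pass explicit breaks centred on the integers instead of relying on xlim; a sketch with made-up data taking values 1 to 21:

set.seed(1)
v <- sample(1:21, 200, replace = TRUE)   # hypothetical data

## Breaks at 0.5, 1.5, ..., 21.5 give one bar per integer value,
## so 1 and 2 are no longer pooled into a 0-5 bin.
hist(v, breaks = seq(0.5, 21.5, by = 1), xlab = "variable", main = "")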
2010 Oct 12
1
Comparison of two files with multiple arguments
Hello, I have an example file which can be generated using: dat <- read.table(tc <- textConnection( 'T T,G G T C NA G G A,T A A NA'), sep="") I also have a reference file with the same number of rows, for example: G C A I would like to transform the file to numerical values using the following arguments: 1) Where data points have two letters separated by a comma, e.g.
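The transformation rules are cut off above, so the coding below is only a hypothetical illustration: it restores the newlines in the example data, splits comma-separated cells, and counts how many letters in each cell match the reference letter for that row (the 0/1/2-style scoring is an assumption, not necessarily the poster's rule).

## Example data with newlines restored, plus the reference letters per row.
dat <- read.table(text = "T   T,G  G    T
                          C   NA   G    G
                          A,T A    A    NA",
                  stringsAsFactors = FALSE)
ref <- c("G", "C", "A")

score_cell <- function(cell, r) {
  if (is.na(cell)) return(NA_integer_)
  sum(strsplit(cell, ",")[[1]] == r)   # letters matching the reference
}
t(sapply(seq_len(nrow(dat)),
         function(i) vapply(unlist(dat[i, ]), score_cell, integer(1), r = ref[i])))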
2010 Apr 30
1
How to generate a distance matrix?
Hi, I'm trying to generate a distance matrix between sample pairs (example below). I'm not very familiar with the loop command, which I expect I will need for this. The example below demonstrates what I'd like to get out of the data - essentially, to calculate the proportion of positions where two samples differ. Any help much appreciated! Also, any notes on how the functions work
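A sketch of the proportion-of-differing-positions idea without writing an explicit double loop, assuming the samples sit in the rows of a character matrix (the data here are made up):

set.seed(1)
m <- matrix(sample(c("A", "C", "G", "T"), 40, replace = TRUE),
            nrow = 4, dimnames = list(paste0("sample", 1:4), NULL))

## Proportion of positions at which two samples differ.
prop_diff <- function(i, j) mean(m[i, ] != m[j, ])
d <- outer(seq_len(nrow(m)), seq_len(nrow(m)), Vectorize(prop_diff))
dimnames(d) <- list(rownames(m), rownames(m))
d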
2015 Dec 18
2
Problem! Dovecot 2.2.9 does not send the quota warning to the user
Hi. I have Dovecot + Postfix + MySQL. Postfix version: postfix_2.11.3-1ubuntu1_amd64. Dovecot version: 2.2.9. Operating system: Ubuntu 15.04 x64. Postfix has been patched (VDA patch - http://vda.sourceforge.net) for use with quota, which means that the file "maildirsize" in the mail directory already exists and is updated when mail is added/deleted. Quotas for the virtual mailboxes are taken from the MySQL db.
2011 Jul 19
2
strange behaviour of mice package
I am using the mice package for multiple imputation. For one data set (attached), mice doesn't impute all missing values. Specifically, some variables were not imputed at all. The reproducible code: library(mice); test.df <- read.table('c:\\test.txt', header = TRUE, sep = ','); mi <- mice(test.df, maxit = 10, m = 5); sum(is.na(complete(mi, 1))) returns 129, and x41, x50... were not imputed at all. Any
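Once the read.table call is fixed, a common first diagnostic is to look at which imputation methods mice actually assigned: variables left with an empty method (often because they are collinear with other columns or nearly constant) are never imputed. A sketch, assuming the data frame test.df from the post has been read in:

library(mice)

mi <- mice(test.df, maxit = 10, m = 5, seed = 1)
mi$method                        # "" means that variable is not imputed
mi$loggedEvents                  # constant/collinear variables removed by mice
colSums(is.na(complete(mi, 1)))  # which columns still contain NAs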
2008 Oct 09
4
runs of heads when flipping a coin
Can someone recommend a method to answer the following type of question: Suppose I have a coin with a probability hhh of coming up heads (and 1-hhh of coming up tails). I plan on flipping the coin nnn times (for example, nnn = 500). What is the expected probability or frequency of a run of rrr heads* during the nnn = 500 coin flips? Moreover, I would probably (excuse the pun) want the answer for a
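A simulation sketch for this type of question, estimating the probability that at least one run of at least r heads occurs in n flips (exact recursive formulas exist, but Monte Carlo with rle is a quick check; the function name is made up):

p_run <- function(n, h, r, nsim = 10000) {
  hits <- replicate(nsim, {
    flips <- rbinom(n, 1, h)   # 1 = heads
    runs  <- rle(flips)
    any(runs$values == 1 & runs$lengths >= r)
  })
  mean(hits)
}

p_run(n = 500, h = 0.5, r = 10)   # e.g. a run of 10 heads in 500 fair flips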
2009 Oct 15
1
calculating p-values by row for data frames
Hello R-users, I am looking for an elegant way to calculate p-values for each row of a data frame. My situation is as follows: I have gene expression results from a microarray with 64 samples looking at 25626 genes. The results are in a data frame with the dimensions 64 by 25626. I want to create a volcano plot of difference of means vs. -log10 of the p-values, comparing normal samples to
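A minimal sketch of the row-wise test and the -log10 transform, with made-up data; it assumes genes are in the rows (transpose the 64 x 25626 frame first if needed) and that the two group labels are known:

set.seed(1)
expr  <- matrix(rnorm(200 * 64), nrow = 200)           # 200 genes x 64 samples
group <- factor(rep(c("normal", "tumour"), each = 32))

## Row-wise Welch t-test p-values and differences of group means.
pvals <- apply(expr, 1, function(g) t.test(g ~ group)$p.value)
diffs <- rowMeans(expr[, group == "tumour"]) - rowMeans(expr[, group == "normal"])

## Volcano plot: difference of means vs. -log10(p).
plot(diffs, -log10(pvals),
     xlab = "difference of means", ylab = expression(-log[10](p)))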
2006 Dec 31
0
(no subject)
> > If one compares the random effect estimates, in fact, one sees that > > they are in the correct proportion, with the expected signs. They are > > just approximately eight orders of magnitude too small. Is this a bug? > > BLUPs are essentially shrinkage estimates, where shrinkage is > determined by the magnitude of the variance. Lower variance, more > shrinkage towards
2005 Jul 10
3
not suppressing leading zeros when reading a table?
Dear R list, I have a dataset with a column which should be read as character, like this: name surname answer 1 xx yyy "00100" 2 rrr hhh "01" When reading this dataset with read.table, I get 1 xx yyy 100 2 rrr hhh 1 The string column consists of answers to multiple choice questions, not all having the same number of answers. I could format the
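The usual fix is colClasses, which tells read.table to keep the column as character rather than coercing it to numeric; a sketch using the example rows from the post:

dat <- read.table(text = 'name surname answer
                          xx   yyy     "00100"
                          rrr  hhh     "01"',
                  header = TRUE,
                  colClasses = c("character", "character", "character"))
dat$answer   # "00100" "01" -- leading zeros kept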
2006 Dec 31
2
zero random effect sizes with binomial lmer [sorry, ignore previous]
I am fitting models to the responses to a questionnaire that has seven yes/no questions (Item). For each combination of Subject and Item, the variable Response is coded as 0 or 1. I want to include random effects for both Subject and Item. While I understand that the datasets are fairly small, and there are a lot of invariant subjects, I do not understand something that is happening here, and in
2006 Oct 20
6
summing elements in a list of functions
Dear all, I have looked for an answer for a couple of days, but can't come up with any solution. I have a set of functions, say: > t0 <- function(x) {1} > t1 <- function(x) {x} > t2 <- function(x) {x^2} > t3 <- function(x) {x^3} I would like to find a way to add up the previous 4 functions and obtain a new function: > rrr <- function(x) {1+x+x^2+x^3} without,
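One way to build the summed function programmatically is Reduce over a list of functions, so the individual definitions never have to be edited by hand (the helper name sum_funs is made up):

t0 <- function(x) 1
t1 <- function(x) x
t2 <- function(x) x^2
t3 <- function(x) x^3

## Return a function that evaluates every function in the list at x
## and adds the results.
sum_funs <- function(funs) {
  function(x) Reduce(`+`, lapply(funs, function(f) f(x)))
}
rrr <- sum_funs(list(t0, t1, t2, t3))
rrr(2)   # 1 + 2 + 4 + 8 = 15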
2017 Oct 31
2
Help with Nesting
How do I resolve this? symbol <- c('RRR' ,'GGG') for(i in seq_along(symbol)) { dat <- Quandl("LLL/symbol[i]") } The required solution is a loop where Quandl is a function and it loops as follows: Quandl("LLL/RRR") Quandl("LLL/GGG")
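The string "LLL/symbol[i]" is passed literally, so the symbol has to be pasted into the dataset code before calling Quandl; a sketch (storing the results in a named list is an assumption, and a Quandl API key is assumed to be configured):

library(Quandl)

symbol <- c("RRR", "GGG")
dat <- setNames(vector("list", length(symbol)), symbol)

for (i in seq_along(symbol)) {
  ## Build the dataset code, e.g. "LLL/RRR", then call Quandl on it.
  dat[[i]] <- Quandl(paste0("LLL/", symbol[i]))
}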
2009 Nov 02
2
a problem with constrOptim
Hi, I apologize for the long message but the problem I encountered can't be stated in a few lines. I am having some problems with the function constrOptim. My goal is to maximize the likelihood of a product of K multinomials, each with four categories, under linear constraints on the parameter values. I have found that the function does not work for many data configurations. #The likelihood
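The likelihood code is truncated above, so the following is only a generic sketch of how constrOptim is usually set up for maximization under linear constraints ui %*% theta - ci >= 0, using control = list(fnscale = -1) (which constrOptim supports) and a toy objective in place of the multinomial likelihood:

## Toy example: maximize a concave function over theta1 >= 0, theta2 >= 0,
## theta1 + theta2 <= 1 (a simplex-like region, as with multinomial probabilities).
f <- function(theta) -((theta[1] - 0.2)^2 + (theta[2] - 0.3)^2)

ui <- rbind(c( 1,  0),    # theta1 >= 0
            c( 0,  1),    # theta2 >= 0
            c(-1, -1))    # theta1 + theta2 <= 1
ci <- c(0, 0, -1)

## The starting value must be strictly inside the feasible region.
constrOptim(theta = c(0.4, 0.4), f = f, grad = NULL,
            ui = ui, ci = ci, control = list(fnscale = -1))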
2010 Aug 29
2
OSX 10.6.4 error with -R option
Hi All, I have had reports of problems with the -R option on OSX 10.6.4. Just tested it myself and found this odd result: When I run "dtruss -f path/to/rsync -aHAXNR --fileflags --force-change --protect-decmpfs --stats -v /Users/astrid/Documents/main.m /Users/astrid/Desktop/rrr" it produces the expected results with the relative folder paths in place
2011 May 31
1
How to get the rows corresponding to the maximum of a factor
I have a data frame as follows: MsgType eotpd fn FI 2011-05-13 01:40:00 0 FF 2011-05-13 01:39:53 0 TC 2011-05-13 01:39:45 0 FI 2011-05-14 00:58:46 1 FF 2011-05-14 00:58:46 1 FI 2011-05-15 00:48:32 2 FF 2011-05-15 00:48:21 2 TC 2011-05-15 00:48:15 2 FI 2011-05-16
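The question is cut off, but assuming the goal is the row with the latest eotpd within each level of fn, split plus which.max is one compact approach (the data frame below is reconstructed from the snippet):

df <- data.frame(
  MsgType = c("FI", "FF", "TC", "FI", "FF", "FI", "FF", "TC"),
  eotpd   = as.POSIXct(c("2011-05-13 01:40:00", "2011-05-13 01:39:53",
                         "2011-05-13 01:39:45", "2011-05-14 00:58:46",
                         "2011-05-14 00:58:46", "2011-05-15 00:48:32",
                         "2011-05-15 00:48:21", "2011-05-15 00:48:15")),
  fn      = c(0, 0, 0, 1, 1, 2, 2, 2)
)

## One row per fn: the row whose eotpd is the group maximum
## (which.max keeps the first row in case of ties).
do.call(rbind, lapply(split(df, df$fn),
                      function(d) d[which.max(d$eotpd), ]))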
2002 Jan 24
5
aggregate, by tapply
Dear R users, I searched some sources but I did not find an answer. Please give me a hint for the following problem. I would like to compute a summary statistic for some vector for different factor levels. I know I can use tapply or aggregate, but I do not know if there is a way to use a function with several (two) input variables (like weighted.mean). I wrote a simple function for factor
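For a two-argument summary such as weighted.mean, one standard approach is to split the data frame by the factor and apply the function to the relevant columns within each piece; a sketch with made-up column names x (values), w (weights) and g (factor):

set.seed(1)
df <- data.frame(x = rnorm(12),
                 w = runif(12),
                 g = rep(c("a", "b", "c"), each = 4))

## Weighted mean of x within each level of g.
sapply(split(df, df$g), function(d) weighted.mean(d$x, d$w))

## The same idea with by():
by(df, df$g, function(d) weighted.mean(d$x, d$w))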