search for: mysubset

Displaying 9 results from an estimated 9 matches for "mysubset".

2012 Nov 07
8
Aggregate data frame across columns
Folks, I have a data frame with columns 200401, 200402, ..., 201207, 201208. These represent years/months. What would be the best way to sum these columns by year? What about by quarter? Thanks for your time, KW -- [[alternative HTML version deleted]]
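A minimal sketch of one way to do those column sums, assuming a data frame 'df' whose column names are the year/month strings "200401" ... "201208" (the name 'df' is invented for illustration):

    yr <- substr(names(df), 1, 4)                       # year part of each column name
    by.year <- sapply(split(names(df), yr),             # group the month columns by year
                      function(cols) rowSums(df[, cols, drop = FALSE]))
    qtr <- paste0(yr, "Q", (as.integer(substr(names(df), 5, 6)) - 1) %/% 3 + 1)
    by.qtr <- sapply(split(names(df), qtr),             # same idea, grouped by quarter
                     function(cols) rowSums(df[, cols, drop = FALSE]))

by.year and by.qtr end up as matrices with one row per original row and one column per year or quarter.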
2003 Dec 14
3
Problem with data conversion
...re not saved. I tried mma1 <- as.numeric(mma) but I was not successful in converting mma from a character variable to a numeric variable. So, to edit and "clean" the data, I exported the dataset as a text file to Epi Info 2002 (version 2, Windows). I used the following code: mysubset <- subset(workingdat, select = c(age,sex,status, mma, dma)) write.table(mysubset, file="mysubset.txt", sep="\t", col.names=NA) After I made changes in the variables using Epi Info (I created a new variable called "statusrec" containing values "case" and...
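When as.numeric() on a character or factor column fails like this, the usual reason is stray non-numeric entries (or factor level codes), so a sketch of the standard check and fix, using the data frame and column names from the post (the data itself is assumed):

    workingdat$mma <- as.numeric(as.character(workingdat$mma))   # factor -> character -> numeric
    which(is.na(workingdat$mma))                                 # any non-numeric entries become NA here

    mysubset <- subset(workingdat, select = c(age, sex, status, mma, dma))
    write.table(mysubset, file = "mysubset.txt", sep = "\t", col.names = NA)   # export for cleaning, as in the post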
2007 Apr 27
5
weight
Hi, I have the file below called happyguys. It is a subset of data. How do I apply the weight variable (WTPP) to this file? Can I just multiply each column (except the first column because it is a record id) by WTPP? If the answer is yes, how do I multiply one variable name by another? Thanks, Nat PROV REGION GRADE Y_Q10A WTPP 83 48 4 7 2 342233324020 115
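A hedged sketch of applying the weight, assuming 'happyguys' is the data frame from the post, with the record id in PUMFID (the id name comes from the poster's related message below) and the weight in WTPP:

    wt   <- happyguys$WTPP
    cols <- setdiff(names(happyguys), c("PUMFID", "WTPP"))   # everything except the id and the weight itself
    weighted <- happyguys
    weighted[cols] <- happyguys[cols] * wt                   # each column multiplied element-wise by the weight

For weighted frequencies rather than weighted values, see the sketch under the next result.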
2007 Apr 27
0
like SPSS
...below & it works great. My question is: how do i then calculate the frequencies of smokers (1) versus non-smokers (2) after having weighted my file? or even the process that SPSS is going through to aggregate the data? Thanks, Nat Here is my code: myfile<-("c:/test2.txt") mysubset<-myfile mysubset$Y_Q02 <-mysubset$DVSELF <-NULL mysubset2<-mysubset mysubset2$Y_Q10B <-mysubset2$GP2_07 <-NULL myVariableNames<-c("PUMFID","PROV","REGION","GRADE","Y_Q10A","WTPP") myVariableWidths<-c(5,2,1,2,1,12.4...
2012 Mar 07
4
Subset problem
Good Morning. I have a small question regarding the function subset. I am copying data from one table but I just want to collect the data for one user. When I query the view, it presents the results I want. The problem arises when I build the table for RES_ID: it gives me zero results, which do not appear in the VIEW. val_user='16' x.sub <-
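A minimal sketch of what the truncated code seems to be doing, with assumed names ('mytable' is invented; only RES_ID and val_user appear in the post):

    val_user <- "16"
    x.sub <- subset(mytable, RES_ID == val_user)
    nrow(x.sub)        # zero rows means RES_ID never matches "16" as stored;
                       # check unique(mytable$RES_ID) for type or padding differences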
2004 Feb 02
3
ordering and plotting question
Hi, I am trying to plot several rows out of a list of thousands. I have 40 columns, and about 16,000 rows with the following Df structure. ID X01 X02 X03..X40 AI456 45 64 23... AI943 14 3 45 .. AI278 78 12 68.. BW768 -2 -7 34.. ... My question is, I have a list of 100 IDs generated elsewhere (Df-"Ofinterest"), I would like to plot the 100 IDs from that data frame over the 40 columns
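A sketch of one way to do this, assuming 'Df' holds the full data (ID plus X01..X40) and 'Ofinterest' the 100 IDs, as described in the post; the column name 'ID' inside Ofinterest is an assumption:

    ids  <- Ofinterest$ID                                  # or simply Ofinterest if it is already a vector of IDs
    sel  <- Df[Df$ID %in% ids, ]                           # the ~100 matching rows
    vals <- as.matrix(sel[, paste0("X", sprintf("%02d", 1:40))])
    matplot(t(vals), type = "l", lty = 1,                  # one line per selected ID, across the 40 columns
            xlab = "column (X01..X40)", ylab = "value")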
2013 Feb 21
2
ggplot2, geomtile fill assignment
...1 623 # 2 2 0.0125 0.0375 -0.025 1 654 # 3 3 0.0125 0.0625 -0.025 1 685 # 4 4 0.0125 0.0875 -0.025 1 1598 # 5 5 0.0125 0.1125 -0.025 1 2200 # 6 6 0.0125 0.1375 -0.025 1 1917 depths<- with(input, sort(unique(depth))) depths # [1] 1 2 3 4 mysubset<-function(input, column.name, expression.to.match){ output <- input[column.name==expression.to.match,] return(output) } sub1 <- mysubset(input, input$depth, depths[1]) sub2 <- mysubset(input, input$depth, depths[2]) sub3 <- mysubset(input, input$depth, depths[3]) sub4 <- mysubs...
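The four nearly identical calls in the excerpt can be collapsed with split(), which returns one data frame per depth value in a single step (object names taken from the excerpt):

    subs <- split(input, input$depth)   # subs[["1"]], subs[["2"]], ... correspond to sub1, sub2, ...

Alternatively, keeping the helper but passing the column name rather than the column itself:

    mysubset <- function(input, column.name, value) {
      input[input[[column.name]] == value, , drop = FALSE]
    }
    sub1 <- mysubset(input, "depth", depths[1])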
2003 Oct 20
4
selecting subsets of data from matrix
Probably a stupid question, but I don't seem to be able to find the answer I'm looking for from any of the R literature. Basically I have a matrix with several thousand rows and 20 columns (weather stations) of wind direction data. I am wanting to extract a matrix which contains data for all columns conditional on column 20 having a value of _either_ less than 45 or greater than 315. (ie I
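A minimal sketch, assuming 'wind' is the matrix described (directions in degrees, station 20 in column 20; the name 'wind' is invented):

    keep     <- wind[, 20] < 45 | wind[, 20] > 315   # rows where column 20 is below 45 or above 315
    wind.sub <- wind[keep, , drop = FALSE]           # all 20 columns, only the matching rows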
2013 Jan 03
3
Small changes to big objects (1)
Martin Morgan commented in email to me that a change to any slot of an object that has other, large slot(s) does substantial computation, presumably from copying the whole object. Is there anything to be done? There are in fact two possible changes, one automatic but only partial, the other requiring some action on the programmer's part. Herewith the first; I'll discuss the second
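The copying described here can be observed directly with tracemem() (available when R is built with memory profiling); a small illustration with an invented class, following what the post reports:

    setClass("bigObj", representation(small = "numeric", big = "numeric"))
    x <- new("bigObj", small = 1, big = rnorm(1e7))
    tracemem(x)       # reports each time x is duplicated
    x@small <- 2      # per the post, updating the small slot still duplicates the whole object, big slot included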