Displaying 20 results from an estimated 120 matches similar to: "Merging two data sets"
2009 Oct 15
2
Data frame search and remove questions
Hello,
I have a couple questions about removing rows from a data frame and
creating a new data frame with the removed values. I provided an
example data frame (d) below.
Questions:
1) How can I search for "-999.000" and remove the entire row from data
frame "d"? (all -999 values will be in sd_diff)
2) Can I create a new data frame "d.new" that only contains the
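A minimal sketch of both steps, assuming sd_diff is a numeric column of d and -999.000 is the only flag value used (the stand-in data below is not the original example):

d <- data.frame(id = 1:4, sd_diff = c(0.12, -999, 0.34, -999))  # stand-in for the example data frame

d.new <- d[d$sd_diff == -999, ]   # new data frame holding only the removed rows
d     <- d[d$sd_diff != -999, ]   # data frame with the flagged rows dropped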
2007 Nov 07
1
Aggregate with non-scalar function
R-Helpers,
I'm sorry to have to ask this -- I've not used R very much in the last
8 or 10 months, and I've gotten rusty.
I have the following (ff2 is a subset of a much, much larger dataset):
> ff2
      hostName user sys idle             obsTime
10142     fred  0.4 0.5 98.0 2007-11-01 02:02:18
16886   barney  0.5 0.2 94.6 2007-10-25 19:12:12
8795      fred  0.0 0.1 99.8
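A hedged sketch with a small stand-in for ff2 (only the columns visible above): aggregate() accepts a function that returns a named vector, although the resulting summary columns are then matrices rather than plain columns.

ff2 <- data.frame(hostName = c("fred", "barney", "fred"),
                  user = c(0.4, 0.5, 0.0),
                  sys  = c(0.5, 0.2, 0.1),
                  idle = c(98.0, 94.6, 99.8))

aggregate(cbind(user, sys, idle) ~ hostName, data = ff2,
          FUN = function(x) c(mean = mean(x), max = max(x)))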
2010 Dec 23
1
Running sweave automatically using cygwin
Hi all,
Hope someone could help me.
I am trying to automate the conversion of an Rnw file to a tex file.
I am using Windows 7 and Cygwin.
I tried to run the Sweave.sh script automatically, using the most
recent version available on the R web page:
http://cran.r-project.org/contrib/extra/scripts/Sweave.sh
Unfortunately, I got this error message:
===========================
Raquel at
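One hedged alternative that sidesteps the shell script entirely is to call Sweave() from R itself (or via R CMD Sweave); "report.Rnw" below is only a placeholder file name.

Sweave("report.Rnw")            # writes report.tex to the working directory
tools::texi2pdf("report.tex")   # optional: compile the .tex to PDF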
2008 Jul 25
0
glht after lmer with "$S4class-" and "missing model.matrix-" errors with DATA
maybe it's in the data? So here it comes.
> sv.growth
Grouped Data: length ~ meas | box_id
  meas spec    comp water box_id  sprouts leaves   length long.sprout
1    1   Sv control moist      1 8.800000  37.80 211.2000        60.6
2    1   Sv   xfull moist      2 7.000000   8.00 174.8000        62.8
3    1   Sv control moist      3 9.000000
2004 Dec 29
1
Discrepancy between intervals.lme and coef.lme
I'm using R v2.0.1 on Windows with the nlme package (v3.1-53) and am finding some unexpected discrepancies in the output of intervals.lme and coef.lme. I've included a toy dataset at the end, but briefly, the data are longitudinal data from couples in marital therapy. Each spouse's relationship satisfaction is measured 4 times; I've fit both linear and quadratic models to the
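The toy dataset is cut off above; as a hedged aside, one common source of apparent discrepancy is simply that the two functions report different things, illustrated here with nlme's built-in Orthodont data rather than the original couples data.

library(nlme)
fit <- lme(distance ~ age, random = ~ age | Subject, data = Orthodont)
coef(fit)        # one row per Subject: fixed effects plus that Subject's random effects
fixef(fit)       # the fixed effects alone
intervals(fit)   # confidence intervals for fixed effects and variance components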
2012 Jan 11
1
ANNOUNCE: oz 0.8.0 release
All,
I'm pleased to announce release 0.8.0 of Oz. Oz is a program for
doing automated installation of guest operating systems with limited
input from the user.
Release 0.8.0 is a (long overdue) bugfix and feature release for
Oz. Some of the highlights between Oz 0.7.0 and 0.8.0 are:
- Optional virtualenv make target
- Conversion of unittests to py.test
- Replace
2020 Jun 04
1
virt-v2v: test-v2v-oa-option.sh: output is not preallocated
Hi All,
Thanks to your previous assistance, I'm able to compile virt-v2v
1.42.0 on Debian testing. However, when I run `make check`
tests/test-v2v-oa-option.sh fails with the following log:
-8<------------------------------------------------------------------
test-v2v-oa-option.sh: info: you can skip this test by setting SKIP_TEST_V2V_OA_OPTION_SH=1
[ 0.0] Opening the source -i libvirt
2009 Mar 11
2
Question about datatypes/plotting issue
Hi,
I am trying to plot the Case-Shiller index found at: http://www2.standardandpoors.com/spf/pdf/index/CSHomePrice_History_022445.xls
The way I'm importing it into R is as follows:
library(gdata)
W <- read.xls("http://www2.standardandpoors.com/spf/pdf/index/CSHomePrice_History_022445.xls
", header=TRUE)
attach(W)
To give you and idea of what the data looks like:
>
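A hedged sketch of a common follow-up when read.xls() leaves columns as character text; the column handling is an assumption, since the excerpt is truncated and the URL may no longer resolve, so a hypothetical local copy of the file is used here.

library(gdata)
W <- read.xls("CSHomePrice_History_022445.xls", header = TRUE,
              stringsAsFactors = FALSE)   # hypothetical local copy of the spreadsheet
num_cols <- names(W)[-1]                  # assume the first column holds the dates
W[num_cols] <- lapply(W[num_cols], function(x) as.numeric(gsub(",", "", x)))
str(W)                                    # check the column types before plotting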
2009 Oct 25
1
Datasets for "The Statistical Sleuth"
Hi everyone,
I wonder if there are already any R packages containing all the
data sets for the book "The Statistical Sleuth"
(http://www.proaxis.com/~panorama/home.htm; also available at StatLib
http://lib.stat.cmu.edu/datasets/sleuth).
I'm writing an R package with a friend for one of our stat courses
where SAS is the main tool being used. As the time is limited and half
of the
2005 Aug 02
6
can we manage memory usage to increase speed?
Hi,
Thanks for reading.
I am running a process in R for microarray data analysis. RedHat Enterprise Linux 4, dual AMD CPU, 6G memory. However, the R process uses only a total of <200M of memory, and CPU usage totals about 110% across the two CPUs. The program takes at least 2 weeks to run at the current speed. Is there some way we can increase the usage of CPUs and memory and speed things up? Any
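As a hedged aside: a single R process is single-threaded, so extra cores only help once the work is split across processes. A minimal sketch with the parallel package (part of base R in current versions, not available in 2005); analyse_chip is a hypothetical stand-in for the real per-array analysis step.

library(parallel)
analyse_chip <- function(i) { Sys.sleep(0.1); sqrt(i) }       # stand-in workload
results <- mclapply(seq_len(16), analyse_chip, mc.cores = 2)  # one analysis per fork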
2011 Nov 18
3
tip: large plots
Hi all,
I'm working with a bunch of large graphs, and stumbled across
something useful. Probably many of you know this, but I didn't and so
others might benefit.
Using pch="." speeds up plotting considerably over using symbols.
> x <- runif(1000000)
> y <- runif(1000000)
> system.time(plot(x, y, pch="."))
   user  system elapsed
  1.042   0.030   1.077
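For comparison, a sketch of the same benchmark with the default open-circle symbol; exact timings will of course vary by machine.

x <- runif(1000000)
y <- runif(1000000)
system.time(plot(x, y, pch = 1))   # default symbol; typically several times slower than pch="."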
1999 Nov 09
2
Problems with read.table
Hi, I am using R 65.1 on Windows 95.
I have a CSV file from Excel.
> a<-read.table("c:/heberto/mgc/tst.csv",header=T,sep=",")
> attach(a)
> a
  manolo fvcpp fevpp fvvcpp tlcpp   rvpp rvtlpp plmaxpp
1      1 99.28 97.67  98.38 91.14  102.9 111.25  117.64
2      1 86.97 68.56  78.89 94.60 112.34 118.53  159.20
3      1 81.12 71.76  88.37 89.16
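The rest of the question is cut off above; as a hedged side note, read.csv() is read.table() with header = TRUE and sep = "," preset, which removes two of the arguments that most often go wrong with Excel exports.

a <- read.csv("c:/heberto/mgc/tst.csv")
str(a)   # confirm the column types before attach()ing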
2010 Aug 23
2
lmer() causes segfault
Hello lmer() - users,
A call to the lmer() function causes my installation of R (2.11.1 on
Mac OS X 10.5.8) to crash and I am trying to figure out the problem.
I have a data set with longitudinal data: four successive
performance measures on 1133 individuals nested in 88 groups. The data
is in long format. I hypothesize a performance increase for each
individual over time and intend to
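The excerpt stops before the actual lmer() call, so the following is only a guess at the kind of model described, fitted to simulated data; every variable name here is an assumption.

library(lme4)
set.seed(1)
dat <- expand.grid(individual = factor(1:1133), time = 0:3)   # 4 measures each
dat$group <- factor(as.integer(dat$individual) %% 88)         # 88 groups
dat$performance <- 1 + 0.5 * dat$time +
  rnorm(1133)[dat$individual] +        # individual-level random intercepts
  rnorm(nrow(dat), sd = 0.5)           # residual noise

fit <- lmer(performance ~ time + (1 | group) + (1 | individual), data = dat)
summary(fit)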
2009 Oct 29
5
Summing identical IDs
Hello All,
I would like to select records with identical IDs, sum an attribute, and
return them to the data frame as a single record. Please consider
Acres<-c(100,101,100,130,156,.5,293,300,.09)
Bldgid<-c(1,2,3,4,5,5,6,7,7)
DF=cbind(Acres,Bldgid)
DF<-as.data.frame(DF)
So that:
  Acres Bldgid
1 100.00      1
2 101.00      2
3 100.00      3
4 130.00      4
5 156.00      5
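A minimal sketch of one answer using the example above: aggregate() collapses the duplicate Bldgid rows and sums Acres within each ID.

Acres  <- c(100, 101, 100, 130, 156, .5, 293, 300, .09)
Bldgid <- c(1, 2, 3, 4, 5, 5, 6, 7, 7)
DF <- data.frame(Acres, Bldgid)

DF.sum <- aggregate(Acres ~ Bldgid, data = DF, FUN = sum)
DF.sum   # Bldgid 5 becomes 156.50 and Bldgid 7 becomes 300.09; the rest are unchanged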
2009 Oct 15
1
tapply() and using factor() on a factor
Dear List,
Shouldn't result1 and result2 be equal in the following case?
Note that log$RequestID is a factor. That is, is.factor(log$RequestID)
yields TRUE.
result1 <- tapply(log$Flag,factor(log$RequestID),sum)
result2 <- tapply(log$Flag,log$RequestID,sum)
Yet, when I summarize the output, I get the following:
summary(result1)
Min. 1st Qu. Median Mean 3rd Qu.
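The summaries are cut off above, but as a hedged note, the usual reason the two results differ is unused factor levels: tapply() returns an NA cell for every level of the grouping factor, and wrapping it in factor() again drops the levels that no longer occur. A minimal illustration:

RequestID <- factor(c("a", "a", "b"), levels = c("a", "b", "c"))
Flag <- c(1, 1, 1)
tapply(Flag, RequestID, sum)           # a=2, b=1, c=NA -- unused level kept
tapply(Flag, factor(RequestID), sum)   # a=2, b=1       -- unused level dropped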
2016 May 25
6
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
I've posted this on the forums at https://www.centos.org/forums/viewtopic.php?f=47&t=57926&p=244614#p244614 - posting to the list in the hopes of getting more eyeballs on it.
We have a cluster of 23 HP DL380p Gen8 hosts running Kafka. Basic specs:
2x E5-2650
128 GB RAM
12 x 4 TB 7200 RPM SATA drives connected to an HP H220 HBA
Dual port 10 GB NIC
The drives are configured as one large
2007 Jun 26
1
Subscripting specified variables in a function
I'm trying to create a function which will allow me to subset a data set
based on values of various specified variables. I also want to then
apply some other function(s) (e.g., summary).
This is what I've tried so far....
> test.fx <- function(dta, expvar, expval) {
+ newdta <- subset(dta, eval(expvar)>expval)
+ summary(newdta$eval(expvar))
+ }
>
>
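A hedged sketch of one way to make the function above work: pass the variable name as a character string and index with [[ ]], since $ does not accept a computed name.

test.fx <- function(dta, expvar, expval) {
  newdta <- dta[dta[[expvar]] > expval, ]
  summary(newdta[[expvar]])
}

test.fx(mtcars, "mpg", 20)   # hypothetical usage with a built-in data set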
2010 Feb 24
2
How to read percentage and currency data?
I'm struggling to find any help on this seemingly simple question - how does
one read data with percentage (%) or currency (£, $, etc.) signs? When I try
to read a data file which has any of those symbols in the data fields, they
are read as characters rather than values. Is there a function or library
which can deal with such values?
As an example, I use this sample from one of chinna's
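A minimal sketch of the usual workaround: read the columns as character, strip the symbols, and convert (note a percentage stays on the 0-100 scale unless you divide by 100).

x <- c("$1,200.50", "45%", "$85")
as.numeric(gsub("[^0-9.-]", "", x))   # 1200.5  45.0  85.0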
2011 May 24
0
ProgeCAD Layer drop down menu opening up off screen
rucker222 wrote:
> The drop-down layer selection menu at the top left of the screen, directly above the drawing 1 tab, opens upwards off the screen once the drawing has a few layers in it.
jjmckenzie wrote:
> Log file please.
I loaded a drawing with plenty of layers and then clicked on the layer drop down menu a couple of times before exiting ProgeCAD.
Log file below. (sorry
2016 May 27
2
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
All of our Kafka clusters are fairly write-heavy. The cluster in question is our second-heaviest; we haven't yet upgraded the heaviest, due to the issues we've been experiencing in this one.
Here is an iostat example from a host within the same cluster, but without the RAID check running:
[root at r2k1 ~] # iostat -xdmc 1 10
Linux 3.10.0-327.13.1.el7.x86_64 (r2k1) 05/27/16 _x86_64_ (32 CPU)