similar to: Convert factor to "double"?

Displaying 20 results from an estimated 10000 matches similar to: "Convert factor to "double"?"

2011 Jan 02
1
How to compute the density of a variable that follows a proportional error distribution
Hello, I am trying to compute the density of a variable k that either (1) is normally distributed; (2) is log-normally distributed; or (3) follows a proportional error distribution. I searched R-help and the answer for the normal distribution was easy to find (please see 1c). I am not sure whether my formula for dlnorm is correct (please see 2c below). I really don't know what function to use for the
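For the first two cases, base R's dnorm() and dlnorm() give the density directly; a minimal sketch with made-up parameter values (not the poster's model), evaluated at a single point k:

  k <- 2.5
  d_norm  <- dnorm(k, mean = 2, sd = 0.5)               # (1) normal density at k
  d_lnorm <- dlnorm(k, meanlog = log(2), sdlog = 0.25)  # (2) log-normal density at k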
2010 Mar 17
2
How can I return rows from a data frame with maximum value by factor?
Hi, I'm new to R and new to this forum. I'm struggling with trying to extract certain rows of data from my data.frame. The data.frame has eleven columns. Among those columns are "FISH_ID" and "DATE_TIME". FISH_ID is a factor. For each of my 21 unique FISH_IDs (levels) I have a few to a few thousand rows, each row with a unique DATE_TIME value. I would like to obtain,
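Assuming the goal is the row with the latest DATE_TIME per FISH_ID, one common base-R route is split() plus which.max(); a sketch on a tiny made-up frame, not the poster's eleven-column data:

  fish <- data.frame(FISH_ID   = factor(c("A", "A", "B", "B")),
                     DATE_TIME = as.POSIXct(c("2010-01-01 10:00", "2010-01-02 10:00",
                                              "2010-01-01 09:00", "2010-01-03 12:00")))
  # keep, within each FISH_ID, the row whose DATE_TIME is largest
  latest <- do.call(rbind, lapply(split(fish, fish$FISH_ID),
                                  function(d) d[which.max(d$DATE_TIME), ]))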
2017 Nov 09
1
weighted average grouped by variables
Hello, Using base R only, the following seems to do what you want. with(mydf, ave(speed, date_time, type, FUN = weighted.mean, w = n_vehicles)) Hope this helps, Rui Barradas On 09-11-2017 13:16, Massimo Bressan wrote: > Hello > > an update about my question: I worked out the following solution (with the package "dplyr") > > library(dplyr) > > mydf%>% >
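The proposals in this thread can be checked against a small made-up mydf (hypothetical values, column names as used in the thread) by computing the group-wise weighted means directly as sum(speed * n_vehicles) / sum(n_vehicles):

  mydf <- data.frame(date_time  = c("12:00", "12:00", "12:00", "13:00", "13:00"),
                     type       = c("car", "car", "truck", "car", "car"),
                     speed      = c(50, 60, 40, 55, 45),
                     n_vehicles = c(10, 5, 2, 8, 4))
  grp <- interaction(mydf$date_time, mydf$type, drop = TRUE)
  # weighted mean of speed per date_time/type combination
  tapply(mydf$speed * mydf$n_vehicles, grp, sum) / tapply(mydf$n_vehicles, grp, sum)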
2014 Oct 10
1
fixes for quota support on NetBSD
Hi! dovecot-2.2.13 already has quota support for NetBSD, but it's buggy. The attached patches by Manuel Bouyer <bouyer at NetBSD.org> fix the issues. There is one thing that's not nice in them: one include is now for "/usr/include/quota.h" since dovecot comes with its own file "quota.h" which is earlier in the search path. Perhaps dovecot's copy can be
2006 Jun 15
3
Can I call MySql statements directly??
Hi All. I have a MySQL statement that I would really, really like to call from my Ruby program, which goes like this: SELECT a, b, DAYOFWEEK(date_time) as DOW, HOUR(date_time) as hr, AVG(x/y) FROM records; This is possible by creating a 3-dimensional array of a, b, date_time containing x/y, and then finding averages and putting it into a 4-dimensional array of a, b, dow,
2009 Jul 14
2
hi friends, is there any wait function in R
Hi, is there any wait function in R? I am running one R script to plot many graphs; it is in a for loop. It shows no error but it is not plotting well. I think I can solve this problem with a wait function. Please help me in this regard. If you need any clarification about the programme, you can find the script below. Best regards, Deepak.M.R Biocomputing Group University of Bologna. #!/usr/bin/R
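Base R's Sys.sleep() is the usual answer to this; a minimal sketch (not Deepak's actual script) pausing between plots inside a loop:

  for (i in 1:3) {
    plot(rnorm(100), main = paste("plot", i))
    Sys.sleep(2)  # wait two seconds before the next plot is drawn
  }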
2017 Nov 09
0
weighted average grouped by variables
Hello an update about my question: I worked out the following solution (with the package "dplyr") library(dplyr) mydf%>% mutate(speed_vehicles=n_vehicles*mydf$speed) %>% group_by(date_time,type) %>% summarise( sum_n_times_speed=sum(speed_vehicles), n_vehicles=sum(n_vehicles), vel=sum(speed_vehicles)/sum(n_vehicles) ) In fact I was hoping to manage everything in a
2017 Nov 09
4
weighted average grouped by variables
hi all I have this dataframe (created as a reproducible example) mydf<-structure(list(date_time = structure(c(1508238000, 1508238000, 1508238000, 1508238000, 1508238000, 1508238000, 1508238000), class = c("POSIXct", "POSIXt"), tzone = ""), direction = structure(c(1L, 1L, 1L, 1L, 2L, 2L, 2L), .Label = c("A", "B"), class =
2006 Feb 08
1
Possible AGI Bug in Asterisk?
Dear All, I seem to have stumbled across an AGI problem. I have written an AGI script (bottom of this email). The script does the following: makes a CDR entry when called, records the call, updates the CDR, finds a corresponding DNIS from the SMDR table (captured via a serial port logger), and matches up the record and updates the CDR. The script works perfectly in my test lab and has been doing so
2009 Oct 06
1
ggplot2 applying a function based on facet
Look at the bottom of the message for my question #here is a little function that I wrote USGS <- function(input="discharge", days=7){ library(chron) library(gsubfn) #021973269 is the Waynesboro Gauge on the Savannah River Proper (SRS) #02102908 is the Flat Creek Gauge (ftbrfcms) #02133500 is the Drowning Creek (ftbrbmcm) #02341800 is the Upatoi Creek Near Columbus (ftbn) #02342500 is
2008 Jan 20
4
read.table: wrong error message? (PR#10592)
I believe read.table may report misleading errors. In this example, where a header line in a file has an incorrect number of row names (28 instead of 29), I get the error message "duplicate row.names are not allowed". However, I cannot find any
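The class of error described can be reproduced on a small scale (a sketch, not the reporter's 29-column file): when the header line has one field fewer than the data lines, read.table takes the first data column as row names, and duplicates there produce exactly that message.

  txt <- "a b\nx 1 2\nx 3 4"
  try(read.table(text = txt, header = TRUE))  # duplicate 'row.names' are not allowed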
2010 Oct 27
1
Fill in missing times in a timeseries with NA
Hi, I have an irregularly spaced time series dataset, which is read in from a .csv. I need to convert this to a regularly spaced time series by filling in missing rows of data with NAs. So my data, called NtuMot, looks like this (I've removed some of the additional rows for simplicity).... ELEID date_time height slope 1 2009-06-24 00:00:00
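One common recipe (a sketch on made-up data, since the full NtuMot frame is not shown) is to build the complete regular sequence of timestamps and merge, so the missing times come back as NA rows:

  obs <- data.frame(date_time = as.POSIXct(c("2009-06-24 00:00:00",
                                             "2009-06-24 03:00:00")),
                    height = c(1.2, 1.5))
  full <- data.frame(date_time = seq(min(obs$date_time), max(obs$date_time), by = "hour"))
  regular <- merge(full, obs, all.x = TRUE)  # hours with no observation get NA in height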
2009 Jun 11
3
deSolve question
Dear All, I would like to simulate a physiologically based pharmacokinetics model using R but am having a problem with the daspk routine. The same model has been implemented in Berkeley Madonna and WinBUGS, so I know that it is working. However, with daspk it is not, and the numbers are everywhere! Please see the following and let me know if I am missing something... Thanks a lot in advance,
2017 Nov 09
2
weighted average grouped by variables
Hi, thanks for the working example. You could use a split/lapply approach; however, it is probably not much better than the dplyr method. sapply(split(mydf, mydf$type), function(speed, n_vehicles) sum(mydf$speed*mydf$n_vehicles)/sum(mydf$n_vehicles)) gives you averages aggregate(mydf$n_vehicles, list(mydf$type), sum)$x gives you sums Cheers Petr > -----Original Message----- > From: R-help
2017 Nov 11
0
weighted average grouped by variables
> On 9 Nov 2017, at 14:58, PIKAL Petr <petr.pikal at precheza.cz> wrote: > > Hi > > Thanks for working example. > > you could use split/ lapply approach, however it is probably not much better than dplyr method. > > sapply(split(mydf, mydf$type), function(speed, n_vehicles) sum(mydf$speed*mydf$n_vehicles)/sum(mydf$n_vehicles)) > gives you averages > The
2010 May 18
2
Function that is giving me a headache- any help appreciated (automatic read )
Note: the whole function is below - I am sure I am doing something silly. When I use it like USGS(input="precipitation") it is choking on the precip.1 <- subset(DF, precipitation!="NA") b <- ddply(precip.1$precipitation, .(precip.1$gauge_name), cumsum) DF.precip <- precip.1 DF.precip$precipitation <- b$.data part, but it runs fine outside of the function: days=7
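Without the full data it is hard to say why the call fails only inside the function, but the apparent intent (drop missing precipitation and accumulate it per gauge) can be sketched in base R on made-up data; note is.na() rather than comparing against the string "NA":

  DF <- data.frame(gauge_name    = c("g1", "g1", "g1", "g2", "g2"),
                   precipitation = c(0.1, NA, 0.3, 0.2, 0.4))
  precip <- DF[!is.na(DF$precipitation), ]  # keep rows with an actual value
  precip$cum <- ave(precip$precipitation, precip$gauge_name, FUN = cumsum)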
2017 Nov 09
1
weighted average grouped by variables
Dear Massimo, It seems straightforward to use weighted.mean() in a dplyr context library(dplyr) mydf %>% group_by(date_time, type) %>% summarise(vel = weighted.mean(speed, n_vehicles)) Best regards, ir. Thierry Onkelinx Statisticus / Statistician Vlaamse Overheid / Government of Flanders INSTITUUT VOOR NATUUR- EN BOSONDERZOEK / RESEARCH INSTITUTE FOR NATURE AND FOREST Team
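Thierry's weighted.mean() suggestion can be tried on the same kind of small made-up mydf used above (hypothetical values, not Massimo's data):

  library(dplyr)
  mydf <- data.frame(date_time  = c("12:00", "12:00", "12:00", "13:00", "13:00"),
                     type       = c("car", "car", "truck", "car", "car"),
                     speed      = c(50, 60, 40, 55, 45),
                     n_vehicles = c(10, 5, 2, 8, 4))
  mydf %>%
    group_by(date_time, type) %>%
    summarise(vel = weighted.mean(speed, n_vehicles))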
2010 Jun 07
1
Patch for legend.position={left,top,bottom} in ggplot2
Hi Hadley and everyone, here's a patch for ggplot2 that fixes the behavior of opts(legend.position={left,top,bottom}). If you try the following code in an unmodified ggplot2 options(warn = -1) suppressPackageStartupMessages(library("ggplot2")) data <- data.frame( x = c(1, 2, 3, 4, 5, 6), y = c(2, 3, 4, 3, 4, 5), colour = c(TRUE, TRUE, TRUE, FALSE, FALSE, FALSE))
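For readers landing here with a current ggplot2, the option being patched is now set through theme() rather than the opts() call shown in the post; a minimal sketch using the post's data frame:

  library(ggplot2)
  data <- data.frame(x = c(1, 2, 3, 4, 5, 6),
                     y = c(2, 3, 4, 3, 4, 5),
                     colour = c(TRUE, TRUE, TRUE, FALSE, FALSE, FALSE))
  ggplot(data, aes(x, y, colour = colour)) +
    geom_point() +
    theme(legend.position = "bottom")  # also accepts "left", "top", and "right"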
2007 Dec 26
1
seekViewport error
Why does the seekViewport at the bottom give an error? > xyplot(Sepal.Length ~ Sepal.Width, iris, group = Species, col = 11:13, + auto.key = TRUE) > grid.ls(view = TRUE) ROOT GRID.rect.89 plot1.toplevel.vp plot1.xlab.vp plot1.xlab 1 plot1.ylab.vp plot1.ylab 1 plot1.strip.1.1.off.vp GRID.segments.90 1 plot1.strip.left.1.1.off.vp
2009 Oct 06
2
ggplot cumsum refined question (?)
OK, so maybe last night was a little too much at one throw, so I have reduced the data to two stations - one that has precipitation and one that does not. This is going to be in the context of a larger data set. I would like to be able to issue a ggplot command and have cumsum just act on the facets (factors). library(chron) library(ggplot2) DF <- structure(list(date_time =
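One usual pattern (a sketch on made-up data, since the posted DF is truncated here) is to compute the cumulative sum within each station before handing the data to ggplot, then facet on station:

  library(ggplot2)
  DF <- data.frame(station   = rep(c("s1", "s2"), each = 5),
                   date_time = rep(1:5, 2),
                   precip    = c(0, 1, 0, 2, 1, 0, 0, 0, 0, 0))
  DF$cum_precip <- ave(DF$precip, DF$station, FUN = cumsum)
  ggplot(DF, aes(date_time, cum_precip)) + geom_line() + facet_wrap(~ station)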