Displaying 20 results from an estimated 5000 matches similar to: "Truncating dates (and other date-time manipulations)"
2017 Nov 09
4
weighted average grouped by variables
Hi all,
I have this data frame (created as a reproducible example):
mydf<-structure(list(date_time = structure(c(1508238000, 1508238000, 1508238000, 1508238000, 1508238000, 1508238000, 1508238000), class = c("POSIXct", "POSIXt"), tzone = ""),
direction = structure(c(1L, 1L, 1L, 1L, 2L, 2L, 2L), .Label = c("A", "B"), class =
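The dput() output above is cut off in this search preview. For trying out the answers further down this page, a small stand-in data set helps; the following is a hypothetical sketch only - the column names (date_time, direction, type, n_vehicles, speed) are inferred from the rest of the thread and the values are invented.
# Hypothetical stand-in for the truncated mydf above; column names and values
# are assumptions based on the rest of the thread, not the original data.
mydf <- data.frame(
  date_time  = as.POSIXct("2017-10-17 13:00:00") + rep(c(0, 3600), each = 4),
  direction  = rep(c("A", "B"), times = 4),
  type       = rep(c("car", "truck"), each = 2, times = 2),
  n_vehicles = c(10, 3, 8, 2, 12, 4, 9, 1),
  speed      = c(52, 48, 55, 40, 50, 45, 53, 38)
)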
2017 Nov 09
0
weighted average grouped by variables
Hello
an update about my question: I worked out the following solution (with the package "dplyr")
library(dplyr)
mydf %>%
  mutate(speed_vehicles = n_vehicles * speed) %>%
  group_by(date_time, type) %>%
  summarise(
    sum_n_times_speed = sum(speed_vehicles),
    n_vehicles = sum(n_vehicles),
    vel = sum(speed_vehicles) / sum(n_vehicles)
  )
In fact I was hoping to manage everything in a
2006 Jun 15
3
Can I call MySql statements directly??
Hi All.
I have a mysql statement that I would really really like to call from my
Ruby program which goes like this:
SELECT a, b, DAYOFWEEK(date_time) as DOW,
HOUR(date_time) as hr,
AVG(x/y)
FROM records
GROUP BY a, b, DOW, hr;
This is possible by creating a 3-dimensional array of a, b, date_time
containing x/y, and then finding averages and putting it into a
4-dimensional array of a, b, dow,
2017 Nov 09
1
weighted average grouped by variables
Hello,
Using base R only, the following seems to do what you want.
with(mydf, ave(speed, date_time, type, FUN = weighted.mean, w = n_vehicles))
Hope this helps,
Rui Barradas
On 09-11-2017 13:16, Massimo Bressan wrote:
> Hello
>
> an update about my question: I worked out the following solution (with the package "dplyr")
>
> library(dplyr)
>
> mydf%>%
>
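A caveat worth noting about the one-liner above: ave() passes extra arguments in its ... to interaction() as grouping variables rather than forwarding them to FUN, so the w = n_vehicles part may end up being treated as another grouping factor instead of a weights vector. A hedged base-R variant (using the hypothetical mydf sketched earlier) that makes the weights visible to weighted.mean() by working on row indices:
# Per-row weighted mean of speed by (date_time, type); ave() over the row
# indices lets weighted.mean() see both speed and n_vehicles for each group.
mydf$vel <- with(mydf, ave(seq_along(speed), date_time, type,
                           FUN = function(i) weighted.mean(speed[i], n_vehicles[i])))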
2010 Oct 27
1
Fill in missing times in a timeseries with NA
Hi,
I have an irregularly spaced time series dataset, which I read in from a .csv.
I need to convert this to a regularly spaced time series by filling in
missing rows of data with NAs.
So my data, called NtuMot, looks like this (I've removed some of the
additional rows for simplicity)....
ELEID date_time height slope
1 2009-06-24 00:00:00
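A common base-R route for this is to build a complete, regular sequence of timestamps and merge it with the irregular data, so that the missing times come back as rows of NA. A minimal sketch under assumed names (NtuMot with a POSIXct date_time column; the 30-minute spacing is only an illustration, adjust "by" to the real interval):
# Hypothetical sketch: make NtuMot regular by merging it onto a full time grid.
# Assumes date_time is already POSIXct; times absent from NtuMot get NA values.
full_grid <- data.frame(
  date_time = seq(min(NtuMot$date_time), max(NtuMot$date_time), by = "30 min")
)
NtuMot_regular <- merge(full_grid, NtuMot, by = "date_time", all.x = TRUE)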
2006 Feb 08
1
Possible AGI Bug in Asterisk?
Dear All,
I seem to have stumbled across an AGI problem.
I have written an AGI script (bottom of this email).
The script does the following:
Makes a CDR entry when called
Records the call
Updates the CDR
Finds a corresponding DNIS from the SMDR table (captured via a serial
port logger)
Matches up the record and updates the CDR.
The script works perfectly in my test lab and has been doing so
2009 Oct 06
1
ggplot2 applying a function based on facet
Look at the bottom of the message for my question
#here is a little function that I wrote
USGS <- function(input="discharge", days=7){
library(chron)
library(gsubfn)
#021973269 is the Waynesboro Gauge on the Savannah River Proper (SRS)
#02102908 is the Flat Creek Gauge (ftbrfcms)
#02133500 is the Drowning Creek (ftbrbmcm)
#02341800 is the Upatoi Creek Near Columbus (ftbn)
#02342500 is
2017 Nov 09
2
weighted average grouped by variables
Hi
Thanks for the working example.
You could use a split/lapply approach; however, it is probably not much better than the dplyr method.
sapply(split(mydf, mydf$type), function(speed, n_vehicles) sum(mydf$speed*mydf$n_vehicles)/sum(mydf$n_vehicles))
gives you averages
aggregate(mydf$n_vehicles, list(mydf$type), sum)$x
gives you sums
Cheers
Petr
> -----Original Message-----
> From: R-help
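As written, the anonymous function above never uses its arguments and always sums over the whole of mydf, so every element of the result is the same number. A hedged correction of the same split/sapply idea (again relying on the hypothetical mydf sketched near the top of this page), computing each group's value from the piece that split() hands over:
# Weighted mean of speed per type, computed from each split piece rather than
# from the full mydf.
sapply(split(mydf, mydf$type),
       function(d) sum(d$speed * d$n_vehicles) / sum(d$n_vehicles))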
2017 Nov 11
0
weighted average grouped by variables
> On 9 Nov 2017, at 14:58, PIKAL Petr <petr.pikal at precheza.cz> wrote:
>
> Hi
>
> Thanks for the working example.
>
> You could use a split/lapply approach; however, it is probably not much better than the dplyr method.
>
> sapply(split(mydf, mydf$type), function(speed, n_vehicles) sum(mydf$speed*mydf$n_vehicles)/sum(mydf$n_vehicles))
> gives you averages
>
The
2010 Mar 17
2
How can I return rows from a data frame with maximum value by factor?
Hi,
I'm new to R and new to this forum. I'm struggling to extract certain rows
of data from my data.frame. The data.frame has eleven columns.
Among those columns are "FISH_ID" and "DATE_TIME". FISH_ID is a factor. For
each of my 21 unique FISH_IDs (levels) I have a few to a few thousand rows,
each row with a unique DATE_TIME value. I would like to obtain,
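One base-R way to get, for each FISH_ID, the row with the latest DATE_TIME is to split the data frame by the factor and keep the row where DATE_TIME is maximal. A sketch under assumed names (a data frame fish with a FISH_ID factor and a POSIXct DATE_TIME column; the real object and column types may differ):
# Hypothetical sketch: one row per FISH_ID, namely the row with the maximum
# DATE_TIME within that fish's records.
latest <- do.call(rbind, lapply(split(fish, fish$FISH_ID),
                                function(d) d[which.max(d$DATE_TIME), ]))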
2010 May 18
2
Function that is giving me a headache- any help appreciated (automatic read )
note: the whole function is below - I am sure I am doing something silly.
When I use it like USGS(input="precipitation") it chokes on the
precip.1 <- subset(DF, precipitation!="NA")
b <- ddply(precip.1$precipitation, .(precip.1$gauge_name), cumsum)
DF.precip <- precip.1
DF.precip$precipitation <- b$.data
part, but runs fine outside of the function:
days=7
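A side note on the precip.1 <- subset(DF, precipitation != "NA") step shown above: if precipitation is numeric, its missing values are real NA rather than the string "NA", so the comparison evaluates to NA for those rows and they are only dropped because subset() treats NA conditions as FALSE. An explicit is.na() test, assuming the DF and column names from the snippet, is less fragile and also works with plain [ indexing:
# Drop rows with missing precipitation by testing is.na() directly rather
# than comparing the values against the string "NA".
precip.1 <- DF[!is.na(DF$precipitation), ]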
2011 May 18
3
Date_Time detected as Duplicated (but they are not!)
I have a problem with duplicated date_time stamps that I do not see as
duplicated.
I read a file with observations taken every 30 minutes:
> aur2009=read.csv(paste(datadir,"AUR_ECPP_2009.csv",sep="/"),sep=";",stringsAsFactors=F)
> aur2009[1:3,1:5]
Date.Time E_filled E_filled_flag LE_filled LE_filled_flag
1 1/1/2009 0:00 0 NaN 5.86
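Spurious duplicates in converted timestamps usually come from the conversion step rather than from the data itself: a format string that does not match (so several rows become NA) or a daylight-saving timezone that cannot represent some of the local times. A hedged sketch, assuming the Date.Time column shown above and a day/month/year ordering, that parses explicitly in a fixed timezone before checking:
# Parse Date.Time explicitly (day/month/year hour:minute is an assumption)
# in UTC, so daylight-saving transitions cannot fold two stamps together.
aur2009$date_time <- as.POSIXct(aur2009$Date.Time,
                                format = "%d/%m/%Y %H:%M", tz = "UTC")
which(duplicated(aur2009$date_time))  # rows still flagged as duplicates, if any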
2006 Sep 12
1
openssh (OpenBSD) , bsdauth and tis authsrv
nuqneH,
I've tried using TIS authsrv authentication via bsd auth and found
it quite limited. The most important restriction is that it does not log
the IP and FQDN of the remote peer, nor the application name, to
the authentication server. That does not matter much for TIS authsrv,
but since other applications do provide such information, our
authsrv version uses it for extra authentication restrictions.
2009 Oct 06
2
ggplot cumsum refined question (?)
OK, so maybe last night was a little too much in one go, so I have
reduced the data to two stations - one that has precipitation and one
that does not. This is going to be in the context of a larger data
set. I would like to be able to issue a ggplot command and have
cumsum applied within each facet (factor).
library(chron)
library(ggplot2)
DF <- structure(list(date_time =
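A straightforward way to get per-facet cumulative sums is to add the running total as a column, grouped by the faceting variable, before calling ggplot. The DF definition above is cut off by the preview, so the following is a hedged sketch assuming DF has date_time, gauge_name and precipitation columns (with no NAs left in precipitation):
# Hypothetical sketch: cumulative precipitation per gauge, computed before
# plotting so that each facet shows its own running total.
library(ggplot2)
DF$cum_precip <- ave(DF$precipitation, DF$gauge_name, FUN = cumsum)
ggplot(DF, aes(x = date_time, y = cum_precip)) +
  geom_line() +
  facet_wrap(~ gauge_name)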
2017 Nov 09
1
weighted average grouped by variables
Dear Massimo,
It seems straightforward to use weighted.mean() in a dplyr context
library(dplyr)
mydf %>%
  group_by(date_time, type) %>%
  summarise(vel = weighted.mean(speed, n_vehicles))
Best regards,
ir. Thierry Onkelinx
Statisticus / Statistician
Vlaamse Overheid / Government of Flanders
INSTITUUT VOOR NATUUR- EN BOSONDERZOEK / RESEARCH INSTITUTE FOR NATURE AND
FOREST
Team
2010 May 14
2
Subscripting a matrix-like object
I have an S3 class called "tis" (Time Indexed Series) which may or may
not have multiple columns. I have a function "[<-.tis" that I've
reproduced below.
My question is this: inside of "[<-.tis", how can I distinguish between
calls of the form
x[i] <- someValue
and
x[i,] <- someValue ?
In either case, nargs() is 3, and looking at the values
2008 Dec 02
2
question about the tisPlot function in package tis
List,
I am using the 'tisPlot' function in Jeff Hallman's excellent tis package
and was hoping that someone could spare me from having to dig into the
code of his 'tisPlot' function. So far as I can tell, the preferred
method of controlling the plotting of the x-axis is using the 'xTickFreq'
and 'xTickSkip' options. Unfortunately, where the data ends
2005 Dec 15
6
passing parameters to link_to OR better way to do this?
Hi All:
I'm writing my 1st Rails app and I can't seem to find the answer on
the web or in the book.
I'm making a table, and I want to be able to expand a filename. The
code is basically as follows. In the last <td> entry, I want
to call an action and pass in the test_results_path, which I will use to go
and read a file and munge the data for a separate
2006 Jul 19
4
sorting and pagination
Hello All,
Okay, I think I'm finally getting all of what I want out of Ferret
working, thanks mostly to reading this forum and also getting a lot of
questions answered - thanks a lot, everyone. Anyway, my last Ferret task is
to get the results sorted by a field called date_registered and have
this working with pagination.
Here is what I'm doing at the moment: