Displaying 20 results from an estimated 100 matches similar to: "ggplot2 applying a function based on facet"
2010 May 18
2
Function that is giving me a headache - any help appreciated (automatic read)
note: the whole function is below - I am sure I am doing something silly.
When I use it like USGS(input="precipitation") it chokes on the
following part, but the same lines run fine outside of the function:

precip.1 <- subset(DF, precipitation!="NA")
b <- ddply(precip.1$precipitation, .(precip.1$gauge_name), cumsum)
DF.precip <- precip.1
DF.precip$precipitation <- b$.data

days=7
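
One likely culprit: ddply() expects a data frame as its first argument, not a bare vector, and precipitation!="NA" only works if the missing values really are the string "NA". A minimal sketch of the per-gauge running total, assuming DF has a numeric precipitation column with real NAs and a gauge_name column (cum_precip is a made-up name):

library(plyr)

# drop rows with missing precipitation (is.na(), not the string "NA")
precip.1 <- subset(DF, !is.na(precipitation))

# ddply() takes a data frame, a grouping spec and a function per piece;
# transform() adds the running total within each gauge
DF.precip <- ddply(precip.1, .(gauge_name), transform,
                   cum_precip = cumsum(precipitation))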
2009 Oct 06
2
ggplot cumsum refined question (?)
OK, so maybe last night was a little too much at one throw, so I have
reduced the data to two stations - one that has precipitation and one
that does not. This is going to be in the context of a larger data
set. I would like to be able to issue a ggplot command and have
cumsum act on the facets (factors) separately.
library(chron)
library(ggplot2)
DF <- structure(list(date_time =
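
The reduced dataset is cut off above, but one way to get a per-facet running total is to compute the cumulative sum per station before plotting and then facet on the station. A minimal sketch, with the column names station, date_time and precipitation guessed from the description:

library(ggplot2)
library(plyr)

# cumulative precipitation computed per station before plotting,
# so each facet shows its own running total
DF2 <- ddply(DF, .(station), transform, cum_precip = cumsum(precipitation))

ggplot(DF2, aes(x = date_time, y = cum_precip)) +
  geom_line() +
  facet_wrap(~ station)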
2017 Nov 09
4
weighted average grouped by variables
hi all
I have this dataframe (created as a reproducible example)
mydf<-structure(list(date_time = structure(c(1508238000, 1508238000, 1508238000, 1508238000, 1508238000, 1508238000, 1508238000), class = c("POSIXct", "POSIXt"), tzone = ""),
direction = structure(c(1L, 1L, 1L, 1L, 2L, 2L, 2L), .Label = c("A", "B"), class =
2017 Nov 09
1
weighted average grouped by variables
Hello,
Using base R only, the following seems to do what you want.
with(mydf, ave(speed, date_time, type, FUN = weighted.mean, w = n_vehicles))
Hope this helps,
Rui Barradas
On 09-11-2017 13:16, Massimo Bressan wrote:
> Hello
>
> an update about my question: I worked out the following solution (with the package "dplyr")
>
> library(dplyr)
>
> mydf%>%
>
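
For comparison, the same per-group weighted mean can be spelled out with by(), using the speed, n_vehicles, date_time and type columns named in the thread - just a sketch:

# weighted mean of speed within each (date_time, type) group,
# weighted by the number of vehicles
by(mydf, list(mydf$date_time, mydf$type),
   function(d) weighted.mean(d$speed, d$n_vehicles))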
2006 Jun 15
3
Can I call MySql statements directly??
Hi All.
I have a mysql statement that I would really really like to call from my
Ruby program which goes like this:
SELECT a, b, DAYOFWEEK(date_time) as DOW,
HOUR(date_time) as hr,
AVG(x/y)
FROM records;
This is possible by creating a 3-dimensional array of a, b, date_time
containing x/y, and then finding averages and putting it into a
4-dimensional array of a, b, dow,
2017 Nov 09
0
weighted average grouped by variables
Hello
an update about my question: I worked out the following solution (with the package "dplyr")
library(dplyr)
mydf %>%
  mutate(speed_vehicles = n_vehicles * speed) %>%
  group_by(date_time, type) %>%
  summarise(
    sum_n_times_speed = sum(speed_vehicles),
    n_vehicles = sum(n_vehicles),
    vel = sum(speed_vehicles) / sum(n_vehicles)
  )
In fact I was hoping to manage everything in a
2006 Feb 08
1
Possible AGI Bug in Asterisk?
Dear All,
I seem to have stumbled across an AGI problem;
I have written an AGI Script (bottom of this email);
The script does the following:
Makes a CDR entry when called
Records the call
Updates the CDR
Finds a corresponding DNIS from the SMDR table (captured via a serial
port logger)
Matches up the record and updates the CDR.
The script works perfectly in my test lab and has been doing so
2010 Oct 27
1
Fill in missing times in a timeseries with NA
Hi,
I have an irregularly spaced time series dataset, which I read in from a .csv.
I need to convert this to a regularly spaced time series by filling in
missing rows of data with NAs.
So my data, called NtuMot, looks like this (I've removed some of the
additional rows for simplicity)....
ELEID date_time height slope
1 2009-06-24 00:00:00
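
A common approach is to build the complete sequence of expected time stamps and merge it against the data, so the missing rows come back as NA. A sketch, assuming date_time has already been converted to POSIXct and taking a 30-minute spacing purely for illustration:

# full grid of expected time stamps (the spacing is an assumption here)
full <- data.frame(date_time = seq(min(NtuMot$date_time),
                                   max(NtuMot$date_time),
                                   by = "30 min"))

# all.x = TRUE keeps every expected stamp; unmatched rows are filled with NA
NtuMot.full <- merge(full, NtuMot, by = "date_time", all.x = TRUE)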
2010 Mar 17
2
How can I return rows from a data frame with maximum value by factor?
Hi,
I'm new to R and new to this forum. I'm struggling to extract
certain rows of data from my data.frame. The data.frame has eleven columns.
Among those columns are "FISH_ID" and "DATE_TIME". FISH_ID is a factor. For
each of my 21 unique FISH_IDs (levels) I have a few to a few thousand rows,
each row with a unique DATE_TIME value. I would like to obtain,
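
The request is cut off above, but if the goal is, say, the row with the largest DATE_TIME per FISH_ID, a base-R sketch could look like this (fish is a placeholder name for the data frame; swap DATE_TIME for whichever column holds the value of interest):

# one row per FISH_ID: the row holding that group's maximum DATE_TIME
picked <- do.call(rbind,
                  lapply(split(fish, fish$FISH_ID),
                         function(d) d[which.max(d$DATE_TIME), ]))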
2017 Nov 09
1
weighted average grouped by variables
Dear Massimo,
It seems straightforward to use weighted.mean() in a dplyr context
library(dplyr)
mydf %>%
  group_by(date_time, type) %>%
  summarise(vel = weighted.mean(speed, n_vehicles))
Best regards,
ir. Thierry Onkelinx
Statisticus / Statistician
Vlaamse Overheid / Government of Flanders
INSTITUUT VOOR NATUUR- EN BOSONDERZOEK / RESEARCH INSTITUTE FOR NATURE AND
FOREST
Team
2017 Nov 09
2
weighted average grouped by variables
Hi
Thanks for working example.
You could use a split/lapply approach; however, it is probably not much better than the dplyr method.
sapply(split(mydf, mydf$type), function(speed, n_vehicles) sum(mydf$speed*mydf$n_vehicles)/sum(mydf$n_vehicles))
gives you averages
aggregate(mydf$n_vehicles, list(mydf$type), sum)$x
gives you sums
Cheers
Petr
> -----Original Message-----
> From: R-help
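
Note that the sapply() one-liner above never uses its speed and n_vehicles arguments - the body refers to mydf globally, so every group comes back with the same overall ratio and date_time is ignored. A per-group version of the same split/sapply idea, with the column names from the thread, might be:

# one piece per (date_time, type) combination actually present in the data
pieces <- split(mydf, list(mydf$date_time, mydf$type), drop = TRUE)

# weighted average speed within each piece
sapply(pieces, function(d) sum(d$speed * d$n_vehicles) / sum(d$n_vehicles))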
2006 Jun 29
1
Newbie: Help Please - Model Validation Error
Hi,
I'd be grateful for your help.
I get the error (see below) every time I add the
following into a newly generated (via scaffold) model
class:
validates_presence_of :myname, :mymessage, :mytel
Without it, I can insert records into my database.
With it, I get the error :(
ERROR>>
ArgumentError in AdminController#create
wrong number of arguments (1 for 0)
RAILS_ROOT:
2017 Nov 11
0
weighted average grouped by variables
> On 9 Nov 2017, at 14:58, PIKAL Petr <petr.pikal at precheza.cz> wrote:
>
> Hi
>
> Thanks for working example.
>
> You could use a split/lapply approach; however, it is probably not much better than the dplyr method.
>
> sapply(split(mydf, mydf$type), function(speed, n_vehicles) sum(mydf$speed*mydf$n_vehicles)/sum(mydf$n_vehicles))
> gives you averages
>
The
2011 May 18
3
Date_Time detected as Duplicated (but they are not!)
I have a problem with duplicated date_time stamps that I do not see as
duplicated.
I read a file with observations taken every 30 minutes:
> aur2009=read.csv(paste(datadir,"AUR_ECPP_2009.csv",sep="/"),sep=";",stringsAsFactors=F)
> aur2009[1:3,1:5]
Date.Time E_filled E_filled_flag LE_filled LE_filled_flag
1 1/1/2009 0:00 0 NaN 5.86
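
One way to see exactly which stamps are flagged is to parse Date.Time explicitly and pull out the rows whose parsed value repeats. A sketch, assuming a month/day/year layout as in the sample row; with half-hourly data, the daylight-saving fall-back is a common source of genuinely repeated clock times:

dt <- as.POSIXct(aur2009$Date.Time, format = "%m/%d/%Y %H:%M", tz = "UTC")

# rows whose parsed time stamp occurs more than once
aur2009[duplicated(dt) | duplicated(dt, fromLast = TRUE), 1:5]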
2008 Mar 17
0
arules - getting transaction data in
Hi All
Hoping someone can help me with the "transactions" object. I am struggling
to get my data in. I know the answer is in the help somewhere I'm sure, I
just cannot find it. Essentially, I have data in this format (though I can
change it if it is particularly unsuitable)
Transaction_id, store , salesman, date_time , items
1 , waterfront, john ,
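
If the data arrives as one row per transaction/item pair, a transactions object can be built by splitting the item column on the transaction id and coercing the resulting list - a sketch with a hypothetical data frame called sales (if items instead holds a comma-separated list per row, it would need strsplit() first):

library(arules)

# list of item vectors, one element per transaction, then coerce
items_by_txn <- split(as.character(sales$items), sales$Transaction_id)
trans <- as(items_by_txn, "transactions")
summary(trans)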
2009 Apr 03
1
Convert factor to "double"?
Hi!
I'm reading a tab-separated CSV file with:
test1 <- read.table("data.txt", header=TRUE)
It's in the following format:
Date_Time qK qL vL vP ...
0 30 22 110 88 ...
...
(BTW: It seems to me R shifts the column descriptions by one.)
Anyway, I would like to Fourier-transform one column. So I say:
> fft(test1$vP)
Error in levels(x)[x] : invalid subscript type
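
That error is typical of handing a factor to fft(). Converting via character keeps the printed values, whereas as.numeric() alone would return the internal level codes. A sketch:

# factor -> character -> numeric, then transform
vP_num <- as.numeric(as.character(test1$vP))
fft(vP_num)

The apparent one-column shift usually means read.table() treated the first data column as row names because the header line had one field fewer than the data lines.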
2005 Dec 15
6
passing parameters to link_to OR better way to do this?
Hi All:
I'm writing my 1st Rails app and I can't seem to find the answer on
the web or in the book.
I'm making a table, and I want to be able to expand a filename. The
code is basically as follows. In the last <td> entry, I want
to call an action and pass in the test_results_path, which I will go
and read a file and munge the data for a separate
2006 Jul 19
4
sorting and pagination
Hello All,
Okay, I think I'm finally getting all of what I want out of ferret
working, thanks mostly to reading this forum and also getting a lot of
questions answered - thanks a lot everyone. Anyway, my last ferret task
is to get the results sorted by a field called date_registered and have
this working with pagination.
Here is what I'm doing at the moment:
2006 Jul 31
0
MY worker won't stop working
> On Jul 30, 2006, at 5:11 PM, Chris H wrote:
>
>> Hi Ezra,
>>
>> thanks for the reply.
>>
>> There's a ruby process that appears in top when I fire off the do_work
>> method.
>> It uses around 30-50% cpu and disappears once all processing has
>> completed.
>>
>> When I try to stop processing using delete_worker I was
2008 Dec 20
1
How to do indexing after splitting my data-frame?
Hello,
After splitting a data frame I want to access the results.
Maybe the problem is that the factor/index is a string...
...or am I missing some detail of the index usage?
Please take a look and help:
=======================================
> weblog <- read_weblog("web.log")
>
>
> str(weblog)
'data.frame': 2247 obs. of 18 variables:
$ host : Factor w/ 77
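
split() returns a named list with one data frame per factor level, so individual pieces can be picked out by the level's name with [[ ]]. A sketch (the level name is made up):

by_host <- split(weblog, weblog$host)
names(by_host)                 # the 77 host levels
by_host[["www.example.com"]]   # all rows for one host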