similar to: xyplot() does not plot legends with "relation=free" scales

Displaying 20 results from an estimated 6000 matches similar to: "xyplot() does not plot legends with "relation=free" scales"

2012 Jul 30
2
distance matrix and hclustering
Dear R users, I am very new to R. I would like your help with an issue regarding a distance matrix and cluster analysis. I have discharge data for 4 rivers (a, b, c, d) in 4 vectors, each holding 364 values: > dput(qmu) structure(list(a = c(0.26, 0.25, 0.25, 0.25, 0.24, 0.23, 0.22, 0.21, 0.21, 0.21, 0.2, 0.19, 0.19, 0.19, 0.19, 0.18, 0.18, 0.18, 0.17, 0.17, 0.17, 0.17, 0.17, 0.17, 0.17, 0.17, 0.17, 0.17, 0.17, 0.17,
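The excerpt stops before the question itself, but a minimal clustering sketch, assuming qmu is the list shown above (rivers as columns), might look like:

qmu.df <- as.data.frame(qmu)            # 364 rows (days), 4 columns (rivers)
d <- dist(t(qmu.df))                    # Euclidean distances between the 4 rivers
hc <- hclust(d, method = "average")     # agglomerative clustering
plot(hc, main = "Rivers clustered by daily discharge")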
2009 Oct 06
2
ggplot cumsum refined question (?)
OK, so maybe last night was a little too much at one go, so I have reduced the data to two stations: one that has precipitation and one that does not. This is going to be in the context of a larger data set. I would like to be able to issue a ggplot command and have cumsum act on just the facets (factors). library(chron) library(ggplot2) DF <- structure(list(date_time =
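A minimal sketch of a per-facet cumulative sum, assuming DF has columns station and precip (the dput() above is truncated, so these names are guesses):

library(dplyr)
library(ggplot2)
DF %>%
  group_by(station) %>%                     # cumsum restarts for each station
  mutate(cum_precip = cumsum(precip)) %>%
  ggplot(aes(date_time, cum_precip)) +
  geom_line() +
  facet_wrap(~ station)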
2013 May 15
1
x and y lengths differ
I have a problem with R. I am trying to compute the confidence interval for my data frame, but when I create the plot I get this error: Error in xy.coords(x, y, xlabel, ylabel, log) : 'x' and 'y' lengths differ. My code: library(dplR) df.rwi <- detrend(rwl = df, method = "Spline", nyrs = NULL) write.table(df.rwi, file = "rwi.txt", quote = FALSE, row.names = TRUE)
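The error means the two vectors handed to the plotting call have different lengths. A diagnostic sketch, assuming the RWI series is plotted against its years:

nrow(df.rwi)                          # length of the y side
yrs <- as.numeric(rownames(df.rwi))   # years are stored as row names
length(yrs)                           # must match nrow(df.rwi)
plot(yrs, df.rwi[[1]], type = "l")    # lengths now match by construction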
2012 Jan 13
1
apply transformation
Hello All, I have the following dataset:
     Year   2006   2007
Jan   Jan 0.0204 0.0065
Feb   Feb 0.0145 0.0082
Mar   Mar 0.0027 0.0122
> dput(d_tmp)
structure(list(Year = c("Jan", "Feb", "Mar"), `2006` = c(0.0204, 0.0145, 0.0027), `2007` = c(0.0065, 0.0082, 0.0122)), .Names = c("Year", "2006", "2007"), row.names = c("Jan",
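A sketch of applying a transformation to the numeric columns only (the post does not say which transformation, so log() is a stand-in):

d_tmp[c("2006", "2007")] <- lapply(d_tmp[c("2006", "2007")], log)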
2006 Sep 18
2
problems in sourcing R script
Dear list, First my information:
platform       i386-pc-linux-gnu
arch           i386
os             linux-gnu
system         i386, linux-gnu
status
major          2
minor          3.1
year           2006
month          06
day            01
svn rev        38247
language       R
version.string Version 2.3.1 (2006-06-01)
Now my question: How is it possible that a command in an R script is not
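A debugging sketch: echoing each expression as it is sourced makes a silently skipped command visible (file name hypothetical):

source("myscript.R", echo = TRUE, max.deparse.length = Inf)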
2010 Mar 31
1
Weird R behaviour?
Dear list, I have observed some weird behaviour from R --- apologies if I am missing something obvious! Say you type in R:
> c.preec <- 10074
> c.gd <- 2200
> p1 <- .2
> c.neo <- p1*9451 + (1-p1)*3883
> n.preec <- 3710
> n.gd <- 2650
> n.neo <- 2120
> n.pcos <- 53000
> unit.met <- 94
> cost.met <- 94*n.pcos
> effect <-
2013 Mar 06
3
About basic logical operators
Hello everyone, I have a basic question regarding logical operators.
> x <- seq(-1, 1, by = 0.02)
> x
  [1] -1.00 -0.98 -0.96 -0.94 -0.92 -0.90 -0.88 -0.86 -0.84 -0.82 -0.80 -0.78
 [13] -0.76 -0.74 -0.72 -0.70 -0.68 -0.66 -0.64 -0.62 -0.60 -0.58 -0.56 -0.54
 [25] -0.52 -0.50 -0.48 -0.46 -0.44 -0.42 -0.40 -0.38 -0.36 -0.34 -0.32 -0.30
 [37] -0.28 -0.26 -0.24 -0.22 -0.20 -0.18 -0.16
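The excerpt cuts off before the actual question, but a classic pitfall with seq(-1, 1, by = 0.02) is testing exact equality: 0.3 has no exact binary representation, so a sketch of the safer comparison is:

x[x == 0.3]               # may return numeric(0)
x[abs(x - 0.3) < 1e-9]    # tolerance-based test finds the element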
2013 May 23
1
sample(c(0, 1)...) vs. rbinom
Greetings. My wife is teaching an introductory stat class at UC Davis. The class emphasizes the use of simulations, rather than mathematics, to get insight into statistics, and R is the mandated tool. A student in the class recently inquired about different approaches to sampling from a binomial distribution. I've appended some code that exhibits the idea, the gist of which is that using
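A sketch of the two approaches, which agree in distribution for a Binomial(n, 0.5) draw:

set.seed(1)
n <- 10
sum(sample(c(0, 1), n, replace = TRUE))   # simulate n coin flips, then count
rbinom(1, size = n, prob = 0.5)           # draw the count directly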
2013 Apr 11
1
Dotchart per groups
Hi all, I would like to ask you for help. I did a dotplot using the dotchart function. There are two localities (loc) with values 75 or 56 in my data ZZ. The f column has 4 levels: P1, S1, S8, R6. The data frame is ordered by the N value; the pchloc value is assigned for use as "pch" in the plot.
> head(ZZ)
   loc  f    N color ordered pchloc
98  75 S1 6.39 green       1     16
99  75 S8 6.44  blue
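A minimal sketch, assuming ZZ is the data frame shown above:

dotchart(ZZ$N, groups = ZZ$f, pch = ZZ$pchloc, color = ZZ$color, xlab = "N")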
2011 Apr 17
1
side by side histogram after splitting data by year
Hi everyone, I'm looking to produce a side-by-side histogram of the number of trips taken by jays with a particular number of acorns, after accounting for year (year "one" and year "two"). I know this involves indexing first and then creating a histogram, but I'm not sure how I'd do this. I want to explore the possibility that jays are altering their strategies in
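A base-graphics sketch, assuming a data frame trips with columns acorns and year (hypothetical names):

par(mfrow = c(1, 2))                       # two panels side by side
for (yr in c("one", "two")) {
  hist(trips$acorns[trips$year == yr],
       main = paste("Year", yr), xlab = "Acorns per trip")
}
par(mfrow = c(1, 1))
# or, with lattice: histogram(~ acorns | year, data = trips)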
2012 May 03
2
Help with readBin
I'm trying to read a binary file created by a Fortran code using readBin and readChar. Everything reads fine (integers and strings) except for double-precision numbers: they are read as huge or very small numbers (1E-250, ...). I tried various endianness and swap settings, but nothing has worked so far. I also tried 64-bit R (2.14) on Linux and Windows, and R 2.11 on Windows XP 32-bit. Any help would
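One common cause: Fortran sequential unformatted files wrap every record in 4-byte length markers, and reading doubles without skipping them misaligns all following bytes, producing absurd magnitudes. A sketch (file name hypothetical):

con <- file("output.bin", "rb")
reclen <- readBin(con, "integer", n = 1, size = 4)        # leading record marker
vals   <- readBin(con, "double", n = reclen / 8, size = 8)
readBin(con, "integer", n = 1, size = 4)                  # trailing record marker
close(con)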
2012 Feb 20
2
stats on transitions from one state to another
Folks, I'm trying to get stats from a matrix for each transition from one state to another. I have a matrix x as below. structure(c(0, 2, 2, 2, 0, 0, 0, 1, 1, 1, 1, 2, 2, 1, 1, 1, 0, 0, 2, 2, 0.21, -0.57, -0.59, 0.16, -1.62, 0.18, -0.81, -0.19, -0.76, 0.74, -1.51, 2.79, 0.41, 1.63, -0.86, -0.81, 0.39, -1.38, 0.06, 0.84, 0.51, -1, -1.29, 2.15, 0.39, 0.78, 0.85, 1.18, 1.66, 0.9, -0.94,
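A sketch of summarising values by transition, assuming a state vector s and a value vector v of equal length (the dput() above is truncated, so the column roles are a guess):

trans <- paste(head(s, -1), tail(s, -1), sep = "->")   # label each step "from->to"
tapply(tail(v, -1), trans, mean)                       # mean value per transition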
2009 Nov 26
1
lattice --- different properties of lines corresponding to type=c("l", "a") respectively
I think the subject says it all. I want to make a simple lattice plot, using xyplot with the argument type=c("l","a"). The problem is that in the resulting plot it is difficult or impossible to see which line corresponds to the average and which to the individual profiles. I tried things like the extra arguments lwd=c(1,3) or col=c("blue","red"), hoping
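A sketch of the usual workaround: draw the two layers in a custom panel function so each gets its own settings (dat, id and the variable names are placeholders):

library(lattice)
xyplot(y ~ x, groups = id, data = dat,
       panel = function(x, y, groups, subscripts) {
         panel.superpose(x, y, groups = groups, subscripts = subscripts,
                         type = "l", col = "grey", lwd = 1)   # individual profiles
         panel.average(x, y, horizontal = FALSE,
                       col = "red", lwd = 3)                  # overall average
       })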
2016 Apr 22
0
Finding Highest value in groups
Assuming your dataframe is in a variable x:
> require(dplyr)
> x %>% group_by(ID) %>% summarise(maxVal = max(Value, na.rm = TRUE))
On Fri, 2016-04-22 at 13:51 +0000, Saba Sehrish via R-help wrote: > Hi > > > I have two columns in data frame. First column is based on "ID" assigned to each group of my data (similar ID depicts one group). From second column, I
2016 Apr 22
2
Finding Highest value in groups
Hi I have two columns in a data frame. The first column is an "ID" assigned to each group of my data (the same ID denotes one group). From the second column, I want to identify the highest value within each group and assign the group's ID to that highest value. Right now the data looks like:
ID Value
 1  0.69
 1  0.31
 2  0.01
 2  0.99
 3  1.00
 4    NA
 4
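A base-R alternative to the dplyr answer in the previous entry (sketch; note the formula method drops NA rows before aggregating):

aggregate(Value ~ ID, data = x, FUN = max, na.rm = TRUE)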
2001 Oct 26
3
question about anova() output
Hello, I am getting output from anova() and summary(aov()) that depends on the order of the factors in the fitted model object, and this has me baffled. I see this dependency with the data.frame below but not with an example (table 6.4) from Montgomery's DOE book. This is with R 1.3.0 on Debian GNU/Linux. Where have I gone wrong?
> centerpts
   run sample CH50mg
1 day1 dev126   0.56
2
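The likely explanation: anova() on an lm/aov fit reports sequential (Type I) sums of squares, so with unbalanced data the term order changes the table. A sketch of order-independent tests (the model formula is guessed from the columns above):

fit <- lm(CH50mg ~ run + sample, data = centerpts)
drop1(fit, test = "F")           # each term tested after all the others
# or car::Anova(fit, type = 2) for the same idea via the car package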
2018 May 15
2
Systemfit
OK, let's try this again! Here is the reproducible script; it is long because I had to copy the panel dataset here. My question relates to systemfit; I don't know how to get the result for the entire panel.
# Reproducible script
Empdata <- read.csv("/Users/ngwinuiazenui/Documents/UPLOADemp.csv")
View(Empdata)
install.packages("systemfit")
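A minimal systemfit sketch; since the poster's equations are not shown, eq1/eq2 and the variable names are placeholders:

library(systemfit)
eq1 <- y1 ~ x1 + x2
eq2 <- y2 ~ x1 + x3
fit <- systemfit(list(first = eq1, second = eq2),
                 method = "SUR", data = Empdata)   # seemingly unrelated regressions
summary(fit)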
2012 Jul 02
1
How to get prediction for a variable in WinBUGS?
Dear all, I am a new user of WinBUGS and need your help. After running the following code, I got the parameters beta0 through beta4 (stats, density), but I don't know how to get the prediction for the last value of h, the variable I set to NA and want to model using the following code. Can anyone give me a hint? Any advice would be greatly appreciated. Best
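In BUGS, a data value set to NA is treated as an unobserved stochastic node and is sampled like any parameter; to see its predictive distribution it just has to be monitored. A sketch assuming the R2WinBUGS interface (file and object names hypothetical):

library(R2WinBUGS)
fit <- bugs(data = bugs.data, inits = inits,
            parameters.to.save = c("beta0", "beta1", "beta2",
                                   "beta3", "beta4", "h"),   # monitor h too
            model.file = "model.txt", n.chains = 3, n.iter = 10000)
print(fit)   # the sampled values of the NA element of h appear with the parameters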
2011 Sep 02
1
Maximum Likelihood using optim()
Dear mailing list, I would like to use the optim() command in order to maximize the log-likelihood of the following function, where p is the parameter of interest and should be constrained between 0 and positive infinity. y = 1/2 * ((te - x)/(te - tc))^p x and y are given by x <- c(5.18, 6.28, 7.00, 7.08, 7.54, 7.90, 8.24, 8.64, 12.17, 12.89, 14.27, 15.38, 15.80, 16.46, 20.41, 21.27,
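Two standard ways to keep p > 0 in optim(); the full likelihood is not shown in the excerpt, so the least-squares criterion below is only a schematic stand-in, and the te/tc values are placeholders:

negll <- function(logp, x, y, te, tc) {
  p <- exp(logp)                       # reparameterise: p = exp(logp) > 0 always
  yhat <- 0.5 * ((te - x) / (te - tc))^p
  sum((y - yhat)^2)                    # stand-in for the actual log-likelihood
}
opt <- optim(par = 0, fn = negll, x = x, y = y,
             te = max(x) + 1, tc = min(x) - 1, method = "BFGS")
exp(opt$par)                           # back-transform to p
# alternatively, bound the parameter directly:
# optim(1, fn, method = "L-BFGS-B", lower = 1e-8)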
2018 May 16
0
Systemfit
Sadly you failed to set your email program to send plain text and the data is corrupted at my end. I also think you need to reduce the size of the data set... the intent here is to increase your understanding, not debug your particular analysis. I will say that I am having a very challenging time understanding what you are trying to accomplish though. What are the equations that you think need