Displaying 20 results from an estimated 7000 matches similar to: "using predict() with poly(x, raw=TRUE)"
2008 May 12
2
ggplot2: font size mismatch for pdf output
Hi
In the following, the graph I see on the screen and the .png output
coincide. However, in the .pdf file the fonts seem to be scaled up
noticeably, so the label for the top legend disappears.
Is this an infelicity or a bug, or is there something I've missed?
More generally, how do I control the size of fonts used in legends
and axis labels?
library(car)
library(ggplot2)
qp
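The plotting code is truncated above; for reference, a sketch of how legend and axis label sizes can be set with ggplot2's theme()/element_text() interface (current ggplot2; the 2008-era API used opts() instead), with mtcars used only as a stand-in dataset:

library(ggplot2)

p <- ggplot(mtcars, aes(wt, mpg, colour = factor(cyl))) +
  geom_point() +
  theme(axis.title   = element_text(size = 10),
        axis.text    = element_text(size = 8),
        legend.title = element_text(size = 10),
        legend.text  = element_text(size = 8))

ggsave("plot.pdf", p, width = 6, height = 4)              # same sizes on every device
ggsave("plot.png", p, width = 6, height = 4, dpi = 300)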
2009 Jul 13
0
problem predict/poly
Dear R experts,
I am observing undesired behavior of predict(fit, newdata) when the fit object is produced by lm() with a poly() term. Here is how to reproduce it:
x <- c(1:10)
y <- sin(c(1:10))
fit <- lm(formula=y~poly(x, 5, raw=TRUE))
predict(fit, newdata=data.frame(x=c(1:10))) ## this works
predict(fit, newdata=data.frame(x=c(1:1))) ## this is broken, error below
Error in poly(x,
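A sketch of one common workaround, assuming the same data as above: write the raw powers out with I(), so predict() never has to re-evaluate poly() on a length-one vector.

x <- 1:10
y <- sin(1:10)
fit2 <- lm(y ~ x + I(x^2) + I(x^3) + I(x^4) + I(x^5))   # same model, powers spelled out
predict(fit2, newdata = data.frame(x = 1))               # a single new point now works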
2002 Jan 12
2
Bug in predict(newdata=x) with poly() (PR#1258)
Bug in predict.lm & poly
The predict function doesn't work when used with poly and newdata.
For example, I'd expect the following code to work, and plot a fitted
cubic to the nearly straight line:
x <- 1:10
y <- x + rnorm(10)/100
plot(x,y)
fit <- lm(y ~ poly(x,3))
newx <- seq(1,10,len=100)
lines(newx,predict(fit,newdata=data.frame(x=newx)))
However, the plotted
2002 Nov 02
1
problem with expand.model.frame
Dear R list members,
I'm encountering a problem with expand.model.frame(): Suppose that I define
the following simple function (meant
just to illustrate the problem):
> fun <- function(model){
+ expand.model.frame(model, all.vars(formula(model)))
+ }
>
and I have the following model, created with an explicit data argument:
> mod
Call:
2002 Dec 01
1
generating contrast names
Dear R-devel list members,
I'd like to suggest a more flexible procedure for generating contrast
names. I apologise for a relatively long message -- I want my proposal to
be clear.
I've never liked the current approach. For example, the names generated by
contr.treatment paste the factor name to the level names with no separator
between the two; contr.sum simply numbers the contrasts (I recall an
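A small illustration of the naming behaviour being criticised, with a toy factor:

f <- factor(rep(c("low", "medium", "high"), each = 2))
y <- rnorm(6)

names(coef(lm(y ~ f)))       # contr.treatment: "flow", "fmedium" -- factor and level pasted together
contrasts(f) <- contr.sum(3)
names(coef(lm(y ~ f)))       # contr.sum: "f1", "f2" -- contrasts are simply numbered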
2009 Nov 08
2
linear trend line and a quadratic trend line.
Dear list users
How is it possible to visualise both a linear trend line and a quadratic trend line on a plot
of two variables?
Here is my almost-working example.
data(Duncan)
attach(Duncan)
plot(prestige ~ income)
abline(lm(prestige ~ income), col=2, lwd=2)
Now I would like to add yet another trend line, but this time a quadratic one. So I have two
trend lines. One linear trend line
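A sketch of one way to add the quadratic trend on top of the plot above, using I(income^2) so the fitted curve is easy to evaluate along the observed incomes:

fit2 <- lm(prestige ~ income + I(income^2))
ix <- order(income)                            # sort so lines() draws a smooth curve
lines(income[ix], fitted(fit2)[ix], col = 3, lwd = 2)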
2002 Mar 29
0
use of expand.model.frame
Dear R-help list members,
I'm encountering problems using expand.model.frame. To illustrate, consider
the function
> test <- function(model){
+ expand.model.frame(model, "income")
+ }
>
The data frame Prestige (from the car library) has several variables,
including prestige, income and education. I've attached this data frame
>
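For reference, a minimal sketch of expand.model.frame() working as documented when the model carries an explicit data argument (the failures discussed in this thread arise when it is called from inside another function):

library(car)                                   # provides the Prestige data frame
mod <- lm(prestige ~ income, data = Prestige)
mf  <- expand.model.frame(mod, "education")    # model frame plus the extra variable
head(mf)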
2005 Apr 18
0
Discrepancy between gam from gam package and gam in S-PLUS
Dear Trevor,
I've noticed a discrepancy in the degrees of freedom reported by gam() from
the gam package in R vs. gam() in S-PLUS. The nonparametric df differ by 1;
otherwise (except for things that depend upon the df), the output is the
same:
--------- snip ------------
*** From R (gam version 0.93):
> mod.gam <- gam(prestige ~ lo(income, span=.6), data=Prestige)
>
2003 Mar 31
4
"font problems in X11 with linux R"
Hello,
I'm inexperienced with Linux, X11 and R. A font problem has surfaced. When I
use pairs in John Fox's car library, e.g.:
> pairs(cbind(prestige, income, education, women))
Error in text.default(x, y, txt, cex = cex, font = font) :
X11 font at size 16 could not be loaded
In addition: Warning message:
freeing previous text buffer in GText
>
Evidently
2002 Mar 29
1
expand.model.frame fails when call creating model has no data (PR#1423)
I've encountered a problem using expand.model.frame. To illustrate, consider
the function
> test <- function(model){
+ expand.model.frame(model, "income")
+ }
>
The data frame Prestige (from the car library) has several variables,
including prestige, income and education. I've attached this data frame and
fit the following model
>
2017 Oct 15
0
Bootstrapped Regression
Hello,
Much clearer now, thanks.
It's a matter of changing the function that boot calls so that it returns the
predicted value at the point of interest, education = 50, income = 75.
I have changed the way the function uses the indices a bit; the result
is the same, it's just the way I usually do it.
pred.duncan.function <- function(data, indices) {
mod <- lm(prestige ~ education +
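The function is truncated above; a sketch of what the complete statistic and the boot call might look like, assuming the Duncan data from the car package and the usual boot(data, statistic, R) interface:

library(boot)
library(car)                                     # for the Duncan data

pred.duncan.function <- function(data, indices) {
  d   <- data[indices, ]                         # bootstrap resample of the rows
  mod <- lm(prestige ~ education + income, data = d)
  predict(mod, newdata = data.frame(education = 50, income = 75))
}

set.seed(123)
b <- boot(Duncan, pred.duncan.function, R = 999)
boot.ci(b, type = "perc")                        # percentile CI for the predicted value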
2011 Jan 14
1
Question about scatterplot in package car
I am getting an error message from scatterplot:
> library(car)
> scatterplot(Prestige$income~Prestige$type)
Error in Summary.factor(c(2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, :
range not meaningful for factors
In addition: Warning message:
In Ops.factor(x[floor(d)], x[ceiling(d)]) : + not meaningful for factors
>
The command does output the kind of graph that I want (boxplots).
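If only the grouped boxplots are wanted, base graphics produces them without the warning (a sketch, bypassing car::scatterplot):

boxplot(income ~ type, data = Prestige)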
2008 Apr 23
0
poly() can exceed degree k - 1 for k distinct points (PR#11251)
The poly() function can create more basis columns than can be fitted when
there are replicated values. In the example below, 'x' has only 5
distinct values, yet I can apparently fit a 12th-degree polynomial with
no error messages, and all coefficients are reported as nonzero:
R> x = rep(1:5,3)
R> y = rnorm(15)
R> lm(y ~ poly(x, 12))
Call:
lm(formula = y ~ poly(x, 12))
Coefficients:
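A sketch of the check the report suggests poly() should be making: the requested degree has to stay below the number of distinct x values.

x <- rep(1:5, 3)
degree <- 12
degree < length(unique(x))   # FALSE: with 5 distinct values only degree <= 4 is identifiable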
2008 Apr 22
1
Bug in poly() (PR#11243)
Full_Name: Russell Lenth
Version: 2.6.2
OS: Windows XP Pro
Submission from: (NULL) (128.255.132.36)
The poly() function allows a higher-degree polynomial than it should, when
raw=FALSE.
For example, consider 5 distinct 'x' values, each repeated twice. We can fit a
polynomial of degree 8:
=====
R> x = rep(1:5, 2)
R> y = rnorm(10)
R> lm(y ~ poly(x, 8))
Call:
lm(formula = y ~
2002 Jul 03
0
poly.transform in R
Dear all,
I am trying to transform polynomial coefficients from the orthogonal form to
the standard power basis. There's poly.transform in S-PLUS. Does anybody
know how to do that in R? I've found a question about this in the
R-help archives but no real answer.
Example: I'm doing polynomial regression of the percentage of one insect in
a community on altitude, precipitation,
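There is no poly.transform in base R, but here is a sketch of one way to recover power-basis coefficients from an orthogonal-polynomial fit: the fitted values are the same in either basis, so regressing them on the raw powers returns the standard coefficients exactly.

set.seed(1)
x <- runif(50)
y <- 1 + 2*x - 3*x^2 + rnorm(50)

fit.orth <- lm(y ~ poly(x, 2))                   # orthogonal basis
coef(lm(fitted(fit.orth) ~ x + I(x^2)))          # power-basis coefficients
coef(lm(y ~ poly(x, 2, raw = TRUE)))             # same numbers, fitted directly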
2005 Feb 14
0
using poly in a linear regression in the presence of NA fails (despite subsetting them out)
I ran into a result that surprised me when running lm with an orthogonal
polynomial among the predictors.
The lm command resulted in
Error in qr(X) : NA/NaN/Inf in foreign function call (arg 1)
Error during wrapup:
despite my using a "subset" in the call to get rid of NA's.
poly is apparently evaluated before any NA's are subsetted out
of the data.
Example code (attached to
2007 Jan 25
1
poly(x) workaround when x has missing values
Often in practical situations a predictor has missing values, so that poly
crashes. For instance:
> x<-1:10
> y<- x - 3 * x^2 + rnorm(10)/3
> x[3]<-NA
> lm( y ~ poly(x,2) )
Error in poly(x, 2) : missing values are not allowed in 'poly'
>
> lm( y ~ poly(x,2) , subset=!is.na(x)) # This does not help?!?
Error in poly(x, 2) : missing values are not allowed in
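A sketch of the usual workaround: drop the incomplete cases before the model frame (and hence poly()) ever sees them, since 'subset' is only applied after the variables have been evaluated.

d <- na.omit(data.frame(x = x, y = y))
lm(y ~ poly(x, 2), data = d)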
2009 Nov 28
1
R function that duplicates Octave's poly function?
By any chance is anyone aware of an R function that duplicates Octave's poly function?
Here is a description of Octave's poly function:
Function File: poly (A)
If A is a square N-by-N matrix, `poly (A)' is the row vector of
the coefficients of `det (z * eye (N) - a)', the characteristic
polynomial of A. As an example we can use this to find the
eigenvalues
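I am not aware of a built-in equivalent, but here is a sketch of how Octave's poly() could be reproduced in R, assuming the usual convention of coefficients in descending powers of z:

octave_poly <- function(A) {
  r <- if (is.matrix(A)) eigen(A)$values else A   # roots: eigenvalues, or the vector itself
  p <- 1
  for (root in r) p <- c(p, 0) - root * c(0, p)   # multiply the polynomial by (z - root)
  # complex eigenvalues of a real matrix come in conjugate pairs, so any
  # imaginary parts left over are numerical noise:
  if (is.complex(p) && all(abs(Im(p)) < 1e-8)) p <- Re(p)
  p
}

octave_poly(diag(c(1, 2, 3)))   # 1 -6 11 -6, i.e. z^3 - 6*z^2 + 11*z - 6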
2010 Aug 03
0
Issue with prediction from lm object with poly
Dear developeRs,
About a year ago, Alex Stolpovsky posted an issue with predict.lm on a
fit generated using poly with the raw=TRUE option and too few new data points
(slightly modified reproducible example below). Alex did not get any
reply. I have just stumbled on the same problem, and I think that this
is a bug in the poly function, arising from its check of whether the
polynomial degree is
2008 Feb 13
1
use of poly()
Hi,
I am curious about how to interpret the results of a polynomial regression--
using poly(raw=TRUE) vs. poly(raw=FALSE).
set.seed(123456)
x <- rnorm(100)
y <- jitter(1*x + 2*x^2 + 3*x^3 , 250)
plot(y ~ x)
l.poly <- lm(y ~ poly(x, 3))
l.poly.raw <- lm(y ~ poly(x, 3, raw=TRUE))
s <- seq(-3, 3, by=0.1)
lines(s, predict(l.poly, data.frame(x=s)), col=1)
lines(s,
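The two fits are the same model in different parameterisations; a sketch of a quick check, continuing the example above, that the predictions coincide even though the coefficients look completely different:

all.equal(predict(l.poly,     data.frame(x = s)),
          predict(l.poly.raw, data.frame(x = s)))   # TRUE
coef(l.poly)        # coefficients of the orthogonal basis polynomials
coef(l.poly.raw)    # the familiar coefficients of x, x^2 and x^3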