Displaying 20 results from an estimated 80 matches similar to: "Error: figure margins too large"
2012 Jul 06
2
Error in plot.new() : figure margins too large
Hello All,
I am running the following code in RStudio, and I keep on getting an error message that says: "Error in plot.new() : figure margins too large"
Is there something that I am doing wrong?
# Import Data
nba <- read.csv("http://datasets.flowingdata.com/ppg2008.csv", sep=",")
nba
#Sort Data (sorting by Points, but could be sorting by any other variable)
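One common workaround, shown here as a minimal sketch (the approach and the column name PTS are assumptions, not taken from the thread), is to shrink the figure margins with par() or open a larger plotting device before calling plot():
# A minimal sketch, not from the original post
par(mar = c(4, 4, 2, 1))                 # bottom, left, top, right margins, in lines of text
plot(nba$PTS, main = "Points per game")  # 'PTS' is a hypothetical column of the nba data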
2001 Apr 02
0
Constructing a contingency table
Greetings. I'm having some trouble constructing a contingency table from
raw data (actually read in via RPgSQL). Here's the deal:
- What I've got: a data frame in the form:
Groupcode code1 code2 code3 code4 code5 .. coden
where groupcode is one of {P,C,B,S,U,X} and code{1..n} is TRUE or
FALSE. code{1..n} are NOT mutually exclusive, so between 0 and n of them
can be TRUE.
-
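One way to tabulate this, sketched here with a hypothetical data frame name dat (the approach is an assumption, not quoted from the thread), is to sum the logical code columns within each group code:
# A minimal sketch, assuming the data frame is called 'dat' with Groupcode as its first column
tab <- aggregate(. ~ Groupcode, data = dat, FUN = sum)  # TRUE counts as 1, giving a group-by-code count table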
2004 Jul 14
0
Convex smoothing via 'Iterative Convex Minorant' ?
I've been asked, and interested myself:
Has anybody implemented the above in R or another S language dialect?
We are talking about the algorithms / methodology
by Wellner, Groeneboom and Jongbloed, e.g., from the following article
@Article{Jongbloed:1998:ICM,
author = "Geurt Jongbloed",
title = "The Iterative Convex Minorant Algorithm for
2010 Aug 13
7
Push changes to clients
I was wondering how to configure the puppet clients to only listen,
not to periodically pull configs down from the puppetmaster.
I'd rather push the configs out from the puppetmaster with
puppetrun...
At a guess I need to set runinterval to 0 in /etc/puppet/puppet.conf?
2005 Nov 24
10
Any chance of rsync using threads instead of fork?
On a typical embedded Linux device, with no MMU, there is no fork() or
it returns ENOSYS.
The nearest replacements are vfork() (which is only useful before
exec*()), or to create threads with pthread_create().
rsync would be a very useful program on such devices, and I was a bit
disappointed to build it, only to find the compile went fine but it
failed at runtime due to ENOSYS.
Is there any
2011 Jul 14
2
cbind in aggregate formula - based on an existing object (vector)
Hello!
I am aggregating using a formula in aggregate - of the type:
aggregate(cbind(var1,var2,var3)~factor1+factor2,sum,data=mydata)
However, I actually have an object (vector of my variables to be aggregated):
myvars<-c("var1","var2","var3")
I'd like my aggregate formula (its "cbind" part) to be able to use my
"myvars" object. Is it
2011 May 04
3
SAPPLY function XXXX
Hello everyone,
I am attempting to write a function to count the number of non-missing
values of each column in a data frame using the sapply function. I have the
following code, which produces the error message below.
> n.valid<-sapply(data1,sum(!is.na))
Error in !is.na : invalid argument type
Ultimately, I would like for this to be one component in a larger function
that will produce
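The error arises because sum(!is.na) is evaluated immediately rather than passed as a function. A minimal sketch of the usual fix (an assumption, not quoted from the thread) wraps the computation in an anonymous function:
# Give sapply a function, not the result of calling one
n.valid <- sapply(data1, function(x) sum(!is.na(x)))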
2008 Sep 22
1
Deleting multiple variables
Hi All,
I have searched the web for a simple solution but have been unable to find
one. Can anyone recommend a neat way of deleting multiple variables?
I see I need to use dataframe$VAR<-NULL to get rid of one variable, but in my
situation I need to delete all variables between two points.
I've used the 'which' function to find these points and have assigned them to myvars:
>myvars
[1] 2 17
but
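A minimal sketch of one way to drop that range of columns, assuming myvars holds the first and last column positions as shown above:
# Negative column indexing removes columns myvars[1] through myvars[2] (here 2 through 17)
dataframe <- dataframe[ , -(myvars[1]:myvars[2]) ]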
2012 Jul 21
4
reshape2 bug?
All,
I believe I am running the latest version of reshape2 (1.2.1). But this code:
library(reshape2)
tmp <- melt(smiths,
id.vars=1:2,
measure.vars=c("age","weight","height"),
variable.name="myvars",
value.name="myvals"
)
names(tmp)
Produces this output:
> names(tmp)
[1] "subject" "time"
2009 Mar 25
5
Subscribe to a recursive file...
Hi All....
I've got this configuration to manage bind; I want the exec to be run
whenever anything under /var/named or the file /etc/named.conf gets
updated....
file { "/etc/named.conf":
owner => root,
group => root,
mode => 0644,
require =>
2011 Jun 07
1
Help on selecting genes showing highest variance
Hi
I have a problem for which I would like to know a solution. I have gene expression data and I would like to choose only, let's say, the top 200 genes that had the highest expression variance across patients.
How do i do this in R?
I tried x=apply(leukemiadata,1,var)
x1=x[order(-1*x)]
but the problem here is that x and x1 are numeric vectors, so if I choose the first 200 after sorting in descending order, I
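A minimal sketch of one way to keep the genes themselves rather than just their variances (the object name leukemiadata is taken from the post; the approach is an assumption):
# Order the rows (genes) by variance and keep the top 200
v <- apply(leukemiadata, 1, var)
top200 <- leukemiadata[order(v, decreasing = TRUE)[1:200], ]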
2011 Aug 04
2
Efficient way of creating a shifted (lagged) variable?
Hello!
I have a data set:
set.seed(123)
y<-data.frame(week=seq(as.Date("2010-01-03"), as.Date("2011-01-31"),by="week"))
y$var1<-c(1,2,3,round(rnorm(54),1))
y$var2<-c(10,20,30,round(rnorm(54),1))
# All I need is to create lagged variables for var1 and var2. I looked
around a bit and found several ways of doing it. They all seem quite
complicated - while in
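A minimal sketch of one simple way to build a one-week lag for this data frame (an assumption, not the poster's eventual solution):
# Shift each series down by one row, padding the first week with NA
y$var1.lag1 <- c(NA, head(y$var1, -1))
y$var2.lag1 <- c(NA, head(y$var2, -1))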
2002 Sep 26
1
bugs
Dear Samba team
Well, I'm a Linux/Java programmer at an IT company in Indonesia and have been using Samba for almost 5 years. Right now I have to join my Linux system into a PDC domain (NT system). That all works. Then I have to share something in our users' home directories, which I have already done; everything is OK until I set several valid users on my share. Here is my share definition in smb.conf
2012 Jul 17
1
weighted mean by week
Hello!
I wrote code that works, but it looks ugly to me - it's full of loops.
I am sure there is a much more elegant and shorter way to do it.
Thanks a lot for any hints!
Dimitri
# I have a data frame:
x<-data.frame(group=c("group1","group2","group1","group2"),
myweight=c(0.4,0.6,0.4,0.6),
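The data frame shown above is truncated; as a minimal sketch with hypothetical columns week, value, and myweight, one loop-free weighted mean per week could look like:
# A minimal sketch (hypothetical column names): weighted mean of 'value' within each week
sapply(split(x, x$week), function(d) weighted.mean(d$value, d$myweight))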
2012 Feb 08
4
The program ChairGun2 can't be started up
Hello,
The application ChairGun2 won't start.
When starting, it shows the error:
Code:
Run-time error '429'
ActiveX component can't create object
Wine version 1.2.3 on CentOS 6.2 is used.
This application also didn't work on older Wine versions.
What should I do? How can I solve the problem?
Download ChairGun2:
http://turbobit.net/sj18namk8947.html
2011 Oct 19
1
Subsetting data by eliminating redundant variables
Dear All,
I am new to R and have one question which might be easy.
I have a large data set with more than 250 variables. I am reducing the number of
variables with the redun function, as in the example below:
n <- 100
x1 <- runif(n)
x2 <- runif(n)
x3 <- x1 + x2 + runif(n)/10
x4 <- x1 + x2 + x3 + runif(n)/10
x5 <- factor(sample(c('a','b','c'),n,replace=TRUE))
x6 <-
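A minimal sketch of how redun() from the Hmisc package is typically called on the simulated variables above (the r2 cutoff and the use of Hmisc are assumptions, not quoted from the post):
# Flag variables that can be predicted from the others with R^2 >= 0.9
library(Hmisc)
d <- data.frame(x1, x2, x3, x4, x5)
redun(~ x1 + x2 + x3 + x4 + x5, data = d, r2 = 0.9)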
2012 Aug 01
1
optim() for ordered logit model with parallel regression assumption
Dear R listers,
I am learning the MLE utility optim() in R to program ordered logit
models just as an exercise. As shown below, I have three independent
variables, x1, x2, and x3. Y is coded as ordinal from 1 to 4. Y is not
yet a factor variable here. The ordered logit model satisfies the
parallel regression assumption. The following code runs, but the
results were totally different from what I
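A minimal sketch of one way to parameterise an ordered logit negative log-likelihood for optim(), using the x1, x2, x3, and Y described above (the parameterisation is an assumption, not the poster's code):
# Keep the cutpoints ordered by building them as a free value plus exponentiated increments
negll <- function(par, X, y) {
  k     <- ncol(X)
  beta  <- par[1:k]
  kappa <- cumsum(c(par[k + 1], exp(par[-(1:(k + 1))])))   # increasing cutpoints
  eta   <- drop(X %*% beta)
  cp    <- rbind(0, plogis(outer(kappa, eta, "-")), 1)     # cumulative P(Y <= j) for each observation
  p     <- cp[cbind(y + 1, seq_along(y))] - cp[cbind(y, seq_along(y))]
  -sum(log(p))
}
# 3 slopes plus 3 cutpoint parameters for a 4-category outcome
fit <- optim(rep(0, 6), negll, X = cbind(x1, x2, x3), y = Y, method = "BFGS", hessian = TRUE)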
2012 Mar 19
2
Reshape from long to wide
Hi,
I'm a total beginner in R and this question is probably very simple but I've
spent hours reading about it and can't find the answer. I'm trying to
reshape a data table from long to wide format. I've tried reshape() and
cast() but I get error messages every time and I can't figure out why. In my
data, I have the length of two fish from each family. My data table (called
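A minimal sketch of long-to-wide reshaping with base reshape(), using hypothetical column names (family, fish, length) since the poster's table is truncated above:
long <- data.frame(family = rep(1:3, each = 2),
                   fish   = rep(c("fish1", "fish2"), 3),
                   length = c(5.1, 4.8, 6.0, 5.7, 4.9, 5.2))
wide <- reshape(long, idvar = "family", timevar = "fish", direction = "wide")
# 'wide' has one row per family, with columns length.fish1 and length.fish2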
2010 Jan 21
2
Help with subset
I am so happy about learning how to read in multiple Excel files that I have
to try to make another improvement. I know what I have been doing is
clumsy, but it works. Hopefully, someone can suggest a more elegant
solution. As a novice, I have been using MS-Word and mail merge to write my
code. I start with about 2 pages of code, and end up with 2,220 merged pages
that I copy and paste into R.
2012 Jan 11
1
Help with speed (replacing the loop?)
Dear R-ers,
I have a loop below that loops through my numeric variables in data
frame x and through levels of the factor "group" and multiplies (group
by group) the values of numeric variables in x by the corresponding
group-specific values from data frame y. In reality, my:
dim(x) is 300,000 rows by 100 variables, and
dim(y) is 120 levels of "group" by 100 variables.
So, my
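A minimal sketch of a loop-free alternative (hypothetical column name 'group' in both data frames; an assumption, not the poster's code): match each row of x to its group's row in y and multiply the numeric columns element-wise.
idx  <- match(x$group, y$group)       # row of y corresponding to each row of x
vars <- setdiff(names(x), "group")    # the numeric variables shared by x and y
x[vars] <- x[vars] * y[idx, vars]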