2004 Sep 27
1
Peer Review - Linuxfest Presentation Outline
Hello all,
I've been invited to do a presentation on Asterisk for the Ohio
Linuxfest in Columbus this weekend (http://www.ohiolinux.org). Rough
estimates are that nearly 500 people will be attending. I've been working
on an outline for a couple of weeks and I would like to have some peer
review of the information presented.
I am going to have to cut down the content to make it fit in
2007 Aug 10
3
having problems with factor()
Dear R Help,
I have a data set of tree heights, classified by the area they are in. The areas are coded numerically (0 to 7).
ht area
1 320 3
2 410 4
3 230 2
4 360 3
5 126 1
6 280 2
7 260 2
8 280 2
9 280 2
10 260 2
.......
180 450 4
181 90 1
182 120 1
183 440 4
184 210 2
185 330 3
186 210 2
187 100 1
188 0 0
I want to convert the
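The question is cut off here; judging from the subject line, the goal is presumably to convert the numeric area codes into a factor. A minimal sketch under that assumption, using only the first few rows shown above:

# Sketch: treat the numeric area codes as a factor rather than a number.
trees <- data.frame(
  ht   = c(320, 410, 230, 360, 126),
  area = c(3, 4, 2, 3, 1)
)
trees$area <- factor(trees$area, levels = 0:7)   # keep all eight area codes as levels
str(trees$area)                                  # Factor w/ 8 levels "0", "1", ..., "7"
boxplot(ht ~ area, data = trees)                 # heights grouped by area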
2015 Oct 22
2
Moderators needed for LLVM Developers' Meeting
All,
I need volunteers to help moderate the sessions of the LLVM Developers' Meeting. All you need to do is introduce the speaker, make sure the speaker stays on time, and run Q&A at the end (run a microphone, select people, etc.). It's a pretty easy job, but critical for our meeting to run smoothly.
If you are interested in moderating, please send me your top 2 session choices.
2013 Feb 16
1
odd behavior within R2HTML
Dear R People:
I'm using R2HTML but getting a strange result.
Here is the original data:
resp trt block
90.3 A I
89.2 A II
98.2 A III
93.9 A IV
87.4 A V
97.9 A VI
92.5 B I
89.5 B II
90.6 B III
94.7 B IV
87.0 B V
95.8 B VI
85.5 C I
90.8 C II
89.6 C III
86.2 C IV
88.0 C V
93.4 C VI
82.5 D I
89.5 D II
85.6 D III
87.4 D IV
78.9 D V
90.7 D VI
And here are the commands:
> resin1.df <-
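The commands are truncated at this point. Purely as a hypothetical sketch of a typical R2HTML round trip with data laid out like this (the object and file names are assumptions):

# Hypothetical sketch: write a data frame like the one above as an HTML table.
library(R2HTML)
resin1.df <- data.frame(
  resp  = c(90.3, 89.2, 98.2, 93.9, 87.4, 97.9),
  trt   = rep("A", 6),
  block = c("I", "II", "III", "IV", "V", "VI")
)
HTML(resin1.df, file = "resin1.html")   # append the table to resin1.html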
2009 Aug 02
3
two-factor linear models with missing cells
I am wondering how to interpret the parameter estimates that lm()
reports in this sort of situation:
y = round(rnorm(n=24,mean=5,sd=2),2)
A = gl(3,2,24,labels=c("one","two","three"))
B = gl(4,6,24,labels=c("i","ii","iii","iv"))
# Make both observations for A=1, B=4 missing
y[19] = NA
y[20] = NA
data.frame(y,A,B)
nonadd = lm(y ~
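The model call is truncated above. Assuming the intended fit is the full two-way interaction model, a sketch of what lm() does with an empty cell:

# Sketch assuming a full interaction model (the original call is truncated).
set.seed(1)
y <- round(rnorm(n = 24, mean = 5, sd = 2), 2)
A <- gl(3, 2, 24, labels = c("one", "two", "three"))
B <- gl(4, 6, 24, labels = c("i", "ii", "iii", "iv"))
y[19:20] <- NA                 # empty the A = one, B = iv cell
nonadd <- lm(y ~ A * B)
coef(nonadd)                   # the design is rank deficient, so one aliased coefficient is NA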
2010 May 28
5
difference in sort order linux/Windows (R.2.11.0)
Dear R users,
I'm a bit perplexed by the effect sort has here, as it differs between
Windows and Linux.
It makes my factor levels and subsequent plots different on the two systems.
Given:
types <- c("PC-D-Euro-0", "PC-D-Euro-1", "PC-D-Euro-2", "PC-D-Euro-3",
"PC-D-Euro-4", "PC-D-Euro-5", "PC-D-Euro-6",
2008 Apr 07
2
How to add background color of a 2D chart by quadrant
Hi,
I have a 2D chart that is divided into four quadrants, I, II, III, IV:
plot(1:10,ylim=c(0,10),xlim=c(0,10),type="n")
abline(v=5,h=5)
text(x=c(7.5,7.5,2.5,2.5),y=c(2.5,7.5,7.5,2.5),labels=c("I","II","III","IV"))
I would like to fill each quadrant with a background color unique to the
quadrant. Does anyone know how to do this in R?
Thanks,
--
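One way to do this in base graphics is to paint each quadrant with rect() before drawing anything else; a minimal sketch (the colours are arbitrary):

# Sketch: rect() paints each quadrant before the lines and labels are drawn.
plot(1:10, ylim = c(0, 10), xlim = c(0, 10), type = "n")
rect(5, 0, 10, 5,  col = "mistyrose",   border = NA)   # I   (bottom right)
rect(5, 5, 10, 10, col = "lightyellow", border = NA)   # II  (top right)
rect(0, 5, 5, 10,  col = "lightcyan",   border = NA)   # III (top left)
rect(0, 0, 5, 5,   col = "lavender",    border = NA)   # IV  (bottom left)
abline(v = 5, h = 5)
text(x = c(7.5, 7.5, 2.5, 2.5), y = c(2.5, 7.5, 7.5, 2.5),
     labels = c("I", "II", "III", "IV"))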
2004 May 01
1
[LLVMdev] (no subject)
Dear LLVM users,
The development of LLVM has been supported primarily by funding from
the National Science Foundation and the University of Illinois. In
order to maintain our sources of funding and attract new ones, it is
important for us to be able to document how LLVM is benefiting
companies, universities, other organizations, and individuals in the
outside world. (The information
2008 Feb 12
3
sort a data frame according to roman characters
R-help,
I have a data frame with one column containing roman numbers
The data are not sorted as: I II III IV V VI VII VIII IX
X XI XII XIII XIV XV
Using data[order(data$Roman),] does not do the job.
How can this be done?
Thanks in advance.
2005 Oct 20
3
different F test in drop1 and anova
Hi,
I was wondering why anova() and drop1() give different tail
probabilities for F tests.
I guess overdispersion is calculated differently in the following
example, but why?
Thanks for any advice,
Tom
For example:
> x<-c(2,3,4,5,6)
> y<-c(0,1,0,0,1)
> b1<-glm(y~x,binomial)
> b2<-glm(y~1,binomial)
> drop1(b1,test="F")
Single term deletions
Model:
y ~
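The transcript is truncated here. For reference, a sketch of the full comparison (not an explanation of the discrepancy, just the two calls side by side):

# Sketch: the same nested comparison done with drop1() and with anova(),
# so the two F tests can be inspected side by side.
x <- c(2, 3, 4, 5, 6)
y <- c(0, 1, 0, 0, 1)
b1 <- glm(y ~ x, binomial)
b2 <- glm(y ~ 1, binomial)
drop1(b1, test = "F")        # single-term deletion F test
anova(b2, b1, test = "F")    # explicit model-comparison F test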
2017 Jul 13
3
How to formulate quadratic function with interaction terms for the PLS fitting model?
I have two ideas about it.
1-
i) Entering a variable in quadratic form is done with the I() wrapper, I(variable^2):
plsr(octane ~ NIR + I(NIR^2), ncomp = 10, data = gasTrain, validation = "LOO")
You could also create a new variable first: NIR_sq <- NIR^2
ii) To insert a squared variable, use the syntax I(x^2) - wrapping the term in
I() matters because otherwise ^ would be interpreted as a formula operator
rather than as arithmetic.
iii) If you want to
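(The third point is truncated.) Since the subject also asks about interaction terms, a hypothetical sketch with made-up scalar predictors x1 and x2 (the gasTrain NIR term is a whole spectral matrix, so squares and interactions are easier to show on ordinary columns):

# Hypothetical sketch: quadratic and interaction terms in a plsr() formula.
library(pls)
set.seed(42)
d <- data.frame(x1 = rnorm(50), x2 = rnorm(50))
d$y <- 1 + 2 * d$x1 - d$x2 + 0.5 * d$x1 * d$x2 + rnorm(50, sd = 0.2)
fit <- plsr(y ~ x1 + x2 + I(x1^2) + I(x2^2) + x1:x2,
            ncomp = 3, data = d, validation = "LOO")
summary(fit)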
2007 Aug 04
3
Normality tests
Hello All,
I am new to R, and I am writing to seek your advice on how best to use it to run
R's various normality tests in an automated way.
In a nutshell, my situation is as follows. I work in an investment bank, and my
team and I are concerned that the assumption we make in our models that the
returns of assets are normally distributed may not be justified for certain
asset classes. We are
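The message is truncated here. As a minimal sketch of running a normality test over several return series at once (the data frame, its columns, and the 5% cut-off are assumptions):

# Sketch: apply shapiro.test() to every column of a data frame of returns.
set.seed(7)
returns <- data.frame(equities = rnorm(250), credit = rt(250, df = 3))
pvals <- sapply(returns, function(r) shapiro.test(r)$p.value)
pvals                       # one p-value per asset class
names(which(pvals < 0.05))  # series where normality is rejected at the 5% level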
2001 Jul 30
2
functions, `...' and .Rprofile
I'm experiencing some confusion with the ellipsis argument
(...).
In my .Rprofile, I have the following functions:
stderr <- function(x, ...) {
sqrt( var(x, ...) / length(x) )
}
se <- stderr
I can use tapply to calculate some means:
> tapply( Diameter, factor(Region), mean, na.rm=TRUE )
I II III IV V
0.02896429
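(The output is truncated.) One pitfall with this definition, possibly unrelated to the .Rprofile question itself, is that na.rm = TRUE passed through ... removes NAs inside var() while length(x) still counts them. A sketch of a variant that keeps the two consistent:

# Sketch: drop NAs before both var() and length() (this masks base R's
# stderr() connection, as the original .Rprofile definition already does).
stderr <- function(x, na.rm = FALSE, ...) {
  if (na.rm) x <- x[!is.na(x)]
  sqrt(var(x, ...) / length(x))
}
se <- stderr
# tapply(Diameter, factor(Region), se, na.rm = TRUE)   # as in the call above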
2018 May 22
2
Plot qualitative y axis
Many thanks,
My goal is to make a plot like the attached, but with the Y axis starting
at XIV and ending at I at the top. Generally (in Excel, for instance) the X
axis holds categories and the Y axis holds numbers; I want the opposite,
plotted as lines. Your last suggestion is close to what I'm looking for, but
a barplot is not needed.
Hope you can help; thanks in advance.
2018-05-22 0:58 GMT+02:00 Jim Lemon <drjimlemon at gmail.com>:
> Hi
2008 Nov 17
1
Type III ANOVA of package car depends on factor level order
## Question1: How to define IV with interaction alone, without main effects?
## Question2: Should Type III ANOVA in package car be independent of
the factor level order?
## data from http://www.otago.ac.nz/sas/stat/chap30/sect52.htm
drug <- c(t(t(rep(1,3))) %*% t(1:4));      # 1 1 1 2 2 2 3 3 3 4 4 4
disease <- c(t(t(1:3)) %*% t(rep(1,4)));   # 1 2 3 repeated four times
y <- t(matrix(c(
42 ,44 ,36 ,13 ,19 ,22
,33 ,NA ,26 ,NA ,33 ,21
,31 ,-3 ,NA
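(The response matrix is truncated.) On Question 2: type III tests from car::Anova are only invariant to factor level order when the factors use sum-to-zero contrasts. A sketch with made-up balanced data illustrating the usual setup:

# Sketch (data made up): set sum-to-zero contrasts before fitting, then the
# type III table from car::Anova no longer depends on the level order.
library(car)
set.seed(1)
dat <- expand.grid(drug = factor(1:4), disease = factor(1:3), rep = 1:2)
dat$y <- rnorm(nrow(dat), mean = 30, sd = 8)
options(contrasts = c("contr.sum", "contr.poly"))
fit <- lm(y ~ drug * disease, data = dat)
Anova(fit, type = "III")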
2017 Nov 18
3
Complicated analysis for huge databases
The loop :
AllMAFs <- list()
for (i in length(SeparatedGroupsofmealsCombs) {
AllMAFs[[i]] <- apply( SeparatedGroupsofmealsCombs[[i]], 2, function(x)maf( tabulate( x+1) ))
}
gives these errors (I tried this many times and I'm sure I copied it entirely):
Error in apply(SeparatedGroupsofmealsCombs[[i]], 2, function(x) maf(tabulate(x + :
object 'i' not found
> }
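Two issues stand out in the pasted loop: an unbalanced parenthesis in the for() line, and iterating over length(...) (a single number) rather than seq_along(...). A sketch of the corrected loop; maf() and SeparatedGroupsofmealsCombs are the poster's own objects, so tiny stand-ins are defined here only so the sketch runs:

# Sketch: loop over seq_along(), not length(). Stand-in data/function below.
maf <- function(counts) min(counts) / sum(counts)               # placeholder for the real maf()
SeparatedGroupsofmealsCombs <- list(matrix(rbinom(20, 2, 0.3), ncol = 4))
AllMAFs <- list()
for (i in seq_along(SeparatedGroupsofmealsCombs)) {
  AllMAFs[[i]] <- apply(SeparatedGroupsofmealsCombs[[i]], 2,
                        function(x) maf(tabulate(x + 1)))
}
# more idiomatically:
# AllMAFs <- lapply(SeparatedGroupsofmealsCombs,
#                   function(m) apply(m, 2, function(x) maf(tabulate(x + 1))))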
2009 Dec 08
1
Sort a data frame according to romans
R-help,
I have a data frame:
> mydata
strata nh Nh Wh fh
1 I 10 26 0.048 0.385
2 II 32 84 0.154 0.381
3 III 16 42 0.077 0.381
4 IV 4 11 0.020 0.364
5 V 10 26 0.048 0.385
7 VII 64 168 0.309 0.381
8 VIII 49 129 0.237 0.380
9 IX 22 58 0.107 0.379
91 VI 0 0 0.000 0.000
and I wish to rearrange the data so that they are sorted according to the roman
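(Truncated.) One approach is to convert the roman strings to their integer values with utils::as.roman() and sort on those; a minimal sketch with a few of the rows above:

# Sketch: as.roman() maps roman-numeral strings to integers that order() can use.
mydata <- data.frame(strata = c("I", "II", "III", "IV", "V", "VII", "VIII", "IX", "VI"),
                     nh     = c(10, 32, 16, 4, 10, 64, 49, 22, 0))
mydata[order(as.integer(as.roman(mydata$strata))), ]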
2017 Jul 13
0
How to formulate quadratic function with interaction terms for the PLS fitting model?
Below.
-- Bert
Bert Gunter
On Thu, Jul 13, 2017 at 3:07 AM, Luigi Biagini <luigi.biagini at gmail.com> wrote:
> I have two ideas about it.
>
> 1-
> i) Entering variables in quadratic form is done with the command I
> (variable ^ 2) -
> plsr (octane ~ NIR + I (nir ^ 2), ncomp = 10, data = gasTrain, validation =
> "LOO"
> You could also use a new variable
2018 May 21
2
Plot qualitative y axis
Hi all,
I'm trying to plot this data
N M W
I 10 106
II 124 484
III 321 874
IV 777 1140
V 896 996
VI 1706 1250
VII 635 433
VIII 1437 654
IX 693 333
X 1343 624
XI 1221 611
XII 25 15
XIII 3
XIV 7 8
So the Y axis will show the level (qualitative data) and the X axis will
show the M and W variables; the X axis will have a length between 0 and
2000.
I would like to plot a line with M and other
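(Truncated.) A minimal sketch of one way to do this in base graphics, using the first few rows of the data above: plot against numeric positions, suppress the default Y axis, and label it with the roman levels, reversed so that I ends up at the top (as requested in the follow-up message):

# Sketch: qualitative (roman) labels on the Y axis, M and W as lines along X.
lev <- c("I", "II", "III", "IV", "V", "VI")        # first few levels from the data
M   <- c(10, 124, 321, 777, 896, 1706)
W   <- c(106, 484, 874, 1140, 996, 1250)
pos <- rev(seq_along(lev))                         # I gets the highest position (top)
plot(M, pos, type = "o", xlim = c(0, 2000), yaxt = "n",
     xlab = "Count", ylab = "Level")
lines(W, pos, type = "o", lty = 2)
axis(2, at = pos, labels = lev, las = 1)
legend("topright", legend = c("M", "W"), lty = c(1, 2))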
2013 Aug 23
1
A couple of questions regarding the survival:::cch function
Dear all,
I have a couple of questions regarding the survival:::cch function.
1) I notice that the Prentice and Self-Prentice methods give identical standard errors (not by chance but by programming design) while their estimates differ. My guess is that both use the standard-error form from Self and Prentice (1986). I see that standard errors for both methods are