Displaying 20 results from an estimated 100 matches for "bernoullis".
2011 Feb 01
1
Lmer binomial distribution x HLM Bernoulli distribution
Dear R-users,
I'm running an lmer model using the lme4 package. My dependent variable is
dichotomous and I'm using the "binomial" family. The results
are slightly different from the HLM results based on a Bernoulli
distribution. I read that a Bernoulli distribution is an extension of a
binomial distribution. Is that right? If so, how can I adapt my R model to a
Bernoulli
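For reference, with a dichotomous (0/1) response the binomial family already corresponds to a Bernoulli likelihood, so no adaptation should be needed; a minimal lme4 sketch (the data, variable, and grouping names below are made up for illustration):
library(lme4)
set.seed(1)
d <- data.frame(group = gl(20, 10), x = rnorm(200))
d$y <- rbinom(200, 1, plogis(0.5 * d$x))          # dichotomous (0/1) response
## binomial family with a 0/1 response = Bernoulli model, as in HLM
fit <- glmer(y ~ x + (1 | group), data = d, family = binomial)
summary(fit)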
2008 Jan 02
1
Random Bernoulli sequences with given point-biserial correlation?
Dear R-listers,
Can someone suggest a method for generating a finite Bernoulli
sequence that is likely to have a given point-biserial correlation
with an existing Bernoulli sequence?
_____________________________
Professor Michael Kubovy
University of Virginia
Department of Psychology
USPS: P.O.Box 400400 Charlottesville, VA 22904-4400
Parcels: Room 102 Gilmer Hall
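One simple construction (a sketch, not necessarily what the thread settled on): copy each element of the existing sequence with probability q and redraw it independently otherwise; when the marginal probabilities match, the expected correlation of the new sequence with the old one is q.
set.seed(1)
p <- 0.4
y <- rbinom(500, 1, p)                 # existing Bernoulli sequence
q <- 0.6                               # target correlation (assumed)
keep <- rbinom(length(y), 1, q)        # 1 = copy y, 0 = redraw independently
x <- ifelse(keep == 1, y, rbinom(length(y), 1, p))
cor(x, y)                              # close to q for long sequences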
2007 Jul 03
3
generating correlated Bernoulli random variables
Hi all,
I was wondering how to generate samples for two RVs X1 and X2.
X1 ~ Bernoulli (p1)
X2 ~ Bernoulli (p2)
Also, X1 and X2 are correlated with correlation \rho.
Regards,
Vineet
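One common approach is a Gaussian copula; a minimal sketch (note that rho below is the correlation on the latent normal scale, so the achieved Bernoulli correlation will generally be somewhat smaller):
library(MASS)
set.seed(1)
n <- 10000; p1 <- 0.3; p2 <- 0.6; rho <- 0.5      # rho on the latent scale (assumed)
Z <- mvrnorm(n, mu = c(0, 0), Sigma = matrix(c(1, rho, rho, 1), 2, 2))
X1 <- as.integer(Z[, 1] < qnorm(p1))              # Bernoulli(p1)
X2 <- as.integer(Z[, 2] < qnorm(p2))              # Bernoulli(p2)
c(mean(X1), mean(X2), cor(X1, X2))                # check marginals and correlation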
2008 Aug 27
0
How to calculate cumulative values for a simple Bernoulli's distribution?
Hi there,
I have two questions and believe that there is an extremely easy solution.
Being a beginner with R makes things a bit more complicated.
This is the code:
n <- 15
D <- rpois(n, 3)        # the excerpt shows only a bare rpois(15,3); 'D' is assumed to hold these draws
DATA <- cbind(D, rpois(n, 3))
data <- as.data.frame(DATA)
colnames(data) <- c("D", "X")
# Question 1: is it possible to put the following creation of x in a nicer form?
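The excerpt cuts off before the actual question, but for the cumulative distribution of the single Bernoulli variable named in the subject line, pbinom() with size = 1 is enough; a minimal sketch:
p <- 0.3
pbinom(0:1, size = 1, prob = p)           # cumulative probabilities: 1 - p, then 1
cumsum(dbinom(0:1, size = 1, prob = p))   # same thing via the pmf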
2010 May 23
2
Bernoulli random variable with different probability
Dear R-helpers,
I would like to generate a variable that takes the value 0 or 1, where each subject
has a different probability of drawing a 1.
So, which of the following pieces of code should I use?
Suppose there are 5 subjects, and their probabilities for this Bernoulli
variable are p = c(0.2, 0.9, 0.15, 0.8, 0.75)
n <- 5
Ber.var <- rbinom(n, 1, p) ## I doubt if this will take the first probability,
which is
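rbinom() is vectorised over prob, so a single call gives one draw per subject using that subject's own probability; a minimal sketch:
set.seed(1)
p <- c(0.2, 0.9, 0.15, 0.8, 0.75)
n <- length(p)
Ber.var <- rbinom(n, size = 1, prob = p)   # the i-th draw uses p[i]
Ber.var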
2006 Oct 06
1
Sum of Bernoullis with varying probabilities
Hi Folks,
Given a series of n independent Bernoulli trials with
outcomes Yi (i=1...n) and Prob[Yi = 1] = Pi, I want
P = Prob[sum(Yi) = r] (r = 0,1,...,n)
I can certainly find a way to do it:
Let p be the vector c(P1,P2,...,Pn).
The cases r=0 and r=n are trivial (and also are exceptions
for the following routine).
For a given value of r in (1:(n-1)),
library(combinat)
Set <- (1:n)
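A compact alternative to enumerating subsets with combinat (a sketch of the standard convolution recursion, not the poster's routine): build the distribution of the sum one trial at a time.
poisbinom <- function(p) {
  pmf <- 1                                          # P(sum = 0) for an empty sum
  for (p.i in p) {
    pmf <- c(pmf * (1 - p.i), 0) + c(0, pmf * p.i)  # convolve with Bernoulli(p.i)
  }
  pmf                                               # pmf[r + 1] = Prob[sum(Yi) = r]
}
p <- c(0.2, 0.5, 0.7)
poisbinom(p)              # probabilities for r = 0, 1, 2, 3
sum(poisbinom(p))         # sanity check: equals 1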
2010 Apr 26
3
R.GBM package
Hi, Dear Greg,
I am new to the gbm package. Can boosted decision trees be implemented in
the 'gbm' package, or can 'gbm' only be used for regression?
If they can, do I need to combine the rpart and gbm commands?
Thanks so much!
--
Sincerely,
Changbin
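For the classification question above: gbm fits boosted classification trees on its own (no rpart needed) when the response is 0/1 and distribution = "bernoulli"; a minimal sketch with simulated data:
library(gbm)
set.seed(1)
d <- data.frame(x1 = rnorm(200), x2 = rnorm(200))
d$y <- rbinom(200, 1, plogis(d$x1 - d$x2))        # 0/1 response
fit <- gbm(y ~ x1 + x2, data = d, distribution = "bernoulli",
           n.trees = 500, interaction.depth = 2, shrinkage = 0.05)
predict(fit, d, n.trees = 500, type = "response")[1:5]   # class probabilities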
2006 Feb 03
5
pbinom with size argument 0 (PR#8560)
Full_Name: Uffe Høgsbro Thygesen
Version: 2.2.0
OS: linux
Submission from: (NULL) (130.226.135.250)
Hello all.
pbinom(q=0,size=0,prob=0.5)
returns the value NaN. I had expected the result 1. In fact, any value for q
seems to give NaN. Note that
dbinom(x=0,size=0,prob=0.5)
returns the value 1.
Cheers,
Uffe
2007 Feb 07
3
generate Binomial (not Binary) data
Dear All,
I am looking for an R function, or any other reference, to generate a series of correlated Binomial (not Bernoulli) data. The "bindata" package can do this for the binary case but not the binomial case.
Thank you,
Bernard
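One simple construction when both counts share the same marginal p (a sketch of the common-component trick, not necessarily what Bernard needed): let the two Binomial(k, p) counts share m of their k Bernoulli trials, which gives cor(X1, X2) = m / k.
set.seed(1)
n <- 10000; k <- 10; p <- 0.3
m <- 4                                # shared trials, so target correlation 0.4
Z  <- rbinom(n, m, p)                 # common component
X1 <- Z + rbinom(n, k - m, p)         # Binomial(k, p)
X2 <- Z + rbinom(n, k - m, p)         # Binomial(k, p), correlated with X1
cor(X1, X2)                           # about m / k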
2008 Sep 22
1
gbm error
Good afternoon
Has anyone tried using Dr. Elith's BRT script? I cannot seem to run
gbm.step from the installed gbm package. Is it something external to gbm?
When I run the script itself
<- gbm.step(data = model.data,
            gbm.x = colx:coly,
            gbm.y = colz,
            family = "bernoulli",
            tree.complexity = 5,
            learning.rate = 0.01,
            bag.fraction = 0.5)
... I
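gbm.step() is indeed external to gbm: it comes from Elith and Leathwick's BRT functions script and is nowadays also distributed in the dismo package, so (assuming that matches the script being used) one way to get it is:
install.packages("dismo")   # dismo bundles the BRT helper functions, including gbm.step()
library(dismo)
library(gbm)                # gbm.step() builds on the gbm package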
2005 Jul 13
3
nlme, MASS and geoRglm for spatial autocorrelation?
Hi.
I'm trying to perform what should be a reasonably basic analysis of some
spatial presence/absence data but am somewhat overwhelmed by the options
available and could do with a helpful pointer. My research so far
indicates that if my data were normal, I would simply use gls() (in nlme)
and one of the various corSpatial functions (e.g. corSpher(), to be
analogous to a similar analysis in SAS)
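For the normal-response case the post describes, a minimal gls() sketch with a spherical spatial correlation structure (the simulated data and all variable names are placeholders):
library(nlme)
set.seed(1)
d <- data.frame(lon = runif(50), lat = runif(50), pred = rnorm(50))
d$resp <- 2 * d$pred + rnorm(50)
fit <- gls(resp ~ pred, data = d,
           correlation = corSpher(form = ~ lon + lat))
summary(fit)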
2001 Oct 02
4
plot of Bernoulli data
I have some Bernoulli data something like this:
x<-sort(runif(100,1,20))
p<-pnorm(x,10,3)
y<-as.numeric(runif(x)<p)
plot(x,y)
lines(x,p)
This plot is not very satisfactory because the ogive does not visually
fit the (0,1) points very well, and also because the points tend to fall
on top of one another. The second problem can be eliminated by adding
vertical jitter. However I was
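A sketch of the jitter idea mentioned above (not the eventual solution from the thread), reusing the same simulated data:
set.seed(1)
x <- sort(runif(100, 1, 20))
p <- pnorm(x, 10, 3)
y <- as.numeric(runif(length(x)) < p)
plot(x, jitter(y, amount = 0.05), ylab = "y (jittered)")  # separates stacked points
lines(x, p)                                               # true probability curve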
2011 Mar 21
1
Randomly generating data
Hi, everybody,
I have a problem and need your help.
There are two columns that look like this:
[1,] "t" "f"
[2,] "f" "t"
[3,] "t" "f"
[4,] "t" "t"
[5,] "f" "f"
I just want to generate the third column based on these two columns. First,
I randomly choose one of the two columns,
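A minimal sketch of one reading of the (truncated) question: for each row, pick one of the two columns at random and copy its value into a third column.
set.seed(1)
m <- matrix(c("t","f", "f","t", "t","f", "t","t", "f","f"),
            ncol = 2, byrow = TRUE)
pick  <- sample(1:2, nrow(m), replace = TRUE)   # random column per row
third <- m[cbind(seq_len(nrow(m)), pick)]       # matrix indexing pulls one value per row
cbind(m, third)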
2009 Jun 17
1
gbm for cost-sensitive binary classification?
I recently used gbm for a binary classification problem. As expected, it gets very good results, based on area under the ROC curve with 7-fold cross-validation. However, the application (malware detection) is cost-sensitive: getting a FP (classifying a clean sample as a dirty one) is much worse than getting a FN (missing a dirty sample). I would like to tune the gbm model to be biased toward a very low FP rate.
For this
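One common way to push a gbm fit toward a low false-positive rate (a generic sketch, not advice from the thread): give the clean (y = 0) cases a larger observation weight via gbm's weights argument; the 5:1 cost ratio below is an assumption.
library(gbm)
set.seed(1)
d <- data.frame(x1 = rnorm(500), x2 = rnorm(500))
d$y <- rbinom(500, 1, plogis(d$x1))
w <- ifelse(d$y == 0, 5, 1)          # assumed 5:1 cost for a false positive
fit <- gbm(y ~ x1 + x2, data = d, distribution = "bernoulli",
           weights = w, n.trees = 500, shrinkage = 0.05)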
2010 Jul 09
1
Appropriate tests for logistic regression with a continuous predictor variable and Bernoulli response variable
I have data with a binary response variable, repcnd (pregnant or not), and one continuous predictor variable, svl (body size), as shown below. I did a Hosmer-Lemeshow test as a goodness-of-fit check (as suggested by a kind “R-helper” previously). To test whether the predictor (svl, or body size) has a significant effect on predicting whether or not a female snake is pregnant, I used the differences between
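The comparison the post seems to describe is a likelihood-ratio (deviance) test of the continuous predictor; a minimal sketch with simulated stand-ins for the snake data:
set.seed(1)
svl    <- rnorm(60, mean = 50, sd = 5)                 # body size (simulated)
repcnd <- rbinom(60, 1, plogis((svl - 50) / 5))        # pregnant or not (simulated)
fit0 <- glm(repcnd ~ 1,   family = binomial)
fit1 <- glm(repcnd ~ svl, family = binomial)
anova(fit0, fit1, test = "Chisq")                      # difference in deviances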
2006 May 27
2
boosting - second posting
Hi
I am using boosting for a classification and prediction problem.
For some reason it is giving me an outcome that doesn't fall between 0
and 1 for the predictions. I have tried type="response" but it made no
difference.
Can anyone see what I am doing wrong?
Screen output shown below:
> boost.model <- gbm(as.factor(train$simNuance) ~ ., # formula
+
2005 Apr 25
1
Failed to install gbm_1.4-2 (PR#7814)
Full_Name: The Manager
Version: 2.0.1
OS: Solaris 9
Submission from: (NULL) (129.67.80.243)
> install.packages("gbm")
trying URL `http://cran.uk.r-project.org/src/contrib/PACKAGES'
Content type `text/plain; charset=ISO-8859-1' length 52975 bytes
opened URL
==================================================
downloaded 51Kb
trying URL
2011 Jan 11
1
glm specification where response is a 2col matrix
Hi,
when I apply a glm() model in two ways,
first with the response in a two column matrix specification with
successes and failures
y <- matrix(c(5, 1,
              3, 3,
              2, 2,
              0, 4), ncol = 2, byrow = TRUE)
X <- data.frame(x1 = factor(c(1, 1, 0, 0)),
                x2 = factor(c(0, 1, 0, 1)))
glm(y ~ x1 + x2, data = X, family = "binomial")
second with a model matrix that
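For comparison with the two-column form above, a sketch of an equivalent per-trial (0/1) specification reusing y and X from the code above; it reproduces the same coefficient estimates (this is only one reading of the truncated "model matrix" alternative):
n.trials <- rowSums(y)                                  # trials per covariate row
long <- X[rep(seq_len(nrow(X)), n.trials), , drop = FALSE]
long$z <- unlist(mapply(function(s, f) c(rep(1, s), rep(0, f)),
                        y[, 1], y[, 2]))
glm(z ~ x1 + x2, data = long, family = binomial)        # same coefficients as above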
2012 Jun 18
6
Trying to speed up an if/else statement in simulations
Dear R-help,
I am trying to write a function to simulate datasets of size n which contain
two time-to-event outcome variables with associated 'Event'/'Censored'
indicator variables (flag1 and flag2 respectively). One of these indicator
variables needs to be dependent on the other, so I am creating the first and
trying to use this to create the second using an if/else statement.
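The usual speed-up for this kind of dependence (a generic sketch, since the actual simulation code is not shown in the excerpt) is to replace the element-wise if/else with the vectorised ifelse():
set.seed(1)
n <- 1e5
flag1 <- rbinom(n, 1, 0.3)                    # first Event/Censored indicator
p2    <- ifelse(flag1 == 1, 0.6, 0.2)         # assumed dependence on flag1
flag2 <- rbinom(n, 1, p2)                     # second indicator, dependent on the first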
2004 Jul 10
1
Exact Maximum Likelihood Package
Dear R users,
I am a mathematics postdoc at UC Berkeley. I have written a package
in a Computational Algebra System named Singular
http://www.singular.uni-kl.de
to compute the Maximum Likelihood of a given probability distribution over
several discrete random variables. This package gives exact answers to the
problem. But more importantly, it gives all MLE solutions.
My understanding is that