Displaying 20 results from an estimated 37 matches for "probabilties".
2009 Sep 04
1
Best Way to Compute/Approximate(?) Probability of a Point in a Given Distribution
AFAIK, R only has "pnorm", which computes the probability of getting a
value smaller than or equal to "x" from
a normal distribution N[mean, stdev]. For example:
R> pnorm(0, 4, 10)
[1] 0.3446
means there is a 34.46% chance of getting a value smaller than or equal
to 0 from a N(4, 10) distribution.
What I intend to get is: given the observed value "x", mean, and stdev
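For a continuous distribution the probability of any exact point is zero: pnorm gives a tail probability, while dnorm gives the density at a point. A minimal sketch of the distinction (not the original poster's code; the two-sided tail below is one common way to measure how extreme an observed x is):

```r
# Tail probability: P(X <= 0) for X ~ N(mean = 4, sd = 10)
p <- pnorm(0, mean = 4, sd = 10)   # ~0.3446, as in the post

# Density (NOT a probability) at the observed point x = 0
d <- dnorm(0, mean = 4, sd = 10)

# One surrogate for "probability of a point": the two-sided tail
# beyond the observed value, i.e. how extreme x is under N(4, 10)
x  <- 0
p2 <- 2 * pnorm(-abs(x - 4) / 10)
```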
2009 Apr 17
5
Binomial simulation
Hi guys,
I was wondering if someone could point me in the right direction.
dbinom(10,1,0.25)
I am using dbinom(10,1,0.25) to calculate the probability of 10 judges
choosing a certain brand x times.
I was wondering how I would go about simulating 1000 trials of each x value
?
regards
Brendan
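Assuming the intended model is 10 judges each choosing the brand independently with probability 0.25 (i.e. dbinom(x, size = 10, prob = 0.25); the quoted dbinom(10, 1, 0.25) is 0, since 10 successes cannot occur in 1 trial), 1000 trials can be simulated with rbinom and compared to the theoretical probabilities:

```r
set.seed(42)
# Each trial: how many of the 10 judges choose the brand
x <- rbinom(1000, size = 10, prob = 0.25)

# Empirical frequencies of each x vs. the dbinom probabilities
empirical   <- table(factor(x, levels = 0:10)) / 1000
theoretical <- dbinom(0:10, size = 10, prob = 0.25)
```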
2005 Jan 14
2
probability calculation in SVM
Hi All,
In package e1071 for SVM-based classification, one can get a probability
measure for each prediction. I would like to know what method is used for
calculating this probability. Is it calculated using a logistic link function?
Thanks for your help.
Regards,
Raj
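e1071's svm (via libsvm) does fit probabilities with a sigmoid on the decision values (Platt scaling). A minimal base-R sketch of the idea only, not e1071's internal code, which uses a more robust cross-validated fit:

```r
set.seed(1)
# Toy SVM-like decision values and true class labels
f <- c(rnorm(50, -1), rnorm(50, 1))   # decision values
y <- rep(c(0, 1), each = 50)          # class labels

# Platt scaling: fit P(y = 1 | f) = 1 / (1 + exp(-(a*f + b)))
# by logistic regression of the labels on the decision values
platt <- glm(y ~ f, family = binomial)
prob  <- predict(platt, type = "response")
```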
2007 Jul 04
2
probability plot
Hi all,
I am new to R, but I am interested in it! These days I am
working through the NIST handbook pages, at
http://www.itl.nist.gov/div898/handbook/eda/section3/probplot.htm,
and I have run into a problem with probability plots: I don't know how to
plot a data set with R.
Could somebody tell me the answer? An example would be best! I will
look forward to your answer.
Thank you very much.
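The NIST page describes a normal probability plot, which in base R is qqnorm plus qqline. A minimal sketch with simulated data (the poster's data set is not shown):

```r
set.seed(1)
x <- rnorm(100, mean = 10, sd = 2)   # stand-in data set

qqnorm(x)   # ordered data vs. theoretical normal quantiles
qqline(x)   # reference line through the quartiles

# For a non-normal reference distribution, plot the quantiles directly,
# e.g. against a Weibull(2) reference:
plot(qweibull(ppoints(length(x)), shape = 2), sort(x))
```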
2006 Jun 14
1
Estimate region of highest probability density
Dear R-community
I have data consisting of x and y. To each pair (x,y) a z value (weight) is assigned. With kde2d I can estimate the densities on a regular grid and based on this make a contour plot (not considering the z-values). According to an earlier post in the list I adjusted the kde2d to kde2d.weighted (see code below) to estimate the
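The poster's kde2d.weighted is their own adaptation (code not shown here). One hedged way to approximate a weighted 2-D density with stock MASS::kde2d is to resample the points with probability proportional to their z-weights:

```r
library(MASS)  # ships with R
set.seed(1)
x <- rnorm(200)
y <- rnorm(200)
w <- runif(200)   # z-values used as weights (assumed non-negative)

# Approximate a weighted density by weight-proportional resampling,
# then run the ordinary unweighted estimator on the resample
i    <- sample(seq_along(x), 5000, replace = TRUE, prob = w / sum(w))
dens <- kde2d(x[i], y[i], n = 50)
contour(dens)
```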
2010 Dec 10
2
survival package - calculating probability to survive a given time
Dear R users,
I am trying to calculate the probability of surviving a given time using the
Kaplan-Meier estimate of the survival curve.
What is the right way to do that? As far as I see, I cannot use the
predict methods from the survival package?
library(survival)
set.seed(1)
time <- cumsum(rexp(1000)/10)
status <- rbinom(1000, 1, 0.5)
## kaplan meier estimates
fit <- survfit(Surv(time,
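There is indeed no predict method for this; summary(fit, times = t) evaluates the Kaplan-Meier curve at a given time. A sketch completing the snippet's setup (the survfit call above is truncated; Surv(time, status) ~ 1 is assumed):

```r
library(survival)  # ships with R
set.seed(1)
time   <- cumsum(rexp(1000) / 10)
status <- rbinom(1000, 1, 0.5)

## Kaplan-Meier estimate (model assumed from the truncated snippet)
fit <- survfit(Surv(time, status) ~ 1)

# Estimated probability of surviving beyond t = 50
s50 <- summary(fit, times = 50)$surv
```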
2005 Jan 16
2
Empirical cumulative distribution with censored data
Dear list,
I would like to plot the empirical cumulative distribution of the time
needed by a treatment to attain a certain goal. A number of
experiments is run with a strict time limit. In some experiments the
goal is attained before the time limit, in other experiments time
expires before the goal is attained. The situation is very similar to
survival analysis with censored data. I tried
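This is exactly right-censored survival data, and the empirical cumulative distribution is one minus the Kaplan-Meier survival estimate; survfit with fun = "event" plots it directly. A minimal sketch with made-up data (the poster's experiments are not shown):

```r
library(survival)  # ships with R
t_obs <- c(3, 5, 7, 10, 10, 10)   # 10 = time limit reached
goal  <- c(1, 1, 1,  0,  0,  0)   # 1 = goal attained, 0 = censored

fit <- survfit(Surv(t_obs, goal) ~ 1)
plot(fit, fun = "event", ylab = "P(goal attained by time t)")
# 1 - fit$surv is the estimated CDF at the observed times
```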
2009 Jul 10
2
predict.glm -> which class does it predict?
Hi,
I have a question about logistic regression in R.
Suppose I have a small list of proteins P1, P2, P3 that predict a
two-class target T, say cancer/noncancer. Let's further say I know that I
can build a simple logistic regression model in R
model <- glm(T ~ ., data=d.f(Y), family=binomial) (Y is the dataset of
the Proteins).
This works fine. T is a factored vector with levels cancer,
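For a factor response, glm with family = binomial models the probability of the second factor level (the first level is the reference class). A self-contained sketch with toy data in place of the poster's proteins:

```r
set.seed(1)
d <- data.frame(P1 = rnorm(100), P2 = rnorm(100))
T <- factor(ifelse(d$P1 + rnorm(100) > 0, "cancer", "noncancer"),
            levels = c("noncancer", "cancer"))

model <- glm(T ~ ., data = d, family = binomial)

# predict() on the response scale gives P(T == levels(T)[2]),
# here P(T == "cancer"), since "cancer" is the second level
p <- predict(model, type = "response")
levels(T)[2]
```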
2006 Jan 12
2
Basis of fisher.test
...observed table, compute the probability
of this table conditional on the marginal totals.
2) Rank the possible tables in order of a measure
of discrepancy between the table and the null
hypothesis of "no association".
3) Locate the observed table, and compute the sum
of the probabilities, computed in (1), for this
table and more "extreme" tables in the sense of
the ranking in (2).
The question is: what "measure of discrepancy" is
used in 'fisher.test' corresponding to stage (2)?
(There are in principle several possibilities, e.g.
value of a Pears...
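For the 2x2 case the answer is documented: fisher.test's two-sided p-value sums the conditional (hypergeometric) probabilities of all tables with the same margins that are no more likely than the observed one, so the "measure of discrepancy" is the table probability itself. A check against dhyper:

```r
m    <- matrix(c(3, 1, 1, 3), 2)   # observed 2x2 table
p_ft <- fisher.test(m)$p.value

# Tables with the same margins are indexed by the (1,1) cell
x  <- 0:4                          # possible (1,1) counts
pr <- dhyper(x, 4, 4, 4)           # their conditional probabilities

p_obs <- dhyper(m[1, 1], 4, 4, 4)
# Sum over tables at most as likely as the observed one
# (the 1 + 1e-7 fudge mirrors fisher.test's relative tolerance)
p_sum <- sum(pr[pr <= p_obs * (1 + 1e-7)])
```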
2004 Jul 11
1
your reference on this problem highly appreciated
please help me on this
----- Message Text -----
Dear all R users
first, sorry that this question might not be appropriate to ask here.
I want to know the theories or techniques aimed at the following questions:
I have a sample, say,K(at the
2005 Mar 16
1
Help in persp (VERY URGENT ASSISTANCE)
Dear All,
I am very new to R, so I may be wrong in some steps. I have given the code I tried for drawing a 3D surface using persp. I need to label the axes with scales.
z <- array(topnew2$V2, dim=c(600,2))
x <- 10 * (1:nrow(z))
y <- (1:ncol(z))
persp(x, y, z, theta = 30, phi = 30, expand = 0.5, col = "lightblue", xlab ="fluidlevel", ylab
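Numeric scales on the persp axes come from ticktype = "detailed"; the default "simple" draws only arrows. A self-contained sketch (the poster's topnew2 data is not available, so a smooth test surface is used; the axis labels are taken from the snippet):

```r
x <- seq(-3, 3, length.out = 30)
y <- x
z <- outer(x, y, function(u, v) exp(-(u^2 + v^2) / 2))  # test surface

persp(x, y, z, theta = 30, phi = 30, expand = 0.5, col = "lightblue",
      xlab = "fluidlevel", ylab = "time", zlab = "height",
      ticktype = "detailed")   # draws numeric scales on all three axes
```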
2005 Sep 22
1
(survexp:) compute the expected remaining survival time
Dear list,
I would like to know if there is a direct method to compute the
expected remaining survival time for a subject having survived up to
time t. survexp gives me the probability for subject S to survive up to
day D.
Thanks
--
Anne
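There is no one-call answer, but the Kaplan-Meier estimate of the expected remaining time for a subject alive at t0 is the integral of S(u) from t0 to the end of follow-up, divided by S(t0) (a restricted mean, so it understates the truth when the curve does not reach zero). A hedged sketch on simulated data:

```r
library(survival)  # ships with R
set.seed(1)
d   <- data.frame(time = rexp(300, 0.1), status = rbinom(300, 1, 0.8))
fit <- survfit(Surv(time, status) ~ 1, data = d)

# Restricted mean residual life at t0: integrate the KM step
# function from t0 to the last observed time, divide by S(t0)
mrl <- function(fit, t0) {
  keep <- fit$time > t0
  tt <- c(t0, fit$time[keep])                         # step endpoints
  ss <- c(summary(fit, times = t0)$surv, fit$surv[keep])
  sum(diff(tt) * head(ss, -1)) / ss[1]
}

v <- mrl(fit, 5)
```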
2007 Jan 05
1
Efficient multinom probs
Dear R-helpers,
I need to compute probabilities of multinomial observations, e.g. by doing the
following:
y <- sample(1:3, 15, replace = TRUE)
prob <- matrix(runif(45), 15)
prob <- prob / rowSums(prob)
diag(prob[, y])
However, my question is whether this is the most efficient way to do this.
In the call prob[,y] a whole matrix is computed which seems a bit of a
waste.
Is...
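Indeed, prob[, y] materialises a whole 15 x 15 matrix just to take its diagonal. Matrix indexing with a two-column (row, column) index extracts the same values with one lookup per observation:

```r
set.seed(1)
y    <- sample(1:3, 15, replace = TRUE)
prob <- matrix(runif(45), 15)
prob <- prob / rowSums(prob)

# Direct (row, column) indexing: no intermediate 15 x 15 matrix
p1 <- prob[cbind(seq_along(y), y)]

# The original, wasteful version, for comparison
p2 <- diag(prob[, y])
```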
2010 Jan 21
1
superimpose histogram and fitted gamma pdf
Hi r-users,
I am trying to draw a histogram (in terms of probability) and superimpose the gamma pdf. Using the code below I can get the plots BUT the y scale for the density is out of scale. How do I change the y-axis scale to a max of 1? Another thing: how do I draw a smooth line rather than points?
Note that my observed data is hume_pos and the fitted data is rgam1.
hist(hume_pos,prob=TRUE)
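hist(..., prob = TRUE) already puts the histogram on the density scale (bar areas sum to 1), so the y-axis should not be forced to a maximum of 1; the smooth line comes from curve(..., add = TRUE) rather than plotting points. A sketch with simulated data in place of hume_pos, and assumed fitted shape/rate values:

```r
set.seed(1)
hume_pos <- rgamma(500, shape = 2, rate = 0.5)   # stand-in data

h <- hist(hume_pos, prob = TRUE, ylim = c(0, 0.25))  # density scale

# Smooth fitted-pdf line (shape/rate here are assumed, not estimated)
curve(dgamma(x, shape = 2, rate = 0.5), add = TRUE, lwd = 2)
```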
2009 Oct 29
1
correlated binary data and overall probability
Dear All,
I am trying to simulate correlated binary data for a clinical research project.
Unfortunately, I cannot come to grips with bindata().
Consider
corr<-data.frame(ID=as.factor(rep(c(1:10), each=5)),
task=as.factor(rep(c(1:5),10)))
[this format might be more appropriate:]
corr2<-data.frame(ID=as.factor(rep(c(1:10),5)),
tablet=as.factor(rep(c(1:5),each=10)))
Now, I want to
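bindata::rmvbin is one route; if the package itself is the sticking point, correlated binary draws can also be generated in base R by thresholding a latent multivariate normal (a probit-style construction; note the latent correlation below is an assumption and is not the correlation on the binary scale):

```r
set.seed(1)
n <- 10; k <- 5                    # 10 subjects, 5 tasks each
rho <- 0.5                         # latent (not binary-scale) correlation

Sigma <- matrix(rho, k, k)
diag(Sigma) <- 1
L <- chol(Sigma)                   # Sigma = t(L) %*% L

Z <- matrix(rnorm(n * k), n) %*% L # rows ~ N(0, Sigma)
Y <- (Z > qnorm(1 - 0.3)) * 1      # marginal P(Y = 1) = 0.3
```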
2011 Jul 14
1
t-test on a data-frame.
Dear R-helpers,
In a data frame I have 100 securities, monthly closing values, from 1995 to
present, with which I have to:
1. Sample with replacement to make 50 samples of 10 securities each; each
sample will hence be a data frame with 10 columns.
2. With uniform probability, mark a month from 2000 onwards as a "special"
month, t=0.
3. I have to subtract the market index from each column of each
2017 Aug 24
1
rmutil parameters for Pareto distribution
In https://en.wikipedia.org/wiki/Pareto_distribution, it is clear what the
parameters are for the Pareto distribution: *xmin*, the scale parameter, and
*a*, the shape parameter.
I am using rmutil to generate random deviates from a Pareto distribution.
It says in the documentation that the probability density of the Pareto
distribution
The Pareto distribution has density
f(y) = s (1 + y/(m
2009 Jun 08
1
Interpreting R -results for Bivariate Normal
Hi guys,
I know that this forum is not for homework, but I am trying to interpret R
output.
I was just wondering if someone might be able to help.
I have been given the following.
For (X1,X2) distributed bivariate normal with parameters
mu1 = 5.8
mu2 = 5.3
sd1 = sd2 = 0.2
and p = 0.6
The R code and input/output are as follows
input
m <- 5.3 + 0.6*(6.3 - 5.8)
s <-
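The snippet is computing the conditional distribution of X2 given X1 = 6.3 (the observed value 6.3 is assumed from the input line): conditional mean mu2 + p*(sd2/sd1)*(x1 - mu1) and conditional sd sd2*sqrt(1 - p^2). Completing the truncated lines under that assumption:

```r
mu1 <- 5.8; mu2 <- 5.3; sd1 <- 0.2; sd2 <- 0.2; p <- 0.6
x1  <- 6.3                                  # assumed observed value

m <- mu2 + p * (sd2 / sd1) * (x1 - mu1)     # conditional mean: 5.6
s <- sd2 * sqrt(1 - p^2)                    # conditional sd:   0.16

# e.g. P(X2 <= 5.5 | X1 = 6.3)
pnorm(5.5, m, s)
```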
2005 Oct 09
1
enter a survey design in survey2.9
...e data from a complex stratified survey. The population is divided into 12 regions, and each region consists of an urban area and a rural one. There are two regions with just an urban area.
The stratification variable is a combination of region and area type (urban/rural).
In rural areas, subdivisions are sampled with probabilities proportional to population size, then enumeration areas (EAs) are sampled within the selected subdivisions, and finally households are selected in those EAs.
In urban areas, EAs are directly selected and finally households are selected.
To schematise, we have:
(12 regions)
each region is divided in two areas / Ur...
2011 Aug 31
6
Weights using Survreg
Dear R users,
I have been trying to understand what the weights argument is doing in the
estimation of the parameters when using the survreg function.
I looked through the function's code but I am not sure if I got it or not.
For example, if I include the Surv function in it:
survreg(Surv(vector, status) ~ 1, weights = vector2, dist = "weibull")
will it try to maximize the likelihood with
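In survreg the weights are case weights: each subject's log-likelihood contribution is multiplied by its weight, so an integer weight w behaves like w copies of that row (point estimates match; standard errors differ). A runnable sketch checking this equivalence on simulated data (note the distribution name is lowercase "weibull"):

```r
library(survival)  # ships with R
set.seed(1)
t_obs  <- rweibull(100, shape = 1.5, scale = 10)
status <- rbinom(100, 1, 0.8)
w      <- sample(1:3, 100, replace = TRUE)   # integer case weights

fit_w <- survreg(Surv(t_obs, status) ~ 1, weights = w, dist = "weibull")

# Equivalent fit: replicate each row w times instead of weighting
fit_r <- survreg(Surv(rep(t_obs, w), rep(status, w)) ~ 1, dist = "weibull")
```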