search for: probabilty

Displaying 20 results from an estimated 37 matches for "probabilty".

2009 Sep 04
1
Best Way to Compute/Approximate(?) Probability of a Point in a Given Distribution
AFAIK, R only has "pnorm", which computes the probability of getting a value smaller than or equal to "x" from a normal distribution N[mean, stdev]. For example: R> pnorm(0, 4, 10) [1] 0.3446 means there is a 34.46% chance of getting a value equal to or smaller than 0 from a N(4, 10) distribution. What I intend to get is: given the observed value "x", mean, and stdev
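For a continuous distribution the probability of an exact point is zero, so what is usually reported is either the density at the point or the probability of a small interval around it; a minimal sketch with the built-in normal functions (the half-width eps is an arbitrary illustration):

  dnorm(0, mean = 4, sd = 10)                    # density of N(4, 10) at x = 0
  eps <- 0.5                                     # arbitrary interval half-width
  pnorm(0 + eps, 4, 10) - pnorm(0 - eps, 4, 10)  # P(x - eps < X <= x + eps)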
2009 Apr 17
5
Binomial simulation
Hi Guys, I was wondering if someone could point me in the right direction. I am using dbinom(10, 1, 0.25) to calculate the probability of 10 judges choosing a certain brand x times. I was wondering how I would go about simulating 1000 trials for each x value? Regards, Brendan
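A minimal simulation sketch, assuming the intended model is 10 judges each choosing the brand with probability 0.25 (i.e. dbinom(x, 10, 0.25) rather than dbinom(10, 1, 0.25)):

  set.seed(1)
  x <- rbinom(1000, size = 10, prob = 0.25)  # 1000 simulated trials of 10 judges
  table(x) / 1000                            # simulated frequency of each x value
  dbinom(0:10, size = 10, prob = 0.25)       # theoretical probabilities for comparison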
2005 Jan 14
2
probability calculation in SVM
Hi All, In package e1071, for SVM-based classification, one can get a probability measure for each prediction. I would like to know what method is used for calculating this probability. Is it calculated using a logistic link function? Thanks for your help. Regards, Raj
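A sketch of how the probability estimates are requested in e1071 (my understanding is that, following libsvm, they come from fitting a logistic/sigmoid to the decision values, i.e. Platt-style scaling, but the package documentation is the authority here); iris is just a stand-in data set:

  library(e1071)
  fit  <- svm(Species ~ ., data = iris, probability = TRUE)
  pred <- predict(fit, iris, probability = TRUE)
  head(attr(pred, "probabilities"))   # per-class probability estimates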
2007 Jul 04
2
probability plot
Hi all, I am new to R, but I am interested in it! These days I am working through the NIST handbook pages at http://www.itl.nist.gov/div898/handbook/eda/section3/probplot.htm. I have run into a problem with probability plots and don't know how to produce one for a data set in R. Could somebody tell me how? An example would be best. I look forward to your answer. Thank you very much.
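A minimal sketch of a normal probability plot in base R (the NIST page also covers other reference distributions, which would need qqplot() against the corresponding quantile function):

  set.seed(1)
  x <- rnorm(100, mean = 10, sd = 2)   # stand-in data
  qqnorm(x)                            # ordered data vs. theoretical normal quantiles
  qqline(x)                            # reference line through the quartiles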
2006 Jun 14
1
Estimate region of highest probability density
Dear R-community, I have data consisting of x and y. To each pair (x, y) a z value (weight) is assigned. With kde2d I can estimate the densities on a regular grid and, based on this, make a contour plot (not considering the z-values). According to an earlier post on the list I adjusted the kd...
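One way to turn a kde2d grid into an approximate highest-density region is to find the density level whose exceedance set holds the desired probability mass; a sketch that ignores the per-pair weights mentioned in the post (MASS::kde2d has no weights argument):

  library(MASS)
  set.seed(1)
  x <- rnorm(500); y <- rnorm(500)                         # stand-in data
  dens <- kde2d(x, y, n = 200)
  pm   <- dens$z * diff(dens$x[1:2]) * diff(dens$y[1:2])   # probability mass per grid cell
  pm   <- pm / sum(pm)                                     # normalise mass captured on the grid
  ord  <- order(dens$z, decreasing = TRUE)
  lev  <- dens$z[ord][which(cumsum(pm[ord]) >= 0.95)[1]]
  contour(dens, levels = lev, drawlabels = FALSE)          # approximate 95% highest-density region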
2010 Dec 10
2
survival package - calculating the probability of surviving a given time
Dear R users, I am trying to calculate the probability of surviving a given time using the Kaplan-Meier estimate of the survival curve. What is the right way to do that? As far as I can see, I cannot use the predict methods from the survival package? library(survival) set.seed(1) time <- cumsum(rexp(1000)/10) status <- rbinom(1000, 1, 0.5) ## kapl...
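summary.survfit() evaluates the fitted curve at requested time points, which gives the estimated survival probability at a given time; a sketch continuing the poster's simulated data (the time point 50 is arbitrary):

  library(survival)
  set.seed(1)
  time   <- cumsum(rexp(1000) / 10)
  status <- rbinom(1000, 1, 0.5)
  fit <- survfit(Surv(time, status) ~ 1)
  summary(fit, times = 50)$surv   # estimated P(T > 50); the full summary also gives the CI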
2005 Jan 16
2
Empirical cumulative distribution with censored data
...tuation is very similar to survival analysis with censored data. I tried the function plot(survfit(Surv(time) ~ 1, data = mydata, conf.int = F)) from the package "survival". Nevertheless, what I would like to see is an increase of probability as time increases, not a decrease of survival probability. I tried to play with ecdf(), but dealing with the censored data is quite hard in this case. Is there anything like ecdf() for censored data, or a way to adapt plot.survfit to my case? Thank you for your consideration. Regards,
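plot.survfit can draw the complement of the survival curve directly via its fun argument, which amounts to a censoring-aware ECDF; a minimal sketch with invented data and hypothetical column names:

  library(survival)
  set.seed(1)
  mydata <- data.frame(time = rexp(100), event = rbinom(100, 1, 0.7))  # invented example
  fit <- survfit(Surv(time, event) ~ 1, data = mydata)
  plot(fit, fun = "event", conf.int = FALSE)   # draws 1 - S(t), which increases with time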
2009 Jul 10
2
predict.glm -> which class does it predict?
...h levels cancer, noncancer. Proteins are numeric. Now, I want to use predict.glm to predict new data: predict(model, newdata = testsamples, type = "response") (testsamples is a small set of new samples). The result is a vector of probabilities for each sample in testsamples. But the probability of WHAT? Of belonging to the first level of T? Of belonging to the second level of T? Is the following expression factor(predict(model, newdata = testsamples, type = "response") >= 0.5) TRUE when the new sample is classified as cancer or when it is classified as noncancer? And why not the o...
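For a binomial glm with a two-level factor response, R treats the first level as "failure" and the second as "success", so type = "response" returns the estimated probability of the second level; a self-contained sketch with invented stand-in data (the response column is called T here only to mirror the post; it shadows TRUE and is best renamed):

  set.seed(1)
  trainsamples <- data.frame(T  = factor(sample(c("cancer", "noncancer"), 60, TRUE)),
                             p1 = rnorm(60), p2 = rnorm(60))   # invented "protein" columns
  levels(trainsamples$T)                  # "cancer" "noncancer": the first level is the reference
  model <- glm(T ~ p1 + p2, data = trainsamples, family = binomial)
  p <- predict(model, newdata = trainsamples, type = "response")
  # p estimates P(T == "noncancer"), the SECOND level, so p >= 0.5 flags "noncancer"
  head(ifelse(p >= 0.5, levels(trainsamples$T)[2], levels(trainsamples$T)[1]))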
2006 Jan 12
2
Basis of fisher.test
I want to ascertain the basis of the table ranking, i.e. the meaning of "extreme", in Fisher's Exact Test as implemented in 'fisher.test', when applied to RxC tables which are larger than 2x2. One can summarise a strategy for the test as 1) For each table compatible with the margins of the observed table, compute the probability of this table conditional on the
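As far as I recall, for r x c tables fisher.test ranks the tables sharing the observed margins by their conditional hypergeometric probability, "more extreme" meaning no more probable than the observed table; a small invented example of the call:

  tab <- matrix(c(8, 2, 3,
                  1, 5, 6), nrow = 2, byrow = TRUE)  # invented 2 x 3 table
  fisher.test(tab)   # exact test on an r x c table, conditional on the margins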
2004 Jul 11
1
your reference on this problem highly appreciated
...ns: I have a sample, say K (in the range from 0 to 20000); the sample data's central moments m(1)...m(j) are estimated (j can be large). Also, I can use some methodology to calculate upper and lower bounds on the probability of any interval of interest, say the interval (400-800). With all this information, I want to recover the distribution of the data, or at least an approximating analytic form. Does anybody know such theo...
2005 Mar 16
1
Help in persp (VERY URGENT ASSISTANCE)
[... an 8 x 2 matrix of zeros, printed row by row, trimmed ...] I need to label the x axis with a scale from 10 to 6000 of length 10, the y axis with 1 and 2, and the z axis from 0 to 1 (probability). Kindly guide me (maybe I have misunderstood some concepts in the commands). Thanks in advance for the help and patience.
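A sketch of one way to get numeric axis annotation on a persp() plot with those ranges (ticktype = "detailed" turns on numeric tick labels; the zero matrix mirrors the data shown in the post):

  x <- seq(10, 6000, length.out = 10)
  y <- 1:2
  z <- matrix(0, nrow = length(x), ncol = length(y))  # placeholder surface of zeros
  persp(x, y, z, zlim = c(0, 1), ticktype = "detailed",
        xlab = "x", ylab = "y", zlab = "probability", theta = 30, phi = 30)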
2005 Sep 22
1
(survexp:) compute the expected remaining survival time
Dear list, I would like to know if there is a direct method to compute the expected remaining survival time for a subject having survived up to time t. survexp gives me the probability for subject S to survive up to day D. Thanks -- Anne
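One way to approximate this from a Kaplan-Meier fit is the restricted mean residual life, the integral of S(u) beyond t divided by S(t), truncated at the last follow-up time; a rough sketch on the ovarian data shipped with survival, with t0 = 300 chosen arbitrarily:

  library(survival)
  fit <- survfit(Surv(futime, fustat) ~ 1, data = ovarian)
  t0 <- 300
  tt <- c(0, fit$time); ss <- c(1, fit$surv)            # the KM curve as a step function
  S0 <- ss[max(which(tt <= t0))]                        # S(t0)
  grid  <- c(t0, tt[tt > t0])                           # integration knots beyond t0
  svals <- sapply(head(grid, -1), function(u) ss[max(which(tt <= u))])
  sum(diff(grid) * svals) / S0                          # restricted mean residual life at t0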
2007 Jan 05
1
Efficient multinom probs
Dear R-helpers, I need to compute probabilities of multinomial observations, e.g. by doing the following: y = sample(1:3, 15, 1); prob = matrix(runif(45), 15); prob = prob/rowSums(prob); diag(prob[, y]). However, my question is whether this is the most efficient way to do this. In the call prob[, y] a whole matrix is computed, which seems a bit of a waste. Is there maybe a vectorized version of dmultinom which
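Indexing with a two-column matrix picks out prob[i, y[i]] directly and avoids building the 15 x 15 intermediate; a minimal sketch:

  set.seed(1)
  y    <- sample(1:3, 15, replace = TRUE)
  prob <- matrix(runif(45), 15)
  prob <- prob / rowSums(prob)
  prob[cbind(seq_along(y), y)]   # same result as diag(prob[, y]), no full matrix built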
2010 Jan 21
1
superimpose histogram and fitted gamma pdf
Hi r-users, I am trying to draw a histogram (on the probability/density scale) and superimpose the gamma pdf. Using the code below I can get the plots, BUT the y scale for the density is out of scale. How do I change the y-axis scale to a maximum of 1? Another thing: how do I draw a smooth line rather than points? Note that my observed data is hume_pos and the fitted data is r...
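The usual fix is to draw the histogram on the density scale (freq = FALSE) so the bars match the pdf, and to overlay the fitted curve with curve(), which gives a smooth line rather than points; a sketch with simulated data standing in for hume_pos:

  library(MASS)
  set.seed(1)
  dat <- rgamma(200, shape = 2, rate = 0.5)          # stand-in for hume_pos
  hist(dat, freq = FALSE, main = "", xlab = "value") # density scale, not counts
  fit <- fitdistr(dat, "gamma")
  curve(dgamma(x, fit$estimate["shape"], fit$estimate["rate"]),
        add = TRUE, lwd = 2)                         # smooth fitted gamma pdf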
2009 Oct 29
1
correlated binary data and overall probability
..., each=5)), task=as.factor(rep(c(1:5),10))) [this format might be more appropriate:] corr2<-data.frame(ID=as.factor(rep(c(1:10),5)), tablet=as.factor(rep(c(1:5),each=10))) Now, I want to add one column 'outcome' for the binary response: * within each 'task', the probability of success (1) is fixed (say rbinom(x,1,0.7)) * within each 'ID', the outcomes are correlated (say 0.8) How can I generate this column 'outcome' with the given properties? Many thanks for hints or even code! Regards, Christian
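One common way to get within-ID correlation while keeping a per-task success rate is a shared random intercept on the logit scale; a rough sketch (the random-effect SD is arbitrary, the marginal rate is only approximately 0.7, and the correlation will not be exactly 0.8; packages such as bindata target an exchangeable correlation directly):

  set.seed(1)
  corr2 <- data.frame(ID   = factor(rep(1:10, 5)),
                      task = factor(rep(1:5, each = 10)))
  b_id <- rnorm(10, sd = 1.5)                       # shared per-ID effect induces within-ID correlation
  eta  <- qlogis(0.7) + b_id[as.integer(corr2$ID)]  # roughly 0.7 success rate per task
  corr2$outcome <- rbinom(nrow(corr2), 1, plogis(eta))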
2011 Jul 14
1
t-test on a data-frame.
Dear R-helpers, In a data frame I have 100 securities (monthly closing values from 1995 to present), for which I have to: 1. Sampling with replacement, make 50 samples of 10 securities each; each sample will hence be a data frame with 10 columns. 2. With uniform probability, mark a month from 2000 onwards as a "special" month, t = 0. 3. Subtract the market index from each column of each sample and then compute the residuals. 4. For each data frame of residuals, compute the statistic (eps_i0 - mean(eps_it)) / var(eps_it). Here i and t vary...
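A sketch of steps 1 and 2 only, with invented dimensions (rows are months, columns are securities; "from 2000 onwards" is assumed to start at row 61 of a hypothetical 240-month series):

  set.seed(1)
  prices <- as.data.frame(matrix(rnorm(240 * 100), nrow = 240))  # stand-in: 240 months x 100 securities
  samples <- lapply(1:50, function(i)
    prices[, sample(ncol(prices), 10, replace = TRUE), drop = FALSE])  # step 1: 50 samples of 10 columns
  t0 <- sample(61:nrow(prices), 1)                               # step 2: uniformly chosen "special" month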
2017 Aug 24
1
rmutil parameters for Pareto distribution
In https://en.wikipedia.org/wiki/Pareto_distribution, it is clear what the parameters of the Pareto distribution are: xmin, the scale parameter, and a, the shape parameter. I am using rmutil to generate random deviates from a Pareto distribution. Its documentation says the Pareto distribution has density f(y) = s (1 + y/(m (s-1)))^(-s-1)/(m (s-1)), where m is the mean parameter of the distribution and s is the dispersion. Through my experimentation with the rpareto function from the library, using m as the scale parameter xmin...
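Note the two densities are not the same family: the rmutil form quoted above has support y > 0 (a Pareto type II / Lomax shape with scale m(s-1) and shape s), whereas the Wikipedia Pareto I lives on y > xmin. A quick sanity check of the m = mean parameterisation, assuming the rpareto(n, m, s) argument names quoted from the rmutil documentation:

  library(rmutil)
  set.seed(1)
  y <- rpareto(1e5, m = 3, s = 2.5)   # m is the mean, s the dispersion, per the quoted density
  mean(y)                             # should come out close to 3
  min(y)                              # close to 0, not to an xmin-style lower bound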
2009 Jun 08
1
Interpreting R results for Bivariate Normal
... = 6.3] = 0.16^2 = 0.0256; m <- 5.3 + 0.6*(6.3 - 5.8) = 5.6. This is the expected value E[X+Y]. I see from the output that this would be correct because the probability of 5.6 = 0.5. To interpret E[X2|X1 = 6.3], I can't see it in the output, and I'm not sure how to find the conditional probability from the output. Any help would be greatly appreciated.
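With the conditional mean and variance quoted in the post, conditional probabilities for X2 given X1 = 6.3 follow directly from pnorm; a small sketch (the cutoff 5.5 is just an illustration):

  m_cond  <- 5.3 + 0.6 * (6.3 - 5.8)        # conditional mean E[X2 | X1 = 6.3] = 5.6
  sd_cond <- sqrt(0.0256)                   # conditional sd = 0.16
  pnorm(5.5, mean = m_cond, sd = sd_cond)   # P(X2 <= 5.5 | X1 = 6.3)
  pnorm(m_cond, m_cond, sd_cond)            # = 0.5, the "probability of 5.6" mentioned above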
2005 Oct 09
1
enter a survey design in survey2.9
Hi all, I hope Mr Thomas Lumley will read this message. I have data from a complex stratified survey. The population is divided into 12 regions, and each region consists of an urban area and a rural one (two regions have only an urban area). The stratification variable is a combination of region and area type (urban/rural). In rural areas, subdivisions are sampled with probabilities proportional to
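For reference, a stratified design of this kind is usually declared with svydesign from the survey package; a self-contained sketch with invented data and hypothetical variable names (psu, wt, income), leaving aside the PPS detail in the truncated sentence:

  library(survey)
  set.seed(1)
  mysurvey <- data.frame(region = factor(rep(1:12, each = 20)),
                         area   = rep(rep(c("urban", "rural"), each = 10), 12),
                         psu    = rep(1:120, each = 2),
                         wt     = runif(240, 50, 150),
                         income = rnorm(240, 1000, 200))   # invented analysis variable
  mysurvey$stratum <- interaction(mysurvey$region, mysurvey$area, drop = TRUE)
  des <- svydesign(ids = ~psu, strata = ~stratum, weights = ~wt, data = mysurvey)
  svymean(~income, des)   # example design-based estimate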
2011 Aug 31
6
Weights using Survreg
Dear R users, I have been trying to understand what the weights argument does in the estimation of the parameters when using the survreg function. I looked through the function's code but I am not sure whether I understood it. For example, if I include the Surv function in it: survreg(Surv(vector, status) ~ 1, weights = vector2, dist = "weibull") will it try to maximize the likelihood with
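For what it's worth, my reading is that weights in survreg are case weights, so each observation's log-likelihood contribution is multiplied by its weight; a minimal sketch on the lung data from survival (a constant weight of 2 reproduces the unweighted point estimates, which is one way to check that behaviour):

  library(survival)
  f1 <- survreg(Surv(time, status) ~ age, data = lung, dist = "weibull")
  f2 <- survreg(Surv(time, status) ~ age, data = lung, dist = "weibull",
                weights = rep(2, nrow(lung)))
  cbind(coef(f1), coef(f2))   # identical coefficients; standard errors differ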