similar to: bivariate empirical cdf

Displaying 20 results from an estimated 1000 matches similar to: "bivariate empirical cdf"

2005 Jun 23
1
mac osx, g95 package port problem
Hi all, I have a working package for Linux, including Fortran 95 code compiled with g95, that I need to port to OS X. The package works on Linux and seems to load on the Mac, but when I try to run a function that calls C or Fortran I'm told that the symbol is not loaded. I'm developing via a shell account on an OS X system; I don't have access to a desktop. The set up is: R
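
A quick diagnostic for this kind of failure, as a sketch (the package name mypkg and the symbol name cme are hypothetical stand-ins, not from the original post):

    library(mypkg)        # hypothetical package name
    getLoadedDLLs()       # is the package's shared library in the list?
    is.loaded("cme")      # hypothetical symbol name; TRUE if registered
    is.loaded("cme_")     # Fortran symbols are often name-mangled
                          # (trailing underscore, case changes), so try variants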
2006 Jun 30
1
Empirical CDF
Good day everyone, I want to assess the error when fitting a Gram-Charlier CDF to some data 'ws'; that is, I want to calculate: Err = |ecdf(ws) - GCh_ser(ws)|. The problem is that I cannot get the F(x) values from the ecdf: summary(ecdf(ws)) returns some of the x-axis values, but how do you get the F(x) values? Thank you for any help you can provide. Regards, Augusto
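
For reference, ecdf() returns a function, so the F(x) values come from simply calling it; a minimal sketch with placeholders for ws and GCh_ser, since the poster's versions are not shown:

    ws      <- rnorm(200)    # placeholder for the poster's data
    GCh_ser <- pnorm         # stand-in for the fitted Gram-Charlier CDF
    Fn  <- ecdf(ws)          # ecdf() returns a step *function*
    err <- abs(Fn(ws) - GCh_ser(ws))
    max(err)                 # maximum absolute deviation
    knots(Fn)                # the jump points, if needed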
2005 May 06
2
bivariate normal cdf
-- R Help List -- I am looking for a bivariate normal cdf routine in R. I have some Fortran routines for this, which appear to be based on 15-point quadrature. Any guidance/suggestions on making these into loadable R functions would be appreciated. Thanks, Dan =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= Daniel A. Powers, Ph.D. Department of Sociology University of Texas at Austin
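
For comparison, the mvtnorm package already provides this through pmvnorm(); a minimal sketch with illustrative values:

    library(mvtnorm)
    x <- 1.0; y <- 0.5; rho <- 0.3            # illustrative values
    sigma <- matrix(c(1, rho, rho, 1), 2, 2)  # standardized marginals
    pmvnorm(upper = c(x, y), sigma = sigma)   # P(X <= x, Y <= y)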
2013 Aug 26
0
Bivariate skew normal cdf; very slow
Dear all, I am calculating the bivariate skew normal cdf in the "sn" package using the "pmsn" function. Although it is quite convenient (thanks to Prof. Azzalini), it seems to be slow: for example, it takes about 1 minute to calculate 100k such cdf values. I am thinking of writing C++ code for this, although I am not very familiar with it. Any other ideas? Thanks in advance,
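
For timing such comparisons, a minimal sketch with made-up parameters (pmsn accepts a matrix of quantiles, one point per row, so the loop over points can at least be avoided at the R level):

    library(sn)
    xi    <- c(0, 0)                              # made-up parameters
    Omega <- matrix(c(1, 0.5, 0.5, 1), 2, 2)
    alpha <- c(2, -1)
    pts <- matrix(rnorm(2e4), ncol = 2)           # 10,000 evaluation points
    system.time(p <- pmsn(pts, xi, Omega, alpha)) # time the vectorized call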
2008 Apr 04
0
looking for a CDF of bivariate noncentral Chisquare
Hi, I would like to know if there is a program written in R to get the CDF (cumulative distribution function) of a bivariate non-central chi-square distribution. Hope someone will reply. Thank you, Rossita M Yunus yunus@usq.edu.au
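
Base R has no such CDF; absent a dedicated package, a Monte Carlo approximation is one fallback. A rough sketch for one common construction, where the two noncentral chi-square variables are built from correlated noncentral normals (all parameters illustrative):

    set.seed(1)
    n <- 1e5; df <- 3
    rho <- 0.5; d1 <- 1; d2 <- 2              # correlation and noncentrality shifts
    z1 <- matrix(rnorm(n * df), n, df)        # correlated standard-normal pairs
    z2 <- rho * z1 + sqrt(1 - rho^2) * matrix(rnorm(n * df), n, df)
    z1[, 1] <- z1[, 1] + d1                   # shift one component to make the
    z2[, 1] <- z2[, 1] + d2                   # chi-squares noncentral
    q1 <- rowSums(z1^2); q2 <- rowSums(z2^2)
    mean(q1 <= 5 & q2 <= 7)                   # estimated joint CDF at (5, 7)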
2002 May 01
3
bivariate normal cdf and rho
Suppose F(x, y; rho) is the cdf of a bivariate normal distribution, with standardized marginals and correlation parameter rho. For any fixed x and y, I wonder if F(x, y; rho) is a monotone increasing function of rho, i.e., whether there is a one-to-one map from rho to F(x, y; rho). I explored it using the function pmvnorm in package mvtnorm with different x and y. The plot suggests the statement may be true.
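
The conjecture is in fact true: by Plackett's identity, the partial derivative of F(x, y; rho) with respect to rho equals the bivariate normal density at (x, y), which is strictly positive, so F is strictly increasing in rho. A sketch of the empirical check with mvtnorm (x and y fixed at illustrative values):

    library(mvtnorm)
    x <- 0.5; y <- -0.2                       # illustrative fixed point
    rhos <- seq(-0.99, 0.99, length.out = 199)
    Fv <- sapply(rhos, function(r)
      pmvnorm(upper = c(x, y), corr = matrix(c(1, r, r, 1), 2, 2)))
    plot(rhos, Fv, type = "l", xlab = "rho", ylab = "F(x, y; rho)")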
2005 Jan 07
2
Getting empirical percentiles for data
Dear List, I have some discrete data and want to calculate the percentiles and the percentile ranks for each of the unique scores. I can calculate the percentiles with quantile(). I know that ecdf() can be used to calculate the empirical cumulative distribution; however, I don't know how to extract the cumulative probabilities for each unique element. The requirement is similar
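
A sketch of the usual idiom: evaluate the function returned by ecdf() at the unique scores (x here is placeholder data):

    x  <- c(2, 3, 3, 5, 7, 7, 7, 9)           # placeholder discrete data
    Fn <- ecdf(x)
    u  <- sort(unique(x))
    data.frame(score = u, cum_prob = Fn(u))   # cumulative probability per score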
2005 Sep 28
1
gfortran Makefile for cygwin
Hi all, I'm porting a package that I've worked on for OS X to Cygwin/Windows. This package requires a Makefile. My question is: what is the link command, and how can I find it out? Here is the OS X Makefile:

    RLIB_LOC=${R_HOME}
    F90_FILES=\
        class_data_frame.f90 \
        class_old_dbest.f90 \
        class_cm_data.f90 \
        class_cm.f90 \
        class_bgw.f90 \
        class_cm_mle.f90 \
        cme.f90
    FORTRAN_FILES=\
        dgletc.f
2005 Sep 28
3
gfortran Makefile for windows
Hi all, (Originally posted to r-help) I'm porting a package that I've worked on for OS X to Windows. The package is written in F95, so I need to compile it with gfortran and link it with gcc4. I've been trying to build an R with gcc4, without luck so far; if there is a binary of such a thing, info would be appreciated. This package requires a Makefile. My question is, how can I find
2006 Mar 15
2
difftime arguments
Hi, I just started using RGui.exe under Windows. I have a text file containing data arranged in columns and rows; each column has a consistent format, but the formats differ across columns. 3 of the columns are something like this: 1/12/2006 3:59:45 PM. I need to calculate the difference in seconds between 2 selected periods, using their row index. My solution: read the file into a data frame and
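
One way to do this, as a sketch (assuming the timestamps parse as month/day/year with a 12-hour clock; the column name ts is hypothetical, and %p is locale-dependent):

    df <- data.frame(ts = c("1/12/2006 3:59:45 PM",
                            "1/12/2006 4:01:05 PM"))  # stand-in rows
    t  <- strptime(df$ts, format = "%m/%d/%Y %I:%M:%S %p")
    difftime(t[2], t[1], units = "secs")      # seconds between rows 2 and 1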
2005 Jan 17
1
merge data frames taking mean/mode of multiple matches
Hello :) I have two data frames: one has properties taken on a piece-by-piece basis and the other has performance on a lot-by-lot basis. I wish to combine these two data frames, but the problem is that each lot has multiple pieces, and hence I need to take a mean of the properties of the multiple pieces and match it to the row having data about that lot. I was wondering if there is a simple command,
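
A sketch of the usual aggregate-then-merge idiom (all column names here — lot, prop, perf — are hypothetical):

    pieces <- data.frame(lot  = c(1, 1, 2, 2),            # piece-level data
                         prop = c(10, 12, 20, 24))
    lots   <- data.frame(lot = c(1, 2), perf = c(.9, .8)) # lot-level data
    avg <- aggregate(prop ~ lot, data = pieces, FUN = mean)
    merge(lots, avg, by = "lot")              # one row per lot, mean property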
2006 Sep 25
1
apply: new behaviour for factors in R-2.4.0
Dear R-core, The apply function produces different output due to the change to unlist() mentioned in the R News. Applying as.factor() (or factor()) in

    str(dat <- data.frame(x = 1:10, f1 = gl(2, 5, labels = c("A", "B"))))
    (d1 <- apply(dat, 2, as.factor))

now returns a character matrix, while in R-2.3.1 the same command resulted in an integer matrix that was
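
For the record, apply() always coerces its input to a matrix, so factors cannot survive the round trip in any version; a sketch of the idiom that does keep the columns as factors, using lapply() column-wise:

    dat <- data.frame(x = 1:10, f1 = gl(2, 5, labels = c("A", "B")))
    dat[] <- lapply(dat, as.factor)   # convert in place; stays a data.frame
    str(dat)                          # both columns are now factors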
2007 Jun 19
2
Preconditions for a variance analysis
Hello everybody, I'm currently using the anova() test on a small data.frame of 40 rows and 2 columns. It works well, but are there any preconditions for a valid variance analysis that I should consider? Thank you for your answer, Daniel
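
The usual preconditions are independent observations, roughly normal residuals, and homogeneous group variances. A sketch of quick diagnostics (the column names y and g are hypothetical stand-ins for the poster's response and grouping variable):

    dat <- data.frame(y = rnorm(40), g = gl(2, 20))  # stand-in 40 x 2 data
    fit <- aov(y ~ g, data = dat)
    shapiro.test(residuals(fit))      # normality of residuals
    bartlett.test(y ~ g, data = dat)  # homogeneity of variances
    plot(fit, which = 1:2)            # residuals-vs-fitted and Q-Q plots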
2006 Apr 25
1
summary.lme: argument "adjustSigma"
Dear R-list, I have a question concerning the argument "adjustSigma" in the function "lme" of the package "nlme". The help page says: "the residual standard error is multiplied by sqrt(nobs/(nobs - npar)), converting it to a REML-like estimate." Having a look into the code I found:

    stdFixed <- sqrt(diag(as.matrix(object$varFix)))
    if (object$method
2005 May 15
3
adjusted p-values with TukeyHSD?
Hi list, I have to ask you again, having tried and searched for several days... I want to do a TukeyHSD after an ANOVA, and want to get the adjusted p-values after the Tukey correction. I found the p.adjust function, but it can only correct for "holm", "hochberg", "bonferroni", but not "Tukey". Is it not possible to get adjusted p-values after
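
For reference, TukeyHSD() reports Tukey-adjusted p-values itself, in the "p adj" column of its output, so no separate p.adjust() step is needed; a minimal sketch with stand-in data:

    dat <- data.frame(y = rnorm(30), g = gl(3, 10))  # stand-in data, 3 groups
    fit <- aov(y ~ g, data = dat)
    TukeyHSD(fit)     # the "p adj" column holds the Tukey-adjusted p-values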
2011 Feb 17
0
[BioC] Make.cdf.package error
Hi everybody, I tried to analyze a custom Affymetrix 3'-biased array, so I wanted to make a cdf package. (My CDF file size is 1.12 GB.) I tried several methods, but the same error occurred. Method 1:

    > # Set the working directory
    > setwd("D:/Analyse R/Cel files")
    > # library to create cdf env
    > library("makecdfenv")
    > # Create cdf environment
    > pkgpath
2005 Jul 07
1
CDF plot
Dear all, I have defined a discrete distribution P(Y = x_i) = p_i, for which I want to plot the CDF. However, after searching R and the R archives I cannot find a function to draw it; I only find ones for the sample CDF instead of my theoretical one. I find that stepfun can do it for me; however, I want to plot several different CDFs with the same support x in one plot, and I cannot manage how to do it with
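
A sketch of overlaying several theoretical discrete CDFs on a common support via stepfun() (the probabilities here are made up):

    x  <- 1:5                                # common support
    p1 <- c(.1, .2, .3, .2, .2)              # made-up pmfs
    p2 <- c(.3, .1, .1, .2, .3)
    F1 <- stepfun(x, c(0, cumsum(p1)))       # theoretical CDF as a step function
    F2 <- stepfun(x, c(0, cumsum(p2)))
    plot(F1, verticals = FALSE, pch = 16, main = "Two discrete CDFs")
    plot(F2, verticals = FALSE, pch = 1, add = TRUE)  # overlay on the same axes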
2012 Jun 14
2
plot cdf
Good afternoon, I'm trying to create a cdf plot with the following code. It mostly works, but I have a small doubt, if you can help solve it: when I create the plot, the graph line appears but without points.

    # cdf
    x <- table(Dataset$Apcode)
    View(s)
    hist(s)
    plot(ecdf(x))    # <- the line in question

where x prints as:

    x
      1  37607
      2  26625
      3   5856
      4  25992
      5  30585
      6  16064
      7   9850
     ..    ...
    186     52
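
The points can be switched on through plot.ecdf()'s do.points argument; a sketch with stand-in data (note, as an aside, that ecdf() expects the raw data vector, so passing it the output of table() treats the counts themselves as the sample, which is probably not intended):

    x <- rpois(1000, 5)                      # stand-in for the raw data vector
    plot(ecdf(x), do.points = TRUE, verticals = TRUE, pch = 16,
         main = "ECDF with points shown")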
2006 Dec 04
0
How to calculate area between ECDF and CDF?
Hi all, I'm working with data to which I'm fitting three-parameter Weibull distributions (shape, scale & shift). The samples are small (between 10 and 80 observations), so I'm reluctant to check my fits using chi-square (I'd also like to avoid bin-choice issues). I'd use the Kolmogorov-Smirnov test, but of course this is invalid when the distribution
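
The area itself is straightforward to approximate once both curves can be evaluated, since ecdf() returns a callable step function; a sketch with a stand-in two-parameter Weibull fit (the shift parameter is omitted for brevity):

    set.seed(1)
    x  <- rweibull(40, shape = 2, scale = 3)             # stand-in data
    Fn <- ecdf(x)
    Ft <- function(t) pweibull(t, shape = 2, scale = 3)  # stand-in fitted CDF
    # Riemann sum of |Fn - Ft| over a fine grid spanning the data
    grid <- seq(min(x), max(x), length.out = 5000)
    sum(abs(Fn(grid) - Ft(grid))) * diff(range(x)) / length(grid)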
2003 Sep 09
2
Computing a CDF or many quantiles
Given f, a pdf over a finite interval, is there any existing R function that can efficiently tabulate the cumulative distribution function for f, or produce all N+1 quantiles of the form i/N? "Efficiently" here means better than doing repeated integrations for each point.
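
One standard approach is a single cumulative pass: tabulate f on a fine grid, accumulate a trapezoidal sum for the CDF, then invert it by interpolation to get all quantiles at once; a sketch using a Beta density on [0, 1] as the stand-in pdf:

    f <- function(x) dbeta(x, 2, 5)          # stand-in pdf on [0, 1]
    m <- 10000
    x <- seq(0, 1, length.out = m)
    dx  <- diff(x)
    cdf <- c(0, cumsum(dx * (f(x[-1]) + f(x[-m])) / 2))  # trapezoid rule, one pass
    cdf <- cdf / cdf[m]                      # normalize against roundoff
    Q   <- approxfun(cdf, x)                 # inverse CDF by linear interpolation
    N <- 100
    Q((0:N) / N)                             # all N+1 quantiles i/N at once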