Displaying 20 results from an estimated 2000 matches similar to: "Help on mapping memory"
2024 Apr 16
1
read.csv
Gene names being misinterpreted by spreadsheet software (read.csv is
no different) is a classic issue in bioinformatics. It seems like
every practitioner ends up encountering this issue in due time. E.g.
https://pubmed.ncbi.nlm.nih.gov/15214961/
https://genomebiology.biomedcentral.com/articles/10.1186/s13059-016-1044-7
https://www.nature.com/articles/d41586-021-02211-4
2019 Jun 21
0
Calculation of e^{z^2/2} for a normal deviate z
Hi Peter, Rui, Christophe and Gabriel,
Thanks for your inputs -- the use of qnorm(., log=TRUE) is a good point, in line with pnorm(), with which we compute log(p) as
log(2) + pnorm(-abs(z), lower.tail = TRUE, log.p = TRUE)
which does really well for large z compared to Rmpfr. Maybe I am asking too much, since
z <- 20000
2019 May 31
2
use of buffers in sprintf and snprintf
No, that will make it even worse, since you'll be declaring a lot more memory than you actually have.
The real problem is that you're ignoring the truncation, so you probably want to use something like
if (snprintf(tempname, sizeof(tempname), "%s.%d", of1name, j) >= sizeof(tempname)) Rf_error("file name is too long");
BTW: most OSes have path limits that
2019 May 30
2
use of buffers in sprintf and snprintf
Hi again,
I realised it is useful to replicate the warnings locally without relying on the CRAN automatic checks; instead of R(-devel) CMD check --as-cran package_version.tar.gz one can use
R CMD check --configure-args=""
and in my case the WARNINGs were initially given with https://www.stats.ox.ac.uk/pub/bdr/gcc9/README.txt and those specifications might as well be used in --configure-args
2019 May 31
0
use of buffers in sprintf and snprintf
On Thu, May 30, 2019 at 7:21 PM Simon Urbanek
<simon.urbanek at r-project.org> wrote:
>
> No, that will make it even worse, since you'll be declaring a lot more memory than you actually have.
>
> The real problem is that you're ignoring the truncation, so you probably want to use something like
>
> if (snprintf(tempname, sizeof(tempname), "%s.%d", of1name,
2019 Jun 23
0
Calculation of e^{z^2/2} for a normal deviate z
include/Rmath.h declares a set of 'logspace' functions for use at the C
level. I don't think there are core R functions that call them.
/* Compute the log of a sum or difference from logs of terms, i.e.,
*
* log (exp (logx) + exp (logy))
* or log (exp (logx) - exp (logy))
*
* without causing overflows or throwing away too much accuracy:
*/
double Rf_logspace_add(double
2024 Apr 16
1
read.csv
Hum...
This boils down to
> as.numeric("1.23e")
[1] 1.23
> as.numeric("1.23e-")
[1] 1.23
> as.numeric("1.23e+")
[1] 1.23
which in turn comes from this code in src/main/util.c (function R_strtod)
if (*p == 'e' || *p == 'E') {
int expsign = 1;
switch(*++p) {
case '-': expsign = -1;
case
2019 Jun 24
0
Calculation of e^{z^2/2} for a normal deviate z
Hi All,
Thanks for all your comments, which allow me to appreciate more of these in Python and R.
I just came across the matrixStats package,
## EXAMPLE #1
lx <- c(1000.01, 1000.02)
y0 <- log(sum(exp(lx)))
print(y0) ## Inf
y1 <- logSumExp(lx)
print(y1) ## 1000.708
and
> ly <- lx*100000
> ly
[1] 100001000 100002000
> y1 <- logSumExp(ly)
> print(y1)
[1] 100002000
2019 Jun 23
2
Calculation of e^{z^2/2} for a normal deviate z
I agree with many of the sentiments about the wisdom of computing very
small p-values (although the example below may win some kind of a prize:
I've seen people talking about p-values of the order of 10^(-2000), but
never 10^(-(10^8))!). That said, there are several tricks for
getting more reasonable sums of very small probabilities. The first is
to scale the p-values by dividing the
2019 Jun 21
4
Calculation of e^{z^2/2} for a normal deviate z
You may want to look into using the log option to qnorm
e.g., in round figures:
> log(1e-300)
[1] -690.7755
> qnorm(-691, log=TRUE)
[1] -37.05315
> exp(37^2/2)
[1] 1.881797e+297
> exp(-37^2/2)
[1] 5.314068e-298
Notice that floating point representation cuts out at 1e+/-308 or so. If you want to go outside that range, you may need explicit manipulation of the log values. qnorm()
2019 Jun 24
2
Calculation of e^{z^2/2} for a normal deviate z
>>>>> William Dunlap via R-devel
>>>>> on Sun, 23 Jun 2019 10:34:47 -0700 writes:
> include/Rmath.h declares a set of 'logspace' functions for use at the C
> level. I don't think there are core R functions that call
2003 May 04
1
port of Pan to R
I'm looking for a port of Schafer's PAN module for multiple imputation of
nested data.
It is written in S-Plus, and I would like to use it in R.
Any pointers most appreciated.
Best wishes,
Paul von Hippel
2020 Sep 27
2
Using CentOS 7 to attempt recovery of failed disk
In article <E02FA554-9D6D-4E7D-8A78-5FBDE1DE939D at kicp.uchicago.edu>,
Valeri Galtsev <galtsev at kicp.uchicago.edu> wrote:
>
>
> > On Sep 26, 2020, at 8:05 AM, Jerry Geis <jerry.geis at gmail.com> wrote:
> >
> > I have a disk that is flagging errors, attempting to rescue the data.
> >
> > I tried dd first - it gets about 117G of 320G disk
2001 Jul 09
2
Need advice for application port
OK, I am definitely new to porting from Windows to Linux. You may have
already seen my previous post about Winelib; ignore it for the time
being. We have an app that dates back several developers, none of whom
are still available, and several years. We have had quite a
few requests for a version that runs on Linux, hence the work to
get the app ported. It is not a large app by most
2020 Sep 26
0
Using CentOS 7 to attempt recovery of failed disk
> On Sep 26, 2020, at 8:05 AM, Jerry Geis <jerry.geis at gmail.com> wrote:
>
> I have a disk that is flagging errors, attempting to rescue the data.
>
> I tried dd first - it gets about 117G of 320G disk and stops incrementing
> the save image any more.
Did you try
dd conv=noerror ?
This flag makes dd not stop on input errors. Whatever is irrecoverable is irrecoverable,
2020 Sep 27
0
Using CentOS 7 to attempt recovery of failed disk
@tonymountifield
Does this still hold true?
https://superuser.com/a/1075837
On Sun, Sep 27, 2020 at 7:21 AM Tony Mountifield <tony at softins.co.uk> wrote:
> In article <E02FA554-9D6D-4E7D-8A78-5FBDE1DE939D at kicp.uchicago.edu>,
> Valeri Galtsev <galtsev at kicp.uchicago.edu> wrote:
> >
> >
> > > On Sep 26, 2020, at 8:05 AM, Jerry Geis
2019 Jun 21
4
Calculation of e^{z^2/2} for a normal deviate z
Hello,
Well, try it:
p <- .Machine$double.eps^seq(0.5, 1, by = 0.05)
z <- qnorm(p/2)
pnorm(z)
# [1] 7.450581e-09 1.228888e-09 2.026908e-10 3.343152e-11 5.514145e-12
# [6] 9.094947e-13 1.500107e-13 2.474254e-14 4.080996e-15 6.731134e-16
#[11] 1.110223e-16
p/2
# [1] 7.450581e-09 1.228888e-09 2.026908e-10 3.343152e-11 5.514145e-12
# [6] 9.094947e-13 1.500107e-13 2.474254e-14 4.080996e-15
2005 Jun 28
1
sample R code for multiple imputation
Hi,
I have a big dataset which has many missing values, and I want to implement
multiple imputation via Markov chain Monte Carlo, following J. Schafer's
"Analysis of Incomplete Multivariate Data". I don't know where to begin
and am looking for sample R code that implements multiple imputation
with EM, MCMC, etc.
Any help / suggestions will be greatly appreciated.
David
2020 Mar 16
1
live storage migration using blockcopy
Hello,
I'm seeking a solution for live storage migration using blockcopy.
Previously, "virsh undefine" was required before blockcopy.
https://www.redhat.com/archives/libvirt-users/2015-October/msg00027.html
QEMU has "block-dirty-bitmap-*" operations now; are there steps for
live storage migration using blockcopy without undefine?
By the way, what's the purpose
2024 Apr 16
5
read.csv
Dear R-developers,
I came across a somewhat unexpected behaviour of read.csv() which is trivial but worthwhile to note -- my data involves a protein named "1433E", but to save space I dropped the quotes so it becomes,
Gene,SNP,prot,log10p
YWHAE,13:62129097_C_T,1433E,7.35
YWHAE,4:72617557_T_TA,1433E,7.73
Both read.csv() and readr::read_csv() consider the prot(ein) name as (possibly confused by