similar to: Making R CMD nicer

Displaying 20 results from an estimated 20000 matches similar to: "Making R CMD nicer"

2019 Jun 30
5
Making R CMD nicer
For the record, this is Linux R-devel:

root at 4bef68c16864:~# R CMD
/opt/R-devel/lib/R/bin/Rcmd: 60: shift: can't shift that many
root at 4bef68c16864:~# R CMD -h
/opt/R-devel/lib/R/bin/Rcmd: 62: exec: -h: not found
root at 4bef68c16864:~# R CMD --help
/opt/R-devel/lib/R/bin/Rcmd: 62: exec: --help: not found

This is R-release on macOS:

? R CMD
2019 Jul 01
0
Making R CMD nicer
If you write a lot of R code to run as command line scripts then look at Dirk E's "littler":

$ r --help
Usage: r [options] [-|file]

Launch GNU R to execute the R commands supplied in the specified file, or
from stdin if '-' is used. Suitable for so-called shebang '#!/'-line scripts.

Options:
  -h, --help       Give this help list
      --usage      Give a
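A minimal sketch of such a shebang script, assuming littler is installed and exposes the command-line arguments as `argv` (NULL when none are given), as its documentation describes:

    #!/usr/bin/env r
    # print each argument passed on the command line
    # ('argv' is provided by littler; this is a sketch, not the littler docs)
    args <- if (is.null(argv)) character(0) else argv
    cat("got", length(args), "argument(s)\n")
    for (a in args) cat(" -", a, "\n")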
2019 Jul 01
0
Making R CMD nicer
In that case, I was wrong. And I must apologize...
In saying that, good to see Windows outperforming Linux on the command line...

On Mon, Jul 1, 2019 at 11:30 AM Gábor Csárdi <csardi.gabor at gmail.com> wrote:
>
> For the record, this is Linux R-devel:
>
> root at 4bef68c16864:~# R CMD
> /opt/R-devel/lib/R/bin/Rcmd: 60: shift: can't shift that many
> root at
2020 Oct 09
3
2 D density plot interpretation and manipulating the data
You could assign a density value to each point.
Maybe you've done that already...?

Then trim the lowest n (number of) data points.
Or trim the lowest p (proportion of) data points.

e.g.
Remove the data points with the 20 lowest density values.
Or remove the data points with the lowest 5% of density values.

I'll let you decide whether that is a good idea or a bad idea.
And if it's a
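A minimal sketch of that trimming step in base R, assuming the data frame already carries a per-point density column (the name SNP$density is borrowed from later in this thread):

    # keep only points above the 5% density quantile
    keep    <- SNP$density > quantile(SNP$density, 0.05)
    trimmed <- SNP[keep, ]

    # or: drop the 20 points with the smallest density values
    trimmed2 <- SNP[rank(SNP$density, ties.method = "first") > 20, ]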
2020 Jan 14
4
as-cran issue ==> set _R_CHECK_LENGTH_1_* settings!
> On Jan 14, 2020, at 3:29 PM, Abby Spurdle <spurdle.a at gmail.com> wrote:
>
>> I do want to entice people to have a long look beyond closed
>> source OS into the world of Free Software where not only R is
>> FOSS (Free and Open Source Software) but (all / almost) all the
>> tools you use are of that same spirit.
>
> And while everyone is talking about
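For the settings named in the subject line, a hedged sketch of switching them on from R before running R CMD check (they can equally go in ~/.Renviron; see the 'R Internals' manual for the full set of accepted values):

    # enable the length-1 condition/logic checks used under --as-cran
    Sys.setenv("_R_CHECK_LENGTH_1_CONDITION_" = "true",
               "_R_CHECK_LENGTH_1_LOGIC2_"    = "true")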
2020 Jun 07
1
[External] Re: use of the tcltk package crashes R 4.0.1 for Windows
sorry, release "versions"

On Mon, Jun 8, 2020 at 11:17 AM Abby Spurdle <spurdle.a at gmail.com> wrote:
>
> On Mon, Jun 8, 2020 at 4:09 AM Fox, John <jfox at mcmaster.ca> wrote:
> > Does it make sense to withdraw the Windows R 4.0.1 binary until the issue is resolved?
>
> Yes, it does.
> All the release reversions should be removed.
2020 Oct 09
2
2 D density plot interpretation and manipulating the data
> My understanding is that this represents bivariate normal > approximation of the data which uses the kernel density function to > test for inclusion within a level set. (please correct me) You can fit a bivariate normal distribution by computing five parameters. Two means, two standard deviations (or two variances) and one correlation (or covariance) coefficient. The bivariate normal
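A minimal sketch of computing those five parameters in R, using column names borrowed from later in the thread (SNP$mean and SNP$var are assumptions, not part of this message):

    x <- SNP$mean; y <- SNP$var
    mu    <- c(mean(x), mean(y))   # two means
    sdev  <- c(sd(x), sd(y))       # two standard deviations
    rho   <- cor(x, y)             # one correlation coefficient
    # equivalently, the 2 x 2 covariance matrix
    Sigma <- cov(cbind(x, y))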
2020 May 18
3
dbinom link
In principle a good idea, but I'm not sure the whereabouts of Catherine Loader are known at this point. Last peeps from her on the net seem to be about a decade old.

.pd

> On 18 May 2020, at 10:31 , Abby Spurdle <spurdle.a at gmail.com> wrote:
>
> This has come up before.
>
> Here's the last time:
> https://stat.ethz.ch/pipermail/r-devel/2019-March/077478.html
2020 Oct 09
0
2 D density plot interpretation and manipulating the data
Hi Abby,
Thanks for getting back to me, yes I believe I did that by doing this:

SNP$density <- get_density(SNP$mean, SNP$var)

> summary(SNP$density)
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
      0     383     696     738    1170    1789

where get_density() is a function from here:
https://slowkow.com/notes/ggplot2-color-by-density/

and keep only entries with density > 400
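A hedged sketch of a get_density() along the lines of the linked post (which builds on MASS::kde2d; the exact arguments used in the post may differ), followed by the density > 400 filter mentioned above:

    get_density <- function(x, y, n = 100) {
      d  <- MASS::kde2d(x, y, n = n)    # 2-D kernel density estimate on a grid
      ix <- findInterval(x, d$x)        # grid cell of each point
      iy <- findInterval(y, d$y)
      d$z[cbind(ix, iy)]                # density at each point's cell
    }

    SNP$density <- get_density(SNP$mean, SNP$var)
    dense <- SNP[SNP$density > 400, ]   # keep only entries with density > 400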
2020 Oct 09
2
2 D density plot interpretation and manipulating the data
I recommend that you consult with a local statistical expert. Much of what you say (outliers?!?) seems to make little sense, and your statistical knowledge seems minimal. Perhaps more to the point, none of your questions can be properly answered without subject matter context, which this list is not designed to provide. That's why I believe you need local expertise. Bert Gunter "The
2019 Jul 13
2
head.matrix can return 1000s of columns -- limit to n or add new argument?
Hi Michael and Abby,
So one thing that could happen that would be backwards compatible (with the exception of something that was an error no longer being an error) is that head and tail could take vectors of length length(dim(x)), rather than integers of length 1, for n, with the default n = 6 being equivalent to n = c(6, dim(x)[2], <...>, dim(x)[k]), at least for the deprecation cycle, if not
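A minimal sketch of the idea with plain indexing (recent versions of R added vector-n support to head()/tail() along these lines, but the indexing below makes the intent explicit):

    m <- matrix(1:200, nrow = 10)

    # today's default: head(m, 6) keeps 6 rows and *all* 20 columns
    # proposed: n = c(6, 4) would keep 6 rows and 4 columns, i.e.
    m[seq_len(min(6, nrow(m))), seq_len(min(4, ncol(m))), drop = FALSE]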
2019 Nov 15
2
class(<matrix>) |--> c("matrix", "array") [was "head.matrix ..."]
> > And indeed I think you are right on spot and this would mean
> > that indeed the implicit class
> > "matrix" should rather become c("matrix", "array").
>
> I've made up my mind (and not been contradicted by my fellow R
> corers) to try go there for R 4.0.0 next April.

I'm not enthusiastic about matrices extending arrays. If a
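For reference, the change under discussion can be checked directly in an R session (R 4.0.0 and later return both classes; earlier versions returned just "matrix"):

    m <- matrix(1:4, nrow = 2)
    class(m)
    # "matrix" on R < 4.0.0;  c("matrix", "array") on R >= 4.0.0
    inherits(m, "array")
    # TRUE on R >= 4.0.0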
2019 Sep 16
5
head.matrix can return 1000s of columns -- limit to n or add new argument?
>>>>> Michael Chirico
>>>>>     on Sun, 15 Sep 2019 20:52:34 +0800 writes:

> Finally read in detail your response Gabe. Looks great,
> and I agree it's quite intuitive, as well as agree against
> non-recycling.
> Once the length(n) == length(dim(x)) behavior is enabled,
> I don't think there's any need/desire to have
2020 Jan 14
5
as-cran issue ==> set _R_CHECK_LENGTH_1_* settings!
>>>>> Avraham Adler
>>>>>     on Mon, 13 Jan 2020 14:38:12 -0500 writes:

> Those of us stuck on Windows but who attempt to develop properly are
> wounded to the quick, sir!
> :)
> Avi

Indeed, you had a ' :) ' , but others have perceived this as an insult.
I'm really really sorry for that and do want to apologize to all of
2020 Oct 09
0
2 D density plot interpretation and manipulating the data
Hi Abby, thank you for getting back to me and for this useful information. I'm trying to detect the outliers in my distribution based on mean and variance. Can I see that from the plot I provided? Would outliers be outside of the ellipses? If so, how do I extract those from my data frame, based on which parameter? So I am trying to connect outliers based on what the plot is showing: s <-
2020 Oct 23
2
3d plot of earth with cut
Dear All, Thanks a lot for the useful help again. I managed to get it done up to a point where I think I just need to apply some smoothing/interpolation to get denser points, to make it nice. Basically, I started from Duncan's script to visualize and make the clipping along a plane at a slice. Then I map my data points' values to a color palette and just plot them as points on this plane.
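A rough sketch of that "map values to a palette, plot as points, clip at a plane" step using rgl (the data, palette, and clipping plane below are stand-ins, not the script referred to above):

    library(rgl)

    pts  <- matrix(rnorm(300), ncol = 3)     # stand-in 3-D points
    vals <- rowSums(pts^2)                   # stand-in value per point

    # map values to a colour palette
    pal  <- colorRampPalette(c("blue", "yellow", "red"))(100)
    cols <- pal[as.integer(cut(vals, breaks = 100))]

    plot3d(pts, col = cols, size = 5)
    # keep only one side of the plane z = 0
    # (see ?clipplanes3d for the sign convention of a*x + b*y + c*z + d)
    clipplanes3d(a = 0, b = 0, c = 1, d = 0)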
2019 Jul 02
3
Format printing inside a matrix
Hi everyone, I am not sure if there is an existing solution to this, but I want my S4 objects inside a list matrix to show correctly. Currently it shows as:

R> str(lst[[1]])
Formal class 'Basic' [package "symengine"] with 1 slot
  ..@ ptr:<externalptr>

R> matrix(lst, 2)
     [,1] [,2] [,3] [,4] [,5]
[1,] ?    ?    ?    ?    ?
[2,] ?    ?    ?    ?    ?

Is it
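One workaround sketch: format each element to a character string before building the matrix, so print() has something to show. The 'Thing' class below is a hypothetical stand-in for symengine's 'Basic':

    setClass("Thing", representation(label = "character"))
    lst <- replicate(10, new("Thing", label = "x + y"), simplify = FALSE)

    # build a character matrix for display instead of printing the list matrix
    disp <- matrix(vapply(lst,
                          function(e) paste0("<", class(e), ": ", e@label, ">"),
                          character(1)),
                   nrow = 2)
    print(disp, quote = FALSE)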
2019 Jun 14
2
Halfway through writing an "IDE" with support for R; Proof of concept, and request for suggestions.
On Sat, 15 Jun 2019 at 01:24, Abby Spurdle <spurdle.a at gmail.com> wrote:
>
> None of the tools that I've looked at satisfy these constraints.
> But if you know of some, I'd like to know... And I would consider contributing...

What about Atom, VS Code and the like? Or what about taking a project that meets most of the constraints and pushing to cover all of them, or even
2020 Jan 01
3
standard naming for components of R data structures
I need to write some documentation: I'm looking for a standard, consistent way of referring to the components and attributes of R data structures. Googling and Stackoverflow yield a variety of github sites that do not seem to be particularly authoritative. I was hoping to find a BNF/ABNF grammar for R. I've looked at the output of bison -v ./R-3.6.2/src/main/gram.y but it does not
2019 May 17
5
Give update.formula() an option not to simplify or reorder the result -- request for comments
Dear All, Martin Maechler has asked me to send this to R-devel for discussion after I submitted it as an enhancement request ( https://bugs.r-project.org/bugzilla/show_bug.cgi?id=17563). At this time, the update.formula() method always performs a number of transformations on the results, eliminating redundant variables and reordering interactions to be after the main effects. This is not always
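A small illustration of the transformations being described, using the stock update.formula() (the commented lines show the kind of result it produces for these inputs):

    # redundant terms are eliminated ...
    update(y ~ x + x, . ~ .)
    # y ~ x

    # ... and interactions are reordered to come after the main effects
    update(y ~ a:b + c, . ~ .)
    # y ~ c + a:b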