similar to: boxplot including null info from dataframe, not with SQLite dataframe

Displaying 20 results from an estimated 400 matches similar to: "boxplot including null info from dataframe, not with SQLite dataframe"

2008 Sep 11
2
database table merging tips with R
I have not devoted time to setting up ROracle since binaries are not available and it seems to require some effort to compile (see http://cran.r-project.org/web/packages/ROracle/index.html). On the other hand, RODBC worked more or less magically once I set up the data sources. What is your success using ROracle, and why would it be preferable to RODBC? -Avram On Thursday, September 11,
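For reference, a minimal RODBC sketch of the workflow described above; the DSN name "my_dsn", the credentials and the table are hypothetical placeholders, not the poster's actual setup:

library(RODBC)

ch  <- odbcConnect("my_dsn", uid = "user", pwd = "secret")       # open the ODBC connection
emp <- sqlQuery(ch, "SELECT * FROM employees WHERE dept = 'R'")  # query result arrives as a data frame
str(emp)
odbcClose(ch)                                                    # close the channel when done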
2011 Nov 23
2
zeros to NA's - faster
Hello, Is there a faster way to do this? Basically, I'd like to set all values in all_data to NA if there are no 1's in the same column of the other matrix, iu. Put another way, I want to replace values in the all_data columns with NA if the values in the same column of iu are all 0. This is pretty slow for me, but works: all_data = matrix(c(1:9),3,3) colnames(all_data) =
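One vectorized way to do what is described above, shown as a sketch under the assumption that iu is a 0/1 matrix with the same column order as all_data:

all_data <- matrix(1:9, 3, 3)
iu <- matrix(c(1, 0, 0,    # column 1 contains a 1 -> keep
               0, 0, 0,    # column 2 contains no 1 -> blank out
               1, 1, 0),   # column 3 contains a 1 -> keep
             nrow = 3)
no_ones <- colSums(iu == 1) == 0   # TRUE for columns of iu with no 1s
all_data[, no_ones] <- NA          # set those columns of all_data to NA in one step
all_data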
2024 Apr 16
5
read.csv
Dear R-developers, I came across a somewhat unexpected behaviour of read.csv() which is trivial but worthwhile to note -- my data involves a protein named "1433E" but to save space I drop the quotes so it becomes, Gene,SNP,prot,log10p YWHAE,13:62129097_C_T,1433E,7.35 YWHAE,4:72617557_T_TA,1433E,7.73 Both read.csv() and readr::read_csv() consider the prot(ein) name as (possibly confused by
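On R versions showing the behaviour reported above, one workaround is to declare the column type explicitly so "1433E" is not parsed as a number; a small sketch using the rows from the post:

txt <- "Gene,SNP,prot,log10p
YWHAE,13:62129097_C_T,1433E,7.35
YWHAE,4:72617557_T_TA,1433E,7.73"

str(read.csv(text = txt))                                      # prot may come back numeric
str(read.csv(text = txt, colClasses = c(prot = "character")))  # prot stays "1433E"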
2024 Apr 16
1
read.csv
At 11:46 on 16/04/2024, jing hua zhao wrote: > Dear R-developers, > > I came to a somewhat unexpected behaviour of read.csv() which is trivial but worthwhile to note -- my data involves a protein named "1433E" but to save space I drop the quote so it becomes, > > Gene,SNP,prot,log10p > YWHAE,13:62129097_C_T,1433E,7.35 > YWHAE,4:72617557_T_TA,1433E,7.73 >
2024 Apr 16
1
read.csv
Gene names being misinterpreted by spreadsheet software (read.csv is no different) is a classic issue in bioinformatics. It seems like every practitioner runs into it sooner or later. E.g. https://pubmed.ncbi.nlm.nih.gov/15214961/ https://genomebiology.biomedcentral.com/articles/10.1186/s13059-016-1044-7 https://www.nature.com/articles/d41586-021-02211-4
2018 Jan 15
0
sum multiple csv files
Your message seems unclear, and as evidence of that the respondents are giving various answers. You should provide a small sample of input and output data, as it would look in R, to avoid this kind of thrashing about. See [1][2][3] for guidance. Note that you also really need to figure out how to make sure your email program sends plain text, because HTML formatting WILL be stripped by the mailing list
2011 Aug 29
1
How to order based on the second two columns?
Hello All, I have a data frame consisting of 4 columns (id1, id2, y, pred), where pred is the predicted value from the glm function; my data frame is called "all". "data" is another data frame that holds all of the data. I want to put together some important columns from my original data frame (data) into the other data frame (all) as follows, and I would like them to be sorted
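A minimal sketch of the sorting step, using order() on the third and fourth columns of a small made-up data frame with the structure described (the values are invented for illustration):

all <- data.frame(id1  = c(2, 1, 2, 1),
                  id2  = c(1, 2, 2, 1),
                  y    = c(0.9, 0.2, 0.9, 0.5),
                  pred = c(0.7, 0.3, 0.1, 0.5))
all_sorted <- all[order(all$y, all$pred), ]   # sort by y, then break ties by pred
all_sorted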
2018 Jan 15
4
sum multiple csv files
Hi, I am pretty new to R and I would appreciate very much your help to solve my problem. I have 40 csv files that have the same structure, and I want to merge them into a single data frame. I have already loaded and combined all the csv files into a large list, and I created two filenames <- list.files('data',full.names=TRUE) All_data <- lapply(filenames,function(i){ ### read csv
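One common pattern for the merge described above, sketched under the assumption that the files live in a directory called 'data' (as in the post) and all share the same columns:

filenames <- list.files("data", pattern = "\\.csv$", full.names = TRUE)
all_list  <- lapply(filenames, read.csv)   # read each file into a data frame
All_data  <- do.call(rbind, all_list)      # stack them row-wise into one data frame
str(All_data)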
2008 Sep 02
5
Appending a record to a table
Hi I'm not too sure how best to explain this but here goes! I am trying to write an appointment system. I have, through example, just about got the dynamics correct. I have even tried to play with some table joins (and excuse me if I've used the incorrect terminology). But no matter what I try I can't seem to get the following code to work. I have a cart filled with Treatment
2010 Jul 16
2
invalid factor level, NAs generated
I've seen a few threads about this, but none that seem to answer my problem. I have a list of .txt files in a directory that I am reading into R and row binding together. I am using the following code to do so: # Directory where files are found my.txt.file.directory <- "C:/Jared/Data/Kenya/Wildebeest/Tracking_Data" names.of.txt.files <-
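A frequent cause of "invalid factor level, NA generated" when row-binding files is character columns being read as factors with differing levels; a sketch of one way around it (the pattern and header settings below are placeholder assumptions, not the poster's actual files):

txt.files <- list.files("C:/Jared/Data/Kenya/Wildebeest/Tracking_Data",
                        pattern = "\\.txt$", full.names = TRUE)
pieces <- lapply(txt.files, read.table,
                 header = TRUE, stringsAsFactors = FALSE)  # keep text as character
tracks <- do.call(rbind, pieces)   # no factor levels left to clash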
2008 Dec 08
2
Permutation exact test to compare related series
Hi all, is there a way with R to perform an exact permutation test to replace the Wilcoxon test to compare paired series, and/or to perform pairwise multiple comparisons for related series after a Friedman test? Thanks Gilles
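A minimal sketch of an exact sign-flip permutation test for paired data, which is one way to get at what is being asked: under the null each paired difference is equally likely to be positive or negative, so all 2^n sign patterns can be enumerated for small n.

paired_perm_test <- function(x, y) {
  d     <- x - y
  n     <- length(d)
  obs   <- mean(d)
  signs <- as.matrix(expand.grid(rep(list(c(-1, 1)), n)))  # all 2^n sign patterns
  perms <- signs %*% d / n                                 # mean difference under each pattern
  mean(abs(perms) >= abs(obs))                             # two-sided exact p-value
}

set.seed(1)
x <- rnorm(10); y <- x + rnorm(10, mean = 0.5)
paired_perm_test(x, y)

For pairwise comparisons after a Friedman test the same idea can be applied to each pair of series, with a multiplicity correction such as p.adjust().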
2016 Apr 04
0
Does this code execute the bagging correctly?!
Hello the code : set.seed(10) y<-c(1:1000) x1<-c(1:1000)*runif(1000,min=0,max=2) x2<-c(1:1000)*runif(1000,min=0,max=2) x3<-c(1:1000)*runif(1000,min=0,max=2) lm_fit<-lm(y~x1+x2+x3) summary(lm_fit) set.seed(10) all_data<-data.frame(y,x1,x2,x3) positions <- sample(nrow(all_data),size=floor((nrow(all_data)/4)*3)) training<- all_data[positions,] testing<-
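The snippet above only performs a single 75/25 train/test split; a hedged sketch of what a bagged version of the same linear model might look like (the value of B, and the reuse of 'training' and 'testing' from that split, are assumptions for illustration):

set.seed(10)
B <- 100
preds <- replicate(B, {
  boot <- training[sample(nrow(training), replace = TRUE), ]  # bootstrap resample of the rows
  fit  <- lm(y ~ x1 + x2 + x3, data = boot)                   # refit on the resample
  predict(fit, newdata = testing)
})
bagged_pred <- rowMeans(preds)              # average predictions over the B fits
sqrt(mean((testing$y - bagged_pred)^2))     # test RMSE of the bagged predictor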
2007 Jul 10
0
Plot dies with memory not mapped (segfault) (PR#9785)
Full_Name: Clay B Version: 2.5.0 (2007-04-23) OS: Solaris Nevada Build 55b Submission from: (NULL) (65.101.229.198) I find that running this script causes R to segfault reliably. However, running for just one system at a time (modifying the for loop that updates iter) works. The system is a Sun W2100z with 12 GB of RAM, and R segfaults using only around 360 MB of
2024 Apr 16
1
read.csv
Hum... This boils down to > as.numeric("1.23e") [1] 1.23 > as.numeric("1.23e-") [1] 1.23 > as.numeric("1.23e+") [1] 1.23 which in turn comes from this code in src/main/util.c (function R_strtod) if (*p == 'e' || *p == 'E') { int expsign = 1; switch(*++p) { case '-': expsign = -1; case
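A small sketch of the same quirk at the R level, plus a stricter regular-expression check one can run before converting (the as.numeric() behaviour is as reported in the thread and may differ on later, patched versions of R):

as.numeric("1.23e")   # reported to return 1.23

looks_numeric <- function(x)
  grepl("^[+-]?[0-9]*\\.?[0-9]+([eE][+-]?[0-9]+)?$", x)

looks_numeric(c("7.35", "1.23e-2", "1433E"))   # TRUE TRUE FALSE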
2024 Feb 29
1
R 4.3.3 is released
The build system rolled up R-4.3.3.tar.gz and .xz (codename "Angel Food Cake") this morning. This is a minor update, intended as the wrap-up release for the 4.3.x series. This also marks the 6th anniversary of R-1.0.0. (2000-02-29) The list below details the changes in this release. You can get the source code from https://cran.r-project.org/src/base/R-4/R-4.3.3.tar.gz
2019 Nov 30
0
Re: [PATCH nbdkit 2/3] filters: stats: Measure time per operation
On Sat, Nov 30, 2019 at 9:13 AM Richard W.M. Jones <rjones@redhat.com> wrote: > > On Sat, Nov 30, 2019 at 02:17:06AM +0200, Nir Soffer wrote: > > Previously we measured the total time and used it to calculate the rate > > of different operations. This is incorrect and hides the real > > throughput. A more useful way is to measure the time we spent in each > >
2009 Dec 11
3
Please help with a basic function
Hello, I am learning how to use functions, but I'm running into a roadblock. I would like my function to do two things: 1) convert an object to a dataframe, and 2) subset the dataframe. Both of these commands work fine outside the function, but I would like to wrap them in a function so I can apply the code iteratively to many such objects. Here's what I wrote, but it doesn't
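A minimal sketch of such a function; the column names and the cutoff are hypothetical placeholders, since the original code is cut off above:

to_df_and_subset <- function(obj, cutoff = 0) {
  df <- as.data.frame(obj)      # 1) convert the object to a data frame
  df[df$value > cutoff, ]       # 2) return the subset (the last expression is the return value)
}

m <- matrix(c(-1, 2, 0, 3, 10, 20, 30, 40), ncol = 2,
            dimnames = list(NULL, c("value", "weight")))
to_df_and_subset(m, cutoff = 0)

Note that the subset has to be returned (or assigned from the call); a common roadblock is subsetting inside the function and then wondering why nothing comes back.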
2011 Aug 21
3
pooled hazard model with aftreg and time-dependent variables
Dear R-users, I have two samples: some individuals appear in both samples and some appear in only one. I have been trying to fit a pooled hazard model, stacking one sample below the other, with aftreg and time-dependent covariates. The idea is to see aggregate effects of the covariates, but I need to control for the effects of the same individuals appearing in both samples
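A hedged sketch of the stacking idea, assuming counting-process style data (enter, exit, event) with the time-dependent covariates already split into spells; the column names, the src indicator and the data frame names sample1/sample2 are placeholders, and eha::aftreg's id argument is used here on the assumption that it keeps spells of the same individual together:

library(eha)

pooled <- rbind(transform(sample1, src = 1),
                transform(sample2, src = 2))

fit <- aftreg(Surv(enter, exit, event) ~ x1 + x2 + factor(src),
              data = pooled,
              id   = pooled$id,   # ties together multiple spells per individual
              dist = "weibull")
summary(fit)

Individuals appearing in both samples would need a consistent id across the two stacked parts for this to control for them.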