Displaying 20 results from an estimated 1000 matches similar to: "Assistance with data import from Statistica"
2008 Apr 09
2
GLM fitting in R and Statistica
Hi,
I have a problem concerning discrepancies between R (which I use) and
Statistica (which my supervisor uses). I can't say what the origin of
these differences is, and unfortunately my supervisor doesn't know
either.
Our response variable is number (or presence/absence) of parasites in
rodents and explanatory variables are presence/absence of several
alleles. The rodents were
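The excerpt is truncated, but a minimal sketch of the kind of model described, assuming hypothetical column names parasites, alleleA and alleleB in a data frame rodents, might be:

# Hypothetical data: one row per rodent, 0/1 presence of parasites and of each allele
rodents <- data.frame(parasites = rbinom(50, 1, 0.4),
                      alleleA   = rbinom(50, 1, 0.5),
                      alleleB   = rbinom(50, 1, 0.5))
# Logistic GLM for presence/absence; family = poisson would suit counts instead
fit <- glm(parasites ~ alleleA + alleleB, data = rodents, family = binomial)
summary(fit)

Discrepancies between packages often come down to how categorical predictors are coded (treatment vs. sum-to-zero contrasts) or which link function is the default, so that is worth checking before suspecting the fit itself.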
2011 Mar 22
2
Popularity of R, SAS, SPSS, Stata, Statistica, S-PLUS updated
Greetings,
I've just put out the latest version of "The Popularity of Data Analysis Software" at http://r4stats.com/popularity. This update includes complete data for 2010, the addition of number of blogs for each software, more coverage of Statistica, and, where possible, measures regarding the implementations of the SAS Language: Carolina and the World Programming System (WPS).
2003 Nov 06
2
Summary: How to represent pure linefeeds chr(10) under R for Windows
Thanks to all who have responded.
My concern was to be able to write a csv file that can have line feeds in
string columns chr(10).
Why? Excel allows line feeds chr(10) within cells and line breaks
chr(13)+chr(10) at line endings,
but the Windows version of R automatically replaces \n by \r\n when writing
and \r\n by \n when reading (text mode).
The clues for a solution came from Brian Ripley and
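One standard workaround, which I believe matches the advice in that thread, is to write through a binary-mode connection so no end-of-line translation happens; a rough sketch, assuming a data frame df:

df <- data.frame(id = 1:2, note = c("one line", "two\nlines"))
con <- file("out.csv", open = "wb")          # binary mode: "\n" inside fields is written as-is
write.csv(df, file = con, eol = "\r\n", row.names = FALSE)   # CRLF only at record ends
close(con)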
2004 Oct 01
1
dataload for linux
Is there a dataload utility for Linux? The link in Genstat is down, but I
managed to find the utility at:
http://gurukul.ucc.american.edu/econ/gaussres/UTILITYS/DATALOAD.HTM
but this is a dos/windows version.
Thank you
Jean
2004 Dec 23
2
Importing csv files
There is a recurring need for importing large csv files quickly. David
Baird's dataload is a standalone program that will directly create .rda
files from .csv (it also handles many other conversions). Unfortunately
dataload is no longer publicly available because of some kind of
relationship with Stat/Transfer. The idea is a good one, though. I
wonder if anyone would volunteer to
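For the simple csv-to-.rda part, a minimal sketch in plain R (dataload itself handled many more formats, and a hypothetical file name is used here) is:

d <- read.csv("big.csv")                   # parse the csv once
save(d, file = "big.rda", compress = TRUE) # store it as a compact binary file
# later sessions reload the .rda much faster than re-parsing the csv:
load("big.rda")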
2003 Oct 22
2
Excel to R
I have Excel files containing data that I would like to move to R.
They are in the standard form of a one-row header followed by
rows of data, one record per row EXCEPT that there are a few
rows of comments before the header. The number of rows of comments
varies. For Excel files of this form without comments I have had
success with:
require(RODBC)
z <-
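The excerpt is cut off here, but a sketch of the usual RODBC route (hypothetical file and sheet names; odbcConnectExcel needs 32-bit R on Windows) is:

require(RODBC)
ch <- odbcConnectExcel("data.xls")   # hypothetical workbook
z  <- sqlFetch(ch, "Sheet1")         # sheet name is an assumption
odbcClose(ch)

With a variable number of comment rows before the header, one option is to export the sheet to csv, locate the header line with readLines() and grep(), and then pass skip= to read.csv().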
2003 Nov 04
5
read.spss Error reading system-file header
Is there any documentation on what kind of SPSS file can and cannot be
read by read.spss? Alternatively, how can one modify or "clean" an SPSS
file to make it readable by read.spss? What properties must a *.sav file
have before read.spss can read it?
The file in this example is 270KB, with 5 rows and 173 columns. I have no
trouble reading larger files with read.spss, so it's not
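For reference, the basic call from the foreign package (hypothetical file name) is:

library(foreign)
# to.data.frame = TRUE returns a data frame instead of a list;
# use.value.labels controls whether labelled values become factors
d <- read.spss("survey.sav", to.data.frame = TRUE, use.value.labels = TRUE)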
2003 Nov 06
3
import data troubles
Hi R lovers,
I have been facing a petty problem with data import:
I have a plain txt file (see attached file or the copy below) that I cannot
read with either scan or read.table
> scan(file="F:/Alt/HDG/Stoliaroff/Data/test.txt")
Error in scan(file = "F:/Alt/HDG/Stoliaroff/Data/test.txt") :
"scan" expected a real, got "??6"
>
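The "??6" suggests characters that scan() cannot parse as numbers; a non-ASCII encoding or a decimal comma are common culprits. A hedged sketch of things to try:

# if the file has a header line and uses a decimal comma:
d <- read.table("F:/Alt/HDG/Stoliaroff/Data/test.txt", header = TRUE, dec = ",")
# if the numbers are fine but the encoding is not, re-read with an explicit encoding:
d <- read.table("F:/Alt/HDG/Stoliaroff/Data/test.txt", header = TRUE,
                fileEncoding = "latin1")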
2004 Feb 26
2
Multidimensional scaling and distance matrices
Dear All,
I am in the somewhat unfortunate position of having to reproduce the
results previously obtained from (non-metric?) MDS on a "kinship" matrix
using Statistica. A kinship matrix measures affinity between groups, and
has its maximum values on the diagonal.
Apparently, starting with an n x n kinship matrix, all that needed to be done
was to feed it to Statistica, flagging that the
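A sketch of one way to do this in R, assuming k is the n x n kinship (similarity) matrix: convert it to a dissimilarity, then run non-metric MDS with MASS::isoMDS. The data here are hypothetical.

library(MASS)
set.seed(1)
k <- crossprod(matrix(runif(25), 5))            # hypothetical positive-definite "similarity" matrix
dimnames(k) <- list(LETTERS[1:5], LETTERS[1:5])
d <- max(k) - k                                 # similarity -> dissimilarity
diag(d) <- 0
fit <- isoMDS(as.dist(d), k = 2)                # non-metric MDS in 2 dimensions
fit$points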
2015 Feb 04
2
[LLVMdev] question about enabling cfl-aa and collecting a57 numbers
Sounds good, I'll reword that comment. Also, the assert you mentioned
turned out to be a bad assumption when combined with how I foresee us
handling inttoptr/ptrtoint in the future, so I'll just replace it with
slightly more robust code. :)
Thanks for the feedback,
George
On Tue, Feb 3, 2015 at 11:30 PM, Hal Finkel <hfinkel at anl.gov> wrote:
> Hi George,
>
> +// Given an
2009 Dec 07
2
column statistics
Hi everybody,
I would like to compute the mean of one variable across the rows that
share the same factor levels.
For example, with the dataset below:
Factor1 Factor2 Value
A X 1
A X 2
A Y 3
A Y 4
B X 5
B X 6
B Y
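A minimal sketch, assuming the data are in a data frame dat with those three columns (the last Value is missing in the excerpt, so it is treated as NA here):

dat <- data.frame(Factor1 = c("A","A","A","A","B","B","B"),
                  Factor2 = c("X","X","Y","Y","X","X","Y"),
                  Value   = c(1, 2, 3, 4, 5, 6, NA))
# mean of Value within each Factor1/Factor2 combination
aggregate(Value ~ Factor1 + Factor2, data = dat, FUN = mean, na.rm = TRUE)
# the same with tapply:
with(dat, tapply(Value, list(Factor1, Factor2), mean, na.rm = TRUE))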
2015 Jan 30
0
[LLVMdev] question about enabling cfl-aa and collecting a57 numbers
----- Original Message -----
> From: "George Burgess IV" <george.burgess.iv at gmail.com>
> To: "Hal Finkel" <hfinkel at anl.gov>
> Cc: "Chandler Carruth" <chandlerc at google.com>, "Jiangning Liu" <Jiangning.Liu at arm.com>, "LLVM Developers Mailing
> List" <llvmdev at cs.uiuc.edu>, "Daniel
2015 Feb 04
3
[LLVMdev] BasicAA Tests
[+llvmdev]
Hi George,
You're right, these tests are broken, and have been for a long time. As it turns out, at least in terms of the 2003-12-11-ConstExprGEP.ll test, this is related to a case we've been discussing in another thread ("Basic AliasAnalysis: Can GEPs with the same base but different constant indices into a struct alias?"). It seems like, to some extent, we used to
2015 Jan 30
2
[LLVMdev] question about enabling cfl-aa and collecting a57 numbers
> I had thought that the case that Danny had looked at had a constant GEP,
and so this constant might alias with other global pointers. How is that
handled now?
That issue had to do with the assumption that, for all arguments of a given
Instruction, each argument was either an Argument, a GlobalValue, or an Inst in
`for (auto& Bb : Inst.getBasicBlockList()) for (auto& Inst :
2015 Jan 31
3
[LLVMdev] question about enabling cfl-aa and collecting a57 numbers
So, I split it up into three patches:
- cflaa-danny-fixes.diff are (some of?) the fixes that Danny gave us earlier for tests + the minimal modifications you’d need to make in CFLAA to make them pass tests.
- cflaa-minor-bugfixes.diff consists primarily of a bug fix for Argument handling — we’d always report NoAlias when one of the given variables was an entirely unused argument
(We never added
2010 Jan 30
2
Questions on Mahalanobis Distance
Hello,
I am a new R user and trying to learn how to implement the mahalanobis
function to measure the distance between 2 population centroids. I
have used STATISTICA to calculate these differences, but was hoping to learn
to do the analysis in R. I have implemented the code as below, but my
results are very different from those of STATISTICA, and I believe I may not
have interpreted the help
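The code from the post is not shown, but a sketch of one common way to get the distance between two group centroids (pooled within-group covariance, hypothetical data) is:

set.seed(1)
g1 <- matrix(rnorm(50 * 3), ncol = 3)              # hypothetical group 1
g2 <- matrix(rnorm(50 * 3, mean = 0.5), ncol = 3)  # hypothetical group 2
n1 <- nrow(g1); n2 <- nrow(g2)
S  <- ((n1 - 1) * cov(g1) + (n2 - 1) * cov(g2)) / (n1 + n2 - 2)  # pooled covariance
D2 <- mahalanobis(colMeans(g1), center = colMeans(g2), cov = S)  # squared distance
c(D2 = D2, D = sqrt(D2))

Note that some packages report the squared distance D^2 and others the distance itself, which by itself can make results look very different.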
2018 Dec 06
4
[cfe-dev] RFC: Modernizing our use of auto
> On Dec 4, 2018, at 10:59 AM, George Burgess IV <george.burgess.iv at gmail.com> wrote:
>
> > I think people are too eager to use `auto` because it is easy to write but it makes the types substantially harder for the reader to understand
>
> I'm probably the Nth person to ask this, but what keeps us from promoting the use of a clang-tidy-powered tool that basically
2013 Oct 09
1
mixed model MANOVA? does it even exist?
Hi,
Sorry to bother you again.
I would like to estimate the effect of several categorical factors (two
between subjects and one within subjects) on two continuous dependent
variables that probably covary, with subjects as a random effect. I want
to control for the covariance between those two DVs when estimating the
effects of the categorical predictors on those two DVs. The thing is, I
2005 Mar 01
4
write a library under 2.0.1 version
Hi there,
I had written a library under R 1.9.0 and now I would like to use that
library under R 2.0.1.
Apparently it does not work; I can install the package, but when I try to
load it the error is the following:
Error in library(compvar) : 'compvar' is not a valid package -- installed
< 2.0.0
I have checked other libraries in R 2.0.1 and I noticed that there are
new folders
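The message means the package was built under R < 2.0.0, and it has to be re-installed from source under the new version. A sketch, with a hypothetical source path:

# from inside R, pointing at the package's source tarball:
install.packages("path/to/compvar_1.0.tar.gz", repos = NULL, type = "source")
# or from a shell, in the directory holding the package sources:  R CMD INSTALL compvar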
2012 May 29
2
Wilcoxon-Mann-Whitney U value: outcomes from different stat packages
Given this example
#start code
a<-c(0,70,50,100,70,650,1300,6900,1780,4930,1120,700,190,940,
760,100,300,36270,5610,249680,1760,4040,164890,17230,75140,1870,22380,5890,2430)
b<-c(0,0,10,30,50,440,1000,140,70,90,60,60,20,90,180,30,90,
3220,490,20790,290,740,5350,940,3910,0,640,850,260)
wilcox.test(a, b, paired=FALSE)
# sum of ranks for the first sample
sum.rank.a <-
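The excerpt stops mid-computation; a sketch of the rest of the hand calculation, and of why packages can disagree, is:

r <- rank(c(a, b))                                  # joint ranks of both samples
sum.rank.a <- sum(r[seq_along(a)])                  # rank sum of the first sample
U.a <- sum.rank.a - length(a) * (length(a) + 1) / 2
U.b <- length(a) * length(b) - U.a
c(U.a = U.a, U.b = U.b)
# wilcox.test() reports W = U.a; some packages report min(U.a, U.b),
# which is one common reason the U values appear to differ.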