Displaying 20 results from an estimated 1000 matches similar to: "Analyze multiple data sets simultaneously"
2010 May 18
25
Very serious performance degradation
Hi,
I'm running OpenSolaris 2009.06, and I'm facing a serious performance loss with ZFS! It's a raidz1 pool, made of 4 x 1TB SATA disks:
zfs_raid ONLINE 0 0 0
raidz1 ONLINE 0 0 0
c7t2d0 ONLINE 0 0 0
c7t3d0 ONLINE 0 0 0
c7t4d0 ONLINE 0 0
2008 Feb 15
1
Questions about EM algorithm
Dear all:
Assume I have 3 distributions, x1, x2, and x3.
x1 ~ normal(mu1, sd1)
x2 ~ normal(mu2, sd2)
x3 ~ normal(mu3, sd3)
y1 = x1 + x2
y2 = x1 + x3
Now the data I can observe are only y1 and y2. It is
easy to estimate (mu1+mu2), (mu1+mu3), (sd1^2+sd2^2) and
(sd1^2+sd3^2) by the EM algorithm, since
y1 ~ normal(mu1+mu2, sqrt(sd1^2+sd2^2)) and
y2 ~ normal(mu1+mu3, sqrt(sd1^2+sd3^2))
However, I want
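A minimal R sketch of the setup described above (the parameter values are invented, just to illustrate that y1 and y2 follow the stated sums of means and variances):
set.seed(1)
n  <- 1e5
x1 <- rnorm(n, mean = 1,  sd = 2)   # mu1 = 1,  sd1 = 2  (invented)
x2 <- rnorm(n, mean = 3,  sd = 1)   # mu2 = 3,  sd2 = 1  (invented)
x3 <- rnorm(n, mean = -2, sd = 4)   # mu3 = -2, sd3 = 4  (invented)
# only the sums are observed
y1 <- x1 + x2
y2 <- x1 + x3
# empirical checks against the stated distributions
c(mean(y1), 1 + 3)            # mu1 + mu2
c(sd(y1),   sqrt(2^2 + 1^2))  # sqrt(sd1^2 + sd2^2)
c(mean(y2), 1 + (-2))         # mu1 + mu3
c(sd(y2),   sqrt(2^2 + 4^2))  # sqrt(sd1^2 + sd3^2)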
2011 Jan 10
2
Calculating Portfolio Standard deviation
Dear R helpers
I have following data
stocks <- c("ABC", "DEF", "GHI", "JKL")
prices_df <- data.frame(ABC = c(17,24,15,22,16,22,17,22,15,19),
DEF = c(22,28,20,20,28,26,29,18,24,21),
GHI = c(32,27,32,36,37,37,34,23,25,32),
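The data frame is cut off in this excerpt; assuming the complete prices_df, a minimal sketch of one common route to a portfolio standard deviation (equal weights and simple one-period returns are assumptions, not taken from the post):
# simple one-period returns from the price series
returns <- apply(prices_df, 2, function(p) diff(p) / head(p, -1))
# equal weights across the stocks (assumed for illustration)
w <- rep(1 / ncol(returns), ncol(returns))
# portfolio variance w' S w, then its square root
S <- cov(returns)
portfolio_sd <- sqrt(t(w) %*% S %*% w)
portfolio_sd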
2006 Nov 24
1
barplot help needed
hello,
I would like to create the following barplot:
I have 4 different data sets (same length + stddev for each data point)
data1
sd1
data2
sd2
data3
sd3
data4
sd4
Now, I'd like to plot in the following way:
data1[1], data2[1], data3[1], data4[1] with their sd values side by side at
one x-axis label (named "position 1"), and each bar in a different color.
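A minimal sketch of one way to do this with base graphics, assuming four data vectors and matching sd vectors (the numbers and colours below are invented):
# invented example: 4 data sets, 3 positions each
data_mat <- rbind(data1 = c(2.1, 3.4, 2.8),
                  data2 = c(1.9, 3.0, 3.2),
                  data3 = c(2.5, 2.7, 3.0),
                  data4 = c(2.2, 3.1, 2.6))
sd_mat   <- rbind(c(0.2, 0.3, 0.2),
                  c(0.1, 0.2, 0.3),
                  c(0.2, 0.2, 0.2),
                  c(0.3, 0.1, 0.2))
# grouped bars: one group per position, one colour per data set
mids <- barplot(data_mat, beside = TRUE,
                names.arg = paste("position", 1:3),
                col = c("red", "blue", "green", "orange"),
                ylim = c(0, max(data_mat + sd_mat) * 1.1),
                legend.text = rownames(data_mat))
# standard deviations as error bars
arrows(mids, data_mat - sd_mat, mids, data_mat + sd_mat,
       angle = 90, code = 3, length = 0.05)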
2010 Jul 18
2
loop troubles
Hi all, I appreciate the help this list has given me before. I have a
question which has been perplexing me. I have been working on a Bayesian
calculation, inserting studies sequentially after a non-informative prior,
to get a meta-analysis-type result. I created a function using three
iterations of this; my code is below. I insert the prior
mean and precision (I add precision manually
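The poster's function is not shown in this excerpt; a minimal sketch of sequential conjugate updating of a normal mean with known precision (all study values below are invented), which is one common way to build up such a meta-analysis-type result:
# start from a (nearly) non-informative prior on the mean
prior_mean <- 0
prior_prec <- 1e-6            # precision = 1 / variance
# invented study estimates and their precisions
study_mean <- c(1.2, 0.8, 1.0)
study_prec <- c(4, 9, 6)
# conjugate normal update, one study at a time
for (i in seq_along(study_mean)) {
  post_prec  <- prior_prec + study_prec[i]
  post_mean  <- (prior_prec * prior_mean +
                 study_prec[i] * study_mean[i]) / post_prec
  prior_mean <- post_mean     # posterior becomes the prior for the next study
  prior_prec <- post_prec
}
c(posterior_mean = prior_mean, posterior_precision = prior_prec)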
2012 Jun 04
1
simulation of modified bartlett's test
Hi, I ran this code to get the power of the modified Bartlett's
test, but I'm not really sure that my coding is right.
#normal distribution unequal variance
asim<-5000
pv<-rep(NA,asim)
for(i in 1:asim)
{print(i)
set.seed(i)
n1<-20
n2<-20
n3<-20
mu<-0
sd1<-sqrt(25)
sd2<-sqrt(50)
sd3<-sqrt(100)
g1<-rnorm(n1,mu,sd1)
g2<-rnorm(n2,mu,sd2)
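# [the excerpt cuts off here; a hedged completion follows -- the poster's
#  "modified" Bartlett test is not shown, so the standard bartlett.test()
#  is used purely as a stand-in]
g3 <- rnorm(n3, mu, sd3)
x   <- c(g1, g2, g3)
grp <- factor(rep(1:3, times = c(n1, n2, n3)))
pv[i] <- bartlett.test(x, grp)$p.value
}
# empirical power at the 5% level
mean(pv < 0.05)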
2011 Nov 11
1
Fwd: Use of R for VECM
----- Forwarded Message -----
From: vramaiah at neo.tamu.edu
To: "bernhard pfaff" <bernhard.pfaff at pfaffikus.de>
Sent: Friday, November 11, 2011 9:03:11 AM GMT -06:00 US/Canada Central
Subject: Use of R for VECM
Hello Fellow R'ers
I am a new user of R and I am applying it to solve a bivariate (Consumption and Output) VECM with cointegration (I(1)) with three lags on
2007 Nov 17
11
slog tests on read throughput exhaustion (NFS)
I have historically noticed that in ZFS, whenever there is a heavy
writer to a pool via NFS, the reads can be held back (basically paused).
An example is a RAID10 pool of 6 disks, where writing a directory of
files, including some 100+ MB in size, can cause other
clients over NFS to pause for seconds (5-30 or so). This is on B70 bits.
I've gotten used to this behavior over NFS, but
2010 Feb 12
13
SSD and ZFS
Hi all,
just after sending a message to sunmanagers I realized that my question
should rather have gone here, so sunmanagers please excuse the double
post:
I have inherited a X4140 (8 SAS slots) and have just setup the system
with Solaris 10 09. I first setup the system on a mirrored pool over
the first two disks
pool: rpool
state: ONLINE
scrub: none requested
config:
NAME
2011 Nov 29
1
Create an identifier variable
I need to create a new identifier variable for a data set which has no
ID variable in the original file. I basically want to take the row
number and add "sd" in front of it, so it would look like this:
sd1
sd2
sd3
sd4
sd5 etc.......
I have no idea how to do this. I am a SAS user trying to learn R. This
question may have been answered previously, but I could be
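A minimal sketch of one way to do this in R, assuming the data sits in a data frame called mydata (the name is an assumption):
# prepend "sd" to the row number to build the identifier
mydata$id <- paste0("sd", seq_len(nrow(mydata)))
head(mydata$id)   # "sd1" "sd2" "sd3" "sd4" "sd5" ...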
2011 Apr 29
1
question of VECM restricted regression
Dear Colleague
I am trying to figure out how to use R to do an OLS restricted VECM regression. However, there is some notation I cannot understand.
Please tell me what 'ect', 'sd' and 'LRM.dl1' mean in the following example:
#OLS restricted VECM regression
data(denmark)
sjd <- denmark[, c("LRM", "LRY", "IBO", "IDE")]
sjd.vecm<-
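# [the call is cut off in the excerpt above; a hedged reconstruction follows,
#  based on the urca package's denmark example -- the exact arguments are an
#  assumption]
library(urca)
sjd.vecm <- ca.jo(sjd, ecdet = "const", type = "eigen", K = 2,
                  spec = "longrun", season = 4)
# restricted OLS form of the VECM for cointegration rank r = 1; in its output,
# 'ect1' is the error-correction term, 'sd1'..'sd3' are the quarterly seasonal
# dummies (from season = 4), and 'LRM.dl1' is the first lag of the differenced
# LRM series
cajorls(sjd.vecm, r = 1)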
2006 May 23
1
iostat numbers for ZFS disks, build 39
I updated an i386 system to b39 yesterday, and noticed this when
running iostat:
r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device
0.0 0.5 0.0 10.0 0.0 0.0 0.0 0.5 0 0 c0t0d0
0.0 0.5 0.0 10.0 0.0 0.0 0.0 0.6 0 0 c0t1d0
0.0 65.1 0.0 119640001.5 0.0 0.0 0.0 0.3 0 2 c0t2d0
0.0 65.1 0.0 119640090.2 0.0
2013 Feb 01
2
How does this function print, why is n1 which equals 1 printed as 2?
Windows 7, R 2.12.1
Colleagues,
I am trying to understand the n.for.2means function. The code below is a copy of the function (renamed to n.for.2means.js). I have inserted a single line of code towards the bottom of the function which uses the cat function to print the value of n1. You will note the value (preceded by stars) is printed as 1.
The function (1) prints a lot of output without any
2013 Jan 30
2
Integration of mixed normal distribution
Hi,
I already found a conversation on the integration of a normal
distribution and two
suggested solutions
(https://stat.ethz.ch/pipermail/r-help/2007-January/124008.html):
1) integrate(dnorm, 0,1, mean = 0, sd = 1.2)
and
2) pnorm(1, mean = 0, sd = 1.2) - pnorm(0, mean = 0, sd = 1.2)
where the pnorm-approach is supposed to be faster and with higher precision.
I want to integrate a mixed
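For a mixture, both approaches carry over by applying them to each component; a minimal sketch assuming a two-component normal mixture with invented weight and parameters:
# invented two-component mixture: weight w on N(m1, s1), (1 - w) on N(m2, s2)
w  <- 0.3
m1 <- 0; s1 <- 1.2
m2 <- 2; s2 <- 0.5
# mixture density
dmix <- function(x) w * dnorm(x, m1, s1) + (1 - w) * dnorm(x, m2, s2)
# 1) numerical integration over [0, 1]
integrate(dmix, 0, 1)$value
# 2) closed form via the component CDFs (faster, higher precision)
w       * (pnorm(1, m1, s1) - pnorm(0, m1, s1)) +
(1 - w) * (pnorm(1, m2, s2) - pnorm(0, m2, s2))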
2014 Feb 27
1
Join Samba4 member server to Windows AD
Hello everybody,
I need to set up a domain/subdomain environment with Windows AD. All the
DCs run Windows Server 2012 R2. For all domains (root and subdomains), the
forest and domain functional levels are set to Windows 2008 R2.
I want to use Samba 4 servers as file servers in these domains, but up to
now I have had trouble adding Samba 4 member servers to Windows AD.
My test environment is made of 2
2004 Sep 16
3
Estimating parameters for a bimodal distribution
For several years, I have been using S-PLUS to analyze an ongoing series of
datasets that have a bimodal distribution. I have used the following
functions, in particular the ms() function, to estimate the parameters: two
means, two standard deviations, and one proportion. Here is the code I've
been using in S:
btmp.bi <- function(vec, p, m1, m2, sd1, sd2)
{
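The function body is cut off in this excerpt; a minimal R sketch of the same idea (two means, two standard deviations, one mixing proportion), using optim() on the negative log-likelihood with invented data, as one possible replacement for S-PLUS's ms():
# invented bimodal sample
set.seed(42)
vec <- c(rnorm(200, mean = 2, sd = 0.5), rnorm(300, mean = 6, sd = 1))
# negative log-likelihood of a two-component normal mixture
# par = (p on the logit scale, m1, m2, log(sd1), log(sd2))
negll <- function(par, x) {
  p  <- plogis(par[1])
  m1 <- par[2]; m2 <- par[3]
  s1 <- exp(par[4]); s2 <- exp(par[5])
  -sum(log(p * dnorm(x, m1, s1) + (1 - p) * dnorm(x, m2, s2)))
}
fit <- optim(c(0, 1, 5, 0, 0), negll, x = vec, method = "BFGS")
# back-transform to the natural scale
c(p = plogis(fit$par[1]), m1 = fit$par[2], m2 = fit$par[3],
  sd1 = exp(fit$par[4]), sd2 = exp(fit$par[5]))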
2006 Oct 05
13
Unbootable system recovery
I have just recently (physically) moved a system with 16 hard drives (for the array) and 1 OS drive; and in doing so, I needed to pull out the 16 drives so that it would be light enough for me to lift.
When I plugged the drives back in, initially, it went into a panic-reboot loop. After doing some digging, I deleted the file /etc/zfs/zpool.cache.
When I try to import the pool using the zpool
2003 Nov 01
4
Beginner: Homogeneity of Variances
Hello,
for my meta-analysis I am trying to test whether two variances are equal without
using the raw scores. All I have are the SDs, Ns and means.
I want to test the variances from dependent and independent
samples.
I assume I can use the var.test procedure for the independent
samples, but what about the dependent samples? Does anyone have an
idea how to realise this with R?
Thanks in advance
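For the independent-samples case, the variance-ratio F test can be computed directly from the summary statistics; a minimal sketch with invented values (the dependent-samples case additionally needs the correlation between the two measures, which a meta-analysis summary usually does not report):
# invented summary statistics for two independent samples
sd1 <- 2.3; n1 <- 25
sd2 <- 3.1; n2 <- 30
# variance-ratio F test from SDs and Ns alone (two-sided p-value)
F <- sd1^2 / sd2^2
p <- 2 * min(pf(F, n1 - 1, n2 - 1), 1 - pf(F, n1 - 1, n2 - 1))
c(F = F, p.value = p)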
2005 Jan 08
2
Does R accumulate memory
Dear List:
I am running into a memory issue that I haven't noticed before. I am
running a simulation with all of the code used below. I have increased
my memory limit to 712 MB and have a total of 1 GB on my machine.
What appears to be happening is I run a simulation where I create 1,000
datasets with a sample size of 100. I then run each dataset through a
gls and obtain some estimates.
This works
2002 Mar 01
3
Power of t-test in R vs. S-PLUS
Dear all,
I found a discrepancy while performing a power calculation for a two-sample
t-test in R and in S-PLUS.
For given values of sample size (5 each), sd (0.2), significance level
(0.01), and a desired power (80%), I looked for the detectable difference in means.
These values differ: 0.5488882 in R and 0.4322771 in S-PLUS (see dump
below).
Did I overlook any detail or confuse some
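For reference, the R side of the comparison presumably comes from a call like the following (the arguments are inferred from the values quoted above); the 'delta' element of the result is the 0.5488882 figure:
power.t.test(n = 5, sd = 0.2, sig.level = 0.01, power = 0.8,
             type = "two.sample", alternative = "two.sided")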