Displaying 20 results from an estimated 8000 matches similar to: "the function doesn't work"
2010 Sep 25
3
3D plot
Hey, how can I plot this function? Thanks for your help.
n <- 1000
m <- 2
k <- n/m
N <- 100
myfun <- function(n, m, alpha = .05, seeder = 1000) {
  l <- matrix(0, nrow = m, ncol = N)
  for (i in 1:N) {
    set.seed(i)
    for (j in 1:m) {
      x <- rnorm(n, 0, 0.5)
      y <- rnorm(n, 0, 0.8)
      l[j, i] <- cor(x[(((j - 1) * k) + 1):(((j - 1) * k) + k)],
                     y[(((j - 1) * k) + 1):(((j - 1) * k) + k)])
    }
  }
  for (i in 1:N) {
    for (j in 1:m) {
      gute <- function() {
        q_1 <-
2010 Sep 25
1
(no subject)
Hi, how can I plot this function now? Does m have to be 2 because of the dimensions? Thanks for your help.
myfun <- function(n, m, alpha = .05, seeder = 1000) {
  set.seed(seeder)
  x <- matrix(rnorm(n, 0, 0.5), ncol = m)
  y <- matrix(rnorm(n, 0, 0.8), ncol = m)
  l <- diag(cor(x, y))
  cat("Correlations between two random variables \n", l, fill = TRUE)
  gute
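For reference, the usual route to a 3D plot of a scalar-valued function of two numeric arguments in R is to evaluate it on a grid with outer() and draw the surface with persp(). A minimal sketch follows; f() is a hypothetical stand-in, not the myfun() above (whose excerpt is incomplete), and Vectorize() is only needed when the function is not already vectorized.
f <- function(a, b) sin(a) * cos(b)        # hypothetical placeholder function
a <- seq(0, pi, length.out = 30)
b <- seq(0, pi, length.out = 30)
z <- outer(a, b, Vectorize(f))             # matrix of function values on the grid
persp(a, b, z, theta = 30, phi = 25, xlab = "a", ylab = "b", zlab = "f(a, b)")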
2007 Apr 24
1
Matrix: how to re-use the symbolic Cholesky factorization?
I have been playing around with sparse matrices in the Matrix
package, in particular with the Cholesky factorization of matrices
of class dsCMatrix. And BTW, what a fantastic package.
My problem is that I have to carry out repeated Cholesky
factorizations of sparse symmetric matrices, say Q_1, Q_2, ..., Q_n,
where the Q's all have the same non-zero pattern. I know that in this case one
does
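For reference, the Matrix package lets you compute the symbolic analysis once with Cholesky() and then refit the same factor to a new matrix with update(), which re-uses the fill-reducing permutation and non-zero pattern. A minimal sketch, with hypothetical matrices Q1 and Q2 standing in for the Q_i:
library(Matrix)
set.seed(1)
A  <- rsparsematrix(100, 100, density = 0.05)
Q1 <- forceSymmetric(crossprod(A) + Diagonal(100))         # sparse, symmetric, p.d.
Q2 <- forceSymmetric(crossprod(A) + Diagonal(100, x = 2))  # same non-zero pattern
ch  <- Cholesky(Q1)       # symbolic analysis + numeric factorization
ch2 <- update(ch, Q2)     # re-uses the symbolic analysis, only refactors numerically
x   <- solve(ch2, rnorm(100))   # e.g. solve Q2 x = b with the updated factor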
2010 Sep 08
11
problem with outer
Hello,
I wrote this function guete and now I want to plot it, but I get the error
message below. I hope someone can help me.
Error in dim(robj) <- c(dX, dY) :
dims [product 16] do not match the length of object [1]
p_11 <- seq(0, 0.3, 0.1)
p_12 <- seq(0.1, 0.4, 0.1)
guete <- function(p_11, p_12) {
  set.seed(1000)
  S_vek <- matrix(0, nrow = N, ncol = 1)
  for (i in 1:N) {
    X_0 <- rmultinom(q - 1, size = 1, prob = p_0)
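That error is the usual symptom of handing outer() a function that is not vectorized: outer() calls it once with the full grids of arguments and expects a vector of 4 x 4 = 16 values back, but a function built around a for loop returns a single number. A minimal sketch of the standard workaround, wrapping the scalar function in Vectorize() (guete_scalar is a hypothetical stand-in for the incomplete guete above):
guete_scalar <- function(p_11, p_12) p_11 + p_12       # returns one number per call
p_11 <- seq(0, 0.3, 0.1)
p_12 <- seq(0.1, 0.4, 0.1)
z <- outer(p_11, p_12, Vectorize(guete_scalar))        # 4 x 4 matrix, no dim() error
persp(p_11, p_12, z, theta = 30, phi = 25)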
2013 Jul 29
0
[LLVMdev] [Polly] Analysis of the expensive compile-time overhead of Polly Dependence pass
On 07/29/2013 09:15 AM, Sven Verdoolaege wrote:
> On Mon, Jul 29, 2013 at 07:37:14AM -0700, Tobias Grosser wrote:
>> On 07/29/2013 03:18 AM, Sven Verdoolaege wrote:
>>> On Sun, Jul 28, 2013 at 04:42:25PM -0700, Tobias Grosser wrote:
>>>> Sven: In terms of making the behaviour of isl easier to understand,
>>>> it may make sense to fail/assert in case
2013 Jul 26
6
[LLVMdev] [Polly] Analysis of the expensive compile-time overhead of Polly Dependence pass
Hi Sebastian,
Recently, I found that the "Polly - Calculate dependences" pass can lead to significant compile-time overhead when compiling some loop-intensive source code. Tobias told me you had found a similar problem, reported here:
http://llvm.org/bugs/show_bug.cgi?id=14240
My evaluation shows that the "Polly - Calculate dependences" pass consumes 96.4% of the total compile-time overhead
2012 Mar 06
1
Reshape question
I have a data frame in wide format. Six of its variables represent the 3x2 crossing of two factors, Valence and Temperature:
> head(dpts)
File Subj Time Group PainNeg.hot PainNeg.warm SociNeg.hot SociNeg.warm Positiv.hot Positiv.warm Errors
1 WB101_1_1_dp.txt 101 1 MNP 30.700000 13.75000 16.319048 35.166667 30.18333 14.383333 1
2
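For reference, a minimal sketch of one way to do the wide-to-long step with base reshape(), assuming the data frame dpts shown above; the value-column name "Rating" and the splitting of the combined names into Valence and Temperature are illustrative assumptions:
vars <- c("PainNeg.hot", "PainNeg.warm", "SociNeg.hot",
          "SociNeg.warm", "Positiv.hot", "Positiv.warm")
long <- reshape(dpts, varying = vars, v.names = "Rating",
                timevar = "Condition", times = vars, direction = "long")
long$Valence     <- sub("\\..*$", "", long$Condition)   # PainNeg / SociNeg / Positiv
long$Temperature <- sub("^.*\\.", "", long$Condition)   # hot / warm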
2013 May 03
0
[LLVMdev] [Polly] GSoC Proposal: Reducing LLVM-Polly Compiling overhead
Dear Tobias,
Thank you very much for your very helpful advice.
Yes, -debug-pass and -time-passes are two very useful and powerful options when evaluating the compile-time of each compiler pass. They are exactly what I need! With these options, I can step into details of the compile-time overhead of each pass. I have finished some preliminary testing based on two randomly selected files from
2013 May 02
2
[LLVMdev] [Polly] GSoC Proposal: Reducing LLVM-Polly Compiling overhead
On 04/30/2013 04:13 PM, Star Tan wrote:
> Hi all,
[...]
> How could I find out where the time is spent between two adjacent Polly passes? Can anyone give me some advice?
Hi Star Tan,
I propose to do the performance analysis using the 'opt' tool and
optimizing LLVM-IR, instead of running it from within clang. For the
'opt' tool there are two commands that should help
2013 May 03
2
[LLVMdev] [Polly] GSoC Proposal: Reducing LLVM-Polly Compiling overhead
On 05/03/2013 11:39 AM, Star Tan wrote:
> Dear Tobias,
>
>
> Thank you very much for your very helpful advice.
>
>
> Yes, -debug-pass and -time-passes are two very useful and powerful
> options when evaluating the compile-time of each compiler pass. They
> are exactly what I need! With these options, I can step into details
> of the compile-time overhead of each pass.
2001 Apr 05
1
PR#896
Sorry to all that are angry about the form of my previous mail. I
didn't realise what would happen :((.
Here it is in (hopefully) plain text (if my mailer doesn't spoil it again):
##############
Dear developers,
I have a problem with some discrepancy between R 1.2.1 for
Windows and R 1.2.2 (and earlier) for Linux. While trying to correct
the wilcox.test (see my previous bug report) I
2019 Jun 21
4
Calculation of e^{z^2/2} for a normal deviate z
Hello,
Well, try it:
p <- .Machine$double.eps^seq(0.5, 1, by = 0.05)
z <- qnorm(p/2)
pnorm(z)
# [1] 7.450581e-09 1.228888e-09 2.026908e-10 3.343152e-11 5.514145e-12
# [6] 9.094947e-13 1.500107e-13 2.474254e-14 4.080996e-15 6.731134e-16
#[11] 1.110223e-16
p/2
# [1] 7.450581e-09 1.228888e-09 2.026908e-10 3.343152e-11 5.514145e-12
# [6] 9.094947e-13 1.500107e-13 2.474254e-14 4.080996e-15
2009 Mar 10
6
Pseudo-random numbers between two numbers
I would like to generate pseudo-random numbers between two numbers using
R, following a given distribution,
for instance the normal.
That is, something like rnorm(HowMany, Min, Max, mean, sd) instead of
rnorm(HowMany, mean, sd).
I am wondering if
dnorm(runif(HowMany, Min, Max), mean, sd)
is good. Any idea? Thanks.
-james
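Note that dnorm(runif(...)) returns density values, not random draws. A minimal sketch of one standard way to draw normal variates restricted to [Min, Max], via the inverse CDF on the truncated probability range (argument names follow the question; rtrunc_sketch is a hypothetical helper):
rtrunc_sketch <- function(HowMany, Min, Max, mean = 0, sd = 1) {
  u <- runif(HowMany, pnorm(Min, mean, sd), pnorm(Max, mean, sd))
  qnorm(u, mean, sd)                    # all draws fall inside [Min, Max]
}
x <- rtrunc_sketch(1000, Min = -1, Max = 2, mean = 0, sd = 1)
range(x)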
2019 Jun 21
4
Calculation of e^{z^2/2} for a normal deviate z
You may want to look into using the log option to qnorm
e.g., in round figures:
> log(1e-300)
[1] -690.7755
> qnorm(-691, log=TRUE)
[1] -37.05315
> exp(37^2/2)
[1] 1.881797e+297
> exp(-37^2/2)
[1] 5.314068e-298
Notice that floating point representation cuts out at 1e+/-308 or so. If you want to go outside that range, you may need explicit manipulation of the log values. qnorm()
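A minimal sketch of that explicit log-scale manipulation (an illustration, not code from the thread): keep log(e^{z^2/2}) = z^2/2 as a log value and only convert it to a base-10 mantissa/exponent pair for display, so nothing overflows.
p <- 1e-300
z <- qnorm(log(p), log.p = TRUE)        # about -37.05, computed on the log scale
log10_val <- (z^2 / 2) / log(10)        # log10 of e^{z^2/2}
c(mantissa = 10^(log10_val %% 1), exponent = floor(log10_val))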
2011 Sep 03
3
question with uniroot function
Dear all,
I have the following problem with the uniroot function. I want to find
roots of the function "Fp2", which is defined below.
Fz  <- function(z) { 0.8 * pnorm(z) + p1 * pnorm(z - u1) + (0.2 - p1) * pnorm(z - u2) }
Fp  <- function(t) { (1 - Fz(abs(qnorm(1 - (t/2))))) + (Fz(-abs(qnorm(1 - (t/2))))) }
Fp2 <- function(t) { Fp(t) - 0.8 * t / alpha }
th <- uniroot(Fp2, lower = 0, upper = 1,
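For reference, uniroot() requires f(lower) and f(upper) to have opposite signs, and every constant used inside the function (p1, u1, u2, alpha) must be defined. A minimal runnable sketch with assumed placeholder values that the excerpt does not show:
p1 <- 0.1; u1 <- 1; u2 <- 2; alpha <- 0.05    # assumed values, not from the post
Fz  <- function(z) 0.8 * pnorm(z) + p1 * pnorm(z - u1) + (0.2 - p1) * pnorm(z - u2)
Fp  <- function(t) (1 - Fz(abs(qnorm(1 - t/2)))) + Fz(-abs(qnorm(1 - t/2)))
Fp2 <- function(t) Fp(t) - 0.8 * t / alpha
c(Fp2(1e-6), Fp2(1))                          # check the endpoint signs first
uniroot(Fp2, lower = 1e-6, upper = 1)$root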
2006 Jul 03
2
help a newbie with a loop
Hi,
I am new to R and stumbled on a problem that my (more experienced) friends
cannot help me with. Why isn't this code working?
The function works, also with the loop, and the graph appears;
but when I build another loop around it (for different values of p),
R stays in a loop.
Can't it take more than two loops in one program?
powerb<-function(x,sp2,a,b,b1,m)
{
2007 Mar 22
3
Cohen's Kappa
Hi,
I'm a little bit confused about Cohen's Kappa, and I have been looking into the
Kappa function code. Is the simple formula really wrong?
kappa = (agreement - chance) / (1 - chance)
many thanks
christian
###############################################################################
true-negative: 7445
false-positive: 3410
false-negative: 347
true-positive: 772
classification agreement: 68.6%
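For reference, the usual formula is kappa = (observed agreement - chance agreement) / (1 - chance agreement), where the chance term comes from the marginal totals. A minimal sketch of the computation from the 2x2 counts quoted above (the resulting value of roughly 0.17 is computed from these numbers, not taken from the thread):
tn <- 7445; fp <- 3410; fn <- 347; tp <- 772
n  <- tn + fp + fn + tp
po <- (tn + tp) / n                                          # observed agreement, ~0.686
pe <- ((tn + fp) * (tn + fn) + (fn + tp) * (fp + tp)) / n^2  # chance agreement
(po - pe) / (1 - pe)                                         # kappa, roughly 0.17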
2008 Feb 06
1
ci.pd() (Epi) and Newcombe method
Greetings!
I suspect that there is an error in the code for the
function ci.pd() in the Epi package.
This function computes confidence intervals
for a difference of proportions between two independent
groups of 0/1 responses, and implements the Newcombe
("Nc") method and the Agresti-Caffo ("AC") method.
I think there is an error in the computation for the
Newcombe
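For background, Newcombe's hybrid score interval for p1 - p2 combines the Wilson score limits of the two single proportions. A minimal sketch of that method as an independent illustration (this is not the Epi::ci.pd() source; the counts in the usage line are hypothetical):
newcombe_ci <- function(x1, n1, x2, n2, alpha = 0.05) {
  z <- qnorm(1 - alpha / 2)
  wilson <- function(x, n) {            # Wilson score limits for one proportion
    p      <- x / n
    centre <- (p + z^2 / (2 * n)) / (1 + z^2 / n)
    half   <- z * sqrt(p * (1 - p) / n + z^2 / (4 * n^2)) / (1 + z^2 / n)
    c(centre - half, centre + half)
  }
  p1 <- x1 / n1; p2 <- x2 / n2
  w1 <- wilson(x1, n1); w2 <- wilson(x2, n2)
  d  <- p1 - p2
  c(lower = d - sqrt((p1 - w1[1])^2 + (w2[2] - p2)^2),
    upper = d + sqrt((w1[2] - p1)^2 + (p2 - w2[1])^2))
}
newcombe_ci(56, 70, 48, 80)   # hypothetical counts: 56/70 vs 48/80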
2011 Oct 05
4
SPlus to R
I'm trying to convert an S-Plus program to R. Since I'm a SAS programmer I'm not facile in either S-Plus or R, so I need some help. All I did was convert the underscores in S-Plus to the assignment operator <-. Here are the first few lines of the S-Plus file:
sshc _ function(rc, nc, d, method, alpha=0.05, power=0.8,
tol=0.01, tol1=.0001, tol2=.005, cc=c(.1,2),
2007 Apr 14
6
[LLVMdev] Regalloc Refactoring
On Thu, 12 Apr 2007, Fernando Magno Quintao Pereira wrote:
>> I'm definitely interested in improving coalescing and it sounds like
>> this would fall under that work. Do you have references to papers
>> that talk about the various algorithms?
>
> Some suggestions:
>
> @InProceedings{Budimlic02,
> AUTHOR = {Zoran Budimlic and Keith D. Cooper and Timothy