similar to: Partial Sums

Displaying 20 results from an estimated 7000 matches similar to: "Partial Sums"

2011 Mar 14
1
Math characters in column heading using latex() in Hmisc
Hi everybody, I want to print a LaTeX table containing math characters in the column headings. These are the formulae I want to use as column headings; they print fine from TeX: $\sum_{i}\sum_{j}C_{P,i,j,y}\times\mathit{FC}_{i}$, $XU_{alt,y}$, $n$, $\bar{C}_{P,y}$. My plan was to create a character vector with these and later rbind the values to them. When I create the vector like:
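A minimal sketch of one way to try this (not from the thread): the table body below is made up, and it assumes latex.default()'s colheads argument passes the strings through to LaTeX unescaped, which is worth checking in the output. The main R-side trap is that every backslash has to be doubled inside a string literal.

    library(Hmisc)
    heads <- c("$\\sum_{i}\\sum_{j}C_{P,i,j,y}\\times\\mathit{FC}_{i}$",
               "$XU_{alt,y}$", "$n$", "$\\bar{C}_{P,y}$")
    body  <- matrix(round(runif(8), 2), nrow = 2)   # hypothetical values to sit under the headings
    latex(body, file = "", colheads = heads)        # assumption: colheads is emitted as-is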
2012 Oct 18
7
summation coding
I would like to code the following in R: a1(b2+b3+b4) + a2(b1+b3+b4) + a3(b1+b2+b4) + a4(b1+b2+b3), or in summation notation: sum_{i=1}^{4} a_i * sum_{j \neq i} b_j. I realise this is the same as: sum_{i=1}^{4} sum_{j=1}^{4} a_i * b_j - sum_{i=j} a_i * b_i. Would appreciate some help. Thank you. -- View this message in context: http://r.789695.n4.nabble.com/summation-coding-tp4646678.html Sent from the R
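A short sketch with made-up a and b: the double sum over j != i is the full cross product minus the i = j terms, so no loops are needed.

    a <- c(1, 2, 3, 4)             # made-up coefficients
    b <- c(5, 6, 7, 8)
    sum(a) * sum(b) - sum(a * b)   # sum_i sum_{j != i} a_i * b_j
    sum(a * (sum(b) - b))          # the same quantity, written term by term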
2009 May 01
2
Double summation limits
Dear R experts, I need to write a function that incorporates a double summation, the problem being that the upper limit of the second summation is the index of the first summation, i.e.: sum_{j=0}^{x} sum_{i=0}^{j} choose(i+j, i), where x is a variable or a constant, it doesn't matter. The following code obviously doesn't work: f=function(x) {j=0:x; i=0:j; sum( choose(i+j,i) ) } Can you help? Thanks
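A sketch of one working approach: let the inner upper limit depend on j by iterating over j (here with sapply), so i runs from 0 to j each time.

    f <- function(x) sum(sapply(0:x, function(j) sum(choose((0:j) + j, 0:j))))
    f(3)   # small value, easy to check against a hand calculation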
2002 Dec 11
3
Modified Bessel Function - 2nd kind
In order to fit a probability distribution proposed by Sichel [Journal of the Royal Statistical Society. Series A (General), Vol. 137, No. 1. (1974), pp. 25-34], I need a modified Bessel function of the 2nd kind. I notice that the base package of "R" only has modified Bessel functions of the 1st and 3rd kind. Does a modified Bessel function of the 2nd kind exist anywhere? Many
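For what it's worth, base R's besselK() evaluates K_nu(x); naming conventions differ between references, and K_nu is the function many texts call the modified Bessel function of the second kind (others, like the poster, call it the third kind). A quick check against the closed form for nu = 1/2:

    besselK(2, nu = 0.5)            # K_{1/2}(2)
    sqrt(pi / (2 * 2)) * exp(-2)    # sqrt(pi/(2x)) * exp(-x) at x = 2, for comparison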
2009 Oct 17
2
Recommendation on a probability textbook (conditional probability)
I need to refresh my memory on probability theory, especially on conditional probability. In particular, I want to solve the following two problems. Can somebody point me to some good books on probability theory? Thank you! 1. Z=X+Y, where X and Y are independent random variables and their distributions are known. Now, I want to compute E(X | Z = z). 2. Suppose that I have $I \times J$ random number
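For problem 1 in the continuous case, the usual route is the joint density of (X, Z): independence gives $f_{X,Z}(x,z) = f_X(x)\,f_Y(z-x)$, so

    E(X \mid Z = z) = \frac{\int x\, f_X(x)\, f_Y(z-x)\, dx}{\int f_X(x)\, f_Y(z-x)\, dx}

where the denominator is just the density of Z at z.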
2006 Oct 21
2
problem with mode of marginal distriubtion of rdirichlet{gtools}
Hi all, I have a problem using rdirichlet{gtools}. For Dir(a_1, a_2, ..., a_n), the mode is at $(a_i - 1)/(\sum_{i} a_i - n)$ and the means are $a_i / (\sum_{i} a_i)$. I tried to study the above properties using rdirichlet from gtools. The code is:
##############
library(gtools)
alpha = c(1,3,9)  # total = 13
mean.expect = c(1/13, 3/13, 9/13)
mode.expect = c(0, 2/10, 8/10)
#
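A sketch of the check the post sets up, with one caveat: the formula (a_i - 1)/(sum(alpha) - n) is the mode of the joint Dirichlet density, while each marginal is Beta(a_i, sum(alpha) - a_i), whose mode (for a_i > 1) is (a_i - 1)/(sum(alpha) - 2). So a density-based estimate of the column-wise modes should land near 2/11 and 8/11 rather than 2/10 and 8/10.

    library(gtools)
    alpha <- c(1, 3, 9)
    draws <- rdirichlet(100000, alpha)
    colMeans(draws)    # should be close to 1/13, 3/13, 9/13
    apply(draws, 2, function(z) { d <- density(z); d$x[which.max(d$y)] })  # crude marginal modes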
2005 Jun 14
1
within and between subject calculation
Dear helpers in this forum, I have the following question. Suppose I have the following data set:
id   x  y
023  1  2
023  2  5
023  4  6
023  5  7
412  2  5
412  3  4
412  4  6
412  7  9
220  5  7
220  4  8
220  9  8
......
and I want to calculate sum_{i=1}^k sum_{j=1}^{n_i} x_{ij}*y_{ij}. Is there a simple way to do this within and between subject summation in R?
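A sketch with the data above typed in: because every observation enters the double sum once, the whole thing collapses to sum(x * y), and the within-subject pieces come from tapply().

    dat <- data.frame(id = c(rep("023", 4), rep("412", 4), rep("220", 3)),
                      x  = c(1, 2, 4, 5, 2, 3, 4, 7, 5, 4, 9),
                      y  = c(2, 5, 6, 7, 5, 4, 6, 9, 7, 8, 8))
    with(dat, sum(x * y))              # the full double sum over subjects and observations
    with(dat, tapply(x * y, id, sum))  # within-subject sums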
2009 Mar 25
1
Confusion about ecdf
Hi, I'm a bit confused about ecdf (I read the help files but am still not sure about this). I have an analytical expression for the pdf but want to get the empirical cdf. How do I use this analytical expression with ecdf? If this helps make it concrete, the pdf is: f(u) = \sum_{t = 1}^T 1/n_t \sum_{i = 1}^{n_t} 1/w K((u - u_{it})/w), where K = kernel density estimator, w = weights, and u_{it} =
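One reading of the question, as a sketch: ecdf() needs raw data, not a density, so to turn an analytic pdf into a cdf you integrate it numerically on a grid. The pdf below is a stand-in, not the kernel mixture from the post.

    f <- function(u) dnorm(u)               # stand-in for the analytic pdf
    grid <- seq(-5, 5, length.out = 1001)
    cdf <- cumsum(f(grid)) * diff(grid)[1]  # crude Riemann-sum integration of the pdf
    plot(grid, cdf, type = "l", ylab = "F(u)")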
2010 Nov 23
2
[LLVMdev] Unrolling power sum calculations into constant time expressions
Hello, I noticed that feeding 'clang -O3' with functions like:
int sum1(int x) { int ret = 0; for (int i = 0; i < x; i++) ret += i; return ret; }
int sum2(int x) { int ret = 0; for (int i = 0; i < x; i++) ret += i*i; return ret; }
...
int sum20(int x) { int ret = 0; for (int i = 0; i < x; i++) ret +=
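For reference, these are the closed forms the first two loops reduce to (for x >= 0), which is what a constant-time unrolling has to produce:

    \sum_{i=0}^{x-1} i = \frac{x(x-1)}{2},
    \qquad
    \sum_{i=0}^{x-1} i^2 = \frac{(x-1)\,x\,(2x-1)}{6}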
2005 Jun 15
2
need help on computing double summation
Dear helpers in this forum, this is a clarified version of my previous question in this forum. I really need your generous help on this issue.
> Suppose I have the following data set:
>
> id   x  y
> 023  1  2
> 023  2  5
> 023  4  6
> 023  5  7
> 412  2  5
> 412  3  4
> 412  4  6
> 412  7  9
> 220  5  7
> 220  4  8
> 220  9  8
> ......
> Now I want to compute the
2013 Jun 23
1
Dsync only one mailbox
Hi, I am looking for a way to sync only selected files/mailboxes using dsync. Am I using the dsync -m option incorrectly? It looks like it's being ignored. And as for the main INBOX (/var/mail/username), what should the parameter for -m be?
dovecot --version 2.1.7
dsync -u pj -D -v -m Alerts -o mail_location=mdbox:/home/pj/mdbox backup mbox:/home/pj/:INBOX=/var/mail/pj
doveadm(root):
2006 Dec 08
1
MAXIMIZATION WITH CONSTRAINTS
Dear R users, I'm a graduate student and in my master's thesis I must obtain the values of the parameters x_i which maximize this multinomial log-likelihood function: log(n!) - sum_{i=1}^{4} log(n_i!) + sum_{i=1}^{4} n_i log(x_i), under the following constraints: a) sum_i x_i = 1, x_i >= 0; b) x_1 <= x_2 + x_3 + x_4; c) x_2 <= x_3 + x_4. I have been using the 'constrOptim' R function with the instructions
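A hedged sketch of one way to set this up (the cell counts n below are made up): substitute x_4 = 1 - x_1 - x_2 - x_3 so the equality constraint is built in, and hand the remaining inequalities to constrOptim() as ui %*% theta >= ci. The additive constants log(n!) - sum log(n_i!) do not affect the maximizer and are dropped.

    n <- c(30, 25, 25, 20)                 # assumed cell counts n_1, ..., n_4
    negll <- function(p) {                 # minimize the negative log-likelihood
      x <- c(p, 1 - sum(p))
      if (any(x <= 0)) return(Inf)
      -sum(n * log(x))
    }
    ui <- rbind(diag(3),                   # x1, x2, x3 >= 0
                c(-1, -1, -1),             # x4 = 1 - x1 - x2 - x3 >= 0
                c(-2,  0,  0),             # x1 <= x2 + x3 + x4, i.e. 2*x1 <= 1
                c(-1, -2,  0))             # x2 <= x3 + x4, i.e. x1 + 2*x2 <= 1
    ci <- c(0, 0, 0, -1, -1, -1)
    fit <- constrOptim(c(0.2, 0.25, 0.25), negll, grad = NULL, ui = ui, ci = ci)
    c(fit$par, 1 - sum(fit$par))           # estimated x_1, ..., x_4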
2012 Aug 16
1
sum predictions by hand
Hi, if I do a standard svm regression with e1071:
x <- seq(0.1, 5, by = 0.05)
y <- log(x) + rnorm(x, sd = 0.2)
m <- svm(x, y)
we can do predict(m, x) to get the fitted values. But what if I want to get them by hand? Seems to me like it should be
w = t(m$coefs) %*% m$SV
x.scaled = scale(x, m$x.scale[[1]], m$x.scale[[2]])
t(w %*% t(as.matrix(x.scaled))) - m$rho
but this is wrong. If I
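A hedged sketch of where the hand calculation usually goes wrong: the default svm() kernel is radial, so the fitted value is sum_i coefs_i * K(SV_i, x) - rho computed on the scaled x (the linear w only applies to kernel = "linear"), and the result then has to be mapped back through the y scaling. The all.equal() check at the end is the test of these assumptions.

    library(e1071)
    set.seed(1)
    x <- seq(0.1, 5, by = 0.05)
    y <- log(x) + rnorm(x, sd = 0.2)
    m <- svm(x, y)                                          # default radial kernel
    x.scaled <- scale(x, m$x.scale[[1]], m$x.scale[[2]])
    K <- exp(-m$gamma * outer(as.vector(m$SV), as.vector(x.scaled), "-")^2)
    pred.scaled <- as.vector(t(m$coefs) %*% K) - m$rho
    pred <- pred.scaled * m$y.scale[[2]] + m$y.scale[[1]]   # undo the y scaling
    all.equal(pred, as.vector(predict(m, x)))               # should be TRUE if the assumptions hold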
2003 Sep 12
1
asterisk and defunct perl procs
Trying to figure out why all of my test (and demo) perl scripts end up in a defunct status. Each run creates a problem. ps output:
root 26253  1356 0 16:39 pts/1 00:00:00 asterisk -vvvc
root 26270 26253 0 16:40 pts/1 00:00:00 [pj.pl <defunct>]
root 26271 26253 0 16:40 pts/1 00:00:00 [pj.pl <defunct>]
root 26273 26253 0 16:40 pts/1 00:00:00 [pj.pl
2008 Mar 27
1
functions
I wrote some functions for multiway CANDECOMP, i.e. for least squares fitting of a_{i_1\cdots i_m}\approx\sum_{s=1}^p x^1_{i_1 s} x^2_{i_2 s}\cdots x^m_{i_m s} with arrays of arbitrary dimension. Reminded me of the good old APL days. I could not find this in the archives, but if it's already there, I would appreciate it if someone let me know.
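Not the poster's code, just a sketch of the model being fitted in the 3-way case: a rank-p array built from hypothetical factor matrices X1, X2, X3, one rank-one term per component s.

    p <- 2
    X1 <- matrix(rnorm(4 * p), 4, p)   # hypothetical factor matrices
    X2 <- matrix(rnorm(3 * p), 3, p)
    X3 <- matrix(rnorm(5 * p), 5, p)
    A <- array(0, c(4, 3, 5))
    for (s in 1:p)
      A <- A + outer(outer(X1[, s], X2[, s]), X3[, s])   # rank-one term for component s
    dim(A)   # 4 3 5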
2009 May 18
8
Simple plotting errors
Dear R Users, I have 12 data frames, each of 12 rows and 2 columns, e.g. FeketeJAN:
                   MEAN    SUM_
AMAZON      144.4997874 68348.4
NILE          5.4701955  1394.9
CONGO        71.3670036 21196.0
MISSISSIPPI  18.9273250  6511.0
AMUR          1.8426874   466.2
PARANA       58.3835497 13486.6
YENISEI       1.4668313   592.6
OB            1.4239179   559.6
LENA          0.9342164
2017 Mar 20
3
Xen C6 kernel 4.9.13 and testing 4.9.15 only reboots.
On Monday, 20/03/2017, PJ Welsh wrote: > Still just starts the kernel and within 4 seconds reboots with 4.9.16-24. > Thanks > PJ Edit grub's entry and add "noreboot" to your Xen parameters; maybe when the kernel panics Xen detects it and automatically reboots. > On Mon, Mar 20, 2017 at 2:23 PM, Johnny Hughes <johnny at centos.org> wrote: > > On
2009 May 16
1
maxLik package
Hi all; I have recently been using the 'maxLik' function for maximizing the G2StNV178 function with gradient function gradlik. To reach this goal, I wrote the following program, but I get an error when the gradient function is called; the maxLik function can't enter the gradlik function (the definition of the gradient function). I guess my mistake is in line ********, where the vector 'h' is
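A minimal, self-contained maxLik sketch (not the poster's G2StNV178): the log-likelihood and its analytic gradient both take the parameter vector as their first argument, and the gradient must return one value per parameter. The scale is parameterized on the log scale so every parameter value is valid.

    library(maxLik)
    set.seed(1)
    x <- rnorm(200, mean = 2, sd = 1.5)
    loglik <- function(theta) {
      mu <- theta[1]; s <- exp(theta[2])      # log-sigma keeps the scale positive
      sum(dnorm(x, mu, s, log = TRUE))
    }
    gradlik <- function(theta) {
      mu <- theta[1]; s <- exp(theta[2])
      c(sum(x - mu) / s^2,                    # d logL / d mu
        -length(x) + sum((x - mu)^2) / s^2)   # d logL / d log(sigma)
    }
    fit <- maxLik(loglik, grad = gradlik, start = c(mu = 0, logsigma = 0))
    summary(fit)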
2006 Apr 27
15
Which is faster, calling helpers or rendering a partial?
Using partials is a nice way to separate chunks of content into separate pages as opposed to building strings in helpers, but I'm wondering which is faster. It scares me when I see stuff like:
Rendered users/_public (0.00051)
Rendered users/_public (0.00009)
Rendered users/_public (0.00008)
Rendered users/_public (0.00008)
Rendered users/_public (0.00008)
....50 more times
Has anyone
2008 Aug 15
2
Design-consistent variance estimate
Dear List: I am working to understand some differences between the results of the svymean() function in the survey package and code I have written myself. The results from svymean() also agree with results I get from SAS proc surveymeans, so this suggests I am misunderstanding something. I am never comfortable with the "I did what the software does" mentality, so I am working to
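A hedged sketch of this kind of comparison (made-up data, not the poster's): for a single-stage design specified only through weights (no strata, no fpc), svymean()'s standard error should match the linearization formula below, with z_i = w_i (y_i - ybar_w) / sum(w). When hand calculations disagree, the usual culprits are the n/(n-1) factor or centering at the unweighted rather than the weighted mean.

    library(survey)
    set.seed(42)
    df <- data.frame(y = rnorm(100, 50, 10), w = runif(100, 1, 5))  # hypothetical sample and weights
    des <- svydesign(ids = ~1, weights = ~w, data = df)
    svymean(~y, des)
    ybar <- with(df, sum(w * y) / sum(w))          # weighted mean by hand
    z <- with(df, w * (y - ybar) / sum(w))         # linearized influence values
    n <- nrow(df)
    sqrt(n / (n - 1) * sum(z^2))                   # hand-computed SE, for comparison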