Displaying 20 results from an estimated 2000 matches similar to: "[PATCH] Add SM3 secure hash algorithm"
2024 Aug 07
1
[PATCH] Add SM3 secure hash algorithm
Hi,
This implementation looks fine, but there is no specification for using
SM3 in the SSH protocol. Could I suggest that you start by talking to the
IETF to get the standardisation process started?
https://mailman3.ietf.org/mailman3/lists/ssh.ietf.org/ is a good mailing
list to start at. There have been recent conversations in the IETF about
how best to handle national cryptographic standards
2007 Aug 31
2
memory.size help
I keep getting the 'memory.size' error message when I run a program I have
been writing. It always says it cannot allocate a vector of a certain size. I
believe the error comes from the code fragment below, where I have multiple
arrays that could be taking up space. Does anyone know a good way around
this?
w1 <- outer(xk$xk1, data[,x1], function(y,z) abs(z-y))
w2 <- outer(xk$xk2,
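The snippet is cut off, but the pattern is visible: each outer() call materializes a
full length(xk$xk1) by ncol matrix, and holding several such w matrices at once is
what exhausts memory.size(). A minimal sketch of one workaround, with placeholder
vectors standing in for xk$xk1 and data[, x1] (the real objects are not shown above):
process the comparison in column chunks so only a small slice of the outer() matrix
exists at any time.

xk1   <- runif(100)                    # placeholder for xk$xk1
xvals <- runif(50000)                  # placeholder for data[, x1]
chunked_abs_sums <- function(knots, values, chunk = 5000L) {
  out <- numeric(length(knots))
  for (start in seq(1L, length(values), by = chunk)) {
    idx <- start:min(start + chunk - 1L, length(values))
    # this outer() is only length(knots) x chunk, so peak memory stays bounded
    out <- out + rowSums(abs(outer(knots, values[idx], "-")))
  }
  out
}
w1_sums <- chunked_abs_sums(xk1, xvals)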
2008 Jan 29
2
Using Predict and GLM
Dear R Help,
I read through the archives pretty extensively before sending this
email, as it seemed there were several threads on using predict with
GLM. However, while my issue is similar to previous posts (cannot
get it to predict using new data), none of the suggested fixes are
working.
The important bits of my code:
set.seed(644)
n0=200 #number of observations
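A minimal sketch of the usual predict()-with-newdata pattern, using an invented
one-predictor model rather than the poster's (whose code is cut off above): the key
point is that newdata must be a data frame whose column names exactly match the
variables in the model formula, otherwise predict() quietly falls back to the fitted
values.

set.seed(644)
n0 <- 200                                       # number of observations
d  <- data.frame(x = rnorm(n0))
d$y <- rbinom(n0, 1, plogis(-1 + 2 * d$x))      # hypothetical binary response
fit  <- glm(y ~ x, family = binomial, data = d)
newd <- data.frame(x = seq(-3, 3, by = 0.5))    # column name "x" matches the formula
predict(fit, newdata = newd, type = "response")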
2020 May 04
2
"Earlyclobber" but for a subset of the inputs
Hi all,
I'm working on a target whose registers have equal-sized subregisters and
all of those subregisters can be named (or the other way round: registers
can be grouped into super registers).
So for instance we've got 16 registers W (as in wide) W0..W15 and 32
registers N (as in narrow) N0..N31. This way, W0 is made by grouping N0 and
N1, W1 is N2 and N3, W2 is N4 and N5, ..., W15 is
2008 Mar 22
1
Simulating Conditional Distributions
Dear R-Help List,
I'm trying to simulate data from a conditional distribution, and
haven't been able to modify my existing code to do so. I searched
the archives, but didn't find any previous post that matched my
question.
n=10000
pop = data.frame(W1 = rbinom(n, 1, .2),
W2 = runif(n, min = 3, max = 8), W3 = rnorm(n, mean=0, sd=2))
pop = transform(pop,
A = rbinom(n, 1,
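The excerpt stops mid-expression, but the general recipe for simulating from a
conditional distribution is to make the parameter of the draw a function of the
covariates already simulated. A minimal sketch with an invented dependence of A on
W1..W3 (the coefficients below are placeholders, not the poster's model):

n <- 10000
pop <- data.frame(W1 = rbinom(n, 1, .2),
                  W2 = runif(n, min = 3, max = 8),
                  W3 = rnorm(n, mean = 0, sd = 2))
# P(A = 1 | W) is a (made-up) logistic function of the covariates
pop <- transform(pop,
                 A = rbinom(n, 1, plogis(-2 + 0.5 * W1 + 0.2 * W2 + 0.3 * W3)))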
2005 May 13
2
not deleting from the root
I have a bit of an issue with rsync. I am using it to keep directories in
sync via another server for backup.
Here is the server config
[w1]
path = /w1
comment = w1 web dir
[w2]
path = /w2
comment = w2 web dir
Now on the client i run this command
rsync -avv --delete --force domain.com::w1/ /w1/
It will NOT delete anything that is not on the server anymore. For
example on the server/client there
2008 Apr 05
2
How to improve the "OPTIM" results
Dear R users,
I used "OPTIM" to minimize the objective function below. Even though I used
the true parameter values as initial values, the results are not very good.
How could I improve my results? Any suggestion will be greatly appreciated.
Regards,
Kathryn Lord
#------------------------------------------------------------------------------------------
x = c(0.35938587,
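The objective function and data are cut off above, so the following is only a generic
sketch of two things that often improve optim() results: an unconstrained
parameterization (e.g. optimizing log(sd) instead of sd) and a gradient-based method
with sensible scaling. The data vector here is simulated and purely illustrative:

x <- rnorm(200, mean = 1.5, sd = 0.7)            # illustrative data only
negloglik <- function(p)
  -sum(dnorm(x, mean = p[1], sd = exp(p[2]), log = TRUE))
# optimize log(sd) so the search is unconstrained; parscale rescales if needed
fit <- optim(c(0, 0), negloglik, method = "BFGS",
             control = list(parscale = c(1, 1), maxit = 500))
fit$par                                          # (mean, log sd)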
2020 May 05
2
"Earlyclobber" but for a subset of the inputs
Hi Quentin,
> It sounds like you only need the earlyclobber description for the N, N
> variant.
> In other words, as long as you use different opcodes for widen-op NN and
> widen-op WN, you model exactly what you want.
>
> What am I missing?
>
we are using different opcodes for widen-op NN and widen-op WN.
My understanding is that not setting earlyclobber to the W, N
2011 Mar 10
1
getting percentiles by factor
Hello,
I'm trying to get percentiles (PERCENTRANK for excel users) by factor in the
following data.frame:
myExample <- data.frame(Ret=seq(-2, 2.5,
by=0.5),PE=seq(10,19),Sectors=rep(c("Financial","Industrial"),5))
myExample <- na.omit(myExample)
Thanks to Patrick I managed to put together the following lines, which do
it for the "Ret" column:
myecdf
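The excerpt ends at the ecdf step; one compact way to get a PERCENTRANK-style value
within each factor level is ave() with ecdf(), sketched here on the myExample frame
defined above (the new column name RetPct is invented):

# percentile of each Ret value within its own Sector
myExample$RetPct <- ave(myExample$Ret, myExample$Sectors,
                        FUN = function(v) ecdf(v)(v))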
2012 Nov 28
1
Help setting optimization problem to include more constraints
Dear R-helpers,
I am struggling with an optimization problem at the moment and decided to
write to the list looking for some help. I will use a very small example to
explain what I would like to do. Thanks in advance for your help.
We would like to distribute resources from 4 warehouses to 3 destinations.
The costs associated are as follows:
                 Destination
From          1        2        3      Total
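The cost table is truncated above, but a 4-warehouse by 3-destination minimum-cost
shipment problem with supply and demand constraints is a standard transportation LP,
which lpSolve::lp.transport solves directly. A minimal sketch with invented costs,
supplies and demands, since the original numbers are not shown:

library(lpSolve)
costs  <- matrix(c(10,  2, 20,
                   12,  7,  9,
                    4, 14, 16,
                    9, 20,  8), nrow = 4, byrow = TRUE)   # hypothetical costs
supply <- c(25, 25, 50, 50)              # capacity of each warehouse (assumed)
demand <- c(60, 40, 50)                  # requirement of each destination (assumed)
sol <- lp.transport(costs, "min",
                    row.signs = rep("<=", 4), row.rhs = supply,
                    col.signs = rep(">=", 3), col.rhs = demand)
sol$solution                             # optimal shipment matrix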
2011 Jan 10
2
Calculating Portfolio Standard deviation
Dear R helpers
I have following data
stocks <- c("ABC", "DEF", "GHI", "JKL")
prices_df <- data.frame(ABC = c(17,24,15,22,16,22,17,22,15,19),
DEF = c(22,28,20,20,28,26,29,18,24,21),
GHI = c(32,27,32,36,37,37,34,23,25,32),
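The listing is cut off, but the usual calculation is price series -> period returns ->
covariance matrix -> sqrt(w' S w). A minimal sketch using only the three columns
visible above and equal weights (the JKL column and the intended weights are
assumptions here):

prices_df <- data.frame(ABC = c(17,24,15,22,16,22,17,22,15,19),
                        DEF = c(22,28,20,20,28,26,29,18,24,21),
                        GHI = c(32,27,32,36,37,37,34,23,25,32))
rets <- apply(prices_df, 2, function(p) diff(p) / head(p, -1))  # simple returns
w    <- rep(1 / ncol(rets), ncol(rets))                         # equal weights (assumed)
port_sd <- sqrt(as.numeric(t(w) %*% cov(rets) %*% w))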
2002 Jan 30
1
Hi,
Hi,
Sorry for the confusion.
I would like to estimate a model wherein
the marginals of z with respect to w1 and w2
are smooth functions of x and y. I have data
on z, x, y, w1 and w2.
so E[dz/dw1] = f(x,y) and E[dz/dw2] = g(x,y)
and I would like to estimate f(x,y) and g(x,y)
I suppose I could try to fit something more general
using projection pursuit, but the nature of the problem
suggests
2020 Apr 26
2
assembly code for array iteration generated by llvm is much slower than gcc
Hi all developers,
I'm changing compilers from gcc to llvm on a RISC-V target now, but I found that in some cases the assembly code generated by llvm is much longer than gcc's. This causes about a 40% decrease in my program's performance.
The following is a simple test case that shows the problem. We can see that gcc prefers to use a pointer to iterate over the array, but llvm prefers to use an index to iterate
2013 Jan 09
2
Using objects within functions in formulas
Dear all,
I'm looking to create a formula within a function to pass to glmer()
and I'm having a problem that the following example will illustrate:
library(lme4)
y1 = rnorm(10)
x1 = data.frame(x11=rnorm(10), x12=rnorm(10), x13=rnorm(10))
x1 = data.matrix(x1)
w1 = data.frame(w11=sample(1:3,10, replace=TRUE), w12=sample(1:3,10,
replace=TRUE), w13=sample(1:3,10, replace=TRUE))
test1 <-
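The example stops at test1 <-, but the usual fix for building a formula inside a
function is to assemble it as a string (or with reformulate()) and convert it with
as.formula() before passing it to the fitting function; variables are then looked up
in the supplied data rather than captured from the function body. A minimal sketch of
that pattern (the wrapper fit_one and the toy data are invented, and lmer() is used
because the toy response is Gaussian; the same idea applies to glmer() with a family
argument):

library(lme4)
n   <- 100
dat <- data.frame(y1  = rnorm(n),
                  x11 = rnorm(n), x12 = rnorm(n), x13 = rnorm(n),
                  grp = factor(sample(1:3, n, replace = TRUE)))
fit_one <- function(pred, data) {
  f <- as.formula(paste("y1 ~", pred, "+ (1 | grp)"))   # build the formula from text
  lmer(f, data = data)
}
m <- fit_one("x11", dat)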
2002 Jan 28
6
Almost a GAM?
Hello:
I sent this question the other day with the wrong subject
heading and a couple of typos, with no response. So
here I go again, having made those corrections.
I would like to estimate, for lack of a better description,
a partially additive non-parametric model with the following
structure:
z ~ f(x,y):w1 + g(x,y):w2 + e
In other words, I'd like to estimate the marginals with
respect to
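The model z ~ f(x,y):w1 + g(x,y):w2 + e is a varying-coefficient model, and
mgcv::gam() can express it directly with "by" terms: s(x, y, by = w1) estimates a
smooth surface that multiplies w1, which is exactly the marginal E[dz/dw1] = f(x,y)
described in the related post above. A minimal sketch on simulated data (all names
and the true f and g below are invented for illustration):

library(mgcv)
n  <- 500
x  <- runif(n); y <- runif(n)
w1 <- rnorm(n); w2 <- rnorm(n)
f  <- function(x, y) sin(2 * pi * x) * y          # "true" f(x,y), for illustration
g  <- function(x, y) (x - 0.5)^2 + y              # "true" g(x,y)
z  <- f(x, y) * w1 + g(x, y) * w2 + rnorm(n, sd = 0.1)
# s(x, y, by = w1) fits a smooth coefficient surface multiplying w1
fit <- gam(z ~ s(x, y, by = w1) + s(x, y, by = w2))
plot(fit)                                          # estimated f and g surfaces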
2009 Jan 23
1
Anova and unbalanced designs
Dear R-list!
My question is related to an ANOVA including within- and between-subject
factors and unequal group sizes.
Here is a minimal example of what I did:
library(car)
within1 <- c(1,2,3,4,5,6,4,5,3,2); within2 <- c(3,4,3,4,3,4,3,4,5,4)
values <- data.frame(w1 = within1, w2 = within2)
values <- as.matrix(values)
between <- factor(c(rep(1,4), rep(2,6)))
betweenanova <-
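The excerpt stops just before the Anova() call; the usual pattern for a mixed
within/between design is to fit one multivariate lm on the matrix of within-subject
columns and hand Anova() an idata/idesign description of the within factor. A minimal
sketch continuing the objects defined above (note that Type III tests are only
meaningful with sum-to-zero contrasts):

library(car)
options(contrasts = c("contr.sum", "contr.poly"))  # needed for sensible Type III tests
within1 <- c(1,2,3,4,5,6,4,5,3,2); within2 <- c(3,4,3,4,3,4,3,4,5,4)
values  <- as.matrix(data.frame(w1 = within1, w2 = within2))
between <- factor(c(rep(1, 4), rep(2, 6)))         # unbalanced: 4 vs 6 subjects
mlm   <- lm(values ~ between)                      # one multivariate lm on both columns
idata <- data.frame(time = factor(c("w1", "w2")))  # one row per within-subject column
Anova(mlm, idata = idata, idesign = ~ time, type = "III")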
2003 Jun 01
1
daemon crashes
Linux: RedHat 7.1
Samba: 2.2.7
Windoze #1: 98SE
Windoze #2: W2K
Here is the situation: copy files from W1 to Linux. At the same time, transfer
those files from Linux to W2. Of course, the transfer from Linux to W2
doesn't occur until the particular file has completed the transfer from
W1 to Linux. This scenario of dual transfers from the same area on the Linux
disc will ultimately cause the smbd
2012 May 29
2
Wilcoxon-Mann-Whitney U value: outcomes from different stat packages
Given this example
#start code
a<-c(0,70,50,100,70,650,1300,6900,1780,4930,1120,700,190,940,
760,100,300,36270,5610,249680,1760,4040,164890,17230,75140,1870,22380,5890,2430)
b<-c(0,0,10,30,50,440,1000,140,70,90,60,60,20,90,180,30,90,
3220,490,20790,290,740,5350,940,3910,0,640,850,260)
wilcox.test(a, b, paired=FALSE)
#sum of rank for first sample
sum.rank.a <-
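The excerpt stops at the rank-sum line; the relationship that usually explains
discrepancies between packages is that R's W is one of the two Mann-Whitney U
statistics and U1 + U2 = n1*n2, so another program may simply report the other one. A
minimal sketch continuing the a and b vectors above:

n1 <- length(a); n2 <- length(b)
r  <- rank(c(a, b))                        # ranks of the pooled sample
sum.rank.a <- sum(r[seq_len(n1)])          # rank sum of the first sample
U1 <- sum.rank.a - n1 * (n1 + 1) / 2       # this is the W printed by wilcox.test()
U2 <- n1 * n2 - U1                         # the U value some other packages report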
2003 Feb 11
1
mean function on correlation matrices (PR#2540)
Full_Name: Raymond Salvador
Version: R 1.6.2
OS: Windows ME
Submission from: (NULL) (131.111.93.195)
The mean function applied to individual components of several correlation
matrices gives a wrong result (it returns the first value instead of the mean).
Here is a simple example:
x1 <- rnorm(10,1,1)
y1 <- rnorm(10,1,1)
z1 <- cbind(x1,y1)
w1 <- cor(z1)
x2 <- rnorm(10,1,1)
y2
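The example is cut off, but the reported behaviour is consistent with mean() taking a
single data argument: in a call like mean(w1[1,2], w2[1,2]) the second value is
matched to mean()'s other arguments and effectively ignored, so the first value comes
back unchanged. The element-wise average has to be formed explicitly, sketched here in
the spirit of the report:

x1 <- rnorm(10, 1, 1); y1 <- rnorm(10, 1, 1); w1 <- cor(cbind(x1, y1))
x2 <- rnorm(10, 1, 1); y2 <- rnorm(10, 1, 1); w2 <- cor(cbind(x2, y2))
mean(c(w1[1, 2], w2[1, 2]))                # combine the values first, then average
avg_cor <- Reduce(`+`, list(w1, w2)) / 2   # element-wise average of the matrices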