Displaying 20 results from an estimated 100 matches similar to: "extract same columns and rows in two matrices"
2012 Mar 05
1
index instead of loop?
Hello,
Does anyone know of a way I can speed this up? Basically I'm attempting to
get the data item on the same row as the report date for each report date
available. In reality, I have over 11k columns, not just A, B, C, and D, and
I have to do that over 100 times. My solution is slow, but it works. The
loop is slow because of merge.
# create sample data
z.dates =
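(The sample data above is cut off. As a minimal, hypothetical sketch of the
indexing idea being asked about, with made-up objects rather than the poster's
z.dates and columns: match() finds the row positions of all report dates at
once, so the values can be picked out by indexing instead of calling merge()
inside a loop.)

  # hypothetical data: one date column plus value columns A..D
  all.dates <- seq(as.Date("2011-01-01"), by = "day", length.out = 400)
  dat <- data.frame(date = all.dates,
                    A = rnorm(400), B = rnorm(400),
                    C = rnorm(400), D = rnorm(400))
  report.dates <- sample(all.dates, 100)   # report dates of interest

  # one vectorised lookup instead of a merge inside a loop
  idx <- match(report.dates, dat$date)
  result <- data.frame(date = report.dates, dat[idx, c("A", "B", "C", "D")])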
2020 Mar 23
3
[InstCombine] Addrspacecast and GEP assumed commutative
I'm not sure what the usual "ping time" is for llvm-dev, but may I ask if there are any updates on this?
It appears that the following lines are the root cause of the reordering (https://github.com/llvm/llvm-project/blob/fdcb27105537f77c78c4473d4f7c47146ddbab69/llvm/lib/Transforms/InstCombine/InstructionCombining.cpp#L2175):
// Handle gep(bitcast x) and gep(gep x, 0, 0, 0).
Value
2007 Apr 11
1
bind or samba configuration preventing browsing network
I have a networking problem where I am not certain if the problem is
samba or bind. I am still pretty much a newbie at Linux. The machine in
question is running openSuSE 10.2 and is named rd1. I had samba working
fine before I started to make it a WINS server and DNS host.
I have a small LAN with no real administration functionality. The
network is used for simple file sharing and dial-up
2013 Mar 15
1
order of APPEND and INITRD
Igor asked about APPEND:
> In other words: Can I break up a long line into multiple lines
> in 5.01 or 5.10pre now or is that still not supported?
I also wonder: can one control whether the INITRD parameter gets
prepended or appended? Right now it seems to be placed after APPEND
parameters, but it might be more useful if it came first.
The Debian/Ubuntu installers will copy parameters
2014 Oct 24
3
[LLVMdev] IndVar widening in IndVarSimplify causing performance regression on GPU programs
Hi,
I noticed a significant performance regression (up to 40%) on some internal
CUDA benchmarks (a reduced example is presented below). The root cause of this
regression seems to be that IndVarSimplify widens induction variables assuming
that arithmetic on wider integer types is as cheap as on narrower ones.
However, this assumption is wrong at least for the NVPTX64 target.
Although the NVPTX64 target
2012 Mar 03
0
removing data look-ahead, something faster.
Hello,
Thank you for your help/advice!
The issue here is speed/efficiency. I can do what I want, but it's really
slow.
The goal is to have the ability to do calculations on my data and have it
adjusted for look-ahead. I see two ways to do this:
(I'm open to more ideas. My terminology: Unadjusted = values not adjusted
for look-ahead bias; adjusted = values adjusted for look-ahead bias.)
1) I
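(The poster's two approaches are cut off above. As a minimal, hypothetical
sketch of the general idea in R: to remove look-ahead, shift each value so it
only becomes usable from the date after it was reported.)

  # hypothetical series of reported values
  vals <- data.frame(date = as.Date("2012-01-01") + 0:9,
                     unadjusted = rnorm(10))
  # adjusted: each value is only available from the next date onward
  vals$adjusted <- c(NA, head(vals$unadjusted, -1))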
2006 Sep 27
0
umask and logging in openssh
I looked through the FAQ and archive and haven't seen any mention of
this. Has it been considered to make the sftp logging patch maintained by
Michael Martinez at sftplogging.sourceforge.net part of the mainstream
sftp-server? Being able to configure the default umask for sftp
users who don't run a shell, and providing ftp level logging
functionality typically available in other ftp
2010 Apr 07
0
question about fold function
Dear all,
I'm trying to use the fold function as described here:
http://cran.r-project.org/doc/contrib/Fox-Companion/appendix-cox-regression.pdf
Page 9.
It does say that you can use this when you have more than one time-varying
covariate; in the description of the argument cov it says:
"cov: A vector giving the column numbers of the time-dependent covariate in
data, or a list of
2017 Oct 11
1
[PATCH v1 01/27] x86/crypto: Adapt assembly for PIE support
Change the assembly code to use only relative references of symbols for the
kernel to be PIE compatible.
Position Independent Executable (PIE) support will allow extending the
KASLR randomization range below the -2G memory limit.
Signed-off-by: Thomas Garnier <thgarnie at google.com>
---
arch/x86/crypto/aes-x86_64-asm_64.S | 45 ++++++++-----
arch/x86/crypto/aesni-intel_asm.S
2017 Jul 25
0
[Questions] About small files performance
Dear all
Recently, I did some work to test small-file performance for the gnfsv3
transport. The following is my scenario.
#####environment#####
==2 cluster nodes(nodeA/nodeB)==
each is equipped with E5-2650*2, 128G memory and 10GB*2 network cards
nodeA: 10.254.3.77 10.128.3.77
nodeB: 10.254.3.78 10.128.3.78
==2 stress nodes(clientA/clientB)==
each is equipped with E5-2650*2, 128G memory and 10GB*2
2013 Mar 29
1
Create values based on a table of conditions
Hi R help forum,
I have a simple data frame of four columns - one of numbers (really a
categorical variable), one of dates and one
of data. I have over 500,000 data points to work with, spread over 40
files, each named after a different animal.
These are contact data recorded by proximity loggers over two years
between the animal each file is named after and
collars worn by other animals. The
2013 Apr 26
2
Remove reciprocal data from a grouped animal social contact dataset
Hi r-help forum,
I have been collecting contact data (with proximity logger collars)
between a few different species of animal. All animals wear the
collars, and any contact between the animals should be detected and
recorded by both collars. However, this isn't always the case and more
contacts may be recorded on one of the two collars. This is fine; it
depends on battery life and other
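(A minimal sketch of one way to drop reciprocal records in R, with made-up
column names id1, id2 and time; real data would likely also need a time
tolerance rather than an exact timestamp match.)

  contacts <- data.frame(id1  = c("A", "B", "C"),
                         id2  = c("B", "A", "A"),
                         time = as.POSIXct("2013-04-01 12:00", tz = "UTC") + c(0, 0, 3600),
                         stringsAsFactors = FALSE)
  # unordered pair key, so A-B and B-A at the same time collapse together
  key <- paste(pmin(contacts$id1, contacts$id2),
               pmax(contacts$id1, contacts$id2),
               contacts$time)
  deduped <- contacts[!duplicated(key), ]   # keeps one record per pair and time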
2009 Mar 12
1
Using one buffer object per (EXA) pixmap potentially wastes memory.
I've been doing some testing and it seems that glyphs (typically
smaller than a page size on nv50) render slow'ish sometimes, mostly
due to the many trips to the kernel (drm_addmap_core is the symbol
that shows up). Of course this was a benchmark, but it does leave me
wondering: do we really need to call the kernel for every little
pixmap? Also from a memory point of view, because a
2018 Mar 13
32
[PATCH v2 00/27] x86: PIE support and option to extend KASLR randomization
Changes:
- patch v2:
- Adapt patch to work post KPTI and compiler changes
- Redo all performance testing with latest configs and compilers
- Simplify mov macro on PIE (MOVABS now)
- Reduce GOT footprint
- patch v1:
- Simplify ftrace implementation.
- Use gcc mstack-protector-guard-reg=%gs with PIE when possible.
- rfc v3:
- Use --emit-relocs instead of -pie to reduce
2017 Oct 04
28
x86: PIE support and option to extend KASLR randomization
These patches make the changes necessary to build the kernel as Position
Independent Executable (PIE) on x86_64. A PIE kernel can be relocated below
the top 2G of the virtual address space. It allows optionally extending the
KASLR randomization range from 1G to 3G.
Thanks a lot to Ard Biesheuvel & Kees Cook for their feedback on compiler
changes, PIE support and KASLR in general. Thanks to
2018 May 23
33
[PATCH v3 00/27] x86: PIE support and option to extend KASLR randomization
Changes:
- patch v3:
- Update on message to describe longer term PIE goal.
- Minor change on ftrace if condition.
- Changed code using xchgq.
- patch v2:
- Adapt patch to work post KPTI and compiler changes
- Redo all performance testing with latest configs and compilers
- Simplify mov macro on PIE (MOVABS now)
- Reduce GOT footprint
- patch v1:
- Simplify ftrace
2017 Oct 11
32
[PATCH v1 00/27] x86: PIE support and option to extend KASLR randomization
Changes:
- patch v1:
- Simplify ftrace implementation.
- Use gcc mstack-protector-guard-reg=%gs with PIE when possible.
- rfc v3:
- Use --emit-relocs instead of -pie to reduce dynamic relocation space on
mapped memory. It also simplifies the relocation process.
- Move the start of the module section next to the kernel. Remove the need for
-mcmodel=large on modules. Extends