Displaying 20 results from an estimated 6000 matches similar to: "wine static"
2011 Nov 18
1
Delete Rows Dynamically Within a Loop
OK guys, as requested, I will add more info so that you understand why a
simple vector operation is not possible. It's not easy to explain in a few
words, but let's see. I have a huge number of points over a 2D space.
I divide my space into a grid with a given resolution, say, 100 m. The main
loop, which I am not sure is mandatory or not (any alternative is welcome),
is to go through EACH
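Whatever the rest of that loop does, the cell lookup itself need not search anything; a minimal C sketch of the binning step being described, with all names and types hypothetical:

#include <math.h>

/* A sketch of mapping a point to its grid cell at a given resolution
   (e.g. 100 m): each (x, y) point maps to a cell index in O(1). */
void bin_point(double x, double y, double res, long *ix, long *iy)
{
    *ix = (long)floor(x / res);   /* column of the cell containing the point */
    *iy = (long)floor(y / res);   /* row of the cell containing the point    */
}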
2009 Feb 07
3
Re-post data format question (apologies)
Hello all,
I have a *.csv file that looks like this (actual file is orders of magnitude
larger):
Site    taxa  no.ind
forest  LMA   1
forest  LCY   1
forest  SCO   1
meadow  LMA   2
meadow  LCY   1
meadow  PNT
1999 Apr 10
2
IRIX compile (PR#163)
Full_Name: Tim Middelkoop
Version: 0.64.0
OS: IRIX 6.3 on O2
Submission from: (NULL) (128.119.88.192)
Various IRIX compile issues
src/nmath/pnt.c
IRIX cc does not like double negatives
Makeconf,config.site,etc/Makeconf
f77 pic hack, should change Makeconf.in or other...
replace -PIC with -KPIC
enjoy, tim...
===
diff -ru orig/R-0.64.0/src/nmath/pnt.c R-0.64.0/src/nmath/pnt.c
---
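A hedged guess at the "double negatives" note above, using a hypothetical macro (the actual pnt.c diff is not shown in full): old string-based preprocessors could merge a unary minus with a macro expanding to a bare negative constant.

#define NEG_LIMIT -1021          /* hypothetical macro, no parentheses */

double bad  = -NEG_LIMIT;        /* an old preprocessor may paste this into
                                    "--1021", which then fails to parse    */
double good = -(NEG_LIMIT);      /* parentheses keep the minuses apart     */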
2008 Jun 14
1
qt with ncp>37.62
help(qt) states that:
"ncp non-centrality parameter delta; currently except for rt(), only for
abs(ncp) <= 37.62"
so I would expect that calling qt with a non-centrality parameter exceeding
37.62 would fail; instead, e.g., calling
> mapply(function(x) qt(p = 0.9, df = 55, ncp = x),35:45)
gives:
[1] 40.21448 41.35293 42.49164 43.68862 44.82945 45.97048 47.11170 48.25310
[9]
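For reference, similar values can be produced from C against the standalone Rmath library; a hedged sketch (Rmath.h exposes the non-central t quantile as qnt(), which is what qt(p, df, ncp) reaches at the C level; link with -lRmath -lm):

#define MATHLIB_STANDALONE   /* use Rmath.h outside a full R process */
#include <Rmath.h>
#include <stdio.h>

int main(void)
{
    for (int ncp = 35; ncp <= 45; ncp++)
        printf("ncp = %d  qt(0.9, 55, ncp) = %.5f\n",
               ncp, qnt(0.9, 55.0, (double)ncp, /*lower_tail=*/1, /*log_p=*/0));
    return 0;
}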
1999 Apr 12
1
compiling R-0.64.0 on DEC osf4.0
Dear all,
Compiling R-0.64.0 on DEC OSF 4.0 gives the following error message
(an earlier version was OK).
cc -ieee_with_inexact -g -I../include -I../../src/include -c pnf.c -o pnf.o
cc -ieee_with_inexact -g -I../include -I../../src/include -c pnt.c -o pnt.o
cc: Error: pnt.c, line 83: In this statement, "1021" is not an lvalue, but occurs in a context that requires one.
if (df >
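A hedged reading of that message: DBL_MIN_EXP is -1021 for IEEE doubles, so the "1021" suggests a negated macro expansion. If the platform's <float.h> defines the macro without parentheses, "-DBL_MIN_EXP" can be pasted by an old preprocessor into "--1021", i.e. a pre-decrement of a constant, and a constant is not an lvalue. A minimal reconstruction of the portable form:

#include <float.h>

/* Parenthesizing the macro expansion site avoids the "--" token merge. */
double exp_bound = -(DBL_MIN_EXP);   /* 1021.0 for IEEE doubles */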
2007 Mar 21
3
question on suppressing error messages with Rmath library
Dear list,
I have been using the Rmath library for quite a while: in the current instance, I am calling dnt (the non-central t density function) repeatedly, several million times. When the argument is small, I get the warning message:
full precision was not achieved in 'pnt'
which is nothing unexpected. (The density calls pnt, if you look at the function dnt.) However, to have this happen a
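A minimal sketch of the setup described above against the standalone Rmath library (link with -lRmath -lm); the specific x/df/ncp values are illustrative, and small arguments are the regime the poster says triggers the warning:

#define MATHLIB_STANDALONE   /* standalone Rmath, no R process needed */
#include <Rmath.h>
#include <stdio.h>

int main(void)
{
    for (double x = 1e-8; x <= 1e-4; x *= 10.0)
        printf("dnt(%g, 10, 1) = %g\n", x, dnt(x, 10.0, 1.0, /*give_log=*/0));
    return 0;
}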
2007 May 31
3
zfs boot error recovery
hi all,
I would like to ask some questions regarding best practices for ZFS
recovery if disk errors occur.
Currently I have ZFS boot (nv62) and the following setup:
2 si3224 controllers (each 4 sata disks)
8 sata disks, same size, same type
i have two pools:
a) rootpool
b) datapool
The rootpool is a mirrored pool, where every disk has a slice (s0,
which is 5% of the whole disk), and this
2008 May 13
1
Catching warning message(stdout) from C
I'm using the 'pnt' C function from the Rmath library in some C code.
How can I catch the warning message "full precision was not achieved in
'pnt'" in R? I call the function using .C().
(options(warn=-1) didn't work.)
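A hedged workaround sketch (POSIX, names illustrative): options(warn=-1) only affects R-level warnings, but this message is printed to stdout from C, so one option is to silence stdout at the C level around the pnt() call before returning through .C():

#define MATHLIB_STANDALONE
#include <Rmath.h>
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>

double quiet_pnt(double t, double df, double ncp)
{
    fflush(stdout);
    int saved   = dup(STDOUT_FILENO);      /* remember the real stdout     */
    int devnull = open("/dev/null", O_WRONLY);
    dup2(devnull, STDOUT_FILENO);          /* route stdio to /dev/null     */
    double val = pnt(t, df, ncp, /*lower_tail=*/1, /*log_p=*/0);
    fflush(stdout);
    dup2(saved, STDOUT_FILENO);            /* restore stdout               */
    close(devnull);
    close(saved);
    return val;
}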
Thanks in advance
--
Maarten van Iterson
Center for Human and Clinical Genetics
Leiden University Medical Center (LUMC)
Research Building,
2010 May 08
5
Plugging in a hard drive after Solaris has booted up?
Hi guys,
I have a quick question: I am playing around with ZFS and here's what I did.
I created a storage pool with several drives and unplugged 3 of the 5 drives from the array; currently:
NAME          STATE     READ WRITE CKSUM
gpool         UNAVAIL      0     0     0  insufficient replicas
  raidz1      UNAVAIL      0     0     0  insufficient replicas
    c8t2d0    UNAVAIL      0     0
2009 Apr 13
2
academic papers that promote theora
Are there any academic papers that promote theora?
I know of one; are there more?
Universal Multimedia Access and Open Standards
<http://www.t4p.no/t4p.no/conference/media/Vaaler.pdf>
tom_a_sparks
Please avoid sending me Word or PowerPoint attachments.
but instead use OpenDocument File Formats or
use OpenOffice
http://en.wikipedia.org/wiki/OpenDocument
2008 Aug 21
1
pnmath compilation failure; dylib issue?
(1) ...need to speed up a Monte Carlo sampling...any suggestions about
how I can get R to use all 8 cores of a Mac Pro would be most useful
and very appreciated...
(2) spent the last few hours trying to get pnmath to compile under
OS X 10.5.4...
using gcc version 4.2.1 (Apple Inc. build 5553) as downloaded from
CRAN, Xcode 3.0...
...Xcode 3.1 installed over the top of the above after
2012 Jul 13
3
Installing R manually on Ubuntu
Hello friends:
Thanks for all the replies about how to filter data in a data frame.
Now I have a new problem: I need to install R 2.15 manually,
but I can't do it from a repository.
Any solution?
Regards,
Leonardo
2007 May 07
2
Unsupported CPU Error
Hi,
I have a problem: I tried to install wine 0.9.7 on a
Mac computer running Mac OS X.
I downloaded the source files from SourceForge,
configured the build, made the dependencies, and
finally ran "make"!...but...at the end a
terrible message appears:
#error unsupported cpu (minidump.c 172.2)....
I've tried this on two different computers: an IBM
PowerPC G5
2018 Jan 26
2
[lldb-dev] Trying out lld to link windows binaries (using msvc as a compiler)
We don't generate any .lib as those don't work well with incremental
linking (and give zero advantages when linking, AFAIK), and it would be
pretty easy to have a modern format for holding a .ghash for multiple files,
something simple like a size-prefixed name and then size-prefixed ghash
blobs (see the sketch below).
On Fri, Jan 26, 2018 at 8:44 PM, Zachary Turner <zturner at google.com> wrote:
> We
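A sketch of the size-prefixed layout floated above. Everything here (field order, u32 lengths, the function name) is an assumption for illustration, not an existing LLD format:

#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* One record: u32 name length, name bytes, u32 blob length, blob bytes. */
static int write_ghash_record(FILE *f, const char *name,
                              const uint8_t *ghash, uint32_t ghash_len)
{
    uint32_t name_len = (uint32_t)strlen(name);
    if (fwrite(&name_len, sizeof name_len, 1, f) != 1)    return -1;
    if (fwrite(name, 1, name_len, f) != name_len)         return -1;
    if (fwrite(&ghash_len, sizeof ghash_len, 1, f) != 1)  return -1;
    if (fwrite(ghash, 1, ghash_len, f) != ghash_len)      return -1;
    return 0;
}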
2018 Jan 28
2
[lldb-dev] Trying out lld to link windows binaries (using msvc as a compiler)
Look for this code in lld/coff/pdb.cpp
if (Config->DebugGHashes) {
  ArrayRef<GloballyHashedType> Hashes;
  std::vector<GloballyHashedType> OwnedHashes;
  if (Optional<ArrayRef<uint8_t>> DebugH = getDebugH(File))
    Hashes = getHashesFromDebugH(*DebugH);
  else {
    OwnedHashes = GloballyHashedType::hashTypes(Types);
    Hashes = OwnedHashes;
  }
In the else block there, add a log
2018 Jan 30
4
[lldb-dev] Trying out lld to link windows binaries (using msvc as a compiler)
You can make a PDB per lib (consider msvcrtd.pdb, which ships with MSVC),
but all these per-lib PDBs would have to be merged into a single master PDB
at the end, so you still can't avoid that final merge. In a way, that's similar
to the idea behind /DEBUG:FASTLINK (keep the debug info in object files to
eliminate the cost of merging types and symbol records), and we know what
the problems with
2018 Jan 26
2
[lldb-dev] Trying out lld to link windows binaries (using msvc as a compiler)
it does.
I just had an epiphany: why not just write a .ghash file and have lld read
them if they exist for an .obj file?
That seems much simpler than trying to wire up a 20-year-old file format. I will
try to do this; is something like this acceptable for LLD? The cool thing
is that I can generate .ghash for .lib or any obj lying around (maybe even
for pdb in the future).
On Fri, Jan 26, 2018 at
2018 Jan 28
2
[lldb-dev] Trying out lld to link windows binaries (using msvc as a compiler)
I don't have PGO numbers. When I build using -flto=thin, link time is
significantly faster than MSVC /LTCG and runtime is slightly faster, but I
haven't tested a large variety of workloads, so YMMV. Link
time will definitely be faster, though.
On Sun, Jan 28, 2018 at 2:20 PM Leonardo Santagada <santagada at gmail.com>
wrote:
> This part is only for objects with /Z7 debug
2018 Jan 29
2
[lldb-dev] Trying out lld to link windows binaries (using msvc as a compiler)
Not a lot.
/TIME will show high-level timing of the various phases (this is the same
option MSVC uses).
If you want anything more detailed than that, VTune or ETW+WPA
(https://github.com/google/UIforETW/releases) are probably what you'll
need.
(We'd definitely love patches to improve performance, or even just ideas
about how to make things faster. Improving link speed is one of
2018 Jan 29
2
[lldb-dev] Trying out lld to link windows binaries (using msvc as a compiler)
Part of the reason lld is so fast is that we map every input file
into memory up front and rely on the virtual memory manager in the kernel
to make this fast. Generally speaking, this is a lot faster than opening a
file, reading and processing it, and closing it. The
downside, as you note, is that it uses a lot of memory.
But there's a catch. The kernel is smart enough
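A hedged sketch of the strategy described above (not lld's actual code): map the whole input file read-only and let the kernel fault pages in on demand, instead of read()ing the file into a heap buffer.

#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>
#include <stddef.h>

const char *map_whole_file(const char *path, size_t *len)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0) return NULL;
    struct stat st;
    if (fstat(fd, &st) < 0) { close(fd); return NULL; }
    void *p = mmap(NULL, (size_t)st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    close(fd);                       /* the mapping outlives the descriptor */
    if (p == MAP_FAILED) return NULL;
    *len = (size_t)st.st_size;
    return (const char *)p;          /* pages are loaded lazily on access   */
}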