Displaying 20 results from an estimated 47894 matches for "performs".
2019 Sep 17
2
Building LLVM with LLVM with no dependence on GCC
Hi folks!
I'm trying to get rid of any dependency on libgcc*, but without success so
far. The following commands were executed on a freshly installed and updated
Ubuntu 16.04 LTS:
=== snip ===
sudo apt-get install build-essential libffi-dev cmake # see aptget.txt for packages installed
sudo mv /usr/local /usr/local.orig
git clone https://github.com/llvm/llvm-project.git
cd llvm-project; git
2019 Sep 20
2
Building LLVM with LLVM with no dependence on GCC
Thus wrote David Demelier via llvm-dev:
> Also you will need to add more options to the components. See for example:
>
> LIBCXX_CXX_ABI=libcxxabi
> LIBCXX_USE_COMPILER_RT=On
> LIBCXXABI_USE_LLVM_UNWINDER=On
> LIBCXXABI_USE_COMPILER_RT=On
> LIBCXX_HAS_GCC_S_LIB=Off
> LIBUNWIND_USE_COMPILER_RT=On
>
> And as mentioned above
>
> CLANG_DEFAULT_CXX_STDLIB=libc++
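For reference, a minimal sketch of how those options might be combined into a
single configure step (assuming a monorepo checkout and the Ninja generator;
the LLVM_ENABLE_PROJECTS list and the paths are illustrative, not from the thread):
=== snip ===
mkdir build && cd build
cmake -G Ninja ../llvm \
  -DCMAKE_BUILD_TYPE=Release \
  -DLLVM_ENABLE_PROJECTS="clang;compiler-rt;libcxx;libcxxabi;libunwind" \
  -DCLANG_DEFAULT_CXX_STDLIB=libc++ \
  -DLIBCXX_CXX_ABI=libcxxabi \
  -DLIBCXX_USE_COMPILER_RT=On \
  -DLIBCXXABI_USE_LLVM_UNWINDER=On \
  -DLIBCXXABI_USE_COMPILER_RT=On \
  -DLIBCXX_HAS_GCC_S_LIB=Off \
  -DLIBUNWIND_USE_COMPILER_RT=On
=== snip ===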
2016 Feb 25
2
Building with LLVM_PARALLEL_XXX_JOBS
Hi,
I switched from the "configure and make" build system to "cmake" and
wanted to speed up my build.
In my build-script I use...
CMAKE_JOBS="1"
##CMAKE_JOBS=$(($(getconf _NPROCESSORS_ONLN)+1))
JOBS_CMAKE_OPTS="-DLLVM_PARALLEL_COMPILE_JOBS=$CMAKE_JOBS
-DLLVM_PARALLEL_LINK_JOBS=$CMAKE_JOBS"
[1] says in "LLVM-specific variables" section...
***
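As far as I know these variables are implemented via Ninja job pools, so they
only take effect with the Ninja generator. A minimal sketch of a configure line
using them (the job counts are illustrative; link jobs are often capped lower
because linking is memory-hungry):
=== snip ===
CMAKE_JOBS=$(($(getconf _NPROCESSORS_ONLN)+1))
cmake -G Ninja ../llvm \
  -DCMAKE_BUILD_TYPE=Release \
  -DLLVM_PARALLEL_COMPILE_JOBS=$CMAKE_JOBS \
  -DLLVM_PARALLEL_LINK_JOBS=1
ninja
=== snip ===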
2020 Aug 15
5
Supporting libunwind on Windows 10 (32bit; 64bit) for MSVC and Clang
Hello.
I was trying to compile
https://github.com/llvm/llvm-project/tree/master/libunwind using:
- MSVC
- Clang
I wasn't able to configure this project to use MSVC (directly or via
clang-cl):
>cmake -G Ninja -DLLVM_PATH="C:/Users/clang/llvm-project-10.0.1/llvm"
-DCMAKE_BUILD_TYPE=Release
-DCMAKE_INSTALL_PREFIX="C:\Users\clang\libunwind_llvm" ../libunwind
--
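One variant that is sometimes tried in this situation is telling CMake
explicitly to use clang-cl as the compiler. This is only a sketch (it assumes
clang-cl is on PATH; the paths are the same as in the command above):
=== snip ===
cmake -G Ninja ^
  -DCMAKE_C_COMPILER=clang-cl -DCMAKE_CXX_COMPILER=clang-cl ^
  -DLLVM_PATH="C:/Users/clang/llvm-project-10.0.1/llvm" ^
  -DCMAKE_BUILD_TYPE=Release ^
  -DCMAKE_INSTALL_PREFIX="C:/Users/clang/libunwind_llvm" ^
  ../libunwind
=== snip ===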
2017 Mar 05
3
Error in Windows build from release_40 branch
Hi,
I'm trying to do a build and install on Windows 10 with Visual Studio
2015 Community Edition for the X86 and ARM targets, from the current
release_40 branch. While compilation completes without error, the
INSTALL target fails with the following error:
54> CMake Error at
projects/compiler-rt/lib/builtins/cmake_install.cmake:34 (file):
54> file INSTALL cannot find
54>
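For context, a configure step along these lines reproduces the setup described
above (a sketch only; the install prefix is a placeholder):
=== snip ===
cmake -G "Visual Studio 14 2015" ^
  -DLLVM_TARGETS_TO_BUILD="X86;ARM" ^
  -DCMAKE_INSTALL_PREFIX="C:/llvm-install" ^
  ..\llvm
=== snip ===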
2016 Dec 19
1
How to create Debian packages for release 3.9.0
Hello,
On 12/12/2016 at 18:29, Hans Wennborg wrote:
> +Sylvestre who knows about these things.
>
> On Thu, Dec 8, 2016 at 2:24 AM, Kris van Rens via llvm-dev
> <llvm-dev at lists.llvm.org> wrote:
>> L.S.,
>>
>> I'm currently in the process of creating Debian packages for
>> clang/llvm release 3.9.0. For this I'm using the steps as explained on
2016 Mar 03
3
Building with LLVM_PARALLEL_XXX_JOBS
I had only a quick view on the blog-texts.
It might be that a Clang built with LTO/PGO speeds up the build.
Can you confirm this?
Can you confirm that binutils-gold speeds up the build?
Does LLVM have its own linker? Can it be used? Does it speed up the build?
Yesterday night I looked through the available CMAKE/LLVM variables...
### GOLD
# CMAKE_LINKER:FILEPATH=/usr/bin/ld
#
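On the linker questions: LLVM's own linker is lld, and recent trees let you
point the build at an alternative linker via LLVM_USE_LINKER. A minimal sketch
(gold shown; lld works the same way if an lld binary is installed):
=== snip ===
cmake -G Ninja ../llvm \
  -DCMAKE_BUILD_TYPE=Release \
  -DLLVM_USE_LINKER=gold
=== snip ===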
2017 Jul 11
3
Gluster native mount is really slow compared to nfs
Hello,
We tried tons of settings to get a php app running on a native gluster mount,
e.g.: 192.168.140.41:/www /var/www glusterfs defaults,_netdev,backup-volfile-servers=192.168.140.42:192.168.140.43,direct-io-mode=disable 0 0
I tried some mount variants in order to speed up things without luck.
After that I tried nfs (native gluster nfs 3 and ganesha nfs 4), it was a crazy
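One variant that is sometimes tried for small-file PHP workloads is raising the
FUSE caching timeouts on the mount. A sketch only; the 600-second values are
purely illustrative:
=== snip ===
192.168.140.41:/www /var/www glusterfs defaults,_netdev,backup-volfile-servers=192.168.140.42:192.168.140.43,direct-io-mode=disable,attribute-timeout=600,entry-timeout=600,negative-timeout=600 0 0
=== snip ===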
2013 Apr 30
1
Volume heal daemon 3.4alpha3
gluster> volume heal dyn_coldfusion
Self-heal daemon is not running. Check self-heal daemon log file.
gluster>
Is there a specific log? When I check /var/log/glusterfs/glustershd.log
glustershd.log:[2013-04-30 15:51:40.463259] E
[afr-self-heald.c:409:_crawl_proceed] 0-dyn_coldfusion-replicate-0:
Stopping crawl for dyn_coldfusion-client-1 , subvol went down
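Two commands commonly used in this situation, as a sketch (dyn_coldfusion is
the volume named above):
=== snip ===
gluster volume status dyn_coldfusion        # shows whether the Self-heal Daemon is online on each node
gluster volume start dyn_coldfusion force   # respawns missing daemons such as glustershd; running bricks are normally left untouched
=== snip ===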
2018 Jan 30
2
Tiered volume performance degrades badly after a volume stop/start or system restart.
I am fighting this issue:
Bug 1540376 - Tiered volume performance degrades badly after a
volume stop/start or system restart.
https://bugzilla.redhat.com/show_bug.cgi?id=1540376
Does anyone have any ideas on what might be causing this, and
what a fix or work-around might be?
Thanks!
~ Jeff Byers ~
Tiered volume performance degrades badly after a volume
stop/start or system restart.
The
2019 Dec 28
1
GFS performance under heavy traffic
Hi David,
It seems that I have misread your quorum options, so just ignore that from my previous e-mail.
Best Regards,
Strahil NikolovOn Dec 27, 2019 15:38, Strahil <hunter86_bg at yahoo.com> wrote:
>
> Hi David,
>
> Gluster supports live rolling upgrade, so there is no need to redeploy at all - but the migration notes should be checked as some features must be disabled first.
2018 Apr 16
2
Gluster FUSE mount sometimes reports that files do not exist until ls is performed on parent directory
Hi,
We have a 3-node gluster setup where gluster is both the server and the
client.
Every few days we have some $random file or directory that does not exist
according to the FUSE mountpoint. When we try to access the file (stat,
cat, etc...) the filesystem reports that the file/directory does not exist,
even though it does. When we try to create the file/directory we receive
the following error
2018 May 14
1
Unable to build 'lld' on Mac OS 10.9
Hi All,
I am trying to build the 'lld' linker on Mac OS 10.9, but I am getting errors during the build. These are the steps I have followed:
1. I have downloaded the ‘llvm-stable’ source code from the following location:
https://github.com/llvm-mirror/llvm/tree/stable
2. Machine details (on which the llvm source code is being built) are as follows:
$ sw_vers
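For what it's worth, a minimal sketch of one common layout for such a build
with the llvm-mirror repositories (the branch name and the use of Ninja are
assumptions; lld is checked out under llvm/tools so the LLVM build picks it up):
=== snip ===
git clone -b stable https://github.com/llvm-mirror/llvm.git llvm
git clone -b stable https://github.com/llvm-mirror/lld.git llvm/tools/lld
mkdir build && cd build
cmake -G Ninja -DCMAKE_BUILD_TYPE=Release ../llvm
ninja lld
=== snip ===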
2018 Jan 31
1
Tiered volume performance degrades badly after a volume stop/start or system restart.
Tested it in two different environments lately, with exactly the same results.
I was trying to get better read performance from local mounts with
hundreds of thousands of maildir email files by using SSD,
hoping that .glusterfs file stat reads would improve once files migrate
to the hot tier.
After seeing what you described for 24 hours, and confirming that all the
movement between the tiers was done, I killed it.
Here are my
2006 May 18
3
populating array of text_fields from an array of model objects
I have in my view the following:
<% 0.upto(@num_performances) do |idx| -%>
  <%= text_field 'performance', 'city', :index => idx %>
  <%= text_field 'performance', 'venue', :index => idx %>
<% end -%>
and in my controller I have:
@performance = [Performance.new("city" =>
2019 Dec 27
0
GFS performance under heavy traffic
Hi David,
Gluster supports live rolling upgrade, so there is no need to redeploy at all - but the migration notes should be checked as some features must be disabled first.
Also, the gluster client should remount in order to bump the gluster op-version.
What kind of workload do you have ?
I'm asking as there are predefined (and recommended) settings located at /var/lib/glusterd/groups.
You
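As an illustration (a sketch; <volname> is a placeholder and 'virt' is just one
of the shipped group files):
=== snip ===
ls /var/lib/glusterd/groups              # predefined option groups shipped with gluster
gluster volume set <volname> group virt  # applies every option in the 'virt' profile at once
=== snip ===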
2018 Feb 01
0
Tiered volume performance degrades badly after a volume stop/start or system restart.
This problem appears to be related to the sqlite3 DB files
that are used for the tiering file access counters, stored on
each hot and cold tier brick in .glusterfs/<volname>.db.
When the tier is first created, these DB files do not exist; they are
created, and everything works fine.
On a stop/start or service restart, the .db files are already
present, albeit empty since I don't have
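Since the counters live in plain sqlite3 files, they can be inspected directly.
A sketch, with the brick path as a placeholder and <volname>.db as named above:
=== snip ===
sqlite3 /path/to/brick/.glusterfs/<volname>.db ".tables"
sqlite3 /path/to/brick/.glusterfs/<volname>.db ".schema"
=== snip ===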
2017 Jul 11
0
Gluster native mount is really slow compared to nfs
+ Ambarish
On 07/11/2017 02:31 PM, Jo Goossens wrote:
> Hello,
>
>
>
>
>
> We tried tons of settings to get a php app running on a native gluster
> mount:
>
>
>
> e.g.: 192.168.140.41:/www /var/www glusterfs
> defaults,_netdev,backup-volfile-servers=192.168.140.42:192.168.140.43,direct-io-mode=disable
> 0 0
>
>
>
> I tried some mount variants
2017 Sep 04
2
Slow performance of gluster volume
Hi all,
I have a gluster volume used to host several VMs (managed through oVirt).
The volume is a replica 3 with arbiter and the 3 servers use 1 Gbit network
for the storage.
When testing with dd (dd if=/dev/zero of=testfile bs=1G count=1
oflag=direct) outside the volume (e.g. writing to /root/), the performance of
the dd is reported to be ~ 700MB/s, which is quite decent. When testing the
dd on
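For comparison, the same test run against the local filesystem and then inside
the FUSE mount would look roughly like this (the paths are illustrative):
=== snip ===
dd if=/dev/zero of=/root/testfile bs=1G count=1 oflag=direct            # baseline quoted above (~700MB/s)
dd if=/dev/zero of=/mnt/glustervol/testfile bs=1G count=1 oflag=direct  # same test on the gluster mount
=== snip ===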
2019 Dec 24
1
GFS performance under heavy traffic
Hi David,
On Dec 24, 2019 02:47, David Cunningham <dcunningham at voisonics.com> wrote:
>
> Hello,
>
> In testing we found that actually the GFS client having access to all 3 nodes made no difference to performance. Perhaps that's because the 3rd node that wasn't accessible from the client before was the arbiter node?
It makes sense, as no data is being generated towards