search for: performance

Displaying 20 results from an estimated 47727 matches for "performance".

2019 Sep 17
2
Building LLVM with LLVM with no dependence on GCC
Hi folks! I'm trying to get rid of any dependency on libgcc*, but without success so far. The following commands were executed on a freshly installed and updated Ubuntu 16.04 LTS: === snip === sudo apt-get install build-essential libffi-dev cmake # see aptget.txt for packages installed sudo mv /usr/local /usr/local.orig git clone https://github.com/llvm/llvm-project.git cd llvm-project; git
2019 Sep 20
2
Building LLVM with LLVM with no dependence on GCC
Thus wrote David Demelier via llvm-dev: > Also you will need to add more options to the components. See for example: > > LIBCXX_CXX_ABI=libcxxabi > LIBCXX_USE_COMPILER_RT=On > LIBCXXABI_USE_LLVM_UNWINDER=On > LIBCXXABI_USE_COMPILER_RT=On > LIBCXX_HAS_GCC_S_LIB=Off > LIBUNWIND_USE_COMPILER_RT=On > > And as mentioned above > > CLANG_DEFAULT_CXX_STDLIB=libc++
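For reference, a minimal sketch of how the options quoted in this thread might be combined into a single monorepo configure step. The LLVM_ENABLE_PROJECTS list, the -S/-B layout, and CLANG_DEFAULT_RTLIB=compiler-rt are assumptions layered on top of the thread, not settings the posters confirmed: === snip === cd llvm-project
# Build clang so that it defaults to libc++/libunwind/compiler-rt instead of libgcc
cmake -S llvm -B build -G Ninja \
  -DCMAKE_BUILD_TYPE=Release \
  -DLLVM_ENABLE_PROJECTS="clang;lld;libcxx;libcxxabi;libunwind;compiler-rt" \
  -DLIBCXX_CXX_ABI=libcxxabi \
  -DLIBCXX_USE_COMPILER_RT=On \
  -DLIBCXXABI_USE_LLVM_UNWINDER=On \
  -DLIBCXXABI_USE_COMPILER_RT=On \
  -DLIBCXX_HAS_GCC_S_LIB=Off \
  -DLIBUNWIND_USE_COMPILER_RT=On \
  -DCLANG_DEFAULT_CXX_STDLIB=libc++ \
  -DCLANG_DEFAULT_RTLIB=compiler-rt
ninja -C build === snip === The intent is that binaries built by the resulting clang no longer pull in libgcc_s or libgcc_eh.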
2016 Feb 25
2
Building with LLVM_PARALLEL_XXX_JOBS
Hi, I switched from "configure and make" to the "cmake" build-system and wanted to speed up my build. In my build-script I use... CMAKE_JOBS="1" ##CMAKE_JOBS=$(($(getconf _NPROCESSORS_ONLN)+1)) JOBS_CMAKE_OPTS="-DLLVM_PARALLEL_COMPILE_JOBS=$CMAKE_JOBS -DLLVM_PARALLEL_LINK_JOBS=$CMAKE_JOBS" [1] says in "LLVM-specific variables" section... ***
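For context, LLVM_PARALLEL_COMPILE_JOBS and LLVM_PARALLEL_LINK_JOBS are implemented with job pools and only take effect under the Ninja generator. A minimal sketch of the usual pattern (nproc and the value 2 for link jobs are illustrative, not taken from the original script): === snip === JOBS=$(nproc)
# Job pools require -G Ninja; plain Makefile generators ignore these settings.
cmake -G Ninja \
  -DLLVM_PARALLEL_COMPILE_JOBS=$JOBS \
  -DLLVM_PARALLEL_LINK_JOBS=2 \
  ../llvm
ninja === snip === Keeping the link-job count low is a common choice because each LLVM link step can consume several GB of RAM.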
2020 Aug 15
5
Supporting libunwind on Windows 10 (32bit; 64bit) for MSVC and Clang
Hello. I was trying to compile https://github.com/llvm/llvm-project/tree/master/libunwind using: - MSVC - Clang I wasn't able to configure this project to use MSVC (directly or via clang-cl): >cmake -G Ninja -DLLVM_PATH="C:/Users/clang/llvm-project-10.0.1/llvm" -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX="C:\Users\clang\libunwind_llvm" ../libunwind --
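As a point of comparison, a sketch of forcing the compiler choice explicitly when configuring from a Windows command prompt. The clang-cl selection is an assumption; whether libunwind actually configures and builds this way on Windows is exactly what the thread is asking: === snip === cmake -G Ninja ^
  -DCMAKE_C_COMPILER=clang-cl ^
  -DCMAKE_CXX_COMPILER=clang-cl ^
  -DLLVM_PATH="C:/Users/clang/llvm-project-10.0.1/llvm" ^
  -DCMAKE_BUILD_TYPE=Release ^
  -DCMAKE_INSTALL_PREFIX="C:\Users\clang\libunwind_llvm" ^
  ..\libunwind === snip ===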
2017 Mar 05
3
Error in Windows build from release_40 branch
Hi, I'm trying to do a build and install on Windows 10 with Visual Studio 2015 Community Edition for the X86 and ARM targets, from the current release_40 branch. While compilation completes without error, the INSTALL target fails with the following error: 54> CMake Error at projects/compiler-rt/lib/builtins/cmake_install.cmake:34 (file): 54> file INSTALL cannot find 54>
2016 Dec 19
1
How to create Debian packages for release 3.9.0
Hello, On 12/12/2016 at 18:29, Hans Wennborg wrote: > +Sylvestre who knows about these things. > > On Thu, Dec 8, 2016 at 2:24 AM, Kris van Rens via llvm-dev > <llvm-dev at lists.llvm.org> wrote: >> L.S., >> >> I'm currently in the process of creating Debian packages for >> clang/llvm release 3.9.0. For this I'm using the steps as explained on
2016 Mar 03
3
Building with LLVM_PARALLEL_XXX_JOBS
...to ship. >>>>> * I’m at Apple, so points 1 and 2 are already covered (we only use >>>>> clang, >>>>> and ld64 is a fast linker). >>>>> * Our system compiler is PGO+LTO’d, but our stage1 isn’t. Stage1 isn’t >>>>> because the performance improvement of PGO+LTO is less than the time >>>>> it >>>>> takes to build, and stage1 is basically a throwaway. >>>>> * We are using Ninja and CMake, but this configuration isn’t really >>>>> significantly faster than autoconf/make, and a...
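For readers wanting to reproduce the kind of PGO+LTO'd compiler being discussed, a rough sketch using LLVM's own CMake options. LLVM_BUILD_INSTRUMENTED, LLVM_PROFDATA_FILE, and LLVM_ENABLE_LTO are real LLVM CMake options, but the paths, the training step, and the three-phase flow are illustrative, not the poster's exact setup: === snip === # stage1: throwaway clang, built however is fastest
# stage2a: instrumented clang for profile collection
cmake -G Ninja -DCMAKE_C_COMPILER=$STAGE1/bin/clang \
      -DCMAKE_CXX_COMPILER=$STAGE1/bin/clang++ \
      -DLLVM_BUILD_INSTRUMENTED=IR ../llvm
# ...run a training workload, then merge the raw profiles:
llvm-profdata merge -output=clang.profdata profiles/*.profraw
# stage2b: the shipping compiler, PGO'd and LTO'd
cmake -G Ninja -DCMAKE_C_COMPILER=$STAGE1/bin/clang \
      -DCMAKE_CXX_COMPILER=$STAGE1/bin/clang++ \
      -DLLVM_PROFDATA_FILE=$PWD/clang.profdata \
      -DLLVM_ENABLE_LTO=Thin ../llvm === snip === The trade-off named above still applies: the instrumented build and training run add wall-clock time that a throwaway stage1 never recoups.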
2017 Jul 11
3
Gluster native mount is really slow compared to nfs
...e.g.: 192.168.140.41:/www /var/www glusterfs defaults,_netdev,backup-volfile-servers=192.168.140.42:192.168.140.43,direct-io-mode=disable 0 0 I tried some mount variants in order to speed up things without luck. After that I tried nfs (native gluster nfs 3 and ganesha nfs 4), it was a crazy performance difference. e.g.: 192.168.140.41:/www /var/www nfs4 defaults,_netdev 0 0 I tried a test like this to confirm the slowness: ./smallfile_cli.py --top /var/www/test --host-set 192.168.140.41 --threads 8 --files 5000 --file-size 64 --record-size 64 This test finished in around 1.5 seconds wi...
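For anyone landing here with the same symptom, a hedged sketch of volume tunings commonly tried for small-file workloads over a FUSE mount. The option names are real Gluster volume options and "www" is the volume from the post, but the values are illustrative and were not benchmarked in this thread: === snip === gluster volume set www performance.cache-size 256MB
gluster volume set www performance.io-thread-count 16
gluster volume set www network.inode-lru-limit 50000
gluster volume set www performance.parallel-readdir on === snip ===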
2013 Apr 30
1
Volume heal daemon 3.4alpha3
gluster> volume heal dyn_coldfusion Self-heal daemon is not running. Check self-heal daemon log file. gluster> Is there a specific log? When I check /var/log/glusterfs/glustershd.log glustershd.log:[2013-04-30 15:51:40.463259] E [afr-self-heald.c:409:_crawl_proceed] 0-dyn_coldfusion-replicate-0: Stopping crawl for dyn_coldfusion-client-1 , subvol went down Is there a specific log? When
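A quick sketch of the usual way to check on the self-heal daemon and kick off a heal by hand (standard gluster CLI; dyn_coldfusion is the volume from the post): === snip === gluster volume status dyn_coldfusion    # lists the self-heal daemon and whether it is online
gluster volume heal dyn_coldfusion info  # shows entries pending heal
gluster volume heal dyn_coldfusion full  # forces a full crawl === snip ===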
2018 Jan 30
2
Tiered volume performance degrades badly after a volume stop/start or system restart.
I am fighting this issue: Bug 1540376 - Tiered volume performance degrades badly after a volume stop/start or system restart. https://bugzilla.redhat.com/show_bug.cgi?id=1540376 Does anyone have any ideas on what might be causing this, and what a fix or work-around might be? Thanks! ~ Jeff Byers ~ Tiered volume performance degrades badly after a volume stop...
2019 Dec 28
1
GFS performance under heavy traffic
...> > N.B.: Once sharding is enabled, DO NOT DISABLE it - as you will lose your data. > > Using GLUSTER v7.1 (soon on CentOS & Debian) allows using latest features and optimizations while support from the gluster Dev community is quite active. > > P.S: I'm wondering how 'performance.cache-size' can both be 32 MB and 128 MB. Please double-check this (maybe I'm reading it wrong on my smartphone) and if needed raise a bug on bugzilla.redhat.com > > P.S2: Please provide 'gluster volume info' as 'cluster.quorum-type' -> 'none' is not norm...
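Regarding the 32 MB vs 128 MB confusion, a one-liner sketch for checking what the volume is actually using (gluster volume get is standard CLI; <volname> is a placeholder): === snip === gluster volume get <volname> performance.cache-size
gluster volume info <volname>   # also shows cluster.quorum-type === snip ===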
2018 Apr 16
2
Gluster FUSE mount sometimes reports that files do not exist until ls is performed on parent directory
...plicate Volume ID: e0579d53-f671-4868-863b-ba85c4cfacb3 Status: Started Snapshot Count: 0 Number of Bricks: 1 x 3 = 3 Transport-type: tcp Bricks: Brick1: n01c01-gluster:/storage/gluster/www Brick2: n02c01-gluster:/storage/gluster/www Brick3: n03c01-gluster:/storage/gluster/www Options Reconfigured: performance.read-ahead: on performance.client-io-threads: on nfs.disable: on transport.address-family: inet performance.md-cache-timeout: 600 diagnostics.brick-log-level: WARNING network.ping-timeout: 3 features.cache-invalidation: on server.event-threads: 4 performance.cache-invalidation: on performance.quick...
2018 May 14
1
Unable to build 'lld' on Mac OS 10.9
Hi All, I am trying to build the 'lld' linker on Mac OS 10.9, but during the build I am getting errors. These are the steps I have followed: 1. I have downloaded the ‘llvm-stable’ source code from the following location: https://github.com/llvm-mirror/llvm/tree/stable 2. Machine details (on which the llvm source code is being built) are as follows: $ sw_vers
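For that era of the tree, the usual standalone layout was to drop lld into llvm/tools before configuring. A sketch under that assumption (the stable branch name is taken from the post; whether the lld mirror carries a matching branch, plus the generator and build type, are assumptions): === snip === git clone -b stable https://github.com/llvm-mirror/llvm.git
git clone -b stable https://github.com/llvm-mirror/lld.git llvm/tools/lld
mkdir build && cd build
cmake -G Ninja -DCMAKE_BUILD_TYPE=Release ../llvm
ninja lld === snip ===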
2018 Jan 31
1
Tiered volume performance degrades badly after a volume stop/start or system restart.
Tested it in two different environments lately with exactly the same results. Was trying to get better read performance from local mounts with hundreds of thousands of maildir email files by using SSD, hoping that .gluster file stat reads would improve once they migrate to the hot tier. After seeing what you described for 24 hours and confirming all movement between the tiers was done - killed it. Here are my volume setting...
2006 May 18
3
populating array of text_fields from an array of model objects
I have in my view the following: <% 0.upto(@num_performances) do |idx| -%> <%= text_field 'performance', 'city', :index => idx %> <%= text_field 'performance', 'venue', :index => idx %> <% end -%> and in my controller I have: @performance = [Perfor...
2019 Dec 27
0
GFS performance under heavy traffic
...to the size of the shard. N.B.: Once sharding is enabled, DO NOT DISABLE it - as you will lose your data. Using GLUSTER v7.1 (soon on CentOS & Debian) allows using latest features and optimizations while support from the gluster Dev community is quite active. P.S: I'm wondering how 'performance.cache-size' can both be 32 MB and 128 MB. Please double-check this (maybe I'm reading it wrong on my smartphone) and if needed raise a bug on bugzilla.redhat.com P.S2: Please provide 'gluster volume info' as 'cluster.quorum-type' -> 'none' is not normal for r...
2018 Feb 01
0
Tiered volume performance degrades badly after a volume stop/start or system restart.
...resent, albeit empty since I don't have cluster.write-freq-threshold nor cluster.read-freq-threshold set, so features.record-counters is off and nothing should be going into the DB. I've found that if I delete these .db files after the volume stop, but before the volume start, the tiering performance is normal, not degraded. Of course all of the history in these DB files is lost. Not sure what other ramifications there are to deleting these .db files. When I did have one of the freq-threshold settings set, I did see a record get added to the file, so the sqlite3 DB is working to some degree....
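If someone wants to look inside those files before removing them, a sketch; the .db location under the brick's .glusterfs directory is inferred from this thread, and <brick-path>/<volname> are placeholders. Inspect a copy, not the live DB: === snip === cp <brick-path>/.glusterfs/<volname>.db /tmp/tier-backup.db
sqlite3 /tmp/tier-backup.db '.tables'   # list the libgfdb tables before deciding anything === snip ===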
2017 Jul 11
0
Gluster native mount is really slow compared to nfs
...s,_netdev,backup-volfile-servers=192.168.140.42:192.168.140.43,direct-io-mode=disable > 0 0 > > > > I tried some mount variants in order to speed up things without luck. > > > > > > After that I tried nfs (native gluster nfs 3 and ganesha nfs 4), it was > a crazy performance difference. > > > > e.g.: 192.168.140.41:/www /var/www nfs4 defaults,_netdev 0 0 > > > > I tried a test like this to confirm the slowness: > > > > ./smallfile_cli.py --top /var/www/test --host-set 192.168.140.41 > --threads 8 --files 5000 --file-size 64 --rec...
2017 Sep 04
2
Slow performance of gluster volume
...all, I have a gluster volume used to host several VMs (managed through oVirt). The volume is a replica 3 with arbiter and the 3 servers use a 1 Gbit network for the storage. When testing with dd (dd if=/dev/zero of=testfile bs=1G count=1 oflag=direct) out of the volume (e.g. writing at /root/) the performance of the dd is reported to be ~ 700MB/s, which is quite decent. When testing the dd on the gluster volume I get ~ 43 MB/s, which is way lower than the previous. When testing the gluster volume with dd, the network traffic did not exceed 450 Mbps on the network interface. I would expect to reach near 9...
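One detail worth keeping in mind with replica volumes: the FUSE client sends each write to every data brick itself, so on a 1 Gbit client link a replica 3 (arbiter) volume tops out around half of wire speed, roughly 55-60 MB/s, before any other overhead. A sketch of profiling the volume to see where the time goes (standard gluster CLI; the mount path and <volname> are placeholders): === snip === gluster volume profile <volname> start
dd if=/dev/zero of=/mnt/<volname>/testfile bs=1G count=1 oflag=direct
gluster volume profile <volname> info   # per-brick latency and FOP counts
gluster volume profile <volname> stop === snip ===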
2019 Dec 24
1
GFS performance under heavy traffic
Hi David, On Dec 24, 2019 02:47, David Cunningham <dcunningham at voisonics.com> wrote: > > Hello, > > In testing we found that actually the GFS client having access to all 3 nodes made no difference to performance. Perhaps that's because the 3rd node that wasn't accessible from the client before was the arbiter node? It makes sense, as no data is being generated towards the arbiter. > Presumably we shouldn't have an arbiter node listed under backupvolfile-server when mounting the filesystem? S...
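For completeness, a sketch of the mount in question (hostnames are placeholders, not from the thread). Note that backup-volfile-servers only affects where the client can fetch the volume file at mount time; it does not route data, so listing the arbiter there is harmless: === snip === mount -t glusterfs -o backup-volfile-servers=node2:arbiter \
  node1:/gvol /mnt/gvol === snip ===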