similar to: Wrong relocation emitted when building shared libraries with Control Flow Integrity

Displaying 20 results from an estimated 1100 matches similar to: "Wrong relocation emitted when building shared libraries with Control Flow Integrity"

2014 Oct 24
4
[LLVMdev] Cross-Block Dead Store Elimination
Hi, It looks like the DeadStoreElimination optimization doesn't work across BasicBlock boundaries. The project I'm working on (https://github.com/trailofbits/mcsema) would benefit tremendously from even simple cross-block DSE. There was a patch to do non-local DSE a few years ago (http://lists.cs.uiuc.edu/pipermail/llvmdev/2010-January/028751.html), but it seems that the patch was never
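For illustration, a minimal sketch in C (hypothetical, not from the thread) of the pattern a cross-block DSE would catch: the first store is dead on every path, but the stores that kill it live in other basic blocks, so a purely intra-block DSE keeps it.

    /* Hypothetical example: "*p = 1" is overwritten on both branches,
     * so it is dead, but proving that requires looking across
     * BasicBlock boundaries. */
    void sink(int);

    void f(int *p, int c) {
        *p = 1;          /* dead store: killed on every successor path */
        if (c)
            *p = 2;      /* kills it on the taken branch */
        else
            *p = 3;      /* kills it on the fall-through branch */
        sink(*p);
    }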
2017 Jan 31
1
CFI, Safe-Stack, and -fno-sanitize-trap
Hi, I am using clang++3.9 to build a simple program with both CFI and safe-stack. I am getting linker errors when combining -fsanitize=safe-stack, -fsanitize=cfi, and -fno-sanitize-trap=all. Combining safe-stack and CFI without -fno-sanitize-trap=all works as expected. It looks like clang is attempting to link in two compiler-rt libraries, one for ubsan and one for safestack, and this causes
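A minimal reproduction sketch (my own, assuming a plain indirect call is enough to exercise the CFI diagnostic runtime; note that -fsanitize=cfi also requires -flto, which the message does not mention):

    /* cfi_safestack.c -- illustrative sketch, not the reporter's program.
     * Assumed build invocation:
     *   clang -flto -fsanitize=cfi,safe-stack -fno-sanitize-trap=all \
     *         cfi_safestack.c -o cfi_safestack
     * safe-stack moves buf to the unsafe stack; CFI checks the indirect
     * call through fn; -fno-sanitize-trap=all makes CFI report via the
     * ubsan runtime instead of trapping. */
    #include <stdio.h>

    static void greet(void) { puts("hello"); }

    int main(void) {
        char buf[32];                 /* safe-stack: moved to the unsafe stack */
        void (*fn)(void) = greet;     /* CFI: indirect call site gets checked */
        snprintf(buf, sizeof buf, "about to call greet");
        puts(buf);
        fn();
        return 0;
    }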
2006 Sep 22
1
Stack corruption in newhidups.c
Hi, (please let me know if there is a better place to submit bugs) I run a FreeBSD box with stack-protector enabled, which exposes a problem in the upsdrv_initups() function of the newhidups.c module; the regex_array variable is sized one item too small. Regards, Herve Masson <<<<
void upsdrv_initups(void)
{
	int i;
#ifndef SHUT_MODE
	/*!
	 * SHUT is only supported by
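The shape of the bug, as a stand-alone sketch (hypothetical sizes and names; the real array lives in upsdrv_initups()):

    #include <regex.h>
    #include <stddef.h>

    #define NUM_OPTIONS 7                 /* entries the loop writes */

    void init_matchers(void) {
        /* BUG: one element short -- the final loop iteration writes
         * past the end of the array, which the stack protector then
         * reports as stack corruption. */
        regex_t *regex_array[NUM_OPTIONS - 1];

        for (int i = 0; i < NUM_OPTIONS; i++)
            regex_array[i] = NULL;        /* i == NUM_OPTIONS - 1 overflows */
    }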
2007 Oct 19
4
[PATCH] nr_cpus calculation problem due to incorrect sockets_per_node
Testing on an 8-node 128-way NUMA machine has exposed a problem with Xen's nr_cpus calculation. In this case, since Xen cuts off recognized CPUs at 32, the machine appears to have 16 CPUs on the first and second nodes and none on the remaining nodes. Given this asymmetry, the calculation of sockets_per_node (which is later used to calculate nr_cpus) is incorrect:
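Illustrative arithmetic only (the actual formula is cut off above): once the 32-CPU cap spreads CPUs asymmetrically across nodes, any uniform per-node figure stops matching what each node really has.

    #include <stdio.h>

    int main(void) {
        /* After the 32-CPU cap: 16 + 16 on the first two nodes, none on
         * the remaining six of the 8-node, 128-way machine described. */
        int cpus_per_node[8] = { 16, 16, 0, 0, 0, 0, 0, 0 };
        int total = 0;
        for (int i = 0; i < 8; i++)
            total += cpus_per_node[i];
        /* A uniform per-node average (32 / 8 = 4) describes no node. */
        printf("total=%d, naive per-node=%d\n", total, total / 8);
        return 0;
    }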
2009 Dec 29
0
aMSN segfaults at login after configuring my home network
After configuring my home network, aMSN segfaults. I posted this issue originally in the aMSN forums under this thread: http://www.amsn-project.net/forums/viewtopic.php?t=7593 I was told that my issue is related to SAMBA, referring to this thread: http://www.amsn-project.net/forums/viewtopic.php?t=6343 After uninstalling SAMBA, aMSN stops segfaulting and works as expected. After installing it
2018 Apr 04
13
[Bug 105884] New: Firefox causes a crash in the nouveau driver on GTX 1060
https://bugs.freedesktop.org/show_bug.cgi?id=105884

Bug ID: 105884
Summary: Firefox causes a crash in the nouveau driver on GTX 1060
Product: xorg
Version: git
Hardware: x86-64 (AMD64)
OS: Linux (All)
Status: NEW
Severity: major
Priority: medium
Component:
2017 Nov 15
11
[Bug 103753] New: Visual glitches on GTX 1060 6GB/4.13.x
https://bugs.freedesktop.org/show_bug.cgi?id=103753

Bug ID: 103753
Summary: Visual glitches on GTX 1060 6GB/4.13.x
Product: xorg
Version: git
Hardware: x86-64 (AMD64)
OS: Linux (All)
Status: NEW
Severity: major
Priority: medium
Component: Driver/nouveau
Assignee: nouveau at
2018 Apr 17
5
Getting glusterfs to expand volume size to brick size
pylon:/var/lib/glusterd/vols/dev_apkmirror_data # ack shared-brick-count
dev_apkmirror_data.pylon.mnt-pylon_block3-dev_apkmirror_data.vol
3:    option shared-brick-count 3
dev_apkmirror_data.pylon.mnt-pylon_block2-dev_apkmirror_data.vol
3:    option shared-brick-count 3
dev_apkmirror_data.pylon.mnt-pylon_block1-dev_apkmirror_data.vol
3:    option shared-brick-count 3

Sincerely, Artem --
2018 Jan 16
10
[Bug 104652] New: None of the video outputs are usable for GTX 1060 - jerky video every few seconds
https://bugs.freedesktop.org/show_bug.cgi?id=104652

Bug ID: 104652
Summary: None of the video outputs are usable for GTX 1060 - jerky video every few seconds
Product: xorg
Version: git
Hardware: x86-64 (AMD64)
OS: Linux (All)
Status: NEW
Severity: major
Priority: medium
2008 Mar 26
2
passing parameters to the newly booted kernel
Is it possible to pass parameters from a .cfg file to the newly booted kernel? My setup is pxelinux, where the relevant config is:

label fbsd63
  kernel memdisk
  append initrd=/freebsd6.3.hd harddisk

What I would like to do is pass in some parameter so the booted kernel can behave differently. I've looked in the archives without success, although I did see an elliptical reference to
2011 Jan 25
1
[RFC] Updates to APC smart driver
This patch introduces a handful of new options I mentioned earlier in: http://www.mail-archive.com/nut-upsdev at lists.alioth.debian.org/msg02088.html See the large commit message in the follow-up for the details and rationale. I realize it's a rather large diff, so if required I can split it into a few smaller ones. Michal Soltys (1): APC smart driver update and new features.
2007 Jul 05
9
Limit i/o capacity?
Hi all, Is there any way to limit the network i/o capacity of a virtual machine somehow? Say, I want a domU with id 1 to consume at most 0.5 MB/s of the host's bandwidth. Is it possible? Artem Pervin
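One possible answer (my assumption, not from the thread): Xen's vif configuration supports a rate= cap, so the domU config file (Python-style xm syntax) could limit the interface to roughly 0.5 MB/s:

    # Hypothetical domU config fragment -- assumes the netback 'rate='
    # vif parameter is available in this Xen version.
    # 0.5 MB/s is roughly 4 Mb/s.
    vif = [ 'rate=4Mb/s' ]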
2016 Apr 09
2
[GPUCC] how to remove _ZL21__nvvm_reflect_anchorv() automatically?
David's change makes nvvm_reflect_anchor unnecessary. The issue with dots in names generated by LLVM still needs to be fixed. On Apr 9, 2016 8:32 AM, "Jingyue Wu" <jingyue at google.com> wrote:
> Artem,
>
> With David's http://reviews.llvm.org/rL265060, do you think
> __nvvm_reflect_anchor is still necessary?
>
> On Fri, Apr 8, 2016 at 9:37 AM, Yuanfeng
2018 Apr 17
0
Getting glusterfs to expand volume size to brick size
Ok, it looks like the same problem. @Amar, this fix is supposed to be in 4.0.1. Is it possible to regenerate the volfiles to fix this? Regards, Nithya On 17 April 2018 at 09:57, Artem Russakovskii <archon810 at gmail.com> wrote:
> pylon:/var/lib/glusterd/vols/dev_apkmirror_data # ack shared-brick-count
> dev_apkmirror_data.pylon.mnt-pylon_block3-dev_apkmirror_data.vol
> 3:
2018 Apr 18
2
performance.cache-size for high-RAM clients/servers, other tweaks for performance, and improvements to Gluster docs
Thanks for the link. Looking at the status of that doc, it isn't quite ready yet, and there's no mention of the option. Does that mean that whatever is ready now in 4.0.1 is incomplete but can be enabled via granular-entry-heal=on, and that when it is complete, it'll become the default and the flag will simply go away? Is there any risk in enabling the option now in 4.0.1? Sincerely, Artem
2018 Apr 18
3
performance.cache-size for high-RAM clients/servers, other tweaks for performance, and improvements to Gluster docs
Following up here on a related issue that is very serious for us. I took down one of the 4 replicate gluster servers for maintenance today. There are 2 gluster volumes totaling about 600GB - not that much data. After the server comes back online, it starts auto-healing, and pretty much all operations on gluster freeze for many minutes. For example, I was trying to run an ls -alrt in a folder with 7300
2016 Feb 17
2
How to define data for X86 assembler?
Hi, Is there any documentation on the syntax accepted by the X86 assembler? I have this code in my .asm file to define data:

text db "127.1.1.1 google.lk"

But the X86 assembler fails to parse it, with this error:

error: unexpected token in argument list
text db "127.1.1.1 google.lk"
     ^

Any ideas how to fix this problem? I tried to find some
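For what it's worth, a sketch of the likely fix (my assumption: db is Intel/NASM syntax, while LLVM's integrated assembler expects GAS-style directives such as .ascii/.asciz/.byte). It is wrapped in C module-level inline asm here so the snippet is self-contained and compiles with clang:

    /* Hypothetical GAS-style equivalent of: text db "127.1.1.1 google.lk" */
    __asm__(
        ".data\n"
        "text: .asciz \"127.1.1.1 google.lk\"\n"   /* NUL-terminated string */
    );

    int main(void) { return 0; }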
2018 Apr 17
0
Getting glusterfs to expand volume size to brick size
To clarify, I was on 3.13.2 previously, recently updated to 4.0.1, and the bug seems to persist in 4.0.1. Sincerely, Artem -- Founder, Android Police <http://www.androidpolice.com>, APK Mirror <http://www.apkmirror.com/>, Illogical Robot LLC beerpla.net | +ArtemRussakovskii <https://plus.google.com/+ArtemRussakovskii> | @ArtemR <http://twitter.com/ArtemR> On Mon, Apr
2018 Apr 17
1
Getting glusterfs to expand volume size to brick size
That might be the reason. Perhaps the volfiles were not regenerated after upgrading to the version with the fix. There is a workaround detailed in [2] for the time being (you will need to copy the shell script into the correct directory for your Gluster release). [2] https://bugzilla.redhat.com/show_bug.cgi?id=1517260#c19 On 17 April 2018 at 09:58, Artem Russakovskii <archon810 at
2018 Apr 18
2
performance.cache-size for high-RAM clients/servers, other tweaks for performance, and improvements to Gluster docs
Hi Ravi, Could you please expand on how these would help? By forcing full here, we move the logic from the CPU to the network, thus decreasing CPU utilization - is that right? This is assuming the CPU and disk utilization are caused by the differ and not by lstat and other calls or something.
> Option: cluster.data-self-heal-algorithm
> Default Value: (null)
> Description: Select between
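If forcing "full" turns out to be the right call, the option quoted above would presumably be set per volume with gluster's standard volume-set command (the volume name below is a placeholder):

    # Assumed invocation -- 'gluster volume set' with the option named
    # in the quoted text; VOLNAME stands in for the real volume name.
    gluster volume set VOLNAME cluster.data-self-heal-algorithm full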