Volodymyr Kuznetsov
2014-Nov-03 16:05 UTC
[LLVMdev] [PATCH] Protection against stack-based memory corruption errors using SafeStack
Dear LLVM developers,

Our team has developed an LLVM-based protection mechanism that (i) prevents control-flow hijack attacks enabled by memory corruption errors and (ii) has very low performance overhead. We would like to contribute the implementation to LLVM. We presented this work at the OSDI 2014 conference, at several software companies, and at several US universities. We received positive feedback, and so we've open-sourced our prototype, available for download from our project website (http://levee.epfl.ch). There are three components (safe stack, CPS, and CPI), and each can be used individually.

Our most stable part is the safe stack instrumentation, which separates the program stack into the safe stack, which stores return addresses, register spills, and local variables that are statically verified to be accessed in a safe way, and the unsafe stack, which stores everything else. Such separation makes it much harder for an attacker to corrupt objects on the safe stack, including function pointers stored in spilled registers and return addresses. A detailed description of the individual components is available in our OSDI paper on code-pointer integrity (http://dslab.epfl.ch/pubs/cpi.pdf).

The overhead of our implementation of the safe stack is very close to zero (0.01% on the Phoronix benchmarks and 0.03% on SPEC CPU2006 on average). This is lower than the overhead of stack cookies, which are supported by LLVM and are commonly used today, yet the security guarantees of the safe stack are strictly stronger than those of stack cookies. In some cases, the safe stack even improves performance due to better cache locality.

Our current implementation of the safe stack is stable and robust: we used it to recompile multiple projects on Linux, including Chromium, and we also recompiled the entire FreeBSD user-space system and more than 100 packages. We ran unit tests on the FreeBSD system and many of the packages and observed no errors caused by the safe stack.
The safe stack is also fully binary compatible with non-instrumented code and can be applied to parts of a program selectively. We attach our implementation of the safe stack as three patches against current SVN HEAD of LLVM (r221153), clang (r221154) and compiler-rt (r220991). The same changes are also available on https://github.com/cpi-llvm in the safestack-r221153 branches of the corresponding repositories.

The patches make the following changes:

-- Add the safestack function attribute, similar to the ssp, sspstrong and sspreq attributes.

-- Add the SafeStack instrumentation pass that applies the safe stack to all functions that have the safestack attribute. This pass moves all unsafe local variables to the unsafe stack with a separate stack pointer, whereas all safe variables remain on the regular stack that is managed by LLVM as usual.

-- Invoke the pass as the last stage before code generation (at the same time the existing cookie-based stack protector pass is invoked).

-- Add -fsafe-stack and -fno-safe-stack options to clang to control safe stack usage (the safe stack is disabled by default).

-- Add the __attribute__((no_safe_stack)) attribute to clang that can be used to disable the safe stack for individual functions even when enabled globally.

-- Add basic runtime support for the safe stack to compiler-rt. The runtime manages unsafe stack allocation/deallocation for each thread.

-- Add unit tests for the safe stack.

You can find more information about the safe stack, as well as other parts of our control-flow hijack protection technique, in our OSDI paper. FYI, here is the abstract of the paper:

<< Systems code is often written in low-level languages like C/C++, which offer many benefits but also delegate memory management to programmers. This invites memory safety bugs that attackers can exploit to divert control flow and compromise the system.
Deployed defense mechanisms (e.g., ASLR, DEP) are incomplete, and stronger defense mechanisms (e.g., CFI) often have high overhead and limited guarantees. We introduce code-pointer integrity (CPI), a new design point that guarantees the integrity of all code pointers in a program (e.g., function pointers, saved return addresses) and thereby prevents all control-flow hijack attacks, including return-oriented programming. We also introduce code-pointer separation (CPS), a relaxation of CPI with better performance properties. CPI and CPS offer substantially better security-to-overhead ratios than the state of the art, they are practical (we protect a complete FreeBSD system and over 100 packages like apache and postgresql), effective (prevent all attacks in the RIPE benchmark), and efficient: on SPEC CPU2006, CPS averages 1.2% overhead for C and 1.9% for C/C++, while CPI’s overhead is 2.9% for C and 8.4% for C/C++. >> (This is joint work with V. Kuznetsov, L. Szekeres, M. Payer, G. Candea, R. Sekar, and D. Song) We look forward to your feedback and hope for a prompt merge into LLVM, to make the software built with clang more secure. - Volodymyr Kuznetsov & the CPI team
-------------- next part -------------- diff --git a/include/llvm/Bitcode/LLVMBitCodes.h b/include/llvm/Bitcode/LLVMBitCodes.h index c42ecfe..79eecb7 100644 --- a/include/llvm/Bitcode/LLVMBitCodes.h +++ b/include/llvm/Bitcode/LLVMBitCodes.h @@ -376,7 +376,8 @@ namespace bitc { ATTR_KIND_IN_ALLOCA = 38, ATTR_KIND_NON_NULL = 39, ATTR_KIND_JUMP_TABLE = 40, - ATTR_KIND_DEREFERENCEABLE = 41 + ATTR_KIND_DEREFERENCEABLE = 41, + ATTR_KIND_SAFESTACK = 42 }; enum ComdatSelectionKindCodes { diff --git a/include/llvm/CodeGen/Passes.h b/include/llvm/CodeGen/Passes.h index e45b3e0..48bee0e 100644 --- a/include/llvm/CodeGen/Passes.h +++ b/include/llvm/CodeGen/Passes.h @@ -557,6 +557,10 @@ namespace llvm { /// FunctionPass *createStackProtectorPass(const TargetMachine *TM); + /// createSafeStackPass - This pass splits the stack into the safe stack and + /// the unsafe stack to protect against stack-based overflow vulnerabilities. + Pass *createSafeStackPass(const TargetMachine *tli); + /// createMachineVerifierPass - This pass verifies generated machine code /// instructions for correctness. /// diff --git a/include/llvm/IR/Attributes.h b/include/llvm/IR/Attributes.h index 5ff48d6..f0ed9d7 100644 --- a/include/llvm/IR/Attributes.h +++ b/include/llvm/IR/Attributes.h @@ -106,6 +106,7 @@ public: StackProtect, ///< Stack protection. StackProtectReq, ///< Stack protection required. StackProtectStrong, ///< Strong Stack protection. + SafeStack, ///< Safe Stack protection. StructRet, ///< Hidden pointer to structure to return SanitizeAddress, ///< AddressSanitizer is on. SanitizeThread, ///< ThreadSanitizer is on.
diff --git a/include/llvm/InitializePasses.h b/include/llvm/InitializePasses.h index 2964798..76e44f4 100644 --- a/include/llvm/InitializePasses.h +++ b/include/llvm/InitializePasses.h @@ -235,6 +235,7 @@ void initializeRegionOnlyPrinterPass(PassRegistry&); void initializeRegionOnlyViewerPass(PassRegistry&); void initializeRegionPrinterPass(PassRegistry&); void initializeRegionViewerPass(PassRegistry&); +void initializeSafeStackPass(PassRegistry&); void initializeSCCPPass(PassRegistry&); void initializeSROAPass(PassRegistry&); void initializeSROA_DTPass(PassRegistry&); diff --git a/include/llvm/Target/TargetLowering.h b/include/llvm/Target/TargetLowering.h index ffb44b3..4893ec2 100644 --- a/include/llvm/Target/TargetLowering.h +++ b/include/llvm/Target/TargetLowering.h @@ -921,6 +921,14 @@ public: return false; } + /// Return true if the target stores the unsafe stack pointer at a fixed + /// offset in some non-standard address space, and populates the address + /// space and offset as appropriate. + virtual bool getUnsafeStackPtrLocation(unsigned &/*AddressSpace*/, + unsigned &/*Offset*/) const { + return false; + } + /// Returns the maximal possible offset which can be used for loads / stores + /// from the global.
virtual unsigned getMaximalGlobalOffset() const { diff --git a/lib/AsmParser/LLLexer.cpp b/lib/AsmParser/LLLexer.cpp index 6523bce..939dc5d 100644 --- a/lib/AsmParser/LLLexer.cpp +++ b/lib/AsmParser/LLLexer.cpp @@ -636,6 +636,7 @@ lltok::Kind LLLexer::LexIdentifier() { KEYWORD(ssp); KEYWORD(sspreq); KEYWORD(sspstrong); + KEYWORD(safestack); KEYWORD(sanitize_address); KEYWORD(sanitize_thread); KEYWORD(sanitize_memory); diff --git a/lib/AsmParser/LLParser.cpp b/lib/AsmParser/LLParser.cpp index b7818bb..0a655a6 100644 --- a/lib/AsmParser/LLParser.cpp +++ b/lib/AsmParser/LLParser.cpp @@ -988,6 +988,7 @@ bool LLParser::ParseFnAttributeValuePairs(AttrBuilder &B, case lltok::kw_ssp: B.addAttribute(Attribute::StackProtect); break; case lltok::kw_sspreq: B.addAttribute(Attribute::StackProtectReq); break; case lltok::kw_sspstrong: B.addAttribute(Attribute::StackProtectStrong); break; + case lltok::kw_safestack: B.addAttribute(Attribute::SafeStack); break; case lltok::kw_sanitize_address: B.addAttribute(Attribute::SanitizeAddress); break; case lltok::kw_sanitize_thread: B.addAttribute(Attribute::SanitizeThread); break; case lltok::kw_sanitize_memory: B.addAttribute(Attribute::SanitizeMemory); break; @@ -1289,6 +1290,7 @@ bool LLParser::ParseOptionalParamAttrs(AttrBuilder &B) { case lltok::kw_ssp: case lltok::kw_sspreq: case lltok::kw_sspstrong: + case lltok::kw_safestack: case lltok::kw_uwtable: HaveError |= Error(Lex.getLoc(), "invalid use of function-only attribute"); break; @@ -1358,6 +1360,7 @@ bool LLParser::ParseOptionalReturnAttrs(AttrBuilder &B) { case lltok::kw_ssp: case lltok::kw_sspreq: case lltok::kw_sspstrong: + case lltok::kw_safestack: case lltok::kw_uwtable: HaveError |= Error(Lex.getLoc(), "invalid use of function-only attribute"); break; diff --git a/lib/AsmParser/LLToken.h b/lib/AsmParser/LLToken.h index f9821f7..fbc0384 100644 --- a/lib/AsmParser/LLToken.h +++ b/lib/AsmParser/LLToken.h @@ -132,6 +132,7 @@ namespace lltok { kw_ssp, kw_sspreq, kw_sspstrong, 
+ kw_safestack, kw_sret, kw_sanitize_thread, kw_sanitize_memory, diff --git a/lib/Bitcode/Reader/BitcodeReader.cpp b/lib/Bitcode/Reader/BitcodeReader.cpp index 9e20ba6..28b7b74 100644 --- a/lib/Bitcode/Reader/BitcodeReader.cpp +++ b/lib/Bitcode/Reader/BitcodeReader.cpp @@ -647,6 +647,8 @@ static Attribute::AttrKind GetAttrFromCode(uint64_t Code) { return Attribute::StackProtectReq; case bitc::ATTR_KIND_STACK_PROTECT_STRONG: return Attribute::StackProtectStrong; + case bitc::ATTR_KIND_SAFESTACK: + return Attribute::SafeStack; case bitc::ATTR_KIND_STRUCT_RET: return Attribute::StructRet; case bitc::ATTR_KIND_SANITIZE_ADDRESS: diff --git a/lib/Bitcode/Writer/BitcodeWriter.cpp b/lib/Bitcode/Writer/BitcodeWriter.cpp index 191fdc9..8320a0d 100644 --- a/lib/Bitcode/Writer/BitcodeWriter.cpp +++ b/lib/Bitcode/Writer/BitcodeWriter.cpp @@ -226,6 +226,8 @@ static uint64_t getAttrKindEncoding(Attribute::AttrKind Kind) { return bitc::ATTR_KIND_STACK_PROTECT_REQ; case Attribute::StackProtectStrong: return bitc::ATTR_KIND_STACK_PROTECT_STRONG; + case Attribute::SafeStack: + return bitc::ATTR_KIND_SAFESTACK; case Attribute::StructRet: return bitc::ATTR_KIND_STRUCT_RET; case Attribute::SanitizeAddress: diff --git a/lib/CodeGen/CMakeLists.txt b/lib/CodeGen/CMakeLists.txt index 461ac55..f115bfe 100644 --- a/lib/CodeGen/CMakeLists.txt +++ b/lib/CodeGen/CMakeLists.txt @@ -89,6 +89,7 @@ add_llvm_library(LLVMCodeGen RegisterCoalescer.cpp RegisterPressure.cpp RegisterScavenging.cpp + SafeStack.cpp ScheduleDAG.cpp ScheduleDAGInstrs.cpp ScheduleDAGPrinter.cpp diff --git a/lib/CodeGen/Passes.cpp b/lib/CodeGen/Passes.cpp index 4762116..5eee1a1 100644 --- a/lib/CodeGen/Passes.cpp +++ b/lib/CodeGen/Passes.cpp @@ -458,6 +458,9 @@ void TargetPassConfig::addISelPrepare() { if (!DisableVerify) addPass(createDebugInfoVerifierPass()); + // Add both the safe stack and the stack protection passes: each of them will + // only protect functions that have corresponding attributes. 
+ addPass(createSafeStackPass(TM)); addPass(createStackProtectorPass(TM)); if (PrintISelInput) diff --git a/lib/CodeGen/SafeStack.cpp b/lib/CodeGen/SafeStack.cpp new file mode 100644 index 0000000..4640e04 --- /dev/null +++ b/lib/CodeGen/SafeStack.cpp @@ -0,0 +1,600 @@ +//===-- SafeStack.cpp - Safe Stack Insertion --------------------*- C++ -*-===// +// +// The LLVM Compiler Infrastructure +// +// This file is distributed under the University of Illinois Open Source +// License. See LICENSE.TXT for details. +// +//===----------------------------------------------------------------------===// +// +// This pass splits the stack into the safe stack (kept as-is for LLVM backend) +// and the unsafe stack (explicitly allocated and managed through the runtime +// support library). +// +//===----------------------------------------------------------------------===// + +#define DEBUG_TYPE "safestack" +#include "llvm/Support/Debug.h" +#include "llvm/CodeGen/Passes.h" +#include "llvm/IR/Constants.h" +#include "llvm/IR/DerivedTypes.h" +#include "llvm/IR/Function.h" +#include "llvm/IR/InstIterator.h" +#include "llvm/IR/Instructions.h" +#include "llvm/IR/Intrinsics.h" +#include "llvm/IR/IntrinsicInst.h" +#include "llvm/IR/IRBuilder.h" +#include "llvm/IR/DIBuilder.h" +#include "llvm/IR/Module.h" +#include "llvm/IR/DataLayout.h" +#include "llvm/Pass.h" +#include "llvm/Support/CommandLine.h" +#include "llvm/Analysis/AliasAnalysis.h" +#include "llvm/Target/TargetLowering.h" +#include "llvm/Target/TargetOptions.h" +#include "llvm/Target/TargetSubtargetInfo.h" +#include "llvm/Target/TargetFrameLowering.h" +#include "llvm/Transforms/Utils/ModuleUtils.h" +#include "llvm/Transforms/Utils/Local.h" +#include "llvm/ADT/Triple.h" +#include "llvm/ADT/Statistic.h" +#include "llvm/Support/Format.h" +#include "llvm/Support/raw_os_ostream.h" + +using namespace llvm; + +namespace llvm { + +STATISTIC(NumFunctions, "Total number of functions"); +STATISTIC(NumUnsafeStackFunctions, "Number of 
functions with unsafe stack"); +STATISTIC(NumUnsafeStackRestorePointsFunctions, + "Number of functions that use setjmp or exceptions"); + +STATISTIC(NumAllocas, "Total number of allocas"); +STATISTIC(NumUnsafeStaticAllocas, "Number of unsafe static allocas"); +STATISTIC(NumUnsafeDynamicAllocas, "Number of unsafe dynamic allocas"); +STATISTIC(NumUnsafeStackRestorePoints, "Number of setjmps and landingpads"); + +} // namespace llvm + +namespace { + +/// Check whether a given alloca instruction (AI) should be put on the safe +/// stack or not. The function analyzes all uses of AI and checks whether it is +/// only accessed in a memory safe way (as decided statically). +bool IsSafeStackAlloca(const AllocaInst *AI, const DataLayout *) { + // Go through all uses of this alloca and check whether all accesses to the + // allocated object are statically known to be memory safe and, hence, the + // object can be placed on the safe stack. + + SmallPtrSet<const Value*, 16> Visited; + SmallVector<const Instruction*, 8> WorkList; + WorkList.push_back(AI); + + // A DFS search through all uses of the alloca in bitcasts/PHI/GEPs/etc.
+ while (!WorkList.empty()) { + const Instruction *V = WorkList.pop_back_val(); + for (Value::const_use_iterator UI = V->use_begin(), + UE = V->use_end(); UI != UE; ++UI) { + const Instruction *I = cast<const Instruction>(UI->getUser()); + assert(V == UI->get()); + + switch (I->getOpcode()) { + case Instruction::Load: + // Loading from a pointer is safe + break; + case Instruction::VAArg: + // "va-arg" from a pointer is safe + break; + case Instruction::Store: + if (V == I->getOperand(0)) + // Stored the pointer - conservatively assume it may be unsafe + return false; + // Storing to the pointee is safe + break; + + case Instruction::GetElementPtr: + if (!cast<const GetElementPtrInst>(I)->hasAllConstantIndices()) + // GEP with non-constant indices can lead to memory errors + return false; + + // We assume that GEP on static alloca with constant indices is safe, + // otherwise a compiler would detect it and warn during compilation. + + if (!isa<const ConstantInt>(AI->getArraySize())) + // However, if the array size itself is not constant, the access + // might still be unsafe at runtime. + return false; + + /* fallthrough */ + + case Instruction::BitCast: + case Instruction::PHI: + case Instruction::Select: + // The object can be safe or not, depending on how the result of the + // BitCast/PHI/Select/GEP/etc. is used. + if (Visited.insert(I)) + WorkList.push_back(cast<const Instruction>(I)); + break; + + case Instruction::Call: + case Instruction::Invoke: { + ImmutableCallSite CS(I); + + // Given we don't care about information leak attacks at this point, + // the object is considered safe if a pointer to it is passed to a + // function that only reads memory and returns no value. This function + // can neither do unsafe writes itself nor capture the pointer (or + // return it) to do unsafe writes to it elsewhere. The function also + // shouldn't unwind (a readonly function can leak bits by throwing an + // exception or not depending on the input value).
+ if (CS.onlyReadsMemory() /* && CS.doesNotThrow()*/ && + I->getType()->isVoidTy()) + continue; + + // LLVM 'nocapture' attribute is only set for arguments whose address + // is not stored, passed around, or used in any other non-trivial way. + // We assume that passing a pointer to an object as a 'nocapture' + // argument is safe. + // FIXME: a more precise solution would require an interprocedural + // analysis here, which would look at all uses of an argument inside + // the function being called. + ImmutableCallSite::arg_iterator B = CS.arg_begin(), E = CS.arg_end(); + for (ImmutableCallSite::arg_iterator A = B; A != E; ++A) + if (A->get() == V && !CS.doesNotCapture(A - B)) + // The parameter is not marked 'nocapture' - unsafe + return false; + continue; + } + + default: + // The object is unsafe if it is used in any other way. + return false; + } + } + } + + // All uses of the alloca are safe, we can place it on the safe stack. + return true; +} + +/// The SafeStack pass splits the stack of each function into the +/// safe stack, which is only accessed through memory safe dereferences +/// (as determined statically), and the unsafe stack, which contains all +/// local variables that are accessed in unsafe ways. +class SafeStack : public ModulePass { + const TargetMachine *TM; + const TargetLoweringBase *TLI; + const DataLayout *DL; + + AliasAnalysis *AA; + + /// Thread-local variable that stores the unsafe stack pointer + Value *UnsafeStackPtr; + + bool haveFunctionsWithSafeStack(Module &M) { + for (Module::iterator It = M.begin(), Ie = M.end(); It != Ie; ++It) { + if (It->hasFnAttribute(Attribute::SafeStack)) + return true; + } + return false; + } + + bool doPassInitialization(Module &M); + bool runOnFunction(Function &F); + +public: + static char ID; // Pass identification, replacement for typeid. 
+ SafeStack(): ModulePass(ID), TM(nullptr), TLI(nullptr), DL(nullptr) { + initializeSafeStackPass(*PassRegistry::getPassRegistry()); + } + + SafeStack(const TargetMachine *TM) + : ModulePass(ID), TM(TM), TLI(nullptr), DL(nullptr) { + initializeSafeStackPass(*PassRegistry::getPassRegistry()); + } + + virtual void getAnalysisUsage(AnalysisUsage &AU) const { + AU.addRequired<AliasAnalysis>(); + } + + virtual bool runOnModule(Module &M) { + DEBUG(dbgs() << "[SafeStack] Module: " + << M.getModuleIdentifier() << "\n"); + + // Does the module have any functions that require safe stack? + if (!haveFunctionsWithSafeStack(M)) { + DEBUG(dbgs() << "[SafeStack] no functions to instrument\n"); + return false; // Nothing to do + } + + AA = &getAnalysis<AliasAnalysis>(); + + assert(TM != NULL && "SafeStack requires TargetMachine"); + TLI = TM->getSubtargetImpl()->getTargetLowering(); + DL = TLI->getDataLayout(); + + // Add module-level code (e.g., runtime support function prototypes) + doPassInitialization(M); + + // Add safe stack instrumentation to all functions that need it + for (Module::iterator It = M.begin(), Ie = M.end(); It != Ie; ++It) { + Function &F = *It; + DEBUG(dbgs() << "[SafeStack] Function: " << F.getName() << "\n"); + + if (F.isDeclaration()) { + DEBUG(dbgs() << "[SafeStack] function definition" + " is not available\n"); + continue; + } + + if (F.getName().startswith("llvm.") || + F.getName().startswith("__llvm__")) { + DEBUG(dbgs() << "[SafeStack] skipping an intrinsic function\n"); + continue; + } + + if (!F.hasFnAttribute(Attribute::SafeStack)) { + DEBUG(dbgs() << "[SafeStack] safestack is not requested" + " for this function\n"); + continue; + } + + + { + // Make sure the regular stack protector won't run on this function + // (safestack attribute takes precedence) + AttrBuilder B; + B.addAttribute(Attribute::StackProtect) + .addAttribute(Attribute::StackProtectReq) + .addAttribute(Attribute::StackProtectStrong); + 
F.removeAttributes(AttributeSet::FunctionIndex, AttributeSet::get( + F.getContext(), AttributeSet::FunctionIndex, B)); + } + + if (AA->onlyReadsMemory(&F)) { + // XXX: we don't protect against information leak attacks for now + DEBUG(dbgs() << "[SafeStack] function only reads memory\n"); + continue; + } + + runOnFunction(F); + DEBUG(dbgs() << "[SafeStack] safestack applied\n"); + } + + return true; + } +}; // class SafeStack + +bool SafeStack::doPassInitialization(Module &M) { + Type *Int8Ty = Type::getInt8Ty(M.getContext()); + unsigned AddressSpace, Offset; + bool Changed = false; + + // Check where the unsafe stack pointer is stored on this architecture + if (TLI->getUnsafeStackPtrLocation(AddressSpace, Offset)) { + // The unsafe stack pointer is stored at a fixed location + // (usually in the thread control block) + Constant *OffsetVal = ConstantInt::get(Type::getInt32Ty(M.getContext()), Offset); + + UnsafeStackPtr = ConstantExpr::getIntToPtr(OffsetVal, + PointerType::get(Int8Ty->getPointerTo(), AddressSpace)); + } else { + // The unsafe stack pointer is stored in a global variable with a magic name + // FIXME: make the name start with "llvm." + UnsafeStackPtr = dyn_cast_or_null<GlobalVariable>( + M.getNamedValue("__llvm__unsafe_stack_ptr")); + + if (!UnsafeStackPtr) { + // The global variable is not defined yet, define it ourselves + UnsafeStackPtr = new GlobalVariable( + /*Module=*/ M, /*Type=*/ Int8Ty->getPointerTo(), + /*isConstant=*/ false, /*Linkage=*/ GlobalValue::ExternalLinkage, + /*Initializer=*/ 0, /*Name=*/ "__llvm__unsafe_stack_ptr"); + + cast<GlobalVariable>(UnsafeStackPtr)->setThreadLocal(true); + + // TODO: should we place the unsafe stack ptr global in a special section?
+ // UnsafeStackPtr->setSection(".llvm.safestack"); + + Changed = true; + } else { + // The variable exists, check its type and attributes + if (UnsafeStackPtr->getType() != Int8Ty->getPointerTo()) { + report_fatal_error("__llvm__unsafe_stack_ptr must have void* type"); + } + + if (!cast<GlobalVariable>(UnsafeStackPtr)->isThreadLocal()) { + report_fatal_error("__llvm__unsafe_stack_ptr must be thread-local"); + } + + // TODO: check other attributes? + } + } + + return Changed; +} + +bool SafeStack::runOnFunction(Function &F) { + ++NumFunctions; + + unsigned StackAlignment = TM->getSubtargetImpl(F)->getFrameLowering()->getStackAlignment(); + + SmallVector<AllocaInst*, 16> StaticAlloca; + SmallVector<AllocaInst*, 4> DynamicAlloca; + SmallVector<ReturnInst*, 4> Returns; + + // Collect all points where the stack gets unwound and needs to be restored. + // This is only necessary because the runtime (setjmp and unwind code) is + // not aware of the unsafe stack and won't unwind/restore it properly. + // To work around this problem without changing the runtime, we insert + // instrumentation to restore the unsafe stack pointer when necessary.
+ SmallVector<Instruction*, 4> StackRestorePoints; + + Type *StackPtrTy = Type::getInt8PtrTy(F.getContext()); + Type *IntPtrTy = DL->getIntPtrType(F.getContext()); + Type *Int32Ty = Type::getInt32Ty(F.getContext()); + + // Find all static and dynamic alloca instructions that must be moved to the + // unsafe stack, all return instructions and stack restore points + for (inst_iterator It = inst_begin(&F), Ie = inst_end(&F); It != Ie; ++It) { + Instruction *I = &*It; + + if (AllocaInst *AI = dyn_cast<AllocaInst>(I)) { + ++NumAllocas; + + if (IsSafeStackAlloca(AI, DL)) + continue; + + if (AI->isStaticAlloca()) { + ++NumUnsafeStaticAllocas; + StaticAlloca.push_back(AI); + } else { + ++NumUnsafeDynamicAllocas; + DynamicAlloca.push_back(AI); + } + + } else if (ReturnInst *RI = dyn_cast<ReturnInst>(I)) { + Returns.push_back(RI); + + } else if (CallInst *CI = dyn_cast<CallInst>(I)) { + // setjmps require stack restore + if (CI->getCalledFunction() && CI->canReturnTwice()) + //CI->getCalledFunction()->getName() == "_setjmp") + StackRestorePoints.push_back(CI); + + } else if (LandingPadInst *LP = dyn_cast<LandingPadInst>(I)) { + // Exception landing pads require stack restore + StackRestorePoints.push_back(LP); + } + } + + if (StaticAlloca.empty() && DynamicAlloca.empty() && + StackRestorePoints.empty()) + return false; // Nothing to do in this function + + if (!StaticAlloca.empty() || !DynamicAlloca.empty()) + ++NumUnsafeStackFunctions; // This function has the unsafe stack + + if (!StackRestorePoints.empty()) + ++NumUnsafeStackRestorePointsFunctions; + + DIBuilder DIB(*F.getParent()); + IRBuilder<> IRB(F.getEntryBlock().getFirstInsertionPt()); + + // The top of the unsafe stack after all unsafe static allocas are allocated + Value *StaticTop = NULL; + + if (!StaticAlloca.empty()) { + // We explicitly compute and set the unsafe stack layout for all unsafe + // static alloca instructions.
We save the unsafe "base pointer" in the + prologue into a local variable and restore it in the epilogue. + + // Load the current stack pointer (we'll also use it as a base pointer) + // FIXME: use a dedicated register for it ? + Instruction *BasePointer = IRB.CreateLoad(UnsafeStackPtr, false, + "unsafe_stack_ptr"); + assert(BasePointer->getType() == StackPtrTy); + + for (SmallVectorImpl<ReturnInst*>::iterator It = Returns.begin(), + Ie = Returns.end(); It != Ie; ++It) { + IRB.SetInsertPoint(*It); + IRB.CreateStore(BasePointer, UnsafeStackPtr); + } + + // Compute maximum alignment among static objects on the unsafe stack + unsigned MaxAlignment = 0; + for (SmallVectorImpl<AllocaInst*>::iterator It = StaticAlloca.begin(), + Ie = StaticAlloca.end(); It != Ie; ++It) { + AllocaInst *AI = *It; + Type *Ty = AI->getAllocatedType(); + unsigned Align = std::max((unsigned)DL->getPrefTypeAlignment(Ty), AI->getAlignment()); + if (Align > MaxAlignment) + MaxAlignment = Align; + } + + if (MaxAlignment > StackAlignment) { + // Re-align the base pointer according to the max requested alignment + assert(isPowerOf2_32(MaxAlignment)); + IRB.SetInsertPoint(cast<Instruction>(BasePointer->getNextNode())); + BasePointer = cast<Instruction>(IRB.CreateIntToPtr( + IRB.CreateAnd(IRB.CreatePtrToInt(BasePointer, IntPtrTy), + ConstantInt::get(IntPtrTy, ~uint64_t(MaxAlignment-1))), + StackPtrTy)); + } + + // Allocate space for every unsafe static AllocaInst on the unsafe stack + int64_t StaticOffset = 0; // Current stack top + for (SmallVectorImpl<AllocaInst*>::iterator It = StaticAlloca.begin(), + Ie = StaticAlloca.end(); It != Ie; ++It) { + AllocaInst *AI = *It; + IRB.SetInsertPoint(AI); + + ConstantInt *CArraySize = cast<ConstantInt>(AI->getArraySize()); + Type *Ty = AI->getAllocatedType(); + + uint64_t Size = DL->getTypeAllocSize(Ty) * CArraySize->getZExtValue(); + if (Size == 0) Size = 1; // Don't create zero-sized stack objects.
+ + // Ensure the object is properly aligned + unsigned Align = std::max((unsigned)DL->getPrefTypeAlignment(Ty), AI->getAlignment()); + + // Add alignment + // NOTE: we ensure that BasePointer itself is aligned to >= Align + StaticOffset += Size; + StaticOffset = (StaticOffset + Align - 1) / Align * Align; + + Value *Off = IRB.CreateGEP(BasePointer, // BasePointer is i8* + ConstantInt::get(Int32Ty, -StaticOffset)); + Value *NewAI = IRB.CreateBitCast(Off, AI->getType(), AI->getName()); + if (AI->hasName() && isa<Instruction>(NewAI)) + cast<Instruction>(NewAI)->takeName(AI); + + // Replace the alloca with the new location + replaceDbgDeclareForAlloca(AI, NewAI, DIB); + AI->replaceAllUsesWith(NewAI); + AI->eraseFromParent(); + } + + // Re-align BasePointer so that our callees would see it aligned as expected + // FIXME: no need to update BasePointer in leaf functions + StaticOffset = (StaticOffset + StackAlignment - 1) + / StackAlignment * StackAlignment; + + // Update shadow stack pointer in the function epilogue + IRB.SetInsertPoint(cast<Instruction>(BasePointer->getNextNode())); + + StaticTop = IRB.CreateGEP(BasePointer, + ConstantInt::get(Int32Ty, -StaticOffset), "unsafe_stack_static_top"); + IRB.CreateStore(StaticTop, UnsafeStackPtr); + } + + IRB.SetInsertPoint( + StaticTop ? cast<Instruction>(StaticTop)->getNextNode() + : (Instruction*) F.getEntryBlock().getFirstInsertionPt()); + + // Safe stack object that stores the current unsafe stack top. It is updated + // as unsafe dynamic (non-constant-sized) allocas are allocated and freed. + // This is only needed if we need to restore stack pointer after longjmp + // or exceptions. + // FIXME: a better alternative might be to store the unsafe stack pointer + // before setjmp / invoke instructions. + AllocaInst *DynamicTop = NULL; + + if (!StackRestorePoints.empty()) { + // We need the current value of the shadow stack pointer to restore + // after longjmp or exception catching.
+ + // FIXME: in the future, this should be handled by the longjmp/exception + // runtime itself + + if (!DynamicAlloca.empty()) { + // If we also have dynamic allocas, the stack pointer value changes + // throughout the function. For now we store it in an alloca. + DynamicTop = IRB.CreateAlloca(StackPtrTy, 0, "unsafe_stack_dynamic_ptr"); + } + + if (!StaticTop) { + // We need the original unsafe stack pointer value, even if there are + // no unsafe static allocas + StaticTop = IRB.CreateLoad(UnsafeStackPtr, false, "unsafe_stack_ptr"); + } + + if (!DynamicAlloca.empty()) { + IRB.CreateStore(StaticTop, DynamicTop); + } + } + + // Handle dynamic alloca now + for (SmallVectorImpl<AllocaInst*>::iterator It = DynamicAlloca.begin(), + Ie = DynamicAlloca.end(); It != Ie; ++It) { + AllocaInst *AI = *It; + IRB.SetInsertPoint(AI); + + // Compute the new SP value (after AI) + Value *ArraySize = AI->getArraySize(); + if (ArraySize->getType() != IntPtrTy) + ArraySize = IRB.CreateIntCast(ArraySize, IntPtrTy, false); + + Type *Ty = AI->getAllocatedType(); + uint64_t TySize = DL->getTypeAllocSize(Ty); + Value *Size = IRB.CreateMul(ArraySize, ConstantInt::get(IntPtrTy, TySize)); + + Value *SP = IRB.CreatePtrToInt(IRB.CreateLoad(UnsafeStackPtr), IntPtrTy); + SP = IRB.CreateSub(SP, Size); + + // Align the SP value to satisfy the AllocaInst, type and stack alignments + unsigned Align = std::max( + std::max((unsigned)DL->getPrefTypeAlignment(Ty), AI->getAlignment()), + (unsigned) StackAlignment); + + assert(isPowerOf2_32(Align)); + Value *NewTop = IRB.CreateIntToPtr( + IRB.CreateAnd(SP, ConstantInt::get(IntPtrTy, ~uint64_t(Align-1))), + StackPtrTy); + + // Save the stack pointer + IRB.CreateStore(NewTop, UnsafeStackPtr); + if (DynamicTop) { + IRB.CreateStore(NewTop, DynamicTop); + } + + Value *NewAI = IRB.CreateIntToPtr(SP, AI->getType()); + if (AI->hasName() && isa<Instruction>(NewAI)) + NewAI->takeName(AI); + + replaceDbgDeclareForAlloca(AI, NewAI, DIB); +
AI->replaceAllUsesWith(NewAI); + AI->eraseFromParent(); + } + + if (!DynamicAlloca.empty()) { + // Now go through the instructions again, replacing stacksave/stackrestore + for (inst_iterator It = inst_begin(&F), Ie = inst_end(&F); It != Ie;) { + Instruction *I = &*(It++); + IntrinsicInst *II = dyn_cast<IntrinsicInst>(I); + if (!II) + continue; + + if (II->getIntrinsicID() == Intrinsic::stacksave) { + IRB.SetInsertPoint(II); + Instruction *LI = IRB.CreateLoad(UnsafeStackPtr); + LI->takeName(II); + II->replaceAllUsesWith(LI); + II->eraseFromParent(); + } else if (II->getIntrinsicID() == Intrinsic::stackrestore) { + IRB.SetInsertPoint(II); + Instruction *SI = IRB.CreateStore(II->getArgOperand(0), UnsafeStackPtr); + SI->takeName(II); + assert(II->use_empty()); + II->eraseFromParent(); + } + } + } + + // Restore current stack pointer after longjmp/exception catch + for (SmallVectorImpl<Instruction*>::iterator I = StackRestorePoints.begin(), + E = StackRestorePoints.end(); I != E; ++I) { + ++NumUnsafeStackRestorePoints; + + IRB.SetInsertPoint(cast<Instruction>((*I)->getNextNode())); + Value *CurrentTop = DynamicTop ? 
IRB.CreateLoad(DynamicTop) : StaticTop; + IRB.CreateStore(CurrentTop, UnsafeStackPtr); + } + + return true; +} + +} // end anonymous namespace + +char SafeStack::ID = 0; +INITIALIZE_PASS(SafeStack, "safe-stack", + "Safe Stack instrumentation pass", false, false) + +Pass *llvm::createSafeStackPass(const TargetMachine *TM) { + return new SafeStack(TM); +} diff --git a/lib/IR/Attributes.cpp b/lib/IR/Attributes.cpp index 04545ea..89bd83b 100644 --- a/lib/IR/Attributes.cpp +++ b/lib/IR/Attributes.cpp @@ -237,6 +237,8 @@ std::string Attribute::getAsString(bool InAttrGrp) const { return "sspreq"; if (hasAttribute(Attribute::StackProtectStrong)) return "sspstrong"; + if (hasAttribute(Attribute::SafeStack)) + return "safestack"; if (hasAttribute(Attribute::StructRet)) return "sret"; if (hasAttribute(Attribute::SanitizeThread)) @@ -426,6 +428,7 @@ uint64_t AttributeImpl::getAttrMask(Attribute::AttrKind Val) { case Attribute::InAlloca: return 1ULL << 43; case Attribute::NonNull: return 1ULL << 44; case Attribute::JumpTable: return 1ULL << 45; + case Attribute::SafeStack: return 1ULL << 46; case Attribute::Dereferenceable: llvm_unreachable("dereferenceable attribute not supported in raw format"); } diff --git a/lib/IR/Verifier.cpp b/lib/IR/Verifier.cpp index 1c54e9b..0f8f14c 100644 --- a/lib/IR/Verifier.cpp +++ b/lib/IR/Verifier.cpp @@ -759,6 +759,7 @@ void Verifier::VerifyAttributeTypes(AttributeSet Attrs, unsigned Idx, I->getKindAsEnum() == Attribute::StackProtect || I->getKindAsEnum() == Attribute::StackProtectReq || I->getKindAsEnum() == Attribute::StackProtectStrong || + I->getKindAsEnum() == Attribute::SafeStack || I->getKindAsEnum() == Attribute::NoRedZone || I->getKindAsEnum() == Attribute::NoImplicitFloat || I->getKindAsEnum() == Attribute::Naked || diff --git a/lib/Target/CppBackend/CPPBackend.cpp b/lib/Target/CppBackend/CPPBackend.cpp index f610fbb..5589f69 100644 --- a/lib/Target/CppBackend/CPPBackend.cpp +++ b/lib/Target/CppBackend/CPPBackend.cpp @@ -510,6 +510,7 
@@ void CppWriter::printAttributes(const AttributeSet &PAL, HANDLE_ATTR(StackProtect); HANDLE_ATTR(StackProtectReq); HANDLE_ATTR(StackProtectStrong); + HANDLE_ATTR(SafeStack); HANDLE_ATTR(NoCapture); HANDLE_ATTR(NoRedZone); HANDLE_ATTR(NoImplicitFloat); diff --git a/lib/Target/X86/X86ISelLowering.cpp b/lib/Target/X86/X86ISelLowering.cpp index d8ffc36..e231979 100644 --- a/lib/Target/X86/X86ISelLowering.cpp +++ b/lib/Target/X86/X86ISelLowering.cpp @@ -1942,6 +1942,33 @@ bool X86TargetLowering::getStackCookieLocation(unsigned &AddressSpace, return true; } +bool X86TargetLowering::getUnsafeStackPtrLocation(unsigned &AddressSpace, + unsigned &Offset) const { + if (Subtarget->isTargetLinux()) { + if (Subtarget->is64Bit()) { + // %fs:0x280, unless we're using a Kernel code model + if (getTargetMachine().getCodeModel() != CodeModel::Kernel) { + Offset = 0x280; + AddressSpace = 257; + return true; + } + } else { + // %gs:0x280, unless we're using a Kernel code model + if (getTargetMachine().getCodeModel() != CodeModel::Kernel) { + Offset = 0x280; + AddressSpace = 256; + return true; + } + } + } else if (Subtarget->isTargetDarwin()) { + // %gs:(192*sizeof(void*)) + AddressSpace = 256; + Offset = 192 * (Subtarget->getDataLayout()->getPointerSize()); + return true; + } + return false; +} + bool X86TargetLowering::isNoopAddrSpaceCast(unsigned SrcAS, unsigned DestAS) const { assert(SrcAS != DestAS && "Expected different address spaces!"); diff --git a/lib/Target/X86/X86ISelLowering.h b/lib/Target/X86/X86ISelLowering.h index e81a9d1..823479a 100644 --- a/lib/Target/X86/X86ISelLowering.h +++ b/lib/Target/X86/X86ISelLowering.h @@ -793,6 +793,12 @@ namespace llvm { bool getStackCookieLocation(unsigned &AddressSpace, unsigned &Offset) const override; + /// Return true if the target stores unsafe stack pointer at a fixed offset + /// in some non-standard address space, and populates the address space and + /// offset as appropriate. 
+    virtual bool getUnsafeStackPtrLocation(unsigned &AddressSpace,
+                                           unsigned &Offset) const;
+
     SDValue BuildFILD(SDValue Op, EVT SrcVT, SDValue Chain, SDValue StackSlot,
                       SelectionDAG &DAG) const;
diff --git a/lib/Transforms/IPO/Inliner.cpp b/lib/Transforms/IPO/Inliner.cpp
index 4ce6dfe..e0695d2 100644
--- a/lib/Transforms/IPO/Inliner.cpp
+++ b/lib/Transforms/IPO/Inliner.cpp
@@ -93,7 +93,8 @@ static void AdjustCallerSSPLevel(Function *Caller, Function *Callee) {
   // clutter to the IR.
   AttrBuilder B;
   B.addAttribute(Attribute::StackProtect)
-    .addAttribute(Attribute::StackProtectStrong);
+    .addAttribute(Attribute::StackProtectStrong)
+    .addAttribute(Attribute::StackProtectReq);
 
   AttributeSet OldSSPAttr = AttributeSet::get(Caller->getContext(),
                                               AttributeSet::FunctionIndex, B);
@@ -101,18 +102,28 @@
                CalleeAttr = Callee->getAttributes();
 
   if (CalleeAttr.hasAttribute(AttributeSet::FunctionIndex,
-                              Attribute::StackProtectReq)) {
+                              Attribute::SafeStack)) {
+    Caller->removeAttributes(AttributeSet::FunctionIndex, OldSSPAttr);
+    Caller->addFnAttr(Attribute::SafeStack);
+  } else if (CalleeAttr.hasAttribute(AttributeSet::FunctionIndex,
+                                     Attribute::StackProtectReq) &&
+             !CallerAttr.hasAttribute(AttributeSet::FunctionIndex,
+                                      Attribute::SafeStack)) {
     Caller->removeAttributes(AttributeSet::FunctionIndex, OldSSPAttr);
     Caller->addFnAttr(Attribute::StackProtectReq);
   } else if (CalleeAttr.hasAttribute(AttributeSet::FunctionIndex,
                                      Attribute::StackProtectStrong) &&
              !CallerAttr.hasAttribute(AttributeSet::FunctionIndex,
+                                      Attribute::SafeStack) &&
+             !CallerAttr.hasAttribute(AttributeSet::FunctionIndex,
                                       Attribute::StackProtectReq)) {
     Caller->removeAttributes(AttributeSet::FunctionIndex, OldSSPAttr);
     Caller->addFnAttr(Attribute::StackProtectStrong);
   } else if (CalleeAttr.hasAttribute(AttributeSet::FunctionIndex,
                                      Attribute::StackProtect) &&
             !CallerAttr.hasAttribute(AttributeSet::FunctionIndex,
+                                      Attribute::SafeStack) &&
+             !CallerAttr.hasAttribute(AttributeSet::FunctionIndex,
                                       Attribute::StackProtectReq) &&
              !CallerAttr.hasAttribute(AttributeSet::FunctionIndex,
                                       Attribute::StackProtectStrong))
diff --git a/test/CodeGen/X86/safestack.ll b/test/CodeGen/X86/safestack.ll
new file mode 100644
index 0000000..ee47ea3
--- /dev/null
+++ b/test/CodeGen/X86/safestack.ll
@@ -0,0 +1,1504 @@
+; RUN: llc -mtriple=i386-pc-linux-gnu < %s -o - | FileCheck --check-prefix=LINUX-I386 %s
+; RUN: llc -mtriple=x86_64-pc-linux-gnu < %s -o - | FileCheck --check-prefix=LINUX-X64 %s
+; RUN: llc -mtriple=x86_64-apple-darwin < %s -o - | FileCheck --check-prefix=DARWIN-X64 %s
+
+%struct.foo = type { [16 x i8] }
+%struct.foo.0 = type { [4 x i8] }
+%struct.pair = type { i32, i32 }
+%struct.nest = type { %struct.pair, %struct.pair }
+%struct.vec = type { <4 x i32> }
+%class.A = type { [2 x i8] }
+%struct.deep = type { %union.anon }
+%union.anon = type { %struct.anon }
+%struct.anon = type { %struct.anon.0 }
+%struct.anon.0 = type { %union.anon.1 }
+%union.anon.1 = type { [2 x i8] }
+%struct.small = type { i8 }
+
+@.str = private unnamed_addr constant [4 x i8] c"%s\0A\00", align 1
+
+; test1a: array of [16 x i8]
+; no safestack attribute
+; Requires no protector.
+define void @test1a(i8* %a) nounwind uwtable {
+entry:
+; LINUX-I386: test1a:
+; LINUX-I386-NOT: movl __llvm__unsafe_stack_ptr
+; LINUX-I386: .cfi_endproc
+
+; LINUX-X64: test1a:
+; LINUX-X64-NOT: movq %fs:640
+; LINUX-X64: .cfi_endproc
+
+; DARWIN-X64: test1a:
+; DARWIN-X64-NOT: movq ___llvm__unsafe_stack_ptr
+; DARWIN-X64: .cfi_endproc
+  %a.addr = alloca i8*, align 8
+  %buf = alloca [16 x i8], align 16
+  store i8* %a, i8** %a.addr, align 8
+  %arraydecay = getelementptr inbounds [16 x i8]* %buf, i32 0, i32 0
+  %0 = load i8** %a.addr, align 8
+  %call = call i8* @strcpy(i8* %arraydecay, i8* %0)
+  %arraydecay1 = getelementptr inbounds [16 x i8]* %buf, i32 0, i32 0
+  %call2 = call i32 (i8*, ...)* @printf(i8* getelementptr inbounds ([4 x i8]* @.str, i32 0, i32 0), i8* %arraydecay1)
+  ret void
+}
+
+; test1b: array of [16 x i8]
+; safestack attribute
+; Requires protector.
+define void @test1b(i8* %a) nounwind uwtable safestack {
+entry:
+; LINUX-I386: test1b:
+; LINUX-I386: movl __llvm__unsafe_stack_ptr
+; LINUX-I386-NEXT: movl %gs:
+
+; LINUX-I386: .cfi_endproc
+
+; LINUX-X64: test1b:
+; LINUX-X64: movq %fs:640
+; LINUX-X64: .cfi_endproc
+
+; DARWIN-X64: test1b:
+; DARWIN-X64: movq ___llvm__unsafe_stack_ptr
+; DARWIN-X64: .cfi_endproc
+  %a.addr = alloca i8*, align 8
+  %buf = alloca [16 x i8], align 16
+  store i8* %a, i8** %a.addr, align 8
+  %arraydecay = getelementptr inbounds [16 x i8]* %buf, i32 0, i32 0
+  %0 = load i8** %a.addr, align 8
+  %call = call i8* @strcpy(i8* %arraydecay, i8* %0)
+  %arraydecay1 = getelementptr inbounds [16 x i8]* %buf, i32 0, i32 0
+  %call2 = call i32 (i8*, ...)* @printf(i8* getelementptr inbounds ([4 x i8]* @.str, i32 0, i32 0), i8* %arraydecay1)
+  ret void
+}
+
+; test2a: struct { [16 x i8] }
+; no safestack attribute
+; Requires no protector.
+define void @test2a(i8* %a) nounwind uwtable {
+entry:
+; LINUX-I386: test2a:
+; LINUX-I386-NOT: movl __llvm__unsafe_stack_ptr
+; LINUX-I386: .cfi_endproc
+
+; LINUX-X64: test2a:
+; LINUX-X64-NOT: movq %fs:640
+; LINUX-X64: .cfi_endproc
+
+; DARWIN-X64: test2a:
+; DARWIN-X64-NOT: movq ___llvm__unsafe_stack_ptr
+; DARWIN-X64: .cfi_endproc
+  %a.addr = alloca i8*, align 8
+  %b = alloca %struct.foo, align 1
+  store i8* %a, i8** %a.addr, align 8
+  %buf = getelementptr inbounds %struct.foo* %b, i32 0, i32 0
+  %arraydecay = getelementptr inbounds [16 x i8]* %buf, i32 0, i32 0
+  %0 = load i8** %a.addr, align 8
+  %call = call i8* @strcpy(i8* %arraydecay, i8* %0)
+  %buf1 = getelementptr inbounds %struct.foo* %b, i32 0, i32 0
+  %arraydecay2 = getelementptr inbounds [16 x i8]* %buf1, i32 0, i32 0
+  %call3 = call i32 (i8*, ...)* @printf(i8* getelementptr inbounds ([4 x i8]* @.str, i32 0, i32 0), i8* %arraydecay2)
+  ret void
+}
+
+; test2b: struct { [16 x i8] }
+; safestack attribute
+; Requires protector.
+define void @test2b(i8* %a) nounwind uwtable safestack {
+entry:
+; LINUX-I386: test2b:
+; LINUX-I386: movl __llvm__unsafe_stack_ptr
+; LINUX-I386-NEXT: movl %gs:
+
+; LINUX-I386: .cfi_endproc
+
+; LINUX-X64: test2b:
+; LINUX-X64: movq %fs:640
+; LINUX-X64: .cfi_endproc
+
+; DARWIN-X64: test2b:
+; DARWIN-X64: movq ___llvm__unsafe_stack_ptr
+; DARWIN-X64: .cfi_endproc
+  %a.addr = alloca i8*, align 8
+  %b = alloca %struct.foo, align 1
+  store i8* %a, i8** %a.addr, align 8
+  %buf = getelementptr inbounds %struct.foo* %b, i32 0, i32 0
+  %arraydecay = getelementptr inbounds [16 x i8]* %buf, i32 0, i32 0
+  %0 = load i8** %a.addr, align 8
+  %call = call i8* @strcpy(i8* %arraydecay, i8* %0)
+  %buf1 = getelementptr inbounds %struct.foo* %b, i32 0, i32 0
+  %arraydecay2 = getelementptr inbounds [16 x i8]* %buf1, i32 0, i32 0
+  %call3 = call i32 (i8*, ...)* @printf(i8* getelementptr inbounds ([4 x i8]* @.str, i32 0, i32 0), i8* %arraydecay2)
+  ret void
+}
+
+; test3a: array of [4 x i8]
+; no safestack attribute
+; Requires no protector.
+define void @test3a(i8* %a) nounwind uwtable {
+entry:
+; LINUX-I386: test3a:
+; LINUX-I386-NOT: movl __llvm__unsafe_stack_ptr
+; LINUX-I386: .cfi_endproc
+
+; LINUX-X64: test3a:
+; LINUX-X64-NOT: movq %fs:640
+; LINUX-X64: .cfi_endproc
+
+; DARWIN-X64: test3a:
+; DARWIN-X64-NOT: movq ___llvm__unsafe_stack_ptr
+; DARWIN-X64: .cfi_endproc
+  %a.addr = alloca i8*, align 8
+  %buf = alloca [4 x i8], align 1
+  store i8* %a, i8** %a.addr, align 8
+  %arraydecay = getelementptr inbounds [4 x i8]* %buf, i32 0, i32 0
+  %0 = load i8** %a.addr, align 8
+  %call = call i8* @strcpy(i8* %arraydecay, i8* %0)
+  %arraydecay1 = getelementptr inbounds [4 x i8]* %buf, i32 0, i32 0
+  %call2 = call i32 (i8*, ...)* @printf(i8* getelementptr inbounds ([4 x i8]* @.str, i32 0, i32 0), i8* %arraydecay1)
+  ret void
+}
+
+; test3b: array [4 x i8]
+; safestack attribute
+; Requires protector.
+define void @test3b(i8* %a) nounwind uwtable safestack {
+entry:
+; LINUX-I386: test3b:
+; LINUX-I386: movl __llvm__unsafe_stack_ptr
+; LINUX-I386-NEXT: movl %gs:
+
+; LINUX-I386: .cfi_endproc
+
+; LINUX-X64: test3b:
+; LINUX-X64: movq %fs:640
+; LINUX-X64: .cfi_endproc
+
+; DARWIN-X64: test3b:
+; DARWIN-X64: movq ___llvm__unsafe_stack_ptr
+; DARWIN-X64: .cfi_endproc
+  %a.addr = alloca i8*, align 8
+  %buf = alloca [4 x i8], align 1
+  store i8* %a, i8** %a.addr, align 8
+  %arraydecay = getelementptr inbounds [4 x i8]* %buf, i32 0, i32 0
+  %0 = load i8** %a.addr, align 8
+  %call = call i8* @strcpy(i8* %arraydecay, i8* %0)
+  %arraydecay1 = getelementptr inbounds [4 x i8]* %buf, i32 0, i32 0
+  %call2 = call i32 (i8*, ...)* @printf(i8* getelementptr inbounds ([4 x i8]* @.str, i32 0, i32 0), i8* %arraydecay1)
+  ret void
+}
+
+; test4a: struct { [4 x i8] }
+; no safestack attribute
+; Requires no protector.
+define void @test4a(i8* %a) nounwind uwtable {
+entry:
+; LINUX-I386: test4a:
+; LINUX-I386-NOT: movl __llvm__unsafe_stack_ptr
+; LINUX-I386: .cfi_endproc
+
+; LINUX-X64: test4a:
+; LINUX-X64-NOT: movq %fs:640
+; LINUX-X64: .cfi_endproc
+
+; DARWIN-X64: test4a:
+; DARWIN-X64-NOT: movq ___llvm__unsafe_stack_ptr
+; DARWIN-X64: .cfi_endproc
+  %a.addr = alloca i8*, align 8
+  %b = alloca %struct.foo.0, align 1
+  store i8* %a, i8** %a.addr, align 8
+  %buf = getelementptr inbounds %struct.foo.0* %b, i32 0, i32 0
+  %arraydecay = getelementptr inbounds [4 x i8]* %buf, i32 0, i32 0
+  %0 = load i8** %a.addr, align 8
+  %call = call i8* @strcpy(i8* %arraydecay, i8* %0)
+  %buf1 = getelementptr inbounds %struct.foo.0* %b, i32 0, i32 0
+  %arraydecay2 = getelementptr inbounds [4 x i8]* %buf1, i32 0, i32 0
+  %call3 = call i32 (i8*, ...)* @printf(i8* getelementptr inbounds ([4 x i8]* @.str, i32 0, i32 0), i8* %arraydecay2)
+  ret void
+}
+
+; test4b: struct { [4 x i8] }
+; safestack attribute
+; Requires protector.
+define void @test4b(i8* %a) nounwind uwtable safestack {
+entry:
+; LINUX-I386: test4b:
+; LINUX-I386: movl __llvm__unsafe_stack_ptr
+; LINUX-I386-NEXT: movl %gs:
+
+; LINUX-I386: .cfi_endproc
+
+; LINUX-X64: test4b:
+; LINUX-X64: movq %fs:640
+; LINUX-X64: .cfi_endproc
+
+; DARWIN-X64: test4b:
+; DARWIN-X64: movq ___llvm__unsafe_stack_ptr
+; DARWIN-X64: .cfi_endproc
+  %a.addr = alloca i8*, align 8
+  %b = alloca %struct.foo.0, align 1
+  store i8* %a, i8** %a.addr, align 8
+  %buf = getelementptr inbounds %struct.foo.0* %b, i32 0, i32 0
+  %arraydecay = getelementptr inbounds [4 x i8]* %buf, i32 0, i32 0
+  %0 = load i8** %a.addr, align 8
+  %call = call i8* @strcpy(i8* %arraydecay, i8* %0)
+  %buf1 = getelementptr inbounds %struct.foo.0* %b, i32 0, i32 0
+  %arraydecay2 = getelementptr inbounds [4 x i8]* %buf1, i32 0, i32 0
+  %call3 = call i32 (i8*, ...)* @printf(i8* getelementptr inbounds ([4 x i8]* @.str, i32 0, i32 0), i8* %arraydecay2)
+  ret void
+}
+
+; test5a: no arrays / no nested arrays
+; no safestack attribute
+; Requires no protector.
+define void @test5a(i8* %a) nounwind uwtable {
+entry:
+; LINUX-I386: test5a:
+; LINUX-I386-NOT: movl __llvm__unsafe_stack_ptr
+; LINUX-I386: .cfi_endproc
+
+; LINUX-X64: test5a:
+; LINUX-X64-NOT: movq %fs:640
+; LINUX-X64: .cfi_endproc
+
+; DARWIN-X64: test5a:
+; DARWIN-X64-NOT: movq ___llvm__unsafe_stack_ptr
+; DARWIN-X64: .cfi_endproc
+  %a.addr = alloca i8*, align 8
+  store i8* %a, i8** %a.addr, align 8
+  %0 = load i8** %a.addr, align 8
+  %call = call i32 (i8*, ...)* @printf(i8* getelementptr inbounds ([4 x i8]* @.str, i32 0, i32 0), i8* %0)
+  ret void
+}
+
+; test5b: no arrays / no nested arrays
+; safestack attribute
+; Requires no protector.
+define void @test5b(i8* %a) nounwind uwtable safestack {
+entry:
+; LINUX-I386: test5b:
+; LINUX-I386-NOT: movl __llvm__unsafe_stack_ptr
+; LINUX-I386: .cfi_endproc
+
+; LINUX-X64: test5b:
+; LINUX-X64-NOT: movq %fs:640
+; LINUX-X64: .cfi_endproc
+
+; DARWIN-X64: test5b:
+; DARWIN-X64-NOT: movq ___llvm__unsafe_stack_ptr
+; DARWIN-X64: .cfi_endproc
+  %a.addr = alloca i8*, align 8
+  store i8* %a, i8** %a.addr, align 8
+  %0 = load i8** %a.addr, align 8
+  %call = call i32 (i8*, ...)* @printf(i8* getelementptr inbounds ([4 x i8]* @.str, i32 0, i32 0), i8* %0)
+  ret void
+}
+
+; test6a: Address-of local taken (j = &a)
+; no safestack attribute
+; Requires no protector.
+define void @test6a() nounwind uwtable {
+entry:
+; LINUX-I386: test6a:
+; LINUX-I386-NOT: movl __llvm__unsafe_stack_ptr
+; LINUX-I386: .cfi_endproc
+
+; LINUX-X64: test6a:
+; LINUX-X64-NOT: movq %fs:640
+; LINUX-X64: .cfi_endproc
+
+; DARWIN-X64: test6a:
+; DARWIN-X64-NOT: movq ___llvm__unsafe_stack_ptr
+; DARWIN-X64: .cfi_endproc
+  %retval = alloca i32, align 4
+  %a = alloca i32, align 4
+  %j = alloca i32*, align 8
+  store i32 0, i32* %retval
+  %0 = load i32* %a, align 4
+  %add = add nsw i32 %0, 1
+  store i32 %add, i32* %a, align 4
+  store i32* %a, i32** %j, align 8
+  ret void
+}
+
+; test6b: Address-of local taken (j = &a)
+; safestack attribute
+; Requires protector.
+define void @test6b() nounwind uwtable safestack {
+entry:
+; LINUX-I386: test6b:
+; LINUX-I386: movl __llvm__unsafe_stack_ptr
+; LINUX-I386-NEXT: movl %gs:
+
+; LINUX-I386: .cfi_endproc
+
+; LINUX-X64: test6b:
+; LINUX-X64: movq %fs:640
+; LINUX-X64: .cfi_endproc
+
+; DARWIN-X64: test6b:
+; DARWIN-X64: movq ___llvm__unsafe_stack_ptr
+; DARWIN-X64: .cfi_endproc
+  %retval = alloca i32, align 4
+  %a = alloca i32, align 4
+  %j = alloca i32*, align 8
+  store i32 0, i32* %retval
+  %0 = load i32* %a, align 4
+  %add = add nsw i32 %0, 1
+  store i32 %add, i32* %a, align 4
+  store i32* %a, i32** %j, align 8
+  ret void
+}
+
+; test7a: PtrToInt Cast
+; no safestack attribute
+; Requires no protector.
+define void @test7a() nounwind uwtable readnone {
+entry:
+; LINUX-I386: test7a:
+; LINUX-I386-NOT: movl __llvm__unsafe_stack_ptr
+; LINUX-I386: .cfi_endproc
+
+; LINUX-X64: test7a:
+; LINUX-X64-NOT: movq %fs:640
+; LINUX-X64: .cfi_endproc
+
+; DARWIN-X64: test7a:
+; DARWIN-X64-NOT: movq ___llvm__unsafe_stack_ptr
+; DARWIN-X64: .cfi_endproc
+  %a = alloca i32, align 4
+  %0 = ptrtoint i32* %a to i64
+  %call = call i32 (i8*, ...)* @printf(i8* getelementptr inbounds ([4 x i8]* @.str, i32 0, i32 0), i64 %0)
+  ret void
+}
+
+; test7b: PtrToInt Cast
+; safestack attribute
+; Requires no protector.
+define void @test7b() nounwind uwtable readnone safestack {
+entry:
+; LINUX-I386: test7b:
+; LINUX-I386-NOT: movl __llvm__unsafe_stack_ptr
+; LINUX-I386: .cfi_endproc
+
+; LINUX-X64: test7b:
+; LINUX-X64-NOT: movq %fs:640
+; LINUX-X64: .cfi_endproc
+
+; DARWIN-X64: test7b:
+; DARWIN-X64-NOT: movq ___llvm__unsafe_stack_ptr
+; DARWIN-X64: .cfi_endproc
+  %a = alloca i32, align 4
+  %0 = ptrtoint i32* %a to i64
+  %call = call i32 (i8*, ...)* @printf(i8* getelementptr inbounds ([4 x i8]* @.str, i32 0, i32 0), i64 %0)
+  ret void
+}
+
+; test8a: Passing addr-of to function call
+; no safestack attribute
+; Requires no protector.
+define void @test8a() nounwind uwtable {
+entry:
+; LINUX-I386: test8a:
+; LINUX-I386-NOT: movl __llvm__unsafe_stack_ptr
+; LINUX-I386: .cfi_endproc
+
+; LINUX-X64: test8a:
+; LINUX-X64-NOT: movq %fs:640
+; LINUX-X64: .cfi_endproc
+
+; DARWIN-X64: test8a:
+; DARWIN-X64-NOT: movq ___llvm__unsafe_stack_ptr
+; DARWIN-X64: .cfi_endproc
+  %b = alloca i32, align 4
+  call void @funcall(i32* %b) nounwind
+  ret void
+}
+
+; test8b: Passing addr-of to function call
+; safestack attribute
+; Requires protector.
+define void @test8b() nounwind uwtable safestack {
+entry:
+; LINUX-I386: test8b:
+; LINUX-I386: movl __llvm__unsafe_stack_ptr
+; LINUX-I386-NEXT: movl %gs:
+
+; LINUX-I386: .cfi_endproc
+
+; LINUX-X64: test8b:
+; LINUX-X64: movq %fs:640
+; LINUX-X64: .cfi_endproc
+
+; DARWIN-X64: test8b:
+; DARWIN-X64: movq ___llvm__unsafe_stack_ptr
+; DARWIN-X64: .cfi_endproc
+  %b = alloca i32, align 4
+  call void @funcall(i32* %b) nounwind
+  ret void
+}
+
+; test9a: Addr-of in select instruction
+; no safestack attribute
+; Requires no protector.
+define void @test9a() nounwind uwtable {
+entry:
+; LINUX-I386: test9a:
+; LINUX-I386-NOT: movl __llvm__unsafe_stack_ptr
+; LINUX-I386: .cfi_endproc
+
+; LINUX-X64: test9a:
+; LINUX-X64-NOT: movq %fs:640
+; LINUX-X64: .cfi_endproc
+
+; DARWIN-X64: test9a:
+; DARWIN-X64-NOT: movq ___llvm__unsafe_stack_ptr
+; DARWIN-X64: .cfi_endproc
+  %x = alloca double, align 8
+  %call = call double @testi_aux() nounwind
+  store double %call, double* %x, align 8
+  %cmp2 = fcmp ogt double %call, 0.000000e+00
+  %y.1 = select i1 %cmp2, double* %x, double* null
+  %call2 = call i32 (i8*, ...)* @printf(i8* getelementptr inbounds ([4 x i8]* @.str, i32 0, i32 0), double* %y.1)
+  ret void
+}
+
+; test9b: Addr-of in select instruction
+; safestack attribute
+; Requires protector.
+define void @test9b() nounwind uwtable safestack {
+entry:
+; LINUX-I386: test9b:
+; LINUX-I386: movl __llvm__unsafe_stack_ptr
+; LINUX-I386-NEXT: movl %gs:
+
+; LINUX-I386: .cfi_endproc
+
+; LINUX-X64: test9b:
+; LINUX-X64: movq %fs:640
+; LINUX-X64: .cfi_endproc
+
+; DARWIN-X64: test9b:
+; DARWIN-X64: movq ___llvm__unsafe_stack_ptr
+; DARWIN-X64: .cfi_endproc
+  %x = alloca double, align 8
+  %call = call double @testi_aux() nounwind
+  store double %call, double* %x, align 8
+  %cmp2 = fcmp ogt double %call, 0.000000e+00
+  %y.1 = select i1 %cmp2, double* %x, double* null
+  %call2 = call i32 (i8*, ...)* @printf(i8* getelementptr inbounds ([4 x i8]* @.str, i32 0, i32 0), double* %y.1)
+  ret void
+}
+
+; test10a: Addr-of in phi instruction
+; no safestack attribute
+; Requires no protector.
+define void @test10a() nounwind uwtable {
+entry:
+; LINUX-I386: test10a:
+; LINUX-I386-NOT: movl __llvm__unsafe_stack_ptr
+; LINUX-I386: .cfi_endproc
+
+; LINUX-X64: test10a:
+; LINUX-X64-NOT: movq %fs:640
+; LINUX-X64: .cfi_endproc
+
+; DARWIN-X64: test10a:
+; DARWIN-X64-NOT: movq ___llvm__unsafe_stack_ptr
+; DARWIN-X64: .cfi_endproc
+  %x = alloca double, align 8
+  %call = call double @testi_aux() nounwind
+  store double %call, double* %x, align 8
+  %cmp = fcmp ogt double %call, 3.140000e+00
+  br i1 %cmp, label %if.then, label %if.else
+
+if.then: ; preds = %entry
+  %call1 = call double @testi_aux() nounwind
+  store double %call1, double* %x, align 8
+  br label %if.end4
+
+if.else: ; preds = %entry
+  %cmp2 = fcmp ogt double %call, 1.000000e+00
+  br i1 %cmp2, label %if.then3, label %if.end4
+
+if.then3: ; preds = %if.else
+  br label %if.end4
+
+if.end4: ; preds = %if.else, %if.then3, %if.then
+  %y.0 = phi double* [ null, %if.then ], [ %x, %if.then3 ], [ null, %if.else ]
+  %call5 = call i32 (i8*, ...)* @printf(i8* getelementptr inbounds ([4 x i8]* @.str, i64 0, i64 0), double* %y.0) nounwind
+  ret void
+}
+
+; test10b: Addr-of in phi instruction
+; safestack attribute
+; Requires protector.
+define void @test10b() nounwind uwtable safestack {
+entry:
+; LINUX-I386: test10b:
+; LINUX-I386: movl __llvm__unsafe_stack_ptr
+; LINUX-I386-NEXT: movl %gs:
+
+; LINUX-I386: .cfi_endproc
+
+; LINUX-X64: test10b:
+; LINUX-X64: movq %fs:640
+; LINUX-X64: .cfi_endproc
+
+; DARWIN-X64: test10b:
+; DARWIN-X64: movq ___llvm__unsafe_stack_ptr
+; DARWIN-X64: .cfi_endproc
+  %x = alloca double, align 8
+  %call = call double @testi_aux() nounwind
+  store double %call, double* %x, align 8
+  %cmp = fcmp ogt double %call, 3.140000e+00
+  br i1 %cmp, label %if.then, label %if.else
+
+if.then: ; preds = %entry
+  %call1 = call double @testi_aux() nounwind
+  store double %call1, double* %x, align 8
+  br label %if.end4
+
+if.else: ; preds = %entry
+  %cmp2 = fcmp ogt double %call, 1.000000e+00
+  br i1 %cmp2, label %if.then3, label %if.end4
+
+if.then3: ; preds = %if.else
+  br label %if.end4
+
+if.end4: ; preds = %if.else, %if.then3, %if.then
+  %y.0 = phi double* [ null, %if.then ], [ %x, %if.then3 ], [ null, %if.else ]
+  %call5 = call i32 (i8*, ...)* @printf(i8* getelementptr inbounds ([4 x i8]* @.str, i64 0, i64 0), double* %y.0) nounwind
+  ret void
+}
+
+; test11a: Addr-of struct element. (GEP followed by store).
+; no safestack attribute
+; Requires no protector.
+define void @test11a() nounwind uwtable {
+entry:
+; LINUX-I386: test11a:
+; LINUX-I386-NOT: movl __llvm__unsafe_stack_ptr
+; LINUX-I386: .cfi_endproc
+
+; LINUX-X64: test11a:
+; LINUX-X64-NOT: movq %fs:640
+; LINUX-X64: .cfi_endproc
+
+; DARWIN-X64: test11a:
+; DARWIN-X64-NOT: movq ___llvm__unsafe_stack_ptr
+; DARWIN-X64: .cfi_endproc
+  %c = alloca %struct.pair, align 4
+  %b = alloca i32*, align 8
+  %y = getelementptr inbounds %struct.pair* %c, i32 0, i32 1
+  store i32* %y, i32** %b, align 8
+  %0 = load i32** %b, align 8
+  %call = call i32 (i8*, ...)* @printf(i8* getelementptr inbounds ([4 x i8]* @.str, i32 0, i32 0), i32* %0)
+  ret void
+}
+
+; test11b: Addr-of struct element. (GEP followed by store).
+; safestack attribute
+; Requires protector.
+define void @test11b() nounwind uwtable safestack {
+entry:
+; LINUX-I386: test11b:
+; LINUX-I386: movl __llvm__unsafe_stack_ptr
+; LINUX-I386-NEXT: movl %gs:
+
+; LINUX-I386: .cfi_endproc
+
+; LINUX-X64: test11b:
+; LINUX-X64: movq %fs:640
+; LINUX-X64: .cfi_endproc
+
+; DARWIN-X64: test11b:
+; DARWIN-X64: movq ___llvm__unsafe_stack_ptr
+; DARWIN-X64: .cfi_endproc
+  %c = alloca %struct.pair, align 4
+  %b = alloca i32*, align 8
+  %y = getelementptr inbounds %struct.pair* %c, i32 0, i32 1
+  store i32* %y, i32** %b, align 8
+  %0 = load i32** %b, align 8
+  %call = call i32 (i8*, ...)* @printf(i8* getelementptr inbounds ([4 x i8]* @.str, i32 0, i32 0), i32* %0)
+  ret void
+}
+
+; test12a: Addr-of struct element, GEP followed by ptrtoint.
+; no safestack attribute
+; Requires no protector.
+define void @test12a() nounwind uwtable {
+entry:
+; LINUX-I386: test12a:
+; LINUX-I386-NOT: movl __llvm__unsafe_stack_ptr
+; LINUX-I386: .cfi_endproc
+
+; LINUX-X64: test12a:
+; LINUX-X64-NOT: movq %fs:640
+; LINUX-X64: .cfi_endproc
+
+; DARWIN-X64: test12a:
+; DARWIN-X64-NOT: movq ___llvm__unsafe_stack_ptr
+; DARWIN-X64: .cfi_endproc
+  %c = alloca %struct.pair, align 4
+  %b = alloca i32*, align 8
+  %y = getelementptr inbounds %struct.pair* %c, i32 0, i32 1
+  %0 = ptrtoint i32* %y to i64
+  %call = call i32 (i8*, ...)* @printf(i8* getelementptr inbounds ([4 x i8]* @.str, i32 0, i32 0), i64 %0)
+  ret void
+}
+
+; test12b: Addr-of struct element, GEP followed by ptrtoint.
+; safestack attribute
+; Requires protector.
+define void @test12b() nounwind uwtable safestack {
+entry:
+; LINUX-I386: test12b:
+; LINUX-I386: movl __llvm__unsafe_stack_ptr
+; LINUX-I386-NEXT: movl %gs:
+
+; LINUX-I386: .cfi_endproc
+
+; LINUX-X64: test12b:
+; LINUX-X64: movq %fs:640
+; LINUX-X64: .cfi_endproc
+
+; DARWIN-X64: test12b:
+; DARWIN-X64: movq ___llvm__unsafe_stack_ptr
+; DARWIN-X64: .cfi_endproc
+  %c = alloca %struct.pair, align 4
+  %b = alloca i32*, align 8
+  %y = getelementptr inbounds %struct.pair* %c, i32 0, i32 1
+  %0 = ptrtoint i32* %y to i64
+  %call = call i32 (i8*, ...)* @printf(i8* getelementptr inbounds ([4 x i8]* @.str, i32 0, i32 0), i64 %0)
+  ret void
+}
+
+; test13a: Addr-of struct element, GEP followed by callinst.
+; no safestack attribute
+; Requires no protector.
+define void @test13a() nounwind uwtable {
+entry:
+; LINUX-I386: test13a:
+; LINUX-I386-NOT: movl __llvm__unsafe_stack_ptr
+; LINUX-I386: .cfi_endproc
+
+; LINUX-X64: test13a:
+; LINUX-X64-NOT: movq %fs:640
+; LINUX-X64: .cfi_endproc
+
+; DARWIN-X64: test13a:
+; DARWIN-X64-NOT: movq ___llvm__unsafe_stack_ptr
+; DARWIN-X64: .cfi_endproc
+  %c = alloca %struct.pair, align 4
+  %y = getelementptr inbounds %struct.pair* %c, i64 0, i32 1
+  %call = call i32 (i8*, ...)* @printf(i8* getelementptr inbounds ([4 x i8]* @.str, i64 0, i64 0), i32* %y) nounwind
+  ret void
+}
+
+; test13b: Addr-of struct element, GEP followed by callinst.
+; safestack attribute
+; Requires protector.
+define void @test13b() nounwind uwtable safestack {
+entry:
+; LINUX-I386: test13b:
+; LINUX-I386: movl __llvm__unsafe_stack_ptr
+; LINUX-I386-NEXT: movl %gs:
+
+; LINUX-I386: .cfi_endproc
+
+; LINUX-X64: test13b:
+; LINUX-X64: movq %fs:640
+; LINUX-X64: .cfi_endproc
+
+; DARWIN-X64: test13b:
+; DARWIN-X64: movq ___llvm__unsafe_stack_ptr
+; DARWIN-X64: .cfi_endproc
+  %c = alloca %struct.pair, align 4
+  %y = getelementptr inbounds %struct.pair* %c, i64 0, i32 1
+  %call = call i32 (i8*, ...)* @printf(i8* getelementptr inbounds ([4 x i8]* @.str, i64 0, i64 0), i32* %y) nounwind
+  ret void
+}
+
+; test14a: Addr-of a local, optimized into a GEP (e.g., &a - 12)
+; no safestack attribute
+; Requires no protector.
+define void @test14a() nounwind uwtable {
+entry:
+; LINUX-I386: test14a:
+; LINUX-I386-NOT: movl __llvm__unsafe_stack_ptr
+; LINUX-I386: .cfi_endproc
+
+; LINUX-X64: test14a:
+; LINUX-X64-NOT: movq %fs:640
+; LINUX-X64: .cfi_endproc
+
+; DARWIN-X64: test14a:
+; DARWIN-X64-NOT: movq ___llvm__unsafe_stack_ptr
+; DARWIN-X64: .cfi_endproc
+  %a = alloca i32, align 4
+  %add.ptr5 = getelementptr inbounds i32* %a, i64 -12
+  %call = call i32 (i8*, ...)* @printf(i8* getelementptr inbounds ([4 x i8]* @.str, i64 0, i64 0), i32* %add.ptr5) nounwind
+  ret void
+}
+
+; test14b: Addr-of a local, optimized into a GEP (e.g., &a - 12)
+; safestack attribute
+; Requires protector.
+define void @test14b() nounwind uwtable safestack {
+entry:
+; LINUX-I386: test14b:
+; LINUX-I386: movl __llvm__unsafe_stack_ptr
+; LINUX-I386-NEXT: movl %gs:
+
+; LINUX-I386: .cfi_endproc
+
+; LINUX-X64: test14b:
+; LINUX-X64: movq %fs:640
+; LINUX-X64: .cfi_endproc
+
+; DARWIN-X64: test14b:
+; DARWIN-X64: movq ___llvm__unsafe_stack_ptr
+; DARWIN-X64: .cfi_endproc
+  %a = alloca i32, align 4
+  %add.ptr5 = getelementptr inbounds i32* %a, i64 -12
+  %call = call i32 (i8*, ...)* @printf(i8* getelementptr inbounds ([4 x i8]* @.str, i64 0, i64 0), i32* %add.ptr5) nounwind
+  ret void
+}
+
+; test15a: Addr-of a local cast to a ptr of a different type
+; (e.g., int a; ... ; float *b = &a;)
+; no safestack attribute
+; Requires no protector.
+define void @test15a() nounwind uwtable {
+entry:
+; LINUX-I386: test15a:
+; LINUX-I386-NOT: movl __llvm__unsafe_stack_ptr
+; LINUX-I386: .cfi_endproc
+
+; LINUX-X64: test15a:
+; LINUX-X64-NOT: movq %fs:640
+; LINUX-X64: .cfi_endproc
+
+; DARWIN-X64: test15a:
+; DARWIN-X64-NOT: movq ___llvm__unsafe_stack_ptr
+; DARWIN-X64: .cfi_endproc
+  %a = alloca i32, align 4
+  %b = alloca float*, align 8
+  store i32 0, i32* %a, align 4
+  %0 = bitcast i32* %a to float*
+  store float* %0, float** %b, align 8
+  %1 = load float** %b, align 8
+  %call = call i32 (i8*, ...)* @printf(i8* getelementptr inbounds ([4 x i8]* @.str, i32 0, i32 0), float* %1)
+  ret void
+}
+
+; test15b: Addr-of a local cast to a ptr of a different type
+; (e.g., int a; ... ; float *b = &a;)
+; safestack attribute
+; Requires protector.
+define void @test15b() nounwind uwtable safestack { +entry: +; LINUX-I386: test15b: +; LINUX-I386: movl __llvm__unsafe_stack_ptr +; LINUX-I386-NEXT: movl %gs: + +; LINUX-I386: .cfi_endproc + +; LINUX-X64: test15b: +; LINUX-X64: movq %fs:640 +; LINUX-X64: .cfi_endproc + +; DARWIN-X64: test15b: +; DARWIN-X64: movq ___llvm__unsafe_stack_ptr +; DARWIN-X64: .cfi_endproc + %a = alloca i32, align 4 + %b = alloca float*, align 8 + store i32 0, i32* %a, align 4 + %0 = bitcast i32* %a to float* + store float* %0, float** %b, align 8 + %1 = load float** %b, align 8 + %call = call i32 (i8*, ...)* @printf(i8* getelementptr inbounds ([4 x i8]* @.str, i32 0, i32 0), float* %1) + ret void +} + +; test16a: Addr-of a local cast to a ptr of a different type (optimized) +; (e.g., int a; ... ; float *b = &a;) +; no safestack attribute +; Requires no protector. +define void @test16a() nounwind uwtable { +entry: +; LINUX-I386: test16a: +; LINUX-I386-NOT: movl __llvm__unsafe_stack_ptr +; LINUX-I386: .cfi_endproc + +; LINUX-X64: test16a: +; LINUX-X64-NOT: movq %fs:640 +; LINUX-X64: .cfi_endproc + +; DARWIN-X64: test16a: +; DARWIN-X64-NOT: movq ___llvm__unsafe_stack_ptr +; DARWIN-X64: .cfi_endproc + %a = alloca i32, align 4 + store i32 0, i32* %a, align 4 + %0 = bitcast i32* %a to float* + call void @funfloat(float* %0) nounwind + ret void +} + +; test16b: Addr-of a local cast to a ptr of a different type (optimized) +; (e.g., int a; ... ; float *b = &a;) +; safestack attribute +; Requires protector. 
+define void @test16b() nounwind uwtable safestack {
+entry:
+; LINUX-I386: test16b:
+; LINUX-I386: movl __llvm__unsafe_stack_ptr
+; LINUX-I386-NEXT: movl %gs:
+
+; LINUX-I386: .cfi_endproc
+
+; LINUX-X64: test16b:
+; LINUX-X64: movq %fs:640
+; LINUX-X64: .cfi_endproc
+
+; DARWIN-X64: test16b:
+; DARWIN-X64: movq ___llvm__unsafe_stack_ptr
+; DARWIN-X64: .cfi_endproc
+  %a = alloca i32, align 4
+  store i32 0, i32* %a, align 4
+  %0 = bitcast i32* %a to float*
+  call void @funfloat(float* %0) nounwind
+  ret void
+}
+
+; test17a: Addr-of a vector nested in a struct
+; no safestack attribute
+; Requires no protector.
+define void @test17a() nounwind uwtable {
+entry:
+; LINUX-I386: test17a:
+; LINUX-I386-NOT: movl __llvm__unsafe_stack_ptr
+; LINUX-I386: .cfi_endproc
+
+; LINUX-X64: test17a:
+; LINUX-X64-NOT: movq %fs:640
+; LINUX-X64: .cfi_endproc
+
+; DARWIN-X64: test17a:
+; DARWIN-X64-NOT: movq ___llvm__unsafe_stack_ptr
+; DARWIN-X64: .cfi_endproc
+  %c = alloca %struct.vec, align 16
+  %y = getelementptr inbounds %struct.vec* %c, i64 0, i32 0
+  %add.ptr = getelementptr inbounds <4 x i32>* %y, i64 -12
+  %call = call i32 (i8*, ...)* @printf(i8* getelementptr inbounds ([4 x i8]* @.str, i64 0, i64 0), <4 x i32>* %add.ptr) nounwind
+  ret void
+}
+
+; test17b: Addr-of a vector nested in a struct
+; safestack attribute
+; Requires protector.
+define void @test17b() nounwind uwtable safestack {
+entry:
+; LINUX-I386: test17b:
+; LINUX-I386: movl __llvm__unsafe_stack_ptr
+; LINUX-I386-NEXT: movl %gs:
+
+; LINUX-I386: .cfi_endproc
+
+; LINUX-X64: test17b:
+; LINUX-X64: movq %fs:640
+; LINUX-X64: .cfi_endproc
+
+; DARWIN-X64: test17b:
+; DARWIN-X64: movq ___llvm__unsafe_stack_ptr
+; DARWIN-X64: .cfi_endproc
+  %c = alloca %struct.vec, align 16
+  %y = getelementptr inbounds %struct.vec* %c, i64 0, i32 0
+  %add.ptr = getelementptr inbounds <4 x i32>* %y, i64 -12
+  %call = call i32 (i8*, ...)* @printf(i8* getelementptr inbounds ([4 x i8]* @.str, i64 0, i64 0), <4 x i32>* %add.ptr) nounwind
+  ret void
+}
+
+; test18a: Addr-of a variable passed into an invoke instruction.
+; no safestack attribute
+; Requires no protector.
+define i32 @test18a() uwtable {
+entry:
+; LINUX-I386: test18a:
+; LINUX-I386-NOT: movl __llvm__unsafe_stack_ptr
+; LINUX-I386: .cfi_endproc
+
+; LINUX-X64: test18a:
+; LINUX-X64-NOT: movq %fs:640
+; LINUX-X64: .cfi_endproc
+
+; DARWIN-X64: test18a:
+; DARWIN-X64-NOT: movq ___llvm__unsafe_stack_ptr
+; DARWIN-X64: .cfi_endproc
+  %a = alloca i32, align 4
+  %exn.slot = alloca i8*
+  %ehselector.slot = alloca i32
+  store i32 0, i32* %a, align 4
+  invoke void @_Z3exceptPi(i32* %a)
+          to label %invoke.cont unwind label %lpad
+
+invoke.cont:
+  ret i32 0
+
+lpad:
+  %0 = landingpad { i8*, i32 } personality i8* bitcast (i32 (...)* @__gxx_personality_v0 to i8*)
+          catch i8* null
+  ret i32 0
+}
+
+; test18b: Addr-of a variable passed into an invoke instruction.
+; safestack attribute
+; Requires protector.
+define i32 @test18b() uwtable safestack {
+entry:
+; LINUX-I386: test18b:
+; LINUX-I386: movl __llvm__unsafe_stack_ptr
+; LINUX-I386-NEXT: movl %gs:
+
+; LINUX-I386: .cfi_endproc
+
+; LINUX-X64: test18b:
+; LINUX-X64: movq %fs:640
+; LINUX-X64: .cfi_endproc
+
+; DARWIN-X64: test18b:
+; DARWIN-X64: movq ___llvm__unsafe_stack_ptr
+; DARWIN-X64: .cfi_endproc
+  %a = alloca i32, align 4
+  %exn.slot = alloca i8*
+  %ehselector.slot = alloca i32
+  store i32 0, i32* %a, align 4
+  invoke void @_Z3exceptPi(i32* %a)
+          to label %invoke.cont unwind label %lpad
+
+invoke.cont:
+  ret i32 0
+
+lpad:
+  %0 = landingpad { i8*, i32 } personality i8* bitcast (i32 (...)* @__gxx_personality_v0 to i8*)
+          catch i8* null
+  ret i32 0
+}
+
+; test19a: Addr-of a struct element passed into an invoke instruction.
+; (GEP followed by an invoke)
+; no safestack attribute
+; Requires no protector.
+define i32 @test19a() uwtable {
+entry:
+; LINUX-I386: test19a:
+; LINUX-I386-NOT: movl __llvm__unsafe_stack_ptr
+; LINUX-I386: .cfi_endproc
+
+; LINUX-X64: test19a:
+; LINUX-X64-NOT: movq %fs:640
+; LINUX-X64: .cfi_endproc
+
+; DARWIN-X64: test19a:
+; DARWIN-X64-NOT: movq ___llvm__unsafe_stack_ptr
+; DARWIN-X64: .cfi_endproc
+  %c = alloca %struct.pair, align 4
+  %exn.slot = alloca i8*
+  %ehselector.slot = alloca i32
+  %a = getelementptr inbounds %struct.pair* %c, i32 0, i32 0
+  store i32 0, i32* %a, align 4
+  %a1 = getelementptr inbounds %struct.pair* %c, i32 0, i32 0
+  invoke void @_Z3exceptPi(i32* %a1)
+          to label %invoke.cont unwind label %lpad
+
+invoke.cont:
+  ret i32 0
+
+lpad:
+  %0 = landingpad { i8*, i32 } personality i8* bitcast (i32 (...)* @__gxx_personality_v0 to i8*)
+          catch i8* null
+  ret i32 0
+}
+
+; test19b: Addr-of a struct element passed into an invoke instruction.
+; (GEP followed by an invoke)
+; safestack attribute
+; Requires protector.
+define i32 @test19b() uwtable safestack {
+entry:
+; LINUX-I386: test19b:
+; LINUX-I386: movl __llvm__unsafe_stack_ptr
+; LINUX-I386-NEXT: movl %gs:
+
+; LINUX-I386: .cfi_endproc
+
+; LINUX-X64: test19b:
+; LINUX-X64: movq %fs:640
+; LINUX-X64: .cfi_endproc
+
+; DARWIN-X64: test19b:
+; DARWIN-X64: movq ___llvm__unsafe_stack_ptr
+; DARWIN-X64: .cfi_endproc
+  %c = alloca %struct.pair, align 4
+  %exn.slot = alloca i8*
+  %ehselector.slot = alloca i32
+  %a = getelementptr inbounds %struct.pair* %c, i32 0, i32 0
+  store i32 0, i32* %a, align 4
+  %a1 = getelementptr inbounds %struct.pair* %c, i32 0, i32 0
+  invoke void @_Z3exceptPi(i32* %a1)
+          to label %invoke.cont unwind label %lpad
+
+invoke.cont:
+  ret i32 0
+
+lpad:
+  %0 = landingpad { i8*, i32 } personality i8* bitcast (i32 (...)* @__gxx_personality_v0 to i8*)
+          catch i8* null
+  ret i32 0
+}
+
+; test20a: Addr-of a pointer
+; no safestack attribute
+; Requires no protector.
+define void @test20a() nounwind uwtable {
+entry:
+; LINUX-I386: test20a:
+; LINUX-I386-NOT: movl __llvm__unsafe_stack_ptr
+; LINUX-I386: .cfi_endproc
+
+; LINUX-X64: test20a:
+; LINUX-X64-NOT: movq %fs:640
+; LINUX-X64: .cfi_endproc
+
+; DARWIN-X64: test20a:
+; DARWIN-X64-NOT: movq ___llvm__unsafe_stack_ptr
+; DARWIN-X64: .cfi_endproc
+  %a = alloca i32*, align 8
+  %b = alloca i32**, align 8
+  %call = call i32* @getp()
+  store i32* %call, i32** %a, align 8
+  store i32** %a, i32*** %b, align 8
+  %0 = load i32*** %b, align 8
+  call void @funcall2(i32** %0)
+  ret void
+}
+
+; test20b: Addr-of a pointer
+; safestack attribute
+; Requires protector.
+define void @test20b() nounwind uwtable safestack {
+entry:
+; LINUX-I386: test20b:
+; LINUX-I386: movl __llvm__unsafe_stack_ptr
+; LINUX-I386-NEXT: movl %gs:
+
+; LINUX-I386: .cfi_endproc
+
+; LINUX-X64: test20b:
+; LINUX-X64: movq %fs:640
+; LINUX-X64: .cfi_endproc
+
+; DARWIN-X64: test20b:
+; DARWIN-X64: movq ___llvm__unsafe_stack_ptr
+; DARWIN-X64: .cfi_endproc
+  %a = alloca i32*, align 8
+  %b = alloca i32**, align 8
+  %call = call i32* @getp()
+  store i32* %call, i32** %a, align 8
+  store i32** %a, i32*** %b, align 8
+  %0 = load i32*** %b, align 8
+  call void @funcall2(i32** %0)
+  ret void
+}
+
+; test21a: Addr-of a casted pointer
+; no safestack attribute
+; Requires no protector.
+define void @test21a() nounwind uwtable {
+entry:
+; LINUX-I386: test21a:
+; LINUX-I386-NOT: movl __llvm__unsafe_stack_ptr
+; LINUX-I386: .cfi_endproc
+
+; LINUX-X64: test21a:
+; LINUX-X64-NOT: movq %fs:640
+; LINUX-X64: .cfi_endproc
+
+; DARWIN-X64: test21a:
+; DARWIN-X64-NOT: movq ___llvm__unsafe_stack_ptr
+; DARWIN-X64: .cfi_endproc
+  %a = alloca i32*, align 8
+  %b = alloca float**, align 8
+  %call = call i32* @getp()
+  store i32* %call, i32** %a, align 8
+  %0 = bitcast i32** %a to float**
+  store float** %0, float*** %b, align 8
+  %1 = load float*** %b, align 8
+  call void @funfloat2(float** %1)
+  ret void
+}
+
+; test21b: Addr-of a casted pointer
+; safestack attribute
+; Requires protector.
+define void @test21b() nounwind uwtable safestack {
+entry:
+; LINUX-I386: test21b:
+; LINUX-I386: movl __llvm__unsafe_stack_ptr
+; LINUX-I386-NEXT: movl %gs:
+
+; LINUX-I386: .cfi_endproc
+
+; LINUX-X64: test21b:
+; LINUX-X64: movq %fs:640
+; LINUX-X64: .cfi_endproc
+
+; DARWIN-X64: test21b:
+; DARWIN-X64: movq ___llvm__unsafe_stack_ptr
+; DARWIN-X64: .cfi_endproc
+  %a = alloca i32*, align 8
+  %b = alloca float**, align 8
+  %call = call i32* @getp()
+  store i32* %call, i32** %a, align 8
+  %0 = bitcast i32** %a to float**
+  store float** %0, float*** %b, align 8
+  %1 = load float*** %b, align 8
+  call void @funfloat2(float** %1)
+  ret void
+}
+
+; test22a: [2 x i8] in a class
+; no safestack attribute
+; Requires no protector.
+define signext i8 @test22a() nounwind uwtable {
+entry:
+; LINUX-I386: test22a:
+; LINUX-I386-NOT: movl __llvm__unsafe_stack_ptr
+; LINUX-I386: .cfi_endproc
+
+; LINUX-X64: test22a:
+; LINUX-X64-NOT: movq %fs:640
+; LINUX-X64: .cfi_endproc
+
+; DARWIN-X64: test22a:
+; DARWIN-X64-NOT: movq ___llvm__unsafe_stack_ptr
+; DARWIN-X64: .cfi_endproc
+  %a = alloca %class.A, align 1
+  %array = getelementptr inbounds %class.A* %a, i32 0, i32 0
+  %arrayidx = getelementptr inbounds [2 x i8]* %array, i32 0, i64 0
+  %0 = load i8* %arrayidx, align 1
+  ret i8 %0
+}
+
+; test22b: [2 x i8] in a class
+; safestack attribute
+; Requires no protector.
+define signext i8 @test22b() nounwind uwtable safestack {
+entry:
+; LINUX-I386: test22b:
+; LINUX-I386-NOT: movl __llvm__unsafe_stack_ptr
+; LINUX-I386: .cfi_endproc
+
+; LINUX-X64: test22b:
+; LINUX-X64-NOT: movq %fs:640
+; LINUX-X64: .cfi_endproc
+
+; DARWIN-X64: test22b:
+; DARWIN-X64-NOT: movq ___llvm__unsafe_stack_ptr
+; DARWIN-X64: .cfi_endproc
+  %a = alloca %class.A, align 1
+  %array = getelementptr inbounds %class.A* %a, i32 0, i32 0
+  %arrayidx = getelementptr inbounds [2 x i8]* %array, i32 0, i64 0
+  %0 = load i8* %arrayidx, align 1
+  ret i8 %0
+}
+
+; test23a: [2 x i8] nested in several layers of structs and unions
+; no safestack attribute
+; Requires no protector.
+define signext i8 @test23a() nounwind uwtable {
+entry:
+; LINUX-I386: test23a:
+; LINUX-I386-NOT: movl __llvm__unsafe_stack_ptr
+; LINUX-I386: .cfi_endproc
+
+; LINUX-X64: test23a:
+; LINUX-X64-NOT: movq %fs:640
+; LINUX-X64: .cfi_endproc
+
+; DARWIN-X64: test23a:
+; DARWIN-X64-NOT: movq ___llvm__unsafe_stack_ptr
+; DARWIN-X64: .cfi_endproc
+  %x = alloca %struct.deep, align 1
+  %b = getelementptr inbounds %struct.deep* %x, i32 0, i32 0
+  %c = bitcast %union.anon* %b to %struct.anon*
+  %d = getelementptr inbounds %struct.anon* %c, i32 0, i32 0
+  %e = getelementptr inbounds %struct.anon.0* %d, i32 0, i32 0
+  %array = bitcast %union.anon.1* %e to [2 x i8]*
+  %arrayidx = getelementptr inbounds [2 x i8]* %array, i32 0, i64 0
+  %0 = load i8* %arrayidx, align 1
+  ret i8 %0
+}
+
+; test23b: [2 x i8] nested in several layers of structs and unions
+; safestack attribute
+; Requires no protector.
+define signext i8 @test23b() nounwind uwtable safestack {
+entry:
+; LINUX-I386: test23b:
+; LINUX-I386-NOT: movl __llvm__unsafe_stack_ptr
+; LINUX-I386: .cfi_endproc
+
+; LINUX-X64: test23b:
+; LINUX-X64-NOT: movq %fs:640
+; LINUX-X64: .cfi_endproc
+
+; DARWIN-X64: test23b:
+; DARWIN-X64-NOT: movq ___llvm__unsafe_stack_ptr
+; DARWIN-X64: .cfi_endproc
+  %x = alloca %struct.deep, align 1
+  %b = getelementptr inbounds %struct.deep* %x, i32 0, i32 0
+  %c = bitcast %union.anon* %b to %struct.anon*
+  %d = getelementptr inbounds %struct.anon* %c, i32 0, i32 0
+  %e = getelementptr inbounds %struct.anon.0* %d, i32 0, i32 0
+  %array = bitcast %union.anon.1* %e to [2 x i8]*
+  %arrayidx = getelementptr inbounds [2 x i8]* %array, i32 0, i64 0
+  %0 = load i8* %arrayidx, align 1
+  ret i8 %0
+}
+
+; test24a: Variable sized alloca
+; no safestack attribute
+; Requires no protector.
+define void @test24a(i32 %n) nounwind uwtable {
+entry:
+; LINUX-I386: test24a:
+; LINUX-I386-NOT: movl __llvm__unsafe_stack_ptr
+; LINUX-I386: .cfi_endproc
+
+; LINUX-X64: test24a:
+; LINUX-X64-NOT: movq %fs:640
+; LINUX-X64: .cfi_endproc
+
+; DARWIN-X64: test24a:
+; DARWIN-X64-NOT: movq ___llvm__unsafe_stack_ptr
+; DARWIN-X64: .cfi_endproc
+  %n.addr = alloca i32, align 4
+  %a = alloca i32*, align 8
+  store i32 %n, i32* %n.addr, align 4
+  %0 = load i32* %n.addr, align 4
+  %conv = sext i32 %0 to i64
+  %1 = alloca i8, i64 %conv
+  %2 = bitcast i8* %1 to i32*
+  store i32* %2, i32** %a, align 8
+  ret void
+}
+
+; test24b: Variable sized alloca
+; safestack attribute
+; Requires protector.
+define void @test24b(i32 %n) nounwind uwtable safestack {
+entry:
+; LINUX-I386: test24b:
+; LINUX-I386: movl __llvm__unsafe_stack_ptr
+; LINUX-I386-NEXT: movl %gs:
+
+; LINUX-I386: .cfi_endproc
+
+; LINUX-X64: test24b:
+; LINUX-X64: movq %fs:640
+; LINUX-X64: .cfi_endproc
+
+; DARWIN-X64: test24b:
+; DARWIN-X64: movq ___llvm__unsafe_stack_ptr
+; DARWIN-X64: .cfi_endproc
+  %n.addr = alloca i32, align 4
+  %a = alloca i32*, align 8
+  store i32 %n, i32* %n.addr, align 4
+  %0 = load i32* %n.addr, align 4
+  %conv = sext i32 %0 to i64
+  %1 = alloca i8, i64 %conv
+  %2 = bitcast i8* %1 to i32*
+  store i32* %2, i32** %a, align 8
+  ret void
+}
+
+; test25a: array of [4 x i32]
+; no safestack attribute
+; Requires no protector.
+define i32 @test25a() nounwind uwtable {
+entry:
+; LINUX-I386: test25a:
+; LINUX-I386-NOT: movl __llvm__unsafe_stack_ptr
+; LINUX-I386: .cfi_endproc
+
+; LINUX-X64: test25a:
+; LINUX-X64-NOT: movq %fs:640
+; LINUX-X64: .cfi_endproc
+
+; DARWIN-X64: test25a:
+; DARWIN-X64-NOT: movq ___llvm__unsafe_stack_ptr
+; DARWIN-X64: .cfi_endproc
+  %a = alloca [4 x i32], align 16
+  %arrayidx = getelementptr inbounds [4 x i32]* %a, i32 0, i64 0
+  %0 = load i32* %arrayidx, align 4
+  ret i32 %0
+}
+
+; test25b: array of [4 x i32]
+; safestack attribute
+; Requires no protector, constant index.
+define i32 @test25b() nounwind uwtable safestack {
+entry:
+; LINUX-I386: test25b:
+; LINUX-I386-NOT: movl __llvm__unsafe_stack_ptr
+; LINUX-I386: .cfi_endproc
+
+; LINUX-X64: test25b:
+; LINUX-X64-NOT: movq %fs:640
+; LINUX-X64: .cfi_endproc
+
+; DARWIN-X64: test25b:
+; DARWIN-X64-NOT: movq ___llvm__unsafe_stack_ptr
+; DARWIN-X64: .cfi_endproc
+  %a = alloca [4 x i32], align 16
+  %arrayidx = getelementptr inbounds [4 x i32]* %a, i32 0, i64 0
+  %0 = load i32* %arrayidx, align 4
+  ret i32 %0
+}
+
+; test26: Nested structure, no arrays, no address-of expressions.
+; Verify that the resulting gep-of-gep does not incorrectly trigger
+; a safe stack protector.
+; safestack attribute
+; Requires no protector.
+define void @test26() nounwind uwtable safestack {
+entry:
+; LINUX-I386: test26:
+; LINUX-I386-NOT: movl __llvm__unsafe_stack_ptr
+; LINUX-I386: .cfi_endproc
+
+; LINUX-X64: test26:
+; LINUX-X64-NOT: movq %fs:640
+; LINUX-X64: .cfi_endproc
+
+; DARWIN-X64: test26:
+; DARWIN-X64-NOT: movq ___llvm__unsafe_stack_ptr
+; DARWIN-X64: .cfi_endproc
+  %c = alloca %struct.nest, align 4
+  %b = getelementptr inbounds %struct.nest* %c, i32 0, i32 1
+  %_a = getelementptr inbounds %struct.pair* %b, i32 0, i32 0
+  %0 = load i32* %_a, align 4
+  %call = call i32 (i8*, ...)* @printf(i8* getelementptr inbounds ([4 x i8]* @.str, i32 0, i32 0), i32 %0)
+  ret void
+}
+
+; test27: Address-of a structure taken in a function with a loop where
+; the alloca is an incoming value to a PHI node and a use of that PHI
+; node is also an incoming value.
+; Verify that the address-of analysis does not get stuck in infinite
+; recursion when chasing the alloca through the PHI nodes.
+; Requires protector.
+define i32 @test27(i32 %arg) nounwind uwtable safestack {
+bb:
+; LINUX-I386: test27:
+; LINUX-I386: movl __llvm__unsafe_stack_ptr
+; LINUX-I386-NEXT: movl %gs:
+
+; LINUX-I386: .cfi_endproc
+
+; LINUX-X64: test27:
+; LINUX-X64: movq %fs:640
+; LINUX-X64: .cfi_endproc
+
+; DARWIN-X64: test27:
+; DARWIN-X64: movq ___llvm__unsafe_stack_ptr
+; DARWIN-X64: .cfi_endproc
+  %tmp = alloca %struct.small*, align 8
+  %tmp1 = call i32 (...)* @dummy(%struct.small** %tmp) nounwind
+  %tmp2 = load %struct.small** %tmp, align 8
+  %tmp3 = ptrtoint %struct.small* %tmp2 to i64
+  %tmp4 = trunc i64 %tmp3 to i32
+  %tmp5 = icmp sgt i32 %tmp4, 0
+  br i1 %tmp5, label %bb6, label %bb21
+
+bb6: ; preds = %bb17, %bb
+  %tmp7 = phi %struct.small* [ %tmp19, %bb17 ], [ %tmp2, %bb ]
+  %tmp8 = phi i64 [ %tmp20, %bb17 ], [ 1, %bb ]
+  %tmp9 = phi i32 [ %tmp14, %bb17 ], [ %tmp1, %bb ]
+  %tmp10 = getelementptr inbounds %struct.small* %tmp7, i64 0, i32 0
+  %tmp11 = load i8* %tmp10, align 1
+  %tmp12 = icmp eq i8 %tmp11, 1
+  %tmp13 = add nsw i32 %tmp9, 8
+  %tmp14 = select i1 %tmp12, i32 %tmp13, i32 %tmp9
+  %tmp15 = trunc i64 %tmp8 to i32
+  %tmp16 = icmp eq i32 %tmp15, %tmp4
+  br i1 %tmp16, label %bb21, label %bb17
+
+bb17: ; preds = %bb6
+  %tmp18 = getelementptr inbounds %struct.small** %tmp, i64 %tmp8
+  %tmp19 = load %struct.small** %tmp18, align 8
+  %tmp20 = add i64 %tmp8, 1
+  br label %bb6
+
+bb21: ; preds = %bb6, %bb
+  %tmp22 = phi i32 [ %tmp1, %bb ], [ %tmp14, %bb6 ]
+  %tmp23 = call i32 (...)* @dummy(i32 %tmp22) nounwind
+  ret i32 undef
+}
+
+%struct.__jmp_buf_tag = type { [8 x i64], i32, %struct.__sigset_t }
+%struct.__sigset_t = type { [16 x i64] }
+@buf = internal global [1 x %struct.__jmp_buf_tag] zeroinitializer, align 16
+
+; test28: setjmp/longjmp test.
+; Requires protector.
+define i32 @test28() nounwind uwtable safestack {
+entry:
+; LINUX-I386: test28:
+; LINUX-I386: movl __llvm__unsafe_stack_ptr
+; LINUX-I386-NEXT: movl %gs:
+; LINUX-I386: .cfi_endproc
+
+; LINUX-X64: test28:
+; LINUX-X64: movq %fs:640
+; LINUX-X64: movq {{.*}}, %fs:640
+; LINUX-X64: movq {{.*}}, %fs:640
+
+; LINUX-X64: .cfi_endproc
+
+; DARWIN-X64: test28:
+; DARWIN-X64: movq ___llvm__unsafe_stack_ptr
+; DARWIN-X64: .cfi_endproc
+  %retval = alloca i32, align 4
+  %x = alloca i32, align 4
+  store i32 0, i32* %retval
+  store i32 42, i32* %x, align 4
+  %call = call i32 @_setjmp(%struct.__jmp_buf_tag* getelementptr inbounds ([1 x %struct.__jmp_buf_tag]* @buf, i32 0, i32 0)) #3
+  %tobool = icmp ne i32 %call, 0
+  br i1 %tobool, label %if.else, label %if.then
+if.then: ; preds = %entry
+  call void @funcall(i32* %x)
+  br label %if.end
+if.else: ; preds = %entry
+  call i32 (...)* @dummy()
+  br label %if.end
+if.end: ; preds = %if.else, %if.then
+  ret i32 0
+}
+
+declare i32 @_setjmp(%struct.__jmp_buf_tag*)
+
+declare double @testi_aux()
+declare i8* @strcpy(i8*, i8*)
+declare i32 @printf(i8*, ...)
+declare void @funcall(i32*)
+declare void @funcall2(i32**)
+declare void @funfloat(float*)
+declare void @funfloat2(float**)
+declare void @_Z3exceptPi(i32*)
+declare i32 @__gxx_personality_v0(...)
+declare i32* @getp()
+declare i32 @dummy(...)
-------------- next part --------------
diff --git a/include/clang/Basic/Attr.td b/include/clang/Basic/Attr.td
index 51837fe..883139a 100644
--- a/include/clang/Basic/Attr.td
+++ b/include/clang/Basic/Attr.td
@@ -1317,6 +1317,13 @@ def X86ForceAlignArgPointer : InheritableAttr, TargetSpecificAttr<TargetX86> {
   let Documentation = [Undocumented];
 }
 
+// Attribute to disable SafeStack (or equivalent) instrumentation.
+def NoSafeStack : InheritableAttr {
+  let Spellings = [GCC<"no_safe_stack">];
+  let Subjects = SubjectList<[Function], ErrorDiag>;
+  let Documentation = [NoSafeStackDocs];
+}
+
 // Attribute to disable AddressSanitizer (or equivalent) checks.
 def NoSanitizeAddress : InheritableAttr {
   let Spellings = [GCC<"no_address_safety_analysis">,
diff --git a/include/clang/Basic/AttrDocs.td b/include/clang/Basic/AttrDocs.td
index cf8b662..0b2d78a 100644
--- a/include/clang/Basic/AttrDocs.td
+++ b/include/clang/Basic/AttrDocs.td
@@ -860,6 +860,15 @@ This attribute accepts a single parameter that must be one of the following:
 }];
 }
 
+def NoSafeStackDocs : Documentation {
+  let Category = DocCatFunction;
+  let Content = [{
+Use ``__attribute__((no_safe_stack))`` on a function declaration to
+specify that the safe stack instrumentation should not be applied to
+that function.
+  }];
+}
+
 def NoSanitizeAddressDocs : Documentation {
   let Category = DocCatFunction;
   // This function has multiple distinct spellings, and so it requires a custom
diff --git a/include/clang/Basic/LangOptions.def b/include/clang/Basic/LangOptions.def
index e3adaec..b7f4ec1 100644
--- a/include/clang/Basic/LangOptions.def
+++ b/include/clang/Basic/LangOptions.def
@@ -192,7 +192,7 @@ ENUM_LANGOPT(ValueVisibilityMode, Visibility, 3, DefaultVisibility,
              "value symbol visibility")
 ENUM_LANGOPT(TypeVisibilityMode, Visibility, 3, DefaultVisibility,
              "type symbol visibility")
-ENUM_LANGOPT(StackProtector, StackProtectorMode, 2, SSPOff,
+ENUM_LANGOPT(StackProtector, StackProtectorMode, 3, SSPOff,
              "stack protector mode")
 ENUM_LANGOPT(SignedOverflowBehavior, SignedOverflowBehaviorTy, 2, SOB_Undefined,
              "signed integer overflow handling")
diff --git a/include/clang/Basic/LangOptions.h b/include/clang/Basic/LangOptions.h
index 712af26..50e352f 100644
--- a/include/clang/Basic/LangOptions.h
+++ b/include/clang/Basic/LangOptions.h
@@ -66,7 +66,7 @@ public:
   typedef clang::Visibility Visibility;
 
   enum GCMode { NonGC, GCOnly, HybridGC };
-  enum StackProtectorMode { SSPOff, SSPOn, SSPStrong, SSPReq };
+  enum StackProtectorMode { SSPOff, SSPOn, SSPStrong, SSPReq, SSPSafeStack };
 
   enum SignedOverflowBehaviorTy {
     SOB_Undefined,  // Default C standard behavior.
diff --git a/include/clang/Driver/Options.td b/include/clang/Driver/Options.td
index db0fce9..5a2a0bf 100644
--- a/include/clang/Driver/Options.td
+++ b/include/clang/Driver/Options.td
@@ -864,6 +864,10 @@ def fstack_protector_strong : Flag<["-"], "fstack-protector-strong">, Group<f_Gr
   HelpText<"Use a strong heuristic to apply stack protectors to functions">;
 def fstack_protector : Flag<["-"], "fstack-protector">, Group<f_Group>,
   HelpText<"Enable stack protectors for functions potentially vulnerable to stack smashing">;
+def fsafe_stack : Flag<["-"], "fsafe-stack">, Group<f_Group>,
+  HelpText<"Enable safe stack protection against stack-based memory corruption errors">;
+def fno_safe_stack : Flag<["-"], "fno-safe-stack">, Group<f_Group>,
+  HelpText<"Disable safe stack protection against stack-based memory corruption errors">;
 def fstandalone_debug : Flag<["-"], "fstandalone-debug">, Group<f_Group>, Flags<[CC1Option]>,
   HelpText<"Emit full debug info for all types used by the program">;
 def fno_standalone_debug : Flag<["-"], "fno-standalone-debug">, Group<f_Group>, Flags<[CC1Option]>,
diff --git a/lib/CodeGen/CodeGenModule.cpp b/lib/CodeGen/CodeGenModule.cpp
index 7ecd95e..f059667 100644
--- a/lib/CodeGen/CodeGenModule.cpp
+++ b/lib/CodeGen/CodeGenModule.cpp
@@ -738,6 +738,9 @@ void CodeGenModule::SetLLVMFunctionAttributesForDefinition(const Decl *D,
       B.addAttribute(llvm::Attribute::StackProtectStrong);
     else if (LangOpts.getStackProtector() == LangOptions::SSPReq)
       B.addAttribute(llvm::Attribute::StackProtectReq);
+    else if (LangOpts.getStackProtector() == LangOptions::SSPSafeStack)
+      if (!D->hasAttr<NoSafeStackAttr>())
+        B.addAttribute(llvm::Attribute::SafeStack);
 
     // Add sanitizer attributes if function is not blacklisted.
     if (!isInSanitizerBlacklist(F, D->getLocation())) {
diff --git a/lib/Driver/ToolChains.cpp b/lib/Driver/ToolChains.cpp
index 15e3ade..68528b7 100644
--- a/lib/Driver/ToolChains.cpp
+++ b/lib/Driver/ToolChains.cpp
@@ -10,6 +10,7 @@
 #include "ToolChains.h"
 #include "clang/Basic/ObjCRuntime.h"
 #include "clang/Basic/Version.h"
+#include "clang/Basic/LangOptions.h"
 #include "clang/Config/config.h" // for GCC_INSTALL_PREFIX
 #include "clang/Driver/Compilation.h"
 #include "clang/Driver/Driver.h"
diff --git a/lib/Driver/Tools.cpp b/lib/Driver/Tools.cpp
index d11fa4e..7f24995 100644
--- a/lib/Driver/Tools.cpp
+++ b/lib/Driver/Tools.cpp
@@ -2172,6 +2172,29 @@ static void addProfileRT(
   CmdArgs.push_back(Args.MakeArgString(LibProfile));
 }
 
+static void addSafeStackRT(
+    const ToolChain &TC, const ArgList &Args, ArgStringList &CmdArgs) {
+  if (!Args.hasFlag(options::OPT_fsafe_stack,
+                    options::OPT_fno_safe_stack, false))
+    return;
+
+  const char *LibBaseName = "libclang_rt.safestack-";
+  SmallString<128> LibName = getCompilerRTLibDir(TC);
+  llvm::sys::path::append(LibName,
+      Twine(LibBaseName) + getArchNameForCompilerRTLib(TC) + ".a");
+
+  CmdArgs.push_back(Args.MakeArgString(LibName));
+
+  // On gnu platforms, safestack runtime requires dl
+  CmdArgs.push_back("-ldl");
+
+  // We need to ensure that the safe stack init function from the safestack
+  // runtime library is linked in, even though it might not be referenced by
+  // any code in the module before LTO optimizations are applied.
+  CmdArgs.push_back("-u");
+  CmdArgs.push_back("__llvm__safestack_init");
+}
+
 static SmallString<128> getSanitizerRTLibName(const ToolChain &TC,
                                               StringRef Sanitizer,
                                               bool Shared) {
@@ -3675,7 +3698,14 @@ void Clang::ConstructJob(Compilation &C, const JobAction &JA,
 
   // -stack-protector=0 is default.
   unsigned StackProtectorLevel = 0;
-  if (Arg *A = Args.getLastArg(options::OPT_fno_stack_protector,
+  if (Args.hasFlag(options::OPT_fsafe_stack,
+                   options::OPT_fno_safe_stack, false)) {
+    StackProtectorLevel = LangOptions::SSPSafeStack;
+    Args.ClaimAllArgs(options::OPT_fno_stack_protector);
+    Args.ClaimAllArgs(options::OPT_fstack_protector_all);
+    Args.ClaimAllArgs(options::OPT_fstack_protector_strong);
+    Args.ClaimAllArgs(options::OPT_fstack_protector);
+  } else if (Arg *A = Args.getLastArg(options::OPT_fno_stack_protector,
                                options::OPT_fstack_protector_all,
                                options::OPT_fstack_protector_strong,
                                options::OPT_fstack_protector)) {
@@ -5843,6 +5873,21 @@ void darwin::Link::ConstructJob(Compilation &C, const JobAction &JA,
       !Args.hasArg(options::OPT_nostartfiles))
     getMachOToolChain().addStartObjectFileArgs(Args, CmdArgs);
 
+  // SafeStack requires its own runtime libraries
+  // These libraries should be linked first, to make sure the
+  // __llvm__safestack_init constructor executes before everything else
+  if (Args.hasFlag(options::OPT_fsafe_stack,
+                   options::OPT_fno_safe_stack, false)) {
+    getMachOToolChain().AddLinkRuntimeLib(Args, CmdArgs,
+                                          "libclang_rt.safestack_osx.a");
+
+    // We need to ensure that the safe stack init function from the safestack
+    // runtime library is linked in, even though it might not be referenced by
+    // any code in the module before LTO optimizations are applied.
+    CmdArgs.push_back("-u");
+    CmdArgs.push_back("___llvm__safestack_init");
+  }
+
   Args.AddAllArgs(CmdArgs, options::OPT_L);
 
   LibOpenMP UsedOpenMPLib = LibUnknown;
@@ -6092,6 +6137,8 @@ void solaris::Link::ConstructJob(Compilation &C, const JobAction &JA,
     CmdArgs.push_back(Args.MakeArgString("-L" + GCCLibPath));
 
+  addSafeStackRT(getToolChain(), Args, CmdArgs);
+
   Args.AddAllArgs(CmdArgs, options::OPT_L);
   Args.AddAllArgs(CmdArgs, options::OPT_T_Group);
   Args.AddAllArgs(CmdArgs, options::OPT_e);
@@ -6638,6 +6685,8 @@ void freebsd::Link::ConstructJob(Compilation &C, const JobAction &JA,
     CmdArgs.push_back(Args.MakeArgString(ToolChain.GetFilePath(crtbegin)));
   }
 
+  addSafeStackRT(getToolChain(), Args, CmdArgs);
+
   Args.AddAllArgs(CmdArgs, options::OPT_L);
   const ToolChain::path_list &Paths = ToolChain.getFilePaths();
   for (const auto &Path : Paths)
@@ -6932,6 +6981,8 @@ void netbsd::Link::ConstructJob(Compilation &C, const JobAction &JA,
     }
   }
 
+  addSafeStackRT(getToolChain(), Args, CmdArgs);
+
   Args.AddAllArgs(CmdArgs, options::OPT_L);
   Args.AddAllArgs(CmdArgs, options::OPT_T_Group);
   Args.AddAllArgs(CmdArgs, options::OPT_e);
@@ -7471,6 +7522,8 @@ void gnutools::Link::ConstructJob(Compilation &C, const JobAction &JA,
     ToolChain.AddFastMathRuntimeIfAvailable(Args, CmdArgs);
   }
 
+  addSafeStackRT(getToolChain(), Args, CmdArgs);
+
   Args.AddAllArgs(CmdArgs, options::OPT_L);
   Args.AddAllArgs(CmdArgs, options::OPT_u);
@@ -7613,6 +7666,8 @@ void minix::Link::ConstructJob(Compilation &C, const JobAction &JA,
     CmdArgs.push_back(Args.MakeArgString(getToolChain().GetFilePath("crtn.o")));
   }
 
+  addSafeStackRT(getToolChain(), Args, CmdArgs);
+
   Args.AddAllArgs(CmdArgs, options::OPT_L);
   Args.AddAllArgs(CmdArgs, options::OPT_T_Group);
   Args.AddAllArgs(CmdArgs, options::OPT_e);
@@ -7738,6 +7793,8 @@ void dragonfly::Link::ConstructJob(Compilation &C, const JobAction &JA,
                                  getToolChain().GetFilePath("crtbegin.o")));
   }
 
+  addSafeStackRT(getToolChain(), Args, CmdArgs);
+
   Args.AddAllArgs(CmdArgs, options::OPT_L);
   Args.AddAllArgs(CmdArgs, options::OPT_T_Group);
   Args.AddAllArgs(CmdArgs, options::OPT_e);
diff --git a/lib/Frontend/CompilerInvocation.cpp b/lib/Frontend/CompilerInvocation.cpp
index 340d4ab..649c470 100644
--- a/lib/Frontend/CompilerInvocation.cpp
+++ b/lib/Frontend/CompilerInvocation.cpp
@@ -1606,6 +1606,7 @@ static void ParseLangArgs(LangOptions &Opts, ArgList &Args, InputKind IK,
     case 1: Opts.setStackProtector(LangOptions::SSPOn); break;
     case 2: Opts.setStackProtector(LangOptions::SSPStrong); break;
     case 3: Opts.setStackProtector(LangOptions::SSPReq); break;
+    case 4: Opts.setStackProtector(LangOptions::SSPSafeStack); break;
     }
 
   // Parse -fsanitize= arguments.
diff --git a/lib/Frontend/InitPreprocessor.cpp b/lib/Frontend/InitPreprocessor.cpp
index 476e214..16d8646 100644
--- a/lib/Frontend/InitPreprocessor.cpp
+++ b/lib/Frontend/InitPreprocessor.cpp
@@ -826,6 +826,8 @@ static void InitializePredefinedMacros(const TargetInfo &TI,
     Builder.defineMacro("__SSP_STRONG__", "2");
   else if (LangOpts.getStackProtector() == LangOptions::SSPReq)
     Builder.defineMacro("__SSP_ALL__", "3");
+  else if (LangOpts.getStackProtector() == LangOptions::SSPSafeStack)
+    Builder.defineMacro("__SAFESTACK__", "4");
 
   if (FEOpts.ProgramAction == frontend::RewriteObjC)
     Builder.defineMacro("__weak", "__attribute__((objc_gc(weak)))");
diff --git a/lib/Sema/SemaDeclAttr.cpp b/lib/Sema/SemaDeclAttr.cpp
index 1b04e52..e8656ec 100644
--- a/lib/Sema/SemaDeclAttr.cpp
+++ b/lib/Sema/SemaDeclAttr.cpp
@@ -4593,6 +4593,9 @@ static void ProcessDeclAttribute(Sema &S, Scope *scope, Decl *D,
   case AttributeList::AT_ScopedLockable:
     handleSimpleAttribute<ScopedLockableAttr>(S, D, Attr);
     break;
+  case AttributeList::AT_NoSafeStack:
+    handleSimpleAttribute<NoSafeStackAttr>(S, D, Attr);
+    break;
   case AttributeList::AT_NoSanitizeAddress:
     handleSimpleAttribute<NoSanitizeAddressAttr>(S, D, Attr);
     break;
diff --git a/runtime/compiler-rt/Makefile b/runtime/compiler-rt/Makefile
index ccd83a3..eb73ffd 100644
--- a/runtime/compiler-rt/Makefile
+++ b/runtime/compiler-rt/Makefile
@@ -83,7 +83,7 @@ RuntimeLibrary.darwin.Configs := \
 	eprintf.a 10.4.a osx.a ios.a cc_kext.a cc_kext_ios5.a \
 	asan_osx_dynamic.dylib \
 	profile_osx.a profile_ios.a \
-	ubsan_osx.a
+	ubsan_osx.a safestack_osx.a
 
 RuntimeLibrary.macho_embedded.Configs := \
 	hard_static.a hard_pic.a
@@ -127,7 +127,7 @@ TryCompile = \
 # We try to build 32-bit runtimes both on 32-bit hosts and 64-bit hosts.
 Runtime32BitConfigs = \
 	builtins-i386.a profile-i386.a san-i386.a asan-i386.a asan_cxx-i386.a \
-	ubsan-i386.a ubsan_cxx-i386.a
+	ubsan-i386.a ubsan_cxx-i386.a safestack-i386.a
 
 # We currently only try to generate runtime libraries on x86.
 ifeq ($(ARCH),x86)
@@ -138,7 +138,7 @@ ifeq ($(ARCH),x86_64)
 RuntimeLibrary.linux.Configs += \
 	builtins-x86_64.a profile-x86_64.a san-x86_64.a asan-x86_64.a \
 	asan_cxx-x86_64.a tsan-x86_64.a msan-x86_64.a ubsan-x86_64.a \
-	ubsan_cxx-x86_64.a dfsan-x86_64.a lsan-x86_64.a
+	ubsan_cxx-x86_64.a dfsan-x86_64.a lsan-x86_64.a safestack-x86_64.a
 # We need to build 32-bit ASan/UBsan libraries on 64-bit platform, and add them
 # to the list of runtime libraries to make
 # "clang -fsanitize=(address|undefined) -m32" work.
-------------- next part --------------
diff --git a/CMakeLists.txt b/CMakeLists.txt
index 76e1cbb..e190a7a 100644
--- a/CMakeLists.txt
+++ b/CMakeLists.txt
@@ -239,6 +239,7 @@ append_list_if(COMPILER_RT_HAS_FNO_EXCEPTIONS_FLAG -fno-exceptions SANITIZER_COM
 append_list_if(COMPILER_RT_HAS_FOMIT_FRAME_POINTER_FLAG -fomit-frame-pointer SANITIZER_COMMON_CFLAGS)
 append_list_if(COMPILER_RT_HAS_FUNWIND_TABLES_FLAG -funwind-tables SANITIZER_COMMON_CFLAGS)
 append_list_if(COMPILER_RT_HAS_FNO_STACK_PROTECTOR_FLAG -fno-stack-protector SANITIZER_COMMON_CFLAGS)
+append_list_if(COMPILER_RT_HAS_FNO_SAFE_STACK_FLAG -fno-safe-stack SANITIZER_COMMON_CFLAGS)
 append_list_if(COMPILER_RT_HAS_FVISIBILITY_HIDDEN_FLAG -fvisibility=hidden SANITIZER_COMMON_CFLAGS)
 append_list_if(COMPILER_RT_HAS_FNO_FUNCTION_SECTIONS_FLAG -fno-function-sections SANITIZER_COMMON_CFLAGS)
diff --git a/cmake/config-ix.cmake b/cmake/config-ix.cmake
index 0b722c3..7cc9d1c 100644
--- a/cmake/config-ix.cmake
+++ b/cmake/config-ix.cmake
@@ -10,6 +10,7 @@ check_cxx_compiler_flag(-fno-exceptions COMPILER_RT_HAS_FNO_EXCEPTIONS_FLAG
 check_cxx_compiler_flag(-fomit-frame-pointer COMPILER_RT_HAS_FOMIT_FRAME_POINTER_FLAG)
 check_cxx_compiler_flag(-funwind-tables COMPILER_RT_HAS_FUNWIND_TABLES_FLAG)
 check_cxx_compiler_flag(-fno-stack-protector COMPILER_RT_HAS_FNO_STACK_PROTECTOR_FLAG)
+check_cxx_compiler_flag(-fno-safe-stack COMPILER_RT_HAS_FNO_SAFE_STACK_FLAG)
 check_cxx_compiler_flag(-fvisibility=hidden COMPILER_RT_HAS_FVISIBILITY_HIDDEN_FLAG)
 check_cxx_compiler_flag(-fno-rtti COMPILER_RT_HAS_FNO_RTTI_FLAG)
 check_cxx_compiler_flag(-ffreestanding COMPILER_RT_HAS_FFREESTANDING_FLAG)
@@ -182,6 +183,7 @@ filter_available_targets(MSAN_SUPPORTED_ARCH x86_64)
 filter_available_targets(PROFILE_SUPPORTED_ARCH x86_64 i386 i686 arm mips mips64 mipsel mips64el aarch64)
 filter_available_targets(TSAN_SUPPORTED_ARCH x86_64)
 filter_available_targets(UBSAN_SUPPORTED_ARCH x86_64 i386 i686 arm aarch64 mips mipsel)
+filter_available_targets(SAFESTACK_SUPPORTED_ARCH x86_64 i386 i686) if(ANDROID) set(OS_NAME "Android") diff --git a/include/CMakeLists.txt b/include/CMakeLists.txt index 7f8664e..7c34aee 100644 --- a/include/CMakeLists.txt +++ b/include/CMakeLists.txt @@ -6,7 +6,8 @@ set(SANITIZER_HEADERS sanitizer/linux_syscall_hooks.h sanitizer/lsan_interface.h sanitizer/msan_interface.h - sanitizer/tsan_interface_atomic.h) + sanitizer/tsan_interface_atomic.h + safestack_interface.h) set(output_dir ${COMPILER_RT_OUTPUT_DIR}/include) diff --git a/include/safestack_interface.h b/include/safestack_interface.h new file mode 100644 index 0000000..940d903 --- /dev/null +++ b/include/safestack_interface.h @@ -0,0 +1,20 @@ +#ifndef SAFESTACK_INTERFACE_H +#define SAFESTACK_INTERFACE_H + +#include <stddef.h> + +#ifdef __cplusplus +extern "C" { +#endif + +void *__safestack_get_unsafe_stack_start(); +void *__safestack_get_unsafe_stack_ptr(); +size_t __safestack_get_unsafe_stack_size(); + +void *__safestack_get_safe_stack_ptr(); + +#ifdef __cplusplus +} // extern "C" +#endif + +#endif // SAFESTACK_INTERFACE_H diff --git a/lib/CMakeLists.txt b/lib/CMakeLists.txt index 934c5b7..fdb91ef 100644 --- a/lib/CMakeLists.txt +++ b/lib/CMakeLists.txt @@ -41,3 +41,4 @@ if(COMPILER_RT_HAS_UBSAN) add_subdirectory(ubsan) endif() +add_subdirectory(safestack) diff --git a/lib/Makefile.mk b/lib/Makefile.mk index ed9690d..c473e47 100644 --- a/lib/Makefile.mk +++ b/lib/Makefile.mk @@ -20,3 +20,4 @@ SubDirs += profile SubDirs += sanitizer_common SubDirs += tsan SubDirs += ubsan +SubDirs += safestack diff --git a/lib/safestack/CMakeLists.txt b/lib/safestack/CMakeLists.txt new file mode 100644 index 0000000..8d4cf05 --- /dev/null +++ b/lib/safestack/CMakeLists.txt @@ -0,0 +1,26 @@ +add_custom_target(safestack) + +set(SAFESTACK_SOURCES safestack.cc) + +include_directories(..) + +set(SAFESTACK_CFLAGS ${SANITIZER_COMMON_CFLAGS}) + +if(APPLE) + # Build universal binary on APPLE. 
+ add_compiler_rt_osx_static_runtime(clang_rt.safestack_osx + ARCH ${SAFESTACK_SUPPORTED_ARCH} + SOURCES ${SAFESTACK_SOURCES} + $<TARGET_OBJECTS:RTInterception.osx> + CFLAGS ${SAFESTACK_CFLAGS}) + add_dependencies(safestack clang_rt.safestack_osx) +else() + # Otherwise, build separate libraries for each target. + foreach(arch ${SAFESTACK_SUPPORTED_ARCH}) + add_compiler_rt_runtime(clang_rt.safestack-${arch} ${arch} STATIC + SOURCES ${SAFESTACK_SOURCES} + $<TARGET_OBJECTS:RTInterception.${arch}> + CFLAGS ${SAFESTACK_CFLAGS}) + add_dependencies(safestack clang_rt.safestack-${arch}) + endforeach() +endif() diff --git a/lib/safestack/Makefile.mk b/lib/safestack/Makefile.mk new file mode 100644 index 0000000..d89af25 --- /dev/null +++ b/lib/safestack/Makefile.mk @@ -0,0 +1,23 @@ +#===- lib/safestack/Makefile.mk ------------------------------*- Makefile -*--===# +# +# The LLVM Compiler Infrastructure +# +# This file is distributed under the University of Illinois Open Source +# License. See LICENSE.TXT for details. +# +#===------------------------------------------------------------------------===# + +ModuleName := safestack +SubDirs := +Sources := $(foreach file,$(wildcard $(Dir)/*.cc),$(notdir $(file))) +ObjNames := $(Sources:%.cc=%.o) + +Implementation := Generic + +# FIXME: use automatic dependencies? +Dependencies := $(wildcard $(Dir)/*.h) +Dependencies += $(wildcard $(Dir)/../interception/*.h) + +# Define a convenience variable for all the safestack functions. +SafeStackFunctions := $(Sources:%.cc=%) diff --git a/lib/safestack/safestack.cc b/lib/safestack/safestack.cc new file mode 100644 index 0000000..2917a48 --- /dev/null +++ b/lib/safestack/safestack.cc @@ -0,0 +1,347 @@ +//===-- safestack.cc --------------------------------------------*- C++ -*-===// +// +// The LLVM Compiler Infrastructure +// +// This file is distributed under the University of Illinois Open Source +// License. See LICENSE.TXT for details.
+// +//===----------------------------------------------------------------------===// +// +// This file implements the runtime support for the safe stack protection +// mechanism. The runtime manages allocation/deallocation of the unsafe stack +// for the main thread, as well as all pthreads threads that are +// created/destroyed during program execution. +// +//===----------------------------------------------------------------------===// + +#include <stdio.h> +#include <assert.h> +#include <dlfcn.h> +#include <limits.h> +#include <sys/mman.h> +#include <sys/user.h> +#include <pthread.h> + +#if defined(__linux__) +#include <unistd.h> +#include <sys/syscall.h> +#endif + +// FIXME: is this in some header? +#define STACK_ALIGN 16 + +// Should we make the following configurable? +#define __SAFESTACK_DEFAULT_STACK_SIZE 0x2800000 + +#include "interception/interception.h" + +namespace __llvm__safestack { + +// We don't know whether pthread is linked in or not, so we resolve +// all symbols from pthread that we use dynamically +#define __DECLARE_WRAPPER(fn) __typeof__(fn)* __d_ ## fn = NULL; + +__DECLARE_WRAPPER(pthread_attr_init) +__DECLARE_WRAPPER(pthread_attr_destroy) +__DECLARE_WRAPPER(pthread_attr_getstacksize) +__DECLARE_WRAPPER(pthread_attr_getguardsize) +__DECLARE_WRAPPER(pthread_key_create) +__DECLARE_WRAPPER(pthread_setspecific) + +// The unsafe stack pointer is stored in the TCB structure on these platforms +#if defined(__i386__) +# define MOVPTR "movl" +# ifdef __pic__ +# define IMM_MODE "nr" +# else +# define IMM_MODE "ir" +# endif +#elif defined(__x86_64__) +# define MOVPTR "movq" +# define IMM_MODE "nr" +#endif + +#if defined(__linux__) && (defined(__i386__) || defined(__x86_64__)) + +# define __THREAD_GETMEM_L(offset) \ + __extension__ ({ unsigned long __v; \ + asm volatile (MOVPTR " %%fs:%P1,%q0" \ + : "=r" (__v) : "i" (offset)); __v; }) + +# define __THREAD_SETMEM_L(offset, value) \ + asm volatile (MOVPTR " %q0,%%fs:%P1" : \ + : IMM_MODE ((unsigned long) 
(value)), "i" (offset)) + +// The following locations are platform-specific +# define __GET_UNSAFE_STACK_PTR() (void*) __THREAD_GETMEM_L(0x280) +# define __SET_UNSAFE_STACK_PTR(value) __THREAD_SETMEM_L(0x280, value) + +# define __GET_UNSAFE_STACK_START() (void*) __THREAD_GETMEM_L(0x288) +# define __SET_UNSAFE_STACK_START(value) __THREAD_SETMEM_L(0x288, value) + +# define __GET_UNSAFE_STACK_SIZE() (size_t) __THREAD_GETMEM_L(0x290) +# define __SET_UNSAFE_STACK_SIZE(value) __THREAD_SETMEM_L(0x290, value) + +# define __GET_UNSAFE_STACK_GUARD() (size_t) __THREAD_GETMEM_L(0x298) +# define __SET_UNSAFE_STACK_GUARD(value) __THREAD_SETMEM_L(0x298, value) + +#elif defined(__APPLE__) && (defined(__i386__) || defined(__x86_64__)) + +// OSX uses %gs to directly index thread-specific slots +# define __THREAD_GETMEM_L(slot) \ + __extension__ ({ unsigned long __v; \ + asm volatile (MOVPTR " %%gs:%P1,%q0" : "=r" (__v) \ + : "i" ((slot) * sizeof(void*))); __v; }) + +# define __THREAD_SETMEM_L(slot, value) \ + asm volatile (MOVPTR " %q0,%%gs:%P1" : \ + : IMM_MODE ((unsigned long) (value)), \ + "i" ((slot) * sizeof(void*))) + +// Thread-specific slots 0-256 are reserved for the system on OSX. +// Slots 192-195 seem unused at the moment, so we claim them.
+ +// The following locations are platform-specific +# define __GET_UNSAFE_STACK_PTR() (void*) __THREAD_GETMEM_L(192) +# define __SET_UNSAFE_STACK_PTR(value) __THREAD_SETMEM_L(192, value) + +# define __GET_UNSAFE_STACK_START() (void*) __THREAD_GETMEM_L(193) +# define __SET_UNSAFE_STACK_START(value) __THREAD_SETMEM_L(193, value) + +# define __GET_UNSAFE_STACK_SIZE() (size_t) __THREAD_GETMEM_L(194) +# define __SET_UNSAFE_STACK_SIZE(value) __THREAD_SETMEM_L(194, value) + +# define __GET_UNSAFE_STACK_GUARD() (size_t) __THREAD_GETMEM_L(195) +# define __SET_UNSAFE_STACK_GUARD(value) __THREAD_SETMEM_L(195, value) + +#else +// The unsafe stack is stored in a thread-local variable on these platforms +extern "C" { + __attribute__((visibility ("default"))) + __thread void *__llvm__unsafe_stack_ptr = 0; +} + +__thread void *unsafe_stack_start = 0; +__thread size_t unsafe_stack_size = 0; +__thread size_t unsafe_stack_guard = 0; + +# define __GET_UNSAFE_STACK_PTR() __llvm__unsafe_stack_ptr +# define __SET_UNSAFE_STACK_PTR(value) __llvm__unsafe_stack_ptr = (value) + +# define __GET_UNSAFE_STACK_START() unsafe_stack_start +# define __SET_UNSAFE_STACK_START(value) unsafe_stack_start = (value) + +# define __GET_UNSAFE_STACK_SIZE() unsafe_stack_size +# define __SET_UNSAFE_STACK_SIZE(value) unsafe_stack_size = (value) + +# define __GET_UNSAFE_STACK_GUARD() unsafe_stack_guard +# define __SET_UNSAFE_STACK_GUARD(value) unsafe_stack_guard = (value) + +#endif + +static inline void *unsafe_stack_alloc(size_t size, size_t guard) { + void *addr = +#if defined(__linux__) + (void*) syscall(SYS_mmap, +#else + mmap( +#endif + NULL, size + guard, PROT_WRITE | PROT_READ, + MAP_PRIVATE | MAP_ANON +#if defined(__linux__) + | MAP_STACK | MAP_GROWSDOWN +#endif + , -1, 0); + mprotect(addr, guard, PROT_NONE); + return (char*) addr + guard; +} + +static inline void unsafe_stack_setup(void *start, size_t size, size_t guard) { + void* stack_ptr = (char*) start + size; + assert((((size_t)stack_ptr) &
(STACK_ALIGN-1)) == 0); + + __SET_UNSAFE_STACK_PTR(stack_ptr); + __SET_UNSAFE_STACK_START(start); + __SET_UNSAFE_STACK_SIZE(size); + __SET_UNSAFE_STACK_GUARD(guard); +} + +static void unsafe_stack_free() { + if (__GET_UNSAFE_STACK_START()) { +#if defined(__linux__) + syscall(SYS_munmap, +#else + munmap( +#endif + (char*) __GET_UNSAFE_STACK_START() - __GET_UNSAFE_STACK_GUARD(), + __GET_UNSAFE_STACK_SIZE() + __GET_UNSAFE_STACK_GUARD()); + } + __SET_UNSAFE_STACK_START(0); +} + +/// Thread data for the cleanup handler +pthread_key_t thread_cleanup_key; + +/// Safe stack per-thread information passed to the thread_start function +struct tinfo { + void *(*start_routine)(void*); + void *start_routine_arg; + + void *unsafe_stack_start; + size_t unsafe_stack_size; + size_t unsafe_stack_guard; +}; + +/// Wrap the thread function in order to deallocate the unsafe stack when the +/// thread terminates by returning from its main function. +static void* thread_start(void *arg) { + struct tinfo *tinfo = (struct tinfo*) arg; + + void *(*start_routine)(void*) = tinfo->start_routine; + void *start_routine_arg = tinfo->start_routine_arg; + + // Setup the unsafe stack; this will destroy tinfo content + unsafe_stack_setup(tinfo->unsafe_stack_start, + tinfo->unsafe_stack_size, + tinfo->unsafe_stack_guard); + + // Make sure our thread-specific destructor will be called + // FIXME: we can do this only if any other specific key is set, by + // intercepting the pthread_setspecific function itself + __d_pthread_setspecific(thread_cleanup_key, (void*) 1); + + // Start the original thread routine + return start_routine(start_routine_arg); } + +/// Intercept thread creation operation to allocate and setup the unsafe stack +INTERCEPTOR(int, pthread_create, pthread_t *thread, + const pthread_attr_t *attr, + void *(*start_routine)(void*), void *arg) { + + size_t size = 0; + size_t guard = 0; + + if (attr != NULL) { + __d_pthread_attr_getstacksize(attr, &size); + __d_pthread_attr_getguardsize(attr,
&guard); + } else { + // get pthread default stack size + pthread_attr_t tmpattr; + __d_pthread_attr_init(&tmpattr); + __d_pthread_attr_getstacksize(&tmpattr, &size); + __d_pthread_attr_getguardsize(&tmpattr, &guard); + __d_pthread_attr_destroy(&tmpattr); + } + + assert(size != 0); + assert((size & (STACK_ALIGN-1)) == 0); + assert((guard & (PAGE_SIZE-1)) == 0); + + void *addr = unsafe_stack_alloc(size, guard); + struct tinfo *tinfo = (struct tinfo*) ( + ((char*)addr) + size - sizeof(struct tinfo)); + tinfo->start_routine = start_routine; + tinfo->start_routine_arg = arg; + tinfo->unsafe_stack_start = addr; + tinfo->unsafe_stack_size = size; + tinfo->unsafe_stack_guard = guard; + + return REAL(pthread_create)(thread, attr, thread_start, tinfo); +} + +/// Thread-specific data destructor +void thread_cleanup_handler(void* _iter) { + // We want to free the unsafe stack only after all other destructors + // have already run. We force this function to be called multiple times. + size_t iter = (size_t) _iter; + if (iter < PTHREAD_DESTRUCTOR_ITERATIONS) { + __d_pthread_setspecific(thread_cleanup_key, (void*) (iter + 1)); + } else { + // This is the last iteration + unsafe_stack_free(); + } +} + +int inited = 0; + +extern "C" +__attribute__((visibility ("default"))) +#ifndef __linux__ +__attribute__((constructor(0))) +#endif +void __llvm__safestack_init() { + if (inited) + return; + + inited = 1; + + // Allocate unsafe stack for main thread + size_t size = __SAFESTACK_DEFAULT_STACK_SIZE; + size_t guard = 4096; + void *addr = unsafe_stack_alloc(size, guard); + unsafe_stack_setup(addr, size, guard); + + // Initialize pthread interceptors for thread allocation + INTERCEPT_FUNCTION(pthread_create); + + #define __FIND_FUNCTION(fn) \ + __d_ ## fn = __extension__ (__typeof__(__d_ ## fn)) dlsym(RTLD_DEFAULT, #fn); + + // Find pthread functions that we need + __FIND_FUNCTION(pthread_attr_init) + __FIND_FUNCTION(pthread_attr_destroy) + __FIND_FUNCTION(pthread_attr_getstacksize) + 
__FIND_FUNCTION(pthread_attr_getguardsize) + __FIND_FUNCTION(pthread_key_create) + __FIND_FUNCTION(pthread_setspecific) + + if (__d_pthread_key_create != NULL) { + // We're using pthreads, setup the cleanup handler + __d_pthread_key_create(&thread_cleanup_key, thread_cleanup_handler); + } +} + +#ifndef NDEBUG +extern "C" +__attribute__((visibility ("default"))) +void __llvm__safestack_dump() { + fprintf(stderr, + "Unsafe stack addr = %p, ptr = %p, size = 0x%lx, guard = 0x%lx\n", + __GET_UNSAFE_STACK_START(), __GET_UNSAFE_STACK_PTR(), + __GET_UNSAFE_STACK_SIZE(), __GET_UNSAFE_STACK_GUARD()); +} + +#endif + +extern "C" +__attribute__((visibility ("default"))) +__attribute__((noinline)) +void *__safestack_get_unsafe_stack_start() { + return __GET_UNSAFE_STACK_START(); +} + +extern "C" +__attribute__((visibility ("default"))) +__attribute__((noinline)) +void *__safestack_get_unsafe_stack_ptr() { + return __GET_UNSAFE_STACK_PTR(); +} + +extern "C" +__attribute__((visibility ("default"))) +__attribute__((noinline)) +void *__safestack_get_safe_stack_ptr() { + return (char*) __builtin_frame_address(0) + 2*sizeof(void*); +} + +#ifdef __linux__ +// Run safestack initialization before any other constructors +// FIXME: can we do something similar on Mac or FreeBSD? +extern "C" { +__attribute__((section(".preinit_array"), used)) +void (*__llvm__safestack_preinit)(void) = __llvm__safestack_init; +} +#endif + +} // namespace __llvm__safestack diff --git a/make/platform/clang_darwin.mk b/make/platform/clang_darwin.mk index 8b5f848..7908b77 100644 --- a/make/platform/clang_darwin.mk +++ b/make/platform/clang_darwin.mk @@ -106,6 +106,10 @@ endif Configs += ubsan_osx UniversalArchs.ubsan_osx := $(call CheckArches,i386 x86_64 x86_64h,ubsan_osx) +# Configurations which define the safestack support functions. 
+Configs += safestack_osx +UniversalArchs.safestack_osx = $(call CheckArches,i386 x86_64 x86_64h,safestack_osx) + # Darwin 10.6 has a bug in cctools that makes it unable to use ranlib on our ARM # object files. If we are on that platform, strip out all ARM archs. We still # build the libraries themselves so that Clang can find them where it expects @@ -170,6 +174,10 @@ CFLAGS.asan_iossim_dynamic := \ CFLAGS.ubsan_osx := $(CFLAGS) -mmacosx-version-min=10.6 -fno-builtin +CFLAGS.safestack_osx := \ + $(CFLAGS) -fno-rtti -fno-exceptions -fno-builtin \ + -fno-stack-protector -fno-safe-stack + CFLAGS.ios.i386 := $(CFLAGS) $(IOSSIM_DEPLOYMENT_ARGS) CFLAGS.ios.x86_64 := $(CFLAGS) $(IOSSIM_DEPLOYMENT_ARGS) CFLAGS.ios.x86_64h := $(CFLAGS) $(IOSSIM_DEPLOYMENT_ARGS) @@ -244,6 +252,8 @@ FUNCTIONS.asan_iossim_dynamic := $(AsanFunctions) $(AsanCXXFunctions) \ FUNCTIONS.ubsan_osx := $(UbsanFunctions) $(UbsanCXXFunctions) \ $(SanitizerCommonFunctions) +FUNCTIONS.safestack_osx := $(SafeStackFunctions) $(InterceptionFunctions) + CCKEXT_COMMON_FUNCTIONS := \ absvdi2 \ absvsi2 \ diff --git a/make/platform/clang_linux.mk b/make/platform/clang_linux.mk index 2edbfff..3a8ab08 100644 --- a/make/platform/clang_linux.mk +++ b/make/platform/clang_linux.mk @@ -50,7 +50,7 @@ endif # Build runtime libraries for i386. ifeq ($(call contains,$(SupportedArches),i386),true) Configs += builtins-i386 profile-i386 san-i386 asan-i386 asan_cxx-i386 \ - ubsan-i386 ubsan_cxx-i386 + ubsan-i386 ubsan_cxx-i386 safestack-i386 Arch.builtins-i386 := i386 Arch.profile-i386 := i386 Arch.san-i386 := i386 @@ -58,13 +58,14 @@ Arch.asan-i386 := i386 Arch.asan_cxx-i386 := i386 Arch.ubsan-i386 := i386 Arch.ubsan_cxx-i386 := i386 +Arch.safestack-i386 := i386 endif # Build runtime libraries for x86_64. 
ifeq ($(call contains,$(SupportedArches),x86_64),true) Configs += builtins-x86_64 profile-x86_64 san-x86_64 asan-x86_64 asan_cxx-x86_64 \ tsan-x86_64 msan-x86_64 ubsan-x86_64 ubsan_cxx-x86_64 dfsan-x86_64 \ - lsan-x86_64 + lsan-x86_64 safestack-x86_64 Arch.builtins-x86_64 := x86_64 Arch.profile-x86_64 := x86_64 Arch.san-x86_64 := x86_64 @@ -76,6 +77,7 @@ Arch.ubsan-x86_64 := x86_64 Arch.ubsan_cxx-x86_64 := x86_64 Arch.dfsan-x86_64 := x86_64 Arch.lsan-x86_64 := x86_64 +Arch.safestack-x86_64 := x86_64 endif endif @@ -110,6 +112,10 @@ CFLAGS.ubsan_cxx-i386 := $(CFLAGS) -m32 $(SANITIZER_CFLAGS) CFLAGS.ubsan_cxx-x86_64 := $(CFLAGS) -m64 $(SANITIZER_CFLAGS) CFLAGS.dfsan-x86_64 := $(CFLAGS) -m64 $(SANITIZER_CFLAGS) -fno-rtti CFLAGS.lsan-x86_64 := $(CFLAGS) -m64 $(SANITIZER_CFLAGS) -fno-rtti +CFLAGS.safestack-i386 := $(CFLAGS) -m32 -fPIE -fno-builtin -fno-exceptions \ + -fno-rtti -fno-stack-protector -fno-safe-stack +CFLAGS.safestack-x86_64 := $(CFLAGS) -m64 -fPIE -fno-builtin -fno-exceptions \ + -fno-rtti -fno-stack-protector -fno-safe-stack SHARED_LIBRARY.asan-arm-android := 1 ANDROID_COMMON_FLAGS := -target arm-linux-androideabi \ @@ -156,6 +162,8 @@ FUNCTIONS.dfsan-x86_64 := $(DfsanFunctions) $(InterceptionFunctions) \ $(SanitizerCommonFunctions) FUNCTIONS.lsan-x86_64 := $(LsanFunctions) $(InterceptionFunctions) \ $(SanitizerCommonFunctions) +FUNCTIONS.safestack-i386 := $(SafeStackFunctions) $(InterceptionFunctions) +FUNCTIONS.safestack-x86_64 := $(SafeStackFunctions) $(InterceptionFunctions) # Always use optimized variants. OPTIMIZED := 1
David Chisnall
2014-Nov-03 16:40 UTC
[LLVMdev] [PATCH] Protection against stack-based memory corruption errors using SafeStack
Hi Volodymyr,

I enjoyed the paper, and we're very interested in deploying this technique on FreeBSD. Would you mind putting the patches on Phabricator (http://reviews.llvm.org) and adding me as a reviewer?

David

P.S. If you have patches against the LLVM 3.5 release, we'd be interested in importing them into the copy of LLVM in the FreeBSD tree; otherwise we'll grab them as part of the next LLVM release.

On 3 Nov 2014, at 16:05, Volodymyr Kuznetsov <vova.kuznetsov at epfl.ch> wrote:

> [...]
Volodymyr Kuznetsov
2014-Nov-03 17:49 UTC
[LLVMdev] [PATCH] Protection against stack-based memory corruption errors using SafeStack
Hi David,

Thanks a lot for your quick reply and your interest in our technique! I've just uploaded the three patches to Phabricator: http://reviews.llvm.org/D6094, http://reviews.llvm.org/D6095 and http://reviews.llvm.org/D6096 . We will prepare the patches for the 3.5 branch as well (it should be fairly straightforward, as most changes are well isolated) and send them to you tomorrow. We would also be glad to share our changes to the FreeBSD libc that integrate the safe stack runtime in a more direct and clean way than if linked as a compiler-rt library.

- Vova

On Mon, Nov 3, 2014 at 5:40 PM, David Chisnall <David.Chisnall at cl.cam.ac.uk> wrote:

> [...]
Kostya Serebryany
2014-Nov-04 00:36 UTC
[LLVMdev] [PATCH] Protection against stack-based memory corruption errors using SafeStack
Hi Volodymyr, disclaimer: my opinion is biased because I've co-authored AddressSanitizer and SafeStack is doing very similar things. The functionality of SafeStack is limited in scope, but given the near-zero overhead and non-zero benefits I'd still like to see it in LLVM trunk. SafeStack, both the LLVM and compiler-rt parts, is very similar to what we do in AddressSanitizer, so I would like to see more code reuse, especially in compiler-rt. What about user-visible interface? Do we want it to be more similar to asan/tsan/msan/lsan/ubsan/dfsan flags, e.g. -fsanitize=safe-stack ? I am puzzled why you are doing transformations on the CodeGen level, as opposed to doing it in LLVM IR pass. LLVM code base is c++11 now, so in the new code please use c++11, at least where it leads to simpler code (e.g. "for" loops). compiler-rt part lacks tests. same for clang part. Are you planing to support this feature in LLVM long term? You say that SafeStack is a superset of stack cookies. What are the downsides? You at least increase the memory footprint by doubling the stack sizes. You also add some (minor) incompatibility and the need for the new attributes to disable SafeStack. What else? I've also left a few specific comments in phabricator. --kcc On Mon, Nov 3, 2014 at 8:05 AM, Volodymyr Kuznetsov <vova.kuznetsov at epfl.ch> wrote:> Dear LLVM developers, > > Our team has developed an LLVM-based protection mechanism that > (i) prevents control-flow hijack attacks enabled by memory > corruption errors and (ii) has very low performance overhead. We would like > to contribute the implementation to LLVM. We presented this work at > the OSDI 2014 conference, at several software companies, and several US > universities. We received positive feedback, and so we've open-sourced our > prototype available for download from our project website ( > http://levee.epfl.ch). > > There are three components (safe stack, CPS, and CPI), and each can be > used individually. 
> Our most stable part is the safe stack instrumentation, which separates the
> program stack into a safe stack, which stores return addresses, register
> spills, and local variables that are statically verified to be accessed in
> a safe way, and the unsafe stack, which stores everything else. Such
> separation makes it much harder for an attacker to corrupt objects on the
> safe stack, including function pointers stored in spilled registers and
> return addresses. A detailed description of the individual components is
> available in our OSDI paper on code-pointer integrity
> (http://dslab.epfl.ch/pubs/cpi.pdf).
>
> The overhead of our implementation of the safe stack is very close to zero
> (0.01% on the Phoronix benchmarks and 0.03% on SPEC2006 CPU on average).
> This is lower than the overhead of stack cookies, which are supported by
> LLVM and are commonly used today, yet the security guarantees of the safe
> stack are strictly stronger than those of stack cookies. In some cases, the
> safe stack improves performance due to better cache locality.
>
> Our current implementation of the safe stack is stable and robust: we used
> it to recompile multiple projects on Linux, including Chromium, and we also
> recompiled the entire FreeBSD user-space system and more than 100 packages.
> We ran unit tests on the FreeBSD system and many of the packages and
> observed no errors caused by the safe stack. The safe stack is also fully
> binary compatible with non-instrumented code and can be applied to parts of
> a program selectively.
>
> We attach our implementation of the safe stack as three patches against the
> current SVN HEAD of LLVM (r221153), clang (r221154) and compiler-rt
> (r220991). The same changes are also available on
> https://github.com/cpi-llvm in the safestack-r221153 branches of the
> corresponding repositories. The patches make the following changes:
>
> -- Add the safestack function attribute, similar to the ssp, sspstrong and
>    sspreq attributes.
> -- Add the SafeStack instrumentation pass that applies the safe stack to
>    all functions that have the safestack attribute. This pass moves all
>    unsafe local variables to the unsafe stack with a separate stack
>    pointer, whereas all safe variables remain on the regular stack that is
>    managed by LLVM as usual.
> -- Invoke the pass as the last stage before code generation (at the same
>    time the existing cookie-based stack protector pass is invoked).
> -- Add -fsafe-stack and -fno-safe-stack options to clang to control safe
>    stack usage (the safe stack is disabled by default).
> -- Add the __attribute__((no_safe_stack)) attribute to clang that can be
>    used to disable the safe stack for individual functions even when
>    enabled globally.
> -- Add basic runtime support for the safe stack to compiler-rt. The runtime
>    manages unsafe stack allocation/deallocation for each thread.
> -- Add unit tests for the safe stack.
>
> You can find more information about the safe stack, as well as other parts
> of our control-flow hijack protection technique, in our OSDI paper. FYI,
> here is the abstract of the paper:
>
> << Systems code is often written in low-level languages like C/C++, which
> offer many benefits but also delegate memory management to programmers.
> This invites memory safety bugs that attackers can exploit to divert
> control flow and compromise the system. Deployed defense mechanisms (e.g.,
> ASLR, DEP) are incomplete, and stronger defense mechanisms (e.g., CFI)
> often have high overhead and limited guarantees.
>
> We introduce code-pointer integrity (CPI), a new design point that
> guarantees the integrity of all code pointers in a program (e.g., function
> pointers, saved return addresses) and thereby prevents all control-flow
> hijack attacks, including return-oriented programming. We also introduce
> code-pointer separation (CPS), a relaxation of CPI with better performance
> properties. CPI and CPS offer substantially better security-to-overhead
> ratios than the state of the art; they are practical (we protect a complete
> FreeBSD system and over 100 packages like apache and postgresql), effective
> (they prevent all attacks in the RIPE benchmark), and efficient: on SPEC
> CPU2006, CPS averages 1.2% overhead for C and 1.9% for C/C++, while CPI's
> overhead is 2.9% for C and 8.4% for C/C++. >>
>
> (This is joint work with V. Kuznetsov, L. Szekeres, M. Payer, G. Candea,
> R. Sekar, and D. Song)
>
> We look forward to your feedback and hope for a prompt merge into LLVM, to
> make the software built with clang more secure.
>
> - Volodymyr Kuznetsov & the CPI team
>
> _______________________________________________
> LLVM Developers mailing list
> LLVMdev at cs.uiuc.edu http://llvm.cs.uiuc.edu
> http://lists.cs.uiuc.edu/mailman/listinfo/llvmdev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.llvm.org/pipermail/llvm-dev/attachments/20141103/c5bf568b/attachment.html>
David Chisnall
2014-Nov-04 09:07 UTC
[LLVMdev] [PATCH] Protection against stack-based memory corruption errors using SafeStack
On 4 Nov 2014, at 00:36, Kostya Serebryany <kcc at google.com> wrote:

> You at least increase the memory footprint by doubling the stack sizes.

Not quite. The space overhead is constant for each stack frame: you just need to keep track of the tops of two stacks rather than one. The important overhead is that you reduce locality of reference: you will need a minimum of two cache lines for each stack frame instead of one. In practice this is not a huge problem, because you need several cache lines live for good performance of the stack anyway, and the total number of lines is not much different. There are likely to be some pathological cases, though, when both the safe and unsafe stacks have the same alignment for the top and you are dealing with some other heap data with the same alignment. This will increase contention in the set-associative cache sets and may cause more misses.

David
Volodymyr Kuznetsov
2014-Nov-12 10:50 UTC
[LLVMdev] [PATCH] Protection against stack-based memory corruption errors using SafeStack
Dear LLVM developers,

We've applied the feedback we received on Phabricator on the SafeStack patches, and added tests for all components of the SafeStack (thanks to our new developer Alexandre Bique at EPFL, who is working on the SafeStack starting this week). We would appreciate any suggestions on how we can further improve the SafeStack and help its inclusion in LLVM.

- Volodymyr Kuznetsov & the CPI team

On Mon, Nov 3, 2014 at 5:05 PM, Volodymyr Kuznetsov <vova.kuznetsov at epfl.ch> wrote:

> Dear LLVM developers,
>
> Our team has developed an LLVM-based protection mechanism that (i) prevents
> control-flow hijack attacks enabled by memory corruption errors and (ii)
> has very low performance overhead. We would like to contribute the
> implementation to LLVM.
> [...]
David Chisnall
2014-Nov-14 07:36 UTC
[LLVMdev] [PATCH] Protection against stack-based memory corruption errors using SafeStack
On 12 Nov 2014, at 10:50, Volodymyr Kuznetsov <vova.kuznetsov at epfl.ch> wrote:

> We've applied the feedback we received on Phabricator on the SafeStack
> patches, and added tests for all components of the SafeStack (thanks to our
> new developer Alexandre Bique at EPFL, who is working on the SafeStack
> starting this week). We would appreciate any suggestions on how we can
> further improve the SafeStack and help its inclusion in LLVM.

This week is very deadline-heavy for me, but I hope to be able to do another review over the weekend.

David
Kostya Serebryany
2014-Nov-14 22:15 UTC
[LLVMdev] [PATCH] Protection against stack-based memory corruption errors using SafeStack
On Wed, Nov 12, 2014 at 2:50 AM, Volodymyr Kuznetsov <vova.kuznetsov at epfl.ch> wrote:

> Dear LLVM developers,
>
> We've applied the feedback we received on Phabricator on the SafeStack
> patches,

Did you investigate the possibility of moving the transformation from codegen to the LLVM level, i.e. the same level where asan/msan/tsan/dfsan work? I understand that it's a lot of work, but it will pay off with greater portability and maintainability later.

Also, did you reply to my comments about reusing compiler-rt code from sanitizer_common? I see lots of places in lib/safestack where you duplicate existing functionality from lib/sanitizer_common.

> and added tests for all components of the SafeStack (thanks to our new
> developer Alexandre Bique at EPFL, who is working on the SafeStack starting
> this week). We would appreciate any suggestions on how we can further
> improve the SafeStack and help its inclusion in LLVM.
>
> - Volodymyr Kuznetsov & the CPI team
>
> [...]
Volodymyr Kuznetsov
2014-Nov-15 12:53 UTC
[LLVMdev] [PATCH] Protection against stack-based memory corruption errors using SafeStack
Hi Kostya,

> On Wed, Nov 12, 2014 at 2:50 AM, Volodymyr Kuznetsov
> <vova.kuznetsov at epfl.ch> wrote:
>
>> We've applied the feedback we received on Phabricator on the SafeStack
>> patches,
>
> Did you investigate the possibility of moving the transformation from
> codegen to the LLVM level, i.e. the same level where asan/msan/tsan/dfsan
> work? I understand that it's a lot of work, but it will pay off with
> greater portability and maintainability later.

We're currently considering doing something in-between. We could place the SafeStack pass in lib/Transforms/Instrumentation, so that it can be easily invoked with opt and operate on IR, but still schedule the pass during code generation by default. The latter is especially important when using LTO: running SafeStack before LTO would both impair the effectiveness of LTO (e.g., by breaking alias analysis, mem2reg, etc.) and prevent SafeStack from taking advantage of the extra information obtained during LTO (e.g., LTO can remove some pointer uses through inlining and DCE). Even without LTO, some of the passes scheduled during code generation (in addPassesToGenerateCode) could affect the security and the performance of the generated code as well.

Do you think moving the pass to lib/Transforms/Instrumentation but scheduling it during code generation would make sense? If so, we'll do that and change the safestack tests to use opt instead of llc.

> Also, did you reply to my comments about reusing compiler-rt code from
> sanitizer_common? I see lots of places in lib/safestack where you duplicate
> existing functionality from lib/sanitizer_common.

Yes, we would like to use some of the functions from sanitizer_common (e.g., internal_mmap/munmap and some pthread-related functions), but linking the entire sanitizer_common just for that might be overkill. E.g., it can make small programs like coreutils up to 3x larger, and it requires compiling with -pthread. Perhaps we can move those functions to separate files in sanitizer_common and make them usable independently from the rest of sanitizer_common?

- Vova