JF Bastien via llvm-dev
2019-Nov-15 18:58 UTC
[llvm-dev] [RFC] LLVM Security Group and Process
Hello compiler enthusiasts,

The Apple LLVM team would like to propose that a new security process and an associated private LLVM Security Group be created under the umbrella of the LLVM project.

A draft proposal for how we could organize such a group and what its process could be is available on Phabricator <https://reviews.llvm.org/D70326>. The proposal starts with a list of goals for the process and Security Group, repeated here:

The LLVM Security Group has the following goals:
1. Allow LLVM contributors and security researchers to disclose security-related issues affecting the LLVM project to members of the LLVM community.
2. Organize fixes, code reviews, and release management for said issues.
3. Allow distributors time to investigate and deploy fixes before wide dissemination of vulnerabilities or mitigation shortcomings.
4. Ensure timely notification and release to vendors who package and distribute LLVM-based toolchains and projects.
5. Ensure timely notification to users of LLVM-based toolchains whose compiled code is security-sensitive, through the CVE process <https://cve.mitre.org/>.

We’re looking for answers to the following questions:
1. On this list: Should we create a security group and process?
2. On this list: Do you agree with the goals listed in the proposal?
3. On this list: At a high level, what do you think should be done differently, and what do you think is exactly right in the draft proposal?
4. On the Phabricator code review: Going into specific details, what do you think should be done differently, and what do you think is exactly right in the draft proposal?
5. On this list: To help understand where you’re coming from with your feedback, it would be helpful to state how you personally approach this issue:
   1. Are you an LLVM contributor (individual or representing a company)?
   2. Are you involved with security aspects of LLVM (if so, which)?
   3. Do you maintain significant downstream LLVM changes?
   4. Do you package and deploy LLVM for others to use (if so, to how many people)?
   5. Is your LLVM distribution based on the open-source releases?
   6. How often do you usually deploy LLVM?
   7. How fast can you deploy an update?
   8. Does your LLVM distribution handle untrusted inputs, and what kind?
   9. What’s the threat model for your LLVM distribution?

Other open-source projects have security-related groups and processes. They structure their groups very differently from one another. This proposal borrows from some of these projects’ processes. A few examples:
- https://webkit.org/security-policy/
- https://chromium.googlesource.com/chromium/src/+/lkgr/docs/security/faq.md
- https://wiki.mozilla.org/Security
- https://www.openbsd.org/security.html
- https://security-team.debian.org/security_tracker.html
- https://www.python.org/news/security/

When providing feedback, it would be great to hear if you’ve dealt with these or other projects’ processes, what works well, and what can be done better.

I’ll go first in answering my own questions above:
1. Yes! We should create a security group and process.
2. We agree with the goals listed.
3. We think the proposal is exactly right, but would like to hear the community’s opinions.
4. Here’s how we approach the security of LLVM:
   1. I contribute to LLVM as an Apple employee.
   2. I’ve been involved in a variety of LLVM security issues, from automatic variable initialization to security-related diagnostics, as well as deploying these mitigations to internal codebases.
   3. We maintain significant downstream changes.
   4. We package and deploy LLVM, both internally and externally, for a variety of purposes, including the clang, Swift, and mobile GPU shader compilers.
   5. Our LLVM distribution is not directly derived from the open-source release. In all cases, all non-upstream public patches for our releases are available in repository branches at https://github.com/apple.
   6. We have many deployments of LLVM whose release schedules vary significantly. The LLVM build deployed as part of Xcode historically has one major release per year, followed by roughly one minor release every 2 months. Other releases of LLVM are also security-sensitive and don’t follow the same schedule.
   7. This depends on which release of LLVM is affected.
   8. Yes, our distribution sometimes handles untrusted input.
   9. The threat model is highly variable depending on the particular language front-ends being considered.

Apple is involved with a variety of open-source projects and their disclosures. For example, we frequently work with the WebKit community to handle security issues through their process.

Thanks,

JF
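For illustration, a minimal sketch of the automatic variable initialization mitigation mentioned above, assuming Clang's -ftrivial-auto-var-init flag; the function and file names are made up for the example:

    // Without the flag, `id` is read uninitialized when `use_default` is
    // false, which can leak stale stack contents (a classic infoleak
    // primitive). Building with:
    //   clang++ -ftrivial-auto-var-init=pattern autoinit.cpp -o autoinit
    // asks Clang to fill such locals with a recognizable byte pattern,
    // turning a silent infoleak into a deterministic, debuggable value.
    #include <cstdio>

    int lookup_id(bool use_default) {
      int id;            // deliberately left uninitialized
      if (use_default)
        id = 42;
      return id;         // garbage when !use_default, unless mitigated
    }

    int main() {
      std::printf("%d\n", lookup_id(false));
      return 0;
    }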
Chris Bieneman via llvm-dev
2019-Nov-18 19:42 UTC
[llvm-dev] [RFC] LLVM Security Group and Process
Hey JF,

Thanks for putting this RFC together. LLVM security issues are very important, and I'm really glad someone is focusing attention here.

I'm generally in agreement with much of what you have proposed. I do have a few thoughts I'd like to bring up.

Having the group appointed by the board seems a bit odd to me. Historically the board has not involved itself in technical processes. I'm curious what the board's thoughts are relating to this level of involvement in project direction (I know you wanted proposal feedback on Phabricator, but I think the role of the board is something worth discussing here).

My other meta thought is about focus and direction for the group. How do you define security issues?

To give you where I'm coming from: one of the big concerns I have at the moment is about running LLVM in secure execution contexts, where we care about bugs in the compiler that could influence code generation, not just the code generation itself. Historically, I believe, the security focus of LLVM has primarily been on generated code; do you see this group tackling both sides of the problem?

Thanks,
-Chris

> On Nov 15, 2019, at 10:58 AM, JF Bastien via llvm-dev <llvm-dev at lists.llvm.org> wrote:
> [...]
JF Bastien via llvm-dev
2019-Nov-18 21:12 UTC
[llvm-dev] [RFC] LLVM Security Group and Process
Hi Chris!

> On Nov 18, 2019, at 11:42 AM, Chris Bieneman <chris.bieneman at me.com> wrote:
> [...]
> Having the group appointed by the board seems a bit odd to me. Historically the board has not involved itself in technical processes. I'm curious what the board's thoughts are relating to this level of involvement in project direction (I know you wanted proposal feedback on Phabricator, but I think the role of the board is something worth discussing here).

I consulted the board before sending out the RFC, and they didn't express concerns about this specific point. I'm happy to have another method to find the right seed group, but that's the best I could come up with :)

> My other meta thought is about focus and direction for the group. How do you define security issues?
>
> To give you where I'm coming from: one of the big concerns I have at the moment is about running LLVM in secure execution contexts, where we care about bugs in the compiler that could influence code generation, not just the code generation itself. Historically, I believe, the security focus of LLVM has primarily been on generated code; do you see this group tackling both sides of the problem?

That's indeed a difficult one! I think there are two aspects to this: for a non-LLVM contributor, it doesn't really matter. If they think it's a security thing, we should go through this process. They shouldn't need to try to figure out what LLVM thinks is its security boundary; that's the project's job. So I want to be fairly accepting, and let people file things that aren't actually security related as security issues, because that has lower risk of folks doing the wrong thing (filing security issues as not security related).

On the flip side, I think it's up to the project to figure it out. I used to care about what you allude to when working on PNaCl, but nowadays I mostly just care about the code generation. If we have people with that type of concern in the security group then we're in a good position to handle those problems. If nobody represents that concern, then I don't think we can address them, even if we nominally care.

In other words: it's security related if an LLVM contributor signs up to shepherd this kind of issue through the security process. If nobody volunteers, then it's not something the security process can handle. That might point out a hole in our coverage, one we should address.

Does that make sense?

> Thanks,
> -Chris
>
>> On Nov 15, 2019, at 10:58 AM, JF Bastien via llvm-dev <llvm-dev at lists.llvm.org> wrote:
>> [...]
James Y Knight via llvm-dev
2019-Nov-18 21:51 UTC
[llvm-dev] [RFC] LLVM Security Group and Process
I think it's great to make a policy for reporting security bugs. But first, yes, we need to be clear as to what sorts of things we consider "security bugs", and what we do not. We need to be clear on this, both for users to know what they should depend on, and for LLVM contributors to know when they should be raising a flag, if they discover or fix something themselves.

We could just keep on doing our usual development process, and respond only to *externally-reported* issues with the security-response routine. But I don't think that would be a good idea. Creating a process whereby anyone outside the project can report security issues, and for which we'll coordinate disclosure and create backports and such, is all well and good... but if we don't then also do (or at least *try* to do!) the same for issues discovered and fixed within the community, is there even a point?

So, if we're going to expand what we consider a security bug beyond the present "effectively nothing", I think it is really important to be a bit more precise about what it's being expanded to.

For example, I think it should generally be agreed that a bug in Clang which allows arbitrary code execution in the compiler, given a specially crafted source file, should not be considered a security issue. A bug, yes, but not a security issue, because we do not consider the use case of running the compiler in a privileged context to be a supported operation. But also the same for parsing a source file into a Clang AST -- which might even happen automatically with editor integration. Seems less obviously correct, but, still, the reality today. And, IMO, the same stance should also apply to feeding arbitrary bitcode into LLVM. (And I get the unfortunate feeling that last statement might not find universal agreement.)

Given the current architecture and state of the project, I think it would be rather unwise to pretend that any of those are secure operations, or to try to support them as such with a security response process. Many compiler crashes seem likely to be security bugs, if someone is trying hard enough. If every time such a bug was fixed, it got a full security response triggered, with embargoes, CVEs, backports, etc., that just seems unsustainable. Maybe it would be *nice* to support this, but I think we're a long way from there currently.

However, all that said -- based on timing and recent events, perhaps your primary goal here is to establish a process for discussing LLVM patches to work around not-yet-public CPU errata, and issues of that nature. In that case, the need for the security response group is primarily to allow developing quality LLVM patches based on not-yet-public information about other people's products. That seems like a very useful thing to formalize, indeed, and doesn't need any changes in LLVM developers' thinking. So if that's what we're talking about, let's be clear about it.

On Mon, Nov 18, 2019 at 2:43 PM Chris Bieneman via llvm-dev <llvm-dev at lists.llvm.org> wrote:
> [...]
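For illustration, a minimal sketch of the kind of consumer discussed above: a tool that hands untrusted .bc bytes straight to LLVM's bitcode reader. The tool itself is hypothetical; parseBitcodeFile is the usual C++ entry point.

    // Every bug reachable from parseBitcodeFile runs with the privileges of
    // the embedding process, which is the attack surface being debated here.
    // Link against LLVM, e.g.:
    //   clang++ readbc.cpp $(llvm-config --cxxflags --ldflags --libs bitreader core support)
    #include "llvm/Bitcode/BitcodeReader.h"
    #include "llvm/IR/LLVMContext.h"
    #include "llvm/IR/Module.h"
    #include "llvm/Support/Error.h"
    #include "llvm/Support/MemoryBuffer.h"
    #include <cstdio>

    int main(int argc, char **argv) {
      if (argc != 2)
        return 1;
      auto BufOrErr = llvm::MemoryBuffer::getFile(argv[1]);
      if (!BufOrErr)
        return 1;
      llvm::LLVMContext Ctx;
      // Parse attacker-supplied bytes into an IR module.
      auto ModOrErr =
          llvm::parseBitcodeFile((*BufOrErr)->getMemBufferRef(), Ctx);
      if (!ModOrErr) {
        llvm::consumeError(ModOrErr.takeError());
        return 1;
      }
      std::printf("parsed module: %s\n",
                  (*ModOrErr)->getName().str().c_str());
      return 0;
    }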
Robinson, Paul via llvm-dev
2019-Nov-21 19:23 UTC
[llvm-dev] [RFC] LLVM Security Group and Process
My answers to your "on the list" questions:

1. Should we create a security group and process?
SGTM. It appears that gcc has CVEs against it; why should they have all the fun?

2. Do you agree with the goals listed in the proposal?
They also SGTM.

3. At a high level, what do you think should be done differently, and what do you think is exactly right in the draft proposal?
The involvement of the Foundation Board to bootstrap the initial security team seems a tad odd. Basically you're calling for volunteers and wanting some sort of vetting process, and picked the Board to do that initially for lack of any other alternatives? I agree that the Board should sign on to have a security team at all, that falls within their purview, but they don't need to be part of the initial selection process. The initial volunteers can demonstrate their appropriateness to each other, just like later nominees would.

And answers to "where you're coming from":

1. Are you an LLVM contributor (individual or representing a company)?
I am a contributor, as a Sony employee, and code owner for the PS4 target.

2. Are you involved with security aspects of LLVM (if so, which)?
I have participated in security-related discussions that come up. I've recently done some work on the stack-smash protector pass; IIRC, Sony contributed the 'strong' flavor, which I reviewed. Some years ago there was a random-nop-insertion pass (for ROP gadget removal) proposed, which didn't stick; we recently had a summer intern work on it but did not get to proper quality; I'd like to revive that. Pre-LLVM, I spent over a decade working on OS security for DEC and Tandem. I can’t say I’m still current on the topic, but it remains an interest.

3. Do you maintain significant downstream LLVM changes?
Yes.

4. Do you package and deploy LLVM for others to use (if so, to how many people)?
We package a Clang-based toolchain for app and game-development studios; I don't have exact numbers but the developer population is definitely in the thousands.

5. Is your LLVM distribution based on the open-source releases?
Yes; we do continuous integration from upstream master, but we base our releases on the upstream release branches.

6. How often do you usually deploy LLVM?
Twice a year; rarely, we deploy hot fixes.

7. How fast can you deploy an update?
A new release based on a new upstream branch has a very long lead time (months). We have deployed hot fixes based on a previous release in a few weeks, but we don't like to do that.

8. Does your LLVM distribution handle untrusted inputs, and what kind?
I'm unclear what you mean by this.

9. What’s the threat model for your LLVM distribution?
I can't speak to our internal security team's thoughts--we will likely want to nominate a second Sony person, from that team, to be a non-compiler-expert "vendor representative" who can better address that question. I can say that we use the same toolchain to build our OS, as well as other sensitive software such as the browser, along with games and other apps that could engage in online transactions involving actual money.
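For illustration, a small example of the heuristic difference the 'strong' flavor of the stack protector mentioned above makes; the function and file names are made up, and the flags are the standard Clang ones:

    // With plain -fstack-protector, only functions containing character
    // arrays larger than the ssp buffer size (8 bytes by default) get a
    // stack canary. -fstack-protector-strong widens the heuristic: any
    // local array, or a local whose address escapes, is enough.
    // Compare the two outputs and look for __stack_chk_fail:
    //   clang++ -S -O2 -fstack-protector        canary.cpp -o weak.s
    //   clang++ -S -O2 -fstack-protector-strong canary.cpp -o strong.s
    #include <cstring>

    int sum_first_four(const char *untrusted) {
      char buf[4];                      // too small for the default heuristic,
      std::memcpy(buf, untrusted, 4);   // but 'strong' still adds a canary
      return buf[0] + buf[1] + buf[2] + buf[3];
    }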
> On Nov 21, 2019, at 14:23, Robinson, Paul via llvm-dev <llvm-dev at lists.llvm.org> wrote:
>
> Some years ago there was a random-nop-insertion pass (for ROP gadget removal) proposed, which didn't stick; we recently had a summer intern work on it but did not get to proper quality; I'd like to revive that.

Hi Paul,

I'm curious about what the use case for this was. In the normal course of binary distribution of programs, the addition of nops doesn't affect ROP in any significant way. (For a while, inserting a nop before a ret broke ROPgadget's [1] ability to find interesting code sequences since it was looking for fixed sequences of instructions.)

I could imagine it being used for JITted code. If that was the use case in mind, did you happen to compare it to other randomized codegen?

I'm only curious because this has historically been an area of research of mine [2,3,4], not any sort of pressing matter.

Thank you,
Steve

1. https://github.com/JonathanSalwan/ROPgadget
2. https://checkoway.net/papers/evt2009/evt2009.pdf
3. https://checkoway.net/papers/noret_ccs2010/noret_ccs2010.pdf
4. https://checkoway.net/papers/fcfi2014/fcfi2014.pdf

--
Stephen Checkoway
Philip Reames via llvm-dev
2019-Nov-26 00:51 UTC
[llvm-dev] [RFC] LLVM Security Group and Process
On 11/15/19 10:58 AM, JF Bastien via llvm-dev wrote:

> On this list: Should we create a security group and process?

Probably, though we haven't seen a strong need to date. If a group does form, we (Azul) are definitely interested in participating as a vendor.

> On this list: Do you agree with the goals listed in the proposal?

Yes.

> On this list: At a high level, what do you think should be done differently, and what do you think is exactly right in the draft proposal?

I'm a bit uncomfortable with the board-selected initial group. I see the need for a final decision maker, but maybe require public on-list nominations before ratification by the board? If there's broad consensus, no need to appeal to the final decision maker.

> Are you an LLVM contributor (individual or representing a company)?

Yes; I'm responding here both in my capacity as an individual contributor and on behalf of my employer, Azul Systems.

> Are you involved with security aspects of LLVM (if so, which)?

We have responded to a couple of security-relevant bugs, though we've generally not acknowledged that fact upstream until substantially later.

> Do you maintain significant downstream LLVM changes?

Yes.

> Do you package and deploy LLVM for others to use (if so, to how many people)?

Yes. Can't share user count.

> Is your LLVM distribution based on the open-source releases?

No. We build off of periodic ToT snapshots.

> How often do you usually deploy LLVM?

We have a new release roughly monthly. We backport selectively as needed.

> How fast can you deploy an update?

Usual process would be a week or two. In a true emergency, much less.

> Does your LLVM distribution handle untrusted inputs, and what kind?

Yes: for any well-formed Java input we may generate IR and invoke the optimizer. We fuzz extensively for this reason.

> What's the threat model for your LLVM distribution?

In the worst case, attacker-controlled bytecode. Given that, the attacker can influence, but not entirely control, the IR fed to the compiler.

> [...]
Kostya Serebryany via llvm-dev
2019-Nov-27 02:31 UTC
[llvm-dev] [RFC] LLVM Security Group and Process
> On this list: Should we create a security group and process?

Yes, as long as it is a funded mandate by several major contributors. We can't run it as a volunteer group. Also, someone (this group, or another) should do proactive work on hardening the sensitive parts of LLVM, otherwise it will be a whack-a-mole. Of course, we will need to decide what those sensitive parts are first.

> On this list: Do you agree with the goals listed in the proposal?

In general, yes, although some details worry me. E.g. I would try to be stricter with disclosure dates:

> public within approximately fourteen weeks of the fix landing in the LLVM repository

This is too slow imho; it hurts the attackers less than it hurts the project. oss-fuzz will adhere to the 90/30 policy.

> On this list: At a high level, what do you think should be done differently, and what do you think is exactly right in the draft proposal?

The process seems to be too complicated, but no strong opinion here. Do we have another example from a project of similar scale?

> On the Phabricator code review: Going into specific details, what do you think should be done differently, and what do you think is exactly right in the draft proposal?

Commented on GitHub vs crbug.

> Are you an LLVM contributor (individual or representing a company)?

Yes, representing Google.

> Are you involved with security aspects of LLVM (if so, which)?

To some extent:
* my team owns tools that tend to find security bugs (sanitizers, libFuzzer)
* my team co-owns oss-fuzz, which automatically sends security bugs to LLVM

> Do you maintain significant downstream LLVM changes?

No.

> Do you package and deploy LLVM for others to use (if so, to how many people)?

Not my team.

> Is your LLVM distribution based on the open-source releases?

No.

> How often do you usually deploy LLVM?

In some ecosystems LLVM is deployed roughly every two to three weeks. In others it takes months.

> How fast can you deploy an update?

For some ecosystems we can turn around in several days. For others I don't know.

> Does your LLVM distribution handle untrusted inputs, and what kind?

Third-party OSS code that is often pulled automatically.

> What's the threat model for your LLVM distribution?

Speculating here; I am not a real security expert myself:
* A developer getting a bug report and running clang/llvm on the "buggy" input, compromising the developer's desktop.
* A major open-source project is compromised and its code is changed in a subtle way that triggers a vulnerability in Clang/LLVM. The open-source code is pulled into an internal repo and is compiled by clang, compromising a machine on the build farm.
* A vulnerability in a runtime library, e.g. crbug.com/606626 or crbug.com/994957.
* (???) A vulnerability in an LLVM-based JIT triggered by untrusted bitcode. (2nd-hand knowledge.)
* (???) An optimizer introducing a vulnerability into otherwise memory-safe code (we've seen a couple of such in load & store widening).
* (???) A deficiency in a hardening pass (CFI, stack protector, shadow call stack) making the hardening inefficient.

My 2c on the policies: if we actually treat some area of LLVM as security-critical, we must not only ensure that a reported bug is fixed, but also that the affected component gets additional testing, fuzzing, and hardening afterwards. E.g. for crbug.com/994957 I'd really like to see a fuzz target as a form of regression testing.

--kcc

On Sat, Nov 16, 2019 at 8:23 AM JF Bastien via llvm-dev <llvm-dev at lists.llvm.org> wrote:
> [...]
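For illustration, a minimal libFuzzer-style harness of the kind suggested above as regression testing; ParseUntrustedInput is a hypothetical stand-in for whichever component a given fix touched, not an existing LLVM API:

    // Build (assuming a clang with libFuzzer and ASan support):
    //   clang++ -g -O1 -fsanitize=fuzzer,address repro_fuzzer.cpp -o repro_fuzzer
    // Replay the original report, or keep fuzzing around the fixed code path:
    //   ./repro_fuzzer crash-regression-input
    //   ./repro_fuzzer corpus/
    #include <cstddef>
    #include <cstdint>

    // Hypothetical stand-in for the component a security fix touched; a real
    // harness would call into the fixed LLVM entry point instead.
    static bool ParseUntrustedInput(const uint8_t *data, size_t size) {
      return size >= 4 && data[0] == 'L' && data[1] == 'L' &&
             data[2] == 'V' && data[3] == 'M';
    }

    extern "C" int LLVMFuzzerTestOneInput(const uint8_t *Data, size_t Size) {
      // The harness only drives the code path; the sanitizers do the bug
      // detection. Kept in-tree (and on OSS-Fuzz), it turns a one-off report
      // into continuous regression coverage.
      ParseUntrustedInput(Data, Size);
      return 0;   // non-zero return values are reserved by libFuzzer
    }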
Arnaud De Grandmaison via llvm-dev
2019-Dec-03 13:29 UTC
[llvm-dev] [RFC] LLVM Security Group and Process
Hi JF,

Thanks for putting up this proposal. Regarding your questions, which I answer both as an individual and with an Arm hat:

> Should we create a security group and process?

Yes! We believe it's good to have such a group and a process. It may not be perfect for everyone, but that's way better than nothing, and the current proposal has the necessary bits to evolve and adapt over time to the actual needs.

> Do you agree with the goals listed in the proposal?

Yes!

> At a high level, what do you think should be done differently, and what do you think is exactly right in the draft proposal?

Dealing with security vulnerabilities is often a bit of a mess, done under time pressure, so having a "safe" place to quickly iterate / coordinate amongst interested parties while taking into account upstream LLVM is necessary. We like that the role of this group is to deal with / coordinate security-related issues, not to define an overall security roadmap for LLVM --- that should happen in the open using the standard communication channels. We think this group could work on proof-of-concept fixes or act as a proxy in case the work is done externally, providing (pre-)reviews to ensure the fixes are at the expected LLVM quality level, but the actual code reviews for committing LLVM upstream should be conducted using the standard community process (i.e. no special channel / fast lane for committing).

> Our approach to this issue:
> 1. Are you an LLVM contributor (individual or representing a company)?

I respond here both as an individual contributor and also on behalf of my employer, Arm.

> 2. Are you involved with security aspects of LLVM (if so, which)?

I'm involved with security aspects in general, and have occasionally been involved in some LLVM-specific aspects of security.

> 3. Do you maintain significant downstream LLVM changes?

Yes we do, and a number of other companies using Arm also have downstream changes they maintain.

> 4. Do you package and deploy LLVM for others to use (if so, to how many people)?

In our case, the situation is not as simple as "package & deploy". As a company, we care that all Arm users get the security fixes, whether that is through software (tools or libraries) directly or indirectly shipped by Arm, or through their own tool / library provider, or through the vanilla open-source channel.

> 5. Is your LLVM distribution based on the open-source releases?

Ours is not, but I'm sure there are distributions / users relying on the open-source releases. We thus believe it's important that backports are made and shared whenever possible.

> 6. How often do you usually deploy LLVM?

We usually have about half a dozen releases a year, but our downstream users have their own constraints / agenda. This will of course be different for other people providing Arm tools & libraries.

> 7. How fast can you deploy an update?

On our end, we usually need about 4 weeks; our downstream users have their own constraints / agenda. This will of course be different for other people providing Arm tools & libraries.

> 8. Does your LLVM distribution handle untrusted inputs, and what kind?
> 9. What's the threat model for your LLVM distribution?

Given our large user base and usage models, answering this precisely here and now is impossible.
Kind regards,
Arnaud

From: llvm-dev <llvm-dev-bounces at lists.llvm.org> on behalf of JF Bastien via llvm-dev <llvm-dev at lists.llvm.org>
Reply to: JF Bastien <jfbastien at apple.com>
Date: Saturday 16 November 2019 at 17:23
To: llvm-dev <llvm-dev at lists.llvm.org>
Subject: [llvm-dev] [RFC] LLVM Security Group and Process

> [...]
JF Bastien via llvm-dev
2019-Dec-04 23:36 UTC
[llvm-dev] [RFC] LLVM Security Group and Process
> On Nov 26, 2019, at 6:31 PM, Kostya Serebryany <kcc at google.com> wrote:
>
> > On this list: Should we create a security group and process?
>
> Yes, as long as it is a funded mandate by several major contributors.
> We can't run it as a volunteer group.

I expect that major corporate contributors will want some of their employees involved. Is that the kind of funding you’re looking for? Or something additional?

> Also, someone (this group, or another) should do proactive work on hardening the
> sensitive parts of LLVM, otherwise it will be whack-a-mole.
> Of course, we will need to decide what those sensitive parts are first.
>
> > On this list: Do you agree with the goals listed in the proposal?
>
> In general - yes, although some details worry me.
> E.g. I would try to be stricter with disclosure dates.
>
> "public within approximately fourteen weeks of the fix landing in the LLVM repository"
> is too slow imho. It hurts the attackers less than it hurts the project.
> oss-fuzz will adhere to the 90/30 policy.

This specific bullet followed the Chromium policy: https://chromium.googlesource.com/chromium/src/+/lkgr/docs/security/faq.md#Can-you-please-un_hide-old-security-bugs

Quoting it:

  Our goal is to open security bugs to the public once the bug is fixed and the fix has been shipped to a majority of users. However, many vulnerabilities affect products besides Chromium, and we don’t want to put users of those products unnecessarily at risk by opening the bug before fixes for the other affected products have shipped. Therefore, we make all security bugs public within approximately 14 weeks of the fix landing in the Chromium repository. The exception to this is in the event of the bug reporter or some other responsible party explicitly requesting anonymity or protection against disclosing other particularly sensitive data included in the vulnerability report (e.g. username and password pairs).

I think the same rationale applies to LLVM.

> > On this list: at a high-level, what do you think should be done differently, and what do you think is exactly right in the draft proposal?
>
> The process seems to be too complicated, but no strong opinion here.
> Do we have another example from a project of similar scale?

Yes, the email lists some. WebKit’s process resembles the one I propose, but I’ve expanded some of the points which it left unsaid, i.e. in many cases it has the same content, just spelled out more explicitly.

> > On the Phabricator code review: going into specific details, what do you think should be done differently, and what do you think is exactly right in the draft proposal?
>
> Commented on GitHub vs. crbug.
>
> > On this list: to help understand where you’re coming from with your feedback, it would be helpful to state how you personally approach this issue:
>
> > Are you an LLVM contributor (individual or representing a company)?
>
> Yes, representing Google.
>
> > Are you involved with security aspects of LLVM (if so, which)?
>
> To some extent:
> * my team owns tools that tend to find security bugs (sanitizers, libFuzzer)
> * my team co-owns oss-fuzz, which automatically sends security bugs to LLVM
>
> > Do you maintain significant downstream LLVM changes?
>
> No.
>
> > Do you package and deploy LLVM for others to use (if so, to how many people)?
>
> Not my team.
>
> > Is your LLVM distribution based on the open-source releases?
>
> No.
>
> > How often do you usually deploy LLVM?
> In some ecosystems LLVM is deployed roughly every two to three weeks.
> In others it takes months.
>
> > How fast can you deploy an update?
>
> For some ecosystems we can turn around in several days.
> For others I don't know.
>
> > Does your LLVM distribution handle untrusted inputs, and what kind?
>
> Third-party OSS code that is often pulled automatically.
>
> > What’s the threat model for your LLVM distribution?
>
> Speculating here, as I am not a real security expert myself:
> * A developer gets a bug report and runs clang/llvm on the "buggy" input, compromising the developer's desktop.
> * A major open-source project is compromised and its code is changed in a subtle way that triggers a vulnerability in Clang/LLVM.
>   The open-source code is pulled into an internal repo and is compiled by clang, compromising a machine on the build farm.
> * A vulnerability in a run-time library, e.g. crbug.com/606626 or crbug.com/994957.
> * (???) A vulnerability in an LLVM-based JIT triggered by untrusted bitcode. <2nd-hand knowledge>
> * (???) An optimizer introducing a vulnerability into otherwise memory-safe code (we've seen a couple of these in load & store widening).
> * (???) A deficiency in a hardening pass (CFI, stack protector, shadow call stack) making the hardening ineffective.
>
> My 2c on the policies: if we actually treat some area of LLVM as security-critical,
> we must not only ensure that a reported bug is fixed, but also that the affected component gets
> additional testing, fuzzing, and hardening afterwards.
> E.g. for crbug.com/994957 I'd really like to see a fuzz target as a form of regression testing.

Thanks, this is great stuff!
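A minimal sketch of the kind of libFuzzer regression target mentioned above. The ParseUntrustedInput entry point is hypothetical; a real target would call the affected LLVM or runtime-library API from the original bug report instead.

  // Minimal libFuzzer-style regression target -- a sketch, not an actual LLVM fuzzer.
  // Build with: clang++ -g -O1 -fsanitize=fuzzer,address regression_fuzzer.cpp
  #include <cstddef>
  #include <cstdint>
  #include <string>

  // Hypothetical stand-in for the code that previously crashed on bad input;
  // a real target would exercise the affected LLVM or runtime-library API.
  static bool ParseUntrustedInput(const std::string &Data) {
    // Trivial "header" check, only so the sketch is self-contained and runs.
    return Data.size() >= 4 && Data.compare(0, 4, "LLVM") == 0;
  }

  extern "C" int LLVMFuzzerTestOneInput(const uint8_t *Data, size_t Size) {
    // Feed raw attacker-controlled bytes to the code under test; with ASan
    // enabled, latent memory errors become immediate, reproducible failures.
    ParseUntrustedInput(std::string(reinterpret_cast<const char *>(Data), Size));
    return 0; // Return values other than 0 and -1 are reserved by libFuzzer.
  }

The crashing input from the original report can then be kept as a seed in the corpus, so the regression stays covered as the code evolves.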
Dimitry Andric via llvm-dev
2019-Dec-05 18:45 UTC
[llvm-dev] [RFC] LLVM Security Group and Process
On 15 Nov 2019, at 19:58, JF Bastien via llvm-dev <llvm-dev at lists.llvm.org> wrote:

> We’re looking for answers to the following questions:
>
> On this list: Should we create a security group and process?

Yes, I think that is a good idea.

> On this list: Do you agree with the goals listed in the proposal?

Yes, but I hope we can clarify what "time to investigate" and "timely notification" mean, in more precise terms.

> On this list: at a high-level, what do you think should be done differently, and what do you think is exactly right in the draft proposal?

With regards to the embargo time limits, I think that 90 days is a rather long minimum time. Remember that major LLVM release cycles are just ~180 days! Then again, I realize that some downstream organizations have very elaborate release cycle procedures. I just wish they were shorter for critical security issues.

I also think that fourteen weeks from a commit landing to making the issue public is not really doable. Are we really going to commit something with a message "what this commit does is a secret, see you in 14 weeks"? And then expect nobody to just look at what the changes entail, and derive the actual issue from that? It seems unrealistic, to say the least.

> On the Phabricator code review: going into specific details, what do you think should be done differently, and what do you think is exactly right in the draft proposal?

I'll post the above on the review.

> On this list: to help understand where you’re coming from with your feedback, it would be helpful to state how you personally approach this issue:
>
> Are you an LLVM contributor (individual or representing a company)?

I am both an individual contributor to LLVM, and a member of the FreeBSD community, where I am mostly responsible for maintaining the LLVM fork (well, just plain LLVM with a few hacks and additional patches) in the FreeBSD source tree.

> Are you involved with security aspects of LLVM (if so, which)?

Not especially, though I have been involved with diagnosing quite a number of LLVM crash bugs, of which at least some could possibly be abused in security contexts. That said, I have not been actively searching for any security holes.

> Do you maintain significant downstream LLVM changes?

No, we try to keep the differences between stock LLVM components and the FreeBSD versions as small as possible. We do, however, apply a few minor customizations, and quite a number of post-release patches.
Most of these are to fix issues with compiling the rather large FreeBSD ports collection (roughly 33,000 ports), for a number of different architectures.

> Do you package and deploy LLVM for others to use (if so, to how many people)?

Yes, we build LLVM components such as clang, compiler-rt, libc++, lld, lldb and a number of llvm tools as part of the FreeBSD base system. These are shipped as releases in binary and source code form. Regular snapshots (roughly every week) are also made available.

FreeBSD is used by quite a number of people and organizations, but since the project does not actively track its users, I don't know any hard (or even semi-soft :) numbers. There are also a number of projects downstream from FreeBSD, such as FreeNAS and TrueOS.

> Is your LLVM distribution based on the open-source releases?

Yes, and almost all the patches we apply are from the regular LLVM trunk or master.

> How often do you usually deploy LLVM?

Normally we update to each new major and minor release as they appear, and we are also involved in the testing process before those releases. Since not every bug that affects FreeBSD gets fixed before an LLVM release, we also regularly apply fixes after LLVM releases.

> How fast can you deploy an update?

If an issue affects a released version of FreeBSD, it will be handled by the FreeBSD Security Team. They will investigate the severity of the issue and its impact, verify that the fix(es) apply and have the promised mitigating effect, and obviously determine that there are no negative side effects. Then they will build new binary bits that go out via the binary update system we use, a.k.a. freebsd-update. Building the bits can be done in a few days, but the investigation is harder to pin down time-wise.

> Does your LLVM distribution handle untrusted inputs, and what kind?

The toolchain parts, e.g. clang, lld and lldb, can obviously be used on arbitrary source code, but will seldom be useful for e.g. privilege escalation. Other parts, such as compiler-rt and libc++, are installed as system-wide dynamic libraries. Vulnerabilities in these could affect any application on the system which links to them.

> What’s the threat model for your LLVM distribution?

As described in the previous item: specifically the compiler-rt and libc++ libraries can be dependencies of many applications, some of which are part of the base system and are also security-sensitive. (For example, FreeBSD's device daemon devd <https://www.freebsd.org/cgi/man.cgi?query=devd> is written in C++, and linked to libc++.)

> When providing feedback, it would be great to hear if you’ve dealt with these or other projects’ processes, what works well, and what can be done better.

I haven't interacted with any of the above security groups.
But I have interacted with the FreeBSD Security Team, which has a page here <https://www.freebsd.org/security/>. Their process is fairly straightforward, and most of it is handled via email and a private Bugzilla instance.

-Dimitry
Ed Maste via llvm-dev
2019-Dec-10 15:28 UTC
[llvm-dev] [RFC] LLVM Security Group and Process
Dimitry had a pretty comprehensive reply for FreeBSD, but I want to expand on one thing.

On Thu, 5 Dec 2019 at 13:45, Dimitry Andric <dimitry at andric.com> wrote:

> > On this list: Do you agree with the goals listed in the proposal?
>
> Yes, but I hope we can clarify what "time to investigate" and "timely notification" mean, in more precise terms.

Other replies in the thread touched on this, but I want to again highlight that we should make sure we are clear about what is and is not in scope for the team. Perhaps we should explicitly position this as an "LLVM SIRT" or similar, rather than a "security team", to indicate that the focus is vulnerability response. Issues or discussions that are security-related but do not need to be handled in confidence don't require this process, though folks may still send such issues to a "security team" (as happens on occasion with the FreeBSD security team).
JF Bastien via llvm-dev
2020-Jan-08 05:36 UTC
[llvm-dev] [RFC] LLVM Security Group and Process
Hi folks!

I want to ping this discussion again, now that the holidays are over. I’ve updated the patch to address the comments I’ve received.

Overall it seems the feedback is positive, with some worries about parts that aren’t defined yet. I’m trying to get things started, so not everything needs to be defined yet! I’m glad folks have ideas of *how* we should define what’s still open.

Thanks,

JF
Serge Guelton via llvm-dev
2020-Jan-09 15:54 UTC
[llvm-dev] [RFC] LLVM Security Group and Process
Hi JF,

Answering your question both as an individual and with a red hat:

> Should we create a security group and process?

Yes! That's a good start, and some bits of formalization are likely to be beneficial.

> Do you agree with the goals listed in the proposal?

Yes.

> At a high-level, what do you think should be done differently, and what do you think is exactly right in the draft proposal?

I like the non-intrusive coordination aspect. It also helps to have a group to speak with for responsible disclosure.

The dispatch mechanism to actual developers is unclear. Do they need to be part of the group? How are they contacted, and based on which criteria?

Our approach to this issue:

> 1. Are you an LLVM contributor (individual or representing a company)?

Yes and yes (Red Hat).

> 2. Are you involved with security aspects of LLVM (if so, which)?

In the past: yes, building an obfuscating compiler based on LLVM.
In my current role: yes, trying to implement / catch up with some of the gcc hardening features clang doesn't have (e.g. -fstack-clash-protection, and _FORTIFY_SOURCE improvements recently).

> 3. Do you maintain significant downstream LLVM changes?

We're trying to have as few patches as possible, so that's a small yes.

> 4. Do you package and deploy LLVM for others to use (if so, to how many people)?

Yes (Fedora and RHEL).

> 5. Is your LLVM distribution based on the open-source releases?

Yes, with a larger delay for RHEL.

> 6. How often do you usually deploy LLVM?

At least once for each major and minor update (Fedora), and then backports + RHEL.

> 7. How fast can you deploy an update?

For Fedora, it can be a matter of days. For RHEL it takes longer, but it can be ~1 week.

> 8. Does your LLVM distribution handle untrusted inputs, and what kind?
> 9. What’s the threat model for your LLVM distribution?

I don't think we have something specific to LLVM in the threat model, especially as gcc is the system compiler for both distributions.

--
Serge
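A minimal sketch of the glibc-style _FORTIFY_SOURCE behavior mentioned above. This only illustrates the general mechanism, not clang's in-progress support; the file name and build line are assumptions.

  // fortify_demo.cpp -- sketch of glibc-style _FORTIFY_SOURCE behavior.
  // Build on a glibc system: g++ -O2 -D_FORTIFY_SOURCE=2 fortify_demo.cpp
  #include <stdio.h>
  #include <string.h>

  int main(int argc, char **argv) {
    char buf[8] = "ok";
    if (argc > 1) {
      // With fortification enabled at -O1 or higher, this call is routed to
      // __strcpy_chk(buf, argv[1], sizeof(buf)); copying a string of 8 or
      // more characters aborts at run time with "buffer overflow detected"
      // instead of silently smashing the stack.
      strcpy(buf, argv[1]);
    }
    printf("%s\n", buf);
    return 0;
  }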
Arnaud Allard de Grandmaison via llvm-dev
2020-Jan-24 18:22 UTC
[llvm-dev] [RFC] LLVM Security Group and Process
On behalf of the board, I'd like to acknowledge that, given the growing usage of LLVM in wildly different areas, having some structure or process to address security aspects is important, if not critical, for the health and success of the LLVM project as a whole.

The board will fully support this group, but will not "run" it, as this does not fall within the Foundation's remit.

We believe this is mostly an entity-level matter (companies, distributions, ...), and entities are notoriously slow to react: the group has to interact with their own security groups and their internal processes (SDL, ...), and the people usually active on the mailing list are not necessarily the ones interested in this topic.

Each security advisory is very specific (Spectre is quite different from stack protection), and the spectrum of LLVM projects keeps growing over time (f18, mlir, libc, ...). This makes us think that the people in the group should be well-identified, security-aware, knowledgeable, and trusted contact points in their entities, used to coordinating amongst entities, rather than deep technical experts (the former is mandatory, the latter is nice to have). Technical experts for the specific advisory under work will need to be brought in by the security group on an as-needed basis.

The board believes the real benefit of this group is the coordination of security-fix investigation and deployment amongst the different community entity-members.

Finally, we believe it's best to begin with a small and motivated group, laying the foundations, and then extend it on an as-needed basis.

On behalf of the board, I'd like to invite those who think their entity should care about this proposal to prod the relevant person(s) in their entity to comment on it, preferably on the mailing list or Phabricator, but worst case directly to JF or myself.

Once we have some more comments / feedback, we can think of committing this policy, and forming an initial group.

Kind regards,

Arnaud
JF Bastien via llvm-dev
2020-Jun-11 15:39 UTC
[llvm-dev] [RFC] LLVM Security Group and Process
Hi security-minded folks!

I published this RFC quite a while ago, and have received good feedback from y’all, as well as enthusiasm from a few folks whose distribution would benefit from having a security process for LLVM.

Arnaud and the Board approved the patch <https://reviews.llvm.org/D70326#2005279> a few weeks ago; I’ll therefore commit it in the next few days and start moving the missing parts forward.

Some folks have self-identified as being interested in being part of the original Security Group. Let’s take this opportunity to hear from anyone else who’s interested: please speak up!

Thanks,

JF
Kristof Beyls via llvm-dev
2020-Jun-12 13:50 UTC
[llvm-dev] [RFC] LLVM Security Group and Process
Thank you for progressing this, JF!

As vendor contacts for Arm, Peter Smith (peter.smith at arm.com) and I (kristof.beyls at arm.com) are interested in being part of the Security Group. We’re also interested in helping to form this group.

Thanks,

Kristof
JF Bastien via llvm-dev
2020-Jul-10 22:31 UTC
[llvm-dev] [RFC] LLVM Security Group and Process
Hello security-minded folks!

After lots of good feedback from many people, I’ve finally committed the first version of the LLVM Security Group and Process: https://reviews.llvm.org/D70326

There’s plenty more to improve, but this is a good start!

Thanks,

JF