James Y Knight via llvm-dev
2019-Nov-19 14:59 UTC
[llvm-dev] [RFC] LLVM Security Group and Process
On Mon, Nov 18, 2019 at 6:00 PM JF Bastien via llvm-dev <llvm-dev at lists.llvm.org> wrote:

> On Nov 18, 2019, at 2:42 PM, David Blaikie via llvm-dev <llvm-dev at lists.llvm.org> wrote:
>
>> On Mon, Nov 18, 2019 at 2:31 PM Robinson, Paul via llvm-dev <llvm-dev at lists.llvm.org> wrote:
>>
>>> One problem with defining away “arbitrary code execution in Clang” as “not security relevant” is that you are inevitably making probably-wrong assumptions about the set of all possible execution contexts.
>>>
>>> Case in point: Sony, being on the security-sensitive side these days, has an internal mandate that we incorporate CVE fixes into the open-source products we deliver. As it happens, we deliver some GNU Binutils tools with our PS4 toolchain. There are CVEs against Binutils, so we were mandated to incorporate those patches. “?” I said, wondering how some simple command-line tool could have a CVE. Well, it turns out that lots of the Binutils code is packaged in libraries, and some of those libraries can be used by (apparently) web servers, so through some chain of events it would be possible for a web client to induce Bad Stuff on a server (hopefully no worse than a DoS, but that’s still a security issue). Ergo, a security-relevant patch in GNU Binutils.
>>>
>>> For *my* product’s delivery, the CVEs would be irrelevant. (Who cares if some command-line tool can crash when you feed it a bogus PE file; clearly not a security issue.) But for someone *else’s* product, it *would* be a security issue. You can be sure that the people responsible for Binutils dealt with it as a security issue.
>>>
>>> So, yeah, arbitrary code execution in Clang, or more obviously in the JIT, is a potential security issue. Clangd probably should worry about this kind of stuff too. And we should be ready to handle it that way.
>>
>> The reality is that clang is a long way from being hardened in that way (pretty much every crash or assertion failure on invalid input is probably a path to arbitrary code execution if someone wanted to try hard enough), and I don't think the core developers are currently able, interested, or motivated enough to do the work to meet that kind of need -- so I tend to agree with James that it's better that this be clearly specified as a non-goal than to suggest some kind of "best effort" behavior here.
>
> I’d rephrase this: it’s not currently something that LLVM developers have tried to address, and it’s known to be insecure. Were someone to come in and commit a significant amount of work, it would definitely be something we could support.
>
> I don’t want to say “non-goal” without explaining *why* that’s the case, and what can be done to change things. In other words, if the security group is willing to call something security-related, then it is. Whoever is in that group has to put in the effort to address an issue. Until such people are part of the group, the group should respond to issues of this kind as “out of scope because <good reason>”.
>
> I agree we should document those reasons as we encounter them! I just don’t think we should try to enumerate them right now. We’ll have a transparency report, and that’s a great opportunity to revisit what we think is / isn’t in scope, and call it out.

I think that's a problematic way to go about things, because the security group has limited membership and its discussions are private and limited -- even if there's limited visibility after the fact.
That is certainly a necessary and desirable property when working to resolve undisclosed vulnerabilities, but it is not when making general decisions about what we as a project want to claim to support. Of course, we will all need to trust the people on the security group to make certain decisions on a case-by-case basis, but the discussion about what we want to be security-supported should be -- must be -- public. This is not simply about deciding how to resolve an *issue* that's reported externally; it's about the entire process. The project needs to be on the same page as to what our security boundaries are; otherwise the security group will just end up doing CVE-issue-response theater.

And I do agree that if someone were to come in and put in the significant amounts of work to make LLVM directly usable in security-sensitive places, then we could support that. But none of that should have anything to do with the security group or its membership. All of that work and discussion, and the decision to support it in the end, should be done as a project-wide discussion and decision, just like anything else that's worked on.
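[Editorial note: as a concrete illustration of the library exposure Paul describes above -- the object-file parsers behind the Binutils command-line tools ship in libbfd, which other software, including server-side software, can link against directly. Below is a minimal sketch, assuming a hypothetical service handler for user-uploaded object files; the handler and its name are invented for illustration, while the libbfd calls are the library's standard C API.]

/* Hypothetical server-side handler that inspects an uploaded object
 * file using libbfd -- the same parsing code behind tools like
 * objdump.  Any memory-safety bug in those parsers is now reachable
 * by a remote client.  Build with: cc demo.c -lbfd
 */
#define PACKAGE "demo"            /* bfd.h refuses to compile without these */
#define PACKAGE_VERSION "1.0"
#include <bfd.h>
#include <stdio.h>

/* Returns 1 if the uploaded file parsed as a valid object file. */
int inspect_upload(const char *path_to_uploaded_file)
{
    bfd_init();  /* once per process, in real code */
    bfd *abfd = bfd_openr(path_to_uploaded_file, NULL /* auto-detect target */);
    if (abfd == NULL)
        return 0;
    /* bfd_check_format() runs libbfd's format parsers over the
     * attacker-controlled bytes; historical Binutils CVEs live here. */
    int ok = bfd_check_format(abfd, bfd_object);
    if (ok)
        printf("format: %s\n", bfd_get_target(abfd));
    bfd_close(abfd);
    return ok;
}

[This is why the same fix can be a non-issue for a command-line tool and a CVE for a server: it depends entirely on what feeds the parser.]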
JF Bastien via llvm-dev
2019-Nov-19 15:46 UTC
[llvm-dev] [RFC] LLVM Security Group and Process
> On Nov 19, 2019, at 6:59 AM, James Y Knight <jyknight at google.com> wrote:
>
> [...]
> I think that's a problematic way to go about things, because the security group has limited membership and its discussions are private and limited -- even if there's limited visibility after the fact.

It has full visibility after the fact, not limited.

> That is certainly a necessary and desirable property when working to resolve undisclosed vulnerabilities, but it is not when making general decisions about what we as a project want to claim to support. Of course, we will all need to trust the people on the security group to make certain decisions on a case-by-case basis, but the discussion about what we want to be security-supported should be -- must be -- public.

What we’re really discussing here is how we go from today’s status (nothing is treated as security) to where we want to be (the right things are treated as security). I think we agree that what we have today isn’t good, and we also agree that we eventually want to get to a point where some issues are treated as security issues. We also agree that the criteria for what is treated as security should be documented. I’ll gladly add a section to that effect to the documentation; it is indeed missing, so thanks for raising the issue.

> This is not simply about deciding how to resolve an *issue* that's reported externally; it's about the entire process. The project needs to be on the same page as to what our security boundaries are; otherwise the security group will just end up doing CVE-issue-response theater.

I definitely don’t want theater.

> And I do agree that if someone were to come in and put in the significant amounts of work to make LLVM directly usable in security-sensitive places, then we could support that. But none of that should have anything to do with the security group or its membership. All of that work and discussion, and the decision to support it in the end, should be done as a project-wide discussion and decision, just like anything else that's worked on.

Here’s where we disagree: how to get from nothing being security to the right things being security.

I want to put that power in the hands of the security group, because they’d be the ones with experience handling security issues, defining security boundaries, fixing issues in those boundaries, etc. I’m worried that the community as a whole would legislate things as needing to be secure, without anyone in the security group able or willing to make it so. That’s an undesirable outcome because it sets them up for failure.

Of course neither of us is saying that the community should dictate to the security group, nor that the security group should dictate to the community. It should be a discussion. I agree with you that, in the transition period from no security to right security, there might be cases where the security group disappoints the community, behind temporarily closed doors. There might be mistakes; an issue which should have been treated as security-related won’t be. I would rather trust the security group, expect that it’ll do outreach when it feels unqualified to handle an issue, and fix any mistakes it makes if that happens. Doing so is better than where we are today.

And again, I expect that the security group will document what is treated as security over time. The transparency report ensures this, but as I said above we should have documentation to that effect as well (I’ll add it).

Does this help mitigate your concerns?
James Y Knight via llvm-dev
2019-Nov-25 15:36 UTC
[llvm-dev] [RFC] LLVM Security Group and Process
On Tue, Nov 19, 2019 at 10:46 AM JF Bastien <jfbastien at apple.com> wrote:

>> And I do agree that if someone were to come in and put in the significant amounts of work to make LLVM directly usable in security-sensitive places, then we could support that. But none of that should have anything to do with the security group or its membership. All of that work and discussion, and the decision to support it in the end, should be done as a project-wide discussion and decision, just like anything else that's worked on.
>
> Here’s where we disagree: how to get from nothing being security to the right things being security.
>
> I want to put that power in the hands of the security group, because they’d be the ones with experience handling security issues, defining security boundaries, fixing issues in those boundaries, etc. I’m worried that the community as a whole would legislate things as needing to be secure, without anyone in the security group able or willing to make it so. That’s an undesirable outcome because it sets them up for failure.
>
> Of course neither of us is saying that the community should dictate to the security group, nor that the security group should dictate to the community. It should be a discussion. I agree with you that, in the transition period from no security to right security, there might be cases where the security group disappoints the community, behind temporarily closed doors. There might be mistakes; an issue which should have been treated as security-related won’t be. I would rather trust the security group, expect that it’ll do outreach when it feels unqualified to handle an issue, and fix any mistakes it makes if that happens. Doing so is better than where we are today.

My worry is actually the inverse -- that there may be a tendency to treat more issues as "security" than should be. When some bug is reported via the security process, I suspect there will be a default presumption towards using the security process to resolve it, with all the downsides that go along with that.

What I want is for it to be clear that certain kinds of issues are currently explicitly out of scope: e.g., crashes, code execution, etc. resulting from parsing or compiling untrusted C source code with Clang, or parsing/compiling untrusted bitcode with LLVM, or linking untrusted files with LLD. These sorts of things should not, currently, be treated with a "security" mindset. They're *bugs*, which should be fixed, but if something's security depends on LLVM being able to securely process untrusted inputs, sorry, that's not reasonable. (And yes, that's maybe sad, but it is the reality right now.) Until someone is willing to put in the significant effort to make those processes generally secure for use on untrusted inputs, handling individual bug reports of this kind via a special process is not going to realistically improve security. (A concrete sketch of such untrusted-input processing follows after this message.)

Furthermore, if people disagree with the above paragraph, I'd like that discussion to be had in the open ahead of time, rather than in private after the first time such an issue is reported via a defined security process.

It feels like you want the security team to be two different things:

1. A way to privately report security issues to LLVM, and a group of people to privately work on fixing such issues, for a coordinated release.
2. A group of people working generally on defining or improving security properties of LLVM.
I don't think these two need or should be linked, though some of the same people might be involved in both.
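[Editorial note: a minimal sketch of the Clang exposure James describes -- an illustration added for this write-up, not code from the thread. It assumes a hypothetical clangd-like service that parses an untrusted in-memory buffer in-process; the service context and function name are invented, while the libclang calls are its public C API.]

/* Hypothetical in-process handler for untrusted source text, using
 * libclang.  All of Clang's lexing, parsing, and semantic analysis
 * run in this process, directly over attacker-controlled bytes, so
 * any parser crash is attacker-reachable.
 * Build with: cc demo.c -lclang
 */
#include <clang-c/Index.h>
#include <string.h>

/* Returns 1 if the untrusted buffer parsed, 0 otherwise. */
int parse_untrusted_source(const char *untrusted_buf)
{
    CXIndex index = clang_createIndex(/*excludeDeclarationsFromPCH=*/0,
                                      /*displayDiagnostics=*/0);
    /* Feed the attacker-controlled text as a virtual, in-memory file. */
    struct CXUnsavedFile file;
    file.Filename = "input.c";
    file.Contents = untrusted_buf;
    file.Length = (unsigned long)strlen(untrusted_buf);

    CXTranslationUnit tu = clang_parseTranslationUnit(
        index, "input.c", /*command_line_args=*/NULL, 0,
        &file, 1, CXTranslationUnit_None);

    int ok = (tu != NULL);
    if (tu)
        clang_disposeTranslationUnit(tu);
    clang_disposeIndex(index);
    return ok;
}

[James's position, as stated above, is that today this pattern is a source of ordinary bugs rather than a supported security boundary: the fixes matter, but handling each such crash through a private vulnerability process would not by itself make the pattern safe.]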