...

  • We support the notion of identifying critical components.  As our software is without cost and can be downloaded by anybody, we have no way of knowing how widely our projects are used.  This means that, by necessity, determining whether a component is critical can only be done either by automated means, which will always be incomplete and imperfect, or more manually, such as by identifying critical products first and then building an inventory of the components those products embed (one use of such an inventory, checking it against known vulnerabilities, is sketched at the end of this section).  This problem is not unique to the ASF, so it makes sense for there to be industry standards and best practices.  We are prepared to participate in these activities.
    • The owner of a system is best placed to determine the criticality of that system and the degree to which individual components contribute to that criticality.

    • Criticality depends on usage. We need to avoid solutions that create burdens for all system owners using a software component when the component is only critical for some systems.

  • We eagerly welcome audits and fixes from any source.  We have a process defined for doing so: https://www.apache.org/security/.  Our one caution is that we have prior experience with audits, and they are not productive when they generate large numbers of false positives.  See https://s.apache.org/fhoji.  It is worth noting that many automated means of performing an audit are better suited to detecting bugs inside a single component than to identifying a vulnerability composed of a combination of intentional features, such as the one we saw with log4j.  It is better when these audits are performed by skilled people who can accurately identify actual issues, produce fixes, and thereby reduce the burden on each project.
  • It is particularly frustrating to us that we can produce fixes for critical issues in a matter of days or weeks, yet not have those fixes picked up for months or years, despite having a common and industry-standard process for disclosing vulnerabilities and fixes.  This is the problem that we see as most pressing.
    • "We also find that a typical zero-day attack lasts 312 days on average and that, after vulnerabilities are disclosed publicly, the volume of attacks exploiting them increases by up to 5 orders of magnitude." – https://dl.acm.org/doi/10.1145/2382196.2382284
    • This is a really hard problem to solve.  As with criticality, whether a system is exposed to a particular vulnerability depends on usage.  We don't want to introduce something that impacts all systems when only some are vulnerable.
    • Adding automatic update checks could work for products but not for libraries.  Even then, there are plenty of system owners (including most of the US Federal Government) who don't want their software phoning home for any reason, so any such check would have to be strictly opt-in.
    • Logging regular warnings once the software reaches a certain age requires assumptions about exposure to vulnerabilities that are likely to be wrong for a reasonable proportion of systems.  And, again, this is only really workable for products.  A sketch covering both this and the opt-in update check above appears at the end of this section.
    • The most viable approach is likely to be to incentivize system owners to ensure that their systems are not exposed to vulnerabilities through the use of a product that:
      • has known vulnerabilities;
      • has no established process for handling vulnerability reports and publishing details of confirmed vulnerabilities; and/or
      • has reached End of Life without effective mitigation for the associated security risks.

The incentive has to be some form of liability (civil and/or criminal) if a system owner does any of the above. Such liability may already exist, for example via the FTC. Expanding the funding of the organizations tasked with enforcement is one possible option.
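
As an illustration, the following is a minimal sketch, assuming an inventory of embedded components has already been produced for a critical product as described earlier, of checking that inventory against a database of known vulnerabilities.  It queries OSV.dev; the inventory contents are illustrative and the choice of database is an assumption, not part of any ASF process.  Note that such a check can only find vulnerabilities that are already known and disclosed; it could not have found the log4j issue before its disclosure.

    # Minimal sketch: check an (assumed) component inventory against the
    # OSV.dev database of known, disclosed vulnerabilities.
    import json
    import urllib.request

    # Illustrative inventory: (ecosystem, package name, version) triples.
    INVENTORY = [
        ("Maven", "org.apache.logging.log4j:log4j-core", "2.14.1"),
    ]

    def known_vulnerabilities(ecosystem, name, version):
        """Return OSV identifiers of known vulnerabilities for one component."""
        query = json.dumps({
            "version": version,
            "package": {"name": name, "ecosystem": ecosystem},
        }).encode("utf-8")
        request = urllib.request.Request(
            "https://api.osv.dev/v1/query",
            data=query,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(request) as response:
            result = json.load(response)
        return [vuln["id"] for vuln in result.get("vulns", [])]

    for ecosystem, name, version in INVENTORY:
        ids = known_vulnerabilities(ecosystem, name, version)
        status = ", ".join(ids) if ids else "no known vulnerabilities recorded"
        print(f"{name} {version}: {status}")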

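Similarly, the following sketch illustrates the two product-side mechanisms discussed above: an update check that is strictly opt-in, falling back to an age-based warning whose threshold is necessarily an arbitrary guess about exposure.  The endpoint, environment variable, version, and threshold are all hypothetical.

    # Minimal sketch: opt-in update check with an age-based fallback warning.
    import json
    import logging
    import os
    import urllib.request
    from datetime import date, timedelta

    CURRENT_VERSION = "1.2.3"             # illustrative version string
    BUILD_DATE = date(2021, 3, 12)        # assumed to be stamped at release time
    WARN_AFTER = timedelta(days=2 * 365)  # arbitrary "old software" threshold

    def warn_if_stale():
        # Age is a poor proxy for vulnerability exposure, and a library
        # (as opposed to a product) has no natural place to log this at all.
        age = date.today() - BUILD_DATE
        if age > WARN_AFTER:
            logging.warning("This release is %d days old; newer releases may "
                            "contain security fixes.", age.days)

    def check_for_updates():
        # Strictly opt-in: many system owners forbid software phoning home.
        if os.environ.get("EXAMPLE_UPDATE_CHECK") != "enabled":
            warn_if_stale()
            return
        # Hypothetical project endpoint returning {"latest": "x.y.z"}.
        with urllib.request.urlopen("https://example.org/latest.json") as resp:
            latest = json.load(resp)["latest"]
        if latest != CURRENT_VERSION:
            logging.warning("Version %s is available (this is %s).",
                            latest, CURRENT_VERSION)

    check_for_updates()
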
Further reading