Abstract:
For optimum success, static-analysis tools must balance the ability to find important defects against the risk of false positive reports. A human must interpret each reported warning to determine whether any action is warranted, and the criteria for judging warnings can vary significantly depending on the analyst's role, the security risk, the nature of the defect, the deployment environment, and many other factors. These considerations mean that it can be difficult to compare tools with different characteristics, or even to arrive at the optimal way to configure a single tool. This article presents a model for computing the value of using a static-analysis tool. Given inputs such as engineering effort, the cost of an exploited security vulnerability, and some easily measured tool properties, the model lets users make rational decisions about how best to deploy static analysis.
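As a rough illustration of the kind of cost-benefit reasoning such a model formalizes, the sketch below computes the net expected value of a deployment from a handful of inputs. The variable names, default figures, and the formula itself are illustrative assumptions, not the model defined in this article.

```python
# Illustrative sketch only: a simple expected-value calculation in the spirit of
# the model described in the abstract. The formula and all parameters below are
# assumptions made for illustration, not the paper's actual model.

def deployment_value(
    warnings: int,               # total warnings the tool reports
    true_positive_rate: float,   # fraction of warnings that are real defects
    triage_cost: float,          # engineering cost to review one warning
    fix_cost: float,             # engineering cost to fix one real defect
    exploit_cost: float,         # expected loss if a shipped defect is exploited
    exploit_probability: float,  # chance that a shipped defect is exploited
) -> float:
    """Return the net expected value of running the tool on one codebase."""
    true_positives = warnings * true_positive_rate
    benefit = true_positives * exploit_probability * exploit_cost  # losses avoided
    cost = warnings * triage_cost + true_positives * fix_cost      # effort spent
    return benefit - cost

# Example: 200 warnings, 15% true positives, $50 to triage each warning,
# $500 to fix each real defect, $100,000 per exploited vulnerability,
# 5% chance that an unfixed defect is exploited.
print(deployment_value(200, 0.15, 50.0, 500.0, 100_000.0, 0.05))
# A positive result suggests the deployment pays for itself under these assumptions.
```

Even this toy calculation shows why the trade-off is sensitive to context: raising the triage cost or lowering the true-positive rate can flip the result from positive to negative, which is why a principled model is needed to compare tools or configurations.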
Keywords:
program diagnostics; security of data; false positive reports; security risk; security vulnerability; static-analysis tool deployment; algorithm design and analysis; analytical models; approximation algorithms; computer security; human factors; privacy; testing; software quality; software security; static analysis