Semi-Automatic Annotation of Natural Language Vulnerability Reports

Yan Wu, Robin Gandhi, Harvey Siy
Copyright: © 2013 |Pages: 24
DOI: 10.4018/jsse.2013070102

Abstract

Those who do not learn from past vulnerabilities are bound to repeat them. Consequently, there have been several research efforts to enumerate and categorize the software weaknesses that lead to vulnerabilities. The Common Weakness Enumeration (CWE) is a community-developed dictionary of software weakness types and their relationships, designed to consolidate these efforts. Yet aggregating and classifying natural language vulnerability reports with respect to weakness standards remains a painstaking manual effort. In this paper, the authors present a semi-automated process for annotating vulnerability information with semantic concepts that are traceable to CWE identifiers. The authors present an information-processing pipeline that parses natural language vulnerability reports. The resulting terms are used to learn the syntactic cues in these reports that indicate corresponding standard weakness definitions. Finally, the results of multiple machine learning algorithms are compared individually as well as collectively to semi-automatically annotate new vulnerability reports.
Article Preview

1. Introduction

Software design and development is an error-prone process that inadvertently introduces weaknesses which can later be exploited for malicious purposes. A weakness is the result of a flaw that creates the pre-conditions necessary for the introduction of vulnerabilities within that software. Weaknesses in software present opportunities for malicious attacks that violate security policy expectations. Common sources of weaknesses are validation errors, domain errors, and serialization/aliasing errors (Landwehr et al., 1994). Vulnerabilities and weaknesses are related but distinct: only exploitable (or already exploited) weaknesses can be considered vulnerabilities. Weaknesses may remain in software without ever causing problems, provided they stay un-exploitable by an attacker. In this work we focus on annotating reports that document exploitable and publicly known vulnerabilities with corresponding standard weakness definitions. Several vulnerability reports in widely used software packages are cross-referenced from the National Vulnerability Database (NVD) (NIST, 2012), which maintains the Common Vulnerabilities and Exposures (CVE) list (MITRE, 2011b).

Most software development projects have processes, tools, and techniques to document and track reported vulnerabilities. This information is recorded in existing project repositories such as change logs in version control systems, entries in bug tracking systems, and communication threads in mailing lists. Because these repositories were created for different purposes, extracting useful vulnerability-related information from them is not straightforward. In large projects, these repositories store vast amounts of data, and as a result, the relevant information is buried in a mass of irrelevant data. Natural language descriptions and discussions do not provide mechanisms to aggregate vulnerability artifacts from multiple sources or to pinpoint the actual software fault and the affected software elements.

Two key problems exist: information overload of vulnerability information in software repositories and, paradoxically, a lack of techniques to systematically annotate and analyze this data. The large volume of data in software repositories and other project information sources makes it difficult to locate the artifacts needed to identify, track, and study previously recorded vulnerabilities. This condition is compounded by the fact that the complete record of information is scattered over several separate systems with different information schemas and natural language descriptions. Even if the information is found, a significant amount of work is needed to reconstruct the trail of artifacts that helps one understand the actual vulnerability. Thus, the information within software project repositories is not in a representation that can be easily extracted and analyzed for vulnerability-related questions.

On the other hand, while a significant body of knowledge exists for classifying and categorizing software weaknesses, it is hardly applied in the context of a software project. From the information assurance community, there have been several attempts to catalog the knowledge and information gathered from past vulnerabilities in order to avoid future occurrences. The Common Vulnerabilities and Exposures (CVE) (MITRE, 2011b) provides an identification scheme that enables the collection and recording of known vulnerabilities from software development organizations, coordination centers and individuals. This facilitates review by experts which in turn leads to the generalization of similar vulnerabilities into higher-order software weaknesses. Several such software weakness categorizations have been attempted. Among these, the Common Weakness Enumeration (CWE) (MITRE, 2011c) is a community developed dictionary of such software weakness types that attempts to encompass other weakness categorization efforts.
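To make the annotation task concrete, the following is a minimal, hypothetical sketch of the kind of mapping the paper automates: matching syntactic cues in a CVE description to candidate CWE identifiers. The hand-picked cue table below stands in for the learned classifiers described in the abstract; the cue patterns and the `annotate` function are illustrative assumptions, not the authors' implementation.

```python
import re

# Hypothetical cue table: syntactic indicators -> CWE identifiers.
# In the paper these cues are learned from parsed reports; here they
# are hand-picked examples for illustration only.
CUE_TO_CWE = {
    r"buffer overflow": "CWE-120",
    r"sql injection": "CWE-89",
    r"cross-site scripting|xss": "CWE-79",
    r"format string": "CWE-134",
}

def annotate(report: str) -> list[str]:
    """Return candidate CWE ids whose cues appear in the report text."""
    text = report.lower()
    return sorted({cwe for pattern, cwe in CUE_TO_CWE.items()
                   if re.search(pattern, text)})

cve_description = (
    "Stack-based buffer overflow in the web interface allows "
    "remote attackers to execute arbitrary code."
)
print(annotate(cve_description))  # ['CWE-120']
```

A rule table like this only covers cues someone thought to write down; the semi-automated process in the paper instead learns such indicators from annotated examples, which is what lets it generalize to phrasings not seen before.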
