MITRE Translates Software Quality Issues into a Common Language
October 2016
Topics: Software Engineering, Software (General), Cybersecurity, Computer Security
Other than software engineers and developers, the average person doesn't give much thought to the quality of the software and apps they use every day. Not until they freeze, malfunction, or crash, that is.
Software quality issues can be traced to a wide variety of causes, from simple coding errors to a lack of static and dynamic testing. Yet seemingly small quality errors can have a big—and sometimes devastating—impact. Consider the latest version of the Pokémon Go game with its (frustrating yet innocuous) glitches, crashes, and bugs, which were further exacerbated by an unprecedented number of people trying to interact with Pokémon Go servers. Nobody was hacked; no one stole any data. The problem wasn’t a software security weakness. The problems arose from quality issues that kept the system from delivering on its initial promise.
That's where MITRE's Common Quality Enumeration (CQE) initiative comes in. CQE focuses on the quality issues that are in the gray area, somewhere between pure black (security focused) and pure white (other quality issues). It's a natural outgrowth of our longstanding work in defining, cataloguing, and sharing information about common security issues in software.
As MITRE software systems engineer John Marien explains, "The Common Weakness Enumeration [CWE] lists quality issues that can be exploited. One or more weaknesses can create a vulnerability. Yet beyond these security-relevant weaknesses, there's a large set of quality issues not covered by CWE."
The quality issues not covered by CWE can still significantly affect a system: they drive up testing and maintenance costs, increase in-field failures, and degrade performance. These are the issues covered by CQE.
Defining Software Quality
In soliciting research ideas from MITRE's staff that will benefit our sponsors in the long term, Matt Patron pondered the question, "Why is government software quality, and to some extent software in general, so poor?"
To answer that question, Patron, who leads the software engineering focus area for MITRE's independent research and development program, set out to uncover what was keeping so many software developers from their promise of delivering high-quality software.
But first, his research team had to consider what constitutes "quality."
Examples of International Organization for Standardization (ISO) software quality standards include:
- Reliability: Does the software consistently do what it’s supposed to?
- Performance efficiency: How long does it take to do something, such as load a webpage?
- Maintainability: How much time and effort does it take to keep it running smoothly? (Think of those endless Windows updates!)
- Functional suitability: Is the software appropriate for the tasks the user wants to perform?
Security is also an attribute of quality. As noted earlier, security issues often fall under the scope of common weaknesses, but "nearly three-quarters of all security violations start off with quality violations," Marien says.
Different Languages to Describe the Same Problems
Currently, commercial off-the-shelf automated software analyzers all use their own language or tagging system to describe software quality issues. A developer using multiple tools to analyze software quality (an industry best practice) has a difficult time making sense of the results if each tool speaks a "different language."
For example, say a developer runs three tools on a piece of code, and each tool comes up with 100 quality issues. There’s no way to know if all three tools are revealing the same 100 issues, 300 different issues, or if there is some overlap among the detected issues because there is no standardized language to define a specific quality issue.
Without a standardized system to reconcile the findings from analysis tools, it’s understandable how quality issues slip through the cracks and lead to problems down the road.
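The reconciliation problem can be sketched concretely. In this minimal illustration, the tool names, proprietary finding codes, and "CQE-"-style identifiers are all hypothetical, invented only to show how a shared enumeration turns overlap detection into a simple set operation:

```python
# Hypothetical findings from three analysis tools, each using its own
# proprietary vocabulary for the same underlying quality issues.
tool_a = {"A-NULL-DEREF", "A-DEAD-CODE", "A-RES-LEAK"}
tool_b = {"B-101", "B-204", "B-350"}
tool_c = {"C-nullptr", "C-unreachable", "C-perf-loop"}

# A shared enumeration lets each vendor map its codes to common IDs.
# (These mappings and "CQE-n" IDs are invented for illustration.)
to_common = {
    "A-NULL-DEREF": "CQE-1", "B-101": "CQE-1", "C-nullptr": "CQE-1",
    "A-DEAD-CODE": "CQE-2", "B-204": "CQE-2", "C-unreachable": "CQE-2",
    "A-RES-LEAK": "CQE-3",
    "B-350": "CQE-4",
    "C-perf-loop": "CQE-5",
}

# Translate every tool's findings into the common language.
a = {to_common[f] for f in tool_a}
b = {to_common[f] for f in tool_b}
c = {to_common[f] for f in tool_c}

# Overlap becomes a set question instead of guesswork.
print("distinct issues:", sorted(a | b | c))  # union of all findings
print("flagged by all: ", sorted(a & b & c))  # issues every tool caught
```

Without the common mapping, the three tools appear to report nine unrelated findings; with it, a developer can see there are only five distinct issues, two of which every tool agrees on.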
"That’s the goal of CQE—to come up with a common language so quality issues are described the same way, using the same code words and numbers, in all analysis tools," Patron says.
It Takes Trust to Create a New Language
MITRE’s years of work enumerating common weaknesses (CWE) and vulnerabilities (CVE) laid the groundwork of expertise and trust with a number of software tool creators. This made MITRE the natural choice to develop the content for the missing lingua franca (a common language understood by speakers of different languages) of quality issues. Additionally, as the not-for-profit operator of multiple federally funded research and development centers working in the public interest, MITRE doesn't compete with private industry.
"MITRE is helming the CQE project because automated-tool creators trust us with proprietary data they would not share with each other," Marien says. "They know we won't use it for a competitive advantage."
The data from tool creators is necessary for MITRE to develop a standardized list of quality issues. Or as Patron puts it, "Understanding what they call a 'chicken' and making sure everybody's chicken is the same animal."
Software tool creators have reacted positively to the CQE project. To date, 10 have signed non-disclosure agreements and have provided their data to MITRE for this development effort.
The goal is for commercial and open source tool developers to adopt the CQE standards and integrate them into their products. In turn, software engineers and people developing software on behalf of MITRE’s sponsors will use these tools to detect more issues earlier in the development process, resulting in the delivery of higher quality software.
If the lingua franca of CQE becomes widely adopted, it could have a beneficial impact on the quality of every developer's software—whether commercial or open source—making software better and safer for both the public and the government.
Now that’s sure to get the attention of the average person. Parlez-vous CQE?
—by Lisa Pacitto