Property-Based Testing to Verify an Environment



Research Goal

The goal of this project is to provide support for an effort to improve the security of NASA's computing environment for the Mars Lander Project. Our focus will be to update the Tester's Assistant to: (1) include specifications that describe potential vulnerabilities in the environment; (2) operate on programs written in the languages critical to the correct functioning of the NASA environment; and (3) perform run-time checking of unsliced programs as well as static analysis of sliced programs. The effectiveness of this tool will also be evaluated.

Approach

(1) Determine the security policy and potential vulnerabilities
A security policy says what is and is not allowed. A vulnerability is a condition that enables someone (the attacker) to violate the security policy. Security policies vary from site to site. For example, consider a race condition in UNIX software that is to be run with root privileges. If all users of that system have root privileges, then the race condition is not a vulnerability, because it gives users an access they already enjoy. Usually, however, UNIX systems have users who are not authorized to work as root. If they exploit the race condition to gain root access, then the security policy (which says that only users authorized to acquire root privileges may acquire them) has been violated.
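To make the race condition concrete, the sketch below shows the classic time-of-check-to-time-of-use (TOCTOU) flaw in C. The function and file names are hypothetical, chosen purely for illustration; the example is not drawn from the NASA environment.

    #include <fcntl.h>
    #include <string.h>
    #include <unistd.h>

    /* Hypothetical setuid-root code with a TOCTOU race: between the
     * access() check and the open(), an attacker can replace the file
     * with a symbolic link to a root-only file, violating the policy. */
    int unsafe_save(const char *path, const char *data)
    {
        if (access(path, W_OK) != 0)   /* check: may the real user write? */
            return -1;
        int fd = open(path, O_WRONLY); /* use: the race window is here */
        if (fd < 0)
            return -1;
        (void)write(fd, data, strlen(data));
        close(fd);
        return 0;
    }

Whether this flaw is a vulnerability depends, as noted above, on whether the site's policy lets all users act as root.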

In order to develop the property-based testing tool to analyze programs for potential security vulnerabilities, we must understand exactly what a vulnerability is in the NASA environment. To do this, we will assist JPL in creating a summary of relevant security policy items, and present examples of possible security violations and their consequences. We will summarize our findings in a matrix of potential vulnerabilities and their consequences when exploited, ranked according to seriousness.

One ancillary issue is the mapping of a high-level security policy to an implementation level. Continuing the above example, the policy states a rule at an abstract level ("only users authorized to acquire root privileges may acquire root privileges"). At the implementation level, this translates into several rules ("buffer overflows are a vulnerability," "failure to check arguments is a vulnerability," and so forth). Unfortunately, testing for security flaws requires the implementation-level statement of the policy. Hence the vulnerabilities must be stated at that level to prepare for the next step.
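The gap between the two levels is easiest to see in code. The fragment below, a hypothetical example of our own rather than code from the environment, shows the abstract rule reduced to one implementation-level rule: an unchecked copy into a fixed-size buffer.

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical setuid utility. The abstract policy violation
     * ("an unauthorized user gains root") appears at the implementation
     * level as an unchecked copy into a fixed-size buffer. */
    void vulnerable(const char *arg)
    {
        char buf[64];
        strcpy(buf, arg);             /* overflow: no bounds check */
        printf("argument: %s\n", buf);
    }

    void fixed(const char *arg)
    {
        char buf[64];
        strncpy(buf, arg, sizeof(buf) - 1);
        buf[sizeof(buf) - 1] = '\0';  /* bounded copy, always terminated */
        printf("argument: %s\n", buf);
    }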

(2) Plan the testing for the vulnerabilities
Given the statement of vulnerabilities, we must test the software composing the environment to determine which programs, if any, have those vulnerabilities. We must focus on the software as configured in the NASA environment, rather than in arbitrary environments. We must also focus on the vulnerabilities of interest.

We will survey existing tools to determine if, and how, they can be used to locate these vulnerabilities. Our intent is to provide a mapping between vulnerabilities and tools, to give guidance on stopgap measures. We will also determine which vulnerabilities will require new tools. Typically, the tools require some configuration and/or make assumptions about the environment; our survey will include these constraints and assumptions as well.

(3) Develop prototype testing tool for NASA environment
This step develops a prototype testing mechanism for the NASA environment. We will use a low-level specification language to specify the vulnerabilities in the matrix, and put the specifications into a library for future use. Our goal here is to provide an implementation-level specification of the vulnerabilities suitable for testing.
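The Tester's Assistant specifies properties in its own low-level specification language; as a rough illustration of the level of detail such a specification must capture, the sketch below expresses one hypothetical property (a setuid program must permanently drop root privileges before running a user-supplied command) directly as a C run-time check. The function name and the property itself are our illustrative choices, not items from the vulnerability matrix.

    #include <assert.h>
    #include <unistd.h>

    /* Hypothetical implementation-level property, written as a C
     * run-time check rather than in the tool's specification language:
     * before executing any user-supplied command, both the real and
     * effective UIDs must be non-root. */
    void check_privileges_dropped(void)
    {
        assert(getuid() != 0 && geteuid() != 0);
    }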

We will then take the property-based testing work and apply it to the NASA environment. This will involve porting the Tester's Assistant to, or more likely rewriting it for, the systems being used and the languages of interest. Success will be determined by the property-based testing tool performing static analysis of several programs of interest. Some known flaws will be injected into these programs to provide a baseline of measurement. Once this is completed, we will develop and prototype an engine to perform testing on programs in a production environment.
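One common way to prototype run-time checking of unmodified binaries on UNIX is library interposition. The sketch below is our illustration of that technique, not the project's actual engine: it wraps open() via LD_PRELOAD and flags violations of one sample property (creating a file in /tmp without O_EXCL).

    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <fcntl.h>
    #include <stdarg.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/types.h>

    /* Illustrative run-time checking via LD_PRELOAD interposition.
     * Build: gcc -shared -fPIC wrap.c -o wrap.so -ldl
     * Use:   LD_PRELOAD=./wrap.so program-under-test */
    typedef int (*open_fn)(const char *, int, ...);

    int open(const char *path, int flags, ...)
    {
        open_fn real_open = (open_fn)dlsym(RTLD_NEXT, "open");
        mode_t mode = 0;
        if (flags & O_CREAT) {
            va_list ap;
            va_start(ap, flags);
            mode = (mode_t)va_arg(ap, int);
            va_end(ap);
        }
        /* Sample property: creating a file in /tmp without O_EXCL is
         * flagged as a potential race-condition vulnerability. */
        if ((flags & O_CREAT) && !(flags & O_EXCL) &&
            strncmp(path, "/tmp/", 5) == 0)
            fprintf(stderr, "property violation: unsafe create of %s\n",
                    path);
        return real_open(path, flags, mode);
    }

Interposition of this kind needs no source code or slicing, which makes it a plausible stopgap for checking unsliced programs while the full engine is built.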

(4) Perform study of prototype to evaluate its effectiveness
At this point we will examine the use of the run-time testing tool to determine how much its use degrades performance (if at all), and how much extra overhead the testing adds to the system in general. We will also perform "friendly" penetration attacks on the system and determine whether the run-time testing environment detects the exploitation of the vulnerabilities.
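A simple way to quantify the performance cost is to time identical workloads with and without the run-time checks enabled. The minimal harness below is one illustrative way to do this, an assumption about method rather than a committed part of the study design.

    #include <stdio.h>
    #include <sys/time.h>
    #include <sys/wait.h>
    #include <unistd.h>

    /* Minimal overhead-measurement harness (illustrative): runs a
     * command and reports wall-clock time. Run once with and once
     * without the checks (e.g., with and without the LD_PRELOAD
     * wrapper above) and compare the results. */
    int main(int argc, char *argv[])
    {
        if (argc < 2) {
            fprintf(stderr, "usage: %s command [args...]\n", argv[0]);
            return 1;
        }
        struct timeval start, end;
        gettimeofday(&start, NULL);
        pid_t pid = fork();
        if (pid == 0) {
            execvp(argv[1], &argv[1]);
            _exit(127);
        }
        waitpid(pid, NULL, 0);
        gettimeofday(&end, NULL);
        printf("elapsed: %.3f s\n",
               (end.tv_sec - start.tv_sec) +
               (end.tv_usec - start.tv_usec) / 1e6);
        return 0;
    }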

(5) Write up the results, lessons learned, open questions, and alternate approaches
We will review all work done, describe the lessons learned, and examine any alternate approaches in light of what we did. In particular, the results of step 4 will provide insight into our approach, and into how run-time testing can be used to augment static testing and analysis. Finally, any research such as this raises questions that cannot be answered in the given time frame; we shall discuss these questions and their importance.

(6) Refine the prototype testing tool
Based upon the results of step 4, the analysis in step 5, and the experience gained in a year's use of the testing tool, we will make appropriate and requested modifications to the tool to improve its effectiveness and usability. We will also augment the library of vulnerability specifications as needed.

Funding

NASA / JPL


Contact person:
Matt Bishop
bishop@cs.ucdavis.edu

last modified 3/29/02