
Projects Featured in Computing and Software:


Adaptation Policies for Managing Configurable Software

Evolving Software Through Metrics

High Productivity Computing Systems

Optical Interconnects for High-Productivity Computing Systems

Predictable End-to-End Timeliness in Network Centric Warfare Systems

Research Computing Facility (RCF)

Server Consolidation Using VMware Virtualization


2006 Technology Symposium > Computing and Software

Computing and Software

The mission of the Computing and Software Area Team is to maintain awareness of developments outside MITRE related to the technologies of computer architecture and engineering, computer science, software engineering, and the software profession.


Adaptation Policies for Managing Configurable Software

Lashon Booker, Principal Investigator

Location(s): Washington

Problem
Software applications must cope with computing environments in which resource constraints and user demands vary dynamically and often unpredictably. In domains such as tactical C2 computing environments, where loss of system availability is unacceptable, how can adequate levels of performance and functionality be maintained? Part of the answer must include software that can flexibly adapt its behavior in response to changing operating conditions.

Objectives
This research will determine the control and coordination qualities needed to make adaptation policies effective in a tactical C2 computing environment. The work should also help identify indicators for situations where successful software adaptation requires human intervention. We will develop prototypes for the infrastructure and adaptation policies needed to direct adaptive changes in configurable software under dynamic operating conditions.

Activities
The research will address issues regarding how to monitor the computing environment, interpret measurements, invoke adaptation mechanisms in a coordinated way, and assess the benefits of adaptation. We will select a target application system for testing, develop an initial prototype infrastructure supporting adaptive policies and mechanisms, and then expand the prototype to include policies that work with user-specified goals.
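The monitor–interpret–adapt cycle described above can be sketched as a simple control loop. Everything in this sketch (the thresholds, the configuration names, and the `sense_load` stand-in) is a hypothetical illustration, not the project's actual infrastructure:

```python
import random

# Hypothetical adaptation policy: map observed load to a configuration.
# Thresholds and configuration names are illustrative only.
POLICY = [
    (0.85, "degraded"),   # shed optional features under heavy load
    (0.60, "reduced"),    # lower fidelity, keep core functions
    (0.00, "full"),       # normal operation
]

def sense_load() -> float:
    """Stand-in for monitoring the computing environment."""
    return random.random()

def interpret(load: float) -> str:
    """Interpret the measurement against the policy thresholds."""
    for threshold, config in POLICY:
        if load >= threshold:
            return config
    return "full"

def adapt(current: str, target: str) -> str:
    """Invoke the adaptation mechanism only when the target changes."""
    if target != current:
        print(f"reconfiguring: {current} -> {target}")
    return target

config = "full"
for _ in range(5):
    config = adapt(config, interpret(sense_load()))
```

A real infrastructure would also have to coordinate concurrent adaptations and decide when to escalate to a human operator, which is precisely what this research investigates.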

Impact
Sponsor software applications will have an increased capability to maintain acceptable levels of performance and functionality under dynamically changing conditions. The prototype may lead to a transition opportunity in the selected application area. This applied research will provide empirical results on a challenging real-world software application, forming the basis for publications and giving MITRE the opportunity to build collaborative relationships with academia.

Presentation [PDF]



Evolving Software Through Metrics

Mark Doernhoefer, Principal Investigator

Location(s): Washington and Bedford

Problem
Many commercial software development tools analyze code and produce metrics reports, but there is a lack of proven correlation between specific software measurements and "goodness" of code. This research effort will focus on identifying which software measurements should be used during a development effort, investigating how to interpret those metrics, and finding new combinations of software measurements that produce objective indications of code quality.

Objectives
Our main objective is to demonstrate that certain software measurements can provide objective measures of software quality and directly relate these measures to risk mitigation in a given software development. We will identify specific metrics and relate them to specific code quality deficiencies. We will also develop a comprehensive database of metrics threshold values that can be traced to past code quality results.

Activities
We will examine source control repositories and bug tracking systems of open source projects, correlate bug reports to points in code, and run metrics to see if they can "predict" the bug. We intend to demonstrate a correlation between code segments experiencing high error rates and certain code metrics, identify metrics that predict code quality, and show how certain metrics relate to certain types of failures.
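As a toy illustration of this kind of correlation study, the sketch below computes a Pearson correlation between one invented metric (module size in lines of code) and invented per-module bug counts; the real effort examines many metrics against actual repository and bug-tracker data:

```python
# Illustrative data only: per-module size metric and bug counts.
modules = {
    "parser":    {"loc": 1200, "bugs": 14},
    "scheduler": {"loc":  450, "bugs":  3},
    "ui":        {"loc":  900, "bugs":  9},
    "logging":   {"loc":  150, "bugs":  1},
}

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

locs = [m["loc"] for m in modules.values()]
bugs = [m["bugs"] for m in modules.values()]
r = pearson(locs, bugs)
print(f"correlation between LOC and bug count: r = {r:.2f}")
```

A strong correlation on historical data is only the first step; the research also seeks threshold values with enough predictive power to guide an ongoing development effort.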

Impact
This research will help advance the state of software development from art to science by generating an objective set of measures with proven threshold values. The metrics will provide a measurable level of risk associated with a software project, and thus assist program managers to keep project costs and schedule in line. We will also share our results with the open source community.

Presentation [PDF]



High Productivity Computing Systems

Brian Sroka, Principal Investigator

Location(s): Washington and Bedford

Problem
A critical high-performance computing technology and capability gap has developed in the intelligence, surveillance, and reconnaissance, cryptanalysis, weapons analysis, and airborne contaminant modeling fields as we scale computing to terascale and petascale levels and beyond. These mission areas need high-end computer technologies with significantly improved performance (time-to-solution), programmability (idea-to-first-solution), portability (transparency), and robustness (reliability) over today's technologies.

Objectives
HPCS aims to create a new generation of economically viable high-end computing (HEC) systems to be delivered in 2010, and a validated procurement methodology for the national security and industrial user community to be delivered in the 2007–2010 timeframe.

Activities
MITRE directs the HPCS Benchmarking working group that will develop all benchmarks and applications needed to scope the HPCS Mission Partners' requirements, drive technology development, and test the productivity of delivered technologies and systems. We seek to understand the Mission Partners' computing requirements, and will provide numerous, diverse benchmarks to support vendors developing HPCS technologies and other members of the Productivity Team.
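A benchmark in this spirit ultimately boils down to measuring time-to-solution for a representative kernel at several problem sizes. The harness below is a minimal, hypothetical sketch using a naive matrix–vector multiply; it is not one of the HPCS benchmarks:

```python
import time

def matvec(a, x):
    """Naive dense matrix-vector multiply, used here as a stand-in kernel."""
    return [sum(row[j] * x[j] for j in range(len(x))) for row in a]

def time_to_solution(n):
    """Wall-clock time to run the kernel on an n x n problem."""
    a = [[1.0] * n for _ in range(n)]
    x = [1.0] * n
    start = time.perf_counter()
    y = matvec(a, x)
    elapsed = time.perf_counter() - start
    assert y[0] == float(n)   # sanity check: each row sums to n
    return elapsed

for n in (64, 128, 256):
    print(f"n={n:4d}  time={time_to_solution(n):.4f}s")
```

HPCS productivity benchmarking goes further, weighing idea-to-first-solution (programming effort) alongside raw time-to-solution.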

Impact
HPCS is a critical technology effort in the larger government High End Computing Revitalization Task Force effort. The technologies developed for HPCS will be widely used by the national security and industrial communities. HPCS will close the gap between existing technologies and the promise of quantum computing by offering systems where value doubles every 18 months, not just the Moore's Law doubling of peak performance.



Optical Interconnects for High-Productivity Computing Systems

Ravi Athale, Principal Investigator

Location(s): Washington

Presentation [PDF]



Predictable End-to-End Timeliness in Network Centric Warfare Systems

Douglas Jensen, Principal Investigator

Location(s): Washington and Bedford

Problem
NCW systems are subject to combat-induced overloads, network variability, and failures, and are thus more dynamic than civilian systems. There are gaps in both theory and technology between the subsecond timeframe for on-line scheduling in traditional real-time computing and the many-hour timeframe for off-line scheduling and planning in traditional logistical and manufacturing applications.

Objectives
We will create the first-ever solutions to provide formally assured, predictable, end-to-end application-level timeliness under failures and network variability, using time/utility function and utility accrual resource management with distributed threads. We will work with DoD sponsors (e.g., the Air Force Research Laboratory (AFRL)) and COTS vendors to demonstrate and transition technology (e.g., to Echelon 4's product for USAF/DARPA's Joint Air-Ground Unified Adaptive Re-planning project).

Activities
We will team with Virginia Tech to create resource management algorithms that accommodate failures of nodes and network paths, and then prove, simulate, implement, and evaluate the algorithms. Working with MITRE colleagues, we will adapt operations research and other mathematical approaches to the NCW resource management timeframes and work with Echelon 4 to support the transition of our results into its product.
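To make the underlying idea concrete: in utility accrual scheduling, each task carries a time/utility function (TUF) mapping completion time to utility, and a common greedy heuristic runs the pending task with the highest utility density (utility per unit of remaining execution time). The sketch below uses invented tasks and simple step-shaped TUFs; it illustrates the idea only and is not the project's algorithm:

```python
def step_tuf(deadline, value):
    """A downward-step TUF: full value up to the deadline, zero after."""
    return lambda t: value if t <= deadline else 0.0

# Hypothetical tasks with execution times and step TUFs.
tasks = [
    {"name": "track",  "exec": 2.0, "tuf": step_tuf(deadline=5.0, value=10.0)},
    {"name": "fuse",   "exec": 3.0, "tuf": step_tuf(deadline=4.0, value=12.0)},
    {"name": "report", "exec": 1.0, "tuf": step_tuf(deadline=9.0, value=3.0)},
]

def schedule(tasks):
    """Greedy utility-density scheduler; returns run order and accrued utility."""
    now, accrued, order = 0.0, 0.0, []
    pending = list(tasks)
    while pending:
        # Utility density if the task ran to completion starting now.
        pending.sort(key=lambda t: t["tuf"](now + t["exec"]) / t["exec"],
                     reverse=True)
        task = pending.pop(0)
        now += task["exec"]
        accrued += task["tuf"](now)
        order.append(task["name"])
    return order, accrued

order, utility = schedule(tasks)
print(order, utility)
```

Note that this greedy heuristic is not optimal; the research algorithms must additionally cope with node and network-path failures across distributed threads, which is where the formal assurance work lies.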

Impact
We will help NCW systems begin to become more dependable and cost-effective. The Echelon 4 COTS product, incorporating our concepts and techniques, has been selected to be showcased in AFRL's Project Integration Center. We will publish papers in scholarly computer science journals and conference proceedings, and establish a lasting resource management research collaboration with various MITRE colleagues.

Presentation [PDF]



Research Computing Facility (RCF)

David Goldberg, Principal Investigator

Location(s): Washington and Bedford

Problem
Technical projects at MITRE, such as MITRE-Sponsored Research (MSR) and Mission-Oriented Investigation and Experimentation (MOIE) efforts, often require access to computing and data storage resources beyond what individual projects can afford to purchase and manage on their own.

Objectives
The Research Computing Facility (RCF) was created in the early 1980s to provide a distributed computing environment to the MITRE technical community. Its mission is to help MITRE researchers focus more on their research efforts and less on their computing and data storage assets.

Activities
The RCF has developed into a highly scalable environment that provides users a common view of their home directory, project spaces, and application suite regardless of geographic location or platform. The RCF team is constantly evaluating and deploying new technologies. This year the RCF team is focusing on improving services based on virtual machine technology and storage area networking and developing a capability to provision computing power to specific tasks.

Impact
The RCF provides the MITRE technical community with access to computer systems and managed storage facilities that would be very expensive for individual projects to maintain. The storage area network has made it relatively easy to provide projects with terabytes of backed-up data space and virtual machines have allowed for quick turnaround on requests for servers.

Presentation [PDF]



Server Consolidation Using VMware Virtualization

Dan Sorensen, Principal Investigator

Location(s): Washington and Bedford

Problem
Server sprawl has over the years been a necessary evil to support MITRE's ever-expanding list of IT services. Individual applications often require their own dedicated server and operating system for a variety of reasons, including software incompatibilities, access control and testing needs, and high availability/disaster recovery. Unfortunately, utilization of these machines is often very low, and the cost of owning and maintaining them is very high.

Activities
Come see how CI&T has adopted VMware to help solve the server sprawl dilemma and discover other benefits along the way. VMware is a technology that allows virtualization and consolidation of multiple operating systems onto a single shared server. It also provides simple and transparent migration of virtual servers between physical servers in a VMware "domain".

Impact
Some of the benefits of this technology include much higher hardware utilization rates and resulting reductions in server counts and related infrastructure needs such as floor space, power, and cooling. It has also greatly reduced the turnaround time and labor required to provision new servers, and is allowing CI&T to pursue significant improvements to our server disaster recovery and availability capabilities.
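As a back-of-the-envelope illustration of why consolidation pays off, the sketch below first-fit packs a set of hypothetical per-server utilization figures onto shared hosts capped at a target peak utilization; all numbers are invented and do not reflect CI&T's actual environment:

```python
# Hypothetical per-server CPU utilizations for lightly loaded machines.
server_utilizations = [0.05, 0.10, 0.08, 0.12, 0.06, 0.15, 0.04, 0.09]
host_capacity = 0.70   # target peak utilization per consolidated host

def hosts_needed(loads, capacity):
    """First-fit-decreasing bin packing of per-server loads onto shared hosts."""
    hosts = []
    for load in sorted(loads, reverse=True):
        for i, used in enumerate(hosts):
            if used + load <= capacity:
                hosts[i] = used + load
                break
        else:
            hosts.append(load)   # no existing host fits; open a new one
    return len(hosts)

n = hosts_needed(server_utilizations, host_capacity)
print(f"{len(server_utilizations)} servers -> {n} consolidated hosts")
```

Even this crude model shows how many mostly idle machines can share one physical host, which is where the savings in floor space, power, and cooling come from.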



 

 

Solutions That Make a Difference.®
Copyright © 1997-2013, The MITRE Corporation. All rights reserved.
MITRE is a registered trademark of The MITRE Corporation.
Material on this site may be copied and distributed with permission only.
