Adaptation Policies for Managing Configurable Software Lashon Booker, Principal Investigator Problems: Software applications must cope with computing environments in which resource constraints and user demands vary dynamically and often unpredictably. In domains such as tactical C2 computing environments, where loss of system availability is unacceptable, how can adequate levels of performance and functionality be maintained? Part of the answer must include software that can flexibly adapt its behavior in response to changing operating conditions.
Objectives: This research will determine the control and coordination qualities needed to make adaptation policies effective in a tactical C2 computing environment. The work should also help identify indicators for situations where successful software adaptation requires human intervention. We will develop prototypes for the infrastructure and adaptation policies needed to direct adaptive changes in configurable software under dynamic operating conditions.
Activities: The research will address issues regarding how to monitor the computing environment, interpret measurements, invoke adaptation mechanisms in a coordinated way, and assess the benefits of adaptation. We will select a target application system for testing, develop an initial prototype infrastructure supporting adaptive policies and mechanisms, and then expand the prototype to include policies that work with user-specified goals.
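The control loop implied by these activities (monitor the environment, interpret measurements against goals, invoke an adaptation mechanism) can be sketched concretely. The Python below is only an illustration: the measurement fields, goal thresholds, and adaptation actions are hypothetical stand-ins, not the project's actual infrastructure or policies.

```python
# A minimal sketch of a monitor -> interpret -> adapt control loop for
# configurable software. Every name here (Measurements, the goal thresholds,
# the adaptation actions) is a hypothetical illustration.
import random
from dataclasses import dataclass
from typing import Optional

@dataclass
class Measurements:
    cpu_load: float      # fraction of CPU in use, 0.0-1.0
    queue_depth: int     # requests awaiting service
    link_quality: float  # observed network goodput, 0.0-1.0

def monitor() -> Measurements:
    """Stand-in for instrumentation of the computing environment."""
    return Measurements(cpu_load=random.random(),
                        queue_depth=random.randint(0, 50),
                        link_quality=random.random())

def choose_adaptation(m: Measurements, goals: dict) -> Optional[str]:
    """Interpret measurements against user-specified goals and pick an action."""
    if m.cpu_load > goals["max_cpu"] and m.queue_depth > goals["max_queue"]:
        return "shed_optional_features"     # trade functionality for performance
    if m.link_quality < goals["min_link"]:
        return "switch_to_compressed_feed"  # trade fidelity for timeliness
    return None                             # operating conditions are acceptable

def adaptation_loop(goals: dict, cycles: int = 10) -> None:
    for _ in range(cycles):
        m = monitor()
        action = choose_adaptation(m, goals)
        if action:
            print(f"adapting: {action} (measurements: {m})")

if __name__ == "__main__":
    adaptation_loop({"max_cpu": 0.8, "max_queue": 30, "min_link": 0.2})
```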
Impact: Sponsor software applications will have an increased capability to maintain acceptable levels of performance and functionality under dynamically changing conditions. The prototype may lead to a transition opportunity in the selected application area. This applied research will provide empirical results on a challenging real-world software application, forming the basis for publications and giving MITRE the opportunity to build collaborative relationships with academia.
Approved for Public Release: 05-1438 Presentation [PDF]
High Productivity Computing Systems David Koester, Principal Investigator Problems: A critical high-performance computing technology and capability gap has developed in the intelligence/surveillance/reconnaissance, cryptanalysis, weapons analysis, and airborne contaminant modeling fields as we scale computing to terascale and petascale levels and beyond. These mission areas need high-end computer technologies with significantly improved performance (time-to-solution), programmability (idea-to-first-solution), portability (transparency), and robustness (reliability) over today’s technologies.
Objectives: HPCS aims to create a new generation of economically viable High End Computing systems and a validated procurement methodology for the national security and industrial user community, both to be delivered by 2010.
Activities: MITRE directs the HPCS Benchmarking working group that will develop all benchmarks and applications needed to scope the HPCS Mission Partners’ requirements, drive technology development, and test the productivity of delivered technologies and systems. We seek to understand the Mission Partners’ computing requirements, and will provide numerous, diverse benchmarks to support vendors developing HPCS technologies and other members of the Productivity Team.
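As a flavor of what a low-level kernel in such a benchmark suite can look like, here is a minimal, self-contained sketch of a STREAM-triad-style memory-bandwidth measurement. It is purely illustrative; the array size, trial count, and byte-counting convention are arbitrary choices for this sketch, not HPCS benchmark specifications.

```python
# Illustrative STREAM-triad-style kernel: time a = b + q*c and report
# an approximate sustained memory bandwidth. Not an HPCS benchmark.
import time
import numpy as np

def triad_bandwidth(n: int = 10_000_000, trials: int = 5) -> float:
    """Return the best observed bandwidth in GB/s for a = b + q*c."""
    b = np.random.rand(n)
    c = np.random.rand(n)
    a = np.empty_like(b)
    q = 3.0
    best = float("inf")
    for _ in range(trials):
        start = time.perf_counter()
        np.multiply(c, q, out=a)   # a = q*c
        np.add(a, b, out=a)        # a = b + q*c
        best = min(best, time.perf_counter() - start)
    bytes_moved = 3 * n * a.itemsize   # read b, read c, write a (approximate)
    return bytes_moved / best / 1e9

if __name__ == "__main__":
    print(f"triad bandwidth: {triad_bandwidth():.1f} GB/s")
```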
Impact: HPCS is a critical technology effort in the larger government High End Computing Revitalization Task Force effort. The technologies developed for HPCS will be widely used by the national security and industrial communities. HPCS will close the gap between existing technologies and the promise of quantum computing by offering systems where value doubles every 18 months, not just the Moore’s Law doubling of peak performance.
Approved for Public Release: 05-0042
High Productivity Computing Systems Optical Ravi Athale, Principal Investigator
Medical Device Interoperability Kathryn Lesh, Principal Investigator Problems: The Food and Drug Administration (FDA) Center for Devices and Radiological Health regulates medical devices, but there is no requirement for devices to interoperate. There is a dearth of medical device interoperability standards and the few existing standards are complex. As a result, "interoperable" medical devices can interoperate only with devices from the same manufacturer or their partners.
Objectives: The primary objectives of this project are to clearly define the medical device interoperability issue and identify and assess possible solutions to the problem. Secondary objectives are to nurture relationships with multidisciplinary public and private entities, publish a paper, and participate in interoperability standards development.
Activities: We will review existing standards for medical device interoperability, and will identify and assess current projects addressing medical device interoperability. We will collaborate with the Medical Device Plug and Play project, the Patient Health Situational Awareness CEM IR&D, and the FDA Center for Devices and Radiological Health. This collaboration will include writing a white paper.
Impact: Medical device interoperability is critical to improving patient safety in hospitals, emergency/disaster situations, and home care. This research presents an opportunity for multidisciplinary public/private collaboration between MITRE staff and academia, industry, federal government, engineers, clinicians, computer scientists, and informaticists. Among the MITRE customers affected will be the Departments of Veterans Affairs, Defense, and Health and Human Services.
Approved for Public Release: 07-0279
Predictable End-to-End Timeliness in Network Centric Warfare Systems Douglas Jensen, Principal Investigator Problems: NCW systems are subject to combat-induced overloads, network variabilities, and failures, and thus are more dynamic than civilian systems. There are gaps in both theory and technology between the subsecond timeframe for on-line scheduling of traditional real-time computing and the many-hour timeframe for off-line scheduling and planning for traditional logistical and manufacturing applications.
Objectives: We will create the first-ever solutions to provide formally assured, predictable, end-to-end application-level timeliness under failures and network variabilities, using time/utility function (TUF), utility-accrual resource management with distributed threads. We will work with DoD sponsors (e.g., the Air Force Research Laboratory (AFRL)) and COTS vendors to demonstrate and transition technology (e.g., to Echelon 4's product for USAF/DARPA's Joint Air-Ground Unified Adaptive Re-planning project).
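To make the TUF/utility-accrual idea concrete: each activity carries a time/utility function mapping its completion time to the utility delivered, and the scheduler orders work to maximize total accrued utility. The sketch below uses a simple greedy utility-density rule and step-shaped TUFs purely for illustration; it is not the set of algorithms being developed on this project.

```python
# A small sketch of time/utility function (TUF), utility-accrual scheduling.
# The scheduler greedily runs the job with the highest expected utility per
# unit of remaining execution time. Jobs, TUF shapes, and the greedy rule
# are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Job:
    name: str
    exec_time: float                   # remaining execution time
    tuf: Callable[[float], float]      # completion time -> utility

def step_tuf(utility: float, deadline: float) -> Callable[[float], float]:
    """Classic hard-deadline TUF: full utility before the deadline, none after."""
    return lambda t: utility if t <= deadline else 0.0

def utility_accrual_schedule(jobs: List[Job]) -> float:
    """Greedy utility-accrual scheduling; returns total accrued utility."""
    now, accrued = 0.0, 0.0
    pending = list(jobs)
    while pending:
        # Pick the job with the best utility density if started now.
        best = max(pending,
                   key=lambda j: j.tuf(now + j.exec_time) / j.exec_time)
        pending.remove(best)
        now += best.exec_time
        gained = best.tuf(now)
        accrued += gained
        print(f"{best.name}: finished at t={now:.1f}, utility {gained:.1f}")
    return accrued

if __name__ == "__main__":
    jobs = [Job("track-update", 2.0, step_tuf(10.0, deadline=5.0)),
            Job("image-exploit", 4.0, step_tuf(40.0, deadline=8.0)),
            Job("status-report", 1.0, step_tuf(3.0, deadline=20.0))]
    print(f"total accrued utility: {utility_accrual_schedule(jobs):.1f}")
```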
Activities: We will team with Virginia Tech to create resource management algorithms that accommodate failures of nodes and network paths, and then prove, simulate, implement, and evaluate the algorithms. Working with MITRE colleagues, we will adapt operations research and other mathematical approaches to the NCW resource management timeframes and work with Echelon 4 to support the transition of our results into its product.
Impact: We will help NCW systems become more dependable and cost-effective. The Echelon 4 COTS product, incorporating our concepts and techniques, has been selected to be showcased in AFRL's Project Integration Center. We will publish papers in scholarly computer science journals and conference proceedings, and establish a lasting resource management research collaboration with various MITRE colleagues.
Approved for Public Release: 05-1471 Presentation [PDF]
Project Monsoon: Secure, Prioritized, Reliable, Distributed Data Dissemination from Source to Tactical Edge Howard Kong, Principal Investigator Problems: In a net-centric environment, moving data from the source(s) to the consumers is the most critical requirement. Monsoon aims to help programs do this in a secure, reliable, and prioritized way over unreliable and evolving networks, anticipating growth in both data volumes and user communities.
Objectives: Our objectives are twofold. First, we seek to understand the general characteristics of data dissemination on the basis of key requirements and constraints from select programs. Second, we seek to identify the benefits, weaknesses, and applicability of the peer-to-peer approach to solving data dissemination problems.
Activities: To meet our objectives, we are building an open-source, peer-to-peer data dissemination prototype, enhanced for end-to-end performance and security using plug-ins. In addition, we are modeling the prototype's behavior in the OPNET simulation environment to study the prototype "in the large" using program-realistic scenarios. Finally, we will compare the characteristics of the peer-to-peer prototype to traditional methods.
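As an intuition for swarming, peer-to-peer dissemination, the following toy simulation splits a data product into chunks, seeds one source peer, and lets each peer pull chunks from a few random neighbors per round using a rarest-first rule. The peer count, chunk count, and selection rule are assumptions made for this sketch, not the Monsoon prototype's design.

```python
# Toy simulation of swarming, peer-to-peer data dissemination with a
# rarest-first chunk selection rule. Purely illustrative.
import random
from collections import Counter

def disseminate(num_peers: int = 20, num_chunks: int = 50, degree: int = 3) -> int:
    """Return the number of rounds until every peer holds every chunk."""
    peers = [set() for _ in range(num_peers)]
    peers[0] = set(range(num_chunks))          # peer 0 is the original source
    rounds = 0
    while any(len(p) < num_chunks for p in peers):
        rounds += 1
        availability = Counter(c for p in peers for c in p)
        for i, have in enumerate(peers):
            neighbors = random.sample([j for j in range(num_peers) if j != i],
                                      k=degree)
            offered = set().union(*(peers[j] for j in neighbors)) - have
            if offered:
                # Rarest-first: fetch the chunk held by the fewest peers.
                have.add(min(offered, key=lambda c: availability[c]))
    return rounds

if __name__ == "__main__":
    print(f"fully disseminated after {disseminate()} rounds")
```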
Impact: The work will have a direct impact on AF Weather's next-generation data dissemination architecture. We will also investigate applicability of the technology to other programs, such as E-10A and the Single Integrated Air Picture. In addition, we believe the open source prototype will provide a compelling base for commercial companies to leverage and enrich other government, DoD, and commercial applications.
Approved for Public Release: 06-1487 Presentation [PDF]
Quantum Information Science Gerry Gilbert, Principal Investigator Problems: Quantum information science is a new, interdisciplinary field that holds the promise of providing the means for solving practical problems that would otherwise be impossible. Quantum computers could solve certain types of otherwise intractable computational problems, such as breaking public-key encryption systems, as well as a variety of challenging, computationally intensive mathematical problems. The challenge is to discover a scalable, efficient, fault-tolerant design.
Objectives: We plan to develop the world's first efficient, scalable, fault-tolerant quantum computer design.
Activities: We will perform theoretical and systems-engineering quantum computing analyses and develop quantum information processing components using the linear quantum optics or cluster approach. We will design and demonstrate a quantum memory device, prototype a non-linear sign shift gate or cluster fusion operator, and demonstrate the quantum computing components.
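For context, the nonlinear sign-shift (NS) gate referenced above is the basic nondeterministic primitive of the linear-optics (KLM) approach; acting on a single optical mode in the photon-number basis, it flips the sign of the two-photon amplitude (with success probability 1/4 in the original KLM proposal):

```latex
% Action of the nonlinear sign-shift (NS) gate on one optical mode
% (photon-number basis); in the KLM scheme it is realized probabilistically
% with linear optics, ancilla photons, and measurement.
\mathrm{NS}:\quad
\alpha\lvert 0\rangle + \beta\lvert 1\rangle + \gamma\lvert 2\rangle
\;\longmapsto\;
\alpha\lvert 0\rangle + \beta\lvert 1\rangle - \gamma\lvert 2\rangle
```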
Impact: This work will have significant impact on MITRE's sponsors, as well as the academic and industrial scientific and technology communities. It will provide the basis for technology that will enhance our abilities in code breaking, real-time analysis of frequency-hopped spread-spectrum communications, steganographic analysis, and other computationally intensive problems. This work maintains and enhances MITRE's leading position in an important area of science and technology.
Approved for Public Release: 05-1439 Presentation [PDF]
Research Computing Facility (RCF) David Goldberg, Principal Investigator Problems: Technical projects at MITRE such as MSR and MOIE often require access to computing and data storage resources beyond what individual projects can afford to purchase and manage on their own.
Objectives: The Research Computing Facility (RCF) was created in the early 1980s to provide a distributed computing environment to the MITRE technical community. Its mission is to help MITRE researchers focus more on their research efforts and less on their computing and data storage assets.
Activities: The RCF has developed into a highly scalable environment that provides users a common view of their home directory, project spaces, and application suite regardless of geographic location or platform. The RCF team is constantly evaluating and deploying new technologies. This year the RCF team is focusing on improving services based on virtual machine technology and storage area networking, and on developing a capability to provision computing power for specific tasks.
Impact: The RCF provides the MITRE technical community with access to computer systems and managed storage facilities that would be very expensive for individual projects to maintain. The storage area network has made it relatively easy to provide projects with terabytes of backed-up data space, and virtual machines have allowed quick turnaround on requests for servers.
Approved for Public Release: 05-0348 Presentation [PDF]
The 3x2 Challenge: Improve Fingerprint Recognition Speed by Two Orders of Magnitude While Decreasing Cost by Three Orders of Magnitude Steve Barry, Principal Investigator Problems: Biometric-based identification is the exhaustive matching of a biometric marker against a database of such markers. Increases in database size and in the number of fingerprints matched increase computational load and cost unacceptably. Current matchers perform 1.5-2.5 matches/second/dollar. To meet projected needs, matching must be done at 250,000 matches/second/dollar. This project will meet this challenge by using commodity hardware and innovative software.
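The stated target is simply the product of the two factors in the project's title: a two-orders-of-magnitude speedup combined with a three-orders-of-magnitude cost reduction multiplies today's upper-end rate of 2.5 matches/second/dollar by a factor of 10^5:

```latex
% How the "3x2" factors combine into the stated matches-per-second-per-dollar goal
\underbrace{2.5\ \tfrac{\text{matches}}{\text{s}\cdot\$}}_{\text{today}}
\times \underbrace{10^{2}}_{\text{speed}}
\times \underbrace{10^{3}}_{\text{cost}}
= 2.5 \times 10^{5}\ \tfrac{\text{matches}}{\text{s}\cdot\$}
= 250{,}000\ \tfrac{\text{matches}}{\text{s}\cdot\$}
```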
Objectives: We will distribute the fingerprint matching application efficiently onto parallel hardware to realize three orders of magnitude better cost per match. Hardware includes the Cell BE processor, general-purpose CPUs, vector processing units, and high-speed graphics processors. We will also use advanced algorithms, indexing, and pattern-based classification to reduce the search set size and improve matches per second by two orders of magnitude.
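The two levers in these objectives, shrinking the candidate set through pattern-based binning and matching the survivors in parallel, can be sketched as follows. Everything here (the bin labels, the toy scoring function, the use of Python multiprocessing) is a hypothetical illustration, not the project's algorithms or its target hardware.

```python
# Illustrative sketch of binning plus data-parallel fingerprint matching.
# Real matchers compare minutiae on specialized hardware; this toy version
# only shows the structure of the computation.
import random
from multiprocessing import Pool
from typing import List, Tuple

BINS = ("arch", "loop", "whorl")   # coarse pattern classes used for binning

def make_gallery(n: int) -> List[Tuple[int, str, List[float]]]:
    """(id, pattern class, feature vector) records standing in for templates."""
    return [(i, random.choice(BINS), [random.random() for _ in range(16)])
            for i in range(n)]

def match_score(probe: List[float], candidate: List[float]) -> float:
    """Toy similarity score (negative squared distance)."""
    return -sum((p - c) ** 2 for p, c in zip(probe, candidate))

def score_chunk(args) -> List[Tuple[float, int]]:
    probe, chunk = args
    return [(match_score(probe, feats), cid) for cid, _, feats in chunk]

def identify(probe: List[float], probe_class: str, gallery, workers: int = 4) -> int:
    # Binning: only candidates in the probe's pattern class are matched.
    candidates = [rec for rec in gallery if rec[1] == probe_class]
    chunks = [candidates[i::workers] for i in range(workers)]
    with Pool(workers) as pool:
        results = pool.map(score_chunk, [(probe, ch) for ch in chunks])
    return max(s for part in results for s in part)[1]   # best-scoring id

if __name__ == "__main__":
    gallery = make_gallery(10_000)
    probe_id, probe_class, probe = random.choice(gallery)
    print("best match:", identify(probe, probe_class, gallery), "truth:", probe_id)
```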
Activities: Our initial work will establish the feasibility of our goals through simulation based on a theoretical model and through incremental targeted studies to implement and port image processing functions on specialized hardware. We will examine database scalability and classification properties for binning and filtering and for synchronizing single instruction, multiple data functions. Select hardware platforms will be clustered and individually researched.
Impact: Our program will develop an open API for high-volume fingerprint matching systems and will identify and demonstrate practical ways to improve the price/performance ratio of such systems. The work will incorporate promising research and integrate it into an operational system. Our goal is to demonstrate the value of our approach to four U.S. government departments: DHS, DoD, DoJ, and DoS.
Approved for Public Release: 07-0271 Presentation [PDF]