2005 Technology Symposium
Computing and Software
The mission of the Computing and Software Area Team is to maintain awareness
of developments outside MITRE related to the technologies of computer
architecture and engineering, computer science, software engineering, and
the software profession.
Extending Enterprise Services to the "Tactical Edge"
Bill DeCoste, Principal Investigator
Location(s): Washington and Bedford
High Productivity Computing Systems
David Koester, Principal Investigator
Location(s): Washington and Bedford
Problem
A critical high-performance computing technology and capability gap has developed in the intelligence, surveillance, and reconnaissance; cryptanalysis; weapons analysis; and airborne contaminant modeling mission areas as computing scales to terascale and petascale levels and beyond. These mission areas need high-end computing technologies with significantly improved performance (time-to-solution), programmability (idea-to-first-solution), portability (transparency), and robustness (reliability) over today's technologies.
Objectives
HPCS aims to create a new generation of economically viable high-end computing (HEC) systems to be delivered in 2010, and a validated procurement methodology for the national security and industrial user community to be delivered in the 2007-2010 timeframe.
Activities
MITRE directs the HPCS Benchmarking working group that will develop all benchmarks and applications needed to scope the HPCS Mission Partners' requirements, drive technology development, and test the productivity of delivered technologies and systems. We seek to understand the Mission Partners' computing requirements, and will provide numerous, diverse benchmarks to support vendors developing HPCS technologies and other members of the Productivity Team.
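To illustrate the kind of synthetic kernel such benchmark suites use to measure time-to-solution, here is a minimal STREAM-style triad sketch in Python. The kernel and harness are purely illustrative and are not one of the HPCS benchmarks:

```python
import time

def triad(a, b, c, scalar):
    """STREAM-style triad kernel: a[i] = b[i] + scalar * c[i]."""
    for i in range(len(a)):
        a[i] = b[i] + scalar * c[i]
    return a

def benchmark(n=100_000, trials=3):
    """Report best time-to-solution and rough memory bandwidth over trials."""
    b = [1.0] * n
    c = [2.0] * n
    best = float("inf")
    for _ in range(trials):
        a = [0.0] * n
        t0 = time.perf_counter()
        triad(a, b, c, 3.0)
        best = min(best, time.perf_counter() - t0)
    # Three arrays of n 8-byte floats are touched per pass.
    mbytes = 3 * 8 * n / 1e6
    return best, mbytes / best

if __name__ == "__main__":
    elapsed, rate = benchmark()
    print(f"best time: {elapsed:.4f} s, ~{rate:.1f} MB/s")
```

Real benchmark suites add verification of results and report productivity metrics beyond raw rate; this sketch only shows the time-to-solution measurement pattern.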
Impact
HPCS is a critical technology effort in the larger government High End Computing Revitalization Task Force effort. The technologies developed for HPCS will be widely used by the national security and industrial communities. HPCS will close the gap between existing technologies and the promise of quantum computing by offering systems where value doubles every 18 months, not just the Moore's Law doubling of peak performance.
Internal SourceForge
Jeremy Maziarz, Principal Investigator
Location(s): Washington and Bedford
Problem
Development projects (usually software) within MITRE are decentralized and can consume valuable resources. Managing these projects requires each development team to install and support its own duplicate set of tools and services. This decentralization and duplication of effort is costly because the associated management tasks take time away from development itself. Internal SourceForge (iSF) has been working to solve this problem for the past two years.
Objectives
The iSF service provides MITRE developers with a stable, unified service for developing and managing projects. Developers can then focus on project development without the added burden of maintaining the tools and services needed to get their job done. This translates into reduced project overhead from startup costs and compute resources.
Activities
To improve our service we will contract with the outside vendor GForge Group for professional services related to support and customization. This will result in an upgrade to the most recent stable version of the underlying GForge product (version 4.0), which includes support for Subversion. We will also extend our integration with MITRE corporate services, including single sign-on with MyMII and integration with CommunityShare, MITRE's Microsoft SharePoint environment, for more robust document management.
Impact
The iSF service has improved the quality of development projects by providing a unified set of tools that enable developers and project managers to focus on the development of their project rather than the management of their tools. Based on the current activities, projects will gain flexibility in the choice of tools such as code versioning and document management.
Performance and Quality Testing Service
Aimee Bechtle, Principal Investigator
Location(s): Washington and Bedford
Problem
The R105 Performance and Quality Testing service combines powerful automation tools with the testing and tool expertise of our team. Our goal is to optimize the responsiveness, availability, and reliability of applications and infrastructure systems at MITRE. We will demonstrate how our performance test tools can emulate armies of concurrent users in diverse enterprise environments to detect bottlenecks or anomalies that cause poor end-user experience or failures and disruptions in the continuity of service. Witness first-hand the sophisticated visualization, analysis, and reporting capabilities that our tools provide to help diagnose problems. Discuss with our Performance Testers how they can assist your project during proof of concept, prototyping, debugging, and load/stress testing phases. We can help you tune your applications, servers, and network infrastructure, and help you understand how performance benchmarking and regression testing can maintain or improve the user's experience throughout the development of custom applications or the upgrade of a COTS product.
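As a miniature sketch of what emulating concurrent users involves, the following Python fragment spawns worker threads against a stand-in request function and summarizes the observed latencies. The request function and statistics here are placeholders for illustration, not the commercial test tools the service uses:

```python
import statistics
import threading
import time

def measure(target, n_users=8, requests_per_user=5):
    """Emulate concurrent users calling `target` and collect latencies."""
    latencies = []
    lock = threading.Lock()

    def user():
        for _ in range(requests_per_user):
            t0 = time.perf_counter()
            target()  # one simulated end-user request
            dt = time.perf_counter() - t0
            with lock:
                latencies.append(dt)

    threads = [threading.Thread(target=user) for _ in range(n_users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    ordered = sorted(latencies)
    return {
        "count": len(ordered),
        "mean": statistics.mean(ordered),
        "p95": ordered[int(0.95 * (len(ordered) - 1))],
    }

if __name__ == "__main__":
    print(measure(lambda: time.sleep(0.001)))
```

A real load test would drive actual protocol traffic (HTTP, database, etc.), ramp user counts over time, and correlate latencies with server-side resource metrics to locate bottlenecks.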
Research Computer Facility
David Goldberg, Principal Investigator
Location(s): Bedford
Problem
Technical projects at MITRE such as MSRs and MOIEs often require access to computing and data storage resources beyond what individual projects can afford to purchase and manage on their own.
Objectives
The Research Computer Facility (RCF) was created in the early 1980s to provide a distributed computing environment to the MITRE technical community. Its mission is to help MITRE researchers focus more on their research efforts and less on their computing and data storage assets.
Activities
The RCF has developed into a highly scalable environment that provides users a common view of their home directory, project spaces, and application suite regardless of geographic location or platform. The RCF team is constantly evaluating and deploying new technologies. This year the RCF team is focusing on improving services based on virtual machine technology and storage area networking.
Impact
The RCF provides the MITRE technical community with access to computer systems and managed storage facilities that would be very expensive for individual projects to maintain. The storage area network has made it relatively easy to provide projects with terabytes of backed-up data space, and virtual machines have allowed for quick turnaround on requests for servers.
Service Level Management and End-User Experience Software
Art Laramee, Principal Investigator
Location(s): Washington and Bedford
Problem
Service Level Management (SLM) is a process for delivering services that consistently meet the needs of MITRE end users for application access and performance. To support this process we define Service Level Agreements (SLAs) and negotiate SLAs with the stakeholders funding the service. We also catalogue our corporate IT services using language our IT customers can understand. We track compliance of these services with the negotiated SLAs using a software solution that measures end-user experience. This software sends alerts based on established thresholds to technical support and provides monthly reporting on performance and availability. MITRE's initial deployment of SLM covers five services and all our domestic sites. We will describe the SLM process and demonstrate the end-user experience software.
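The threshold-and-alert pattern described above can be sketched in a few lines. The threshold and violation-rate values below are illustrative placeholders, not the terms of MITRE's actual SLAs or the behavior of the deployed monitoring software:

```python
def check_sla(latencies_ms, threshold_ms=2000, max_violation_rate=0.05):
    """Evaluate one reporting window of end-user latency samples.

    Returns (compliant, violation_rate): the window is compliant when
    no more than max_violation_rate of requests exceed the threshold.
    """
    if not latencies_ms:
        return True, 0.0
    violations = sum(1 for ms in latencies_ms if ms > threshold_ms)
    rate = violations / len(latencies_ms)
    return rate <= max_violation_rate, rate

def alert_if_needed(service, latencies_ms, **kw):
    """Produce an alert string for technical support, or None if compliant."""
    compliant, rate = check_sla(latencies_ms, **kw)
    if compliant:
        return None
    return f"ALERT: {service} exceeded its SLA threshold in {rate:.0%} of requests"
```

In a full SLM deployment the same per-window measurements would also roll up into the monthly performance and availability reports.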
Time-Critical Resource Management in Dynamic C2 Systems
Douglas Jensen, Principal Investigator
Location(s): Washington and Bedford
Problem
Battle management command and control (BMC2) systems have multiple resources and needs with numerous constraints that must often be satisfied in seconds to minutes. Resource management is greatly complicated because the systems and their environments are dynamic, with many uncertainties. Commanders need dependable indicators and strong assurances about system behavior.
Objectives
We will help fill the void between traditional real-time resource management and traditional any-time planning and decision making for machine-to-machine resource management in dynamic BMC2 systems with seconds-to-minutes timeframes. Drawing on utility theory, we will seek a formal basis, a methodology, and a proof-of-concept software tool for time/utility function time constraints.
Activities
In collaboration with academic researchers, we will generalize and expand the theory and coverage of our time-critical resource management concept: time/utility functions and utility accrual scheduling. We will derive a methodology and produce a proof of concept software tool. We will demonstrate the results in a realistic BMC2 application.
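The core idea can be sketched in a few lines: each job carries a time/utility function mapping its completion time to a utility, and a utility accrual scheduler favors jobs with high utility per unit of execution time. This is a toy greedy illustration of the concept with a simple downward-step function, not the project's actual methodology or tool:

```python
def tuf_step(value, deadline, t):
    """A downward-step time/utility function: full value if the job
    completes by its deadline, zero utility afterward."""
    return value if t <= deadline else 0.0

def utility_density(job, now):
    """Utility gained per unit of remaining execution time."""
    completion = now + job["remaining"]
    return tuf_step(job["value"], job["deadline"], completion) / job["remaining"]

def schedule(jobs, now=0.0):
    """Greedy utility accrual sketch: repeatedly run to completion the
    job with the highest utility density, accruing each job's utility
    at its completion time. Returns (execution order, total utility)."""
    jobs = [dict(j) for j in jobs]  # avoid mutating the caller's list
    order, accrued = [], 0.0
    while jobs:
        best = max(jobs, key=lambda j: utility_density(j, now))
        jobs.remove(best)
        now += best["remaining"]
        accrued += tuf_step(best["value"], best["deadline"], now)
        order.append(best["name"])
    return order, accrued
```

Richer time/utility functions (ramps, soft deadlines) and preemption make the real scheduling problem much harder; the greedy density heuristic here only conveys the accrual objective.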
Impact
Resource management in most BMC2 systems (and hence software cost and the system's cost-effectiveness) usually suffers from insufficient consideration of timeliness, due in large part to inadequate formal bases, methodologies, and software tools. This problem is increasingly critical as shorter timeframes of opportunity necessitate automated resource management, and is increasingly difficult to solve as warfare becomes more network-centric.