
Technology Symposium

 


Projects Featured in Information Management:

Automated Web Service Discovery, Characterization, and Cataloging

Collaborative Data Objects

Content Extraction and Duplicate Analysis and Recognition (CEDAR)

Data Discovery Using Digests

Electronic Journal System in a Tactical Fusion Center

Flexible Data Management

Improving Trustworthiness of Enterprise Data for Decision Making

MITRE's Extranet: Community Share Partners

Montage: Exploiting UAV Video in Mission Context

Next Generation Meeting Support

Sensor Data and Analysis Framework

SezHoo: Using Reputation to Increase the Trustworthiness of Information

Social and Semantic Software Prototypes: Social Bookmarking in the Enterprise and Semantically Enhanced Topic Watching

Standard Rule Language for Enterprise Application Integration

Technical Intelligence Gathering and Evaluation Process


Information Management

Information Management focuses on technologies and processes that enable the organization, creation, management, and use of information to satisfy the needs of diverse applications and users.


Automated Web Service Discovery, Characterization, and Cataloging

Douglas Troester, Principal Investigator

Problems:
NetOps operational managers do not have the technical capabilities they need to automatically discover, catalog, and characterize Web services operating on the GIG. Without an accurate dashboard that details what Web services are operating, managers are seriously limited in their ability to monitor and control the services or to accurately assess their impact on other GIG services.

Objectives:
We will design, develop, test, and demonstrate a prototype capability to automatically discover, catalog, and characterize Web services. We will also investigate ways to integrate information available from existing GIG sensors, such as the NIPRNet Akamai infrastructure, to further enhance the Web services information available to NetOps managers.

Activities:
We will explore and survey existing standards, approaches, and products for discovering Web services and investigate whether certain deployed technologies, such as intrusion detection systems, can be adapted to provide information about Web services. We will ascertain what information NetOps managers need about a Web service, develop and apply a data normalization methodology, and design and populate an initial Web service dashboard capability.
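
Populating a Web service dashboard implies parsing service descriptions into catalog entries. As a hedged illustration (the element names follow the WSDL 1.1 namespace; the catalog-entry shape is our assumption, not the project's design), a minimal characterizer might pull a service name and its operations from a WSDL document:

```python
import xml.etree.ElementTree as ET

WSDL_NS = "http://schemas.xmlsoap.org/wsdl/"  # WSDL 1.1 namespace

def characterize_service(wsdl_xml):
    """Extract a minimal catalog entry (service name plus operation
    names) from a WSDL document string."""
    root = ET.fromstring(wsdl_xml)
    ops = sorted({op.get("name")
                  for op in root.iter(f"{{{WSDL_NS}}}operation")
                  if op.get("name")})
    svc = root.find(f".//{{{WSDL_NS}}}service")
    return {"service": svc.get("name") if svc is not None else None,
            "operations": ops}
```

A real discovery pipeline would first have to find candidate endpoints on the network; this sketch covers only the characterization step once a description is in hand.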

Impact:
The capabilities developed will apply directly to many MITRE projects and customers, including Joint Task Force - Global NetOps, Joint and Service NetOps centers, the National Geospatial-Intelligence Agency, DISA's Net-Centric Enterprise Services program, and other programs engaged in developing or managing Web services. The effort will provide NetOps managers with critical new capabilities and expand MITRE's knowledge and expertise in Web service management.

Approved for Public Release: 07-0086


Collaborative Data Objects

Dan Winkowski, Principal Investigator

Problems:
The utility of enterprise chat tools has led to their increased use for C2 decision making. However, poor integration with "systems of record" hinders the seamless flow of decision-quality information. Also, for data-intensive collaboration these tools can actually present obstacles to information sharing, shared situational awareness, and self-synchronization. Issues of ambiguity, accuracy, and quality are prevalent.

Objectives:
Collaborative data objects (CDOs) are a proposed methodology to introduce structured data to unstructured conversational chat sessions without disrupting the proven lightweight style of collaboration on which warfighters depend.

Activities:
We will develop a CDO description language as well as a framework to host, manage, and synchronize CDOs across chat sessions. Standards will be defined for accessing mission application and enterprise service functionality through CDOs and to facilitate seamless information flow into and out of chat sessions. A scenario and set of initial CDOs will be developed to support assessment.
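
To make the idea concrete: a CDO could ride inside an ordinary chat line as a machine-readable payload that tools extract while humans read the surrounding text. The `{cdo:...}` marker and flat-JSON payload below are illustrative assumptions of this sketch, not the project's actual CDO description language:

```python
import json
import re

# Hypothetical inline marker; handles flat (non-nested) JSON payloads.
CDO_PATTERN = re.compile(r"\{cdo:(\{[^{}]*\})\}")

def embed_cdo(text, cdo):
    """Append a structured data object to a free-text chat line."""
    return f"{text} {{cdo:{json.dumps(cdo)}}}"

def extract_cdos(message):
    """Recover any embedded data objects from a chat message."""
    return [json.loads(m) for m in CDO_PATTERN.findall(message)]
```

The point of the design is that a human sees "new track at grid NK1234" while a mission application sees a typed track object it can route into a system of record.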

Impact:
Our vision is a collaborative environment rich in data objects that enable improved application data exchanges, interaction with agents via a rich semantic foundation, interaction with other collaborative media (maps, shared applications), and exploitation (searching, cataloging, learning) of saved conversational threads based on CDO markers. Technology transfer will take place through interactions with various standards bodies.

Approved for Public Release: 05-1408

Presentation [PDF]


Content Extraction and Duplicate Analysis and Recognition (CEDAR)

Susan Lubar, Principal Investigator

Problems:
Researchers estimate that about 30 percent of documents on the Web are duplicates. Duplicates cause reduced productivity and incomplete analysis by inflating the dataset without adding new information. Because of the presence of extraneous content such as advertisements, navigation bars, and lists of URLs, there is currently no accurate method to identify documents that contain duplicate information.

Objectives:
Our goal is to develop an effective system for detecting duplication in Web documents. We will create a process to identify a document's "core content": the parts of HTML pages that are of interest to analysts. On the basis of the core content, these documents will then be analyzed to determine the presence of duplicates.

Activities:
We will create a dataset containing hand-tagged duplicate Web pages to use as a gold standard for system evaluation. We will then define criteria to be varied as system input to determine what should be considered a duplicate document, develop a system for identifying core content, and detect duplicates. We will evaluate system performance by measuring precision and recall.
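
One common technique for the duplicate-detection step, offered here only as a sketch of the kind of comparison involved (the shingle size and threshold are arbitrary choices of ours), is word shingling scored with Jaccard similarity over each document's core content:

```python
import re

def shingles(text, k=5):
    """Split text into lowercase word tokens and return the set of
    k-word shingles (contiguous subsequences)."""
    words = re.findall(r"\w+", text.lower())
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a, b):
    """Jaccard similarity between two shingle sets."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def near_duplicates(docs, threshold=0.8, k=5):
    """Return pairs of document ids whose shingle sets overlap
    above the threshold. docs maps id -> core-content text."""
    sigs = {doc_id: shingles(text, k) for doc_id, text in docs.items()}
    ids = sorted(sigs)
    return [(x, y) for i, x in enumerate(ids) for y in ids[i + 1:]
            if jaccard(sigs[x], sigs[y]) >= threshold]
```

Running the comparison on extracted core content rather than raw HTML is what keeps boilerplate such as navigation bars from masking true duplicates.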

Impact:
Our research will enable our sponsors to significantly improve their ability to analyze information from Web pages. Detecting duplicates will improve the quality of their collections, allowing analysts to achieve higher quality analytical results in a shorter timeframe.

Approved for Public Release: 06-1423

Presentation [PDF]


Data Discovery Using Digests

Peter Mork, Principal Investigator

Problems:
In this project, we seek to facilitate the discovery of structured data resources relevant to sponsor missions. Current search technology primarily indexes content expressed in textual formats. However, hidden beneath this surface content lurks a vast array of structured content stored in relational or hierarchical data resources. Our research strives to allow potential data consumers to discover this deep content.

Objectives:
Traditionally, data consumers connect with data producers via social mechanisms. We intend to augment this strategy with technology: Using our software, a data producer summarizes its data as a succinct digest. A discovery service aggregates these digests into a repository. Using our search tool, data consumers search the repository for relevant datasets, which they obtain directly from their respective owners.

Activities:
We will first determine how best to summarize data resources in a manner that protects the interests of data producers and meets the needs of data consumers. We will then implement a simple discovery service to support simple ranges (for example, imagery from a geographic region). Finally, we will extend the discovery service to allow more complex searches, including query-by-example.
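
As an illustration of the range-based discovery described above (treating a digest simply as per-column min/max bounds, which is our simplifying assumption, not the project's digest format):

```python
def make_digest(name, rows):
    """Summarize a dataset as per-column (min, max) ranges.
    rows is a list of dicts with comparable values."""
    digest = {}
    for row in rows:
        for col, val in row.items():
            lo, hi = digest.get(col, (val, val))
            digest[col] = (min(lo, val), max(hi, val))
    return {"name": name, "ranges": digest}

class DiscoveryService:
    """Aggregates digests; answers range queries with the names of
    datasets whose advertised ranges overlap the query."""
    def __init__(self):
        self.digests = []

    def register(self, digest):
        self.digests.append(digest)

    def search(self, col, lo, hi):
        hits = []
        for d in self.digests:
            if col in d["ranges"]:
                dlo, dhi = d["ranges"][col]
                if dlo <= hi and lo <= dhi:  # intervals overlap
                    hits.append(d["name"])
        return hits
```

Note that only the summary leaves the producer's enclave; consumers learn that a relevant dataset exists and who owns it, not the individual data points.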

Impact:
Our work dramatically improves our sponsors' ability to find mission-critical data languishing in hidden databases. Data producers are able to advertise the existence of their datasets to gain visibility, without compromising sensitive data points. We will transition our results to the developers of metadata registries (such as the Department of Defense Metadata Registry) to extend the breadth and depth of these resources.

Approved for Public Release: 06-1398

Presentation [PDF]


Electronic Journal System in a Tactical Fusion Center

Nathan Vuong, Principal Investigator

Presentation [PDF]


Flexible Data Management

Len Seligman, Principal Investigator

Problems:
The military lacks mechanisms to balance flexibility and information sharing effectively. Operational "power users" often meet local needs by developing one-off data sources, but other potential users currently have no effective way of discovering or adapting this data. Finally, vendors offer a bewildering assortment of implementation mechanisms for more flexible data management, yet published design principles do not exist.

Objectives:
We will extend the state of the art in data management to improve the sharing of autonomously developed sources, by developing an enhanced discovery capability for structured data, a smart data extender that helps power users adapt existing data resources to their needs, and best practices for flexible data implementation. We will transition lessons learned to government programs, vendors, and researchers.

Activities:
We will develop an enhanced discovery tool by adapting linguistically aware processing from MITRE's patent-pending Harmony schema matcher. Next, we will develop a tool to help operational power users extend existing resources and advertise new ones to facilitate discovery. Finally, we will empirically evaluate trade-offs among the many options for implementing more flexible data management and will publish design guidelines.
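
Harmony's matcher itself is not shown here; as a stand-in, the sketch below conveys the flavor of linguistically aware matching by tokenizing element names (splitting camel case and underscores) and scoring token overlap. The function names and threshold are ours:

```python
import re

def name_tokens(identifier):
    """Split a schema element name like 'target_id' or 'TargetId'
    into a set of lowercase word tokens."""
    parts = re.sub(r"([a-z])([A-Z])", r"\1 \2", identifier)
    return {t for t in re.split(r"[\s_\-]+", parts.lower()) if t}

def match_columns(schema_a, schema_b, threshold=0.5):
    """Suggest column correspondences whose token overlap
    (Jaccard) meets the threshold."""
    matches = []
    for a in schema_a:
        ta = name_tokens(a)
        for b in schema_b:
            tb = name_tokens(b)
            sim = len(ta & tb) / len(ta | tb)
            if sim >= threshold:
                matches.append((a, b, round(sim, 2)))
    return matches
```

A production matcher would add synonym expansion, data-type evidence, and instance-level statistics; pure token overlap is only the starting point.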

Impact:
All our customers need substantial improvements in both agility (e.g., ability to respond to new threats) and information sharing. Our research will create tools and techniques that address this need by easing discovery and adaptation of autonomously developed data resources. In addition, we will fill a void in design principles for more flexible data management.

Approved for Public Release: 06-1513


Improving Trustworthiness of Enterprise Data for Decision Making

David Becker, Principal Investigator

Problems:
A primary assertion pertaining to decision making is: "Data of unknown quality is inherently untrustworthy. Conversely, data of known quality can be treated appropriately in the decision-making process." Unfortunately, today there is no systematic approach to representing, measuring, capturing, and using information about data quality. Too often, decisions are based on low-quality data.

Objectives:
We will use emerging Web tools (XML, RDF, OWL, etc.) to provide decision makers a more complete view of available data. The research will create ontologies and semantic rules for data quality, and add-ons or plug-ins for commercial tools, that allow data quality to be automatically factored into decision making.

Activities:
We will study this problem by modeling the semantics of data quality and decision support. We will survey various data quality and decision support tools and techniques and target further experimentation to areas not adequately covered. We will create a prototype environment that uses our semantic constructs and commercial tools and which demonstrates the advantages of using these techniques in targeted scenarios.
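
A toy version of the idea, with quality metadata carried alongside each value and a rule gating its use in a decision (the quality dimensions and thresholds here are illustrative, not drawn from the project's ontologies):

```python
from dataclasses import dataclass

@dataclass
class DataItem:
    value: float
    source: str
    completeness: float  # 0..1, fraction of required fields present
    age_hours: float     # time since collection

def trustworthy(item, min_completeness=0.9, max_age_hours=24.0):
    """A simple quality rule: accept only recent, complete data."""
    return (item.completeness >= min_completeness
            and item.age_hours <= max_age_hours)

def usable(items, **thresholds):
    """Partition a feed into data the decision maker can trust and
    data that should be flagged as questionable."""
    good = [i for i in items if trustworthy(i, **thresholds)]
    flagged = [i for i in items if not trustworthy(i, **thresholds)]
    return good, flagged
```

The project's semantic approach would express rules like `trustworthy` declaratively (in RDF/OWL terms) so they can be shared and machine-checked rather than buried in application code.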

Impact:
Our work has the potential to reduce dramatically the number of situations where decision makers must act on data that is incomplete or questionable for various reasons. Our results will clarify the relevance and limitations of a data set to support a given decision and suggest ways to improve how data quality information is incorporated into the decision making process.

Approved for Public Release: 05-1458

Presentation [PDF]


MITRE's Extranet: Community Share Partners

Michele Smith, Principal Investigator


Montage: Exploiting UAV Video in Mission Context

Dave Anderson, Principal Investigator

Problems:
Unmanned aerial vehicle (UAV) sensor data and telemetry are being aggregated in huge archives, yet retrospective analysis of those resources remains difficult. This is partly due to a lack of technical support for that analysis, but also to a lack of any searchable description of the sensor content. Extracting descriptions from the video imagery remains a difficult research problem.

Objectives:
We will demonstrate improved exploitation of historical UAV mission archives by providing a simple search service over the "what" as well as the "where and when," and enabling targeted analysis in external multi-source visualization tools such as ISR Forensics and Google Earth. We will also define a simple taxonomy of known-important entities, events, and attributes, and tag a corpus of augmented mission data for further research in video content analysis.

Activities:
We will collect operator audio speech and analyst IRC (Internet Relay Chat) during UAV missions, and time-correlate them with video and telemetry. We will provide search services based on Audio Hot Spotting and other tools. We will work with multi-source analysts to identify a taxonomy of important semantic tags for mission data, and develop a corpus with human-vetted annotations.
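
At its simplest, time-correlation of this kind reduces to attaching each timestamped annotation (a chat line or audio hit) to the nearest telemetry record; a sketch under that simplification, with field names of our choosing:

```python
import bisect

def correlate(annotations, telemetry):
    """Attach to each timestamped annotation the telemetry record
    closest in time. Both use epoch-second 'ts' keys; telemetry
    must be sorted by ts."""
    ts_list = [t["ts"] for t in telemetry]
    out = []
    for ann in annotations:
        i = bisect.bisect_left(ts_list, ann["ts"])
        candidates = [j for j in (i - 1, i) if 0 <= j < len(telemetry)]
        best = min(candidates, key=lambda j: abs(ts_list[j] - ann["ts"]))
        out.append({**ann, "telemetry": telemetry[best]})
    return out
```

Once an analyst's remark is bound to platform position and sensor pointing data, it becomes searchable by "where and when" as well as by content.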

Impact:
Broad, content-oriented access to historical UAV mission data will improve UAV archive exploitation and situation awareness for deployed soldiers. It will also provide a basis for evaluation of research on video content extraction that is grounded in real-world problems.

Approved for Public Release: 07-0179

Presentation [PDF]


Next Generation Meeting Support

Doug Phair, Principal Investigator


Sensor Data and Analysis Framework

Don Landing, Principal Investigator

Problems:
Current querying techniques for archive and streaming data are insufficient by themselves to harmonize sensor inputs from large volumes of data. These two distinct architectures (push versus pull) have yet to be combined to meet the demands of a data-centric world. The input of sensor streaming data from multiple sensor types further complicates the problem.

Objectives:
The objective is to develop an integrated query capability that simultaneously accesses streaming and archive data sets from multiple sensor types. The research will design and test techniques for incorporating the pedigree of geospatial data and develop an approach that can scale while meeting response times.
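
In the simplest terms, an integrated query runs one predicate over both the historical archive (pull) and the live stream (push); the sketch below shows only that shape, setting aside the hard scaling, ordering, and pedigree problems the research targets:

```python
def integrated_query(archive, stream, predicate):
    """Yield matching records from the archive in timestamp order,
    then continue with records arriving on the stream.
    archive: list of dicts with a 'ts' key; stream: iterable of dicts."""
    for rec in sorted(archive, key=lambda r: r["ts"]):
        if predicate(rec):
            yield rec
    for rec in stream:  # live portion: evaluated as records arrive
        if predicate(rec):
            yield rec
```

The same predicate serving both halves is what lets an analyst ask one question and receive past and future answers from a mixed sensor population.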


Impact:
This research will create an opportunity to transfer knowledge from academia to MITRE, which will allow us to better support our sponsors. The proposed framework will be an enabler for national and tactical management of sensor data. We will also contribute enhancements to the open-source Borealis stream-processing project to address these challenges.

Approved for Public Release: 06-0014

Presentation [PDF]


SezHoo: Using Reputation to Increase the Trustworthiness of Information

Mark Kramer, Principal Investigator

Problems:
Large-scale collaborations require mechanisms that facilitate trust, enforce behavioral norms, and provide accountability. Trust and reputation systems offer a promising approach; however, existing systems are oriented toward on-line goods markets and recommendation systems. We aim to create a more general model of trust involving organizations, people, short- and long-term transactions, and content quality.

Objectives:
Our objective is to design trust and reputation mechanisms that provide incentives for suppliers to declare accurately the quality of information they provide, encourage information consumers to accurately rate the information they consume, protect the diversity of views in the system and the providers of minority viewpoints, and resist unfair feedback and gaming behaviors.

Activities:
Our technical approach has two complementary parts. The theoretical portion, based in part on market theory, will create a general understanding of the issues of trust and reputation in information exchanges. The applied portion will be based on two or three case studies that will test and demonstrate our ideas, and will pave the way for technology transfer.
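
One family of mechanisms for resisting unfair feedback caps how far any single rating can move a score; the damped update below is a generic illustration of that idea, not the project's actual design:

```python
class Reputation:
    """Running reputation score in [0, 1], updated from ratings.
    A new rating moves the score by at most `max_step`, damping
    the effect of any one unfair rating."""
    def __init__(self, initial=0.5, max_step=0.05):
        self.score = initial
        self.max_step = max_step
        self.count = 0

    def rate(self, rating):
        assert 0.0 <= rating <= 1.0
        # Pull toward the rating, weighted down by history length,
        # then clamp to the per-rating cap.
        step = (rating - self.score) / (self.count + 1)
        step = max(-self.max_step, min(self.max_step, step))
        self.score += step
        self.count += 1
```

A real system would also weight ratings by the rater's own reputation, which is one way to protect minority viewpoints from coordinated down-rating.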

Impact:
Information quality is a central issue in a range of practical problems, including distributed intelligence gathering and analysis such as LocaEyes, collaboration environments such as wikis (e.g., MITREpedia), and reputation-based quality of service and service-level agreements. We will use these or similar examples as case studies for our approach.

Approved for Public Release: 06-1087

Presentation [PDF]


Social and Semantic Software Prototypes: Social Bookmarking in the Enterprise and Semantically Enhanced Topic Watching

Donna Cuomo, Principal Investigator


Standard Rule Language for Enterprise Application Integration

Suzette Stoutenburg, Principal Investigator

Problems:
To defeat emerging threats, C4ISR systems must be dynamic and adaptable. Separation of rules from executable code supports the ability to dynamically modify system behavior in complex, changing environments. To realize the benefits of rule separation, a Rule Language Standard is required to support the sharing of rule abstractions across domains, thus enabling agility and interoperability.

Objectives:
Our goal is to advance the state of the art in the Semantic Web Rule Layer by developing a set of demonstrable recommendations for how the rule language standard should evolve. We will demonstrate how rules can be used for agile management of information flows in complex, dynamic C4ISR environments, allowing identification of DoD requirements for the evolving standard.

Activities:
We will explore the interaction between the rule and ontology layers of the Semantic Web and demonstrate how a standard language should best express each in combination. We will examine orchestration of inferencing across layers, rules for dynamic service behavior, dynamic rule distribution, and rule annotation for discovery and reuse. Results will be shared with academia, standards organizations, and sponsors.
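
The benefit of separating rules from executable code can be seen with a naive forward-chainer over subject-predicate-object triples; Python stands in here for a standard rule language, and the facts and rule in the test are invented examples:

```python
def forward_chain(facts, rules):
    """Naive forward chaining over (subject, predicate, object)
    triples. Each rule pairs a triple pattern (None = wildcard)
    with a function producing a new triple from a matched one."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for pattern, make_head in rules:
            ps, pp, po = pattern
            for s, p, o in list(facts):
                if (ps in (None, s)) and (pp in (None, p)) and (po in (None, o)):
                    new = make_head((s, p, o))
                    if new not in facts:
                        facts.add(new)
                        changed = True
    return facts
```

Because the rule is data rather than code, changing system behavior (say, which roles may receive a feed) means editing a rule set, not redeploying software, which is the agility argument the project makes.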

Impact:
This research will contribute supporting evidence for how the standard rule language for the Semantic Web should evolve. The results will advance some of the most critical DoD requirements, including enterprise integration, interoperability, and net-centric system development. The concept of dynamic service-oriented architectures will be advanced, supporting our sponsors' evolution to agile, machine-to-machine environments.

Approved for Public Release: 04-1311

Presentation [PDF]


Technical Intelligence Gathering and Evaluation Process

Doug Phair, Principal Investigator



Last Updated: 05/02/2007


Solutions That Make a Difference.®
Copyright © 1997-2013, The MITRE Corporation. All rights reserved.
MITRE is a registered trademark of The MITRE Corporation.
Material on this site may be copied and distributed with permission only.
