
Sensor Networks That "Think"

By Walter Kuklinski

As a white sun rises over Iraq's Al Jazirah Desert, only the most observant eye would note the new smattering of stones spread across a scraggly acre near the Syrian border. An even closer look would reveal these stones for what they truly are: a network of several thousand camouflaged sensors scattered the night before by a low-flying U.S. military plane. These sensors will be doing plenty of hard looking, scouting the border for evidence of arms smuggling.

The sun rises and then falls again. As dusk turns to night and stars spill across the sky, a faint rumble stirs the sand. The sensors detect the rumble and match it to the acoustic signature of a heavy truck, perhaps a half-mile away. The network is quickly faced with a bevy of decisions. Employ its infrared capabilities to identify the truck at the current distance, but drain its limited energy storage doing so? Wait for the truck to draw closer so the network can employ its lower-energy sensor capabilities, taking the risk that the truck never approaches within range? Or report the presence of the truck immediately to military command without taking further time to pinpoint the vehicle's nature?

This scenario may seem like a lot of deep thinking for sensors powered by AA batteries, but research by MITRE's Netted Sensors Initiative is revealing that the technology exists now to design wireless sensor networks that can learn, adapt, and make their own decisions.

Markov's Method

Equipping wireless sensor networks with adaptive control—the ability to intelligently adapt to what a network knows about itself, what it knows about its environment, and most important, what it has learned through its lifetime—is a goal as critical as it is challenging for the Netted Sensors Initiative. We have approached the problem with mathematical methods that use the network's state of knowledge at any given time to predict the gain in knowledge that could be realized by operating a given sensor, or collection of sensors, in a particular mode.
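
One concrete way to score such a prediction is expected information gain: given the network's current belief about what it is observing and a probabilistic model of each sensing mode, estimate how much each mode would be expected to reduce uncertainty, and at what energy cost. The sketch below is a minimal, hypothetical illustration of that calculation; the belief, sensor confusion matrices, and energy costs are invented for this example, not MITRE's models.

```python
# Sketch: predicted "gain in knowledge" for candidate sensing modes.
# Belief, confusion matrices, and energy costs are illustrative only.
import numpy as np

CLASSES = ["heavy truck", "light vehicle", "no target"]
belief = np.array([0.5, 0.3, 0.2])          # current network belief P(class)

# P(observation | class) for each mode; rows = true class, cols = observed class.
MODES = {
    # name: (confusion matrix, energy cost in arbitrary units)
    "acoustic": (np.array([[0.6, 0.3, 0.1],
                           [0.3, 0.6, 0.1],
                           [0.1, 0.1, 0.8]]), 1.0),
    "infrared": (np.array([[0.85, 0.10, 0.05],
                           [0.10, 0.85, 0.05],
                           [0.05, 0.05, 0.90]]), 4.0),
}

def entropy(p):
    """Uncertainty of a probability distribution, in bits."""
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def expected_gain(belief, confusion):
    """Expected entropy reduction from one observation with this mode."""
    h_prior = entropy(belief)
    h_post = 0.0
    for obs in range(len(belief)):
        p_obs = float(confusion[:, obs] @ belief)       # P(observation)
        if p_obs == 0.0:
            continue
        posterior = confusion[:, obs] * belief / p_obs  # Bayes update
        h_post += p_obs * entropy(posterior)
    return h_prior - h_post

for name, (confusion, cost) in MODES.items():
    gain = expected_gain(belief, confusion)
    print(f"{name:9s} gain={gain:.3f} bits  cost={cost:.1f}  gain/cost={gain/cost:.3f}")
```

The output reports both the raw expected gain and the gain per unit of energy for each mode, the kind of quantity a network controller could weigh when deciding which sensors to wake.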

The methods we have applied are referred to as Markov decision process techniques. These techniques were originally developed to solve scheduling problems in which limited resources are available to work on a set of tasks, each of which can be in any one of a number of states that change in a predictable but random manner. Given a value, or reward, for working on a given task when it is in a particular state, the Markov decision process predicts the future state of each task, including the consequences of all possible courses of action. Once the overall course of action that yields the largest value has been obtained, the system simply looks at the state of each task and works on the task or tasks that will yield the greatest reward.

In determining the best course of action, the method considers tradeoffs between the cost of working on a task and the rewards to be gained. More importantly, this method avoids the pitfalls of being "greedy." While working on a given task might seem like the best thing to do at present, the future consequences of that action—such as using limited sensor energy to get a better look at a target that is a long distance away—may preclude future action that would be more valuable, such as waiting until the target moved closer and then expending sensor energy to determine its identity.
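
To make this tradeoff concrete, the sketch below encodes a toy version of the border scenario as a Markov decision process and solves it with value iteration, a standard technique for such problems. The states, rewards, and transition probabilities are invented for illustration and are not the models used in the Netted Sensors work.

```python
# Sketch: a toy Markov decision process for one energy-limited sensor.
# States, actions, rewards, and transition probabilities are illustrative only.

# Each entry: state -> {action: [(probability, next_state, reward), ...]}
MDP = {
    "target_far": {
        # Identify now at long range: large energy cost, modest net reward.
        "sense_long_range": [(1.0, "done", 4.0)],
        # Wait: the target may come closer (a better, cheaper look) or slip away.
        "wait":             [(0.7, "target_near", 0.0),
                             (0.3, "target_lost", 0.0)],
    },
    "target_near": {
        # Identify at short range: small energy cost, high net reward.
        "sense_short_range": [(1.0, "done", 9.0)],
        "wait":              [(0.5, "target_near", 0.0),
                              (0.5, "target_lost", 0.0)],
    },
    "done": {},         # terminal: target identified and reported
    "target_lost": {},  # terminal: opportunity missed
}

GAMMA = 0.95  # discount factor

def value_iteration(mdp, gamma, sweeps=200):
    """Compute the long-term value of every state by repeated Bellman backups."""
    values = {s: 0.0 for s in mdp}
    for _ in range(sweeps):
        for state, actions in mdp.items():
            if actions:
                values[state] = max(
                    sum(p * (r + gamma * values[s2]) for p, s2, r in outcomes)
                    for outcomes in actions.values()
                )
    return values

def best_action(mdp, values, state, gamma):
    """Action with the largest expected long-term value."""
    return max(
        mdp[state],
        key=lambda a: sum(p * (r + gamma * values[s2]) for p, s2, r in mdp[state][a]),
    )

def greedy_action(mdp, state):
    """One-step lookahead only: ignore all future consequences."""
    return max(mdp[state], key=lambda a: sum(p * r for p, s2, r in mdp[state][a]))

values = value_iteration(MDP, GAMMA)
print("greedy choice at target_far:", greedy_action(MDP, "target_far"))
print("MDP choice at target_far:   ", best_action(MDP, values, "target_far", GAMMA))
```

In this toy problem the greedy rule spends energy on an immediate long-range look, while the look-ahead policy prefers to wait for the cheaper, higher-value short-range look, even at the risk of losing the target, because the expected future reward is larger.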

As simple as that process sounds, few guidelines exist for netted sensors. Fortunately, our previous experience in applying Markov decision process methods to the control and operation of individual standoff sensors was invaluable as we moved forward to the energy-constrained wireless sensor network problem. Modeling the sensor capabilities (including their energy storage ability) and the environment was a natural extension of the individual standoff sensor case.

The process of determining the reward or value associated with activating a given sensor, based on some long-term objective—such as the ability to detect, track, and classify objects for as long as possible—was the most challenging and exciting aspect of our analytical studies. To ensure that our solution methods would be useful for large numbers (100,000+) of wireless sensors, we developed hierarchical methods that deconstructed the overall network control problem into a number of smaller problems. These could be solved in real time with the processing power available on sensor platforms. By taking the results of these sub-network control solutions and using them as inputs to a global network Markov decision process, we can obtain nearly optimal performance at greatly reduced computational complexity and with increased reliability.
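
The sketch below shows the general shape of such a hierarchical decomposition, though not the actual algorithms: the field is partitioned into small clusters, each cluster solves its own local decision problem, and only a compact summary of each local solution (an expected value and an energy cost) is passed up to a global step that spends a limited energy budget across clusters. The cluster size, the budget, and the simple value-per-energy selection rule standing in for the global Markov decision process are all assumptions made for illustration.

```python
# Sketch: hierarchical control of a large sensor field.
# The local "solve" and the global budgeted selection are stand-ins for the
# sub-network and global Markov decision processes described in the text;
# all numbers are invented for illustration.
import random

random.seed(0)

NUM_SENSORS = 100_000
CLUSTER_SIZE = 50                      # sensors per local sub-network
ENERGY_BUDGET = 400.0                  # global energy available this period

def solve_cluster(sensor_ids):
    """Stand-in for a small, locally solved decision problem.

    Returns a compact summary: the expected value of this cluster's best
    sensing plan and the energy that plan would consume.
    """
    expected_value = sum(random.random() for _ in sensor_ids) / len(sensor_ids)
    energy_cost = 0.5 + random.random()
    return {"ids": sensor_ids, "value": expected_value, "cost": energy_cost}

# 1. Partition the field into clusters and solve each small problem locally.
sensors = list(range(NUM_SENSORS))
clusters = [
    solve_cluster(sensors[i:i + CLUSTER_SIZE])
    for i in range(0, NUM_SENSORS, CLUSTER_SIZE)
]

# 2. Global step: spend the energy budget on the clusters whose local plans
#    promise the most value per unit of energy.
activated, spent = [], 0.0
for cluster in sorted(clusters, key=lambda c: c["value"] / c["cost"], reverse=True):
    if spent + cluster["cost"] <= ENERGY_BUDGET:
        activated.append(cluster)
        spent += cluster["cost"]

print(f"{len(clusters)} local problems solved, "
      f"{len(activated)} clusters activated, {spent:.1f} energy units committed")
```

Because each local problem involves only a few dozen sensors, it stays small enough to solve in real time on sensor-class hardware, while the global step ever sees only one compact summary per cluster.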

Putting Markov to the Test

To determine the potential of letting a sensor network independently decide its course of action, we used REEF (MITRE's Research and Experimentation Fabric—see page 14) to design a detailed simulation environment. In our simulation experiments, we measured how well the adaptive Markov decision process functioned as a whole, and also how well individual sensors predicted their ability to improve global knowledge of objects within the sensor network's field of view. Of course, any method can be considered truly successful only when it has been validated with real data from a live sensor network. We hope that when our methods make their way to fielded wireless sensor networks, the networks will be able to "think" carefully about their future.

Developing "smart" wireless sensor network control methods for the Netted Sensors Initiative has yielded much more than just smart networks. Our experience has helped accelerate progress on other sponsor programs dealing with different types of sensor networks as well. At first, the sensing modalities, number of sensing nodes, and geographic scale of the sponsor programs seemed quite different from our familiar world of wireless sensor networks with many low-power sensors. We found, however, that the fundamental mathematical framework we employed provided a solid foundation to address a wide range of multiple-sensor system design and operation problems.

The Edge: Netted Sensors, Spring 2006 (Vol. 10, No. 1)