Neuroscience Meets Engineering

Biologically inspired processing dates back to the early 1940s, when scientists recognized that even a coarse macroscopic view of neurons, with their complex web of interconnections, offers a degree of adaptivity and robustness unparalleled in systems of that generation. About ten years ago, processors based on neural networks began to find applications in areas as diverse as speech processing, fingerprint analysis, nonlinear control theory, noise cancellation, and sonar signal processing. These approaches are primarily algorithmic and are only loosely coupled to true biological systems.

More recently, there has been increased emphasis on understanding how individual neurons perform their tasks. Surprisingly, researchers found that analog computations performed at the cellular level often outperform their digital counterparts when exact answers to complex problems are not required. The following article discusses this non-algorithmic approach and offers current examples.

Present-day sensor and signal processing systems are challenged by complex, dynamic environments and often must provide real-time processing capabilities. These capabilities include fast recognition, adaptation, and control mechanisms, and involve such tasks as pattern recognition, image processing, target tracking, fusion of data from diverse sensors, speech recognition, sound localization, and autonomous control.

For example, an autonomous robot most likely needs to view its environment, develop an awareness of its surroundings, identify needed responses to its environment, and then be able to carry out those responses. These tasks require abilities in vision, perception, problem solving, and control. Many believe that an important key to developing engineering systems that provide these capabilities can be found in the neuroscience field.

The reason for this belief is simple: biological systems regularly perform tasks such as those mentioned above with apparent ease and usually with more success than can be demonstrated in engineering systems. For example, the human visual system provides reasonable functionality in a much wider range of lighting conditions than standard phototransistors and CCD cameras. Whereas the human retina accommodates roughly 10 orders of magnitude in range of optical input, from dim moonlight to bright sunlight, the dynamic range for state-of-the-art cameras is nearly four orders of magnitude less.

Historically, attempts to incorporate neurological functionality into engineering systems can be traced to the development of artificial neural networks (ANNs). The basic processing element in an ANN is based on the perceptron, a crude model of a biological neuron first put forward by W. McCulloch and W. Pitts in 1943.

A perceptron processes its inputs by taking a weighted sum and then outputting a thresholded value. In simple models, the output is binary; in more complex cases, the output can assume a continuum of values. The inputs to the perceptron can be the outputs from sensors, other perceptrons, or other processing elements. Collections of these artificial neurons are connected in a network structure to produce an artificial neural network.
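The weighted-sum-and-threshold behavior described above can be sketched in a few lines of Python. This is only an illustrative model; the particular weights, bias, and AND-gate example below are assumptions chosen for demonstration, not taken from the original model description.

```python
import numpy as np

def perceptron(inputs, weights, bias=0.0):
    """Weighted sum of inputs followed by a hard threshold (binary output)."""
    activation = np.dot(weights, inputs) + bias
    return 1 if activation > 0 else 0

# Illustrative example: a two-input perceptron wired to act as an AND gate
weights = np.array([0.5, 0.5])
print(perceptron(np.array([1, 1]), weights, bias=-0.7))  # prints 1
print(perceptron(np.array([1, 0]), weights, bias=-0.7))  # prints 0
```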

Information in the network is stored in the strengths (weights) of the connections between neurons, that is, the degree to which the output of one neuron affects the output of another. The importance of ANNs lies in their ability to learn by modifying these interconnection weights, a feature that requires a learning mechanism. In 1949, D. O. Hebb provided such a mechanism by proposing a learning rule whereby, for a given input, the connection weight between two neurons is increased whenever the outputs from both neurons are high for that input; otherwise the weight is decreased.
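A schematic sketch of this Hebbian rule in Python might look like the following. The 0.5 activity threshold, learning rate, and decay factor are illustrative assumptions, not values from Hebb's formulation.

```python
def hebbian_update(weight, pre_output, post_output, rate=0.1, decay=0.05):
    """Strengthen a connection when both neurons are highly active together;
    otherwise let the connection weight decrease slightly."""
    if pre_output > 0.5 and post_output > 0.5:   # both outputs high for this input
        return weight + rate * pre_output * post_output
    return weight * (1.0 - decay)                # otherwise decrease the weight

# Illustrative usage: repeated co-activation strengthens the connection
w = 0.2
for _ in range(5):
    w = hebbian_update(w, pre_output=0.9, post_output=0.8)
print(round(w, 3))
```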

Despite their success, the basic structure of the artificial neural network remains algorithmic. In other words, the network executes an algorithm that must be encoded by the researcher, which seemingly constrains the network to learn only what can be adequately embedded in its instruction set. A departure from this idea toward one that embeds neural organizing principles within the processing itself (the neuromorphic approach) began to develop in the 1980s, when Carver Mead and his colleagues incorporated neural processing principles into hardware devices. In this paradigm, processing is a direct consequence of the physics of the system, so information is processed without the use of coded algorithms. Early developments focused on visual and auditory sensor processing.

Understanding how biological systems perform these tasks and transitioning this knowledge into engineering systems has become an important focus area within the government and in the commercial sector. For example, programs at the Defense Advanced Research Projects Agency and the Office of Naval Research have been supporting efforts in neuroscience-related research to identify basic principles of biological and neurological processing for application to engineering systems.

Universities are involved as well. The California Institute of Technology, for one, is developing a diverse collection of neuromorphic engineering systems. Many of these efforts have placed considerable emphasis on bringing together researchers from neuroscience and engineering to form a synergistic relationship that will greatly benefit both fields. In line with these efforts, MITRE provided facilities and organizational support for a neuroscience conference in May of 1998 that included many of this country's leading neuroscience researchers.

An example of how neural processing is mimicked in a neuromorphic system is provided by the network structure found in the silicon retina of Carver Mead and Misha Mahowald and described in their 1991 Scientific American article, "The Silicon Retina." The silicon retina directly emulates the functionality of the three primary layers of cells (photoreceptor, horizontal, and bipolar) in the retina (see illustration). The neuromorphic photoreceptor consists of a photodetector along with two transistors, which output a voltage proportional to the logarithm of the current from the photodetector. A resistive network then models the layer of horizontal cells, which effectively provides locally weighted averages in such a way that the influence of one point in the network on another point decreases exponentially with increasing distance. Finally, the retinal bipolar cells are implemented using two amplifiers and provide an output that is the difference between the local photoreceptor and the overall voltage in the resistive network at that node. This silicon device mirrors the functionality of the human retina so well that it even suffers from many of the same "optical illusion" effects that humans perceive. When the silicon retina was applied to the front-end of a face recognition system, performance of the recognition system improved from 73 percent to 96 percent.

Cellular structure of the retina.
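A purely functional sketch of this three-layer processing chain can be written in Python. It models only the signal flow described above (logarithmic photoreceptor response, exponentially decaying lateral averaging, and a bipolar difference signal); the space constant, array sizes, and numerical values are illustrative assumptions and do not represent Mead and Mahowald's analog circuitry.

```python
import numpy as np

def retina_response(intensity, length_const=2.0):
    """Functional sketch of the three retinal layers described above."""
    # Photoreceptor layer: output proportional to the logarithm of the input
    photo = np.log(intensity + 1e-6)

    # Horizontal-cell layer: a resistive-network-style local average whose
    # influence falls off exponentially with distance
    n = len(photo)
    dist = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :])
    kernel = np.exp(-dist / length_const)
    kernel /= kernel.sum(axis=1, keepdims=True)
    horizontal = kernel @ photo

    # Bipolar-cell layer: difference between the local photoreceptor signal
    # and the smoothed network voltage, emphasizing edges and local contrast
    return photo - horizontal

# A step change in brightness produces the strongest bipolar response at the edge
scene = np.concatenate([np.full(10, 1.0), np.full(10, 100.0)])
print(np.round(retina_response(scene), 2))
```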

It is important to note that neuromorphic processing is not being proposed as a replacement for the digital computer in numerical computation. One reason is that the roughly 8-bit precision of neuromorphic analog processing elements is insufficient for many numerical computations. In fact, neuromorphic processing elements are designed for just the opposite: they do not rely on absolute precision. In biology, the inputs to a neuron are usually imprecise in their values. The imprecision is offset by the large number of inputs that provide highly overlapped information, so the neural response is a type of statistical "averaging" mechanism. This model of processing runs counter to that of digital processing in today's computers.
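The following toy example illustrates the averaging idea in Python. The noise level and the number of overlapping inputs are arbitrary assumptions, chosen only to show that many imprecise inputs can combine to give a more reliable estimate than any single input.

```python
import numpy as np

rng = np.random.default_rng(0)
true_value = 0.7
noise = 0.05  # imprecision of each individual input (arbitrary)

# A single imprecise input versus the average of many overlapping inputs
single = true_value + rng.normal(0, noise)
averaged = (true_value + rng.normal(0, noise, size=1000)).mean()

print(f"single-input error:   {abs(single - true_value):.4f}")
print(f"averaged-input error: {abs(averaged - true_value):.4f}")
```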

During the past year, MITRE performed a survey of developments in the neuromorphic field, examined performance benefits, and identified some promising neuromorphic technologies. Benefits inherent in these systems include continuous real-time performance, reduced power requirements relative to conventional digital processing, a greater degree of fault tolerance for failed or defective processing elements, robustness against variations in input, and high relative precision. These benefits derive directly from neuromorphic architectural designs and their implementation using well-established analog semiconductor technologies. For example, the use of analog processing leads to reduced power requirements and helps avoid numerical problems that sometimes arise in digital processing when the sampling rate is too low. These characteristics make neuromorphic processing an appealing option for inclusion in future sensor and signal processing designs.

Aside from the obvious prospect of more robust sensors, research in neuromorphic processing can potentially offer alternative mechanisms for several areas of signal processing and sensor systems in the near term. One of these areas is network architectures for storing, sharing, and distributing information. An important focus for neuromorphic processing is the construction of large synchronized networks of artificial neurons. The way neural information is transferred between neurons and processed by them could provide clues about how to process and transmit information over distributed networks. This includes the establishment of a paradigm for local memory, or memory stored at the site of the processing elements, an important area of active research in neuromorphic engineering. Such research could also provide clues for the development of more effective data fusion systems, especially when the input (or sensor) data come from diverse source types. How sensor information is effectively encoded for use by diverse processing elements, and how it is collated into a coherent situational representation, is an important focus for neuromorphic designs and has direct application to such data fusion problems. Finally, in conjunction with efforts in neuroscience, a better understanding of neurological learning mechanisms and organizing principles should provide clues about how to improve smart systems and adaptive signal processing engines.

The eventual aim of neuromorphic engineering is to address higher cognitive abilities as well as the sensor-level processing provided by the silicon retina and the neuromorphic silicon cochlea. Because cognitive abilities are more complex, considerably more effort will be required before suitable technology is ready for use in applications. Although this is a daunting task, we are encouraged by the success already achieved in this relatively young field of research.


For more information, please contact David Colella using the employee directory.

