MITRE Invites Technology Innovators to Take the Challenge
March 2011
Topics: Technological Innovations
How can federal agents stop a terrorist from boarding a plane? How can the government prevent Social Security fraud? How can the Red Cross reunite families after a hurricane?
These are among the questions MITRE is exploring in the first in a series of recently launched competitions called The MITRE Challenge™. An ongoing, open contest to encourage innovation in technologies of interest to the federal government, the Challenge invites the best ideas and novel approaches to solve critical issues facing MITRE's sponsors.
Challenge #1 entails multicultural name matching—a technology that's a key component of identity matching and involves measuring the similarity of database records that refer to people. Uses for this technology include verifying eligibility for Social Security or medical benefits, identifying and reunifying families in disaster-relief operations, vetting persons against a travel watchlist, and merging or eliminating duplicate records in databases.
Person name matching can also be used to improve the accuracy and speed of document searches, social network analysis, and other tasks in which the same person might be referred to by multiple versions or spellings of a name.
"The applications for this technology are endless," says Keith J. Miller, principal artificial intelligence engineer, Challenge project leader, and one of the founders of MITRE's Identity Matching Lab (IML). The IML is a virtual lab in which Miller and his colleagues evaluate commercial off-the-shelf and government identity matching and resolution tools to help maximize sponsors' effective use of the technology.
"We're helping the Department of Homeland Security and other key national security sponsors with their identity matching needs."
The Way It Works
Anyone can join the Challenge—academic institutions, commercial companies, government laboratories, and individuals. Participants must match a query file and an index file, each containing a list of names, against one another to produce a list of scored matches for each query name. Registered teams receive a dataset and task guidelines, submit responses, and receive immediate feedback on their performance. The names of the best performing teams will be posted on a continuously updated leaderboard.
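The task described above can be sketched in a few lines of Python. This is a minimal illustration, not the Challenge's actual evaluation code: the file names, the top-k cutoff, and the use of `difflib.SequenceMatcher` as a similarity score are all assumptions (a real multicultural matcher would handle transliteration variants far better than raw string similarity).

```python
from difflib import SequenceMatcher


def similarity(a: str, b: str) -> float:
    """Crude stand-in for a real multicultural name-matching score."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()


def match(queries: list[str], index: list[str], top_k: int = 3) -> dict:
    """For each query name, return the top-k index names with scores."""
    results = {}
    for q in queries:
        scored = sorted(((similarity(q, name), name) for name in index),
                        reverse=True)
        results[q] = [(name, round(score, 3)) for score, name in scored[:top_k]]
    return results


# Hypothetical query and index names, for illustration only.
queries = ["Mohammed al-Rashid"]
index = ["Muhammad Rashid", "Maria Rossi", "Mohamed El Rachid"]
print(match(queries, index))
```

Each query name yields a ranked, scored list of candidate matches from the index, which is the shape of output participants submit for scoring.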
The team that yields a reproducible result demonstrating the greatest improvement over the baseline algorithm, as measured by Mean Average Precision (MAP), will be declared the winner and will have an opportunity to present at a MITRE/government technical exchange meeting.
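Mean Average Precision, the metric mentioned above, rewards systems that rank true matches near the top of each query's result list. A standard computation looks like the following sketch (the function names are my own; the source does not specify the exact scoring implementation):

```python
def average_precision(ranked: list[str], relevant: set[str]) -> float:
    """AP for one query: mean of precision@k taken at each relevant hit."""
    hits, precisions = 0, []
    for k, name in enumerate(ranked, start=1):
        if name in relevant:
            hits += 1
            precisions.append(hits / k)
    return sum(precisions) / len(relevant) if relevant else 0.0


def mean_average_precision(results: dict, truth: dict) -> float:
    """MAP: average the per-query AP over all queries in the ground truth."""
    return sum(average_precision(results[q], truth[q]) for q in truth) / len(truth)


# Toy example: relevant names "A" and "C" ranked 1st and 3rd
# gives AP = (1/1 + 2/3) / 2 = 0.8333...
print(average_precision(["A", "B", "C"], {"A", "C"}))
```

Ranking a true match first contributes a precision of 1.0 at that position, so a higher MAP over the baseline reflects consistently better rankings across the whole query file.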
From Concept to Reality
The idea for the Challenge came about during a capabilities review conducted by MITRE's Information and Computing Technologies Technical Center. As part of the review that focused on human language technology, one of the center's core competencies, Miller presented work he and his team had been doing in the IML.
"Seeing the tools and infrastructure we had in place to do methodologically sound evaluation of identity matching technologies, Rich Byrne [senior vice president in MITRE's DoD FFRDC] asked, 'Can we do something like the Netflix prize competition, but for identity matching?'" explains Miller.
Byrne was referring to the movie giant's public contest to improve its algorithm-based recommendation program called Cinematch. Miller took the suggestion and ran with it.
"Of course, we knew we couldn't offer the $1 million Netflix did, but the Challenge provides a unique opportunity for people to come to the table with solutions to complex problems—and get some recognition in the process," he says.
Byrne's office provided the seed funding for the prototype. From there, interest in the Challenge grew, and support for it expanded to include several MITRE departments across multiple FFRDCs. "This was a true cross-corporate effort," Miller says. "It started in our DoD FFRDC, but the technology applies to sponsor problems across MITRE."
In addition to Miller, the Challenge #1 team includes MITRE staffers James Finley and Percy Schmidt (back-end coding), Sarah McLeod and Liz Schroeder (multicultural dataset development), and Chris Johnson (front-end design).
—by Karina H. Wright