ELATE helps teams build trustworthy artificial intelligence (AI) products throughout the development cycle. The tool consists of exploratory questions that teams prioritize and answer regularly, each illustrated by actual incidents involving AI.
Current approaches to developing trustworthy AI hinge on top-down, principles-based guidance, which is difficult to put into practice. We therefore developed the Evidence-Based List of Exploratory Questions for AI Trust Engineering (ELATE) to help AI developers take a bottom-up approach to building trustworthy AI.
We collected data from multiple sources about incidents in which people gained or lost trust in AI “in the wild” and distilled a list of items to consider, each comprising exploratory questions and evidence from real-world examples. We recommend a notional method for applying ELATE within agile development practices, acknowledge that more research is needed to employ ELATE effectively, and discuss several options for doing so.