Raytheon developing a neural network that explains itself

Under the Defense Advanced Research Projects Agency's (DARPA) Explainable Artificial Intelligence (XAI) program, Raytheon BBN Technologies is developing a neural network that explains itself. The XAI program aims to create machine learning techniques that produce more explainable models while maintaining a high level of performance, and to help human users understand, appropriately trust and effectively manage the emerging generation of artificially intelligent partners.

Raytheon BBN's Explainable Question Answering System (EQUAS) will allow AI programs to 'show their work,' increasing the human user's confidence in the machine's suggestions. EQUAS will show users which data mattered most in the AI decision-making process. Using a graphical interface, users can explore the system's recommendations and see why it chose one answer over another. The technology is still in the early phases of development but could be used for a wide range of applications.
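The release does not say how EQUAS computes these explanations. One common way to surface "which data mattered most," however, is occlusion-style feature attribution: mask parts of the input and measure how much the model's score drops. The sketch below illustrates that general idea with a hypothetical toy classifier; the model, features, and names are assumptions for illustration, not Raytheon BBN's implementation.

```python
# Illustrative sketch only: generic occlusion-style attribution, not the
# EQUAS method (which is not described in the release). All names are hypothetical.
import numpy as np

def toy_model(x: np.ndarray) -> float:
    """Hypothetical stand-in classifier returning a confidence score in [0, 1]."""
    weights = np.array([0.6, 0.1, 0.3])          # assumed weights, for illustration
    return float(1.0 / (1.0 + np.exp(-weights @ x)))

def occlusion_attribution(x: np.ndarray, baseline: float = 0.0) -> np.ndarray:
    """Score each feature by how much the prediction drops when it is masked."""
    full_score = toy_model(x)
    drops = []
    for i in range(len(x)):
        masked = x.copy()
        masked[i] = baseline                     # occlude one feature at a time
        drops.append(full_score - toy_model(masked))
    return np.array(drops)

x = np.array([2.0, -1.0, 0.5])                   # hypothetical input features
for name, a in zip(["feature_a", "feature_b", "feature_c"], occlusion_attribution(x)):
    print(f"{name}: contribution {a:+.3f}")      # larger drop => mattered more
```

An interface like the one described could then rank these contributions and let the user drill into the highest-scoring inputs to see why one answer beat another.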

As the system is enhanced, EQUAS will be able to monitor itself and share factors that limit its ability to make reliable recommendations. This self-monitoring capability will help developers refine AI systems, allowing them to inject additional data or change how data is processed.
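As a rough illustration of what such self-monitoring can look like, the sketch below attaches caveats to a recommendation when the system's own confidence or its training-data coverage falls below a threshold. The thresholds, factor names, and data structures are assumptions for illustration only, not details from the release.

```python
# Illustrative sketch only: a generic self-monitoring/abstention check,
# not Raytheon BBN's mechanism. Thresholds and factor names are assumed.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Recommendation:
    answer: str
    confidence: float                            # model's own score for the answer
    limiting_factors: List[str] = field(default_factory=list)

def self_monitor(answer: str, confidence: float, training_coverage: float) -> Recommendation:
    """Attach caveats when the system judges its own recommendation unreliable."""
    rec = Recommendation(answer=answer, confidence=confidence)
    if confidence < 0.7:                         # assumed threshold
        rec.limiting_factors.append("low model confidence")
    if training_coverage < 0.5:                  # assumed threshold
        rec.limiting_factors.append("input poorly covered by training data")
    return rec

rec = self_monitor("candidate answer", confidence=0.62, training_coverage=0.4)
print(rec.answer, rec.limiting_factors)
```

Reports like these are the kind of signal developers could use to decide where to inject additional data or change how data is processed.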

